Basis Selection: Low-Rank Decomposition of Pretrained Large Language Models for Target Applications


arXiv:2405.15877v1 Announce Type: new Abstract: Large language models (LLMs) significantly enhance the performance of various applications, but they are computationally intensive and energy-demanding. This makes it challenging to deploy them on devices with limited resources, such as personal computers and mobile/wearable devices, and results in substantial inference costs in resource-rich environments like cloud servers. To extend the use of LLMs, we introduce a low-rank decomposition approach to effectively compress these models, tailored to the requirements of specific applications. We observe that LLMs pretrained on general datasets contain many redundant components not needed for particular applications. Our method focuses on identifying…
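The abstract above describes compressing pretrained weights via low-rank decomposition. As a hedged illustration only, and not the paper's basis-selection method, here is a generic truncated-SVD sketch of factoring a single weight matrix; the matrix size and rank are invented for the example:

```python
import numpy as np

# Sketch: generic low-rank compression of one weight matrix via truncated SVD.
# This shows the idea of replacing W with two thin factors; it does NOT
# reproduce the paper's application-specific basis-selection criterion.

def low_rank_approx(W, rank):
    """Return factors A (m x r) and B (r x n) with A @ B approximating W."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # absorb singular values into the left factor
    B = Vt[:rank, :]
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))   # stand-in for a pretrained weight matrix
A, B = low_rank_approx(W, rank=32)
# Storage drops from 256*256 to 2*256*32 parameters (a 4x reduction).
print(A.shape, B.shape)  # (256, 32) (32, 256)
```

At inference time, `x @ W` is replaced by `(x @ A) @ B`, trading a small approximation error for fewer parameters and multiply-adds.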
IBM’s AI-driven storage solutions: Future-ready data management – SiliconANGLE


As the digital landscape continues to evolve, IBM Corp.’s commitment to innovation and adaptability positions it as a key player in shaping the future of data management. The computing giant stands at the forefront of the technological landscape with its AI-driven approach to data storage. “This is a key part of the conversation,” said John Furrier (pictured, left), theCUBE Research executive analyst. “The future is coming here. It’s now. The future is now. AI’s now. Infrastructure is really hard.” Furrier spoke with theCUBE Research’s Dave Vellante (center), chief analyst, and Rob Strechay (right), principal analyst, at IBM’s “Future-Ready Storage Redefining…
Efficient Point Transformer with Dynamic Token Aggregating for Point Cloud Processing


What We’ve Learned From A Year of Building with LLMs


Recently, a couple friends and I threw around the idea of writing about our experience with LLMs and AI Engineering (image below). One thing led to another and that’s how this three-part series came about. Here, we share our hard-won lessons, and advice to make it easier. This is also cross-posted on O’Reilly. We hope you’ll find it useful! Behind the scenes of how this write-up started It’s an exciting time to build with large language models (LLMs). Over the past year, LLMs have become “good enough” for real-world applications. The pace of improvements in LLMs, coupled with a parade…
Elon Musk and Yann LeCun’s social media feud highlights key differences in approach to AI research and hype


Over the Memorial Day weekend, while most Americans were firing up their grills and enjoying a cold one, Yann LeCun, Meta’s chief AI scientist, and Elon Musk, the enigmatic CEO of Tesla and xAI, were engaged in a no-holds-barred digital dustup on X.com (formerly Twitter). This clash of the AI titans exposed some of the key fault lines…
A hierarchical Bayesian model for syntactic priming


arXiv:2405.15964v1 Announce Type: new Abstract: The effect of syntactic priming exhibits three well-documented empirical properties: the lexical boost, the inverse frequency effect, and the asymmetrical decay. We aim to show how these three empirical phenomena can be reconciled in a general learning framework, the hierarchical Bayesian model (HBM). The model represents syntactic knowledge in a hierarchical structure of syntactic statistics, where a lower level represents the verb-specific biases of syntactic decisions, and a higher level represents the abstract bias as an aggregation of verb-specific biases. This knowledge is updated in response to experience by Bayesian inference. In simulations, we show…
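To make the two-level structure concrete, here is a minimal toy sketch, not the paper's actual HBM: verb-specific biases are tracked as Beta pseudo-counts, and the abstract higher-level bias is an aggregate over verbs. All class and variable names are invented for illustration:

```python
# Toy sketch of a two-level hierarchy for syntactic choice (structure A vs B).
# Lower level: per-verb Beta pseudo-counts, updated by Bayesian conjugacy.
# Higher level: the abstract bias as an aggregate of verb-specific biases.
# This illustrates the lexical boost only; decay and the inverse frequency
# effect are not modeled here.

class VerbBias:
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha  # pseudo-counts for structure A
        self.beta = beta    # pseudo-counts for structure B

    def update(self, chose_a):
        # conjugate Beta-Bernoulli update on observing one syntactic choice
        if chose_a:
            self.alpha += 1
        else:
            self.beta += 1

    def p_a(self):
        return self.alpha / (self.alpha + self.beta)

def abstract_bias(verbs):
    # higher level: mean of verb-specific posterior biases
    return sum(v.p_a() for v in verbs.values()) / len(verbs)

verbs = {"give": VerbBias(), "send": VerbBias()}
verbs["give"].update(True)      # prime "give" with structure A
print(verbs["give"].p_a())      # 2/3: a large boost for the primed verb
print(abstract_bias(verbs))     # ~0.58: a smaller shift at the abstract level
```

The gap between the primed verb's bias and the abstract bias is a crude analogue of the lexical boost: priming moves the same-verb statistic more than the aggregate.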
OpenAI’s board allegedly learned about ChatGPT launch on Twitter


Helen Toner, one of OpenAI’s former board members who was responsible for firing CEO Sam Altman last year, revealed that the company’s board didn’t know about the launch of ChatGPT until it was released in November 2022. “[The] board was not informed in advance of that,” Toner said on Tuesday on a podcast called The Ted AI Show. “We learned about ChatGPT on Twitter.” Toner’s comments came just two days after she criticized the way OpenAI was governed in an Economist piece published on Sunday that she co-wrote with Tasha McCauley, another former OpenAI board member. This is the first time that…
Robocaller Who Spoofed Joe Biden’s Voice With AI Faces $6 Million Fine


The government wants to send "a strong deterrent signal to anyone who might consider interfering with elections."

Deepfake Deterrent

As election season heats up, the Federal Communications Commission (FCC) is sending a strong message to anyone who seeks to use artificial intelligence on robocalls. Democratic political consultant Steve Kramer is, as the FCC announced in a press release, facing a whopping $6 million fine for sending thousands of New Hampshire voters robocalls featuring the deepfaked voice of President Joe Biden, encouraging them not to vote in the state's primary. Kramer, who'd been working for upstart presidential candidate Dean Phillips, commissioned the calls during New…
J.P. Morgan Launches ‘Containerized Data’ Solution in the Cloud


Getting access to consistent, high-quality data ranks as one of the toughest challenges in big data, advanced analytics, and AI. It’s a challenge that is being taken up by Fusion by J.P. Morgan with its new Containerized Data offering, which provides institutional investors with consistent, enriched data that’s been standardized with a common semantic layer. The worst-kept secret in big data is that data prep consumes the vast majority of time in analytics, machine learning, and AI projects. Raw data does contain signals that data scientists so desperately want to leverage for competitive gain, but that data must be…
Creating ticking tables with pure Python functions | Deephaven


Creating and manipulating ticking tables is the bread and butter of the Deephaven experience. The wide array of table operations offered by the Python API enables real-time data manipulation and analysis at scale. So, it needs to be really easy to create a Deephaven table, and we need to be able to create one from many different kinds of data sources. With the introduction of function-generated tables, this has never been simpler. Function-generated tables allow you to write arbitrary Python functions for retrieving and cleaning data and use the results to populate a ticking table. The function is then re-evaluated at…