In the age of AI, enterprises want to drive critical internal functions with large language models (LLMs). They are investing millions, but bringing those use cases to life – with ROI – is far from easy. Today, New York-based Hebbia, a startup focused on LLM-powered information retrieval, announced it has raised $130 million in Series B funding from Andreessen Horowitz, Index Ventures, Peter Thiel and the venture capital arm of Google.
What Hebbia is building is quite simple: an LLM-native productivity interface that makes driving value from data easier, regardless of its type or size. The company is already working with some large firms in the financial services industry, including hedge funds and investment banks, and is planning to take the technology to more enterprises in the coming days.
“AI is undoubtedly the most important technology of our lives. But technology doesn’t drive revolutions – products do. Hebbia is building the human layer – the product layer – to AI,” George Sivulka, the founder and CEO of the company, wrote in a blog post. Before this, the company raised $31 million across several rounds.
What does Hebbia have on offer?
While LLM-based chatbots can be grounded in internal documentation or prompted with documents, many have noticed that these assistants can fail at answering complex questions about business functions. In some cases, the problem is the context window, which cannot accommodate the full length of the documents provided; in others, the sheer complexity of the query makes it impossible for the model to answer accurately. These errors can even erode teams’ confidence in the power of language models.
Founded in 2020, Hebbia addresses this gap by providing enterprises with an LLM-linked agentic copilot called Matrix. The offering sits in the business environment of companies and allows knowledge workers to ask complex questions associated with internal documents — from PDFs, spreadsheets and Word documents to audio transcripts — with an infinite context window.
Once a user provides the query and the associated documents/files, Matrix takes the prompt and breaks it down into smaller actions that the LLM sitting under the hood can execute. This enables it to analyze all the information contained in the documents at once and extract exactly what’s needed in a structured form. Hebbia says the platform enables the model to reason over any volume (millions to billions of documents) and modality of data while also providing relevant citations, helping users trace every action and understand exactly how the platform arrived at the final answer.
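Hebbia has not published Matrix's internals, but the decompose-and-execute pattern described above is a known agentic technique. The sketch below is a hypothetical illustration of that general pattern, not Hebbia's implementation: a planner splits a query into per-document extraction steps plus a final synthesis step, and each step's output is recorded with its source so the answer stays traceable. The `execute_step` callable stands in for the LLM call a real system would make.

```python
from dataclasses import dataclass

@dataclass
class Step:
    instruction: str  # what the LLM should do in this step
    source: str       # which document (or prior output) the step reads

def decompose(query: str, documents: dict) -> list:
    """Hypothetical planner: one extraction step per document,
    then one synthesis step over everything extracted.
    A production system would use an LLM to plan these steps."""
    steps = [
        Step(f"Extract facts relevant to: {query}", name)
        for name in documents
    ]
    steps.append(Step(f"Synthesize an answer to: {query}", "<all extracted facts>"))
    return steps

def run(query: str, documents: dict, execute_step) -> list:
    """Execute each step and keep a citation trail, so every
    part of the final answer can be traced back to a source."""
    trail = []
    for step in decompose(query, documents):
        output = execute_step(step, documents)
        trail.append({
            "step": step.instruction,
            "source": step.source,  # citation for traceability
            "output": output,
        })
    return trail
```

Because each large document is handled in its own step, the per-call context stays small regardless of the total corpus size, which is how this style of pipeline sidesteps a fixed context window.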
“Designed for the knowledge worker, Hebbia lets you instruct AI agents to complete tasks exactly the way you do them – no task too complex, no dataset too large and with full flexibility and transparency of a spreadsheet (or a human analyst!),” Sivulka explained in the post.
Significant impact in a few years
Sivulka reportedly started the platform with a focus on simplifying the lives of financial industry workers who spent most of their time digging up relevant information from documents. However, over the years, the company has gained traction in other segments as well. It currently has over 1,000 use cases in production with multiple major enterprises, including Charlesbank, American Industrial Partners, Oak Hill Advisors, Centerview Partners, Fisher Phillips and the U.S. Air Force.
“Over the last 18 months, we grew revenue 15X, quintupled headcount, drove over 2% of OpenAI’s daily volume, and laid the groundwork for customers to redefine how they work,” Sivulka noted. However, it remains unclear whether OpenAI supplies the only model underpinning the Matrix platform or whether users have the option to choose other LLMs as well.
With the latest round of funding, the company hopes to build on this work and draw more large enterprises towards its platform to simplify how their workers retrieve knowledge.
“I’m excited for a world of unbound progress – one where AI agents contribute more to global GDP than every human employee. I believe that Hebbia is going to get us there,” Sivulka added, noting that the company is building the most important software product of the next 100 years.
However, it is important to note that Hebbia is not alone in this space. Other companies are also exploring AI-based knowledge retrieval for enterprises, including Glean. The Palo Alto, CA-based startup reached unicorn status in 2022 and has built a ChatGPT-like assistant specifically for workplace productivity. There are also players like Vectara working to enable gen AI experiences grounded in enterprise data.