AI

Messaging your AI pricing model

This article is based on Ismail Madni’s brilliant talk at the Product Marketing Summit in Austin, hosted by our sister community, Product Marketing Alliance. More and more AI capabilities are being added to product roadmaps every single day. Even companies that aren’t using true AI are still incorporating increasingly advanced capabilities into their products. This gives us all a golden opportunity to rethink how we price our offerings and the story behind that pricing – and that’s what I’m excited to talk to you about today. A brief history of software pricing models: Let's start by taking a quick look at the history…
Read More
How Good Are the Latest Open LLMs? And Is DPO Better Than PPO?

April 2024, what a month! My birthday, a new book release, spring is finally here, and four major open LLM releases: Mixtral, Meta AI's Llama 3, Microsoft's Phi-3, and Apple's OpenELM. This article reviews and discusses all four major transformer-based LLM releases of the last few weeks, followed by new research on reinforcement learning with human feedback methods for instruction finetuning using PPO and DPO algorithms.
1. How Good are Mixtral, Llama 3, and Phi-3?
2. OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework
3. Is DPO Superior to PPO for LLM Alignment? A Comprehensive…
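At the heart of the DPO-versus-PPO question is that DPO skips the reinforcement-learning loop entirely and optimizes a simple classification-style loss over preference pairs. As a rough illustration only, here is a minimal sketch of the standard published DPO objective (not code from the article or paper), assuming per-sequence log-probabilities have already been computed:

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO objective on precomputed sequence log-probabilities.

    Each argument is a 1-D tensor of summed token log-probs for the chosen or
    rejected response under the trainable policy or the frozen reference model.
    """
    # Implicit reward margins: how much more the policy prefers each response
    # than the reference model does, scaled by beta.
    chosen_margin = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_margin = beta * (policy_rejected_logps - ref_rejected_logps)
    # Binary-classification-style loss on the difference of the two margins.
    return -F.logsigmoid(chosen_margin - rejected_margin).mean()
```

PPO-based RLHF, by contrast, trains a separate reward model and samples on-policy rollouts, which is the extra machinery the DPO-versus-PPO comparison weighs against this simpler loss.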
Read More
OpenAI’s Long-Term AI Risk Team Has Disbanded

In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI’s chief scientist and one of the company’s cofounders, was named as the colead of this new team. OpenAI said the team would receive 20 percent of its computing power. Now OpenAI’s “superalignment team” is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday’s news that Sutskever was leaving the company, and the resignation of the team’s other colead. The group’s work will…
Read More
2024 BAIR Graduate Directory

Every year, the Berkeley Artificial Intelligence Research (BAIR) Lab graduates some of the most talented and innovative minds in artificial intelligence and machine learning. Our Ph.D. graduates have each expanded the frontiers of AI research and are now ready to embark on new adventures in academia, industry, and beyond. These fantastic individuals bring with them a wealth of knowledge, fresh ideas, and a drive to continue contributing to the advancement of AI. Their work at BAIR, ranging from deep learning, robotics, and natural language processing to computer vision, security, and much more, has contributed significantly to their fields and has…
Read More
Building Generative AI prompt chaining workflows with human in the loop | Amazon Web Services

Generative AI is a type of artificial intelligence (AI) that can be used to create new content, including conversations, stories, images, videos, and music. Like all AI, generative AI works by using machine learning models—very large models that are pretrained on vast amounts of data called foundation models (FMs). FMs are trained on a broad spectrum of generalized and unlabeled data. They’re capable of performing a wide variety of general tasks with a high degree of accuracy based on input prompts. Large language models (LLMs) are one class of FMs. LLMs are specifically focused on language-based tasks such as summarization, text…
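To make the prompt-chaining-with-human-in-the-loop idea concrete, here is a minimal sketch, not the AWS reference architecture from the post: two chained prompts with a human approval gate between them, where `call_llm` is a hypothetical stand-in for whatever FM endpoint you invoke (for example via Amazon Bedrock or a SageMaker endpoint).

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a foundation-model call; wire this up to
    your own Bedrock / SageMaker invocation."""
    raise NotImplementedError

def draft_with_human_review(topic: str) -> str:
    # Step 1: the first prompt in the chain produces a draft.
    draft = call_llm(f"Write a short customer-support reply about: {topic}")

    # Step 2: human-in-the-loop gate before the chain continues.
    print(draft)
    if input("Approve this draft? [y/N] ").strip().lower() != "y":
        feedback = input("What should change? ")
        # Step 3: the human feedback becomes input to the next prompt.
        draft = call_llm(f"Revise the reply below.\nReply: {draft}\nFeedback: {feedback}")
    return draft
```

In a production workflow the approval step would typically be an asynchronous callback that pauses the workflow until a person responds, rather than a blocking `input()`, but the chaining pattern is the same.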
Read More
Modeling Extremely Large Images with xT

As computer vision researchers, we believe that every pixel can tell a story. However, there seems to be a writer’s block settling into the field when it comes to dealing with large images. Large images are no longer rare—the cameras we carry in our pockets and those orbiting our planet snap pictures so big and detailed that they stretch our current best models and hardware to their breaking points when handling them. Generally, we face a quadratic increase in memory usage as a function of image size. Today, we make one of two sub-optimal choices when handling large images: down-sampling…
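As a back-of-the-envelope illustration of that quadratic blow-up (our own numbers, not figures from the xT work), assume a ViT-style model that cuts the image into 16x16-pixel patches and runs full self-attention over the resulting tokens: doubling the image side length quadruples the token count and grows the attention matrix sixteen-fold.

```python
def tokens_and_attention_entries(side_px: int, patch: int = 16) -> tuple[int, int]:
    """Token count and self-attention matrix size for a square image."""
    tokens = (side_px // patch) ** 2   # one token per patch
    return tokens, tokens ** 2         # attention memory scales as O(N^2)

for side in (1024, 2048, 4096):
    n, attn = tokens_and_attention_entries(side)
    print(f"{side}px -> {n:,} tokens, {attn:,} attention entries")
# 1024px -> 4,096 tokens, 16,777,216 attention entries
# 2048px -> 16,384 tokens, 268,435,456 attention entries
# 4096px -> 65,536 tokens, 4,294,967,296 attention entries
```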
Read More
Mixtral 8x22B is now available in Amazon SageMaker JumpStart | Amazon Web Services

Today, we are excited to announce that the Mixtral-8x22B large language model (LLM), developed by Mistral AI, is available for customers through Amazon SageMaker JumpStart to deploy with one click for running inference. You can try out this model with SageMaker JumpStart, a machine learning (ML) hub that provides access to algorithms and models so you can quickly get started with ML. In this post, we walk through how to discover and deploy the Mixtral-8x22B model. What is Mixtral 8x22B? Mixtral 8x22B is Mistral AI’s latest open-weights model and sets a new standard for performance and efficiency of available foundation models,…
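For readers who prefer the SDK to the one-click console flow, here is a minimal sketch using the SageMaker Python SDK's JumpStart interface; the model ID, default instance type, and payload format below are assumptions to verify against the JumpStart model card rather than values taken from this post.

```python
from sagemaker.jumpstart.model import JumpStartModel

# Placeholder model ID: look up the exact Mixtral-8x22B identifier in the
# SageMaker JumpStart catalog before running.
model = JumpStartModel(model_id="huggingface-llm-mixtral-8x22b")

# deploy() provisions a real-time endpoint using the model's default
# (GPU) instance type, which incurs charges while it is running.
predictor = model.deploy()

# Payload shape is an assumption; check the model card for the exact schema.
response = predictor.predict({
    "inputs": "Explain mixture-of-experts models in one paragraph.",
    "parameters": {"max_new_tokens": 256, "temperature": 0.6},
})
print(response)

predictor.delete_endpoint()  # clean up to stop charges
```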
Read More