Google says Gemma 2, its series of open lightweight models, will be available to researchers and developers through Vertex AI starting next month. And while the series was initially announced with only a 27-billion parameter model, the company surprised us by also including a 9-billion parameter version.
Gemma 2 was introduced back in May at Google I/O as the successor to Gemma’s 2-billion and 7-billion parameter models, which debuted in February. The next-gen Gemma model is designed to run on Nvidia’s latest GPUs or a single TPU host in Vertex AI. It targets developers who want to incorporate AI into their apps or edge devices such as smartphones, IoT devices, and personal computers.
The two Gemma 2 models follow their predecessors' approach and reflect the current AI landscape, where technological advances allow smaller, lighter models to handle a wide range of user requests. With 9-billion and 27-billion parameter options, Google gives developers a choice in how they deploy these models, whether on-device or through the cloud. And because the models are open, they can be customized and integrated into projects Google might not have foreseen.
It will be worth watching whether the existing Gemma variants (CodeGemma, RecurrentGemma and PaliGemma) benefit from these two Gemma 2 models.
But Google isn’t stopping with just two Gemma 2 sizes. It says it’ll soon release a 2.6-billion parameter model designed to “bridge the gap between lightweight accessibility and powerful performance.”
Gemma 2 is available in Google AI Studio, and developers can download its model weights from Kaggle and Hugging Face. Researchers can use Gemma 2 for free through Kaggle or through Colab’s free tier.
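For developers who go the Hugging Face route, loading the weights looks much like any other open model. The snippet below is a minimal sketch, assuming the Hugging Face transformers library and the model identifier "google/gemma-2-9b"; access to the weights may require accepting Google’s license terms on Hugging Face and authenticating with a token, and the exact model id is an assumption here rather than something confirmed in this article.

```python
# Minimal sketch: load a Gemma 2 checkpoint from Hugging Face and generate text.
# Assumes the model id "google/gemma-2-9b" and that license access has been granted.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-9b"  # assumed identifier for the 9B variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" spreads the model across available GPUs (requires `accelerate`).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain what makes a language model 'lightweight'."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same weights can also be pulled from Kaggle or run through Vertex AI; the on-device versus cloud choice mentioned above mostly comes down to which of these distribution paths a project uses.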