Transforming the world through Google’s latest releases

This article is based on Burak Gokturk’s talk at the AI Accelerator Summit in San Jose.


The world of generative AI has been moving at a blistering pace, with new models and platforms seemingly launching every week. In my 20-plus years in the field, I’ve never seen such a whirlwind of innovation.

While the last couple of years have generated tremendous excitement around generative AI’s potential, actually deploying it to create business value is still a major challenge for many organizations.

Despite experimenting with generative AI, a lot of teams are struggling to implement and operationalize it effectively in production settings. There’s a clear need for comprehensive, enterprise-ready platforms that provide flexibility, customization, and robust model lifecycle management.

I’m Burak Gokturk, and I lead Google Cloud AI – you may know some of our products like Vertex AI, Vertex AI Vision, and Vertex AI Search. In this article, I’ll outline the key requirements we’ve identified for successfully deploying generative AI at scale based on working with hundreds of leading companies. I’ll then dive into how Google Cloud’s Vertex AI platform addresses those needs.

Let’s get to it.

Meeting enterprise needs for generative AI

Through discussions with hundreds of organizations building generative AI applications, we’ve identified several key requirements for an enterprise-grade platform:

  1. Flexibility: For many customers, it’s important to be on a platform that offers choice. You’ve probably noticed there’s a new generative AI model launching every other week. Customers have seen that, and they don’t want to get locked into a single model long-term.
  2. Customization: These generative AI models are trained largely on public data, but every customer has their own use case and data. You need a platform with tuning, grounding, and customization mechanisms to handle that (see the sketch after this list).
  3. Deployment: Once you’ve chosen a model and customized it with your data, how do you actually get it into production to create business value? Choosing a platform with deployment, evaluation, testing, and monitoring capabilities, along with all the necessary metrics, is critical.
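To make the customization point concrete, here’s a minimal sketch of what grounding can look like, assuming the Vertex AI Python SDK (`google-cloud-aiplatform`). The project ID, region, prompt, and model name are placeholders I’ve chosen for illustration, and the exact module path for the grounding helpers can vary between SDK versions, so treat this as a sketch rather than a drop-in snippet.

```python
import vertexai
from vertexai.generative_models import GenerativeModel, Tool, grounding

# Placeholder project and region - substitute your own.
vertexai.init(project="my-project", location="us-central1")

# Ground responses in Google Search results so answers aren't limited
# to what the model memorized from its (mostly public) training data.
search_tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())

model = GenerativeModel("gemini-1.0-pro")
response = model.generate_content(
    "Summarize this week's generative AI model launches.",
    tools=[search_tool],
)
print(response.text)
```

Grounding against your own data works the same way conceptually, except the retrieval tool points at a Vertex AI Search data store instead of Google Search.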

With the customer needs I’ve just described in mind, we built Vertex AI. It has multiple layers, including Agent Builder, to make building generative AI applications and agents easy.

Another vital layer of Vertex AI is Model Garden; it offers over 130 models. That might not sound like many – after all, there are thousands of generative AI models out there – but we’ve curated the models we believe will be the most useful for our customers. We really believe in providing choice, with first-party models, partner models, and open-source options.

The evolution of Gemini Pro

Chances are, you’re aware of Gemini; we launched it on AI Studio and Vertex AI in December 2023. Since then, there’s been a lot of interest, with over a million developers using Gemini daily. But how did we get here? 

The first model we launched was Gemini 1.0 Pro, which has cool capabilities like multimodal support and high performance. Earlier this year, we announced a new version of Gemini 1.0 Pro, which is significantly faster and higher quality. But as I said before, you’ll see new models and revisions launching literally every week, not just from Google but across the globe. 

When we launched 1.0 Pro in December, we didn’t stop there. About six weeks later, we released Gemini 1.5 Pro. It’s a much bigger model with significantly better reasoning capabilities. It also has something no other model in the world offers: a one-million-token context window. That means the model can recall far more information during a session.

Gemini 1.5 Pro is now in public preview on Vertex AI and AI Studio, so you can easily try it out. To give you an idea of what it can do, you can input an image or a video clip with no description, and it can analyze the contents.

For example, I gave it a short personal clip of Draymond Green talking to a referee at a Warriors game. I just asked, “What is this?” and it immediately responded, “This is Draymond Green from the Golden State Warriors, talking with a referee.”
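If you want to try the same kind of multimodal prompt yourself, here’s a minimal sketch using the Vertex AI Python SDK. The project ID, Cloud Storage path, and preview model name are placeholders of my own (not from the talk), and preview model identifiers change over time, so check the current model list before running it.

```python
import vertexai
from vertexai.generative_models import GenerativeModel, Part

# Placeholder project and region - substitute your own.
vertexai.init(project="my-project", location="us-central1")

# Preview model names change; check Vertex AI for the current identifier.
model = GenerativeModel("gemini-1.5-pro-preview-0409")

# The clip sits in a Cloud Storage bucket; no description or transcript is provided.
video = Part.from_uri("gs://my-bucket/warriors-clip.mp4", mime_type="video/mp4")

response = model.generate_content([video, "What is this?"])
print(response.text)
```

The model receives only the raw video and the question, which is the point of the demo: the description comes entirely from what it sees in the clip.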
