How Microsoft sees its Models-as-a-Service feature democratizing access to AI

Today’s tools make it easy to build AI-powered applications. But one complex area most, if not all, developers would rather avoid is figuring out how to host the models they use. It’s one thing to choose between OpenAI’s GPT-4o, Meta’s Llama 3, Google’s Gemini or the many open-source models on the market. It’s quite another to deploy one.

Such necessary but head-scratching work can frustrate developers and turn them off their entrepreneurial ideas. However, Microsoft has a solution that lets them focus more on the creative process than on model housekeeping. Called Models-as-a-Service (MaaS), it’s the AI equivalent of cloud services, charging for access rather than infrastructure, and it’s available through the company’s Azure AI Studio product.

Keep it simple

“If you’ve ever tried to deploy a model, there’s a series of combinations of incantations and PyTorch versions and CPU and GPU stuff,” Seth Juarez, the principal program manager for Microsoft’s AI platform, tells VentureBeat. “Models-as-a-Service kind of abstracts all of that away, so that if you have a model that you want to use, and that’s open source or that’s something that OpenAI built, we provide that in a catalog. You hit a button, and now you have an endpoint to use it.”

Developers can rent inference APIs and hosted fine-tuning through a pay-as-you-go plan, all without needing to spin up a virtual machine. Juarez explains that while Microsoft offers more than 1,600 models that do various things, it also wants to make it easier for developers to integrate AI functionality into their software, and MaaS is a way to achieve that.
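To make that concrete, here is a minimal sketch of what calling such a pay-as-you-go endpoint can look like. The URL, API key, header and response shape below are placeholders rather than Microsoft’s documented API: many MaaS deployments expose a chat-completions-style REST route, but the exact schema varies by model, so the details should be checked against your own deployment in Azure AI Studio.

```python
# Minimal sketch: calling a pay-as-you-go MaaS inference endpoint over HTTPS.
# ENDPOINT and API_KEY are placeholders issued when you deploy a model from
# the catalog; the request/response schema shown here is an assumption based
# on the common chat-completions pattern and may differ per model.
import requests

ENDPOINT = "https://<your-deployment>.inference.ai.azure.com/v1/chat/completions"  # placeholder
API_KEY = "<your-api-key>"  # placeholder

payload = {
    "messages": [
        {"role": "user", "content": "Summarize Models-as-a-Service in one sentence."}
    ],
    "max_tokens": 128,
}

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Note what is absent: no VM provisioning, no GPU selection, no model weights. The deployment is just an authenticated HTTP endpoint billed per use.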

From its inception in 2023 to today, Microsoft has made select models available through this program. Initially, Mistral-7B and Meta’s Llama 2 were available. This week, it added Nixtla’s TimeGEN-1 and Core42’s JAIS, and says models from AI21, Bria AI, Gretel Labs, NTT Data, Stability AI and Cohere are coming soon. That’s a small fraction of what’s available in Azure AI Studio, so how does a model make it into MaaS?

Some result from company partnerships, though Juarez admits he isn’t privy to how those deals happen. Others are supported because API work has been done to make those models’ function signatures uniform enough to fit Models-as-a-Service, giving developers a unified way to access them. More specialized models are ineligible and must be deployed another way. “That’s why you see some enabled as Models-as-a-Service and others you see you can push into your own container and run in what we call managed inference,” he says.
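The practical upshot of those uniform signatures is that switching models can be as simple as switching endpoints. The helper below is a hypothetical illustration, assuming both deployments accept the same chat-completions-style payload; the URLs, keys and field names are placeholders, not actual Azure deployments.

```python
# Hypothetical illustration of uniform function signatures: one helper
# works against any MaaS endpoint that speaks the same payload shape.
import requests

def chat(endpoint: str, key: str, prompt: str) -> str:
    """Send one user message to a chat-completions-style MaaS endpoint."""
    resp = requests.post(
        f"{endpoint}/v1/chat/completions",  # assumed route; check your deployment
        headers={"Authorization": f"Bearer {key}"},
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Swapping models is just swapping endpoints; the call shape stays the same.
# Both URLs below are placeholders.
print(chat("https://llama-deployment.example.inference.ai.azure.com", "<key-1>", "Hello"))
print(chat("https://mistral-deployment.example.inference.ai.azure.com", "<key-2>", "Hello"))
```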

To ‘rent’ or ‘own’ your models

He believes that in the future we’ll see a bifurcation in which developers choose models much the way people choose between owning and renting a home. “Basically, you own the container, and the model, and Azure ML, and you’re paying the rent and doing the upkeep, so to speak,” Juarez remarks. “In Models-as-a-Service, we do the upkeep. And the more of those models that we light up there, if you want to rent, that’s great. But there are other people who are very particularly behind a virtual network and need to run stuff on it.”

MaaS isn’t a unique…model. But why has AI become the most prominent technology to replicate the cloud computing business? Juarez suggests the status quo has been reversed: no longer are tech companies pushing out technology they think we need; now we’re demanding features and services from them. That’s thanks to AI research and commercialization moving in near lockstep with each other. “At least, in my opinion, that’s why you’re seeing this weird inversion, where you have consumers demanding this kind of experience through numbers of usage of ChatGPT. And now, the enterprise is trying to catch up…the user is demanding the research experiences today.”


