High-performance computing, long confined to academic labs, has today become the backbone of AI-driven business transformations. But no matter the use case, massive processing power is needed to handle the data and meet the heavy demands of modern AI and HPC workloads.
To handle these challenges, companies such as Super Micro Computer Inc. and its partners have sought to deliver AI-driven data infrastructure and storage solutions designed to meet the high-performance demands of modern computing. It’s an exciting time to be involved in the industry, according to CJ Newburn (pictured, back row, right), distinguished engineer at Nvidia Corp.
“One of the things that characterizes some of what’s going on now is the rate at which usage models are changing and evolving,” Newburn said. “Every couple of years, we have radical changes. That is leading to a need for new infrastructure. In order to be able to make best use of that, new technologies are needed and are quickly emerging out of that.”
Newburn; Randy Kreiser (front row, right), senior storage architect at Supermicro; Balaji Venkateshwaran (front row, left), vice president of product management at DataDirect Networks Inc.; and Bill Panos (back row, left), senior product marketing engineering manager at Solidigm, spoke with theCUBE Research’s Rob Strechay at the Supermicro Open Storage Summit event series, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed the evolving role of AI-driven data infrastructure in HPC workloads and the critical importance of advanced storage solutions to support these demands. (* Disclosure below.)
AI-driven data infrastructure looks to support large-scale AI models
Recent usage models are defined by their large scale. They also involve increasingly fine-grained access patterns, according to Newburn.
“Some of the different applications that we see in this space are LLMs, large language models, GNNs, graph neural networks, and RAG, for retrieval augmented generation,” he said. “Those new applications really need new infrastructure. They operate at a big scale. You can see the Supermicro NVLink rack-level integration, where that whole rack acts as one GPU, all NVLink connected.”
As mentioned, the new world of AI and machine learning involves a large amount of data, as well as a variety of data types: block, file and object. DataDirect Networks has been seeking to make data management easy and seamless for customers, according to Venkateshwaran.
“When a customer is buying a GPU infrastructure, computer infrastructure for their AI and ML application, what DDN wants to do is be the one-stop shop in terms of storage and data management,” he said. “There are a number of things we’ve done over the years and continue to do in partnership with everyone that’s here.”
Holistic AI infrastructure and tailored storage solutions
For Solidigm, the focus is at the physical level with the company’s SSDs. When the infrastructure is considered holistically, there are areas within each stage where certain media products fit and where a company will engage, according to Panos.
“Either with, say, DDN or with Nvidia or Supermicro, as you set up that infrastructure, it isn’t a one-size-fits-all, so you have to think about it holistically and look for each of the stages and what the necessary requirements might be,” Panos said. “You can, certainly, as you’re thinking about your infrastructure and rolling that out, engage with Supermicro, Solidigm or DDN or Nvidia to help you make those right decisions.”
At the end of the day, it’s about providing solutions for companies. It involves various new technologies from Supermicro and its partners, according to Kreiser.
“We’re talking about the ability to bring trillions of all of these threads …. in all of the modeling and so forth, especially with large language models and so forth, which I see just exploding all over the place,” he said.
That’s going to lead to a number of possibilities, in Kreiser’s view. The examples are numerous: “Whether it’s seismic processing to get to figure out where the next major drill place is going to be for oil, or whether that be medical reasons to find a discovery for cancer, or whatever it may be, the ability to take larger models and be able to just compute down to the nth degree to deliver effectively these results.”
Collaboration is also driving improvements in efficiency and performance across the AI data life cycle. That goes right to the heart of what the partnership is all about, according to Venkateshwaran.
“As use cases and data types explode, the goal is to make it simpler and abstract away the complexity from the customer, so the customer can focus on running their applications and doing what they like to do best,” he said. “What we are doing here is working behind the scenes to abstract away all that complexity for the customers.”
Stay tuned for the complete video interview, part of SiliconANGLE’s and theCUBE Research’s coverage of the Supermicro Open Storage Summit event series.
(* Disclosure: TheCUBE is a paid media partner for the Supermicro Open Storage Summit event series. Neither Super Micro Computer Inc., the sponsor of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
Photo: SiliconANGLE