CoreWeave Inc., the operator of a cloud platform optimized for graphics card workloads, today announced that it has closed a $1.1 billion funding round.
The Series C raise reportedly values the company at $19 billion. That’s up from the $7 billion it was worth following a $642 million secondary sale in December. Fidelity Management, which led that deal, also joined the round CoreWeave announced today, along with Coatue, Lykos Global Management, Altimeter Capital and Magnetar.
CoreWeave operates a public cloud that provides access to about a dozen different Nvidia Corp. graphics processing units. It targets two main use cases: artificial intelligence and graphics rendering. CoreWeave claims that its platform allows customers to run such workloads more cost-efficiently, and with better performance, than established public clouds.
Some of the GPUs the company offers, such as the H100, are built from the ground up for AI workloads. Its cloud also features other Nvidia chips such as the A40, which is mainly geared towards computer graphics professionals.
Unlike their AI-optimized counterparts, the A40 and the other rendering-optimized GPUs that CoreWeave provides include RT Cores. Those are circuits optimized for ray tracing, a rendering technique used to simulate lighting effects such as shadows and reflections. The method involves shooting virtual light rays into a scene and tracking how they bounce off objects to determine the most realistic color for each pixel.
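To make the idea concrete, here is a minimal, hypothetical ray tracer in Python that shades a single sphere by casting one ray per pixel and testing where it intersects the surface. It is an illustration of the technique only, not CoreWeave or Nvidia code, and RT Cores run these intersection tests in dedicated hardware rather than in software loops like this.

# Minimal, illustrative ray tracer: one sphere, one light, one ray per pixel.
# Hypothetical example; not taken from CoreWeave or Nvidia software.
import numpy as np

WIDTH, HEIGHT = 160, 120
SPHERE_CENTER = np.array([0.0, 0.0, -3.0])
SPHERE_RADIUS = 1.0
LIGHT_DIR = np.array([1.0, 1.0, -1.0]) / np.sqrt(3.0)

def ray_sphere_hit(origin, direction):
    """Return the distance to the sphere along the ray, or None if it misses."""
    oc = origin - SPHERE_CENTER
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

image = np.zeros((HEIGHT, WIDTH))
for y in range(HEIGHT):
    for x in range(WIDTH):
        # Shoot a ray from a camera at the origin through this pixel.
        direction = np.array([(x / WIDTH - 0.5) * 1.6, (0.5 - y / HEIGHT) * 1.2, -1.0])
        direction /= np.linalg.norm(direction)
        t = ray_sphere_hit(np.zeros(3), direction)
        if t is not None:
            # Shade the hit point by how directly it faces the light source.
            hit = t * direction
            normal = (hit - SPHERE_CENTER) / SPHERE_RADIUS
            image[y, x] = max(np.dot(normal, LIGHT_DIR), 0.0)
# "image" now holds a grayscale rendering of the lit sphere.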
CoreWeave’s cloud is based on Kubernetes. It uses Knative, a Kubernetes extension originally developed by Google LLC, to automatically adjust the amount of hardware in customers’ environments as application demand changes.
One of Knative’s flagship features is its so-called scale-to-zero mechanism. Companies often can’t completely shut down their GPU clusters when they’re not in active use, but rather must leave certain components running. Those idle components consume hardware resources and thus incur additional costs. Thanks to Knative, CoreWeave customers can shut down all the GPUs in their clusters when they’re not actively needed.
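As a rough illustration of how scale-to-zero is configured, the sketch below uses the Kubernetes Python client to create a Knative Service for a hypothetical GPU inference container. The image name, namespace and scale limits are placeholder assumptions, not details from CoreWeave; the key piece is the min-scale annotation, which lets Knative remove every replica, and release the GPUs those replicas hold, when no traffic is arriving.

# Sketch of a Knative Service that can scale to zero; names are placeholders.
# Assumes a kubeconfig pointing at a cluster with Knative Serving installed.
from kubernetes import client, config

config.load_kube_config()

service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "gpu-inference", "namespace": "default"},
    "spec": {
        "template": {
            "metadata": {
                "annotations": {
                    # Allow Knative to scale the service all the way down to
                    # zero pods when idle, freeing the GPUs it was holding.
                    "autoscaling.knative.dev/min-scale": "0",
                    "autoscaling.knative.dev/max-scale": "4",
                }
            },
            "spec": {
                "containers": [{
                    "image": "registry.example.com/inference:latest",  # placeholder image
                    "resources": {"limits": {"nvidia.com/gpu": "1"}},
                }]
            },
        }
    },
}

# Knative Services are custom resources, so they are created through the
# CustomObjectsApi rather than the core Kubernetes APIs.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.knative.dev",
    version="v1",
    namespace="default",
    plural="services",
    body=service,
)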
Building scale-to-zero GPU clusters was historically difficult because reactivating graphics cards after shutting them down can take a significant amount of time. That lengthy boot process, in turn, increases hardware costs and creates latency for users. CoreWeave has developed a software tool called Tensorizer to address the challenge.
One reason reactivating a GPU cluster takes time is that the AI model it’s running must be reloaded into the graphics cards every time. Because the most advanced AI models are multiple gigabytes in size, the loading process can be slow. According to CoreWeave, Tensorizer speeds up the workflow by pulling AI models into GPUs in small chunks rather than all at once, as other tools do.
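As a rough sketch of the general idea, the hypothetical Python/PyTorch example below, which is not CoreWeave’s actual code, streams a raw float16 weight file into a preallocated GPU tensor one fixed-size chunk at a time, so the whole file never has to sit in host memory before reaching the graphics card.

# Illustrative sketch of chunked weight loading; not CoreWeave's Tensorizer.
# Assumes a raw float16 weight file and a CUDA-capable PyTorch install.
import numpy as np
import torch

CHUNK_BYTES = 64 * 1024 * 1024  # hypothetical 64 MB read size

def load_weights_chunked(path: str, numel: int, device: str = "cuda") -> torch.Tensor:
    """Stream `numel` float16 values from `path` into GPU memory chunk by chunk."""
    gpu_flat = torch.empty(numel, dtype=torch.float16, device=device)
    itemsize = np.dtype(np.float16).itemsize
    offset = 0  # element offset into the destination tensor
    with open(path, "rb") as f:
        while offset < numel:
            buf = f.read(min(CHUNK_BYTES, (numel - offset) * itemsize))
            if not buf:
                break
            chunk = torch.from_numpy(np.frombuffer(buf, dtype=np.float16).copy())
            # Copy this slice straight onto the GPU; the rest of the file
            # never needs to occupy host memory at the same time.
            gpu_flat[offset:offset + chunk.numel()].copy_(chunk, non_blocking=True)
            offset += chunk.numel()
    return gpu_flat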
The company has also equipped its platform with a number of other performance optimizations.
CoreWeave orchestrates the flow of data between graphics cards using GPUDirect RDMA, an Nvidia-developed network acceleration technology. Typically, network requests to a graphics card must go through its host server’s operating system and central processing unit. GPUDirect RDMA skips those pit stops, which allows data to reach GPUs faster.
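Applications usually tap GPUDirect RDMA indirectly, through communication libraries such as Nvidia’s NCCL, rather than calling it themselves. The hedged sketch below shows a multi-node all-reduce with PyTorch’s NCCL backend, which can route inter-node transfers directly between the network card and GPU memory when the fabric and drivers allow it. The NCCL_NET_GDR_LEVEL value is an assumed, permissive setting, and the launcher is expected to supply the usual MASTER_ADDR, MASTER_PORT, RANK and WORLD_SIZE variables.

# Sketch: PyTorch's NCCL backend can use GPUDirect RDMA when the cluster's
# network and drivers support it, so inter-node transfers bypass the host CPU.
# Assumes MASTER_ADDR, MASTER_PORT, RANK and WORLD_SIZE are set by the launcher.
import os
import torch
import torch.distributed as dist

# NCCL_NET_GDR_LEVEL controls how far apart a NIC and GPU may be for NCCL to
# still use GPUDirect RDMA between them; "SYS" is the most permissive setting.
os.environ.setdefault("NCCL_NET_GDR_LEVEL", "SYS")

dist.init_process_group(backend="nccl", init_method="env://")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

# Gradient-style all-reduce: with GPUDirect RDMA, the data can travel between
# the NIC and GPU memory without a detour through host memory.
grads = torch.ones(1_000_000, device="cuda")
dist.all_reduce(grads)
dist.destroy_process_group()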
The company installs its graphics cards in bare-metal servers. Those are machines that don’t run a hypervisor, an arrangement that avoids the performance overhead associated with virtualization and leaves more resources for customer workloads.
It currently hosts its infrastructure in 14 data centers throughout the U.S., most of which were built over the past two years. The company will reportedly use the proceeds from its newly announced funding round to build additional cloud facilities in Europe. In the longer term, it plans to extend its data center network to additional markets and raise more funding to support its growth efforts.
Image: CoreWeave