Nvidia today unveiled Nvidia Project Digits, a personal AI supercomputer that provides AI researchers, data scientists and students worldwide with access to the power of the Nvidia Grace Blackwell platform.
Project Digits features the new Nvidia GB10 Grace Blackwell Superchip, offering a petaflop of AI computing performance for prototyping, fine-tuning and running large AI models. The company made the announcement during CEO Jensen Huang’s opening keynote at CES 2025, the big tech trade show in Las Vegas this week.
With Project Digits, users can develop and run inference on models using their own desktop system, then seamlessly deploy the models on accelerated cloud or data center infrastructure. It is based on a “super secret chip called GB110, the smallest Blackwell we can make,” Huang said.
“AI will be mainstream in every application for every industry. With Project Digits, the Grace Blackwell Superchip comes to millions of developers,” said Huang. “Placing an AI supercomputer on the desks of every data scientist, AI researcher and student empowers them to engage and shape the age of AI.”
GB10 Superchip Provides a Petaflop of Power-Efficient AI Performance
The GB10 Superchip is a system-on-a-chip (SoC) based on the Nvidia Grace Blackwell architecture and delivers up to 1 petaflop of AI performance at FP4 precision.
GB10 features an Nvidia Blackwell GPU with latest-generation CUDA cores and fifth-generation Tensor Cores, connected via NVLink-C2C chip-to-chip interconnect to a high-performance Nvidia Grace CPU, which includes 20 power-efficient cores built with the Arm architecture. MediaTek, a market leader in Arm-based SoC designs, collaborated on the design of GB10, contributing to its best-in-class power efficiency, performance and connectivity.
The GB10 Superchip enables Project Digits to deliver powerful performance using only a standard electrical outlet. Each Project Digits system features 128GB of unified, coherent memory and up to 4TB of NVMe storage. With the supercomputer, developers can run large language models of up to 200 billion parameters to supercharge AI innovation. In addition, using Nvidia ConnectX networking, two Project Digits AI supercomputers can be linked to run models of up to 405 billion parameters.
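As a rough back-of-envelope check (an estimate, not from Nvidia's materials), the memory math behind those figures is straightforward: at 4-bit precision a model's weights take about half a byte per parameter, so 200 billion parameters fit within one unit's 128GB and 405 billion parameters need the combined 256GB of two linked units.

```python
# Back-of-envelope estimate (not from Nvidia's materials) of how 128GB of
# unified memory maps to ~200B-parameter models at FP4, and why two linked
# units reach the ~405B range.

def model_weight_gb(num_params_billions: float, bits_per_param: int = 4) -> float:
    """Approximate weight footprint in GB at a given quantization width."""
    bytes_total = num_params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

for params in (200, 405):
    print(f"{params}B params @ FP4 ≈ {model_weight_gb(params):.0f} GB of weights")

# 200B @ FP4 ≈ 100 GB  -> fits in one 128GB Project Digits unit, with headroom
#                         left for KV cache and activations.
# 405B @ FP4 ≈ 203 GB  -> needs two units (256GB combined) linked over
#                         Nvidia ConnectX networking.
```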
Grace Blackwell AI Supercomputing Within Reach
With the Grace Blackwell architecture, enterprises and researchers can prototype, fine-tune and test models on local Project Digits systems running Linux-based Nvidia DGX OS, and then deploy them seamlessly on Nvidia DGX Cloud, accelerated cloud instances or data center infrastructure.
This allows developers to prototype AI on Project Digits and then scale on cloud or data center infrastructure, using the same Grace Blackwell architecture and the Nvidia AI Enterprise software platform.
Project Digits users can access an extensive library of Nvidia AI software for experimentation and prototyping, including software development kits, orchestration tools, frameworks and models available in the Nvidia NGC catalog and on the Nvidia Developer portal. Developers can fine-tune models with the Nvidia NeMo framework, accelerate data science with Nvidia Rapids libraries and work with common tools such as PyTorch, Python and Jupyter notebooks.
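As a minimal sketch of the local-prototyping workflow described above (the model name and settings below are illustrative placeholders, not specific to Project Digits or endorsed by Nvidia), a developer could load and run an open model on the desktop with PyTorch and Hugging Face Transformers, then move the same code to cloud or data center GPUs later:

```python
# Minimal local-inference sketch using PyTorch and Hugging Face Transformers.
# The model ID and generation settings are placeholders; any model that fits
# in the system's unified memory could be used the same way.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # illustrative; swap in your model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # or a 4-bit quantized variant for larger models
    device_map="auto",           # place weights on the local GPU automatically
)

prompt = "Summarize the GB10 Superchip in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```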
To build agentic AI applications, users can also harness Nvidia Blueprints and Nvidia NIM microservices, which are available for research, development and testing via the Nvidia Developer Program. When AI applications are ready to move from experimentation to production environments, the Nvidia AI Enterprise license provides enterprise-grade security, support and product releases of Nvidia AI software.
Availability
Project Digits will be available in May from Nvidia and top partners, starting at $3,000.
“By making Llama models open source, we’re committed to democratizing access to cutting-edge AI technology. With Project Digits, developers can harness the power of Llama locally, unlocking new possibilities for innovation and collaboration,” said Ahmad Al-Dahle, Head of GenAI at Meta, in a statement.
“Advancing AI requires tools that empower researchers to experiment at scale, speed and precision. Nvidia’s Project Digits represents a significant leap forward. I’m excited to see how 128GB in such a small form factor can advance the future of enterprise AI,” said Silvio Savarese, Chief Scientist at Salesforce, in a statement.
“At Hugging Face, we want to make it easy for developers to build their own AI. Nvidia’s Project Digits will empower AI builders to build and run their own Gen AI models and systems at the edge. With 128GB of unified memory, AI builders can run 200B parameter models locally, and connect multiple Project Digits systems to scale from there. I can’t wait to see what the Hugging Face community will build with Nvidia Project Digits,” said Jeff Boudier, Head of Product at Hugging Face, in a statement.
“Nvidia’s Project Digits is a powerhouse you can hold in the palm of your hand. With two Project Digits units, developers can easily work with AI models up to 405B parameters in size. We can’t wait to see the apps people will build with this,” said Michael Chiang, cofounder of Ollama, in a statement.