Project Digits: Nvidia's weapon to democratize AI and shape the future in its image


During its CES 2025 conference, Nvidia made a big splash with its new RTX 5000 cards, but it also dropped a little bomb: Project Digits, whose definitive name, incidentally, has yet to be found. A machine the size of a Mac mini which puts, in the GPU giant's own words, “a Grace Blackwell AI supercomputer on your desk”. One sentence, and a lot of keywords…


Project Digits is about the size of a Mac mini.

© Nvidia


The meeting of two worlds

This formula nevertheless sums up quite well both the promise and the reality of this new product. Let's first go back to what lies behind the two names, Grace and Blackwell. Grace is an ARM processor, here with 20 cores, derived from the eponymous architecture launched by Nvidia in March 2022. But this chip is not alone: it is paired with a Blackwell chip, which is nothing more and nothing less than the new GPU architecture Nvidia has just unveiled and also introduced in its new GeForce RTX 5000 series of graphics cards.

The Blackwell GPU therefore combines “classic” CUDA cores with latest-generation Tensor Cores to tackle the calculations behind intelligent algorithms. An important new feature is that these AI cores are directly connected to the graphics rendering pipeline. Together, the Grace and Blackwell chips form the GB10 superchip, a giant chip said to deliver up to 1 petaFLOP (FP4) of AI computing power.


This means that a single Project Digits device will be able to run large language models (LLMs) with 200 billion parameters locally. To put things in perspective, the GPT-4o model behind ChatGPT reportedly has around 12 billion parameters. But even a model the size of the "unoptimized" GPT-3, with its 175 billion parameters, should be able to run on this machine…
And if a single Digits is not enough for you and you pair it with a second one, in theory you should be able to run models with 405 billion parameters: that is, the parameter count of a truly large model like Llama 3.1 405B.
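To see why those parameter counts line up with the hardware, a quick back-of-the-envelope calculation helps: at FP4 precision each parameter occupies half a byte, so a 200-billion-parameter model needs roughly 100 GB just for its weights, which fits in a single unit's 128 GB of unified memory, while the roughly 203 GB of weights of a 405-billion-parameter model fit across two linked units (256 GB). A minimal sketch of that arithmetic (our own illustration, counting weights only and ignoring activations, KV cache and runtime overhead):

```python
# Back-of-the-envelope memory estimate for LLM weights at various precisions.
# Assumption: weights only -- activations, KV cache and runtime overhead are ignored.

BYTES_PER_PARAM = {"fp4": 0.5, "fp8": 1.0, "fp16": 2.0}

def weight_memory_gb(n_params_billion: float, precision: str) -> float:
    """Approximate memory needed to hold the model weights, in GB."""
    return n_params_billion * 1e9 * BYTES_PER_PARAM[precision] / 1e9

for model, size_b in [("200B (one Digits unit)", 200), ("405B (two linked units)", 405)]:
    for precision in ("fp4", "fp8", "fp16"):
        print(f"{model:>24} @ {precision}: ~{weight_memory_gb(size_b, precision):.0f} GB")

# Roughly: 200B @ fp4 -> ~100 GB (fits in 128 GB), 405B @ fp4 -> ~203 GB (fits in 2 x 128 GB),
# whereas 200B @ fp16 -> ~400 GB would not fit without quantization.
```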


This puts into perspective the burning question of whether a platform built around an RTX 5090 (which “only” has 32 GB of video memory), a powerful processor and plenty of RAM would actually be more capable…

A first essential asset

Especially since it is easy to forget that Project Digits has at least two hardware assets up its sleeve. The first is that the Grace chip and the Blackwell chip are connected by an NVLink-C2C interconnect. Stemming from the acquisition of Mellanox in 2019, this technology allows very efficient chip-to-chip links, “up to 25x more power efficient and 90x more area efficient than a PCIe Gen 5 connector on NVIDIA chips”, as the GeForce designer explains on its site. In other words, NVLink-C2C avoids the usual bottleneck between the CPU, the GPU and the RAM. It is this integration that should ensure fast data transfers between the CPU and the GPU, as well as efficient access to the unified memory: a total of 128 GB of LPDDR5X RAM soldered onto the board, alongside 4 TB of NVMe SSD storage.
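To make that bottleneck concrete: on a conventional workstation with a discrete GPU, every tensor created on the CPU must be explicitly copied from system RAM into the card's own VRAM over PCIe before the GPU can use it, and those staging copies are exactly what a coherent CPU-GPU link with unified memory is meant to remove. A minimal PyTorch sketch of the explicit-copy model that Digits is designed to avoid (a generic illustration on our part, not Nvidia's code, and it assumes a CUDA-capable GPU is present):

```python
import time
import torch

# On a discrete-GPU system, tensors created on the CPU live in host RAM
# and must be explicitly copied over PCIe before the GPU can use them.
x = torch.randn(4096, 4096)            # allocated in host RAM

if torch.cuda.is_available():
    start = time.perf_counter()
    x_gpu = x.to("cuda")               # explicit host-to-device copy over PCIe
    torch.cuda.synchronize()           # wait for the transfer to finish
    print(f"host->device copy took {time.perf_counter() - start:.4f} s")
else:
    print("No CUDA device available; copy skipped.")

# On a unified-memory design such as GB10, CPU and GPU address the same
# 128 GB LPDDR5X pool, so this kind of staging copy stops being the
# limiting factor when feeding the GPU with model weights.
```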


The connection between the two chips allows, among other things, the use of unified memory.

© Nvidia

A second advantage for scaling

The second element stems from the fact that Nvidia is looking further ahead. It is of course possible to use a single box on its own, but for those who need to link two or more, Nvidia has integrated a ConnectX networking chip (also born from the Mellanox acquisition). It allows machines to be connected in such a way that their chips and RAM are seen as a single chip and a single memory pool.
In other words, it is possible to take full advantage of scaling, the famous “scalability”: start small and expand your setup if needed, to build your own little supercomputer…
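Nvidia has not detailed the software side of spanning two units, but multi-machine LLM work of this kind is typically driven through a distributed framework such as torch.distributed, with the model's weights sharded across the nodes. A minimal, hypothetical sketch of that pattern, assuming each Digits box runs one process, both are reachable over the ConnectX link, and one of them acts as the rendezvous host (all addresses and environment variables below are illustrative, not Nvidia's documented workflow):

```python
import torch
import torch.distributed as dist

# Hypothetical two-node setup: rank 0 runs on the first Digits unit, rank 1 on
# the second. Each process would be launched with environment variables such as
#   MASTER_ADDR=192.168.1.10 MASTER_PORT=29500 WORLD_SIZE=2 RANK=0 python shard.py

def main() -> None:
    dist.init_process_group(backend="nccl")      # reads the env vars above
    rank, world = dist.get_rank(), dist.get_world_size()

    # Each node would hold its own shard of the model; here we only check that
    # the two processes can exchange data across the link.
    t = torch.full((1,), float(rank), device="cuda")
    dist.all_reduce(t, op=dist.ReduceOp.SUM)     # 0 + 1 = 1 on both nodes
    print(f"rank {rank}/{world} sees all-reduce result {t.item()}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```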


For whom?

Despite its resemblance to a Mac mini, Project Digits is clearly not a desktop PC like any other, a machine intended for the general public. The fact that it offers Wi-Fi, Bluetooth and USB connectivity could sow doubt, and its unit price, at $3,000, could also pass for that of an Apple mini PC. The price in euros is not yet known, but it should be announced at launch, which, according to Nvidia boss Jensen Huang, should take place around next May.

But unless you have the means and the desire to dabble in AI development, Project Digits is not for you. This product is intended for those who want to get started in AI or who already work in it. It is a tool for democratizing production and research in artificial intelligence, certainly, but one that can be used to prototype or fine-tune models locally before deploying them to online infrastructure. With Digits, Nvidia is taking a giant step toward reducing AI training's reliance on the ever-expensive cloud.

The targets are therefore researchers, engineers, students, small businesses, giants fond of secrecy, and so on. A protean, endless list, to which can be added everyone who wants to develop intelligent services and algorithms without necessarily being able to afford a DGX, the first supercomputer dedicated to AI, launched by Nvidia in 2016, and whose first unit, for the record, was delivered by Jensen Huang himself to what was then a small start-up: a certain OpenAI.



The future is written in three letters

Project Digits is also, in Nvidia's eyes, an essential guarantee for the future: that of developing on an ARM platform. Because for Nvidia, as its aborted attempt to buy ARM proves, the future of AI runs through the ARM architecture.
To begin with, developing on Project Digits is the assurance that your program will run smoothly on Nvidia's high-end cloud platforms, hosted on the overpowering and expensive DGX systems powered by 72-core Grace chips built on ARM Neoverse cores.
Reciprocally, and logically, Nvidia also intends to use Project Digits to draw as many AI researchers and designers as possible into its hardware and software universe. This is why it highlights the entire software offering it has developed over the years, whether its powerful tools dedicated to AI training (the NeMo framework, the RAPIDS libraries) or DGX Cloud. And Project Digits runs on GNU/Linux (DGX OS), a crucial point.

Behind Project Digits, other futures…

But Project Digits also sketches out new trends that Jensen Huang acknowledged during his keynote: “AI will be omnipresent in all applications across all industries”, he said, underlining the importance of opening the door to this world to as many people as possible, and of having them enter through Nvidia's door.

De facto, beyond ChatGPT, Stable Diffusion and the like, most digital sectors are preparing their own revolution. This is particularly true of video games, Nvidia's original home turf.
Ever more beautiful and ever more realistic, notably thanks to ray tracing, games are giving an increasingly important place to artificial intelligence, and no longer just to make non-player characters react credibly, as we saw with Nvidia ACE about a year ago.

Generative AI is now being used to create persistent universes and 3D environments whose pixels are no longer generated by classic GPU cores but directly by Tensor cores; a project like Oasis was an experimental illustration of this. But major studios are also adopting these generative AIs to render characters, and in particular their faces, thanks to technologies like RTX Neural Faces, or to improve rendering with generative AI, as we saw recently with Nvidia's Zorah demonstration. It is then the Tensor cores that are in charge, and no longer (only) the CUDA cores.

And for the general public?

But, as in all good stories, there is also another future to be glimpsed behind this extraordinary machine. To produce the ARM Grace chip at the heart of Project Digits, Nvidia teamed up with a processor giant, Taiwan's MediaTek.

The Poulidor of smartphone chips (the perpetual runner-up), Qualcomm's competitor has come back in force in recent years, most recently with its Dimensity 9400 SoCs. So what could come of combining MediaTek's know-how in optimizing low-power ARM SoCs with Nvidia's expertise in GPUs and AI? No need to search for hours: persistent rumors have been swirling for months about the upcoming arrival of Windows PCs equipped with Nvidia ARM SoCs… History may record that the first step in that direction was none other than a pocket supercomputer designed for AI. Let's just hope Nvidia finally finds a commercial name as impressive as its machine…

