Nvidia Expands CUDA Platform to Embrace RISC-V Chips

lipflip – Nvidia is extending its CUDA platform to support RISC-V, signaling a strategic shift toward open-source computing. While Nvidia has traditionally focused on ARM-based architectures, it now recognizes the rising influence of RISC-V, especially in regions affected by U.S. export restrictions such as China.


This announcement was made by Nvidia’s Vice President of Hardware Engineering, Frans Sijstermans, during a RISC-V summit in China. Sijstermans, who also serves on the RISC-V board, revealed that Nvidia is officially supporting native CUDA integration on RISC-V processors. This marks a significant move toward expanding CUDA beyond its conventional x86 and ARM environments.

A diagram shared at the event showed RISC-V CPUs interfacing with CUDA drivers at the OS level while CUDA kernels run on Nvidia GPUs. The setup also features a DPU, apparently one of Nvidia's own, indicating that the company envisions this architecture as a foundation for data center and HPC deployments.
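
To make that split concrete, here is a minimal, generic CUDA vector-add sketch; it is not code from Nvidia's announcement, and nothing in it is RISC-V-specific. The host-side calls (cudaMalloc, cudaMemcpy, the kernel launch) are ordinary C++ serviced by the CUDA driver on whatever CPU the operating system runs on, which under the announced support could be a RISC-V core, while the __global__ kernel executes on the Nvidia GPU.

```cuda
// Minimal sketch of the host/device split in a CUDA program.
// The host code runs on the CPU (x86, ARM, or, per the announcement, RISC-V);
// only the __global__ kernel runs on the Nvidia GPU.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one element per thread
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host allocations live in CPU memory, regardless of the CPU's ISA.
    float *ha = new float[n], *hb = new float[n], *hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device allocations and copies go through the CUDA driver on the host OS.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Kernel launch: the GPU does the compute; the host only orchestrates.
    add<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);  // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}
```

In other words, bringing CUDA to a new host ISA is chiefly a driver, runtime, and toolchain effort; application code along these lines should recompile largely unchanged.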

Nvidia’s new direction may also reflect the increasing demand for RISC-V solutions in China. As U.S. export controls limit the availability of Nvidia’s most powerful AI accelerators like the GB200 and GB300, the company appears to be preparing alternative paths to remain relevant in restricted markets.

RISC-V Gains Momentum as Nvidia and AMD Expand Support

RISC-V is emerging as a serious alternative to x86 and ARM, with growing industry interest and technical demonstrations highlighting its capabilities. Back in 2021, researchers managed to run CUDA code on a RISC-V-based Vortex GPGPU using an OpenCL translator. Although inefficient, the experiment showed that RISC-V could support mainstream compute ecosystems given the right tooling.

By introducing native CUDA support, Nvidia is now moving beyond experimental compatibility. This decision acknowledges RISC-V’s potential to power not only embedded systems but also high-performance computing applications. In a market where open-source flexibility is becoming more valuable, Nvidia’s move positions the company to tap into emerging demand, especially in regions like China where RISC-V adoption is growing rapidly.

At the same time, AMD is pushing ROCm, its open alternative to CUDA, which already supports RISC-V as of its seventh release. AMD aims to challenge CUDA's dominance by offering an open, vendor-neutral ecosystem. However, ROCm adoption remains limited, and its ecosystem is still maturing.


The broader compute industry is beginning to see real momentum toward diversified architectures. Nvidia’s support for RISC-V not only widens its software base but also helps future-proof its business in uncertain geopolitical environments. The competition between CUDA and ROCm is heating up, but for now, Nvidia’s dominance in the GPU compute space remains largely unchallenged. With native CUDA support on RISC-V, developers and hardware makers may soon have greater flexibility in designing next-generation compute platforms tailored to local needs and global standards.