Nvidia announced plans to invest $26 billion over the next five years to develop open-weight artificial intelligence models, marking a major expansion beyond chip manufacturing toward becoming a leading AI research lab. The initiative, confirmed through 2025 financial filings and executive interviews, positions Nvidia as a serious challenger to established AI labs such as OpenAI and DeepSeek, with a heavy emphasis on transparency and openness in AI development.
Open-weight models provide publicly available parameters, allowing researchers and startups worldwide to customize and deploy AI more freely. Nvidia’s approach not only opens the weights but also shares the technical innovations driving model training and architecture, a move that could accelerate AI advancements built upon Nvidia’s ecosystem. This aligns Nvidia’s AI ambitions tightly with its hardware dominance, reinforcing its influence across AI and data center markets.
Nvidia’s Nemotron 3 Super and its training advances
Nvidia simultaneously unveiled Nemotron 3 Super, its most sophisticated open model, boasting 128 billion parameters, comparable to OpenAI's top open variants but reportedly outperforming them across multiple AI benchmarks. The model scored 37 on the Artificial Intelligence Index, exceeding GPT-OSS's 33, though some Chinese counterparts scored higher. Nemotron 3 Super also topped PinchBench, a test measuring control over robotics.
The model benefits from novel architectural tweaks that enhance reasoning, context handling, and reinforcement-learning responses, areas crucial to making AI more usable in real-world scenarios. These research strides mark a shift toward more serious engagement with open models, supporting innovation beyond proprietary walled gardens. The company has already completed pretraining of a massive 550-billion-parameter model, signaling ambitions beyond its current releases.
Open AI model developments and geopolitical impact
While Meta’s Llama ignited early enthusiasm for open models, Meta's recent shifts suggest fewer fully open releases in the future, and OpenAI’s open-weight model remains less capable than its cloud-only versions. Meanwhile, Chinese firms including DeepSeek and Alibaba have popularized openly accessible models with competitive performance and lower training costs, capturing the attention of global developers. DeepSeek’s rumored use of Huawei chips for new models might encourage adoption of Chinese hardware amid US sanctions, potentially challenging Nvidia’s market share.
Nvidia’s push to release advanced open models from the US could serve as a strategic counterbalance, helping maintain American technological leadership and offering a strong alternative to China’s growing open-source AI ecosystem. This competition reflects broader tensions in AI development, where access to powerful models and efficient hardware intertwines with geopolitical considerations.
The future of open-source AI models and Nvidia’s role
Nvidia’s investment highlights the rising importance of open AI models as foundational infrastructure. Such openness fosters collaborative innovation, particularly benefiting startups and academic researchers who might otherwise lack access to advanced AI technology. Industry voices urge greater public funding for open models to complement corporate efforts, emphasizing balanced growth in the sector.
With its dominance in GPUs and supercomputing systems, Nvidia aims to integrate AI development tightly with hardware evolution, optimizing everything from processing power to data flow. This integrated approach strengthens Nvidia’s products and could also set standards for modeling and training large-scale AI, benefiting an ecosystem that spans climate science to robotics.
As open AI innovation accelerates globally, Nvidia’s $26 billion commitment signals a strong bet on transparency and competition. Whether this strategy will curb the momentum of Chinese open AI models or spur an even fiercer battle remains a central question in the unfolding AI century.