Model Release Synthesized from 4 sources

NVIDIA Nemotron 3 Super Tops Agentic AI, $26B Push

Key Points

  • Nemotron 3 Super: 1.2T params, 120B active, 1M token context
  • 5x throughput, 2x accuracy vs previous generation
  • Hybrid Mamba-Transformer MoE architecture
  • Adopted by Perplexity, CodeRabbit, Palantir, Siemens
  • NVIDIA AI-Q ranks #1 on DeepResearch Bench
  • NVIDIA investing $26B in open-weight AI models

References (4)
  1. NVIDIA Nemotron 3 Super Delivers 5x Higher Throughput for Agentic AI — NVIDIA AI Blog
  2. NVIDIA GTC 2026: Live Updates on What's Next in AI — NVIDIA AI Blog
  3. Nvidia to Invest $26B in Open-Weight AI Models to Compete with OpenAI, Anthropic — Wired AI
  4. Introducing Nemotron 3 Super: An Open Hybrid Mamba-Transformer MoE for Agentic Reasoning — NVIDIA Technical Blog

NVIDIA has unveiled Nemotron 3 Super, a 1.2-trillion-parameter open-weight model built for large-scale agentic AI systems. The model delivers 5x higher throughput and 2x higher accuracy than the previous generation, a significant leap in enterprise AI capability.

Technical Breakthrough

Nemotron 3 Super features a hybrid Mamba-Transformer Mixture-of-Experts (MoE) architecture, combining 120 billion active parameters (of 1.2 trillion total) with a 1-million-token context window. This design directly addresses the core challenge in multi-agent workflows: context explosion. According to NVIDIA's technical documentation, multi-agent systems generate 15x more tokens than standard chat interactions because each round must resend the historical context, tool outputs, and reasoning steps.
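The "active vs. total parameters" distinction is the key efficiency lever of an MoE design: a router activates only a few experts per token, so per-token compute scales with the active fraction (roughly 120B of 1.2T here) rather than the full model. A minimal sketch, using toy dimensions and expert counts that are illustrative assumptions rather than anything from NVIDIA's architecture:

```python
# Illustrative MoE routing sketch (not NVIDIA's implementation).
# A router picks the top-k experts per token; only those experts' weights
# are used, so per-token compute tracks "active" rather than "total" params.
import numpy as np

rng = np.random.default_rng(0)

num_experts = 10   # toy stand-in for a much larger expert pool
top_k = 1          # 1-of-10 active roughly mirrors the 120B-of-1.2T ratio
d_model = 8        # toy hidden size

def moe_layer(x, expert_weights, router_weights, k=top_k):
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ router_weights                  # (tokens, num_experts)
    chosen = np.argsort(logits, axis=-1)[:, -k:] # indices of top-k experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        probs = np.exp(logits[t, chosen[t]])
        probs /= probs.sum()                     # softmax over chosen experts
        for p, e in zip(probs, chosen[t]):
            out[t] += p * (x[t] @ expert_weights[e])  # only k experts run
    return out

experts = rng.standard_normal((num_experts, d_model, d_model))
router = rng.standard_normal((d_model, num_experts))
tokens = rng.standard_normal((4, d_model))

y = moe_layer(tokens, experts, router)
print(y.shape)  # (4, 8): each token touched only top_k of num_experts experts
```

Each token here pays for one expert's matrix multiply instead of ten, which is the same budget argument the article makes at trillion-parameter scale.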

"The model is engineered for agents that need deep specialization in reasoning, coding, and long-context analysis while remaining efficient enough to run at scale," NVIDIA stated in their official blog post.

Enterprise Adoption

Major technology companies have already deployed Nemotron 3 Super in production. Perplexity, CodeRabbit, Palantir, and Siemens are among the early adopters leveraging the model for advanced AI applications. Additionally, NVIDIA AI-Q achieved the top ranking on DeepResearch Bench, demonstrating strong performance on research and reasoning tasks.

$26 Billion Strategic Investment

In a parallel development that signals NVIDIA's ambitious expansion beyond hardware, the company announced plans to invest $26 billion in building open-weight AI models. This strategic move positions NVIDIA as a direct competitor to leading AI laboratories including OpenAI, Anthropic, and China's DeepSeek.

The investment, disclosed in regulatory filings, marks a significant expansion from NVIDIA's traditional role as an AI infrastructure supplier into full-stack AI model development. Industry analysts note it is one of the largest commitments by any company to open-source AI model development.

Market Implications

The dual announcements underscore NVIDIA's strategy to dominate the AI ecosystem on multiple fronts: providing both the computational infrastructure and the models that run on it. With agentic AI increasingly viewed as the next frontier for enterprise automation, Nemotron 3 Super's specialized architecture for multi-agent systems positions NVIDIA to capture significant market share in the rapidly growing AI agent space.
