“Thanks to the advancement of open-source models, it is now clear that artificial intelligence (AI) will spread everywhere. Now is the time for open source and open innovation to take hold across every company and industry worldwide.”
NVIDIA CEO Jensen Huang introduces the physical Vera Rubin chip / Source = NVIDIA
On January 5 (local time), one day before the opening of CES, NVIDIA CEO Jensen Huang delivered a keynote speech emphasizing that AI will become even more pervasive worldwide. The Consumer Electronics Show (CES 2026) is the world’s largest IT trade fair, drawing all of the major global big tech and consumer goods companies. This year’s event is held under the theme “Innovators Show Up,” with more than 4,100 exhibitors spanning accessibility, AI, digital health, energy, enterprise solutions, entertainment, robotics, and quantum technology.
Prior to this, during Media Days from January 4 to 5, major global companies held press conferences, and NVIDIA also hosted an event at Fontainebleau Las Vegas in the United States, unveiling its roadmap for 2026 and beyond.
NVIDIA moves to secure next-generation growth with autonomous driving technology
CEO Jensen Huang begins his presentation, noting that LLMs are impacting the entire technology landscape / Source = IT Donga
CEO Jensen Huang began his address by saying, “With BERT (Bidirectional Encoder Representations from Transformers, the model that laid the groundwork for the spread of large language models, or LLMs) in 2017, transformer models emerged, and in 2022 AI awoke in the form of chat. Today’s artificial intelligence is about understanding the laws of nature, and physical AI refers to AI that interacts with the world. The world itself carries something like an information code, and physical AI is what responds to it.”
He continued, “In 2025, through open innovation, models such as DeepSeek R1 emerged, and not only startups but also large enterprises and even nation-states began to participate in the AI revolution, leading to a restructuring of entire industries.” He added, “NVIDIA has been supporting open model development with DGX supercomputers for several years and is working on frontier AI models in various fields. We are also conducting groundbreaking research using Nemotron,” before presenting NVIDIA’s roadmap.
Alpamayo is a technology that supports Level 4 autonomous driving and will be released as open source / Source = NVIDIA
At CES 2026, NVIDIA introduced “Alpamayo,” an open-source technology for autonomous driving launched in December, and announced that it would release the full training dataset as open source. Alpamayo includes a Vision Language Action (VLA) model that enables Level 4 autonomous driving, simulation blueprints, and datasets.
While Tesla’s Full Self-Driving (FSD) is offered only for its own vehicles, applying Alpamayo allows any manufacturer to produce autonomous vehicles. CEO Jensen Huang stated that the technology will first be applied to the Mercedes-Benz CLA and that the company will work toward a future in which all cars and trucks are self-driving.
“Vera Rubin production begins… Product launch by year-end”
Performance specifications of NVIDIA Rubin GPUs / Source = NVIDIA
CEO Jensen Huang said, “Through reinforcement learning, in which computers repeatedly attempt tasks on their own, results have scaled explosively. Increasingly, spending more time on computation produces better answers,” adding, “The faster the computation, the sooner we reach the next stage, so we are releasing state-of-the-art technology every single year without falling behind. We started shipping the GB200 a year and a half ago and have begun full-scale production of the GB300. Production of the NVIDIA Vera Rubin chip is now fully underway.”
Vera Rubin is designed for zero-latency performance and delivers five times the performance of the previous generation. It integrates six chip types: the Vera CPU and Rubin GPU alongside the NVLink 6 switch, ConnectX-9 SuperNIC, BlueField-4 DPU, and Spectrum-6 Ethernet switch. The Vera CPU is equipped with 88 NVIDIA custom Olympus cores, supports 1.5 TB of system memory (three times that of its predecessor), and incorporates LPDDR5X chips with up to 1.2 TB/s of bandwidth.
Compared with the previous generation, Vera Rubin reduces training time by up to 75% while increasing the number of tokens it can process by tenfold. Generation costs are also reduced by nearly tenfold, which is expected to contribute to the democratization of AI / Source = NVIDIA
The Rubin GPU, the core of AI computation, delivers 50 petaflops of NVFP4 inference performance (five times that of Blackwell) and 3.5 petaflops of NVFP4 training performance, doubles NVLink bandwidth, and houses a total of 336 billion transistors, 1.6 times more than before. NVFP4 is a 4-bit floating-point data format that NVIDIA introduced with the Blackwell architecture; it conserves memory while maintaining accuracy through fine-grained scaling handled at the hardware level.
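The article does not spell out how a 4-bit format can remain accurate, so the Python sketch below illustrates the general idea behind block-scaled 4-bit floating point: values are grouped into small blocks, each block shares a scale, and every value is snapped to the nearest entry of the 4-bit (E2M1) grid. The block size of 16 and the scaling rule are illustrative assumptions, not NVIDIA’s hardware implementation.

```python
import numpy as np

# Representable magnitudes of a 4-bit E2M1 float: {0, 0.5, 1, 1.5, 2, 3, 4, 6}, plus a sign bit.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0], dtype=np.float32)

def quantize_fp4_blockwise(x, block_size=16):
    """Toy block-scaled FP4 quantization: each block of `block_size` values shares one
    scale chosen so the block's largest magnitude maps to the largest FP4 value (6.0)."""
    x = np.asarray(x, dtype=np.float32)
    orig_shape = x.shape
    blocks = x.reshape(-1, block_size)                      # assumes size is a multiple of block_size
    scale = np.abs(blocks).max(axis=1, keepdims=True) / FP4_GRID[-1]
    scale[scale == 0] = 1.0                                 # avoid dividing all-zero blocks by zero
    scaled = blocks / scale                                 # bring block values into the FP4 range
    diff = np.abs(np.abs(scaled)[..., None] - FP4_GRID)
    nearest = FP4_GRID[diff.argmin(axis=-1)]                # snap to the nearest representable magnitude
    return (np.sign(scaled) * nearest * scale).reshape(orig_shape)

weights = np.random.randn(64).astype(np.float32)
approx = quantize_fp4_blockwise(weights)
print("mean absolute quantization error:", np.abs(weights - approx).mean())
```

Because each block carries its own scale, large and small values in different parts of a tensor can both be represented reasonably well despite only 4 bits per value, which is the trade-off such formats exploit to cut memory use.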
NVIDIA also introduced a system that extends Blackwell-based systems’ memory to accommodate longer context inputs / Source = NVIDIA
In addition, NVIDIA is implementing the “NVIDIA Context Memory Storage Platform” on the NVIDIA BlueField-4 DPU, compatible with Grace Blackwell. The device directly extends the GPU’s memory capacity, expanding the memory space available for AI development. It supports longer contexts, increases token throughput per second by up to fivefold, and improves power efficiency by a factor of five.
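The article describes the platform only at a high level. Conceptually, it is a tiered-memory approach: context data (for example, a model’s key-value cache) that no longer fits in fast GPU memory is spilled to a larger, slower tier and pulled back when needed. The Python sketch below illustrates that general pattern with a toy two-tier, least-recently-used cache; the class name, capacities, and eviction policy are assumptions for illustration, not NVIDIA’s actual API.

```python
from collections import OrderedDict

class TieredContextCache:
    """Toy two-tier context cache: a small 'fast' tier (stand-in for GPU memory) backed by
    a larger 'slow' tier (stand-in for DPU-attached storage). Least-recently-used blocks
    are spilled to the slow tier instead of being discarded."""

    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.fast = OrderedDict()   # hot context blocks, ordered by recency of use
        self.slow = {}              # spilled context blocks

    def put(self, key, block):
        self.fast[key] = block
        self.fast.move_to_end(key)
        while len(self.fast) > self.fast_capacity:           # spill the least recently used block
            old_key, old_block = self.fast.popitem(last=False)
            self.slow[old_key] = old_block

    def get(self, key):
        if key in self.fast:
            self.fast.move_to_end(key)
            return self.fast[key]
        block = self.slow.pop(key)                            # fetch the block back on demand
        self.put(key, block)                                  # re-promote it to the fast tier
        return block

cache = TieredContextCache(fast_capacity=2)
for i in range(4):                       # insert more context blocks than the fast tier can hold
    cache.put(f"kv-{i}", b"...")
print(len(cache.fast), len(cache.slow))  # -> 2 2: older blocks were spilled, not lost
print(cache.get("kv-0"))                 # -> b'...': a spilled block is transparently restored
```

In a real system the slow tier would be DPU-attached storage rather than a Python dictionary, and eviction would operate on KV-cache blocks rather than opaque byte strings, but the principle of keeping long contexts addressable beyond GPU memory is the same.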
The DGX Vera Rubin NVL72, equipped with 72 Rubin GPUs and 36 Vera CPUs, and the NVIDIA DGX SuperPOD, which integrates eight NVL72 systems for a total of 576 Rubin GPUs, were also unveiled on site. Both products are scheduled for launch in the second half of this year.
In addition, NVIDIA’s DLSS (Deep Learning Super Sampling) upscaling technology, supported on its consumer RTX graphics cards, will be upgraded to version 4.5. DLSS 4.5 introduces a dynamic multi-frame generator and a new 6x multi-frame generation mode. This allows up to five additional frames to be generated per original frame, enabling frame rates of up to 240 fps even in demanding environments such as path tracing.
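To put the 6x figure in perspective: with multi-frame generation, each natively rendered frame is displayed together with the frames generated from it, so the output rate is roughly the rendered rate multiplied by the mode’s factor. A minimal sketch of that arithmetic, assuming a hypothetical 40 fps native render rate:

```python
def effective_fps(rendered_fps: float, generated_per_rendered: int) -> float:
    """Approximate displayed frame rate under multi-frame generation:
    each rendered frame is followed by N generated frames, so the rate scales by (1 + N)."""
    return rendered_fps * (1 + generated_per_rendered)

# Hypothetical example: a path-traced scene rendering natively at 40 fps,
# with the 6x mode generating up to five extra frames per rendered frame.
print(effective_fps(40, 5))  # -> 240.0
```

The real ceiling depends on GPU headroom and latency constraints, which is why the 240 fps figure is stated as an upper bound.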
NVIDIA leverages massive liquidity to expand its market reach
NVIDIA also outlined collaboration plans with Synopsys and Siemens on the day / Source = NVIDIA
NVIDIA is pursuing various strategic investments and partnerships to contribute to AI market growth and broaden its business scope. In April last year, it acquired Lepton AI, a server rental and cloud platform provider, and in June it acquired CentML, which enhances the efficiency and cost-effectiveness of AI model training. In September, it purchased Solver, which provides AI-based prediction and analytics solutions, and Enfabrica, a chip startup for AI data centers. In December, it successively acquired generative AI company AI21 Labs, high-performance computing optimization firm SchedMD, and Groq, a company specializing in inference-only AI semiconductors.
On this basis, NVIDIA is not only improving the efficiency and competitiveness of its existing businesses but also substantially widening the scope of the markets it serves. It is strengthening its cooperation framework with large corporations such as Intel through equity purchases, and with semiconductor and manufacturing-based majors such as Synopsys and Siemens, while simultaneously reaching into broader markets by unveiling Alpamayo, its open-source autonomous driving technology. Effectively, it is expanding its business domain by joining hands with every sector that uses its hardware and software.
On the positive side, this could drive efficient development of the AI market centered on NVIDIA; conversely, it could tilt the playing field further toward the United States. Given current market trends, NVIDIA’s expansion is expected to continue steadily, and attention is focused on how it will broaden its business direction in 2026.
Nam Si-hyun, IT Donga reporter (sh@itdonga.com)