“The pace of artificial intelligence innovation over the past few years has been astonishingly fast, but what I want to say today is that we have not seen anything yet. We are only beginning to realize the potential of AI.”
AMD CEO Lisa Su delivered the CES 2026 opening keynote / Source=AMD
On January 5 (local time), AMD CEO Lisa Su delivered the official opening keynote of the Consumer Electronics Show (CES 2026), sharing her outlook on AI and AMD’s future strategy. Held under the theme “Innovators Show Up,” CES 2026, the world’s largest IT trade show, brought together 4,300 companies from more than 160 countries. Big Tech companies including Nvidia and Intel had already unveiled new products and strategies at separate events, while AMD took the official opening stage for its keynote.
“AI will be everywhere within a few years… AI for everyone”
Dr. Lisa Su opened her presentation by saying, “AMD’s top priority is AI. We support a wide range of fields including healthcare, science, manufacturing, and commerce, but what we are seeing is just the tip of the iceberg. In the next few years, AI will be everywhere and will become a technology for everyone. AI will make us smarter and more capable, and will let each of us become more productive in our own roles,” adding, “AMD is the company that provides the compute foundation to make that future a reality.”
In 2022, global computing infrastructure amounted to about 1 zettaFLOPS (1 zettaFLOPS is 1 billion teraFLOPS), and by 2025 it had grown to more than 100 zettaFLOPS. Many companies still find infrastructure performance insufficient, however, and deploying AI everywhere will require scaling capacity to more than 10 yottaFLOPS within a few years, or 10,000 times the 1 zettaFLOPS of 2022. Dr. Su emphasized that AMD is an infrastructure company with GPUs (graphics processing units), CPUs (central processing units), and NPUs (neural processing units), as well as a full range of customized solutions and AI accelerators.
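As a quick sanity check of the unit arithmetic above, the prefixes work out as follows (a minimal illustrative sketch using only the figures quoted in the keynote):

```python
# Sanity check of the FLOPS-scale arithmetic quoted in the keynote.
TERA = 10**12    # 1 teraFLOPS
ZETTA = 10**21   # 1 zettaFLOPS
YOTTA = 10**24   # 1 yottaFLOPS

print(ZETTA // TERA)        # 1000000000 -> 1 zettaFLOPS is 1 billion teraFLOPS
print(10 * YOTTA // ZETTA)  # 10000 -> 10 yottaFLOPS is 10,000x the ~1 zettaFLOPS of 2022
```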
Dr. Lisa Su introduced the AMD Helios rack, which integrates AMD CPUs, GPUs, and network systems / Source=AMD
She continued, “All AI runs in the cloud, and 8 of the top 10 AI companies use AMD EPYC CPUs. Enterprises run advanced models on AMD Instinct AI accelerators, and demand for them keeps growing. Over the past two years, the number of tokens required by the inference market has increased 100-fold, and supporting that takes a comprehensive ecosystem and system-level integration,” adding, “AMD presents ‘Helios,’ an open, modular rack design that integrates thousands of accelerators into a single system and advances with each generation. The next-generation Instinct MI455X accelerator uses 2nm and 3nm process nodes and incorporates HBM4 and advanced 3D chiplet packaging. It tightly integrates EPYC CPUs and Pensando networking chips, and runs 72 GPUs per rack.”
On the server side, AMD introduced the next-generation Instinct MI455X AI accelerator, the Zen 6 architecture-based CPU code-named “Venice,” and the 800-gigabit Ethernet chip Pensando Volcano. The MI455X packs 320 billion transistors, 70% more than the MI355, across twelve 2nm and 3nm chiplets, paired with 432GB of HBM4 memory. The next-generation server CPU “Venice” is built on a 2nm process with up to 256 cores, and doubles memory and GPU bandwidth so it can feed the MI455X at full speed at rack scale. A Helios rack carries 18,000 CDNA 5 GPU compute units, 4,600 Zen 6 CPU cores, and 31TB of HBM4 memory with 260TB/s of aggregate bandwidth.
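The rack-level totals are consistent with the per-GPU figures; a rough cross-check (the per-GPU compute-unit count below is derived here for illustration, not a number AMD published):

```python
# Cross-check of Helios rack totals against the per-GPU MI455X figures above.
GPUS_PER_RACK = 72
HBM4_PER_GPU_GB = 432  # MI455X memory capacity

print(GPUS_PER_RACK * HBM4_PER_GPU_GB / 1000)  # ~31.1 -> matches the quoted 31TB of rack HBM4
print(18_000 / GPUS_PER_RACK)                  # 250.0 -> implied CDNA 5 compute units per GPU (derived)
```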
AMD will launch the MI450X this year and introduce the MI500X series in 2027 / Source=AMD
In terms of lineup, AMD will add the Instinct MI430X and MI440X families and launch the Instinct MI500 series in 2027. The MI500 chip will be based on the 2nm CDNA 6 architecture and HBM4E memory, and is expected to deliver up to 1,000 times the AI performance of the AMD Instinct MI300X. The newly unveiled AMD Helios rack will ship at the end of 2026.
AMD introduces Ryzen AI 400, 9850X3D and other consumer products
AMD also unveiled the Ryzen AI 400 series for mainstream consumer laptops and handheld game consoles / Source=AMD
The AMD Ryzen AI Pro series, launched last year, is also being upgraded. The AMD Ryzen AI Pro 400 series is based on the Zen 5 architecture and an XDNA 2 NPU, delivering up to 60 TOPS (1 TOPS is 1 trillion operations per second). The processor supports up to 12 cores and 24 threads, and provides 1.7 times the content-creation performance and 1.3 times the multitasking performance of Intel’s Core Ultra 9 288V.
Two new models have been added to the AMD Ryzen AI Max+ series with integrated graphics / Source=AMD
AMD has also added two new models to the existing three-product Ryzen AI Max+ lineup, which uses memory unified between CPU and GPU and is therefore well suited to AI productivity workloads. The new AI Max+ 392 features 12 cores and 24 threads, while the AI Max+ 388 offers 8 cores and 16 threads. With this strengthened, more price-competitive lineup, Ryzen AI Max+ is now more likely to be adopted not only in high-performance workstations but also in mobile workstations and handheld gaming devices. AMD also launched the Zen 5-based Ryzen 7 9850X3D processor.
AMD’s newly introduced ultra-small developer PC “AMD Ryzen AI Halo” / Source=AMD
AMD also unveiled Ryzen AI Halo for the first time. Dr. Su introduced it by saying, “Ryzen AI Halo is the world’s smallest AI development system and can hold models of up to 200 billion parameters on-device. Inside, it is powered by an AMD Ryzen AI Max processor with 128GB of memory.” AMD Ryzen AI Halo fully supports ROCm, AMD’s open GPU software stack, and is optimized for AI development tools such as LM Studio, ComfyUI, and VS Code. In terms of performance, it handles demanding generative AI models such as GPT-OSS, FLUX.2, and Stable Diffusion XL. The product will be released in the second quarter.
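For context on the 200-billion-parameter figure, weight precision determines whether such a model fits in 128GB of unified memory; a back-of-the-envelope estimate (the precision levels are generic assumptions, not AMD-published specifications):

```python
# Rough memory footprint of a 200-billion-parameter model at common weight precisions.
# Precision choices are illustrative assumptions, not AMD specifications.
PARAMS = 200e9
GIB = 1024**3

for name, bytes_per_param in [("FP16", 2), ("INT8", 1), ("4-bit", 0.5)]:
    print(f"{name}: {PARAMS * bytes_per_param / GIB:.0f} GiB")

# FP16 (~373 GiB) and INT8 (~186 GiB) exceed 128GB; only ~4-bit weights (~93 GiB)
# fit, leaving headroom for activations and the KV cache.
```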
AMD positions itself as an enabler for the yottaFLOPS era
Concluding her presentation, Dr. Su said, “We are entering the era of yotta-scale computing, and deploying more powerful models everywhere will require a massive increase in computing resources. Meeting the needs of the world’s largest companies takes a broad solutions portfolio,” and continued, “AMD spans everything from cloud systems to AI PCs and embedded computing, built on an open ecosystem grounded in industry standards. The world’s most important challenges are solved when the industry ecosystem comes together as one, and we will build the future together with everyone.”
If Nvidia’s strategy is to partner directly with every AI-related industry and expand its scope of business, AMD appears committed to supplying the infrastructure and products needed on the front lines of the AI industry. To borrow the gold-rush metaphor: while Nvidia sells picks and then buys up mines and carmakers to widen its reach, AMD concentrates on selling not only picks and other tools but also the products needed to build the mines themselves. Still, as companies such as Meta and Google move toward developing their own AI accelerators, AMD will need to sharpen its response to keep pace with market trends.
Nam Si-hyun, IT Donga reporter (sh@itdonga.com)
ⓒ dongA.com. All rights reserved. Reproduction, redistribution, or use for AI training prohibited.