The world’s largest information and communications industry exhibition, “Mobile World Congress Barcelona 2026 (MWC 2026),” opened on the 2nd. While CES, held in early January, is an event for consumer electronics, MWC is a venue for information and communications technology companies. MWC 2026 is hosting more than 2,800 companies from 205 countries around the world, and the Republic of Korea ranks fourth among participating countries with 182 companies on site. Korea’s three major telecommunications operators—KT, SK Telecom, and LG Uplus—as well as global smartphone manufacturer Samsung Electronics are participating, and 97 small and medium-sized enterprises and some 90 startups are also present.
Vivek Badrinath, Director General of the GSMA (GSM Association), delivers a keynote speech at MWC26 Barcelona / Source = GSMA
Previously, MWC was effectively a festival for mobile carriers and a fiercely contested arena among telecommunications companies. After COVID-19, however, autonomous driving and AI built on network technologies came to the fore, and the event emerged as a stage where telecommunications chip makers such as Qualcomm, Arm, and Intel unveil semiconductors. From 2024 to 2025, generative AI dramatically raised the importance of infrastructure and chips, and this year, under the theme "The IQ Era," the event has established itself as an exhibition that goes beyond chip design to showcase a wide range of semiconductor technologies, including semiconductor materials, glass substrates, and next-generation packaging. In particular, as CES runs up against the limits of its consumer-goods focus, MWC has enjoyed a spillover benefit.
Major domestic AI semiconductor companies such as FuriosaAI, Rebellions, DeepX, and Mobilint have set up their own booths, and MangoBoost CEO Kim Jang-woo and Rebellions CTO Oh Jin-wook also gave presentations on stage. Although not an AI semiconductor company itself, the firm showing the most notable results in the related industry is Panmnesia, an AI infrastructure interconnect solution company.
Panmnesia collaborates with data centers as well as global AI semiconductor companies
Today, every AI data center connects tens of thousands of graphics processing units (GPUs) for parallel computing, operating them as a single massive computer. Nvidia calls this "accelerated computing." Nvidia GPUs are interconnected with its proprietary NVLink technology, while CPUs, memory, and storage devices are connected via the PCIe standard. When devices are linked through network switches, latency rises and data throughput is capped by the network's communication speed. So even though raw computational performance is high, the connection standards impose structural performance limits and bottlenecks.
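The bottleneck described above can be illustrated with a rough back-of-envelope model. All figures in the sketch below are illustrative assumptions, not measured values or vendor specifications: even when GPUs compute quickly, the time spent exchanging data over the interconnect caps the overall step time.

```python
# Illustrative model of an interconnect bottleneck in distributed AI training.
# Every number here is an assumption for the sketch, not a real specification.

def step_time(flops_per_step, compute_tflops, bytes_exchanged, link_gbps, latency_us):
    """Time for one distributed step: compute time plus communication time."""
    compute_s = flops_per_step / (compute_tflops * 1e12)
    comm_s = latency_us * 1e-6 + bytes_exchanged / (link_gbps / 8 * 1e9)
    return compute_s + comm_s

# Identical compute load, but a slow link versus a fast, low-latency link.
slow = step_time(2e12, compute_tflops=100, bytes_exchanged=1e9, link_gbps=100, latency_us=10)
fast = step_time(2e12, compute_tflops=100, bytes_exchanged=1e9, link_gbps=800, latency_us=2)

# With the slow link, communication dominates the step; with the fast link,
# the same hardware spends most of its time computing.
assert slow > fast
```

Speeding up the link, which is exactly what interconnect technologies such as NVLink and CXL target, shortens the step even though the GPUs themselves are unchanged.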
In November last year, Panmnesia announced the industry’s first PCIe 6.4/CXL 3.2 fabric switch silicon supporting port-based routing / Source = Panmnesia
Panmnesia’s PCIe 6.4/CXL 3.2 switch and PCIe 6.4/CXL 3.2 controller, launched in January this year, offer a solution to inter-server connection latency. Founded in August 2022, Panmnesia develops interconnect technologies that link all processing units, including CPUs, GPUs, and NPUs, at the hardware level. In particular, because it connects devices through CXL-based port switches rather than network switches, it can link devices from different manufacturers as if they were a single unit.
From left: Lee Jae-shin, Head of Global Business Development at SKT; Cho Yong-jin, CPO of Panmnesia; Jung Myung-su, CEO of Panmnesia; Jung Seok-geun, Head of SKT AI CIC; and Jung Min-young, Head of SKT AIDC Solutions / Source = Panmnesia
Panmnesia is leveraging this technology to collaborate with SK Telecom in the area of data center architecture. In August last year, SK Telecom held a groundbreaking ceremony for “SK AI Data Center Ulsan” and plans to invest more than KRW 7 trillion to build a 103-megawatt hyperscale data center—with an initial 40-megawatt phase by 2027 and full completion by 2029. Approximately 60,000 GPUs will be installed in this data center, which is expected to become Korea’s first and largest AI-dedicated data center. Panmnesia’s CXL-based solutions are planned to be introduced for inter-device connections within this data center.
Overview of the CXL-based AI rack to be built by SK Telecom and Panmnesia / Source = Panmnesia
This collaboration brings significant benefits to both parties. SK Telecom can further raise the utilization rate of its AI data center and the return on its infrastructure investment. As competitiveness in today’s AI industry hinges on processing computations faster and more efficiently, SKT’s AI data center could become a globally watched case. Panmnesia, by applying its technology in practice and securing a critical deployment reference in a large-scale data center, lays the groundwork for future contracts with other AI data centers.
In almost every sector, including IT, operators prefer proven, quantitatively assessable solutions to highly experimental new technologies. For new-technology companies like Panmnesia, winning business is extremely difficult without successful deployment references, yet SK Telecom has committed to full-scale collaboration. After validating GPU and memory utilization, latency, throughput, and other metrics in server environments, the two companies plan to unveil an architecture for next-generation AI data centers within this year. The finalized architecture will go through demonstration phases at large AI data centers before entering commercial operation.
Reaching beyond AI data centers into next-generation semiconductor markets such as RISC-V
The following day, March 5, Panmnesia signed a strategic partnership with Openchip, a European designer of accelerator chips for high-performance computing applications. Openchip is a startup jointly established by GTD, a European application and system design and integration company, and the Barcelona Supercomputing Center (BSC). It is developing full-stack processors based on RISC-V (Reduced Instruction Set Computer, Five), an open instruction set architecture that anyone can implement and sell without license fees.
From left: Cho Yong-jin, CPO of Panmnesia; Jung Myung-su, CEO of Panmnesia; Cesc Guim, CEO of Openchip; and Gaspar Mora, CTO of Openchip / Source = Panmnesia
Existing x86 and Arm architectures are owned by specific companies, Intel and Arm, and users must pay license fees to use them. RISC-V, by contrast, can be used by anyone free of charge, enabling customized designs and unrestricted modification and resale. In particular, vector software does not need to hard-code the vector length: RISC-V's vector extension adapts code automatically to the hardware's register width. The drawback is that RISC-V design engineers are scarce, which makes design costs very high, and the technology itself is demanding.
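The vector-length point refers to the RISC-V vector extension's "vector-length agnostic" programming model: instead of hard-coding a register width, code asks the hardware each pass how many elements it can process (the `vsetvl` instruction). A minimal Python sketch of that strip-mining pattern follows; the `hardware_vl` parameter and the `vsetvl` helper are stand-ins for real hardware behavior, assumed here for illustration.

```python
# Sketch of RISC-V-style vector-length agnostic (VLA) strip-mining.
# `hardware_vl` stands in for the vector register width a real chip would
# report via the vsetvl instruction; it is an assumption of this sketch.

def vsetvl(requested, hardware_vl):
    """Return how many elements the 'hardware' will process in this pass."""
    return min(requested, hardware_vl)

def vector_add(a, b, hardware_vl):
    """Add two lists chunk by chunk, never hard-coding the vector length."""
    out, i = [], 0
    while i < len(a):
        vl = vsetvl(len(a) - i, hardware_vl)  # ask hardware for this pass's width
        out.extend(x + y for x, y in zip(a[i:i + vl], b[i:i + vl]))
        i += vl
    return out

# The same loop runs unchanged on "hardware" with different vector widths.
a, b = list(range(10)), list(range(10))
assert vector_add(a, b, hardware_vl=4) == vector_add(a, b, hardware_vl=128)
```

The practical upshot is that one binary can run on a narrow embedded core and a wide HPC vector unit alike, which is part of RISC-V's appeal for the supercomputing work described below.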
As its name suggests, Openchip is expanding its business across Europe. More than 300 employees work in Spain, where its headquarters is located, as well as in Italy, Poland, Belgium, France, Germany, and Ireland. Its project has been recognized by the European Commission as an Important Project of Common European Interest (IPCEI) and is closely linked to hardware and software independence across Europe and to achieving “sovereign AI” in the semiconductor sector.
Europe’s DARE (Digital Autonomy with RISC-V in Europe) is a large-scale project in which 38 technology companies and research institutions jointly develop high-performance chips for European supercomputers and AI / Source = Jülich Supercomputing Centre
For Panmnesia, which needs to collaborate with as many hardware companies as possible, cooperation with Openchip is highly meaningful. In the DARE (Digital Autonomy with RISC-V in Europe) project, in which 38 European technology companies and research institutions develop semiconductors for supercomputers and high-performance computing, Openchip is responsible for developing the RISC-V-based core processing unit, the vector accelerator (VEC). In addition, because BSC is one of Openchip's founding organizations, its technologies have considerable potential for application to supercomputers. Working with Openchip effectively means partnering with a key player at the core of Europe's home-built supercomputers.
Openchip’s RISC-V-based product roadmap / Source = Openchip
Openchip is also developing, in partnership with Japan's NEC, a RISC-V-based, PCIe-attached vector processing accelerator called the VPU. Plugged into an existing server, it boosts vector computing performance using RISC-V. Because it is RISC-V-based, however, its compatibility with existing x86-based systems is limited. Here one can foresee attempts to address scalability by incorporating Panmnesia's technology: while RISC-V's freely configurable design is an advantage, it is criticized for software fragmentation and for an inconvenient ecosystem and poor scalability. In this respect, Panmnesia's port-based switch could become a master key.
Panmnesia achieves strong results at MWC, likely to attract global attention
The key issue in the AI semiconductor industry of late has been securing deployment references. As the AI sector develops at breakneck speed, operating with as few missteps as possible has become important, and adopting proven infrastructure is a way to avoid risk. Many hyperscalers and supercomputing centers prioritize Nvidia products precisely because they have been validated over a long period. To help domestic products establish their initial deployment references, nearly every country, including Korea, is funding validation with public money or in-house infrastructure.
Through this MWC, Panmnesia has not only secured a collaboration case with a major Korean conglomerate but also established a strategic partnership with Openchip, a central player in Europe’s technological independence. This suggests that Panmnesia’s technologies and products are beginning to be regarded by global companies as assets they must secure early. From here, the remaining task is to successfully complete multiple deployment cases and, based on these, to create even more references.
Reporter Nam Si-hyun, IT Donga (sh@itdonga.com)
ⓒ dongA.com. All rights reserved. Reproduction, redistribution, or use for AI training prohibited.