Physical AI / MWC26

‘AI With Glasses’ Dominates MWC Wearables Battle

Dong-A Ilbo | Updated 2026.03.04
On the 3rd (local time) at the MWC26 exhibition hall in Barcelona, Spain, a test was conducted of Alibaba’s “Qwen AI Glass,” smart glasses loaded with a lightweight, optimized version of its hyper-scale AI model “Qwen 3.5.” Through displays in both lenses, users could receive Korean translation and navigation guidance. At the Meta smart glasses booth right next door (small photo on the right), there was also a line of people waiting more than 30 minutes just to reach the entrance, with the two queues almost touching like a frontline in a US-China smart glasses war. Barcelona = Staff Reporter Kim Jae-hyung monami@donga.com
 


On the 3rd (local time), at the “Mobile World Congress 2026 (MWC26)” exhibition venue in Barcelona, Spain, two particularly long lines were stretched out side by side in the outdoor space between Halls 3 and 5. One was for the experience booth of “Qwen AI Glass,” smart glasses equipped with a lightweight, optimized version of Chinese company Alibaba’s hyper-scale artificial intelligence (AI) model “Qwen 3.5.” Right next to it was Meta’s smart glasses booth. The two queues, each requiring more than 30 minutes of waiting just to reach the entrance, nearly touched like a frontline in a US-China smart glasses war.

● Hands-on with Qwen AI Glass: “Dedicated chipset generates the ‘optimal answer’”

Qwen AI Glass did not look much different from ordinary acetate-frame glasses. However, displays were built into both lenses, and a battery capable of operating for up to seven hours was hidden inside the temples. When tested, it could immediately capture objects in front of the user, and a navigation screen unfolded above the field of view like an automotive head-up display (HUD). When the translation app was launched, the staff member’s explanation in Chinese was converted in real time into Korean subtitles displayed on the lenses.

At its core was a dedicated Qualcomm Snapdragon chipset for augmented reality (AR) and wearables, carrying a neural processing unit (NPU), embedded in the frame. The chipset performs primary processing of voice and visual information and, when necessary, routes it through a paired smartphone to generate the optimal answer using a retrieval-augmented generation (RAG) method. A Qwen representative said, “Unlike Meta Ray-Ban glasses, which remain at simple question-and-answer interactions, Qwen AI Glass is an active agent that can independently carry out tasks such as payments and restaurant reservations,” adding, “We plan to launch first in the Asian market and then seek entry into Europe.”
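
To make the division of labor concrete, the flow described by the booth staff can be sketched roughly as follows. Every function name, threshold, and canned reply below is an illustrative assumption for this sketch, not actual Qwen or Qualcomm code.

```python
# Minimal sketch of the on-glasses / on-phone split described above:
# the frame's NPU does primary processing, and low-confidence queries
# are offloaded to the paired smartphone for retrieval-augmented
# generation (RAG). All names and values here are hypothetical.
from dataclasses import dataclass


@dataclass
class Perception:
    transcript: str        # speech transcribed on the frame's NPU
    scene_labels: list     # objects detected in the camera frame
    confidence: float      # how sure the on-device model is


def npu_primary_processing(audio: bytes, image: bytes) -> Perception:
    """Stand-in for the first-stage processing done on the glasses."""
    # A real implementation would run quantized speech/vision models here.
    return Perception(transcript="What is this?",
                      scene_labels=["beverage container"],
                      confidence=0.42)


def on_device_answer(p: Perception) -> str:
    """Answer simple queries without leaving the glasses."""
    return f"I can see: {', '.join(p.scene_labels)}."


def phone_rag_answer(p: Perception) -> str:
    """Offload to the paired phone: retrieve external context, then generate."""
    retrieved = ["Highly nutritious drinks are often sold in such containers."]
    prompt = f"Question: {p.transcript}\nSeen: {p.scene_labels}\nContext: {retrieved}"
    # A real system would send this prompt to a generative model; the reply
    # below is simply a canned answer for the sketch.
    return "A container for a highly nutritious drink."


def route(audio: bytes, image: bytes, threshold: float = 0.7) -> str:
    """Keep confident answers on the glasses, offload the rest to the phone."""
    p = npu_primary_processing(audio, image)
    return on_device_answer(p) if p.confidence >= threshold else phone_rag_answer(p)


print(route(b"<audio>", b"<image>"))
```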

Qwen is not an unfamiliar name in Korea. It was the model at the center of controversy when Naver recently applied for the “National Flagship AI (From Scratch)” project. To train visual language models (VLMs) and audio language models (ALMs), which serve as the eyes and ears of AI, an encoder is needed that converts external information into a form (tokens) that the AI can understand. It came to light that Naver had used Qwen’s open-source model for this encoder, which led to its elimination in the first round of evaluation. Seen from another angle, this is evidence that Qwen’s technological capabilities are significant enough that even Korea’s top information technology (IT) company chose to rely on them.
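
For readers unfamiliar with the term, the “encoder” at issue can be pictured as the step that turns raw pixels or audio into a sequence of token vectors a language model can attend to. The toy example below illustrates the idea only; the patch size, dimensions, and random projection are arbitrary stand-ins, not how Qwen’s (or Naver’s) trained encoder actually works.

```python
# Toy illustration of an encoder's role: convert an image into token
# embeddings a language model can consume. Real VLM encoders are large
# trained networks; the random projection here is only a placeholder.
import numpy as np


def image_to_tokens(image: np.ndarray, patch: int = 16, dim: int = 64) -> np.ndarray:
    """Split an image into patches and project each patch to a 'token' vector."""
    h, w, c = image.shape
    rng = np.random.default_rng(0)
    projection = rng.standard_normal((patch * patch * c, dim)) * 0.02  # untrained stand-in
    tokens = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            flat = image[y:y + patch, x:x + patch].reshape(-1)
            tokens.append(flat @ projection)
    return np.stack(tokens)  # (num_patches, dim), fed to the language model


image = np.random.rand(224, 224, 3)
print(image_to_tokens(image).shape)  # (196, 64): 14 x 14 patches of 16 x 16 pixels
```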

● Glasses, phones, robots… an arena for physical AI

In fact, this year’s MWC exhibition hall as a whole served as an arena for “physical AI,” in which AI steps off the screen and takes on a physical form. This is the result of hardware fusing in earnest with visual language models (VLMs), which convert visual information into language, and vision-language-action (VLA) technologies, which connect that language to physical behavior.
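
In rough terms, the chain the exhibitors are building looks like the sketch below: a VLM step that puts what the camera sees into words, followed by a VLA step that maps those words to a physical command. The functions and the action format are invented for illustration and do not correspond to any exhibitor’s actual stack.

```python
# Hedged sketch of a VLM -> VLA chain: perception is verbalized, then the
# verbal description is mapped to an actuator command. Purely illustrative.

def vlm_describe(frame_id: str) -> str:
    """Vision-language model: visual input -> natural-language description."""
    return "a visitor waving at the camera"   # canned output for the sketch


def vla_decide(description: str) -> dict:
    """Vision-language-action model: description -> physical action command."""
    if "waving" in description:
        return {"actuator": "camera_gimbal", "motion": "nod", "repeat": 2}
    return {"actuator": "none", "motion": "hold"}


action = vla_decide(vlm_describe("frame_001"))
print(action)  # e.g. the kind of 'nodding' interaction described below
```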

At the Qualcomm booth, when a visitor wearing smart glasses captured a beverage container in front of them and asked about it, an immediate reply came back: “A container for a highly nutritious drink.” This was enabled by a RAG structure in which the dedicated NPU chipset embedded in the frame processes audiovisual information and then, via a smartphone, retrieves external information and generates the answer.

The pace of integration among Chinese companies is also formidable. Honor’s “robot phone” uses an ultra-compact rear gimbal camera to autonomously track visitors’ movements and record them from the optimal angle. When asked a question, its internal VLM recognizes the situation and answers, and it even performs physical interaction, nodding with the camera joint. ZTE also showcased an advanced humanoid robot.

● K-AI mounts counterattack with homegrown AI brain ‘EXAONE’

To counter the intensifying global AI offensive, Korean companies are mounting a counterattack with homegrown AI brains. LG AI Research announced that it will unveil “EXAONE 4.5,” a next-generation foundation model that combines visual intelligence for understanding the real world, within the first half of the year (January–June). EXAONE 4.5 is regarded as a key cornerstone that will go beyond language models to serve as the brain of a “Korean-style humanoid” and a self-evolving, multi-step, execution-type AI.

Co-Chief Research Officer Lim Woo-hyung said, “We are focusing on creating tangible value in the physical world of industrial sites and customers,” emphasizing, “We will prove top-tier performance and infrastructure efficiency.”

Barcelona = Staff Reporter Kim Jae-hyung monami@donga.com

AI-translated with ChatGPT. Provided as is; original Korean text prevails.