The controversy that erupted at the start of the year around the independent AI foundation model project is settling back onto a normal track. On the 20th, the Ministry of Science and ICT issued a press release announcing that Motif Technologies had been selected as an additional participating company in the independent AI foundation model project. The project, launched in April last year at the proposal of then-presidential candidate Lee Jae-myung, aims to let all citizens use AI of an advanced-country standard free of charge.
Motif Technologies has been selected in the additional call for the “Independent AI Foundation” project to build AI for all citizens / Source=Motif Technologies
The independent AI foundation project allocates KRW 62.8 billion to data, KRW 157.6 billion to GPU support, and KRW 25 billion to talent acquisition, with ▲ LG AI Research ▲ Upstage ▲ NC AI ▲ Naver Cloud ▲ SK Telecom participating as the initial consortia. The original plan was to run competitive evaluations every six months and select two final teams by 2027, but in an unusual move the Ministry of Science and ICT eliminated two teams (Naver Cloud and NC AI) rather than one in the first evaluation and decided to select one replacement team.
Two teams were eliminated, contrary to the initial plan, because each company interpreted the criteria for “independent AI” differently. NC AI was dropped on the basis of its evaluation scores, while Naver Cloud was eliminated, unusually, for failing to meet the requirement of building its own AI: its HyperCLOVA X Seed 32B Think model used the vision encoder from China’s Alibaba Qwen for visual recognition.
In the first round, Naver proposed an omni-model capable of comprehensively recognizing text, vision, audio, and more. Because its vision stack incorporated another company’s base technology, it was not recognized as independent / Source=IT Donga
A vision encoder is a component that converts images into a form that a computer can understand, and it is common to use a general-purpose encoder when building a large language model (LLM) based on open source. Naver also used Qwen’s encoder, but the Ministry of Science and ICT judged this to be inconsistent with the project’s independence requirement. The ministry then decided to offer a repechage opportunity to the initial participants, but no company applied. As a result, Motif Technologies and Trillion Labs entered the additional selection round, and Motif Technologies was ultimately chosen.
Competition becomes four-way with Motif’s entry; existing consortia also gain midstream members
Motif Technologies is a subsidiary of AI infrastructure specialist MOREH, founded in February 2025. The Motif-2.6B small language model it unveiled in August applied proprietary techniques for running large language models efficiently, including the PolyNorm activation function based on polynomial composition, a Korean-optimized tokenizer, and differential attention, which removes unnecessary noise by computing two different attention maps and subtracting one from the other. In benchmarks it showed an average performance improvement of about 87.2% over Gemma 1 2B, and 44.07% higher performance even compared with Gemma 2 2B.
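The “two maps, subtract” idea behind differential attention can be sketched in a few lines of NumPy. This is a minimal single-head illustration, not Motif’s actual implementation; the weight matrices and the scaling factor lam=0.5 are placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def differential_attention(x, w_q1, w_k1, w_q2, w_k2, w_v, lam=0.5):
    """Compute two separate attention maps and subtract one from the
    other, so that noise common to both maps cancels out."""
    d = w_q1.shape[1]
    a1 = softmax((x @ w_q1) @ (x @ w_k1).T / np.sqrt(d))  # first attention map
    a2 = softmax((x @ w_q2) @ (x @ w_k2).T / np.sqrt(d))  # second attention map
    return (a1 - lam * a2) @ (x @ w_v)  # differential map applied to values

rng = np.random.default_rng(0)
seq, d_model, d_head = 6, 16, 8
x = rng.normal(size=(seq, d_model))
ws = [rng.normal(size=(d_model, d_head)) * 0.1 for _ in range(5)]
out = differential_attention(x, *ws)
print(out.shape)  # (6, 8)
```

Because each softmax row sums to 1, the rows of the differential map sum to 1 − lam, which is what lets common-mode noise between the two maps cancel.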
A paper on group-wise differential attention published by Motif Technologies, containing foundational technology for making foundation models operate more efficiently / Source=Motif Technologies
In October, the company also introduced a new architecture called “Grouped Differential Attention,” which addresses an inefficiency in the transformer structure at the core of LLM design. Differential attention separates important information from meaningless information by splitting attention heads between the two roles, but that split must be 50:50, which reduces computational efficiency. Motif instead assigns a larger share of heads to extracting core information and fewer heads to the noise side. With the conventional 1:1 head allocation as the baseline, a 4:1 allocation improves performance by about 2.54% over the baseline model and markedly improves training stability.
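A rough NumPy sketch of that asymmetric grouping follows. This is a hypothetical simplification for illustration only (Motif’s paper defines the mechanism precisely): four signal heads each compute their own attention map, while one shared noise head’s map is subtracted from all of them.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def grouped_differential_attention(x, wq, wk, wv, ratio=(4, 1), lam=0.5):
    """Asymmetric head split: signal heads each get their own attention
    map; a single shared noise-head map is subtracted from all of them."""
    n_signal, n_noise = ratio
    assert wq.shape[0] == n_signal + n_noise
    d = wq.shape[-1]
    # one shared noise map for the whole group (index n_signal = the noise head)
    noise_map = softmax((x @ wq[n_signal]) @ (x @ wk[n_signal]).T / np.sqrt(d))
    heads = []
    for h in range(n_signal):
        sig_map = softmax((x @ wq[h]) @ (x @ wk[h]).T / np.sqrt(d))
        heads.append((sig_map - lam * noise_map) @ (x @ wv[h]))
    return np.concatenate(heads, axis=-1)

rng = np.random.default_rng(1)
seq, d_model, d_head = 5, 12, 4
wq = rng.normal(size=(5, d_model, d_head)) * 0.1  # 4 signal heads + 1 noise head
wk = rng.normal(size=(5, d_model, d_head)) * 0.1
wv = rng.normal(size=(5, d_model, d_head)) * 0.1
x = rng.normal(size=(seq, d_model))
out = grouped_differential_attention(x, wq, wk, wv)
print(out.shape)  # (5, 16)
```

Sharing the noise head across the group is where the efficiency gain comes from: only one "subtraction" map is computed per group instead of one per head.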
Artificial Analysis, one of the key benchmarks for global AI performance comparisons, has directly cited the Motif-2-12.7B model and highly rated its performance / Source=Artificial Analysis
The subsequently released Motif-2-12.7B model ranked first among Korean models on the intelligence index of Artificial Analysis and recorded the highest score among models in the 12B class. Motif is currently verifying its multimodal capabilities by developing proprietary models such as the Motif Image-6B image-generation model and the Motif-Video-1.9B video-generation model. It is understood that Motif’s selection was largely driven by its self-developed AI technology and multimodal capabilities.
The Motif Technologies consortium currently includes AI infrastructure specialist MOREH, data labeling specialist Crowdworks, AI-based design automation and synthetic data technology company NdotLight, robotics specialist XYZ, storage company FADU, HDC Labs, an affiliate of Hyundai Development Company, edtech firm Enuma Korea, Mathpresso, which operates the AI learning app “QANDA,” and automotive-electronics specialist Mobirus. Participating institutions include the Industry-Academia Cooperation Foundation of Seoul National University, the Korea Advanced Institute of Science and Technology (KAIST), the Industry-Academia Cooperation Foundation of Hanyang University, Samil PwC, the National Heritage Promotion Agency, Kyunghyang Shinmun, and Jeonbuk Technopark.
Additional participants are also anticipated to join the existing consortia of Upstage, SK Telecom, and LG AI Research. RLWRLD, which develops demonstration scenarios for a robotics foundation model, and AI semiconductor company HyperAccel announced their entry into the Upstage consortium after the first round of announcements. As midstream participation in consortia is allowed, the size of each consortium is likely to grow over time.
What tasks must be achieved in the second evaluation scheduled for August?
The original project goals are for the models to achieve at least 95% of the performance of the latest global AI models released within the preceding six months, and to expand from a large language model in the first phase to a multimodal model in the second and an action model in the third. In the first round, Upstage built an LLM with 100B (100 billion) parameters, LG AI Research one with 236B (236 billion) parameters, and SK Telecom one with 500B (500 billion) parameters. The second phase targets advanced LLMs with 200B–300B or more parameters, vision-language models, and vision-language-action models.
K-Exaone, developed by LG AI Research, uploaded to the open-source community “Hugging Face” / Source=Hugging Face
Motif Technologies, like the existing three consortia, will develop an independent AI foundation model from scratch and release it as open-source software that can be used for commercial purposes. Upstage’s SOLAR-Open 100B, LG AI Research’s K-Exaone, and SKT’s A.X-K1 are already available as open source on Hugging Face for anyone to download, and technical reports detailing their specific technologies have also been published.
The Motif Technologies consortium will be provided with 768 NVIDIA Blackwell B200 GPUs needed to build an independent AI foundation model, KRW 1.75 billion for its own data construction and processing, and KRW 10 billion for joint data purchasing and utilization. It will also be granted the right to use the “K-AI” (Korean Artificial Intelligence) enterprise designation.
Independent AI foundation project now a contest between two startups and two conglomerates – what lies ahead?
The first evaluation of the Independent AI Foundation project was completed in December last year, and the second evaluation will be held in August / Source=IT Donga
With Motif’s entry, the independent AI foundation project has shifted to a competitive structure of two startups and two large corporate groups. Motif, which is starting later than its rivals, will need a strong late push, but its entry also sends a clear message to the other three. If the first round was a contest over parameter counts, from the second evaluation onward the key will be making the architectures more efficient: rather than simply increasing data volume, the teams must find ways to raise actual performance. Only then can the project deliver sufficient efficiency relative to performance in future services.
From the second evaluation, participants will also need to demonstrate how they will implement data into vision, language, and action models with the third phase in mind. The first evaluation outlined the rough direction, but the second must prove feasibility. The independent AI foundation project, which became contentious with the departure of Naver Cloud and NC AI, is raising its curtain once again. Quiet support is in order for the fair competition that will unfold among Korea’s top AI talents.
IT Donga reporter Nam Si-hyun (sh@itdonga.com)
ⓒ dongA.com. All rights reserved. Reproduction, redistribution, or use for AI training prohibited.