AI self-verifies calculations to reduce hallucinations
Strong at handling the structure of Korean logical discourse
Kakao has unveiled a new artificial intelligence (AI) model that can handle everything from light everyday conversation to complex problems requiring logical reasoning, all within a single model.
According to Kakao on the 5th, the newly self-developed AI model “Kanana-v-4b-Hybrid” is characterized by a human-like process in which it synthesizes information, performs calculations, and then verifies its own results. Rather than simply converting images into text or describing them, the model minimizes hallucinations by running its own self-check process. A Kakao official explained, “For tasks such as receipts with complex tables and math problems, it significantly reduced calculation errors and omitted conditions, greatly improving accuracy.”
In particular, it has demonstrated competitive performance in Korean-language logical reasoning. Existing global models have shown limitations, losing context and logical coherence because they translate Korean questions into English for reasoning and then translate the answers back into Korean. By contrast, this model has been trained to understand and reason directly in Korean, in the form the question is asked. As a result, it scored 92.8 points on “KoNET,” an AI academic achievement benchmark based on the Korean education system.
Kim Byunghak, who leads performance for Kakao’s Kanana, said, “By developing a proprietary AI model specialized for Korean, we will strengthen our competitiveness on the global stage and continue to play a leading role in advancing the domestic AI ecosystem.”
Jeon Hye-jin
AI-translated with ChatGPT. Provided as is; original Korean text prevails.
ⓒ dongA.com. All rights reserved. Reproduction, redistribution, or use for AI training prohibited.