In response to Anthropic’s latest large-scale AI model “Mythos,” OpenAI has unveiled a security-specialized AI model. As security-focused AI models arrive in quick succession, governments in major countries including Korea, the United States, and the United Kingdom are increasingly concerned that such AI could be misused for hacking, putting financial and information technology (IT) systems at risk. The worry is that instead of strengthening the “shield,” these models could merely sharpen the “spear” pointed at cyber systems.
On the 14th (local time), OpenAI first released “GPT-5.4-Cyber,” a model specialized in detecting and responding to software (SW) security vulnerabilities, to a verified expert group. This model identifies security loopholes using only software executables, without the “source code,” which is the blueprint of a program. It is akin to identifying a problem with the engine valves based solely on external noise and vibration, without opening the car’s hood.
However, OpenAI has strictly limited distribution in light of the potential for misuse. The company will initially provide the model only to several hundred top-tier customers participating in its cyber security research support program “Trustworthy Access for Cyber (TAC),” launched in February. It then plans to expand access to several thousand users within weeks, subject to identity verification and continuous monitoring.
Anthropic had previously taken preemptive measures over similar concerns. After Mythos, an AI model with high-performance security capabilities, swiftly uncovered a flaw that had lain dormant for 27 years (since 1999) in “OpenBSD,” an operating system (OS) known for its ironclad security, the company restricted provision of the model to only 12 major big tech partner firms.
The security threat landscape is already worsening under the influence of AI. According to the global security metrics site “Zero Day Clock,” the time between disclosure of a vulnerability and actual hacking attacks plunged from more than two years in 2018 to just 23 days last year.
The emergence of high-performance AI that is reshaping the cyber security landscape has placed governments under urgent pressure. Alarmed by the possibility that even physically isolated “closed networks” in the financial sector could be breached, U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell recently convened the chief executive officers (CEOs) of major banks and ordered an emergency review of security networks. In the United Kingdom, the central bank (the Bank of England), the Financial Conduct Authority (FCA), and the National Cyber Security Centre (NCSC) are jointly conducting an in-depth assessment of Mythos’s impact on the financial sector.
South Korea has also entered an emergency response posture. On the morning of the 15th, the Ministry of Science and ICT held a meeting with CEOs of major information security companies that employ white-hat hackers, and in the afternoon urgently convened chief information security officers (CISOs) from 40 major companies. The discussions focused on reviewing joint public-private security readiness and exploring ways to automate defense systems in anticipation of a phase in which AI not only detects vulnerabilities but also autonomously designs hacking scenarios.
Baek Young-hoon, Deputy Prime Minister and Minister of Science and ICT, stated on the 14th, “High-performance AI security services like Mythos are both an opportunity to dramatically raise security standards and, if misused, a serious threat,” adding, “We must enhance our security capabilities while ensuring that domestic companies and infrastructure are not exposed to such threats.”
ⓒ dongA.com. All rights reserved. Reproduction, redistribution, or use for AI training prohibited.