Anthropic co-founder and CEO Dario Amodei delivers a speech. ⓒGetty Images
Generative artificial intelligence (AI) company Anthropic has reportedly approached an annualized revenue run rate of USD 20 billion (approximately KRW 27 trillion). While revenue is surging on the back of rapid expansion in the enterprise AI market, a recent conflict with the U.S. Department of Defense (the Pentagon) over AI safety standards has emerged as a source of uncertainty for its long-term growth.
According to reports on the 4th by The Information, Bloomberg, and others, Anthropic’s revenue run rate is believed to have reached around USD 20 billion as of the first quarter of 2026, more than triple the level of a year earlier.
This growth trajectory is being driven by the expansion of the enterprise AI market. Anthropic’s AI model “Claude” is rapidly spreading among corporate customers via Amazon Web Services (AWS) and Google Cloud, and is noted in particular for its strengths in coding and data analysis.
Within the AI industry, assessments indicate that while OpenAI focuses on developing artificial general intelligence (AGI), Anthropic has moved quickly into corporate sectors such as finance and healthcare by emphasizing its “Constitutional AI” strategy centered on security and safety and its coding tool “Claude Code.”
● ‘AI safety principles’ at odds with the Pentagon
However, recent tensions with the U.S. Department of Defense have emerged as a new variable.
According to the Financial Times (FT), Anthropic’s efforts to pursue cloud computing and data analysis cooperation with the Pentagon have reached an impasse. At the core of the conflict is the scope of military applications for AI technology.
The Pentagon is reported to have demanded broad access to AI models to support military operations. Anthropic, by contrast, has maintained that under its “Constitutional AI” principles, its models cannot be used for direct targeting or operation of lethal weapons systems.
This conflict has drawn criticism in U.S. political circles. Some argue that despite receiving investment from Big Tech firms such as Amazon and Google, Anthropic is taking a passive stance on projects related to national security.
In response, Anthropic is reported to have stated that “unsafe military applications of AI could pose greater risks in the long term.”
● The AI industry’s dilemma: between ethics and growth

As AI technology expands into the national security sector, analysts note a growing number of cases in which the ethical standards of technology companies clash with the strategic demands of governments. Reuters has assessed that the current conflict illustrates the “dilemma between ethics and markets” facing AI companies.
Although Anthropic’s revenue is approaching USD 20 billion, there are concerns that forgoing the vast defense market could limit future increases in its corporate valuation.
In some parts of the market, there is also speculation that if Anthropic maintains its cautious stance on military applications, companies with relatively more flexible strategies, such as OpenAI or Palantir, could reap the resulting gains in Pentagon contracts.
With AI technology expanding into industries directly tied to national security, there are also projections that conflicts between companies’ ethical principles and governments’ strategic demands will become more frequent.
ⓒ dongA.com. All rights reserved. Reproduction, redistribution, or use for AI training prohibited.