

FriendlyAI Partners with Microsoft-Allied Cloud Firm Nebius for Global Expansion

Dong-A Ilbo | Updated 2025.11.12
Supplies inference acceleration technology for Nebius's AI cloud infrastructure, cutting GPU costs by up to 90%

FriendlyAI, an AI inference service company, announced a collaboration with the global AI cloud infrastructure company Nebius to deliver faster, more efficient AI services.

Through this collaboration, FriendlyAI's optimized inference technology will be integrated with Nebius's large-scale AI cloud infrastructure. Companies running customer support chatbots, coding assistants, and AI agent services on Nebius infrastructure can now access an inference environment with faster speeds, greater cost efficiency, and improved stability through FriendlyAI's API.

Nebius, FriendlyAI's partner in the deal, is a "neocloud" company that provides the full-stack infrastructure the AI industry requires. Headquartered in Amsterdam, Netherlands, and listed on NASDAQ, it currently supplies infrastructure for high-performance AI workloads across Europe, North America, and Israel. It recently secured a USD 19.4 billion AI computing partnership with Microsoft, establishing itself as a key supplier in the global AI infrastructure market.

FriendlyAI offers world-class AI inference acceleration and optimization built on proprietary technology. It can cut GPU costs, the biggest burden for companies developing and operating AI services, by up to 90%. Through proprietary techniques such as model- and infrastructure-level optimization and continuous batching, it also delivers more than double the inference speed. Its 99.99% uptime SLA guarantee is a further strength, highly regarded by large clients at a time when stable operation of hyperscale AI-dedicated infrastructure is becoming increasingly important.

In addition, FriendlyAI's inference platform supports more than 460,000 Hugging Face models, enabling rapid product launches and easy deployment across the entire cycle from prototype to large-scale operation.

Byung-Gon Jeon, CEO of FriendlyAI, stated, “Our goal is to make world-class AI inference technology easily accessible to all companies,” adding, “The combination of FriendlyAI's inference optimization technology and Nebius AI Cloud means that all customers can now deploy AI models with top-level latency, stability, and cost efficiency.”

With this collaboration, FriendlyAI is seen as having sustained the global expansion strategy it has pursued since early this year and reached another significant milestone. In January, the company drew industry attention as the first Korean company to supply an AI model deployment option to Hugging Face, the world's largest AI model platform. In May, it recruited Sang-Won Lee, who has a Silicon Valley exit to his name, as COO, and it has showcased its technology at numerous global AI conferences.

These moves have led to a series of major partnerships and investments at home and abroad. FriendlyAI is currently the official distribution partner for Exaone 4.0, the latest model from LG AI Research, and a member of the elite team in the government's K-AI model project. Notably, in August it secured an exceptional USD 20 million seed extension round with participation from a renowned Silicon Valley VC, another recognition of its growth potential in the global market.

Choi Yong-seok

AI-translated with ChatGPT. Provided as is; original Korean text prevails.