
AI

HyperVisual AI Develops Next-Gen On-Device AI GPNPU

Dong-A Ilbo | Updated 2025.10.15
Achievements in Completing Next-Gen Processor HW/SW Architecture with Enhanced Development Convenience and Compatibility
HyperVisual AI Co., Ltd. (CEO Jung Sam-yoon) announced that it has completed the basic hardware and software architecture for its next-generation processor, the GPNPU (General-Purpose Neural Processing Unit), which combines the high-performance computing capability of a GPU (Graphics Processing Unit) with the power efficiency of an NPU (Neural Processing Unit), and that intellectual property (IP) development is now underway.

Image provided by HyperVisual AI
The core features of the GPNPU architecture are ease of development and broad compatibility. The GPNPU being developed by HyperVisual AI supports the same functions as the main function libraries of the NVIDIA CUDA environment, cuDNN (CUDA Deep Neural Network library) and cuBLAS (CUDA Basic Linear Algebra Subprograms). As a result, AI models developed with PyTorch in an existing CUDA environment can be ported to the GPNPU simply by adding the company's library code.
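To make the compatibility claim concrete: cuBLAS's sgemm routine computes C ← αAB + βC, and a drop-in replacement library must reproduce that calling convention exactly so existing code runs unchanged. The following is only a toy pure-Python illustration of what signature-level compatibility means, not HyperVisual AI's actual code:

```python
# Toy illustration (not the company's library): a GEMM that mirrors the
# cuBLAS sgemm convention C = alpha * (A @ B) + beta * C for row-major
# matrices. Matching such calling conventions is what lets CUDA-targeted
# code be retargeted to a compatible backend without rewriting callers.

def sgemm(alpha, a, b, beta, c):
    """Return alpha * (a @ b) + beta * c for nested-list matrices."""
    m, k, n = len(a), len(b), len(b[0])
    out = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            acc = sum(a[i][p] * b[p][j] for p in range(k))
            out[i][j] = alpha * acc + beta * c[i][j]
    return out

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
C = [[1.0, 1.0], [1.0, 1.0]]

result = sgemm(2.0, A, B, 1.0, C)  # 2*(A@B) + C
```

A real compatible library would of course implement this on the GPNPU's compute units rather than in Python; the point is that the caller-visible signature and semantics stay the same.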

The company also said it provides a unified compiler that allows coding for both the GPU and the NPU in the same environment, simplifying the complex model-porting process. This is expected to reduce the time and effort developers currently spend porting models in NPU environments.
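The idea behind a unified compiler can be sketched as lowering one framework-level description of a model to different instruction sets depending on the chosen target. Everything below is a hypothetical illustration; the instruction names and lowering table are invented, not HyperVisual AI's toolchain:

```python
# Hypothetical sketch of a "unified compiler": one source-level op sequence
# is lowered to different backend instructions depending on the target, so
# the developer writes the model once and selects GPU or NPU at compile
# time. All instruction names here are invented for illustration.

LOWERING = {
    "gpu": {"matmul": "cuda.gemm", "relu": "cuda.max0", "conv2d": "cuda.conv"},
    "npu": {"matmul": "npu.mac_array", "relu": "npu.act_lut", "conv2d": "npu.conv_engine"},
}

def compile_graph(ops, target):
    """Lower a list of framework-level ops to target-specific instructions."""
    table = LOWERING[target]
    return [table[op] for op in ops]

model_ops = ["conv2d", "relu", "matmul"]   # same source description...
gpu_prog = compile_graph(model_ops, "gpu")  # ...two different lowered programs
npu_prog = compile_graph(model_ops, "npu")
```

The benefit described in the article follows from this shape: because both targets consume the same source description, moving a model from GPU to NPU is a compile-time switch rather than a manual port.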

Developers can train and optimize models on high-performance NVIDIA GPU servers and easily transfer these models to the GPNPU processor on edge devices, enabling immediate on-device AI implementation.
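The train-on-server, run-on-edge flow amounts to fitting a model in one environment, serializing its parameters, and reloading them in another where inference must reproduce the same outputs. The sketch below illustrates this round trip with a tiny linear model and JSON; a real deployment would use a framework exporter (for example ONNX) and the GPNPU runtime rather than this toy hand-off:

```python
import json

# Conceptual sketch of the server-to-edge hand-off. A tiny linear model is
# trained by gradient descent, its weights are serialized, and the "edge"
# side reloads them and reproduces the same predictions.

def predict(weights, x):
    return sum(w * xi for w, xi in zip(weights, x))

# --- "server" side: fit y = 2*x0 + 3*x1 with per-sample gradient descent
data = [([1.0, 0.0], 2.0), ([0.0, 1.0], 3.0), ([1.0, 1.0], 5.0)]
weights = [0.0, 0.0]
for _ in range(200):
    for x, y in data:
        err = predict(weights, x) - y
        weights = [w - 0.1 * err * xi for w, xi in zip(weights, x)]

exported = json.dumps({"weights": weights})      # the hand-off artifact

# --- "edge" side: load the artifact and run inference on-device
edge_weights = json.loads(exported)["weights"]
edge_pred = predict(edge_weights, [2.0, 1.0])    # should be close to 7.0
```

The key property, which any exchange format must preserve, is that the edge-side model is numerically identical to the trained one.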

HyperVisual AI is also developing key features to enhance the performance of the GPNPU. Currently, it is developing a function to monitor the computational load of GPU and NPU processors in real-time and automatically distribute the workload. Once completed, this feature will allow the GPNPU to assign computations to the most efficient processor based on workload, thereby improving performance and power efficiency.
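One simple way to picture load-aware distribution is a greedy scheduler that sends each operation to whichever processor promises the earliest finish time, given its per-processor cost and the work already queued. This is an illustrative sketch with invented cost numbers, not HyperVisual AI's implementation:

```python
# Illustrative load-aware dispatch (not the company's implementation):
# each op goes to the processor with the earliest projected finish time,
# combining monitored load with a per-processor cost model. Costs are
# invented numbers for demonstration.

COST = {
    "gpu": {"matmul": 2.0, "conv2d": 3.0, "relu": 1.0},
    "npu": {"matmul": 1.0, "conv2d": 1.5, "relu": 2.0},
}

def schedule(ops):
    """Greedily assign ops to the processor that finishes each one soonest."""
    busy = {"gpu": 0.0, "npu": 0.0}          # monitored load per processor
    plan = []
    for op in ops:
        target = min(busy, key=lambda p: busy[p] + COST[p][op])
        busy[target] += COST[target][op]
        plan.append((op, target))
    return plan, busy

plan, busy = schedule(["conv2d", "matmul", "relu", "matmul"])
```

Note how the second matmul lands on the NPU even though the first went to the GPU: the decision depends on the load accumulated so far, which is the behavior the real-time monitoring feature is meant to enable.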

The GPNPU is expected to enhance the lifespan and value of AI devices by flexibly accommodating the varied computational patterns of rapidly advancing AI models and by supporting continuous AI model upgrades through firmware or software updates even after a device ships. HyperVisual AI has implemented demos running the vision AI models ViT (Vision Transformer) and YOLOv13, and plans to complete the GPNPU IP development by the end of the year.

Jung Sam-yoon, CEO of HyperVisual AI, stated, “Securing compatibility with existing AI model development environments is crucial for the adoption of a new processor, and the scalability of the compiler and AI stack is a key competitive factor.” He added, “We will provide a development environment in which models created in other DNN frameworks, such as ONNX (Open Neural Network Exchange) and TensorFlow, are interoperable, and we will support the full on-device AI stack, including GPNPU hardware, domain-specific AI model optimization, the compiler, libraries, and the AI stack, offering unprecedented freedom and efficiency to on-device AI developers.”

Meanwhile, HyperVisual AI joined the AI Convergence and Physical AI Alliance led by KGround Ventures (CEO Cho Nam-hoon), a venture capital firm specializing in deep-tech commercialization, last July, and is pursuing joint IP development for Autonomous Mobile Robots (AMR) under an agreement with Tira Robotics (CEO Kim Dong-kyung) and WIM (CEO Jeon Woo-jin).

Choi Yong-seok

AI-translated with ChatGPT. Provided as is; original Korean text prevails.