KAYTUS Brings the Latest V2 Series Server Solutions for Emerging LLM/GAI Applications to AI EXPO KOREA 2024


SEOUL, South Korea -- Businesswire -- KAYTUS, a leading IT infrastructure provider, will participate in AI EXPO KOREA 2024 as a Gold Sponsor, bringing its latest V2 series servers developed for emerging application scenarios such as cloud, AI, and big data, which improve performance per watt by more than 35%. KAYTUS offers full-stack AI solutions for popular LLM and GAI application scenarios, creating a self-adaptive, intelligent infrastructure for users.

AI EXPO KOREA, themed "Quantum Jump," brings together over 35,000 industry experts from approximately 300 organizations to demonstrate, discuss, and promote the development and application of cutting-edge technologies. Deawon CTS and Etevers eBT will join KAYTUS to showcase more agile, cutting-edge product solutions.

Emerging Large Model and GAI Applications Create the Need for a New Infrastructure

As generative AI (GAI) booms, large models such as GPT, LLaMA, Falcon, and ChatGLM are driving gains in social productivity as well as the transformation and upgrading of traditional industries. They not only increase the demand for computing power but also expose the issue of low computing efficiency. For instance, the GPT-3 large model achieves a computing efficiency of only 21.3% when training on its GPU clusters, with an energy consumption of up to 284,000 kWh. As the power consumption of AI computing chips rises to as much as 1,000 W and computing density continues to climb, improving computing efficiency and lowering energy consumption have become challenges that data centers must address.

KAYTUS believes that higher computing efficiency comes from improvements in both measured performance and resource utilization, with servers' software and hardware optimized collaboratively through application-oriented systematic design. The latest KAYTUS V2 series servers embrace diverse and heterogeneous computing, collaboratively optimizing software and hardware for real application scenarios to achieve higher computing and energy efficiency, improving performance per watt by more than 35%.

Full-Stack AI Solutions for Emerging AI Application Scenarios

KAYTUS provides full-stack AI solutions, covering cluster environment building, computing power scheduling, and large-model application development, to meet the surging demand for computing power in LLM and GAI application scenarios and to help users build large-model infrastructure.

MotusAI, the KAYTUS computing power scheduling platform, enables one-stop delivery for AI model development and deployment. By systematically optimizing resource usage and scheduling, training process assurance, and algorithm and application management in large-model training, it provides fault tolerance for training tasks, ensuring prolonged and continuous training. KAYTUS offers a diverse range of AI servers with industry-leading performance, increasing computing efficiency to 54% when training large models with thousands of billions of parameters and reducing training time by one week compared to the industry average. AI inference capabilities are improved by 30%, maximizing the utilization of computing power.

· KR6288V2, featuring 8 GPUs in a golden 6U space, allows users to achieve excellent performance and maximum energy efficiency. It is suitable for a variety of applications in large data centers, including large model training, NLP, recommendation, AIGC, and AI4Science.

· KR4268V2 is one of the most flexible AI servers in the industry, supporting 100+ configurations. It features 10 DS PCIe GPUs in a 4U space and is suitable for complex application scenarios such as deep learning, metaverse, AIGC, and AI+Science.

A Complete Family of Liquid Cooling Servers for Continuous Improvement in System Energy Efficiency

The complete family of KAYTUS V2 servers, including general-purpose servers, high-density servers, AI servers, and rack servers, supports cold-plate liquid cooling. It features 1,000 W single-chip cooling, liquid cooling of all key components, high-density deployment, and a PUE of nearly 1.0.

In addition, the KAYTUS All Liquid Cooling Cabinet combines cold plates with a liquid-cooled rear door that fully leverages natural cooling, truly enabling "air-conditioning-free operation" and achieving a PUE as low as 1.05. It supports a power density of 100 kW per cabinet, more than 10 times that of traditional data centers, while increasing space utilization five- to tenfold.
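For context, power usage effectiveness (PUE) is the standard ratio of total facility power to IT equipment power, so the quoted figures imply only a few percent of cooling and power-delivery overhead; a minimal worked example, assuming the standard PUE definition:

\[
\mathrm{PUE} = \frac{P_{\text{facility}}}{P_{\text{IT}}}, \qquad \mathrm{PUE} = 1.05 \;\Rightarrow\; P_{\text{overhead}} = 0.05 \times P_{\text{IT}} \approx 5\ \text{kW per } 100\ \text{kW cabinet}.
\]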

Date: May 1 - May 3
Location: Hall D, COEX, Seoul
Booth: #11

Date: May 2
Location: Hall D, COEX, Seoul
Seminar: Generative AI and ChatGPT, the Game Changer of this era
Spokesperson: EJ YOO, GM of KAYTUS Korea

About KAYTUS

KAYTUS is a leading provider of IT infrastructure products and solutions, offering a range of cutting-edge, open, and environmentally friendly infrastructure products for cloud, AI, edge, and other emerging scenarios. With a customer-centric approach, KAYTUS flexibly responds to user needs through its agile business model. Learn more at KAYTUS.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20240422074003/en/

This news is a press release distributed by a company, institution, or organization via Newswire.

Source: KAYTUS

Press release distributed by the newswire service Newswire (www.newswire.co.kr).

Copyright © Newswire. Unauthorized reproduction and redistribution prohibited.
