Amazon Web Services promises energy efficiency through cloud computing, generative AI

By Lee Jae-lim | May 16, 2024, 17:59

Korean clients using cloud computing solutions and AI chips from Amazon Web Services (AWS) can prioritize sustainability and cost optimization through generative AI, AWS said on Thursday.
Amazon Web Services Korea Director Ham Kee-ho speaks during his keynote speech at the AWS Summit Seoul 2024 held in southern Seoul's Coex on Thursday. [AWS]

Korean clients are particularly interested in reducing their carbon footprint, which can be achieved using the AWS Graviton processor, designed to offer a high-performance computing environment with an emphasis on energy efficiency.

“Korea has the second-largest user base within the Asia-Pacific region that utilizes Graviton,” AWS Korea Director Ham Kee-ho noted at the AWS Summit Seoul 2024 hosted in southern Seoul’s Coex. The annual event gathers IT experts and business leaders to showcase instances of clients utilizing AWS services.

Another area of interest for Korean companies is minimizing the cost of running AI workloads. Local demand for two of AWS's homegrown AI chips, AWS Trainium and AWS Inferentia — purpose-built for AI training and inference, respectively, and positioned as cost-efficient alternatives to graphics processing units (GPUs) — is rising swiftly, according to Ham.

“Domestic virtual human startup Klleon is a representative example of using Inferentia,” he said.

SK Telecom, a major Korean mobile carrier, is another AWS client currently utilizing Amazon Bedrock, a fully managed cloud service that offers a range of high-performing foundation models available for use through a unified application programming interface (API).

Using Bedrock, clients can easily develop AI applications by fine-tuning existing large language models (LLMs) with their own data and adding functions like retrieval-augmented generation (RAG) technology.
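As a rough illustration of the workflow described above, the sketch below assembles a simple RAG-style prompt from retrieved documents and builds the JSON request body for Bedrock's InvokeModel API. The model ID, document list, and helper names are assumptions for illustration, not details from the article, and the actual invocation requires AWS credentials.

```python
import json


def build_rag_prompt(question, retrieved_docs):
    """Prepend retrieved context to the user question (minimal RAG prompt)."""
    context = "\n".join(f"- {doc}" for doc in retrieved_docs)
    return (
        "Use the following context to answer the question.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )


def build_invoke_body(prompt, max_tokens=256):
    """JSON request body in the Anthropic Messages schema used on Bedrock."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })


def invoke_bedrock(prompt, model_id="anthropic.claude-3-haiku-20240307-v1:0"):
    """Call Bedrock's unified API; needs AWS credentials, so not run here."""
    import boto3  # AWS SDK for Python

    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(modelId=model_id,
                                   body=build_invoke_body(prompt))
    return json.loads(response["body"].read())
```

In a production setup, the retrieval step would query a vector store of company documents rather than a hard-coded list, which is the part RAG adds on top of a fine-tuned model.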

“We are developing telco-specific LLMs by fine-tuning existing LLMs and using RAG technology on Bedrock,” said Chung Suk-geun, head of SK Telecom’s global AI tech division. “We plan to launch the telco LLM and a personal AI assistant service based on this product within the second half of this year.”

BY LEE JAE-LIM [lee.jaelim@joongang.co.kr]

Copyright © Korea JoongAng Daily. Unauthorized reproduction and redistribution prohibited.