Nvidia’s H100 GPU, NeMo Framework power LG’s latest AI model

September 24, 2024, 13:45

(Photo: Nvidia)
Nvidia Corp. announced on Tuesday that its H100 graphics processing unit (GPU) and NeMo Framework are being used in LG AI Research Institute’s latest generative AI model, EXAONE 3.0.

The model, which was released in August, is designed to push the boundaries of AI in both Korean and English language tasks.

According to Nvidia, the NeMo Framework provides an end-to-end solution for building and deploying generative AI models.

This allows users to train large language models (LLMs) quickly, customize them, and deploy solutions at various scales, ultimately reducing the time needed for model development.
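To give a sense of that workflow, the sketch below loads a pretrained checkpoint with NeMo and requests a short completion. It is a minimal illustration, not LG's actual pipeline: the checkpoint path my_llm.nemo is hypothetical, and the API shown (the Megatron GPT model class and its generate() helper) varies across NeMo versions.

```python
# Minimal NeMo inference sketch. The checkpoint path is hypothetical and
# the API shown (NeMo 1.x-style) may differ in other framework versions.
from pytorch_lightning import Trainer
from nemo.collections.nlp.models.language_modeling.megatron_gpt_model import (
    MegatronGPTModel,
)

# Restore a trained model from a .nemo checkpoint onto a single GPU.
trainer = Trainer(devices=1, accelerator="gpu")
model = MegatronGPTModel.restore_from("my_llm.nemo", trainer=trainer)

# Ask for a short completion; length_params bounds the generated tokens.
output = model.generate(
    inputs=["The NeMo Framework provides"],
    length_params={"max_length": 64, "min_length": 1},
)
print(output)
```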

Nvidia also highlighted that EXAONE 3.0 leverages its TensorRT-LLM software development kit (SDK), which accelerates and optimizes the inference performance of large language models on Nvidia's AI platform.

The SDK helps improve the inference efficiency of the latest LLMs, making them easier to adapt to diverse applications.
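For illustration, the sketch below uses TensorRT-LLM's high-level Python API: constructing the LLM object compiles the model into an optimized TensorRT engine, and generate() then runs GPU-accelerated inference. The model identifier is a placeholder, not the EXAONE 3.0 deployment, and the exact API surface depends on the TensorRT-LLM release.

```python
# Minimal TensorRT-LLM inference sketch. The model ID is a placeholder,
# and the high-level LLM API differs across TensorRT-LLM releases.
from tensorrt_llm import LLM, SamplingParams

# Building the LLM object compiles a TensorRT engine for the local GPU.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")  # placeholder model

# generate() runs batched, optimized inference on the compiled engine.
prompts = ["Explain what an optimized inference engine does."]
for out in llm.generate(prompts, SamplingParams(max_tokens=64)):
    print(out.outputs[0].text)
```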

EXAONE 3.0 has demonstrated superior benchmark performance in both Korean and English compared to open-source AI models of a similar scale, such as Meta’s Llama.

The model is freely available for research purposes.
