Generative AI Company FriendliAI Releases Public Beta of PeriFlow Cloud

July 21, 2023, 09:07

This news is a press release distributed through Newswire by a company, institution, or organization.

FriendliAI, a leading generative AI engine company, is proud to announce the public beta release of PeriFlow Cloud, a managed cloud platform for running PeriFlow, the company's generative AI serving engine.

With its innovative approach specifically tailored to large language models (LLMs), the PeriFlow engine achieves remarkable improvements in throughput while maintaining low latency. This cutting-edge engine is built upon FriendliAI’s groundbreaking batching and scheduling techniques, which are protected by patents in the United States and Korea, including U.S. Patent No. 11,514,370, U.S. Patent No. 11,442,775, Korean Patent No. 10-2498595, and Korean Patent No. 10-2479264.

PeriFlow is fast and versatile, attracting a growing number of companies that build their own LLMs, whether by pretraining from scratch or by fine-tuning open-source models. It supports a broad range of LLMs, including GPT, GPT-J, GPT-NeoX, MPT, LLaMA, Dolly, OPT, BLOOM, T5, FLAN, UL2, and more, and offers diverse decoding options such as greedy decoding, top-k, top-p, beam search, and stochastic beam search. It also supports multiple data types, including fp32, fp16, bf16, and int8, letting users tune the balance between precision and speed.
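Among the decoding options listed above, top-k and top-p (nucleus) sampling are standard, well-documented techniques. The following is a minimal, engine-agnostic sketch of how they filter a next-token distribution before sampling; it illustrates the general algorithms only, not PeriFlow's internal implementation.

```python
import math
import random

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_top_k_top_p(logits, k=0, p=1.0, rng=random.random):
    """Sample a token index, restricted by top-k and top-p filtering.

    k=0 disables top-k; p=1.0 disables top-p. k=1 reduces to greedy
    decoding, since only the single most likely token survives.
    """
    probs = softmax(logits)
    # Sort token indices by probability, highest first.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    if k > 0:
        order = order[:k]
    # Top-p (nucleus): keep the smallest prefix whose cumulative
    # probability reaches p.
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= p:
            break
    # Renormalize over the surviving tokens and sample one.
    total = sum(probs[i] for i in kept)
    r = rng() * total
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

With k=1 this returns the argmax (greedy decoding); raising k or p widens the candidate set and makes outputs more diverse.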

FriendliAI also offers PeriFlow as a container solution, named PeriFlow Container, which has gained considerable traction among companies for LLM serving. For instance, Scatter Lab, a prominent social chatbot company in Korea, serves its high user traffic by running multiple LLMs, including the popular Luda 2.0, on PeriFlow Container. As a result, Scatter Lab has cut its serving infrastructure costs by a remarkable 50%.

The Benefits of PeriFlow Cloud

PeriFlow Cloud simplifies the adoption of PeriFlow for organizations of any scale. With PeriFlow Cloud, users enjoy exceptional speed at low cost (70–90% GPU savings) for LLM serving, without the hassle of setting up and managing cloud resources.

Through PeriFlow Cloud, users can centrally manage every deployed LLM from anywhere: they can effortlessly upload model checkpoints, deploy models, and instantly send inference requests. Comprehensive monitoring tools let users track events, errors, and performance metrics while interactively testing deployed LLMs in the playground. The platform dynamically handles performance and fault issues and auto-scales based on traffic patterns, freeing users to focus on creating LLMs and driving innovation.
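The workflow described above ends with sending inference requests to a deployed model. As a rough sketch of what such a request might carry, the snippet below builds a JSON completion request; the field names (prompt, max_tokens, top_k, top_p) and the endpoint shape are illustrative assumptions, not the documented PeriFlow Cloud API.

```python
import json

def build_completion_request(prompt, max_tokens=64, top_k=50, top_p=0.9):
    """Build a JSON body for a text-completion call to a deployed LLM.

    NOTE: these field names are hypothetical, chosen to mirror the
    decoding options PeriFlow exposes; the real API may differ.
    """
    body = {
        "prompt": prompt,
        "max_tokens": max_tokens,  # cap on generated tokens
        "top_k": top_k,            # keep only the k most likely tokens per step
        "top_p": top_p,            # nucleus sampling threshold
    }
    return json.dumps(body)
```

The serialized body would then be POSTed to the deployment's inference URL (omitted here), for example with urllib.request or the requests library.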

Byung-Gon Chun, Founder & CEO of FriendliAI, emphasizes the significance of efficient LLM serving, stating “Generative AI is revolutionizing our lives, enabling more creative, intelligent, and productive services. Many organizations are now training their own models, but they have yet to fully realize how costly and painful it is to serve these models at scale for a large user base.”

“We’re due for a significant transformation in the way we serve LLMs to empower organizations to fully harness the potential of their LLMs,” Chun adds. “PeriFlow Cloud is an instant and cost-effective solution. We are incredibly excited to see the innovative services users will develop with their generative AI models, powered by PeriFlow Cloud.”

Get Started with PeriFlow Cloud Today

The public beta version of PeriFlow Cloud is now available. Users can deploy their large language models (LLMs) on PeriFlow, the fastest generative AI inference serving engine, in a matter of minutes. Visit the official website to get started today.

About FriendliAI

FriendliAI is a leading provider of cutting-edge inference serving engines for generative AI. Our mission is to enable our customers to serve their generative AI models efficiently, at low cost and with minimal environmental impact. For more information, please visit the FriendliAI website.

Source: FriendliAI

Press release distributed by the newswire service Newswire (www.newswire.co.kr)

Copyright © Newswire. Unauthorized reproduction and redistribution prohibited.