ASUS Answers the Need for an Asian Large Language Model

This year has seen explosive growth for generative AI, and 2024 looks to be no different. For example, a recent McKinsey survey shows that AI adoption has doubled over the past five years, with one-third of organizations reporting that they now regularly use generative AI in at least one business function.

It is no surprise, then, that governments are also ramping up efforts to develop LLMs tailored to their countries' needs. Language remains a major barrier: LLMs require massive training datasets and typically have a billion or more parameters. For the Asia-Pacific, an incredibly diverse region with more than 3,000 languages, LLMs that accommodate multiple languages will be a key differentiator – and essential for AI adoption to scale here.

Today, American and Chinese companies lead the pack in LLM development, but ASEAN remains a major gap to plug. As the first in the Asia-Pacific to launch a commercial AI high-performance computing service, ASUS is disrupting the AI landscape with a full suite of LLM services: no-code LLM services that lower the adoption curve, LLM production for more advanced users, and LLMs that support ASEAN languages such as Vietnamese. These offerings fill a gap left by Western LLMs, which lack the deep repertoire of ASEAN language-specific data needed for effective use in this part of the world.

The rise of large language models

Large language models (LLMs) are trained on extensive sets of data encompassing text and code, and use sophisticated algorithms to process and analyze human language in order to perform natural language processing (NLP) tasks. Compared with the AI-based language models of the past, modern LLMs require access to vast computational resources in order to learn and generate content.
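To make the idea concrete, here is a minimal illustrative sketch of running a basic NLP task (text generation) with a small, publicly available pretrained model through the Hugging Face Transformers library. The "gpt2" checkpoint and the prompt are generic placeholders for demonstration, not an ASUS or TWSC model.

```python
# Minimal illustrative sketch: generating text with a small open-source
# language model. "gpt2" is a public demo checkpoint, not an ASUS/TWSC model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Generative AI adoption in the Asia-Pacific is",
    max_new_tokens=40,        # keep the completion short for a quick demo
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```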

Related Reading: Empowering Smart Manufacturing with ASUS AISVision

Beyond generating content from the data they were trained on, LLMs can also be customized for a specific use case through fine-tuning, which makes them faster and more efficient than general-purpose LLMs. This process involves training an LLM on domain-specific data and enhances its ability to perform tasks for a particular application. As such, fine-tuned LLMs have enormous versatility and the potential to power enterprise AI applications, ranging from language translation and malware analysis to virtual assistants and sentiment analysis for business operations.
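As a rough illustration of what domain-specific fine-tuning can look like, the sketch below adapts a small public causal language model to a couple of placeholder support-ticket sentences using the Hugging Face Trainer API. The model name, the toy dataset, and the hyperparameters are assumptions for demonstration only and do not reflect the FFM training pipeline.

```python
# Illustrative fine-tuning sketch with Hugging Face Transformers.
# "distilgpt2" and the toy dataset are placeholders, not the FFM workflow.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import Dataset

model_name = "distilgpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token          # GPT-2-style models have no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# A tiny "domain-specific" corpus standing in for real enterprise data.
texts = [
    "Ticket #1024: VPN login fails after password reset.",
    "Resolution: clear the cached credentials and re-enroll the device.",
]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-finetune-demo",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()   # adapts the base model to the domain examples
```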

ASUS empowers Asia with its multilingual LLM

At ASUS, we are committed to creating AI technology for the public and to accelerating its development and applications across industries. Through our products and services, we aim to empower enterprises by eliminating entry barriers and increasing the widespread application of AI 2.0 technologies. 

Earlier this year, ASUS’ subsidiary, Taiwan Web Service Corporation (TWSC), released the Formosa Foundation Model (FFM) – the world’s first LLM with advanced capabilities in traditional Chinese. Powered by the Taiwania 2 supercomputer, developed by the National Center for High-performance Computing (NCHC), the FFM operates at an impressive scale of 176 billion parameters. It combines the ability to comprehend and generate text with traditional Chinese semantics, and it underpins enterprise-level generative AI solutions.

Image credit: National Center for High-performance Computing (https://www.nchc.org.tw/Page?itemid=2&mid=4#), OGDL v1.0, via Wikimedia Commons: https://commons.wikimedia.org/w/index.php?curid=109312082

But that is not all – because the FFM builds on the open-source BLOOM and Llama 2 language models and is optimized for multiple Asian languages, it demonstrates strong performance in understanding languages such as Japanese, Indonesian, Malay, Vietnamese, and more. The FFM has a deep understanding of the context and nuances of local languages in Asia and can easily be customized and tailored for generative AI applications in each region.
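As a rough illustration of multilingual prompting with an open-source base model, the sketch below sends a Vietnamese prompt to a small public BLOOM checkpoint. The "bigscience/bloom-560m" model is an openly available BLOOM variant chosen purely for illustration; it is not the FFM, and its output quality will fall well short of a purpose-built regional model.

```python
# Illustrative sketch: prompting a small public multilingual checkpoint.
# "bigscience/bloom-560m" is an open BLOOM variant, used here only to show
# multilingual prompting; it is not the Formosa Foundation Model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Vietnamese prompt: "Artificial intelligence in Southeast Asia"
prompt = "Trí tuệ nhân tạo ở Đông Nam Á"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```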

Additionally, the latest release in the FFM series offers various models suitable for use in the cloud or on-premises. This means that enterprises can deploy their own LLMs directly in private clouds or local data centers, keeping sensitive data in-house and addressing cybersecurity and privacy concerns.
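As a rough sketch of what on-premises deployment can mean in practice, a model can be loaded entirely from local storage and queried without any calls to external services. The directory path and helper function below are placeholder assumptions, not part of the FFM offering.

```python
# Illustrative on-premises sketch: load a model from a local directory so no
# prompt or data leaves the private environment. The path is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

LOCAL_MODEL_DIR = "/srv/models/private-llm"   # assumed local copy of the weights

tokenizer = AutoTokenizer.from_pretrained(LOCAL_MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(LOCAL_MODEL_DIR, local_files_only=True)

def answer(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a completion entirely on local hardware."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(answer("Summarize this quarter's support tickets in two sentences:"))
```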

Related Reading: ASUS Empowers the Future with its Commitment to Continuous Innovation

As the language abilities of AI advance over time, ASUS hopes to expand on the capabilities of its LLMs to achieve seamless performance across languages and allow different industries to participate in developing their own generative AI. ASUS will also remain focused on offering generative AI solutions that prioritize reliability, controllability, and data security. 

By continually investing in AI and providing open-source resources, ASUS aims to leverage its expertise and drive AI application innovation to a broader and higher level. We will continue to improve and accelerate the application and realization of AI 2.0 technologies and work towards enabling and democratizing AI across the business landscape – both in Asia and beyond.  

About ASUS

ASUS is a global technology leader that provides the world’s most innovative and intuitive devices, components, and solutions to deliver incredible experiences that enhance the lives of people everywhere. With its team of 5,000 in-house R&D experts, the company is world-renowned for continuously reimagining today’s technologies. Consistently ranked as one of Fortune’s World’s Most Admired Companies, ASUS is also committed to sustaining an incredible future. The goal is to create a net zero enterprise that helps drive the shift towards a circular economy, with a responsible supply chain creating shared value for every one of us.

https://asus.com