TAIPEI (Taiwan News) — AI hardware manufacturer Nvidia launched four Nvidia Inference Microservices (NIMs) for deploying AI large language models (LLMs) in Taiwan on Monday (Aug. 26).
In a press release on Monday, Nvidia said the NIMs support LLMs used by the AI community and are tailored to regional needs. NIMs are container-like packages that bundle the instructions needed to speed up AI model inference. They deliver more accurate responses aligned with local languages and cultural traditions, the company explained, per CNA.
Nvidia said Chang Gung Memorial Hospital in Taoyuan is developing a dedicated AI inference service (AIIS) to manage LLM applications within the hospital. By using Llama 3-Taiwan 70B, a NIM trained in Mandarin Chinese, the hospital aims to present medical information in more nuanced language that patients can understand, it reported.
Nvidia added that electronics manufacturer Pegatron plans to use Llama 3-Taiwan 70B to develop applications for both internal and external use. Pegatron has integrated the microservice with its PEGAAi Agentic AI system to enhance manufacturing and operational efficiency, Nvidia said.
However, Nvidia cautioned that while regional AI models can provide localized responses, companies still need to fine-tune them. Meanwhile, the company said generative AI software revenue is expected to reach US$48 billion (NT$1.5 trillion) in the Asia-Pacific region by 2030.