TAIPEI (Taiwan News) — Foxconn announced on Monday that it has developed its own traditional Chinese large language model (LLM), marking what the company calls a milestone for Taiwan’s AI sector.
The model, FoxBrain, was completed in four weeks using an efficient, cost-effective training method, according to a statement from Foxconn’s Hon Hai Research Institute. Nvidia provided technical support and assisted with training through its Taiwan-based “Taipei-1” supercomputer and NeMo AI service platform, DigiTimes Asia reported.
Foxconn said the AI model, initially developed for internal systems, will be open-sourced and made publicly available. FoxBrain features strong comprehension and reasoning capabilities and is optimized for Taiwanese language patterns, the company said.
"In recent months, the deepening of reasoning capabilities and the efficient use of GPUs have gradually become the mainstream development in AI," said Hon Hai Research Institute AI Director Yung-Hui Li (李永輝). "Our FoxBrain model adopted a highly efficient training strategy, focusing on optimizing the process rather than blindly increasing computing power."
The company acknowledged that FoxBrain lags behind China’s DeepSeek in performance but said it is “close to world-leading standards.” It also claimed the model surpasses Meta’s current LLMs and Taiwan Llama in mathematical reasoning ability.
Foxconn will present FoxBrain at Nvidia's GPU Technology Conference in California on March 20.