TAIPEI (Taiwan News) — Taiwanese computer maker Asus unveiled a broad lineup of new AI applications and laptops Tuesday at the Consumer Electronics Show in Las Vegas, highlighting its push to integrate AI into work, creativity, and everyday use.
Asus Co-CEO Samson Hu (胡書賓) said in a press release that the company sees AI as a tool to simplify work, inspire creativity, and improve daily life. The company introduced a range of end-to-end AI experiences under three categories: Workspace AI, Creator AI, and Everyday AI.
In Workspace AI, Asus showcased MyExpert, an all-in-one platform designed to boost productivity and work quality. The system integrates five functions aimed at streamlining office tasks: AI Chat, Knowledge Hub, Advanced Tools, AI Writer, and Mail Master. Asus also introduced AI ExpertMeet, which provides features such as meeting summaries, to-do lists, and real-time file retrieval, backed by data protection for secure collaboration across local and cloud environments.
For Creator AI, Asus combined hardware with AI applications such as StoryCube, MuseTree, and Creator Hub to build a smart content creation platform. Its ProArt product line is tailored for high-intensity editing and mobile content creation.
Under Everyday AI, the company said it is embedding AI into smart devices to support routine work, creative brainstorming, and daily tasks.

Asus also unveiled new laptops, including the Zenbook DUO, a dual-screen model featuring two 14-inch Lumina Pro OLED displays. The redesigned hinge reduces the gap between the screens by 70%, offering an improved dual-screen experience. The model was shortlisted for the CES 2026 Innovation Awards in the AI category.
Asus was not alone in spotlighting AI at CES. Rival Taiwanese PC maker MSI also announced a new line of AI desktop computers designed for edge computing, powered by AMD Ryzen AI Max+ 300 series processors delivering up to 126 trillion operations per second (TOPS) of AI computing power, per CNA.
MSI said its AI Edge systems use a new memory architecture that overcomes traditional video memory bottlenecks, enabling more efficient inference for large AI models.