Huawei revealed its new AI accelerator, the Atlas 350, claiming it delivers nearly three times the compute power of Nvidia's H20 chip tailored for China. Built around the Ascend 950PR chip, the Atlas 350 hits 1.56 PFlops at FP4 precision, an emerging low-precision format optimized for faster data throughput in AI inference workloads.
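To make the FP4 claim concrete: a 4-bit float in the common E2M1 layout (1 sign bit, 2 exponent bits, 1 mantissa bit) can represent only sixteen values, which is why it trades accuracy for throughput. The sketch below is illustrative only, not Huawei's implementation; it rounds weights to the standard E2M1 value grid to show how coarse FP4 quantization is.

```python
# Illustrative sketch of FP4 (E2M1) quantization; not Huawei's implementation.
# E2M1 has 1 sign bit, 2 exponent bits, 1 mantissa bit, giving this value grid:
FP4_POSITIVE = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
FP4_GRID = sorted(FP4_POSITIVE + [-v for v in FP4_POSITIVE if v != 0.0])

def quantize_fp4(x: float) -> float:
    """Round x to the nearest representable FP4 (E2M1) value."""
    return min(FP4_GRID, key=lambda v: abs(v - x))

# Example: higher-precision weights collapse onto the 15-value grid.
weights = [0.27, -1.4, 5.1, 0.9]
print([quantize_fp4(w) for w in weights])  # → [0.5, -1.5, 6.0, 1.0]
```

Inference accelerators pair such low-precision arithmetic with per-block scaling factors so the narrow value range still covers real weight distributions.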
Targeted at search recommendations, multimodal AI generation, and large language model inference, the Atlas 350 is a dedicated hardware accelerator designed for server deployment. It rides on Huawei’s growing independence in semiconductor design, avoiding reliance on US technologies amid ongoing geopolitical tensions.
Atlas 350’s performance advantage over Nvidia H20 in AI inference
Zhang Dixuan, head of Ascend Computing, highlighted Atlas 350’s 2.8× performance boost over Nvidia’s H20 variant customized for the Chinese market. The Ascend 950PR chip, first unveiled last September, features prominently in Huawei’s three-year roadmap focused on AI inference and pre-processing tasks involving token-level analysis.
Huawei’s AI infrastructure and storage system upgrades in 2026
This development comes as Huawei ramps up AI infrastructure alongside the surge in agent-based AI systems, which demand exponentially higher computational throughput and sophisticated data management to enable autonomous planning and actions.
Looking ahead, Huawei plans major upgrades to its storage line in 2026, including all-SSD OceanStor Dorado and Pacific 9926 systems aimed at enterprise clients. Additionally, the FusionCube A1000 will be released to simplify rapid AI deployment in small and medium-sized businesses.
Yuan Yuan, president of Huawei's storage solutions, described 2026 as a "data-first" phase of the AI era, following an initial period focused mainly on computational power. The company intends to integrate closely with national data infrastructure projects, underscoring the strategic role of storage modernization alongside AI acceleration.

