Fujitsu is reportedly lining up a homegrown AI chip built on Rapidus’ 1.4nm process, with Japan’s government expected to foot most of the early bill. If that sounds ambitious, it is: the plan would tie together domestic design and domestic manufacturing for an inference chip aimed at servers and related systems, not the usual phone or laptop target.
The chip would be an NPU, or neural processing unit, the quieter cousin in the AI accelerator family. GPUs still dominate training, but NPUs make more sense for inference, where the job is running already-trained models efficiently rather than chasing raw throughput. That makes Fujitsu's move less about headline-grabbing training benchmarks and more about building the plumbing behind deployed AI services.
Fujitsu’s NPU design for servers, not consumer gadgets
NPUs usually show up in PCs and smartphones, where power efficiency matters and AI features are increasingly baked into the device itself. Fujitsu appears to be taking the opposite route by targeting server systems, which is a smart way to avoid fighting Nvidia on its strongest turf. It also hints at a broader shift: the real money in AI is increasingly moving from training showpieces to inference infrastructure that has to run all day, every day.
- Process node: 1.4nm
- Chip type: NPU for AI inference
- Target use: servers and related systems
- Estimated initial development cost: ¥58 billion ($363 million)
- Expected NEDO share: about two-thirds
Japan is trying to build a chip stack at home
The funding piece is almost as important as the silicon. Japan's New Energy and Industrial Technology Development Organization (NEDO) is expected to cover roughly two-thirds of the project's ¥58 billion initial cost, part of a wider state-backed semiconductor push that has already helped Rapidus secure about ¥1.7 trillion in combined government and private investment. Japan's Ministry of Economy, Trade and Industry has also nearly quadrupled its support budget for advanced semiconductors and AI development, to about ¥1.23 trillion for the current fiscal year.
That level of backing tells you this is about more than one chip. Japan wants a domestic chain that can compete with the Taiwanese and US giants that currently define advanced manufacturing, and Rapidus sits at the center of that bet. The catch, as always, is execution: making a 1.4nm process work is one thing; turning it into a commercially useful server chip is another.
Fujitsu’s existing AI chip partnerships
Even with this plan, Fujitsu is not cutting itself off from the rest of the market. It already works with Nvidia and plans to connect its CPUs with Nvidia GPUs on the same substrate by 2030, while also maintaining a separate AI chip partnership with AMD. That mix of alliances suggests a pragmatic strategy: keep the Western accelerators for breadth and volume, and pursue a domestic NPU where Japan can control more of the stack.
If the project stays on track, the interesting question is not whether Fujitsu can build another AI chip. It’s whether Japan can make this the kind of vertically integrated platform that actually keeps workloads, IP, and manufacturing inside the country instead of just funding another expensive science project.