Fujitsu is reportedly working on a dedicated AI inference chip built on Rapidus’ 1.4nm process, with the entire project meant to stay in Japan from design to manufacturing. That is a pretty bold way to answer a very modern problem: if the AI boom is being powered by foreign GPUs, Japan wants a piece of the AI hardware stack it can actually call its own.

According to Nikkei Asia, the company is targeting servers and related systems rather than phones or laptops, which is the more interesting choice. NPUs are built for inference, not training, so they can do the repetitive AI work more efficiently than general-purpose GPUs – and in data centers, efficiency is the whole game.
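The efficiency argument can be made concrete with a back-of-envelope sketch. Every number below (power draws, fleet size, electricity price) is a hypothetical assumption chosen for illustration, not a figure from the report — the point is just how quickly per-chip watts compound at fleet scale:

```python
# Illustrative only: why perf-per-watt dominates data center economics.
# All constants are assumptions, not reported specs.

GPU_WATTS = 700          # assumed draw of a general-purpose GPU on inference
NPU_WATTS = 350          # assumed draw of a dedicated NPU doing the same work
ACCELERATORS = 10_000    # assumed fleet size for one inference cluster
HOURS_PER_YEAR = 24 * 365
USD_PER_KWH = 0.10       # assumed electricity price

def annual_power_cost(watts_per_chip: float) -> float:
    """Yearly electricity cost for the whole fleet, in USD."""
    kwh = watts_per_chip * ACCELERATORS * HOURS_PER_YEAR / 1000
    return kwh * USD_PER_KWH

gpu_cost = annual_power_cost(GPU_WATTS)
npu_cost = annual_power_cost(NPU_WATTS)

print(f"GPU fleet: ${gpu_cost:,.0f}/year")   # $6,132,000/year
print(f"NPU fleet: ${npu_cost:,.0f}/year")   # $3,066,000/year
print(f"Savings:   ${gpu_cost - npu_cost:,.0f}/year")
```

Halving per-chip power halves the fleet's electricity bill, before even counting the cooling and rack-density savings — which is why a chip built only for inference can justify its existence against more flexible GPUs.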

Fujitsu’s 1.4nm AI inference chip plans

The reported chip would be fabricated on Rapidus’ advanced 1.4nm process, a node that still sounds like science fiction until you remember everyone in semiconductors is now racing toward ever-smaller numbers. NEDO is expected to cover about two-thirds of the estimated ¥58 billion ($363 million) initial development cost, which tells you this is as much industrial policy as product roadmap.

  • Chip type: NPU for AI inference
  • Target use: servers and related systems
  • Process: Rapidus 1.4nm
  • Initial development cost: ¥58 billion ($363 million)
  • Public funding expected: about two-thirds via NEDO

Japan’s semiconductor push is not subtle

This fits a broader pattern in Japan, where the government has been throwing serious money at chipmaking revival efforts. Rapidus has already secured roughly ¥1.7 trillion in combined government and private investment, and the Ministry of Economy, Trade and Industry has nearly quadrupled support for advanced semiconductors and AI development to about ¥1.23 trillion for the current fiscal year.

That level of backing does not guarantee a breakthrough, of course. It does suggest Japan is tired of being a customer in somebody else’s compute market, especially as AI hardware demand keeps pulling attention toward inference chips, memory, and packaging rather than just the headline-grabbing GPU duopoly.

Fujitsu still isn’t leaving Nvidia behind

Fujitsu is not walking away from the broader GPU ecosystem, either. The company already works with Nvidia and plans to connect its CPUs with Nvidia GPUs on the same substrate by 2030, while also maintaining a separate AI chip partnership with AMD. In other words: build your own chip, but keep the door open to everyone else’s, because data centers tend to reward pragmatism over pride.

The open question is whether a domestically made inference chip can become more than a prestige project. If Fujitsu and Rapidus can deliver real performance and power-efficiency gains at scale, Japan gets a strategic foothold in AI hardware; if not, it joins the long list of chip ambitions that sounded better on paper than in production.

Source: Tom's Hardware
