On-chip AI drives up fresh demand for a new breed of NPUs
  • Jason Jiang
  • Published 2019.02.08 14:10

The semiconductor industry is undergoing a rapid paradigm shift, with AI emerging as a new growth driver for semiconductor chips.

In particular, demand for AI chips is forecast to grow rapidly to reach US$55 billion by 2022, according to SEMICO, as they serve as the brains of a wide range of devices, from automobiles and IoT to smart infrastructure like smart cities.

“AI will really open up new opportunities, changing the way we live and work over the next five years. By 2020, the AI chip market will be half the size of the micro logic market, which covers microprocessors, microcontrollers and DSPs,” said Jim Feldhan, CEO of research and consulting firm SEMICO.

He bets that AI in cars, IoT systems, smartphones and servers will drive up demand for semiconductor chips the way PCs and smartphones revved up chip demand in the early 2000s and early 2010s, respectively.

Jim Feldhan, CEO of SEMICO

“We started in the desktop PC era and moved into the mobile era with the high penetration of smartphones. Now we are moving into AI, driven by infrastructure like smart cities, IoT and automobiles. This is really changing the horizon, offering new market opportunities,” he said.

True enough, AI algorithms are everywhere, from cars and smartphones to IoT devices, servers, vision systems and smart speakers, guzzling tons of computational power and memory bandwidth.

For example, AI algorithms are crying out for a new breed of computing architectures beyond the traditional Von Neumann architecture, which is built around a CPU-and-memory hierarchy.

A case in point is Google’s TPU, or tensor processing unit, a dedicated, purpose-built AI chip. Domain-specific processors, high-performance computing (HPC) and quantum computing are among the alternatives, too.

“What we are seeing here is that the industry is changing, and we are not going to see Intel’s x86 architecture rule AI and autonomous driving. That opens the door for new architectures to be developed and new competition in the high-performance computing arena,” added Jim Feldhan.

He said that Intel will see opportunities, too, but won’t grab a 70% to 80% market share.

The ubiquity of AI is also clamoring for a new breed of faster, higher-bandwidth memory chips, which the industry hopes will help reduce the mismatch in processing speed between CPUs and DRAM chips, the main bottleneck in the traditional Von Neumann CPU-and-memory computing architecture.
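The mismatch described above is often illustrated with a roofline-style estimate: when a workload needs more bytes from DRAM than the memory bus can supply per unit of compute, the processor stalls. A minimal sketch of that arithmetic follows; the peak-compute and bandwidth figures are illustrative assumptions, not measurements of any real chip.

```python
# Roofline-style estimate of when a workload becomes memory-bound.
# Both hardware figures below are illustrative assumptions.
PEAK_FLOPS = 10e12   # 10 TFLOP/s of peak compute (assumed)
DRAM_BW = 100e9      # 100 GB/s of DRAM bandwidth (assumed)

def attainable_flops(arith_intensity):
    """Throughput achievable at a given arithmetic intensity (FLOPs per byte moved)."""
    return min(PEAK_FLOPS, DRAM_BW * arith_intensity)

# A dot product performs ~2 FLOPs per 8 bytes loaded: intensity 0.25 FLOP/byte.
frac_of_peak = attainable_flops(0.25) / PEAK_FLOPS
print(frac_of_peak)  # -> 0.0025, i.e. 0.25% of peak compute
```

At 0.25 FLOP per byte, such a chip would reach only 0.25% of its peak, which is why higher-bandwidth memory, and the large on-chip buffers inside NPUs, matter so much for AI workloads.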

AI in automotive is a prime example of how self-learning and self-inferring algorithms have been accelerating the penetration of silicon content into cars, revolutionizing the way cars interact with other cars, drivers, pedestrians, roads and other physical objects like roadblocks and traffic lights.

As cars come awash in sensors like radar, lidar and cameras, tons of AI algorithms are now being embedded into processor chips, creating a new breed of AI chips, neural processing units or NPUs, to process data-heavy sensing signals like voice and images in real time.
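At their core, NPUs accelerate the multiply-accumulate (MAC) operations that dominate neural-network inference on such sensor data. A toy sketch of the primitive an NPU parallelizes, written in plain Python for clarity; a real NPU runs thousands of these lanes at once, typically in fixed-point hardware.

```python
def mac_neuron(inputs, weights, bias):
    """One neuron's multiply-accumulate: the basic operation an NPU accelerates."""
    acc = bias
    for x, w in zip(inputs, weights):
        acc += x * w          # a real NPU issues these MACs across parallel lanes
    return max(acc, 0.0)      # ReLU activation, commonly fused after the MAC stage

print(mac_neuron([1.0, 2.0], [3.0, 4.0], 5.0))   # 1*3 + 2*4 + 5 = 16.0
print(mac_neuron([1.0], [-2.0], 0.0))            # ReLU clamps -2.0 to 0.0
```

Processing a camera frame or voice sample means running millions of these per inference, which is why dedicated silicon beats a general-purpose CPU on both speed and power.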

AI algorithms also often sit in CPUs, GPUs or FPGAs to power sensor fusion, ADAS (advanced driver assistance systems) and car infotainment systems. Yet on-chip AI will go mainstream.

According to SEMICO, the automotive chip market will grow at a compound annual rate of 14% to hit US$77 billion in 2021, led mainly by the rapid adoption of silicon content such as processors and memory chips. The AI semiconductor, or on-chip AI, market is expected to grow especially fast, at a CAGR of 67%, to reach US$15.8 billion.

The rapid adoption of on-chip AI, or AI-centric software, will likely propel consumption of memory chips in automobiles, too, as processor chips grow increasingly hungry for more memory space to temporarily store tons of data before they process it.

AI in IoT systems is also driving up demand for processor chips, memory chips, communications chips and other sensors, as these systems run on the Internet, sensor hubs, and cloud and edge computing platforms to detect and recognize signals, store them, and either make a local decision at the edge or send them to the cloud for analysis on data-center servers.
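The split just described, acting locally at the edge and forwarding only critical data to the cloud, amounts to a simple filter loop. A minimal sketch; the threshold value and the `send_to_cloud` stub are hypothetical placeholders, not part of any real IoT stack.

```python
ALERT_THRESHOLD = 40.0  # hypothetical sensor limit, e.g. degrees Celsius

def send_to_cloud(reading):
    """Placeholder for an uplink to a data-center analytics service."""
    return f"uploaded {reading}"

def handle_reading(reading):
    """Edge logic: decide routine cases locally, escalate critical ones."""
    if reading <= ALERT_THRESHOLD:
        return "handled at edge"      # local decision, no network bandwidth used
    return send_to_cloud(reading)     # only critical data hits the network

print(handle_reading(25.0))  # routine -> handled at edge
print(handle_reading(55.0))  # critical -> uploaded 55.0
```

Keeping routine decisions on the edge device is what lets the heavier cloud analysis consume bandwidth only when it is worth it.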

AI algorithms are also increasingly embedded into FPGAs or other processors to work as hardware accelerators that complement server CPUs.

Farming and caring for cows with an IoT system is a good example of how much silicon content IoT will consume.

“Lots of data are being generated at the edge, in automobiles, smart cities, the sciences and finance. Imagine that each cow has a bio-sensor on it, which gets connected to servers. The servers can tell that the cows are stressed because they have been overfed. The cows can go into a giant shaded area to rest near water baskets and large fans, which are all controlled through the servers. When the cows are stressed, the servers put out more water and raise the fan level, and the cows become happy, producing more milk,” said CEO Jim Feldhan.

“This is one example of how IoT and AI are starting to make all economies more productive and lower costs. We see lots of data go to the network for a local decision at the edge, but critical data and analysis are handled in the cloud, which is driving overall bandwidth,” he added.

True enough, AI makes almost all devices so user-friendly, so smart and, more importantly, so appealing that consumers and enterprises can’t help buying them. That purchasing power will in turn likely lead to more consumption of silicon content across almost all devices.


