Google's Tensor Processing Unit (TPU) fits into a server's hard-drive slot and is claimed to accelerate TensorFlow applications by the equivalent of skipping three generations of Moore's Law.

Advertising giant Google has unveiled a custom processor developed to speed up its TensorFlow machine learning platform: the Tensor Processing Unit, or TPU.

'We started a stealthy project at Google several years ago to see what we could accomplish with our own custom accelerators for machine learning applications.

'TPU is tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation.

'Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models, and apply these models more quickly, so users get more intelligent results more rapidly.'
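The precision trade-off described above can be illustrated with a simple quantisation sketch. This is purely an assumption-laden example, not Google's implementation: it shows the general idea of mapping 32-bit floating-point values onto 8-bit integers, the kind of reduced-precision representation that lets a chip spend fewer transistors per operation.

```python
import numpy as np

# Illustrative sketch only (not the TPU's actual scheme): quantise
# float32 values to int8 with a single scale factor. Storing each
# value in 8 bits instead of 32 cuts memory and arithmetic cost,
# at the price of a small, bounded rounding error.

def quantize_to_int8(values):
    """Map float32 values onto the int8 range [-127, 127]."""
    scale = np.max(np.abs(values)) / 127.0
    quantized = np.round(values / scale).astype(np.int8)
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float32 values from the int8 codes."""
    return quantized.astype(np.float32) * scale

weights = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, scale = quantize_to_int8(weights)
approx = dequantize(q, scale)
# approx stays within one quantisation step of the originals,
# while each value occupies a quarter of the storage.
```

The rounding error is bounded by half a quantisation step, which is why many machine learning models tolerate this compression with little loss of accuracy.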

The TPU is no lab experiment, either: impressively, Google went from first tested silicon to running applications in its data centres in just 22 days, and the chip already accelerates products ranging from Street View and RankBrain to the machine intelligence that recently bested Go champion Lee Sedol.
