Google has begun to use computer processors its engineers designed to increase the performance of the company's artificial intelligence software, potentially threatening the businesses of traditional chip suppliers such as Intel Corp. and Nvidia Corp.

During the past year, Google has deployed thousands of these specialized artificial intelligence chips, called Tensor Processing Units, in servers within its data centers, Urs Holzle, the company's senior vice president of infrastructure, told reporters Wednesday at the company's developer conference.

Google declined to specify precisely how many of the chips it's using, but stressed that the company continues to use many typical central processing units and graphics processing units made by other companies.

"It's been in pretty widespread use for about a year," Holzle said.

Google has no plans to sell the specialized chips to third parties, said Diane Greene, Google's senior vice president of cloud.

Google and other big data-center operators are the largest consumers of server processors, the main engine of growth and profit for Intel, the world's biggest semiconductor maker.

Graphics maker Nvidia is also pinning much of its future growth ambitions on the bet that its chips will have a larger role to play in data processing, including artificial intelligence and machine learning.

Google's chip connects to computer servers via a protocol called PCI-E, which means it can be slotted into the company's computers, rapidly augmenting them with faster artificial-intelligence capabilities.
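To make the PCI-E point concrete: on a Linux server, any PCIe-attached card, including an accelerator slotted in alongside CPUs and GPUs, is visible to the host through standard sysfs paths. The minimal sketch below simply enumerates those devices; the sysfs layout is standard Linux, but nothing here reflects Google's actual hardware or vendor IDs, which the company has not disclosed.

```python
# Minimal sketch: enumerating PCIe devices on a Linux host via sysfs.
# This shows how a PCIe-attached accelerator card would appear to the
# operating system; the paths are standard Linux sysfs, and no specific
# vendor or device ID is assumed to belong to any particular chip.
import os

SYSFS_PCI = "/sys/bus/pci/devices"

def list_pci_devices():
    """Return (address, vendor_id, device_id, class_code) for each PCIe device."""
    devices = []
    for addr in sorted(os.listdir(SYSFS_PCI)):
        dev_path = os.path.join(SYSFS_PCI, addr)

        def read(attr):
            with open(os.path.join(dev_path, attr)) as f:
                return f.read().strip()

        devices.append((addr, read("vendor"), read("device"), read("class")))
    return devices

if __name__ == "__main__":
    for addr, vendor, device, cls in list_pci_devices():
        print(f"{addr}  vendor={vendor}  device={device}  class={cls}")
```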

Over time, Google expects to design more system-level components, Holzle said. As the field matures, the company might very well build more specialized processors for specific AI tasks. Even Nvidia, which makes traditional graphics processing units that have been adopted for machine learning, is beginning to add more custom elements to its hardware.
