Nvidia announced today that it has launched a number of efforts to speed deep learning inference, the phase in which a trained neural network makes predictions on new data, and a step that is critical for artificial intelligence applications.

Nvidia says some of these advances can cut data center costs by up to 70 percent, and that its graphics processing units (GPUs) can perform deep learning inference up to 190 times faster than central processing units (CPUs).

In the past five years, programmers have made huge advances in AI, first by training deep learning neural networks on existing data, and then by deploying those trained networks to make predictions on new inputs, the inference step.
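To make that distinction concrete, here is a minimal PyTorch sketch (not code from Nvidia's announcement; the tiny model and random data are made up for illustration). It shows the two phases the article contrasts: iterative weight-fitting during training, and a single cheap forward pass at inference time, which is the phase Nvidia is accelerating.

```python
import torch
import torch.nn as nn

# A toy classifier standing in for a real deep learning model.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Training phase: repeatedly adjust weights against existing labeled data.
x, y = torch.randn(64, 8), torch.randint(0, 2, (64,))
for _ in range(100):
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()

# Inference phase: one forward pass on new data, no gradients needed.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 8)).argmax(dim=1)
print(prediction)
```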

Nvidia’s efforts are aimed at improving inference while slashing the cost of deep learning-powered services, said Jensen Huang, CEO of Nvidia, in a keynote speech at the company’s GTC (GPU Technology Conference) event in San Jose, California.

Thanks to these improvements, tech companies are making strides in speech recognition, natural language processing, recommendation systems, and image recognition.

“We are experiencing a meteoric rise in GPU accelerated computing,” said Ian Buck, vice president and general manager of accelerated computing at Nvidia, in a press event.
