-- Today's data centers consume and waste large amounts of energy responding to user requests as quickly as possible, often within just a few microseconds.

When a user sends a request to an app, bits of stored data are pulled from hundreds or thousands of services running across as many servers.

Before sending a response, the app must wait for the slowest service to process the data.

This lag time is known as tail latency.
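The effect described above can be illustrated with a small simulation (a hypothetical sketch, not from the article): when a request fans out to many services in parallel and must wait for the slowest one, even rare stragglers end up gating almost every request. The latency numbers and the 1% straggler rate below are invented for illustration.

```python
import random

random.seed(0)

def service_latency_us():
    # Assumed distribution: most services respond in ~100 microseconds,
    # but 1% stall for a full 10 milliseconds.
    return 10_000 if random.random() < 0.01 else random.gauss(100, 10)

def request_latency_us(fanout):
    # The app must wait for the slowest of `fanout` parallel services,
    # so the request's latency is the maximum of the sub-request latencies.
    return max(service_latency_us() for _ in range(fanout))

# With 100 parallel sub-requests, the chance that at least one straggler
# appears is 1 - 0.99**100, roughly 63% -- the tail dominates.
latencies = [request_latency_us(100) for _ in range(1_000)]
slow = sum(l >= 10_000 for l in latencies) / len(latencies)
print(f"requests gated by a straggler: {slow:.0%}")
```

This is why reducing tail latency matters more than improving average latency: at high fan-out, the slowest 1% of services sets the response time for the majority of requests.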

Current methods for reducing tail latency leave many CPU cores in a server idle so they can quickly handle incoming requests.

But these methods react on millisecond timescales -- about a thousand times slower than the microsecond response times today's requests demand.
