PITTSBURGH (June 20, 2019) -- Self-driving cars rely on their ability to accurately "see" the road ahead and make adjustments based on what they see.

They need to, for instance, react to a pedestrian who steps out from between parked cars, or know not to turn down a road that is unexpectedly closed for construction.

New research from the University of Pittsburgh will develop a neuromorphic vision system, modeled on the human brain, that takes a new approach to capturing visual information, with benefits for everything from self-driving vehicles to neural prosthetics.

Ryad Benosman, PhD, professor of ophthalmology at the University of Pittsburgh School of Medicine, who also holds appointments in electrical engineering and bioengineering, and Feng Xiong, PhD, assistant professor of electrical and computer engineering at the Swanson School of Engineering, received $500,000 from the National Science Foundation (NSF) to conduct this research.

Conventional image sensors record information frame by frame, storing a great deal of redundant data along with the useful information.

This excess data storage occurs because most pixels, such as those showing stationary buildings in the background, do not change from frame to frame.
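The contrast can be illustrated with a small Python sketch. This is not the researchers' system, only a toy demonstration of the general event-based principle: instead of storing every pixel of every frame, record only the pixels whose brightness changes between consecutive frames (the threshold and scene below are arbitrary choices for illustration).

    import numpy as np

    def event_like_capture(frames, threshold=10):
        """Emit (time, x, y, polarity) tuples only for pixels whose
        brightness changes between consecutive frames, rather than
        storing every full frame. Illustrative sketch only."""
        events = []
        prev = frames[0].astype(np.int16)
        for t, frame in enumerate(frames[1:], start=1):
            curr = frame.astype(np.int16)
            diff = curr - prev
            ys, xs = np.nonzero(np.abs(diff) > threshold)
            for y, x in zip(ys, xs):
                # each "event" records only where, when, and in which
                # direction the brightness changed
                events.append((t, int(x), int(y), int(np.sign(diff[y, x]))))
            prev = curr
        return events

    # Example: a mostly static 100x100 scene with a single moving bright spot
    frames = [np.zeros((100, 100), dtype=np.uint8) for _ in range(5)]
    for t in range(5):
        frames[t][50, 10 + t] = 255  # the spot shifts one pixel per frame

    events = event_like_capture(frames)
    full_pixels = len(frames) * frames[0].size
    print(f"Frame-based storage: {full_pixels} pixel values")
    print(f"Event-based storage:  {len(events)} events")

In this toy scene, frame-based capture stores 50,000 pixel values while the event-based sketch records only a handful of changes, which is the kind of redundancy a neuromorphic sensor avoids.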
