Self-driving cars look at the world using sensors. But how do they interpret all that data?
The key is the ability to process and identify road data, from street signs to pedestrians to surrounding traffic. With the power of AI, driverless vehicles can detect and respond to their environment in real time, allowing them to navigate safely.
They accomplish this by using a series of algorithms known as deep neural networks or DNNs.
DNNs allow vehicles to learn how to navigate the world on their own using sensor data, rather than relying on hand-written rules such as "stop at red."
These mathematical models are inspired by the human brain: they learn through experience. If a DNN is shown multiple images of stop signs under different conditions, it can learn to recognize stop signs on its own.
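The learning-through-examples idea can be sketched in miniature. Below, a single artificial neuron (a stand-in for a full DNN) is trained by gradient descent on synthetic pixel data to flag "red-dominant" colors. The data, the red-pixel task, and every name here are illustrative assumptions, not how production networks are trained.

```python
import math
import random

# Minimal sketch: one artificial neuron trained by stochastic gradient
# descent to flag "red-dominant" RGB values -- a stand-in for how a real
# DNN learns stop-sign features from labeled examples. All data is synthetic.

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp for numerical safety
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, epochs=500, lr=0.5):
    w, b = [0.0, 0.0, 0.0], 0.0   # one weight per RGB channel, plus a bias
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y           # gradient of the log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Synthetic training set: red-dominant pixels labeled 1, everything else 0.
random.seed(0)
reds   = [(random.uniform(0.7, 1.0), random.uniform(0.0, 0.3), random.uniform(0.0, 0.3))
          for _ in range(50)]
others = [(random.uniform(0.0, 0.5), random.uniform(0.3, 1.0), random.uniform(0.3, 1.0))
          for _ in range(50)]
w, b = train(reds + others, [1] * 50 + [0] * 50)

def is_red_like(pixel):
    """True if the trained neuron judges the pixel red-dominant."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, pixel)) + b) > 0.5
```

A real perception DNN has millions of parameters and learns from labeled camera images, but the mechanism is the same: weights are adjusted to reduce the error on labeled examples, so recognition is learned rather than hand-coded.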
Two keys to self-driving car safety: diversity and redundancy
But one algorithm alone can't do the job. Safe autonomous driving requires an entire set of DNNs, each dedicated to a specific task.
These networks are diverse, covering everything from reading signals to identifying collisions and driving routes. They are also redundant, with overlapping capabilities to reduce the chances of failure.
No set number of DNNs is required for autonomous driving. And new capabilities often arise, so the list is constantly growing and changing.
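Redundancy can be made concrete with a small sketch: combine the outputs of several overlapping networks and act only when a majority agrees, so a single network's failure doesn't decide the vehicle's behavior. The network names and labels below are hypothetical.

```python
from collections import Counter

# Illustrative sketch: fuse the outputs of redundant perception networks
# by majority vote. Detector names and labels are hypothetical stand-ins.

def fuse(detections):
    """Return the label a strict majority of redundant networks agree on."""
    votes = Counter(detections.values())
    label, count = votes.most_common(1)[0]
    if count > len(detections) / 2:
        return label
    return "uncertain"   # no consensus: fall back to cautious behavior

readings = {"camera_net": "pedestrian", "radar_net": "pedestrian", "lidar_net": "vehicle"}
fuse(readings)  # majority says "pedestrian"
```

Real systems fuse continuous confidence scores rather than hard labels, but the principle is the same: overlapping capabilities reduce the chance that one failure propagates to the vehicle's decision.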
To actually drive a car, the signals generated by the individual DNNs must be processed in real time. This requires a centralized, high-performance computing platform such as NVIDIA DRIVE AGX.
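The real-time constraint can be illustrated with a toy per-frame loop: every network must produce its output within a fixed frame budget. The 30 fps budget and the stand-in network functions are assumptions for illustration, not DRIVE AGX specifics.

```python
import time

# Illustrative sketch: run several perception networks on the same camera
# frame and check the total against a fixed latency budget, mimicking the
# real-time constraint a centralized platform must meet. The "networks"
# here are trivial stand-in functions.

FRAME_BUDGET_S = 1 / 30  # ~33 ms per frame at an assumed 30 fps

def run_perception(frame, networks):
    """Run every network on the frame; report whether the budget was met."""
    start = time.perf_counter()
    outputs = {name: net(frame) for name, net in networks.items()}
    elapsed = time.perf_counter() - start
    return outputs, elapsed <= FRAME_BUDGET_S

networks = {
    "lane_net": lambda frame: "lane detected",
    "light_net": lambda frame: "green",
}
outputs, on_time = run_perception("frame-0", networks)
```

On real hardware the networks run concurrently on dedicated accelerators rather than sequentially in one loop; the point is only that every output must arrive within the frame budget for the vehicle to react in time.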
Below are some of the major DNNs NVIDIA uses for autonomous vehicle perception.
Pathfinders
DNNs that help figure out where the car can drive and safely plan its route:
- OpenRoadNet identifies all the drivable space around the vehicle, whether in the car's own lane or an adjacent one.
- PathNet highlights the drivable path ahead, even when no lane markings are present.
- LaneNet detects lane lines and other markers that define the car's path.
- MapNet also identifies lanes, as well as landmarks that can be used to create and update high-definition maps.
Object Detection and Classification
DNNs that identify potential obstacles, as well as traffic lights and signs:
- DriveNet detects other cars, pedestrians, traffic lights and signs on the road, but does not read the color of a light or the type of a sign.
- LightNet classifies the state of a traffic light: red, yellow, or green.
- SignNet identifies the type of a sign: stop, yield, one-way, and so on.
- WaitNet detects situations where the vehicle must stop and wait, such as at an intersection.
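The division of labor among these networks suggests a two-stage pattern: a general detector localizes objects without reading their state, then specialist classifiers refine each detection. The sketch below uses hypothetical stand-in functions; it mirrors the DriveNet/LightNet/SignNet split in spirit only.

```python
# Illustrative sketch: a detector finds objects, then per-category
# specialist classifiers refine each detection. All functions, categories,
# and the frame format are hypothetical stand-ins.

def detect(frame):
    # Stand-in detector: yields (category, crop) pairs already in the frame.
    return frame["objects"]

def classify_light(crop):
    return crop.get("color", "unknown")

def classify_sign(crop):
    return crop.get("sign_type", "unknown")

SPECIALISTS = {"traffic_light": classify_light, "sign": classify_sign}

def perceive(frame):
    """Detect objects, then hand each to its specialist classifier, if any."""
    results = []
    for category, crop in detect(frame):
        refine = SPECIALISTS.get(category)
        detail = refine(crop) if refine else None
        results.append((category, detail))
    return results

frame = {"objects": [("traffic_light", {"color": "red"}),
                     ("sign", {"sign_type": "stop"}),
                     ("pedestrian", {})]}
perceive(frame)  # [("traffic_light", "red"), ("sign", "stop"), ("pedestrian", None)]
```

Keeping detection and fine-grained classification in separate networks lets each be trained, validated, and updated independently, which supports the diversity-and-redundancy goal described above.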
The List Goes On
DNNs that can detect the status of the parts of the vehicle and cockpit, as well as facilitate maneuvers like parking:
- ClearSightNet monitors how well the vehicle's cameras can see, detecting conditions that limit visibility, such as rain, fog, and direct sunlight.
- ParkNet identifies open parking spots.
These networks are just a sample of the DNNs that make up the redundant and diverse DRIVE Software perception layer.