
Nvidia's contribution to the self-driving future

 

Based in Santa Clara, California, graphics card producer Nvidia is best known for its contributions to the computer industry. But seeing that the car of the future will likely be man’s most comprehensive computing device, Nvidia has been walking the automotive path for the last decade. Just like a computer, the car of the future will need computer chips, a processor, a graphics card and networking, and Nvidia plans on being the company to provide them all.

 

Some of Nvidia’s car contributions have been straight transfers from other niches, like the Tegra series of mobile processors used in smartphones, laptops and later cars. Another example is the Titan X, Nvidia’s high-performance graphics card, now also used for driverless deep learning. Other technology Nvidia developed specifically for self-driving, like the Drive PX and Drive PX2 processors, the DriveWorks software suite and the DriveNet neural vehicle network. Could the next step be an Nvidia self-driving car?

 


 

Tegra processors and Titan graphics cards

While Nvidia’s GPUs drive the world’s most extreme gaming PCs and supercomputers, its Tegra mobile processor chips have been used by several car manufacturers to build their self-driving brains. In 2015, for example, the Tegra K1 found its way into the famous Tesla Model S. Later, the X1 was introduced, crushing the K1’s specs with multi-core 64-bit CPUs, a 2.3 GHz clock speed, unbeatable 4K video capabilities and 256 GPU cores. Those 256 cores support Nvidia CUDA, the company’s parallel computing platform, enabling automotive applications like obstacle recognition and customized heads-up displays. The Tegra X1 is used in driverless technology by Audi, BMW and Volvo.

 

To identify the enormous vocabulary of objects and situations on the road, Nvidia believes a separate but equally strong processor is required. The GPU, or graphics processing unit, does exactly that: it processes the complex computer vision algorithms fed by the multiple cameras before sending the results to the main processor. Nvidia’s most powerful GPUs used in driverless car technology are Titans. Literally. The Titan X (and the Titan Z) is built into Nvidia’s Drive PX series and the DIGITS DevBox, and is used for deep learning and neural networks in the DGX-1. More on all of these below.

 

 

Drive PX and Drive PX2

The first autonomous driving platform Nvidia built was the Drive PX, at the beginning of 2015. The system runs on Nvidia’s cloud-based DIGITS software and combines detection and tracking of a driverless car’s whereabouts and obstacles. It is built around a deep learning neural network (based on Caffe) and can fuse data from 12 separate cameras, as well as lidar (Nvidia uses Velodyne’s lidar), radar and ultrasonic sensors, allowing its algorithms to accurately understand the full 360-degree environment around the car. Drive PX also includes end-to-end HD mapping (similar to Here’s service), localization and path planning through Nvidia DriveWorks.
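
To give a feel for what fusing those sensor streams means in practice, here is a toy Python sketch that merges detections from different sensors into one 360-degree picture of the car’s surroundings. All names, classes and numbers are invented for illustration; this is a conceptual stand-in, not DriveWorks or Drive PX code.

from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str          # e.g. "camera_3", "lidar", "radar_rear"
    bearing_deg: float   # angle around the car, 0 = straight ahead
    range_m: float       # distance to the detected object
    label: str           # "car", "pedestrian", "curb", ...

def fuse(detections, sector_deg=10):
    """Bucket detections into angular sectors around the car and keep the
    nearest object per sector -- a crude stand-in for a fused 360-degree view."""
    sectors = {}
    for d in detections:
        sector = int(d.bearing_deg % 360) // sector_deg
        if sector not in sectors or d.range_m < sectors[sector].range_m:
            sectors[sector] = d
    return sectors

readings = [
    Detection("camera_1", 5.0, 32.0, "car"),
    Detection("lidar", 4.0, 31.5, "car"),             # same object, closer estimate wins
    Detection("radar_rear", 181.0, 12.0, "car"),
    Detection("ultrasonic_right", 92.0, 0.8, "curb"),
]

for sector, det in sorted(fuse(readings).items()):
    start = sector * 10
    print(f"{start:3d}-{start + 9:3d} deg: {det.label} at {det.range_m} m ({det.sensor})")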

 

Only a year later, Nvidia refined the general idea of the PX and launched the PX2, giving it (a lot) more power. The PX2 sports 12 CPU cores and delivers 8 teraflops of processing power, similar to about 6 Titan X video cards. The system is as fast as 150 MacBook Pros and can achieve 24 trillion operations per second. This increase in power was aimed squarely at driverless car workloads. The PX and PX2 let carmakers integrate the technology into their own driverless systems; read: hardware not included. Tesla, Audi, Ford and Mercedes-Benz all use(d) Nvidia’s Drive PX somewhere in their driverless systems (according to Fool.com, more than 50 carmakers have used the PX). Volvo was the first to buy the new PX2 and will implement it in a hundred XC90 SUVs that will hit public roads in 2017.

 

[Image: the deep learning process]

Deep Learning and Dave2

Today’s advanced deep neural networks combine algorithms, big data and the computational power of the GPU to let computers learn at a speed, accuracy and scale that moves toward true artificial intelligence. One can drown in Nvidia’s on-site deep learning resources, covering computer vision, speech recognition and natural language processing. For the interested: Getting Started, Nvidia’s deep learning software, and computer vision through Caffe and cuDNN.

 

One of Nvidia’s latest projects brings all this knowledge and preparation together. In April 2016, Nvidia posted the results of a nine-month end-to-end self-driving car project called Dave2: two Nvidia self-driving test cars using deep learning (and only one camera each!) to drive themselves. With this project, Nvidia wanted to see if it could bypass the need to hand-code features such as lane markings, guardrails or other cars, and avoid writing a near-infinite amount of “if, then, else” code. According to Nvidia, the road is simply too unpredictable to capture in hand-written rules.

 

The setup of Dave2 was pretty simple. Input: video of the view of the road. Output: the steering wheel. The neural network in between learned to steer by being shown video of a human driving, paired with what the human driver did to the steering wheel in response. You could say the network learned to drive by sitting next to a human driver. Results are in the video below; a rough code sketch of the setup follows the quote. Here is Nvidia on its own findings:

 

“Compared to explicit decomposition of separate driverless problems, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance.”
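
To make that camera-in, steering-out setup concrete, here is a minimal, hypothetical PyTorch sketch of the same idea: a small convolutional network maps a single camera frame to one steering value and is trained against the angles a human driver produced at the same moments. The layer sizes and the dummy tensors are illustrative assumptions, not Nvidia’s published Dave2 network.

import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Tiny end-to-end network: one camera frame in, one steering value out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 3 * 20, 100), nn.ReLU(),   # 64 x 3 x 20 after the convs for a 66x200 input
            nn.Linear(100, 1),                        # single output: the steering command
        )

    def forward(self, frame):
        return self.head(self.features(frame))

model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One training step on dummy data: a batch of road images and the steering
# angles a human driver produced while those images were recorded.
frames = torch.randn(8, 3, 66, 200)     # 8 RGB frames, 66x200 pixels (illustrative size)
human_steering = torch.randn(8, 1)      # what the driver did to the wheel

prediction = model(frames)
loss = loss_fn(prediction, human_steering)
optimizer.zero_grad()
loss.backward()
optimizer.step()

In a real pipeline the dummy tensors would be replaced by logged camera frames and recorded steering angles, which is exactly the kind of human-driving data Nvidia describes feeding the network.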
 
 

DGX-1 and DriveNet

Google, Mercedes and Tesla do not give the public access to their algorithms. Luckily, Dave2 showed us where Nvidia (and probably the other main players too) is heading: GPU-based deep learning. After reporting the project’s findings, Nvidia made two important announcements. First, it launched the DGX-1, according to Nvidia “the world’s first purpose-built system for deep learning”.

 

Secondly, Nvidia introduced DriveNet, its cloud-based deep learning neural network, a.k.a. the self-driving brain of the future. DriveNet has the equivalent of 37 million neurons (brain-like cells). It collects data from any car that taps the technology, after which engineers go back and teach the network what it got right and what it got wrong. Each time the network runs, it gets smarter. In July 2015 it recognized objects correctly about 39 percent of the time; a year later that figure had climbed to 88 percent, and by then Nvidia had trained it on 120 million objects.
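
As a rough mental model of that learn-from-mistakes cycle, here is a toy Python loop in which the wrong answers from each run are corrected and folded back into the training data. The “model” is just a lookup table and every name is invented, so read it as a cartoon of the process rather than anything resembling DriveNet internals.

def run_round(model, frames_with_truth, corrections_per_round=10):
    """Classify every frame, then 'teach' the model a batch of its mistakes."""
    mistakes = []
    correct = 0
    for frame, truth in frames_with_truth:
        guess = model.get(frame, "unknown")
        if guess == truth:
            correct += 1
        else:
            mistakes.append((frame, truth))
    # Teaching step: memorize the corrected labels for a batch of mistakes.
    model.update(dict(mistakes[:corrections_per_round]))
    return correct / len(frames_with_truth)

model = {}  # starts out recognizing nothing
frames = [(f"frame_{i}", "pedestrian" if i % 3 == 0 else "car") for i in range(30)]

for round_no in range(4):
    accuracy = run_round(model, frames)
    print(f"round {round_no}: {accuracy:.0%} of objects recognized")
# Accuracy climbs each round as corrections accumulate, which is the shape of the
# improvement Nvidia reports for DriveNet (39 percent to 88 percent in a year).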

 

Last updated: 16/06/2016

Sources: Extremetech, V3, DigitalTrends, TechPowerUp, Fool.com, Popsci, DigitalTrends, i-Programmer, Nvidia blog, Extremetech, Hexus, PocketLint, Nvidia blog, Nvidia.com