The Truth About Self Driving Cars


Almost a decade ago, a sizable list of tech companies, collectively wielding over $100 billion in investment, asserted that within five years the once-unimaginable dream of fully self-driving cars would become a normal part of everyday life. These promises, of course, have not come to fruition. Despite this abundance of funding, research, and development, expectations are beginning to shift as the dream of fully autonomous cars proves far more complex and difficult to realize than automakers had anticipated.

Much like how humans drive a vehicle, autonomous vehicles operate using a layered approach to information processing. The first layer combines multiple satellite-based systems, vehicle speed sensors, inertial navigation sensors, and even terrestrial signals such as cellular triangulation and Differential GPS, summing the vehicle's movement vector as it traverses from its start waypoint to its destination. The next layer detects and maps the environment around the vehicle, both to follow the navigation path and to avoid obstacles. At present, the primary mechanisms of environment perception are laser navigation, radar navigation, and visual navigation.
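The first layer's summing of the movement vector between satellite fixes is essentially dead reckoning. A minimal sketch of the idea, assuming hypothetical speed-and-heading samples from the vehicle's odometry and inertial sensors:

```python
import math

def dead_reckon(start, samples, dt):
    """Integrate speed/heading samples into a position estimate.

    start: (x, y) in meters; samples: list of (speed_m_s, heading_rad);
    dt: seconds between samples. A hypothetical helper illustrating how
    the odometry/inertial layer sums the movement vector between fixes.
    """
    x, y = start
    for speed, heading in samples:
        x += speed * dt * math.cos(heading)
        y += speed * dt * math.sin(heading)
    return x, y

# Drive east at 10 m/s for 2 s, then north at 10 m/s for 2 s (1 Hz samples).
pos = dead_reckon((0.0, 0.0), [(10.0, 0.0)] * 2 + [(10.0, math.pi / 2)] * 2, dt=1.0)
# pos is roughly (20 m east, 20 m north) of the start point
```

In a real localization stack these estimates would be fused with GNSS fixes (for example via a Kalman filter) because pure integration accumulates drift.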

In laser navigation, a LIDAR system emits a continuous laser beam or pulse toward the target and receives the reflected signal. By measuring the return time, signal strength, and frequency shift of the reflection, the system generates point cloud data for the target. Since the 1980s, early computer-based experiments with autonomous vehicles have relied on LIDAR technology, and even today it is used as the primary sensor for many experimental vehicles. These systems can be categorized as single-line, multi-line, or omnidirectional.
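The time-of-flight measurement behind a pulsed LIDAR reduces to one relation: the pulse travels to the target and back at the speed of light, so range is half the round trip. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def lidar_range(round_trip_time_s):
    """Range from pulse time-of-flight: the pulse travels out and back,
    so the one-way distance is c * t / 2."""
    return C * round_trip_time_s / 2.0

# A reflection arriving 400 ns after emission implies a target ~60 m away.
r = lidar_range(400e-9)
```

Centimeter-level ranging therefore demands sub-nanosecond timing, which is one reason LIDAR hardware has historically been expensive.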

The long-range radars used by autonomous vehicles tend to be millimeter-wave systems that can provide centimeter accuracy in position and movement determination. These systems, known as frequency-modulated continuous-wave radar, or FMCW, continuously radiate a modulated wave and use changes in phase or frequency of the reflected signal to determine distance.
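For a sawtooth FMCW chirp, the frequency offset (beat frequency) between the transmitted and reflected signals is proportional to range, via R = c · f_b · T / (2 · B). A minimal sketch with illustrative chirp parameters (not any specific automotive radar):

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(beat_hz, chirp_time_s, bandwidth_hz):
    """Range from the beat frequency of a sawtooth FMCW chirp:
    R = c * f_b * T / (2 * B), where T is the chirp duration and
    B is the swept bandwidth."""
    return C * beat_hz * chirp_time_s / (2.0 * bandwidth_hz)

# Assumed parameters: 1 GHz sweep over 50 us; a 4 MHz beat tone
# then corresponds to a target roughly 30 m away.
r = fmcw_range(beat_hz=4e6, chirp_time_s=50e-6, bandwidth_hz=1e9)
```

Because the measurement is a frequency rather than a picosecond-scale time interval, FMCW radars achieve fine range resolution with comparatively simple electronics.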

Visual perception systems attempt to mimic how humans drive by identifying objects, predicting motion, and determining their effect on the immediate path a vehicle must take. Many within the industry, including the vision-only movement's leader, Tesla, believe that a camera-centric approach, when combined with enough data and computing power, can push artificial intelligence systems to do things that were previously thought impossible.

At the heart of the most successful visual perception systems is the convolutional neural network, or CNN. Its ability to classify objects and patterns within the environment makes it an incredibly powerful tool. As the system is exposed to real-world driving imagery, either through collected footage or from test vehicles, more data is gathered, and the cycle of human labeling and retraining is repeated. This allows these networks to gauge distance and infer the motion of objects, as well as the expected paths of other vehicles, based on the driving environment.
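The core operation a CNN layer performs is a learned convolution: sliding a small kernel of weights across the image and summing the products at each position. A minimal sketch in plain Python, using a hand-written vertical-edge kernel in place of learned weights:

```python
def conv2d(image, kernel):
    """Valid 2D cross-correlation: the core operation of a CNN layer.
    image and kernel are lists of lists of numbers."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A vertical-edge kernel responds where brightness changes left to right,
# and stays at zero over flat regions.
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]
image = [[0, 0, 0, 9, 9]] * 4  # dark left half, bright right half
response = conv2d(image, edge_kernel)
```

In a trained network the kernels are not hand-designed like this; they are learned from labeled examples, and many stacked layers build up from edges to object-level features.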

At the current state of the technology, the fatal flaw of autonomous vehicle advancement has been the pipeline by which these systems are trained. A typical autonomous vehicle carries multiple cameras, each capturing tens of images per second. The sheer scale of this data, which must be human-labeled before the networks can be retrained, has become the pinch point of the overall training process.
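Some rough arithmetic shows why labeling becomes the bottleneck. With assumed fleet numbers (eight cameras at 30 frames per second over an eight-hour driving day; these figures are illustrative, not from any specific manufacturer):

```python
def daily_frames(cameras, fps, hours):
    """Rough count of camera frames one test vehicle generates per day."""
    return cameras * fps * hours * 3600  # 3600 seconds per hour

# Assumed figures: 8 cameras x 30 fps x 8 hours of driving.
frames = daily_frames(cameras=8, fps=30, hours=8)
# roughly 6.9 million frames from a single vehicle in one day
```

Even if only a small fraction of those frames is selected for labeling, a fleet of hundreds of vehicles quickly outpaces what human annotators can process.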

Even within the realm of human-monitored driver assistance, in 2022 more than 400 crashes involving automated driving technology over the preceding 11 months were reported to the National Highway Traffic Safety Administration. Several noteworthy fatalities have occurred in which detection and decision-making systems were identified as a contributing factor.

While the argument could be made that human error statistically causes far more accidents than autonomous vehicles do, including the majority of driver-assistance accidents, when autonomous systems fail, they tend to do so in situations that a human driver would find manageable. Despite autonomous vehicles being able to react and make decisions faster than a human, the environmental perception these decisions rest on remains so far from the capabilities of the average driver that a majority of the public still does not trust them.