Self-Driving Cars Could Still Need Remote Human Supervision for ‘Edge Cases,’ Expert Claims

Self-driving cars may one day revolutionize the way we travel, but experts are not sure that the vehicles' AI can handle all of the billions of unexpected 'edge cases' that happen daily in traffic.

Did you know that many startups working on self-driving cars use people as remote supervisors who step in when the software fails? Those remote humans are an additional expense, but they help self-driving cars handle edge cases. An edge case is a scenario in which a system does not work as expected, typically an unusual or unforeseen circumstance that can cause the system to break. Self-driving cars invariably stop when they cannot figure out what to do.
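That stop-and-escalate behavior can be sketched as a simple fallback policy. This is a hypothetical illustration only: the class, the confidence threshold, and the action names are assumptions for the sake of the example, not any company's actual code.

```python
# Hypothetical sketch of an AV fallback policy. All names and the
# threshold value are illustrative assumptions, not a real vendor API.
from dataclasses import dataclass

@dataclass
class Perception:
    scene: str         # e.g. "clear_road", "unmapped_lane_closure"
    confidence: float  # planner's confidence in its own plan, 0.0 to 1.0

# Assumed cutoff below which the car will not act on its own.
CONFIDENCE_THRESHOLD = 0.8

def decide(p: Perception) -> str:
    """Return the action the vehicle takes for one perception frame."""
    if p.confidence >= CONFIDENCE_THRESHOLD:
        return "drive_autonomously"
    # Edge case: the software cannot resolve the scene on its own,
    # so it stops and escalates to a remote human supervisor.
    return "stop_and_request_remote_assistance"

print(decide(Perception("clear_road", 0.97)))             # drive_autonomously
print(decide(Perception("unmapped_lane_closure", 0.35)))  # stop_and_request_remote_assistance
```

The key design point is the second branch: rather than guessing in a low-confidence situation, the vehicle defaults to stopping and handing off to a human, which is exactly the extra operating expense the article describes.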

Human supervision is a backup for edge cases

These edge cases could include something as basic as an unfamiliar set of lane closures during road construction, or erratic, unpredictable behavior by pedestrians or human drivers. One of the main selling points of self-driving cars is that they don't require human supervision, so it is ironic that many startups in this space rely on remote human supervisors, negating one of the technology's key advantages.


Are you ready for a remote assistant to suddenly take control of your car?

There are many reasons why someone might not feel comfortable with a remote supervisor suddenly taking control of their car. Some may consider it an invasion of privacy, while others may see it as unsafe or unnecessary, or may simply not trust the technology involved. In a hazardous situation especially, drivers may feel they are no longer in control and unable to make the best decisions for themselves, and they may not trust a distant operator to make the best decisions for their safety.

Are autonomous vehicle companies living up to their promises?

Autonomous vehicle (AV) startups have raised billions of dollars on promises to develop genuinely self-driving cars. Yet according to a recent Reuters article, industry executives and experts say remote human supervisors may be needed permanently to help self-driving cars in trouble.

The University of Toronto's four-time-winning autonomous vehicle, Zeus (photo by aUToronto, under CC BY-SA 4.0)

Creating robot cars that drive more safely than people is challenging because self-driving software systems lack humans' ability to predict and assess risk quickly, particularly when confronting unexpected occurrences, or “edge cases.”


When asked whether he could foresee a point at which remote human overseers would be removed from operations, Kyle Vogt, CEO of Cruise, a unit of General Motors, said he saw no need to remove them from the AV business model. He views remote human supervisors as a way to give customers peace of mind, knowing there is always a human there to help if needed, and said he couldn't understand why anyone would want to get rid of that.

Are edge cases rare enough for you to feel safe?

The problem is that there are “tens of billions of possible edge cases” that AVs could run into, said James Routh, CEO of AB Dynamics, which tests and simulates cars, including the advanced driver-assistance systems (ADAS) that are the basis of autonomous driving features.

Vogt clearly disagrees with those who want remote human supervisors removed from the AV business model. His point about customer peace of mind is fair, but it leaves open the question of what other backup systems should be in place in case of emergencies.


GM recalled and updated software in 80 Cruise self-driving vehicles this month after a June crash in San Francisco left two people injured. US safety regulators said the recalled software could “incorrectly predict” an oncoming vehicle’s path, and Cruise said the unusual scenario would not recur after the update. A recall of 80 GM self-driving cars is insignificant compared with the roughly 830,000 Teslas facing potential recalls in June over significant numbers of Autopilot crashes.


Would you be happy with a remote human assistant at a video console having only seconds to jump in, take control of your car remotely, and rectify a hazardous scenario in which you are in an oncoming vehicle’s path?
