Is self-driving technology flying too high?

Apr 2, 2018

If the recent incidents involving self-driving cars have taught us anything, it is that the technologies still have a long way to go before the dream of true, no-human-at-the-wheel autonomous vehicles is realized.

They should also serve as a reminder that requiring a human to sit in the driver’s seat of a self-driving vehicle is no guard against crashes.

For the lessons on the former, we need look no further than the tragic death of a pedestrian in Arizona, run down by a self-driving Uber (with a “driver” at the wheel) while she was walking her bicycle across the road; for the latter, the latest inexplicable fatal crash involving a Tesla Model X in autonomous mode.

That car hit a concrete abutment, but the driver had, according to Tesla, ignored repeated warnings to put his hands on the wheel.

And how regulators, manufacturers, and early adopters see those issues will have a dramatic impact on the timeline for adoption and, in turn, on the aftermarket.

To be fair to both the Uber “driver” and the individual in the Tesla, much more investigation of both incidents is surely required before determining precisely what happened in each case. But you don’t need a coroner’s report to recognize that when a crash happens in a conventional vehicle, the cause is almost always laid at the feet of human error, and we should view these recent incidents no differently.

With very few exceptions, it is up to the driver to make decisions about how the vehicle, any vehicle, should be travelling.

I live in Northern Ontario, where there is still snow and ice on the roads as of the start of April, and there is no question that it is up to me to determine not just the speed and route I might travel in unfavorable conditions, but whether to be driving at all.

As automotive technologies start to sound a lot more like aerospace technologies, it makes sense to draw on parallels there.

When a plane crashes, it is almost always attributed to pilot error, even if that error was a failure to realize there was a problem early enough to avert the crash. In the early days of automated flying, there was already a recognized danger that pilots would become complacent as the plane “flew itself.”

Way back in 1989, concern was already flying high about the impact of advanced technology on the ability of pilots to stay connected to what was happening with the aircraft. A NASA report said that the highly automated cockpits of the day could make pilots complacent and compromise safety.

Yes, the arguments were that the technology of the day required too much programming and introduced the possibility of more data entry errors, but there was already the seed of recognition that taking so much control out of pilots’ hands had a downside.

In an interview with Air & Space magazine in November 2016, more than a quarter-century after that NASA report, noted lecturer Tony Kern laid out the problem:

“Far too often, pilots allow automation to make decisions for them, and lose track of what is going on. A classic example of this was American Airlines Flight 965, where [in 1995] the crew crashed into a mountain in Colombia. They were descending, at night, in mountainous terrain, and didn’t know where they were. A simple decision to climb back to a safe altitude and figure it out would have saved them, but they had gotten behind the aircraft. A pilot should never let an airplane go somewhere his or her mind hasn’t already arrived at a few minutes—or at least a few seconds—earlier.”

And these are highly trained professionals who take their roles very seriously; one hardly needs to imagine the tremendous gulf between a pilot’s approach to the task at hand (checklists and procedures, strictly enforced limits on flying time, and so on) and the attention paid by the average driver to their daily commute, often more concerned about their morning meeting and whether the lineup at Tim Horton’s will make them late than about the task of wheeling their SUV down the road.

Take away the need to have active, second-by-second control of the vehicle and we are going to have crashes. This is a certainty, made all the more absolute by the fact that self-driving technologies will be sharing the road with unconnected, non-self-driving vehicles for a very long time.

Statistically, self-driving vehicles will almost surely win the day in terms of overall crash numbers and severity, but that day is not today.

There is of course no getting the genie back into the bottle; there are already mass market vehicles touting hands-free highway driving capability.

The question is twofold, then. First, when a self-driving vehicle crashes, who (or what!) is responsible? Can you foresee responsibility shared between the driver or drivers and the technologies that failed to prevent the collision? Second, how do we ensure with some degree of certainty that the mandated driver behind the wheel stays attentive when there is little need to pay attention until a crash is imminent and they must be pressed into action, probably too late?

I think we’re going to be picking up the pieces of that argument for some time.
