Public streets are the lab for self-driving experiments

(NYT) — Tesla’s relentless vision for self-driving cars has played out on America’s public roads — with its autonomous driving technology blamed in at least 12 accidents that caused one death and 17 injuries. Lawsuits and a government investigation have followed. But one question lingers: How is Tesla allowed to do that in the first place?

The answer is that there is no federal regulation to stop Tesla — or the many other autonomous vehicle companies — from using public streets as a laboratory. As long as a driver is ready to take over, the only thing that prevents a company from putting an experimental autonomous vehicle on a public road is the threat of a lawsuit or bad publicity.

In June when the National Highway Traffic Safety Administration ordered — for the first time — that car accidents involving driver assist or autonomous features be reported, many concluded that Tesla was the sole motivation. However, the order names 108 carmakers and tech companies, revealing how widespread unregulated autonomous road testing may be.

Any future regulation will be hammered out between diametrically opposed camps. On one side are safety advocates, who say autonomous driving features, like those that control speed, steering and braking, should be proved safer than drivers before they are allowed on public roads. On the other side are car and tech industry backers, who say those features cannot become safer than humans without unfettered testing in the real world.

The question facing regulators, carmakers and the public is: Does regulation make us safer, or will it slow the adoption of technology that makes us safer?

Safety proponents may disagree over what testing should be required, but they agree that there should be some standard.

“You can’t anticipate everything,” said Phil Koopman, an expert on safety standards for autonomous cars. “But the car industry uses that as an excuse for doing nothing.”

Early days

While outrage over autonomous vehicle accidents has increased calls for regulation, technology has always moved ahead of the law. Just look at the introduction of the car.

John William Lambert is credited with building America’s first practical gas-powered car, and with surviving the first recorded crash, in 1891.

Later, New York recorded a couple of firsts: the first crash between vehicles — a car and a bicycle — in 1896 and the first pedestrian fatality, when an electric cab struck Henry Bliss in 1899. But it wasn’t until 1909 that New York introduced its first comprehensive traffic laws. Envisioned by William Phelps Eno, those laws, the basis of traffic laws today, lagged the advent of the car accident by 18 years.

“Cars were sharing the roads with people and horses,” said Bart Selman, a computer science professor at Cornell University with expertise in tech history. “There were a lot of accidents and figuring it out along the way.”

What’s different about autonomous driving technology is how quickly and widely it arrived.

“It’s fully reasonable for regulatory agencies and government to get involved in this process, due to the scale and speed in which this is happening,” Selman said.

Despite the risks, the promise of autonomous features can be seen in a study of insurance data showing that they reduce the number and severity of some accidents. Cars with forward collision warning plus automatic braking, for instance, reduced front-to-rear crashes by 50 percent and front-to-rear crashes with injuries by 56 percent.

The levels of self-driving

While there are no federal restrictions on autonomous testing, many states have set limits on some levels of automation. The Society of Automotive Engineers, a standards organization usually called simply SAE, has defined six levels of automation, which carry an important legal distinction.

A Level 0 vehicle is entirely manually controlled. At Level 1, an automated system may help with one control, such as lane keeping or cruise control. Level 2 lets two or more automated controls work at the same time as long as a driver is ready to take over. Up to this point, there is no limitation on driving on public roads.

At Level 3, the car can completely drive itself in some circumstances — say, on highways — but the driver must be ready to take control.

At Level 4, a car likewise drives itself only in limited circumstances, but within them it needs no human intervention. At Level 5, the car drives itself entirely, in all conditions.

The distinction between the levels up to Level 2 and those above is important for legal reasons. In a crash of a Level 2 car, liability rests with the driver. For Levels 3 to 5, liability may shift to the system and the companies that make it.
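To make that distinction concrete, here is a minimal sketch in Python of the six SAE levels described above and the liability rule of thumb the article lays out. The names and the helper function are hypothetical illustrations, not anything defined by SAE or by law.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE driving-automation levels, as summarized in the article."""
    NO_AUTOMATION = 0          # entirely manual
    DRIVER_ASSISTANCE = 1      # one automated control, e.g. cruise control
    PARTIAL_AUTOMATION = 2     # several controls, driver must stay ready
    CONDITIONAL_AUTOMATION = 3 # self-driving in some conditions, driver on standby
    HIGH_AUTOMATION = 4        # self-driving in limited conditions, no driver needed
    FULL_AUTOMATION = 5        # self-driving in all conditions

def presumed_liable_party(level: SAELevel) -> str:
    # Illustrative rule of thumb from the article, not legal advice:
    # through Level 2 the human driver is responsible; from Level 3 up,
    # liability may shift to the system and the companies behind it.
    return "driver" if level <= SAELevel.PARTIAL_AUTOMATION else "system/manufacturer"

print(presumed_liable_party(SAELevel.PARTIAL_AUTOMATION))      # -> driver
print(presumed_liable_party(SAELevel.CONDITIONAL_AUTOMATION))  # -> system/manufacturer
```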

But a loophole lets car companies avoid liability: carmakers themselves determine the level assigned to their systems. That may explain why Tesla sells its automated driver-assist system as Full Self-Driving Capability (charging $2,499 annually) but classifies it as Level 2. This lets developers have it both ways: market the promise of autonomy while leaving legal responsibility with the driver.

But marketing a system as Full Self-Driving has safety consequences. A study last year by the AAA Foundation for Traffic Safety put 90 drivers in a 2018 Cadillac CT6 equipped with the Level 2 Super Cruise system. Some drivers were told that the system was called DriveAssist and received training that stressed its limitations; others were told that it was called AutonoDrive and received training that stressed its abilities.

Those who were told that the system was called AutonoDrive were less attentive, and more often took their hands off the wheel and feet off the pedals. The 31-mile route also had a curve that would cause the driver-assist system to hand control back to the driver. Notably, both groups were similarly slow to retake control.

“We’ve known for years that humans are terrible at monitoring automated technology,” said Shaun Kildare, senior director of research for Advocates for Highway and Auto Safety.

The industry has addressed the attention issue with safeguards. But those systems aren’t foolproof. Videos on YouTube show drivers how to easily trick Tesla’s monitor. Even more advanced systems, like Cadillac’s, which uses a camera to ensure that a driver’s eyes are on the road, can fail.

“The problem with driver monitoring, people say if you have a good camera it’s fine,” said Koopman, who was the lead author of proposed engineering safety standards for fully autonomous cars for the standards body known as ANSI/UL.

In the 1990s, he said, a system to keep truckers alert beeped if their eyes closed. The result?

“They learned to go to sleep with their eyes open,” Koopman said.

Coming up with a useful test is difficult. Testing for specific tasks, like driving around a test track, makes a vehicle better only at that track, not at dealing with unpredictable drivers. The UL test is instead a 300-page list of engineering considerations, down to making sure celestial bodies don’t interfere. (Plausible: A video shows a Tesla braking for a yellow moon.)

If an engineer fulfills these requirements, “then we believe you have done a reasonable effort on trying to engineer your systems to be acceptably safe,” Koopman said. “It’s a methodical way to show use of best practices.”

Still, the engineering test alone is insufficient, he said. It is meant to be used along with other standards that cover equipment and driving proficiency.

Even comprehensive testing is no guarantee. The procedures of the Federal Aviation Administration have been held up as a model, yet the agency cleared the Boeing 737 Max, which was grounded for 20 months after two crashes killed 346 people.

Regulations

It is easy to see why the pioneers of self-driving tech are wary of regulation. Their industry already faces a patchwork of state-by-state regulations, although those rules primarily require proof that a company is insured rather than that it has met any safety standard.

California is among the stricter states, with its 132-page standards document for autonomous operation covering permits, insurance, data sharing and a requirement for a driver “competent to operate the vehicle,” as determined by the company. Florida, among the least restrictive states, passed legislation that allows Level 4 and 5 cars “to operate in this state regardless of whether a human operator is physically present in the vehicle.” Ride-hailing companies testing autonomous vehicles there are required to have liability insurance.

Add to that mix the number of agencies likely to be involved: the Transportation Department’s online Automated Vehicle Hub lists 14 that play a role in developing automated driving systems.

A further complication is tension among agencies. The National Transportation Safety Board, which investigates accidents, has been especially vocal in calling for autonomous vehicle regulation from the National Highway Traffic Safety Administration, a sister agency that is broadly responsible for automotive safety standards.

In an NHTSA publication, “Automated Driving Systems: A Vision for Safety,” the agency wrote, “NHTSA offers a nonregulatory approach to automated vehicle technology safety,” saying it did not want to stifle progress.
