On Sunday evening, March 18, 2018, a self-driving car piloted by Uber Technologies
Inc. struck and killed a woman walking across an Arizona road in what
may be the first pedestrian fatality associated with driverless vehicles.
While the crash is still under investigation, authorities have stated
that the 49-year-old victim was struck by a test vehicle operating in
autonomous mode. Although there was a “safety driver” in the
vehicle at the time, self-driving technology was handling all aspects
of the vehicle’s movement, and the driver did not intervene. Shortly
after the crash, Uber announced that it would halt its driverless vehicle
programs in Arizona, as well as in Pittsburgh, San Francisco, and Toronto.
Uber’s fatal pedestrian accident is a terrible tragedy for all involved,
and one that raises many important questions and concerns regarding self-driving
vehicles, their introduction to our roadways, and how to best protect the public.
Safety & Liability in a New Age of Driverless Cars
Driverless technology is heralded as the future of American roadways, as
well as the future of commercial trucking and product shipping. While
amazing developments have been made, that advanced technology also introduces
a number of important questions, especially in relation to public safety
and liability in
car accidents – questions that will be spotlighted in the months following Sunday’s crash.
Below, we discuss some of the major concerns created by the recent Uber
pedestrian crash and driverless vehicle technology:
Technology tested on public roads – The self-driving vehicle involved in Sunday’s fatal accident
was part of a larger project organized by Uber, which has been testing
driverless cars and commercial trucks on public roadways across the country.
Uber is among several companies testing driverless vehicles in a public
setting, and many safety advocates have expressed concerns about whether
the technology is truly ready to be implemented on our roads, rather than
tested thoroughly in controlled environments, closed roads, or roadways
with minimal auto, bicycle, or pedestrian traffic. The recent crash and
Uber’s decision to pause testing of its vehicles will likely renew
the conversation over whether wide-scale testing should be allowed on
public roads, and if so, how and with what restrictions.
A lack of regulations – Uber’s self-driving vehicles, in addition to those developed
by other companies, have hit the roadways in many cities across the country.
However, it is in Arizona, where the crash took place, that driverless
vehicles have found their warmest welcome, even as other states
like California have taken a tougher stance on issuing permits to such
vehicles for testing or commercial purposes. Unfortunately, that means
there has been little done to pass comprehensive laws and regulations
for the many driverless vehicles being tested and used on Arizona roadways.
In fact, some advocates have stated that the near-total absence of regulation
has made places like Arizona the “wild west” for testing
driverless technology. In the wake of the crash, states across the country
may begin to more closely evaluate how to regulate these vehicles, and
focus on passing crucial laws and safety protections before letting driverless
cars back on the road.
Liability in accidents – In addition to a lack of regulations, there are also few laws and
little case law regarding issues like fault and liability in auto accidents
involving self-driving cars. The recent crash is a reminder that advanced
technologies are not infallible, that we still don’t know how good
or bad driverless technology truly is, and that our laws need to address
who can be held liable, and how, when self-driving cars cause or contribute
to accidents that result in injuries or death. If public citizens are
to accept and bear the risks that come with allowing driverless technology
on public roads, there need to be measures in place that ensure the rights
of injured victims when things don’t go as planned.
“Safety driver” technicians – Although the vehicle involved in Sunday’s accident was being
operated autonomously, there was a “safety driver” present
in the vehicle. These safety drivers function more like technicians in
that they handle certain tasks driverless cars can’t, such as driving
an autonomous commercial truck short distances to a loading dock, or taking
over the controls when emergency situations arise. According to initial
reports, however, the safety driver did not intervene prior to the vehicle
striking the pedestrian. This reintroduces concerns over ensuring that
driverless technology is capable of handling all tasks associated with
operating a vehicle, as humans are naturally inclined to become complacent,
less effective, and inattentive when computers handle their tasks. For
this reason, experts suggest that we can’t trust humans to intervene
and prevent crashes, and must ensure technologies powering self-driving
cars are developed in a way to handle critical safety issues on their own.
Decisions of life and death – Any “smart” technology or form of artificial intelligence
can introduce questions of ethics and morality, and that is especially
true of robot cars. Although humans may not always be able to make the
most appropriate decisions in situations where collisions, property damage,
injuries, and/or death may be inevitable, they are tasked with programming
self-driving cars to handle those types of situations. This means determining
whether our society and our lawmakers can agree on how these technologies
should address ethical dilemmas that involve hypothetical decisions between
the lesser of two evils, or situations where the least “immoral”
decision should be chosen. This can take the form of deciding whether
or not to swerve and cause property damage rather than hit a dog and likely
cause injury, or decisions involving whether or not vehicles should avoid
near-certain death of a pedestrian in favor of causing an accident where
multiple vehicle occupants can potentially suffer serious injuries. These
are by no means easy problems to solve, but in light of the recent crash,
more conversations should be had, as well as potential laws and regulations
that address them.
Sunday’s fatal pedestrian accident is still under investigation,
but it has already focused national attention squarely on driverless
technology, its development and use on public roads, and
the many issues developers, our society, and our laws must address before
these vehicles become a constant presence in our lives. As a personal injury law
firm that has served victims and families throughout Los Angeles and Southern
California for decades, we at Biren Law Group have seen how new technologies
can revolutionize our culture, as well as how they create new questions
that must be addressed, particularly in terms of safety, liability, and
the rights of those who suffer harm.
Ultimately, there is no denying self-driving vehicles have the potential
to reduce risks associated with driver error and negligence – robots
don’t drink and drive or text behind the wheel. However,
that new technology poses risks of its own, and as with any new
advancement, those risks need to be carefully explored, evaluated, and managed
before new products are made widely available to the public. The desire to profit
or rapidly introduce new technology should never come at the expense of
public safety. It’s the same argument we have raised in many cases
throughout the years, and one we will likely continue to raise in our
legal battles to ensure accountability and protect the rights of victims.