Autonomous vehicles continue to evolve, raising new liability questions related to technology, human error and communication between cars and their surroundings.
By Lucy Aquino
The regulation of autonomous vehicles is a novel and challenging field that has received a great deal of media attention and public scrutiny in recent years, and for good reason. As technology surrounding autonomous vehicles continues to improve at an astounding pace, federal and state regulators have been struggling to pass legislation capable of keeping up not only with technological advances, but also with the demand for such technology.
The challenges posed by this burgeoning technology involve areas such as intellectual property, privacy, online security and, as can be expected, tort liability. Can we – better yet, should we – apply current tort liability principles to motor vehicles operated, wholly or partially, by autonomous systems? If so, who should be held liable when technology fails or when an accident occurs?
In order to address these questions, it is important to understand that not all autonomous vehicles have the same level of autonomy, i.e., self-driving capability. The National Highway Traffic Safety Administration (NHTSA) classifies autonomous vehicles according to six possible levels of automation, ranging from Level 0, in which the human driver performs all driving tasks, to Level 5, in which the vehicle performs the entire driving task under all conditions without human intervention.
While fully autonomous vehicles may one day be the norm, the models currently available on the market all require some degree of human interaction: the driver must remain alert and ready to take control in the event of an emergency or an unforeseen situation to which the vehicle is not designed to respond. This need for human intervention poses an interesting question regarding the applicability of well-established concepts of tort negligence. It would be easy to assume that, under these conditions, the human driver would remain solely liable in case of an accident. However, as some automobile manufacturers have recently learned, they may also be held liable if they design a vehicle that fails to verify whether its human driver is still paying attention.1 Tesla alone has been the subject of numerous lawsuits alleging that its system lacks safeguards to prevent misuse and to properly monitor drivers.2
The problem with this concept is that, for the most part, humans make notoriously poor backup systems. We can be inattentive, easily distracted and slow to react. Given the opportunity to watch a movie, play a game or even sleep while someone – or something – else does the driving, some drivers will lose focus and forget they should be watching the road. To avoid potential liability, automobile manufacturers must strike a balance between providing drivers with helpful and appealing assistive technologies and ensuring those same drivers understand they must remain vigilant behind the wheel.
When deciding whether current notions of tort liability should apply to autonomous vehicles, one must also consider the likely lengthy transition period during which autonomous and non-autonomous vehicles will share the roadways. A major safety feature of autonomous vehicles is their ability to communicate with one another and with surrounding infrastructure, such as traffic lights and camera systems, to make communal driving decisions. Setting aside the question of whether autonomous vehicles from different manufacturers will be able to communicate with one another, there is little doubt that autonomous and non-autonomous vehicles will not be able to communicate to reach these crucial driving decisions. During this transition period, fully autonomous vehicles will be required to anticipate and respond to human drivers and their propensity for error. If an autonomous vehicle operates exactly as intended but fails to anticipate and respond to an error by another vehicle's human driver, is the automobile manufacturer liable for failing to use the proper machine learning algorithms when developing the autonomous vehicle? Such a question will not likely be answered for some time, but there are certainly arguments to be made on both sides of the issue.
Overall, the fundamental challenges facing the safe and widespread introduction of autonomous vehicles go well beyond technology. With the imminent and inevitable availability of vehicles capable of operating without any human driver intervention, now is the time to engage in lengthy, and likely difficult, conversations regarding safety, regulation, liability and social acceptability.
Luciana “Lucy” Aquino is an Atlanta-based attorney on Swift Currie’s litigation team, representing clients in an array of matters including premises liability, construction litigation, automobile/trucking litigation and insurance coverage. She may be reached at lucy.aquino@swiftcurrie.com.
1 https://www.nytimes.com/2021/08/17/business/tesla-autopilot-accident.html
2 https://www.nytimes.com/2021/07/05/business/tesla-autopilot-lawsuits-safety.html