Covering the intersection between technology, the law and society around the world


The Complexity of Self-Driving Cars

March 2, 2016

 Feature Article 

 

The vision of self-driving cars is fast approaching, but plenty of obstacles still lie ahead before it crosses the finish line

Since experiments with autonomous vehicles began as early as the 1920s, the idea of the world’s roads being filled with driverless cars has been a long-term vision. That idea, once an ambitious proposition years away from reality, is now close to coming into existence. The US Secretary of Transportation stated at the Frankfurt Auto Show in 2015 that he believed driverless cars would be in use across the world within the next ten years, and other experts and researchers have made similar predictions. The day is coming soon, and the progression towards this ultimate goal shows no signs of slowing.

 

Cars, motor vehicles whose control is primarily left to the initiative of a human being behind the wheel, have over time adopted various technologies, further advancing their capabilities and shifting the conventional ways of driving. Yet with all these technologies being implemented, there is an important distinction to be made between autonomous cars and self-driving cars, one eagerly drawn by car manufacturers such as Volvo and Renault-Nissan.

 

Autonomous vehicles very much resemble the cars seen on the road today, with the traditional forward-facing seats, a steering wheel in front of one of them, brake and accelerator pedals and other familiar traits. The difference lies in some of the functions which used to be fully controlled by the driver now having a mind of their own thanks to advances in technology. Features becoming increasingly common among newer models, such as adaptive cruise control and even automated braking, are what make a vehicle autonomous. In contrast, self-driving cars, as the name suggests, completely take over the role of the driver, using sensors, radar and GPS mapping systems to steer their way to destinations entered from a mobile device such as a smartphone or tablet.

 

This drastic development follows the emergence of the ‘internet of things’, which has gradually taken over many of the tasks that businesses, governments and other parts of society have always carried out manually. Driving is about to join that list. Despite this natural transition, car manufacturers are, thus far, not entirely keen on the idea of self-driving cars, because sole ownership of cars is likely to fall as they are replaced with shared vehicles or ‘robotaxis’, hurting sales. Barclays, a bank, says that provided the cost can be brought down and the legal and regulatory issues sorted out, self-driving cars will grow rapidly in popularity and cause a 40% decline in personally owned vehicles. Silicon Valley has also made room for itself within the new market and has been busy developing new models: Uber, an online taxi-dispatch company, has invested in developing its own self-driving vehicles in Pittsburgh; Alphabet has put effort into its own driverless-car projects; and other tech companies have shown signs of doing the same.

 

However, the legal and regulatory issues are significant, enough to grind the rollout of this new generation of motorised vehicles to a sudden halt even once the technology has been developed. Several legal issues have brought uncertainty to self-driving cars.

 

To begin with, there is the issue of liability: who exactly is at fault in a crash involving a self-driving car? Secondly, there is the issue of cybersecurity: one frightening characteristic of the ‘internet of things’ is the seeming lack of adequate security measures built into devices from the start, and regulation to ensure that this is not the case with self-driving cars is critical if they are to avoid becoming dangerous machines. There are other regulatory issues to consider too, such as reforming the way driving licenses are issued.

 

Beyond the current legal constraints, there is also the general population’s prevalent skepticism of robot cars. “There’s something scarier about a machine malfunctioning and taking away control from somebody,” says Bryant Walker Smith, a fellow at Stanford University’s Center for Automotive Research. With that, the road to establishing self-driving cars in the modern economy will be rocky.

 

Who Is To Blame?

Self-driving cars are expected to cause fewer accidents because they are not prone to human error; they can be more consistent and reliable on the road. Nevertheless, accidents are still inevitable, even if rare, because if computers can crash, then so can the vehicles they control. Thus a question of liability is raised: who is liable in an accident?

 

Traditionally, the driver has always been the primary controller of a car. Autonomous vehicles have taken over some responsibilities, but the main functions of controlling the steering wheel and the floor pedals remain with the driver. Self-driving cars, though, take full control, throwing the idea of liability up in the air.

 

Recognising this, car manufacturers, including Volvo Cars, have responded by volunteering to take responsibility for crashes involving their self-driving cars. The incentives driving this decision are likely to be economic: car makers recognise that self-driving cars are unlikely to crash or cause accidents as often as human drivers, since they are not prone to frequent mistakes and can be far more consistent and therefore safer, and the lower chance of accidents means less exposure for the manufacturer.

 

This does not, however, clarify the legal issues, which would remain regardless. The trickiness lies in restructuring the laws already in place dealing with car accidents. Normally, in the event of an accident, car owners are responsible. If the owner believes that the fault originates with the manufacturer rather than themselves, they would have to prove that the manufacturer was, in fact, negligent. The problem is that the law does not account for driverless vehicles. The owner of the car, or at least the person(s) occupying it, would not actually be controlling it; instead, they might be sleeping, talking on the phone, flicking through Twitter, or pursuing any of the other activities a self-driving car makes possible. That being the case, car owners or occupants would not feel responsible for any crash or accident which may occur, since they are in no reasonable position to avoid it, giving reason to shift the blame to manufacturers by default. “There’s going to have to be some changes to the laws,” says David Strickland, a partner at the Venable LLP law firm in Washington DC. “There is no such thing right now that says the manufacturer of the automated system is financially responsible for crashes.”

One solution could be to require someone to be readily available to take control of the wheel when danger is encountered. Regulators in California have initiated this, proposing new rules requiring licensed drivers to be ready at the wheel while on the road. The rules would also require companies to provide regular reports on their cars’ performance. These rules have drawn a negative response from both tech firms and carmakers, with Google stating that it was “gravely disappointed” in them.

 

Despite this, placing liability upon companies may be justified. Firms would be a far more reliable source of compensation, since they have much deeper pockets than most car owners. This is the basis of California’s enterprise liability theory, which proposes that it is socially beneficial to place liability on enterprises since they are capable of spreading losses. With this, car makers and tech companies can expect to take on responsibility for driverless-car accidents.

 

The first few owners of self-driving cars unfortunate enough to experience an accident involving their vehicles, and the court cases and suits that follow, will most likely provide a template for future cases. For now, however, the uncertainty will make the deployment of self-driving cars to the general public, even once the technology has been fully developed, difficult and slow. Even so, this will not be the only barrier to overcome.

 

Securing Safety

Any device capable of connecting to the internet, and of managing outgoing and incoming flows of data used to perform tasks on instruction, is vulnerable to cyber attacks. Owners of self-driving cars thus join the long list of possible targets for hackers and malefactors. Cars pose a particularly dangerous threat, since the effects of malicious software can be far more detrimental than on a laptop or smartphone. In the summer of 2015 security researchers showed how they could take control of a Jeep as it travelled on the highway at 70 mph. Not only did they manage to take control of the steering wheel, but they also demonstrated how they could lock the doors and windows, and operate the floor pedals. Were control in the hands of an unpredictable malicious hacker, the potential dangers would be frightening.

 

Many ‘internet of things’ products lack adequate security measures to protect against such eventualities. Before self-driving cars can take their place on the roads, security needs to be implemented so that hackers cannot easily break through. One way to achieve this is to run the software powering the vehicles in completely walled gardens: software, and even hardware, would be provided by a single agency or company, such as the Ministry of Transport, and would be the only software compatible with the cars. These digital locks, known as digital rights management (DRM), prevent the blueprints of the software and the code behind it from floating around on the internet, where hackers and other malefactors could obtain it, analyse it, exploit loopholes and create malicious code with which to attack the owners of self-driving cars. The same system is used in various devices, such as game consoles and iPhones.
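The core mechanism behind such a walled garden is update verification: the car refuses to install any software image that was not signed by the manufacturer. The sketch below illustrates the idea in Python; all names are hypothetical, and for brevity it uses a symmetric HMAC tag, whereas real code-signing schemes use asymmetric keys (such as RSA or Ed25519) so that the vehicle only ever holds a public key.

```python
import hashlib
import hmac

# Hypothetical secret held by the manufacturer's signing server and stored
# in the car's tamper-resistant hardware. In a real deployment this would
# be an asymmetric key pair, not a shared secret.
SIGNING_KEY = b"manufacturer-secret-key"

def sign_update(firmware: bytes) -> bytes:
    """Signing server: produce an authentication tag for a firmware image."""
    return hmac.new(SIGNING_KEY, firmware, hashlib.sha256).digest()

def install_update(firmware: bytes, tag: bytes) -> bool:
    """Car side: accept an image only if its tag verifies."""
    expected = hmac.new(SIGNING_KEY, firmware, hashlib.sha256).digest()
    # Constant-time comparison guards against timing side channels.
    return hmac.compare_digest(expected, tag)

official = b"v2.1 brake-controller firmware"
tag = sign_update(official)
print(install_update(official, tag))         # genuine image: accepted
print(install_update(b"malicious", tag))     # tampered image: rejected
```

Because the tag covers every byte of the image, even a one-bit modification by an attacker causes verification to fail, which is precisely what keeps unauthorised code out of the garden.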

 

However, even with such robust measures, devices can still be vulnerable to hacks. With all the complexities, features and capabilities the software must handle to operate a self-driving car, gaps in the code are bound to exist somewhere; hackers need only find them. Just as it is straightforward to download software from the net to unlock or ‘jailbreak’ an iPhone, the same could happen with the software of self-driving cars. Thus the unfortunate reality of cyberspace prevails: no device is immune to attack.

 

Yet, even so, governments may want backdoors or, at least, the ability to control self-driving cars to prevent crime. The new surveillance laws proposed by Theresa May, the UK Home Secretary, require backdoor access or weakened security to help law enforcement prosecute suspected criminals: for example, the ability to force cars carrying suspects to stop or pull over. But this approach to security is utterly naive; leaving any kind of vulnerability, in the hope that only law enforcement agencies will use it, poses the great risk of allowing bad actors to exploit the same loophole for their own, unpredictable and potentially dangerous ends. The cybersecurity dilemma is an extremely challenging one, and a solution is critical if self-driving cars are to roam the roads anytime soon.

 

A Rocky Road Ahead

Self-driving cars are, despite the difficulties of fitting them into current legal and regulatory conventions, still a great idea. It has been a dream for the many involved in their gradual development to see them become commonplace, with highly advanced vehicles flowing through the world’s roads in a system of constantly moving traffic and efficient manoeuvring. Yet the legal constraints, and the uncertainty over exactly how to integrate the concept smoothly into modern society, are unavoidable problems which must be faced. To deal with the cybersecurity issues, tech companies may have to cooperate with car manufacturers to ensure that the most robust security measures are in place to better guarantee the safety of the vehicles. Alongside that, new laws dealing specifically with the new technologies need to be passed, or at least old laws updated, to allow the vehicles to work within the modern economy and society. Even these changes may not guarantee success, and so the road ahead will be rather unpredictable.
