
[ A guest post by Bob Hand ]

HAL (source: Pixabay)

“Open the pod bay doors, HAL.”

“I’m sorry, Dave. I’m afraid I can’t do that.”

This scene from the 1968 classic 2001: A Space Odyssey captures the moment an artificial intelligence acts against the wishes of a human. Among the dilemmas the film presents is a serious question: how much responsibility should be given to an artificial intelligence?

The most overused plotline in science fiction is the betrayal of a creator by his creation. Stories like this have made generations of readers and moviegoers distrust technology. However, given modern technological advances, those fears are now rooted in reality.

The ethical considerations raised in these tales are quickly becoming a reality that we have to contend with. Over the past couple of years, stories of driverless car accidents, fatal mistakes by robots in hospitals and factories, and even law-breaking A.I. have made headlines. The legal and ethical lines surrounding robotics and artificial intelligence are becoming increasingly blurred.

The Frankenstein complex is no longer a mere philosophical problem. Here are three ways that robotics and artificial intelligence technology can break the law today:

Driverless cars can disobey traffic laws

While car manufacturers are currently looking for tech solutions to lessen the effects of distracted driving, keeping driverless cars reliable and safe will be the next challenge. Within the last couple of months, news from Pittsburgh and Singapore has cast doubt on the viability of driverless cars.

Self-driving Uber vehicles at a media preview at Uber's Advanced Technologies Center in Pittsburgh, Sept. 12, 2016 (AP Photo/Gene J. Puskar)

At the beginning of October, pedestrians in Pittsburgh reported that Uber taxis were disobeying traffic laws, traveling down one-way streets in the wrong direction. Recently, another self-driving taxi was involved in a traffic accident in Singapore. NuTonomy ceased operation of its self-driving taxi service until more information is gathered about the incident.

Such accidents are caused by a lack of information; automated systems can run into accidents when there are unknown variables. Road conditions, weather changes, and complex inner-city roads can cause automated vehicles to disobey traffic laws and behave in erratic ways.

The question of liability is complex. Who should be held culpable in such an accident? The passenger? The auto manufacturer? The car itself? Some of these questions may seem absurd, but the legal consequences could be a real concern. Google has predicted that driverless cars will be on the open market by 2020, so answers need to come sooner rather than later.

Robots can make life-threatening mistakes

There are many stories of medical errors caused by healthcare technology. Since 2000, surgical robot errors have resulted in over 144 patient deaths and 1,391 patient injuries. A tenth of these errors are attributed to “unintended movement”, and most occurred during complicated heart, head, and neck surgeries. While robots might be ideal for performing precise movements, they are clearly not infallible. These incidents have resulted in costly lawsuits for hospitals across the world.

Another industry that relies heavily on robotics is auto manufacturing. One type of robot used in Volkswagen’s production plants is programmed to perform a variety of tasks during assembly. Last year, in an incident at a plant in Germany, one of these robots grabbed a 22-year-old man and fatally crushed him against a metal plate.

While robots are typically kept behind cages to prevent contact with workers, the employee was within the designated safety zone at the time. Prosecutors were left unsure whether to press charges and, if so, against whom.

Artificial intelligence can break the law

At a Swiss art exhibit, a robot named “Random Darknet Shopper” was apprehended for purchasing illegal drugs. The bot was programmed to shop online automatically with a budget of $100 in Bitcoin every week: it randomly chose one item on the darknet and purchased it. When ten pills of ecstasy were sent to the artists behind the installation, the police got involved.
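To see why intent gets so murky here, consider a minimal sketch of the kind of loop such a bot might run. Everything below is hypothetical: the function names, the listing format, and the commented-out ordering step are illustrative stand-ins, not the installation’s actual code. The point is that nothing in a loop like this ever inspects what the chosen item actually is.

```python
import random

WEEKLY_BUDGET_USD = 100  # the bot's reported weekly allowance, paid in Bitcoin

def weekly_purchase(listings):
    """Pick one random affordable listing and order it, sight unseen."""
    affordable = [item for item in listings if item["price_usd"] <= WEEKLY_BUDGET_USD]
    if not affordable:
        return None  # nothing within budget this week
    choice = random.choice(affordable)
    # place_order(choice)  # hypothetical payment/shipping step, omitted here
    return choice

# The selection is blind: a pair of jeans and a bag of pills look identical
# to this code, as long as both fit the budget.
listings = [
    {"name": "pair of jeans", "price_usd": 45},
    {"name": "mystery package", "price_usd": 80},
]
print(weekly_purchase(listings))
```

Because the selection is random and content-blind, no human ever decides to buy any particular item, which is exactly what made the question of criminal intent so hard to pin down.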

In this case, neither the artists nor the A.I. was charged. Investigators determined that the artists never intended to consume or distribute the drugs, so no one was punished, though the pills were destroyed.

While this may seem like an innocuous mistake, crimes like this could become more commonplace. Artificial intelligence is continually improving, and users must be wary. If a program like this were spread to other devices, crimes could be committed with no obvious culprit. As smartphones get smarter and advanced wearable technology becomes more popular, this problem might escalate.

Over the years, humanity has handed control of major parts of our lives over to technology. The benefits have been tremendous; the impact robotics has had in the fields of manufacturing and healthcare can attest to that. Driverless cars will soon populate the roads, and A.I. will transform the workplace in the next several years. As technology takes on bigger roles, an error can carry a drastic cost.

How much responsibility will we allow artificial intelligence to bear? No robot is foolproof. As we rely on robotics for increasingly complex tasks, the possible negative consequences grow. When it comes to errors made by robots, the question of liability remains largely unaddressed. In the near future, legislation may shape the way we interact with technology.