Understanding the Laws of Robotics in the World of AI

March 8, 2024

We all love the idea of robots. Just look at how many are featured in television, films, books, and even music and social media. Since the development of human-like robots in the 1950s, our fascination with what they could do to benefit our lives has only grown.

But – as we know from countless science fiction stories – the fear of robots being used for evil rather than good, or revolting against their human overlords, also intrigues us.

These are questions that scientists and civilians alike continue to ponder, particularly with the rapid development of artificial intelligence (AI) that uses robotic process automation (RPA) software to perform routine tasks each day. To guide these discussions, a set of rules from the sci-fi world has found its way into the mainstream as the “Three Laws of Robotics.”

Since the broad adoption of the laws of robotics in the technological world, they have formed the basis for many discussions around the ethics and safety of generative AI and machines. Although there has been criticism of the simplicity of the laws, particularly in recent years, they continue to act as a starting point for important conversations.

What exactly are Asimov's "Three Laws of Robotics"?

First stated in full in Asimov’s 1942 short story “Runaround” (later collected in I, Robot), the “Three Laws of Robotics” are as follows:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders of humans except when those orders conflict with the First Law.
  • A robot must protect its own existence unless in a situation that would conflict with the First or Second Law.

Asimov later added a fourth law, known as the Zeroth Law, which takes precedence over the others and more broadly encompasses the whole of humanity – a robot may not harm humanity or, through inaction, allow humanity to come to harm.
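The defining feature of the laws is their strict priority ordering: each law holds only as long as it doesn’t conflict with the laws above it. As a purely illustrative sketch (the predicates below are invented for this example, not drawn from Asimov’s fiction), the hierarchy can be read as a simple chain of checks in which a higher law always overrides a lower one:

```python
# Hypothetical sketch: Asimov's four laws as a strict priority chain.
# All predicate names are invented for illustration.

def action_is_permitted(harms_humanity: bool,
                        harms_a_human: bool,
                        ordered_by_human: bool,
                        endangers_robot: bool) -> bool:
    if harms_humanity:       # Zeroth Law: never harm humanity
        return False
    if harms_a_human:        # First Law: never harm a human being
        return False
    if ordered_by_human:     # Second Law: obey, since no higher law is broken
        return True
    if endangers_robot:      # Third Law: otherwise, protect own existence
        return False
    return True

# An order that would harm a human is refused: the First Law is
# checked before the Second ever comes into play.
print(action_is_permitted(False, True, True, False))  # False
```

Real machines face nothing this clean, of course; as the criticisms below point out, deciding what counts as “harm” is precisely where the ordering breaks down.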

Why the laws of robotics are important

Asimov’s primary goal in creating the laws of robotics was to safeguard humanity against possible harm. Although fictional, they’re still essential when thinking about technology, especially during the planning and development stages.

Ethics of robotics

One of the biggest takeaways from the laws is how they relate to the ethics of building intelligent machinery to support human life. In many cases, we use robotics to help us automate our systems at work and home, freeing us up for other tasks or hobbies instead.

But what happens when we misuse machines? The First Law explicitly concerns the safety of humanity, which raises questions around the responsibility of humans to use the robots they create appropriately.

75% of US organizations placed significant importance on AI ethics in business in 2021, up from under 50% in 2018 (Source: IBM).

Balancing humanity and technology

As automated technology becomes more integrated into our lives, we must think not only about our reliance on it but also about how we can use it for genuine good.

Without a moral compass to guide it, technology can only respond in the ways that humans have designed it to. This leads to questions about how we balance our use of this technology with the values and principles we hold, along with how it may affect our own well-being.

Maintaining accountability

As with the ethics of robotics, these laws make accountability a critical point. What are the broader possibilities and implications that come from using this technology? Who should be held responsible for its impact? Designers, developers, programmers, and even manufacturers all carry some of this weight when it comes to making morally correct decisions about how their technology is used.

But there are also problems with this. For instance, how do we respond to situations where people or entities use robotics in ways they weren’t meant to be used? As technology continues to evolve, the question of responsibility only becomes more important.

28% of workers think that company CEOs should be accountable for AI ethics in their companies (Source: IBM).

Inspiring future development 

One of the most positive outcomes of the “Three Laws of Robotics” has been the attempts made to develop ethical technology as a result. Even with their fictional origins, the questions the laws raise, and the discussions we’ve had because of them, mean that engineers often begin development with ethical responsibility in mind.

Influencing public perception

Whether we like it or not, the world of fiction reflects our existing culture and helps shape it. As the work of a prolific writer in the sci-fi genre, Asimov’s laws have become part of a broader narrative around robotics and technology.

These laws have unquestionably guided how and what the modern world thinks about robotics. And with the rise of AI in recent years, it’s easy to see how Asimov’s laws have continued to play a critical role in how the general public understands and responds to new technology.

What problems are there with the laws of robotics?

As with anything adapted from the fictional to the real world, there are significant criticisms of how the laws of robotics apply to the technology of the 21st century. Many stem from the complexity of modern robotics, much of which could not have been accounted for when Asimov introduced the laws in 1942.

They're too simple

Not only were the robots of Asimov’s era simpler than those of today, but the ethical issues they raise are also far more complicated in a technology-reliant world. For example, a robotic home cleaning device is unlikely to cause serious harm to the wider human population. But when compared with military robotics ultimately designed as weapons, significant ethical issues surface.

Many of these devices are designed to reduce the impact on human lives in combat zones, so they arguably still fall under the Three Laws. Yet they also undoubtedly harm and destroy human lives at the same time. Particularly in war zones, the use of robotics is never a simple matter that fits within the original laws.

They're too broad

While there are upsides to having laws that aren’t specific or rigid, especially when it comes to technology, problems arise when people interpret the laws differently. What may be considered ethical to one person could be seen as highly immoral by another. 

Definitions are crucial when attempting to outline rules, so questions around what’s considered “harm” or how robots should prioritize the First and Second Laws are all issues engineers and scientists have wrestled with regarding the “Three Laws of Robotics.”

They only focus on human safety

A major criticism of the laws of robotics is the strict focus on human life prevailing over anything else. The distinct lack of instruction for how to treat non-human life is a problem. 

The use of robotic technology also affects animals and the environment, yet there’s no guidance on the ethics of harming these life forms. This human-centric perspective leaves room for exploitative and destructive technologies that still comply with the letter of the laws.

Another important grievance is that, even when discussing the ethics of humanity, we must account for thousands of years of our biases. Throughout history, we’ve seen countless examples of dehumanization of races, genders, and religions deemed different from the dominant culture. 

Since humans program these robotic devices, it’s inevitable that bias appears in their functioning as well. In fact, the implicit biases in the training materials used for generative AI are already being widely discussed.

Do the laws of robotics apply to AI?

Like any other form of technology, AI has been routinely studied under the lens of the “Three Laws of Robotics” to see how it stacks up. Discussions around the development and use of AI have found their way into workplaces, classrooms, and even our homes.

Currently, AI largely complies with the laws Asimov laid out. It follows the rules, or inputs, provided by human creators and has no inherent desires of its own that pose a significant threat to humanity. Even when requests are denied, which would seemingly break the Second Law, a carefully reworded prompt can usually get around this.

The potential for unethical and harmful uses is still there, which ultimately puts AI outside these laws. This is no different, though, from many of the other robotic technologies available today. Humans are flawed, and so is the technology we create.

The rise of robots

Despite their flaws, Asimov’s laws of robotics are a helpful starting point for many of the important discussions we must have around the exponential development of new technology. As things stand, we have a long way to go before robots take control and become more intelligent than even the smartest humans on Earth. So until then, we simply keep using them to make our lives a little bit easier.

Interested in developing your own AI technology? With machine learning software, you can build automations that use algorithms to produce defined outputs and increase your accuracy at work.
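As a rough illustration of what “algorithms producing defined outputs” can look like in practice (the dataset and labels below are invented, and scikit-learn is just one possible library), here’s a minimal sketch that trains a small model to automate a routine routing task:

```python
# Hypothetical sketch: automating a repetitive classification task.
# Tickets and labels are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Past support tickets, already labeled by a human (the "defined outputs").
tickets = [
    "Invoice total is wrong for March",
    "Cannot log in to my account",
    "Password reset email never arrived",
    "Charged twice for one subscription",
]
labels = ["billing", "access", "access", "billing"]

# Pipeline: turn text into TF-IDF features, then fit a logistic regression.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tickets, labels)

# The trained model now routes new, unseen tickets automatically.
print(model.predict(["Refund for duplicate charge"]))  # likely ['billing']
```

Once trained on enough labeled examples, a model like this can take over the repetitive classification work that RPA-style automation depends on.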

Lay down the law! Need some help with your daily admin tasks? See how robots can take on the work with robotic process automation (RPA) software.

Holly Landis is a freelance writer for G2. She is also a digital marketing consultant focusing on on-page SEO, copy, and content writing. She works with SMEs and creative businesses that want to be more intentional with their digital strategies and grow organically on channels they own. As a Brit now living in the USA, you’ll usually find her drinking copious amounts of tea in her cherished Anne Boleyn mug while watching endless reruns of Parks and Rec.