5 – AI Ethics

🎯 Learning Objectives

Develop the Information Technology Learning Strands:

  • develop an understanding of what ethics is
  • develop an understanding of how we can apply ethical standards to AI

💬 Key Vocabulary

  • artificial intelligence (AI)
  • intelligence
  • ethics
  • rights
  • The Trolley Problem
  • machine

📝 Starter Activity – Laws of AI

  • As we have seen in this topic, there may come a point at which AI outsmarts us.
  • To make sure they don’t get any funny ideas, some people believe AI should be programmed with certain “laws” so that humans cannot be harmed by it.
  • Discuss in pairs what kind of law you would program into a robot or AI to keep you safe.

🤖 The Three Laws of Robotics

  • Isaac Asimov was a prolific writer of science fiction stories.
  • One of the themes that frequently occurred in his novels and short stories was the interaction between humans and robots.
  • One of the central plot devices used in many of his stories was The Three Laws of Robotics, which are:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The order of these laws is very important: if the Second Law were checked first, a robot could simply be ordered to harm a human. The sketch below shows one way the priority ordering could be expressed.
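
Here is a minimal sketch, in Python, of the priority ordering described above. Everything in it is hypothetical: the `Action` class and its flags are illustrative stand-ins, not a real robotics API.

```python
# A minimal sketch of why the ordering of the Three Laws matters: each law
# only gets a say if every higher law is satisfied. The Action class and
# its flags are hypothetical stand-ins, not a real robotics API.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False      # would this action injure a person?
    human_ordered: bool = False    # did a human command this action?
    endangers_robot: bool = False  # would this action damage the robot?

def choose_action(candidates):
    # First Law: discard anything that harms a human, no exceptions.
    safe = [a for a in candidates if not a.harms_human]
    # Second Law: among the remaining actions, obey a human order if one exists.
    ordered = [a for a in safe if a.human_ordered]
    if ordered:
        return ordered[0]
    # Third Law: otherwise prefer actions that keep the robot intact.
    intact = [a for a in safe if not a.endangers_robot]
    if intact:
        return intact[0]
    return safe[0] if safe else None

# With the First Law checked first, the harmful order is rejected.
result = choose_action([
    Action("push the human", harms_human=True, human_ordered=True),
    Action("stand still"),
])
print(result.name)  # -> stand still
```

If the Second Law check came before the First, the robot would obey the order to push the human; that is exactly why Asimov put the First Law on top.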

📖 Learn It – The Trolley Problem

  • Watch the video below to learn about an ethical dilemma called The Trolley Problem.
  • MIT designed the Moral Machine platform to gather human perspectives on moral decisions made by machines, such as self-driving cars. Have a go, and then see how other people respond to the same situations.

🏅 Badge it

🥈 Silver Badge

  • Google, Tesla and many other companies are currently working on self-driving cars.
  • These cars will be controlled by AI programs.
  • Now imagine this situation:

Your car is driving at a moderate speed down a road. It turns a corner and there are five pedestrians standing in the road. Even if the car were to apply its brakes, it would still hit them, so the only way to avoid killing the pedestrians is to mount the pavement. Unfortunately, there is a single pedestrian standing on the pavement, who will be killed if the car chooses to swerve.

  • Should the car continue in a straight line and kill five people, or should it make a decision to intentionally kill a single individual?
  • Write down what you think the car should do, and why.
  • If it were later discovered that the car should have been driving more slowly, who is to blame? Is it the fault of the owner of the car, the manufacturer of the car, the programmer of the car, or the program itself?
  • Should computers be programmed to always serve humanity and therefore choose options that lead to the greatest good? (A sketch of such a rule follows this list.)
  • Compile answers to all these questions and upload them to www.bournetolearn.com.
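
Below is a minimal sketch, in Python, of what a purely utilitarian “greatest good” rule for the dilemma above could look like. The options and casualty counts are hypothetical inputs invented for this scenario; real self-driving systems are not programmed from a table like this.

```python
# A hypothetical "greatest good" rule: pick whichever option is predicted
# to cost the fewest lives. The numbers are made-up inputs for the
# scenario above, not output from any real driving system.
def least_harm(options):
    """Return the option with the smallest predicted casualty count."""
    return min(options, key=options.get)

choice = least_harm({
    "continue straight": 5,         # hits the five pedestrians in the road
    "swerve onto the pavement": 1,  # hits the single pedestrian instead
})
print(choice)  # -> swerve onto the pavement
```

The code makes the utilitarian answer look trivial; the hard questions above, such as who is to blame and whether minimising casualties is even the right goal, are exactly the ones a programmer cannot settle with a `min()` call.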

📖 Learn It – Killer Bots

  • Consider the following facts:
    • The first recorded death attributed to a robot occurred in 1979, although the robot in question was not artificially intelligent.
    • Sentry guns deployed along the border of the Korean demilitarised zone are capable of autonomously killing humans.
    • Military drones are remotely controlled by human pilots and are currently in heavy use in Iraq, Pakistan and Yemen. They could easily be controlled by an AI.
  • Discuss in pairs what you think about the ethics of allowing computers to take a human life.
  • Should we be deploying more autonomous killer robots in the battlefield so that our soldiers are not put in harm’s way?
  • Should the option to take a human life only ever be decided by another human?

🥇 Gold Badge

  • In thirty years’ time, military robots capable of autonomously killing humans could be common.
  • Imagine you are the Prime Minister of the day.
  • Would you order the army to purchase such robots, in the interests of defending the country, or do you think that taking a human life is a decision that only another human should make?
  • Justify your opinion as well as you can.
  • Compile answers to all these questions and upload them to www.bournetolearn.com.

📖 Learn It – Should AI have rights?

  • Prior to the early nineteenth century, there was no law in the United Kingdom that prevented cruelty to animals.
  • In the 1500s you would have been laughed at if you had suggested that animals had rights and shouldn’t be treated cruelly.
  • Sports such as cock fighting, bear baiting and fox tossing were common throughout European history, and it is only relatively recently that we have decided that an animal’s welfare needs to be protected by law.

  • Ethicists today are thinking about robot rights in much the same way as people once thought about animal rights.
  • As AIs become more and more sophisticated, and able to imitate humans with greater and greater accuracy, do we have to start thinking about robot rights?
  • Have a read of this short story about The Turing Test.

🥉 Platinum Badge

  • Does a truly intelligent AI (as demonstrated in the works of fiction above) have rights?
  • The European Convention on Human Rights lays down several articles detailing the rights of all people.
  • For each of the Articles listed below, state whether you think that in the future, these rights should be extended to intelligent AIs. Use the link to Wikipedia to see what each article covers.
  • Articles – 2, 3, 4, 5, 6, 9, 14
  • Upload your answers to www.bournetolearn.com.

In this lesson, you…

  • Looked at The Trolley Problem and how modern AI will have to make life-and-death decisions.
  • Decided whether you could allow AI to kill humans to defend your country.
  • Justified whether or not intelligent AI should have similar rights to humans.

In this unit, you…

  • Discovered what AI and intelligence are and how you can test for them.
  • Looked at the history of AI and how it developed into what it is today.
  • Worked as a group to create a presentation about a current AI.
  • Looked into possible futures of the technological singularity.
  • Finally, looked at the ethics of AI and how we should treat them and use them.