Self-awareness in AI

Mea · 5 min read · Oct 28, 2020

What does being self-aware mean? Do we have self-aware robots? Both of these are key questions in the field of artificial intelligence, and questions that will be covered in this article. I will also explain the difference between a robot and AI, what self-awareness is, and some examples of self-awareness in robots.

Self-aware AI

ROBOTS, AI & SELF-AWARENESS

Robotics and artificial intelligence (AI) are two separate fields of engineering: a robot is a machine, whereas an AI is a program. Even so, AI is what gives robots their ‘intelligence’; robots perform their tasks through machine learning, which is a subfield of AI. For that reason, the terms ‘robot’ and ‘AI’ will be used interchangeably in this article.

AI can be divided into four main types, one of which is self-aware AI.

  1. Reactive: purely reactive machines. They do not remember the past and cannot use past experiences to inform the present.
  2. Limited memory: AI that can use information it has recently learned; an example is a self-driving car, which monitors the speed and direction of surrounding cars over time (see the sketch after this list).
  3. Theory of mind: a machine that can make decisions as well as a human can. We have not quite reached this stage yet; it is also the stage at which a robot may be mistaken for being self-aware when it is not.
  4. Self-awareness: systems that can form representations of themselves. These would be machines that are ‘aware’ of themselves and know their internal states.
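
To make the contrast between the first two types concrete, here is a minimal Python sketch. The classes, thresholds, and braking rule are my own illustration, not taken from any real system:

```python
from collections import deque

class ReactiveAgent:
    """Type 1: acts on the current percept only; keeps no state."""
    def act(self, distance_to_obstacle: float) -> str:
        return "brake" if distance_to_obstacle < 5.0 else "cruise"

class LimitedMemoryAgent:
    """Type 2: remembers recent percepts, e.g. to estimate closing speed."""
    def __init__(self, window: int = 3):
        self.history = deque(maxlen=window)  # recent distance readings

    def act(self, distance_to_obstacle: float) -> str:
        self.history.append(distance_to_obstacle)
        if len(self.history) >= 2:
            closing_speed = self.history[-2] - self.history[-1]
            # Brake if the gap would close within roughly three time steps.
            if closing_speed > 0 and distance_to_obstacle / closing_speed < 3:
                return "brake"
        return "cruise"

reactive, limited = ReactiveAgent(), LimitedMemoryAgent()
for d in [20.0, 14.0, 8.0]:  # an obstacle approaching over time
    print(reactive.act(d), limited.act(d))
```

At a distance of 8.0 the reactive agent still cruises, while the limited-memory agent, having watched the gap shrink, already brakes.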

To understand what a self-aware robot or AI is, we must first understand what being self-aware means. Self-awareness, or self-consciousness, is usually defined as being aware of one’s own personality or individuality. It is important to note, though, that while consciousness is being aware of one’s own environment and body, self-awareness means recognizing that awareness.

DETERMINING ‘CONSCIOUSNESS’

Since the beginning of robotics, scientists have wanted to create a conscious mind. Because ‘being conscious’ is vague and there is no known method of testing consciousness, Alan Turing devised a test to determine whether a computer can think by having it imitate a person. The basis of the test is that a human interrogator must determine, by asking questions within a set timeframe, whether the subject is a human or a computer.
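
The protocol itself is simple enough to sketch in code. Everything below, the placeholder subjects, the canned reply, and the random interrogator, is purely illustrative:

```python
import random

def human_subject(question: str) -> str:
    return "Hmm, let me think about that..."  # placeholder human reply

def computer_subject(question: str) -> str:
    return "Hmm, let me think about that..."  # a convincing bot mimics the human

def imitation_game(interrogate, num_questions: int = 5) -> str:
    # The subject behind the wall is a human or a computer, chosen at random.
    label, respond = random.choice([("human", human_subject),
                                    ("computer", computer_subject)])
    transcript = [(q, respond(q))
                  for q in (f"Question {i + 1}?" for i in range(num_questions))]
    guess = interrogate(transcript)  # the interrogator sees only the transcript
    return "correct" if guess == label else "wrong"

# An interrogator who guesses at random is right about 50% of the time;
# Turing predicted interrogators would do no better than 70% against a
# well-playing machine.
print(imitation_game(lambda transcript: random.choice(["human", "computer"])))
```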

Turing, writing about his test in 1950:

‘I believe that in about fifty years’ time it will be possible … to make them [computers/robots] play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. … I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.’

Turing’s predictions, however, have not been fulfilled. Although some AI systems have passed the Turing Test, they were later shown to have cheated, or the majority of AI researchers have still not accepted them as intelligent or able to think.

There have been many objections to Turing’s test, one of the most famous being John Searle’s philosophical thought experiment, “The Chinese Room.”

‘Suppose a human who knows no Chinese is locked in a room with a large set of Chinese characters and a manual that shows how to match questions in Chinese with appropriate responses from the set of Chinese characters. The room has a slot through which Chinese speakers can insert questions in Chinese and another slot through which the human can push out the appropriate responses from the manual. To the Chinese speakers outside, the room has passed the Turing test. However, since the human does not know Chinese and is just following the manual, no actual thinking is happening.’
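
Searle’s point is that the room answers by lookup rather than understanding, which a few lines of code make literal. The tiny ‘rule book’ below is my own stand-in for his manual:

```python
# The occupant matches question shapes against the manual; meaning never
# enters into it.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How is the weather?" -> "It is fine."
}

def chinese_room(question: str) -> str:
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # passes for a speaker, yet nothing here thinks
```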

The fact that Turing’s test has not truly been passed leads us to a question: was Turing’s test too hard, or is artificial consciousness simply not possible?

‘Consciousness’ and even ‘thinking’ are vague terms, and are even harder to test. The line between a simple machine and a machine that can ‘think’ is therefore not clear-cut but hazy. We can already see this: in a number of cases, robots have been called ‘self-aware’ by some while others argue that they are not.

One such example is an experiment led by Professor Selmer Bringsjord at the Rensselaer Polytechnic Institute in New York. He ran a version of the ‘wise man’ riddle on three robots, which went as follows:

‘…all three robots were programmed to believe that two of them had been given a “dumbing pill” which would make them mute. Two robots were silenced. When asked which of them hadn’t received the dumbing pill, only one was able to say “I don’t know” out loud. Upon hearing its own reply, the robot changed its answer, realizing that it was the one who hadn’t received the pill.’

The fact that the robot managed to realize that it had not been given the ‘dumbing pill’ shows a degree of self-awareness. But whether that makes the robot self-aware is yet to be agreed upon.
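
The reasoning step at the heart of the experiment can be sketched as follows. This is my own simplification, not the actual code used at RPI:

```python
from typing import Optional

class WiseRobot:
    def __init__(self, name: str, silenced: bool):
        self.name = name
        self.silenced = silenced  # the effect of the (simulated) dumbing pill
        self.belief = "I don't know which of us can still speak."

    def answer_aloud(self) -> Optional[str]:
        if self.silenced:
            return None  # the pill has made this robot mute
        spoken = self.belief
        # Perceiving its own utterance is the 'self-aware' step: the robot
        # uses evidence about itself to revise its own belief.
        self.belief = f"Sorry, I know now: I, {self.name}, was not given the pill."
        return spoken

robots = [WiseRobot("R1", True), WiseRobot("R2", True), WiseRobot("R3", False)]
for robot in robots:
    heard = robot.answer_aloud()
    if heard is not None:  # the robot hears itself and changes its answer
        print(f"{robot.name} says: {heard}")
        print(f"{robot.name} then says: {robot.belief}")
```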

Another example of a robot that is considered by some to be self-aware is a robot ‘arm’ built by a group at Columbia University. The group created a robot that learns what it is on its own: the robot starts with no prior knowledge, but after a day of ‘babbling’ it builds a self-simulation. In other words, the robot learned what it was, and how it functioned, from scratch.

Robot ‘arm’ from Columbia University
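
The ‘babble, then build a self-model’ loop can be caricatured in a few lines. The sketch below uses a toy two-joint arm and plain linear regression as a deliberately tiny stand-in for the Columbia group’s deep-network approach:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_arm(joint_angles: np.ndarray) -> np.ndarray:
    """Ground-truth two-joint planar arm the robot does NOT know about."""
    l1, l2 = 1.0, 0.8
    a, b = joint_angles
    return np.array([l1 * np.cos(a) + l2 * np.cos(a + b),
                     l1 * np.sin(a) + l2 * np.sin(a + b)])

# 1) Motor babbling: issue random joint commands, record where the hand lands.
commands = rng.uniform(-np.pi, np.pi, size=(500, 2))
outcomes = np.array([true_arm(c) for c in commands])

# 2) Fit a simple self-model (linear regression on trigonometric features).
def features(cmds: np.ndarray) -> np.ndarray:
    return np.column_stack([np.cos(cmds[:, 0]), np.sin(cmds[:, 0]),
                            np.cos(cmds.sum(axis=1)), np.sin(cmds.sum(axis=1))])

weights, *_ = np.linalg.lstsq(features(commands), outcomes, rcond=None)

# 3) The learned self-model predicts the arm's own behaviour from scratch.
test = np.array([[0.5, -0.3]])
print("predicted:", features(test) @ weights)
print("actual:   ", true_arm(test[0]))
```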

Hod Lipson, commenting on self-simulation in a press release:

‘This is perhaps what a newborn child does in its crib, as it learns what it is. We conjecture that this advantage may have also been the evolutionary origin of self-awareness in humans. While our robot’s ability to imagine itself is still crude compared to humans, we believe that this ability is on the path to machine self-awareness.’

As Lipson said, I believe that this is only a step on the path to self-aware machines.

CONCLUSION

The definition of ‘self-aware’ is not clear-cut, so there has been disagreement over whether any given robot is self-aware or not. Self-aware robots are not quite within our grasp yet, but we are steadily progressing, or at least trying to progress, toward that goal.

Some discussion questions:

  • Why do you think Turing’s predictions have not been fulfilled?
  • Do you think self-aware AI systems/robots are possible? If so, when will they be?
  • Does having emotions mean that the AI system is self-aware?

Any comments and feedback are welcome.

Image credits:

https://images.pexels.com/photos/2599244/pexels-photo-2599244.jpeg?auto=compress&cs=tinysrgb&dpr=2&w=500

https://www.engineering.columbia.edu/files/seas/styles/816x460/public/content/cs_image/2019/05/lipson-robot1-1600.jpg?itok=A5x05udH
