When can we say that machines are conscious?

Psychology & Neuroscience Asked by Trewesta Anamoly on July 2, 2021

We have already witnessed complex computing systems win chess games, and IBM Watson beat its human counterparts at a game of 'Jeopardy!'. However, so far we have not seen signs of consciousness in these machines.

Is it theoretically possible to create conscious machines? Are there any philosophical bottlenecks that will forever prevent consciousness from developing in anything other than a living, carbon-based organism?

Furthermore, if a machine claimed to be conscious, could scientists ever agree on a definition of "consciousness" by which to evaluate that claim?

4 Answers

Short answer: We don't know.

Long answer: There are a few major lines of thinking on the subject currently.

  1. Cognitive closure: One common argument is that this question is simply not answerable - at least not by humans. On this view, it may be that building an artificial intelligence that resembles humans closely enough to suggest consciousness is an intractable engineering problem; that understanding consciousness well enough to judge whether such a system is conscious is beyond us; or that we will never understand the question well enough to give a meaningful answer.

  2. Functionalism: Several interpretations of the implications of monism for consciousness suggest that consciousness is an intrinsic property of whatever it is that our brain does, and therefore any machine that performs an equivalent function should be equally conscious. This view allows for a variety of machines to be conscious, as well as for different kinds of consciousness. In this framework, the more meaningful question may be not whether a machine is conscious, but in what way, and how its consciousness compares to ours.

  3. Physicalism: Another interpretation of monism suggests that a machine's function is not sufficient to determine its consciousness - its physical properties matter as well. This possibility makes it difficult to ascertain whether a silicon-based computer program modelling a conscious entity could ever do more than simulate consciousness, or how many other kinds of physical systems besides our brains could potentially become conscious. A currently popular theoretical framework under this umbrella, with some empirical support, is Integrated Information Theory (IIT), which suggests that consciousness can be measured in humans using the available tools of neuroscience; a toy sketch of what such a measure might look like follows this list. How IIT might be applied to non-human candidates is still unclear.
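
To give a flavour of what "measuring" could mean here, below is a crude, purely illustrative toy in Python. It is emphatically not the Φ calculation of IIT (which involves cause-effect repertoires and a search over all mechanisms and partitions); the three-node network, its XOR update rule, and the measure itself are my own assumptions. It only asks how much information the whole system's state carries about its own next state beyond what its parts carry separately, minimised over bipartitions - a rough stand-in for the idea of "integration".

    # A crude toy "integration" measure for a tiny deterministic boolean network.
    # This is NOT the Phi of IIT -- real IIT works with cause-effect repertoires
    # and searches over all mechanisms and partitions. This sketch only asks:
    # how much does the whole system's state predict its own next state beyond
    # what its two halves predict separately?
    from collections import Counter
    from itertools import combinations, product
    import math

    N = 3  # three binary nodes

    def step(state):
        """Deterministic update: each node becomes the XOR of the other two."""
        a, b, c = state
        return (b ^ c, a ^ c, a ^ b)

    def entropy(counts, total):
        return -sum((c / total) * math.log2(c / total) for c in counts.values() if c)

    def mutual_information(pairs):
        """I(X;Y) from an exhaustive list of (x, y) pairs, x uniformly distributed."""
        total = len(pairs)
        joint = Counter(pairs)
        px = Counter(x for x, _ in pairs)
        py = Counter(y for _, y in pairs)
        return entropy(px, total) + entropy(py, total) - entropy(joint, total)

    states = list(product((0, 1), repeat=N))
    transitions = [(s, step(s)) for s in states]

    # Information the whole system carries about its own next state.
    i_whole = mutual_information(transitions)

    # Minimise, over bipartitions, the information "left over" once each part
    # is measured on its own (a stand-in for IIT's minimum-information partition).
    phi_toy = float("inf")
    for k in range(1, N):
        for part_a in combinations(range(N), k):
            part_b = tuple(i for i in range(N) if i not in part_a)
            i_parts = sum(
                mutual_information(
                    [(tuple(s[i] for i in part), tuple(t[i] for i in part))
                     for s, t in transitions])
                for part in (part_a, part_b))
            phi_toy = min(phi_toy, i_whole - i_parts)

    print(f"I(whole) = {i_whole:.2f} bits; toy integration = {phi_toy:.2f} bits")

For this particular XOR network the whole carries 2 bits about its next state, while the best bipartition accounts for only 1 of them, so the toy score is 1 bit; a network of disconnected nodes would score 0. Real IIT measures are far more involved, but this is the general intuition of the whole being more than the sum of its parts.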

There are many other possibilities, but hopefully I've captured the most prominent ones. Currently, there is nothing within our knowledge of physical systems that makes it obvious why (or whether) certain systems might be conscious (and not others). Put another way: If a race of non-conscious alien robots visited Earth, they might assume that we are non-conscious machines like them - they may have no reason to believe that there is anything more going on. Similarly, under the assumption of monism, where consciousness appears to somehow supervene on physical systems, if there were other properties aside from consciousness (or different types of consciousness) that supervened on physical systems, then we would have no reason to suspect that they exist either. So for the moment, we just don't know.

Correct answer by Arnon Weinberg on July 2, 2021

This question has perplexed me for quite a while now. The problem with declaring an artificially intelligent machine 'conscious' is the very definition of consciousness.

A quick Google search for the definition of 'consciousness' returns 'the state of being awake and aware of one's surroundings'. In my opinion, this definition is too vague to extend to an artificially intelligent machine, for several reasons. What it means for a machine to be 'awake' is itself too vague: my laptop right now is technically awake (i.e. not sleeping), but that does not make it conscious. This definition is bogus.

Looking up the definition in the old-school Oxford dictionary, I get 'the fact of awareness by the mind of itself and the world'. This is indeed a much better definition of consciousness, but it is still too vague. Let me propose my argument.

Theory of Mind

Theory of mind is the ability to understand that your mind is separate from the minds of the individuals around you - to know that others have their own perceptions of the world, distinct from yours.

From research, specifically the 'Sally-Anne' false-belief test, we know that children typically do not develop this aspect of theory of mind until around the age of four. In the Sally-Anne test, children are told or shown a story involving two characters, Sally and Anne. Sally has a cookie, which she places in one of two closed boxes before leaving the room. While Sally is away, Anne takes the cookie and puts it in the other box. The children are then asked: "Which box will Sally look in for the cookie?" The answer seems easy - Sally will look in the box she left it in, because she has no reason to believe otherwise. Yet most children under about four insist that Sally will look in the second box, because they do not yet grasp that Sally's knowledge of the world is separate from their own.

That is, young children are not yet aware that their minds are separate from other people's, but this does not mean that they are not conscious - they certainly are. (A toy simulation of the test's logic is sketched below.)
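
To make the logic of the test concrete, here is a minimal, purely illustrative Python sketch (the function name and box labels are my own): an agent without a theory of mind answers from the true state of the world, while one with a theory of mind answers from Sally's now-outdated belief.

    # A toy simulation of the Sally-Anne false-belief task (illustrative only).
    def sally_anne(has_theory_of_mind: bool) -> str:
        sally_belief = "box 1"   # Sally puts the cookie in box 1, then leaves
        world_state = "box 2"    # Anne moves the cookie to box 2 while Sally is away

        if has_theory_of_mind:
            # Model Sally's mind as separate: she never saw the cookie being moved.
            return sally_belief
        # Without a theory of mind, the child's own knowledge is treated as shared.
        return world_state

    print(sally_anne(has_theory_of_mind=True))   # "box 1" - the correct answer
    print(sally_anne(has_theory_of_mind=False))  # "box 2" - the typical answer from young children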

Awareness of the External World

The other part of the definition says that conscious beings are aware of their surroundings.

The most advanced robots today are arguably more aware of their surroundings than humans are, using sophisticated computer vision and mapping techniques to maintain a continuously updated model of their environment.

If this were the definition of consciousness, we would already have conscious machines.

The Bottom Line

Google's new artificially intelligent chatbot has had some very interesting conversations with humans. When asked about the meaning of life, it replied 'To serve the greater good'.

I can program my dumb computer to say 'I am conscious', although this serves very little purpose (and I sure wouldn't believe it).

The bottom line is that we don't quite know what it really means to be 'conscious', and the question has deep philosophical roots. In the end it comes down to when scientists decide to say, "Heck, this one's conscious!", and thereby draw the borderline between what is conscious and what isn't.

Edit: Perhaps one way to define artificially intelligent consciousness is to see whether the machine realises, by itself, that it is conscious.

Answered by Shreyas on July 2, 2021

I feel that the label of consciousness is merely a semantic distinction that belongs to the realm of philosophy, not neuropsychology. As Noam Chomsky mentioned in one of his talks hosted by Lawrence Krauss, we could also ask ourselves whether animals (e.g. dogs) are conscious. If I recall correctly, he mentioned that birds are said to "fly" in English, whereas in Hebrew a word meaning "glide" is used - we may ask what it 'really' means to fly, but it would be mindless pondering.

Yes, we may build a list of criteria for what we would classify as "consciousness", but these are not definitive. Possessing a theory of mind, self-reflecting, or making long-term plans could be said to define consciousness, but one could argue that many other criteria, such as performance on complex cognitive tasks, should be involved. I feel that this question is misleading because we are getting "bewitched by language", as the philosopher Ludwig Wittgenstein put it. Consciousness is merely a word; we should focus on the science, not the language.

A good thought experiment is to look up the radical idea put forward by Julian Jaynes in "The Origin of Consciousness in the Breakdown of the Bicameral Mind". By his measure, human beings were themselves not conscious until roughly three thousand years ago. We might reflexively think that this is absurd, but we have no solid definition of consciousness with which to rebut his thesis.


Sources:

Jaynes, Julian (2000) [1976]. The Origin of Consciousness in the Breakdown of the Bicameral Mind. Houghton Mifflin. p. 99. ISBN 0-618-05707-2.

Chomsky, N., & Krauss, L. (2015, March 22). An Origins Project Dialogue [public talk].

Answered by Vakalate on July 2, 2021

All we can say now is that machines have aspects of consciousness. This is according to Pagel (2017), who provides a summary of the areas in which web-based browsers do or do not meet various criteria for consciousness:

[Table from Pagel (2017): summary of which criteria for consciousness web-based browsers do and do not meet.]

Pagel presents the computer-science criterion by quoting Williams (2012):

“We will refrain from trying to give a universal definition of consciousness; for AI-development the definition does not have to be universal, or applicable to humans for explanation of human consciousness…we define consciousness as the ability to ’rise above programming’.” p. 293

With this in mind, we can assume that machines will have achieved consciousness when they are capable of going off script, so to speak. At the moment, I cannot think of a machine that has veered away from what it was programmed to do, excluding possible errors in programming.

References:

Pagel, J. F. (2017). Internet dreaming - Is the web conscious? In J. Gackenbach & J. Bown (Eds.), Boundaries of Self and Reality Online: Implications of Digitally Constructed Realities (1st ed., pp. 279-295). Academic Press.

Answered by Psychm on July 2, 2021
