Will We Ever Make an AI With Consciousness?

Check out this answer from Consensus:

While current AI systems do not possess consciousness, ongoing research suggests that future advancements could potentially lead to the development of conscious AI. This would require overcoming significant technical and theoretical barriers, including replicating the complex neural structures and self-organizing capacities of the human brain. The pursuit of conscious AI also necessitates careful consideration of ethical and philosophical implications.

The question of whether artificial intelligence (AI) can achieve consciousness has been a topic of significant debate among researchers. Consciousness, a complex and multifaceted phenomenon, is often considered a unique feature of biological organisms. This synthesis explores various perspectives from recent research papers on the potential for AI to develop consciousness.

Key Insights

  • Current AI Systems Lack Consciousness:
    • Most researchers agree that current AI systems do not exhibit consciousness and are primarily tools that extend human intelligence without self-awareness.
  • Technical and Theoretical Barriers:
    • Some theories suggest that the physiological structure of biological neurons and their complex organization are necessary for true consciousness, which current AI lacks.
    • Consciousness in AI would require the ability to generate and manage complex temporal activity patterns and self-organizing capacities similar to the human brain, which is currently beyond our technological capabilities.
  • Potential for Future Conscious AI:
    • Despite current limitations, some researchers believe that significant algorithmic steps toward machines with core consciousness have already been taken, and future advancements could potentially lead to conscious AI.
    • The development of AI systems that can communicate their internal states and co-create languages might lead to emergent consciousness, similar to human evolution.
  • Empirical and Neuroscientific Approaches:
    • A rigorous, empirically grounded approach, using neuroscientific theories of consciousness, can help assess and guide the development of AI systems toward potential consciousness.
    • Understanding the neurobiology of consciousness and integrating these insights into AI development could bridge the gap between artificial and biological consciousness.
  • Ethical and Philosophical Considerations:
    • The possibility of AI achieving consciousness raises significant ethical and philosophical questions about the nature of self-awareness, the distinction between genuine consciousness and imitation, and the societal implications of sentient machines.

 


Will we ever make an AI with consciousness?

Chris Frith has answered Likely

An expert from University College London in Neuropsychology

There are two questions we need to answer first: 1) What do we mean by consciousness? 2) How would we know if the AI was conscious?

I take consciousness to mean ‘having subjective experiences’.

I believe that my colleagues in the Institute of Philosophy are conscious because they keep telling me about their subjective experiences. We do a lot of wine tasting and tell each other what the wine tastes like and how it makes us feel. Through this sharing of subjective experiences, we believe that we become better at identifying and discriminating wines.

There are already machines that are better at identifying wines than we are. This is because they can directly assess the chemical composition of the wine. But they are not having subjective experiences. Such a machine is equivalent to a sense organ. Our eyes don’t have subjective experiences. That comes at a later stage.

Communicating subjective experience is not easy. People with unusual subjective experiences, such as colour blindness or synaesthesia, frequently do not discover that their experiences differ from other people's until early adulthood.

We have to learn how to talk about our subjective experiences. Wine tasting, for example, is notorious for the strange terms used to convey the experience. Some of these terms are reasonably straightforward, as when the colour of the wine is described as 'pale straw'. But some cross sensory boundaries: 'dumb' (auditory) means having little smell, while 'green' (visual) means tasting too acidic. To be able to use such terms, even 'pale straw' coloured, we need to have internalised a rich, multidimensional understanding of meaning (a semantic space), acquired through lengthy experience and constrained by culture so that it is not too idiosyncratic. We need to agree on how to talk about our subjective experiences.

For an AI to be able to communicate its subjective experiences, it too would need to have such a cultural upbringing. Certainly difficult, but not, in my opinion, impossible. We usually succeed with our children.

But there is an obvious problem with my approach so far. I suggested that we can know that someone is conscious because they can tell us about their subjective experiences. But just because someone can’t tell us about their experiences, this doesn’t mean that they’re not having them.

Recently there has been much interest in the problem of locked-in syndrome. These are patients who appear to be in a coma, but are, in fact, fully conscious. The problem is that they cannot reveal this because they cannot speak or move. So how can we find out if a person in a coma is actually having subjective experiences? Adrian Owen and his colleagues have addressed this problem by looking at brain activity. When we imagine moving a limb, no movement occurs, but there is a characteristic pattern of brain activity which can be detected with a brain scanner. In principle at least, the person with locked-in syndrome can use brain activity, instead of speech or movement, to communicate. (To say ‘yes’ imagine moving your right arm, to say ‘no’ imagine moving your left foot.) In such cases the potential to communicate subjective experience is still present.
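To make that yes/no protocol concrete, here is a minimal illustrative sketch in Python. Everything in it is assumed for illustration: the 'brain activity' is simulated random data rather than real fMRI or EEG recordings, and the function names and signal strengths are invented. It only captures the logic of mapping two distinguishable imagery-related activity patterns onto an agreed yes/no code.

```python
import numpy as np

# Illustrative sketch only: two imagined movements produce distinguishable
# activity over different motor-cortex regions (arm vs. foot areas).
# Here "activity" is simulated; a real system would use fMRI/EEG features.
rng = np.random.default_rng(0)

def simulated_trial(imagined: str) -> np.ndarray:
    """Return [arm_region_activity, foot_region_activity] for one trial."""
    base = rng.normal(0.0, 0.2, size=2)
    if imagined == "right_arm":    # 'yes' in the protocol described above
        base[0] += 1.0
    elif imagined == "left_foot":  # 'no'
        base[1] += 1.0
    return base

def decode_answer(trial: np.ndarray) -> str:
    """Map the stronger regional response onto the agreed yes/no code."""
    return "yes" if trial[0] > trial[1] else "no"

print(decode_answer(simulated_trial("right_arm")))  # 'yes' with this seed
print(decode_answer(simulated_trial("left_foot")))  # 'no' with this seed
```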

This consideration makes the rather obvious point that a conscious AI would have to have the capacity to interact with others. Such interactions involve a body that can be moved, or, at least, that we can imagine moving.

But there is a still deeper problem that we need to address. What about infants and non-human animals? Without language, they can’t tell us about their subjective experiences. Maybe they are not even aware that they are having subjective experiences. But this doesn’t mean that they are not having them.

I believe that consciousness exists at more than one level. At a lower level, I am having the experience of green. At a higher level, I am aware that I am having the experience of green and might choose to tell people about it. An experiment on mind wandering (by Jonathan Schooler) demonstrates this. Participants were asked to read a rather dry, if not boring, book, which they would be quizzed about later. After they had been reading for a while they were unexpectedly interrupted and asked what they were thinking about at that moment. In many cases their minds had wandered. This is something we have all experienced. We realise that, for the last several moments, our eyes have been moving across the page, but we have no idea what we have just read. We have been thinking about something else. We might have been listening to the birds singing in the garden outside. So, we were having the subjective experience of the singing, but we were not reflecting on this experience, and so we were not aware that it was not the experience we were supposed to be having.

So, perhaps young infants and non-human animals have this lower level kind of subjective experience. But how could we know?

In the absence of communication, there are two possible sources of evidence: brain activity and behaviour.

For example, Sid Kouider has measured the pattern of brain activity (EEG) that occurs when observers are presented with a face. There are early and late components of this activity. The early components occur whenever the face is presented, even when the observer is not aware of the face. In adults, the late components are only seen when the observer is aware of the face. So, these components are neural markers of consciousness. In infants of 5 months these neural markers are very weak and delayed. By 15 months the markers are much stronger, but still delayed in comparison to adults. Similar markers might be found in non-human animals since, in mammals, at least, the brain structure is very similar to that of humans. The problem here is that we cannot apply neural markers to the study of consciousness in AI agents since their ‘brain’ will be very different.
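The early-versus-late component logic can be sketched as a toy computation. This is not Kouider's actual analysis pipeline: the window boundaries, sampling rate, detection threshold, and the synthetic epoch below are all illustrative assumptions. It only shows how the presence of a sufficiently strong late component could, in a simplified way, be treated as the marker described above.

```python
import numpy as np

# Simplified illustration of the early/late component logic described above.
# Window boundaries and the detection threshold are arbitrary choices for
# this sketch, not the parameters used in Kouider's studies.
def component_amplitudes(epoch: np.ndarray, sfreq: float = 250.0):
    """Mean amplitude in an 'early' (~100-200 ms) and a 'late' (~300-500 ms) window."""
    t = np.arange(epoch.size) / sfreq
    early = epoch[(t >= 0.10) & (t < 0.20)].mean()
    late = epoch[(t >= 0.30) & (t < 0.50)].mean()
    return early, late

def late_marker_present(epoch: np.ndarray, threshold: float = 1.0) -> bool:
    """Treat a sufficiently large late component as the (simplified) marker."""
    _, late = component_amplitudes(epoch)
    return late > threshold

# Synthetic epoch: a weak early bump plus a stronger late bump.
sfreq = 250.0
t = np.arange(0, 0.6, 1 / sfreq)
epoch = 0.8 * np.exp(-((t - 0.15) ** 2) / 0.001) + 2.0 * np.exp(-((t - 0.40) ** 2) / 0.004)
print(late_marker_present(epoch))  # True for this synthetic "seen face" epoch
```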

So, we are left with behaviour. We used to believe that flexible, goal-directed behaviour was a marker of consciousness. But the behaviour of some people with brain damage shows that this is not the case. For example, patients with blindsight, caused by damage to the visual cortex (Sanders, 1974), are able to respond to visual stimuli despite having no subjective awareness of vision. Since then, many experiments have been reported showing that there is a variety of tasks that people with intact brains can perform without the need for any subjective experience (e.g. Linzarini, 2017).

If there are any tasks for which subjective experience is necessary, then, perhaps, these could be used as markers of the presence of consciousness in an AI or any other kind of agent.

Are there any aspects of subjective awareness that directly affect our behaviour?

There is currently much interest in the feeling of confidence. I sometimes feel that I am not quite sure if I know the person approaching me. I can often feel very uncertain about whether I am making the right decision. Whether or not I can talk about these feelings of confidence, they will affect my behaviour. I will be more circumspect in my approach to the possible friend. I will seek more information before I make a decision which I am unsure about (Desender, 2018). If we can observe such behaviour, then it is likely that this agent is having a subjective experience. If an AI behaved like this, then we might conclude that it was conscious.

There is another advantage to this approach, which brings me back to the original question: will we ever make an AI with conscious experience?

Mathematical accounts are being developed which show how confidence might be computed and used to modify behaviour (e.g. Fleming, 2017). In principle, it should be possible to incorporate such computations into an AI. In other words, we may have a recipe for making a conscious AI.
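As a rough illustration of that recipe, the sketch below shows one minimal way confidence could be computed and used to modify behaviour. It is not Fleming's (2017) model: the evidence-sampling setup, the reliability and criterion parameters, and the function names are all hypothetical. The agent accumulates noisy evidence about a binary state, reads off confidence as the posterior probability that its current best guess is correct, and keeps gathering information until that confidence clears a criterion, which is the 'seek more information when unsure' behaviour described above.

```python
import numpy as np

# Minimal sketch, not Fleming's (2017) model: an agent accumulates noisy
# evidence about a binary state, computes confidence as the posterior
# probability that its current best guess is correct, and keeps sampling
# until that confidence exceeds a (hypothetical) criterion.
rng = np.random.default_rng(1)

def sample_evidence(true_state: int, reliability: float = 0.7) -> int:
    """One noisy observation: matches the true state with p = reliability."""
    return true_state if rng.random() < reliability else 1 - true_state

def decide_with_confidence(true_state: int, criterion: float = 0.95,
                           reliability: float = 0.7, max_samples: int = 50):
    log_odds = 0.0                               # log P(state=1) - log P(state=0)
    llr = np.log(reliability / (1 - reliability))
    for n in range(1, max_samples + 1):
        obs = sample_evidence(true_state, reliability)
        log_odds += llr if obs == 1 else -llr
        p_state1 = 1 / (1 + np.exp(-log_odds))
        choice = int(p_state1 > 0.5)
        confidence = p_state1 if choice == 1 else 1 - p_state1
        if confidence >= criterion:              # confident enough: commit now
            return choice, confidence, n
    return choice, confidence, max_samples       # forced guess at the deadline

choice, confidence, samples_used = decide_with_confidence(true_state=1)
print(choice, round(confidence, 3), samples_used)
```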

 

Will we ever make an AI with consciousness?

Carlos Montemayor has answered Unlikely

An expert from San Francisco State University in Philosophy

This question is ambiguous in the sense that self-awareness involves both intelligence and subjective experience. Intelligence is much easier to simulate than subjectivity. It could be that perfectly simulating intelligence is sufficient for the simulator to count as intelligent, although even this possibility is problematic, because intelligence also requires agency and motivation, and these are difficult to simulate with programs. In the case of subjectivity and experience, however, simulation might not be sufficient to count as conscious, and this could be an in-principle problem for creating AI that is self-aware in the way that we, as conscious, subjective beings, are. See this paper for an argument defending this claim: https://www.sciencedirect.com/science/article/pii/S1053810016301817?via%3Dihub

The problem might be even deeper. Intelligence is valuable in an epistemic sense: we value it because it helps us solve problems, acquire accurate information, and arrive at the truth. Consciousness, understood as subjective experience, might be valuable in a moral sense: we value it because it is unique and because conscious beings have a distinct first-person point of view that cannot be reproduced by programs. This distinction is deeply related to the dissociation between consciousness and attention: https://mitpress.mit.edu/books/consciousness-attention-and-conscious-attention

AI will certainly surprise us in many ways, but at least for now, it is unlikely that it will become conscious in a way similar to humans (at least with respect to subjective experience).

 

Will we ever make an AI with consciousness?

Karl Friston has answered Likely

An expert from University College London in Neuroimaging

Yes, I see no principled reason why we should not make a generalised artificial intelligence that possesses consciousness. Clearly, this depends upon how one defines consciousness. At present, the sort of criteria that could apply include:

  • Autonomous and embodied behaviour of an artefact with the capacity to select its own input (e.g., sensory) data.
  • A purposeful exchange with the world (and others) based upon abductive reasoning (technically, Bayesian inference and self-evidencing).
  • An implicit generative model of its world that, crucially, generates the predicted consequences of its action.
  • Action selection based upon the imperative to minimise or resolve uncertainty about the states of its world, and to realise prior preferences.
  • An integral part of that epistemic, information-seeking, uncertainty-resolving behaviour is a capacity for communication; namely, a solution to the problem of hermeneutics in language and communication.
  • This would necessarily entail theory of mind. In other words, the generative model must not only consider the future consequences of action but also include a model of other (abductive) agents in its world, such as ourselves.

If this list suffices as a description of a conscious artefact, then there is no (mathematical or engineering) reason why such an artefact could not be created in the future.
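As a very rough gesture at how some of the criteria above might look computationally, the toy sketch below scores candidate actions by a simplified 'expected free energy': an epistemic term (how much uncertainty the action is expected to resolve) plus a pragmatic term (how well the expected outcomes match prior preferences). It is a heavily simplified illustration under assumed numbers, not Friston's formal framework; the scenario, preference values, and function names are all invented for the example.

```python
import numpy as np

# Toy sketch under strong simplifying assumptions, not the formal framework:
# an agent holding a belief q over two hidden states scores each action by
# (i) how much uncertainty it expects the action to resolve and (ii) how well
# the expected outcomes match its prior preferences, then picks the action
# with the lowest simplified "expected free energy".
def entropy(q):
    q = np.clip(np.asarray(q, dtype=float), 1e-12, 1.0)
    return float(-(q * np.log(q)).sum())

# Assumed prior preferences over outcomes (log-probabilities): reward is preferred.
log_pref = {"reward": np.log(0.65), "no_reward": np.log(0.05), "cue": np.log(0.30)}

def expected_free_energy(action, q_left):
    q = np.array([q_left, 1.0 - q_left])
    if action == "peek_at_cue":
        # Assume the cue fully reveals the hidden state: epistemic gain = current entropy.
        epistemic = entropy(q)
        pragmatic = log_pref["cue"]
    else:
        p_correct = q[0] if action == "go_left" else q[1]
        epistemic = 0.0   # simplification: treat reward outcomes as uninformative
        pragmatic = p_correct * log_pref["reward"] + (1 - p_correct) * log_pref["no_reward"]
    return -epistemic - pragmatic

for q_left in (0.5, 0.95):
    scores = {a: expected_free_energy(a, q_left) for a in ("peek_at_cue", "go_left", "go_right")}
    print(q_left, min(scores, key=scores.get), {a: round(v, 2) for a, v in scores.items()})
```

With these assumed preference values, the agent 'peeks at the cue' when its belief is uncertain and commits to the preferred arm once it is confident: the uncertainty-resolving, preference-realising pattern the criteria describe.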