Can we understand consciousness yet?
Professor Mark Solms, Director of Neuropsychology at the University of Cape Town, South Africa, revives the Freudian view that consciousness is driven by basic physiological motivations such as hunger. Crucially, consciousness is not an evolutionary accident but is motivated. Motivated consciousness, he claims, provides evolutionary benefits.
Mark Solms. 2021. The Hidden Spring: A Journey to the Source of Consciousness. London: Profile Books. ISBN: 9781788167628.
He claims the physical seat of consciousness is in the brain stem, not the cortex. He further claims that artificial consciousness is not in principle a hard philosophical problem. The artificial construction of a conscious being, that mirrors in some way the biophysical human consciousness, would ‘simply’ require an artificial brain stem of some sort.
I have been wondering what it would be like to have injuries so radical as to destroy this physiological consciousness, if such a thing exists, while retaining the ability to speak coherently and to respond to speech. Perhaps a person in this condition would be like the old computer simulation, Eliza, which emulated conversation in a rudimentary fashion by responding with open comments and questions, such as “tell me more”, and by mirroring its human conversation partner. The illusion of consciousness was easily dispelled. The words were there but there was no conscious subject directing them. Since then, however, language processing has become significantly more advanced, and machine learning has given bots without consciousness the ability to hold what appears to be a conscious conversation. Yet still there’s a suspicion that something is missing.
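To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of pattern-matching and pronoun-mirroring Eliza performed. The patterns and stock responses are illustrative inventions, not Weizenbaum’s originals:

```python
import random
import re

# Pronoun reflection: "my work" -> "your work" -- the heart of Eliza's mirroring.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# Illustrative pattern/response rules, not Weizenbaum's originals.
RULES = [
    (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?", "Tell me more about feeling {0}."]),
    (re.compile(r"i am (.*)", re.I), ["How long have you been {0}?"]),
    (re.compile(r".*"), ["Tell me more.", "Please go on.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the user's own phrase."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    """Return a stock response for the first matching rule, mirroring the user's words."""
    for pattern, responses in RULES:
        match = pattern.match(utterance.strip())
        if match:
            reply = random.choice(responses)
            return reply.format(*(reflect(g) for g in match.groups()))

print(respond("I am worried about my work"))  # -> "How long have you been worried about your work?"
```

Even a handful of rules like this can sustain a short illusion of attention; the mirroring does all the work, and there is no subject behind it.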
One area of great advance is the ability of machine learning to take advantage of huge bodies of data, for example, a significant proportion of the text of all the books ever published, or literally billions of phone text messages, or billions of voice phone conversations. It’s possible to program interactions, with some sophistication, based on precedent: what is the usual kind of response to this kind of question? Unlike Eliza’s, the repertoire of speech doesn’t need to be predetermined and limited; it can be generated on the fly in an open-ended manner using AI techniques. But there’s still no experiencer there, and we (just about) recognise this lack. Even if we didn’t know it, and bots already passed among us incognito, they might still lack ‘consciousness’.
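As a toy illustration of responding by precedent, with a hand-made three-exchange corpus standing in for the billions of real examples, one could score a new question against past ones by word overlap and reply with the answer attached to the closest precedent:

```python
from collections import Counter

# A tiny stand-in corpus of past (question, reply) exchanges; real systems
# draw on billions of such precedents rather than a hand-made list.
CORPUS = [
    ("how are you", "I'm fine, thanks. And you?"),
    ("what is your name", "People call me a bot."),
    ("do you like music", "Yes, especially anything with a melody."),
]

def similarity(a: str, b: str) -> float:
    """Crude word-overlap score between two utterances."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    return sum((wa & wb).values()) / max(1, len(a.split()))

def respond_by_precedent(question: str) -> str:
    """Reply with the answer attached to the most similar past question."""
    best = max(CORPUS, key=lambda pair: similarity(question, pair[0]))
    return best[1]

print(respond_by_precedent("how are you today"))  # -> "I'm fine, thanks. And you?"
```

The point of the sketch is only that nothing in this loop experiences anything: the “usual kind of response” is retrieved, not felt.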
So, at what point does the artificial speaker become conscious? If the strictly biophysical view of consciousness is correct, the answer is never.
A chat bot will never “wake up” and recognise itself, because it lacks a brain stem, even an artificial one. Even if to an observer the chat bot appears fully conscious, at least functionally, this will always be an illusion, because there is no felt experience of what it is like to be a chat bot, phenomenologically.
From the perspective of neo-Freudian neuropsychology, it is easy to see why Freud grew exasperated with Carl Jung. Quite apart from the notorious personality clashes, it seems Jung departed fundamentally from Freud’s desire to relate psychological processes to their physical determinants. For example, what possible biophysical process would be represented by the phrase “collective unconscious” (see Mills 2019)?
For Freud, consciousness was strongly influenced by the unconscious, which was his term for the more basic drives of the body. The Id, for example, was his term for the basic desires: for food, for sex, to void, and so on. These were unconscious because the conscious mind receives them as demands from a location beyond itself, which it finds itself mediating.
He saw terms such as the Id, the Ego and the Superego as meta-psychological. He recognised how much was not yet known about the brain, such as where exactly the Id is located, but he denied that the Id was a metaphysical term. In other words, he claimed that the Id was located, physically, somewhere yet to be discovered. His difficulty was that he fully understood that his generation lacked the tools to discover where.
Note that meta-psychology is explicitly not metaphysics. Freud had no more interest in the metaphysical than other scientists of his time, or perhaps of ours. His terminology was a stopgap measure, meant to last only until the tools caught up with the programme.
The programme was always: to describe how the brain derives the mind.
Jung’s approach made a mockery of these aspirations. Surely no programme would ever locate the seat of the collective unconscious?
But perhaps this is a misunderstanding of the conflict between Freud and Jung. What if the distinction is actually between two conflicting views of the location of consciousness? For Freud, and for contemporary psychology, if consciousness is not located physically, either in the brain somewhere or in an artificial analogue of the brain, where could it possibly be located? Merely to ask the question seems to invite a chaos of metaphysical speculation. The proposals will be unfalsifiable, and therefore not scientific - “not even wrong”.
However, just as Mark Solms has proposed a re-evaluation of Freud’s project along biophysical lines, potentially acceptable in principle to materialists and empiricists (i.e. the entire psychological mainstream), perhaps it is possible for a re-evaluation of Jung’s programme along similar lines, but in a radically different direction.
If the brain is not the seat of the conscious, what possibly could be? This question reminds me of the argument in evolutionary biology about game theory. Prior to the development of game theory it was impossible to imagine what kind of mechanism, other than the biological, could possibly direct evolution. It seemed a non-question. Then along came John Maynard Smith’s application of game theory to ritualised conflict behaviour and altruism, which showed decisively that non-biological factors shape evolutionary change.
What if Jung’s terms could be viewed as just as meta-psychological as Freud’s, but with an entirely different substantive basis? Lacking the practical tools to investigate, Jung resorted to terms that mediated the contemporary understanding of how language (and culture more generally), not biology, constructs consciousness.
What else is “the collective unconscious”, if not an evocative meta-psychological term for the corpus of machine learning?
Perhaps consciousness is just a facility with a representative subset of the whole culture.
I’m wary of over-using the term ‘emergence’. I don’t want to speak of consciousness as an emergent property, not least because every sentence with that word in it still seems to make sense if you substitute the word ‘mysterious’. In other words, ‘emergence’ seems to do no explanatory work at all. It just defers the actual, eventual explanation. Even the so-called technical definitions seem to perform this trick and no more.
However, it’s still worth asking the question, when does consciousness arise? As far as I can understand Mark Solms, the answer is, when there’s a part of the brain that constructs it biophysically, and therefore, perhaps disturbingly, when there’s an analogue machine that reconstructs it, for example, computationally.
My scepticism responds: knowing exactly where consciousness happens is a great advance for sure, but this is still a long way from knowing how consciousness starts. The fundamental origin of consciousness still seems to be shrouded in mystery. And at this point you might as well say it’s an ‘emergent’ property of the brain stem.
For Solms, feeling is the key. Consciousness is the theatre in which discernment between conflicting drives plays out. Let’s say I’m really thirsty but also really tired. I could fetch myself a drink but I’m just too weary to do so. Instead, I fall asleep. What part of me is making these trade-offs between competing biological drives? On Solms’s account, this decision-making is precisely what consciousness is for. If all behaviour were automatic, there would be nothing for consciousness to do.
As Solms claims in a recent paper (2022) on animal sentience, there is a key minimal (functional) criterion for consciousness:
The organism must have the capacity to satisfy its multiple needs – by trial and error – in unpredicted situations (e.g., novel situations), using voluntary behaviour.
The phenomenological feeling of consciousness, then, might be no more than the process of evaluating the success of such voluntary decision-making in the absence of a pre-determined ‘correct’ choice. He says:
It is difficult to imagine how such behaviour can occur except through subjective modulation of its success or failure within a phenotypic preference distribution. This modulation, it seems to me, just is feeling (from the viewpoint of the organism).
Then there’s the linguistic-cultural approach that I’ve fancifully been calling a kind of neo-Jungianism [1]. When does consciousness emerge? The answer seems to be that the culture is conscious, and sufficient participation in its networks is enough for it to arise. If this sounds extremely unlikely (and it certainly does to me), consider two factors that might minimise the task in hand - first, that most language is merely transactional and second, that most awareness is not conscious.
As in the case of chat bots, much of what passes for consciousness is actually merely the use of transactional language, which is why Eliza was such a hit when it first came out. This transactional language could in principle be dispensed with, and bots could just talk to other bots. What then would be left? What part of linguistic interaction actually requires consciousness? Perhaps the answer is not much. Furthermore, even complex human consciousness spends much of the time on standby. Not only are we asleep for a third of our lives, but even when we’re awake we are often not fully conscious. So much of our lives is effectively automatic or semi-automatic.
When we ask “what is it like…?”, the answer is often that it’s not really like anything.
The classic example is the feeling of having driven home from work, fully aware, presumably, of the traffic conditions, but with no recollection of the journey. It’s not merely that there’s no memory of the trip, it’s that, slightly disturbingly, there was no real felt experience of the trip to have a memory about. This is disturbing because of the suspicion that perhaps a lot of life is actually no more strongly experienced than this.
These observations don’t remove the task of explaining consciousness, but they do point to the possibility that the eventual explanation may be less dramatic than it might at first appear.
For the linguistic (neo-Jungian??) approach to consciousness the task then is to devise computational interactions sufficiently advanced as to cause integrated pattern recognition and manipulation to become genuinely self aware.
A great advantage of this approach is that it doesn’t matter at all if consciousness never results. Machine learning will still advance fruitfully.
For the biophysical (neo-Freudian) approach, the task is to describe the physical workings of self awareness in the brain stem so as to make its emulation possible in another, presumably computational, medium.
A great advantage of this approach is that even if the physical basis of consciousness is not demystified, neuropsychology will still understand more about the brain stem.
As far as I can see, both of these tasks are monumental, and one or both might fail. However, as I’ve described them, they seem to be converging on the idea that consciousness can in principle be abstracted from the mammalian brain and placed somewhere else, whether physical or virtual: derived either from the individual brain, analogue or digital, or from the collective corpus.
I noticed in the latter part of Professor Solms’s book a kind of impatience for a near future in which the mysteries of consciousness are resolved. I wonder if this is in part the restlessness of an older man who would rather not accept that he might die before seeing at least some of the major scientific breakthroughs that his life’s work has prepared for. Will we work out the nature of consciousness in the next few years, or will this puzzle remain, for a future generation to solve? I certainly hope we have answers soon!
References:
Mills, J. (2019). The myth of the collective unconscious. Journal of the History of the Behavioral Sciences, 55(1), 40-53.
Solms, M. (2022). Truly minimal criteria for animal sentience. Animal Sentience, 32(2). DOI: 10.51291/2377-7478.1711
-
[1] To clarify, I’m claiming, with Solms, that Freud’s pursuit was meta-psychological, not metaphysical. In contrast, I’m going further than Solms and reading Jung against himself here. Jung seems to have taken a strongly metaphysical approach (Mills 2019), whereas I’m suggesting his programme may nevertheless be treated as a non-metaphysical but meta-psychological enquiry into the relationship between consciousness and human culture, not the brain. Mark Solms took part in a discussion on the differences between Freud and Jung.