Here’s how we journey beyond the ‘hero’s journey’.

a cat sits half-hidden in a paper carrier bag on the floor

Yes, we can be heroes, but does that mean we should be?

Yes, really, we can be heroes. Thanks very much, David Bowie! But if this sounds attractive, perhaps we should be careful what we wish for.

Do you want to be the hero of your own story? Perhaps you already are

According to reporting in Scientific American, imagining yourself as the hero of your own life gives you an increased sense of meaning.

“Our research reveals that the hero’s journey is not just for legends and superheroes. In a recent study published in the Journal of Personality and Social Psychology, we show that people who frame their own life as a hero’s journey find more meaning in it”.

But it’s not always great to be a hero

Meanwhile, from a quite different research perspective comes a warning: Stanley and Kay (2023) caution that making people out to be heroic can inadvertently single them out for poor treatment from their peers.

“our studies show that heroization ultimately promotes worse treatment of the very groups that it is meant to venerate.”

Reading this I immediately thought of all those ‘heroic’ health workers who helped their communities through the COVID crisis at great personal cost and with very little long-term recognition (McAllister et al. 2020). In far too many cases, calling doctors, nurses and hospital workers heroes and even super-heroes ended up as quite tokenistic, little more than a way of justifying the exploitation of their labour. First they make you a hero, then they make you burn out.

Part of an artwork by Banksy showing a nurse doll as a caped superhero

Banksy’s artwork of a child playing with a nurse ‘superhero’ doll raised more than £16m for charity… but nurses' pay and conditions didn’t take off

Sometimes heroism isn’t what it seems

And here’s yet another, quite different warning: sometimes the person who sets themselves up as a classic hero is revealed to be anything but. The case of Australian SAS fighter Ben Roberts-Smith is an extraordinary example of the moral jeopardy of a whole society desperate to believe in heroism. It seems this decorated and celebrated ‘war hero’ was really quite the opposite. The cover-up shows how much people want to believe in heroes, even when they don’t exist. This real-life tale has echoes of Beowulf. In Maria Dahvana Headley’s contemporary version (2021), the final words of that centuries-old tale ring painfully, bleakly hollow with macho delusion:

“He rode hard! He stayed thirsty! He was the man! He was the man.”

So can we journey beyond the ‘hero’s journey’ already?

The hero’s journey trope has become so ubiquitous that it’s sometimes hard to remember that there’s any other kind of story. But there certainly is.

  • Maureen Murdock (1990) and later Gail Carriger (2020) have both presented feminised versions of the heroic quest narrative. I’m not convinced that these heroines’ journeys are really all that different, though, since they still assume that heroism, albeit that of women, is where it’s at. At least there’s an attempt to re-balance the faulty idea that only men are at the centre.
  • The New Yorker published a moving non-fiction account by Laura Secor of an Iranian woman’s bravery. The true story of journalist Asieh Amini doesn’t rely on a standard heroic arc, yet is highly effective. This is only one example of very many alternatives.
  • Novelist Becky Chambers points out in a talk on YouTube that real life has no protagonists. Surely this can help us to question stale narrative forms, especially those which claim to be true to reality.
  • Meanwhile, Christina de la Rocha is on a noble quest to put an end to the hero’s journey in literature and beyond. OK, not a quest. Perhaps she’d approve of Ursula Le Guin’s claim that “the novel is a fundamentally unheroic kind of story”. More on that in a moment.
  • Screenwriter Anthony Mullins has written a whole book showing that there’s plenty more than only one kind of character arc.

And author Jane Alison goes even further. In her book Meander, Spiral, Explode she notes that there are far more key patterns in literature than just the arc.

Not every story is a journey

Taking her cue from Joseph Frank’s book The Idea of Spatial Form, and from Peter Stevens’s Patterns in Nature, Alison identifies some alternative or complementary shapes.

I particularly like her concept of the story that meanders like a river, or ripples in waves and wavelets. These aquatic images remind me of something the former monk and psychotherapist Thomas Moore said about how life itself has a kind of liquidity to it:

“Your story is a kind of water, making fluid the brittle events of your life. A story liquefies you, prepares you for more subtle transformations. The tales that emerge from your dark night deconstruct your existence and put you again in the flowing, clear, and cool river of life.” (Moore, 2004, p. 61)

In his book on spatial form, Joseph Frank examines the structure of Djuna Barnes’s modernist novel, Nightwood. This novel doesn’t have a hero’s journey or a flowing river, but instead has a series of views or glimpses of life. He says:

“The eight chapters of Nightwood are like searchlights, probing the darkness each from a different direction, yet ultimately focusing on and illuminating the same entanglement of the human spirit . . . And these chapters are knit together, not by the progress of any action . . . but by the continual reference and cross-reference of images and symbols which must be referred to each other spatially throughout the time-act of reading.”

This searchlight metaphor is illuminating, but story structure can be looser and more diffuse still than rivers and searchlights. I’m particularly taken with Ursula Le Guin’s carrier bag theory of fiction. Remember she said the novel is a fundamentally unheroic kind of story? If so, then what is it?

“the natural, proper, fitting shape of the novel might be that of a sack, a bag. A book holds words. Words hold things. They bear meanings. A novel is a medicine bundle, holding things in a particular, powerful relation to one another and to us.”

Le Guin’s insight is itself based on the carrier bag theory of human evolution, as described in Elizabeth Fisher’s Woman’s Creation (1979).

“The first cultural device was probably a recipient …. Many theorizers feel that the earliest cultural inventions must have been a container to hold gathered products and some kind of sling or net carrier.” 1

Not everyone needs to be a hero to be a valid person. Mostly it’s better when we’re not. And not every story needs to be a hero’s journey for it to be worth the telling. The idea of the hero can be useful in some circumstances, dangerous in others. But more often it just gets in the way. Sometimes it’s really about “complex skills and compassion”. Sometimes it’s less about hunting and more about gathering.

So now, do you still want to be a hero, you hero you?

a cat sits half-hidden in a paper carrier bag on the floor

Now read:

More than ever, embracing your humanity is the way forward


References

Alison, Jane. 2019. Meander, Spiral, Explode. Design and Pattern in Narrative. New York: Catapult.

Barber, Elizabeth Wayland. 1994. Women’s Work: The First 20,000 Years; Women, Cloth, and Society in Early Times. New York, NY: Norton.

Barnes, Djuna. 2006/1937. Nightwood. New York: New Directions.

Carriger, Gail. 2020. The Heroine’s Journey: For Writers, Readers, and Fans of Pop Culture. Gail Carriger LLC.

Fisher, Elizabeth. 1979. Woman’s Creation: Sexual Evolution and the Shaping of Society. 1st ed. Garden City, NY: Anchor Press.

Frank, Joseph. 1991. The Idea of Spatial Form. New Jersey: Rutgers University Press.

Headley, Maria Dahvana. 2021. Beowulf. A New Translation. Melbourne and London: Scribe.

Kaul, Aashish. 2014. Mapping space in fiction: Joseph Frank and the idea of spatial form. 3am Magazine.

Le Guin, Ursula K. 1989. “The Carrier Bag Theory of Fiction” in Dancing at the Edge of the World. New York: Grove Atlantic Press. Accessed at stillmoving.org/resources…

McAllister, Margaret, Donna Lee Brien and Sue Dean. 2020. The problem with the superhero narrative during COVID-19. Contemporary Nurse, 56(3), 199-203. DOI: 10.1080/10376178.2020.1827964

Moore, Thomas. 2004. Dark Nights of the Soul. London, UK: Piatkus Books.

Mullins, Anthony. 2021. Beyond the Hero’s Journey: A Screenwriting Guide for When You’ve Got a Different Story to Tell. Sydney, N.S.W: NewSouth Publishing.

Murdock, Maureen. 1990. The Heroine’s Journey. Boston, MA: Shambhala Publications.

Rogers, B. A., Chicas, H., Kelly, J. M., Kubin, E., Christian, M. S., Kachanoff, F. J., Berger, J., Puryear, C., McAdams, D. P., & Gray, K. 2023. Seeing your life story as a Hero’s Journey increases meaning in life. Journal of Personality and Social Psychology, 125(4), 752–778. https://doi.org/10.1037/pspa0000341

Secor, Laura. 2015. War of Words. The New Yorker.

Stanley, M. L., & Kay, A. C. 2023. The consequences of heroization for exploitation. Journal of Personality and Social Psychology. Advance online publication. https://doi.org/10.1037/pspa0000365

Stevens, Peter. 1974. Patterns in Nature. New York: Little, Brown & Co.



  1. See also Elizabeth Wayland Barber (1994) on women’s role in technology, textiles and the string revolution. ↩︎

Give it, give it all, give it now

Looks like you can all relax. Everyone’s ‘pivoting’ these days, so why not Satan? 📷

A sign at a traditional funfair promotes Funnyland and the Devil's slide.

Mark Luetke shows how he uses a Zettelkasten for creative work (‘zines!)

“The goal here is to create an apophenic mindset - one where the mind becomes open to the random connections between objects and ideas. Those connections are the spark we’re after. That spark is inspiration.”

dophs.substack.com/p/how-its…

Atomic notes - all in one place

From today there’s a new category in the navigation bar of Writing Slowly.

‘Atomic Notes’ now shows all posts about making notes.

How to make effective notes is a long-standing obsession of mine, but this new category was inspired by Bob Doto, who has his own fantastic resource page: All things Zettelkasten.

Atomic Notes

The Atomic Notes category is now highlighted on the site navigation bar.

And if you’d like to follow along with your favourite feed reader, there’s also a dedicated RSS feed (in addition to the more general whole-site feed).1

But if there’s a particular key-word you’re looking for here at Writing Slowly, you can use the built-in search.

And if you prefer completely random discovery, the site’s lucky dip feature has you covered.

Connect with me on micro.blog or on Mastodon. And on Reddit, I’m - you guessed it - @atomicnotes.

See also:

Assigning posts to a new category with micro.blog


  1. If you’re not sure what website feeds are, see IndieWeb: feed reader and how to use RSS feeds. ↩︎

A new post category in micro.blog, filtered to include existing posts

Micro.blog is a really useful and easy way to host a website. Even though it feels more like a cottage industry than a corporation, there are way more features (and apps!) than I’ll probably ever use. It’s amazing how much Manton Reece, micro.blog’s creator, has achieved.

Under the hood the micro.blog platform is based on the Hugo static site generator, but there are a few differences. One such difference is post categories.

screenshot of how to create a new category in microdotblog

Here’s a new category being created.

It’s very easy to create a new category of posts, and then you can use a filter to automatically add all new posts that include a selected key-word (or emoji, or even HTML element). By default only new posts are affected, but by running the filter you can also add all previous posts that meet the selected criteria. That’s what I wanted to do.

screenshot of how to add a filter in microdotblog

Once you have a new category, you can add a filter. This particular filter assigns to this new category only long posts with a particular word in the text.

screenshot of how to run a filter in microdotblog

When you run the filter, all existing posts that match will be added to the category. And future posts will be added automatically.

screenshot showing the RSS feed of a new category in microdotblog

Also, each category gets its own RSS feed, which can be very useful.

This process was much easier than I expected!
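For the curious, the logic of such a filter boils down to a keyword test applied to each post. Here’s a rough Python sketch of that idea; the post data and the `apply_filter` helper are invented for illustration, and this is nothing to do with micro.blog’s actual internals or API.

```python
# Conceptual sketch of a category filter: not the micro.blog API,
# just the keyword-matching idea it implements.
posts = [
    {"title": "On making notes", "text": "Some thoughts on atomic notes...", "categories": []},
    {"title": "Moonlight", "text": "A photo from the headland.", "categories": []},
]

def apply_filter(posts, keyword, category):
    """Assign `category` to every post whose text contains `keyword`."""
    for post in posts:
        if keyword.lower() in post["text"].lower():
            post["categories"].append(category)
    return posts

# Running the filter backfills existing posts; new posts would be checked as they arrive.
apply_filter(posts, keyword="atomic notes", category="Atomic Notes")
print([p["title"] for p in posts if "Atomic Notes" in p["categories"]])
```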



How to overcome Fetzenwissen: the illusion of integrated thought

It’s too easy to produce fragmentary knowledge.

One potential problem associated with making notes according to the Zettelkasten approach is Verknüpfungszwang: the compulsion to find connections. It may be true philosophically that everything’s connected, but in the end what matters is useful or meaningful connections. With your notes, then, you need to make worthwhile, not indiscriminate, links.

Another potential problem is Fetzenwissen: fragmentary knowledge, along with the illusion that disjointed fragments can produce integrated thought.

Almost by definition, notes are brief, and I’m an enthusiast of making short, modular, atomic notes. Yes, this results in knowledge presented in fragments. And in their raw form these fragmentary notes are quite different from the kind of coherent prose and well-developed arguments readers usually expect. You can’t just jam together a set of notes and expect them to make an instant essay. So is this fragmentary knowledge really a problem for note-making? If so, how can determined note-makers overcome it?

  • Does the index box distort the facts?

  • Can you create coherent writing just from a pile of notes?

  • Perhaps you should keep your notes private

  • Make it flow

  • To create coherent writing, make coherent notes

Read More →

From fragments you can build a greater whole

Everything large and significant began as small and insignificant

This is my working philosophy of creativity and I’m trying to follow it through as best I can. Starting with simple parts is how you go about constructing complex systems.

“A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system”. — John Gall (1975) Systemantics: How Systems Really Work and How They Fail, p. 71.

An artwork by Lawrence Weiner, entitled Bits and pieces put together to present a semblance of a whole

Bits and pieces put together to present a semblance of a whole, by Lawrence Weiner

  • Begin with fragments

  • From smaller parts build a greater whole

  • Join your work together

  • Do it seamlessly well

Read More →

How to decide what to include in your notes

Before the days of computers, people used to collect all sorts of useful information in a commonplace book.

The ancient idea of commonplaces was that you’d have a set of subjects you were interested in. These were the loci - the places - where you’d put your findings. They were called loci communes - common places, in Latin - because it was assumed everyone knew what the right list of subjects was.

But in practice, everyone had their own set of categories and no one really agreed. It was personal.

Since the digital revolution, things have become trickier still. There’s no real storage limit, so you could in principle make notes about everything you encounter. But no matter what software you use, your time on this earth is limited, so you need to narrow the field down somehow.1

But how, exactly?

You might consider just letting rip and collecting whatever interests you, as though you really could collect everything.

Sacha Chua's summary of Lion Kimbro's book, How to make a complete map of every thought you think

Lion Kimbro tried to make a map of every thought he had.

As time passes, you’ll notice that you haven’t actually collected everything because that’s completely impossible. Even Thomas Edison, the prolific inventor, wasn’t interested in absolutely everything, although he tried hard to be. If you do a bit of a stock-take of your own notes, you’ll see that, really, you gravitate towards only a few subjects.

These are your very own ‘commonplaces’.

From then on you have two choices.

  1. If you’ve enjoyed it so far, you can just keep doing what you’ve been doing, collecting all the things. Why not?
  2. But if you like, you could start doing it more deliberately. For example, at the start of a new year, you could say to yourself: In 2023 I seem to have been interested in a, b, and c. Now in 2024 I want to explore more about b, drop a, and learn about d and e.

You could create an index, with a set of keywords, and add page number references to show what subject each entry is about, and how they relate. Or not. Of course, it’s your collection of notes and you can do whatever pleases you. That’s the point.

the bower of a satin bower bird. The male bird collects blue items to attract the female.

Bower birds collect everything, but with one crucial principle.

Where I live we have satin bower birds.

The male creates a bower out of twigs and strews the ground with the beautiful things he’s found. Apparently this impresses the females. The bower can contain practically anything, and it really is beautiful. Clothes pegs, pieces of broken pottery, plastic fragments, bread bag ties, lilly pilly fruit, Lego, electrical wiring, string - even drinking straws, as in the photo above. The male bower bird really does collect everything. But what every human notices immediately is that every single item, however unique, is blue.

I enjoy collecting stuff in my Zettelkasten, my collection of notes, but like the bower bird I have a simple filter. I always try to write: “this interests me because…” and if there’s nothing to say, there’s no point in collecting the item. It’s just not blue enough.
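If you wanted to make that filter explicit, it might look something like this rough Python sketch; the `capture` helper and the note contents are invented for illustration, not a real tool I use.

```python
# A hypothetical capture filter: a note is only kept if I can say why it interests me.
def capture(source, excerpt, because=""):
    """Keep the item only if there's a reason attached ('this interests me because...')."""
    if not because.strip():
        return None  # not blue enough: nothing to say, so nothing to collect
    return {"source": source, "excerpt": excerpt, "why": because.strip()}

note = capture(
    source="Peter Stevens, Patterns in Nature",
    excerpt="Meanders, spirals and branching forms recur across scales.",
    because="It suggests story shapes beyond the narrative arc.",
)
print(note)
```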


See also:

Images:
Sacha Chua, Book Summary, CC BY 4.0.
Peter Ostergaard, Flickr, CC BY-NC 2.0.


  1. There are exceptions. A few people have tried to video their whole lives. And at least one person, Lion Kimbro, has tried to write down all their thoughts. But it’s not sustainable. ↩︎

Promethean shame among authors using AI tools:

“It starts to make you wonder, do I even have any talent if a computer can just mimic me?” The Verge

At last, writing slowly is back in fashion!

Cal Newport, author of the forthcoming book, 📚Slow Productivity, has finally latched on to the premise of this website: you can get a lot done by writing slowly.

Speeding up in pursuit of fleeting moments of hyper-visibility is not necessarily the path to impact. It’s in slowing down that the real magic happens.

I didn’t even know they could drive.


Thinking nothing of walking long distances

How far is too far to walk?

Author Charlie Stross observed that British people in the early nineteenth century, prior to train travel, walked a lot further than people today think of as reasonable.

I’ve noticed a couple of literary examples of this seemingly extreme walking behaviour, both of which took place in North Wales.

Headlong Hall

In chapter 7 of Thomas Love Peacock’s satirical novel, 📚Headlong Hall (1816), a group of the main characters takes a morning walk to admire the land drainage scheme around the newly industrial village of Tremadoc, and they walk halfway across Eryri to do so, traversing two valleys and two mountain passes. The main object of their interest is The Cob, a land reclamation project that was later to become a railway causeway. Having seen it, and having taken some refreshment in the village, they walk straight back again.

A view of Traeth Mawr, Wales, from the Cob, looking towards the Moelwyn mountain range

Image: The Moelwyn range, viewed from the Cob. Wikipedia, CC ShareAlike 2.0

Wild Wales

You’d think the invention of the railways would have put people off walking such long distances, but apparently not so much. In his travel account, 📚Wild Wales (1862), George Borrow walks 18 miles from Chester to Llangollen, then walks another 11 miles to Wrexham just to fetch a book. Interestingly, he was writing after the railways had arrived. He was happy to put his wife and children on the train - but still walk the journey himself.

Real life

I would have believed these feats of everyday walking were improbable, except for the fact that when I was a child, a man in our village, Mr Large, walked every day to and from Chester, a round trip of 26 miles. He didn’t need to do it. He was in his eighties and well retired, and he could just have walked two miles to the bus stop. But apparently you don’t break the habits of a lifetime. Everyone in the village must have offered him a lift at one time or another, but he’d made it known that he preferred to walk. So having observed Mr Large regularly tramping the back lanes with determination, I already knew a long utility walk is more than possible.

These days, people rarely get out of their cars, convinced as they are that progress has been made. Walking is a problem, it seems, not a solution. And yet, on holiday, some people do long walks or even very long walks. For fun.

Oh brave new world that has such people in it!

Does the Zettelkasten have a top and a bottom?

What does it mean to write notes ‘from the bottom up’, instead of ‘from the top down’?

It’s one of the biggest questions people have about getting started with making notes the Zettelkasten way. Don’t you need to start with categories? If not, how will you ever know where to look for stuff? Won’t it all end up in chaos?

Bob Doto answers this question very helpfully, with some clear examples, in What do we mean when we say bottom up?. I especially like this claim:

“The structure of the archive is emergent, building up from the ideas that have been incorporated. It is an anarchic distribution allowing ideas to retain their polysemantic qualities, making them highly connective.”

  • Which way is up?

  • Try seeing the trees and the forest too

  • Hierarchy, heterarchy, homoarchy… am I just making these words up?

  • Get linking to get thinking

  • The key questions

  • What if I really just want a fixed structure?

Read More →

Can we understand consciousness yet?

Professor Mark Solms, Director of Neuropsychology at the University of Cape Town, South Africa, revives the Freudian view that consciousness is driven by basic physiological motivations such as hunger. Crucially, consciousness is not an evolutionary accident but is motivated. Motivated consciousness, he claims, provides evolutionary benefits.

a cover shot of Mark Solms' book, The Hidden Spring

Mark Solms. 2021. The Hidden Spring. A Journey to the Source of Consciousness. London: Profile Books. ISBN: 9781788167628

He claims the physical seat of consciousness is in the brain stem, not the cortex. He further claims that artificial consciousness is not in principle a hard philosophical problem. The artificial construction of a conscious being, that mirrors in some way the biophysical human consciousness, would ‘simply’ require an artificial brain stem of some sort.

I have been wondering what it would be like to have injuries so radical as to destroy physiological consciousness, if such a thing exists, while retaining the ability to speak coherently and to respond to speech. Perhaps a person in this condition would be like the old computer simulation, Eliza, which emulated conversation in a rudimentary fashion by responding with open comments and questions, such as “tell me more”, and by mirroring its human conversation partner. The illusion of consciousness was easily dispelled. The words were there but there was no conscious subject directing them. Since then, however, language processing has become significantly more advanced, and machine learning has improved the ability of bots without consciousness to hold what appears to be a conscious conversation. Yet still there’s a suspicion that something is missing.
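To make that mechanism concrete, here’s a minimal Python sketch of an Eliza-style responder; the word list and prompts are invented for illustration, and it resembles the original 1960s program only in the basic trick of mirroring plus open questions.

```python
import random
import re

# A tiny Eliza-style responder: no understanding, just mirroring and open prompts.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "mine": "yours"}
OPEN_PROMPTS = ["Tell me more.", "How does that make you feel?", "Why do you say that?"]

def mirror(utterance):
    """Swap first- and second-person words to echo the speaker back."""
    words = re.findall(r"[\w']+", utterance.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(utterance):
    # Half the time, mirror the input; otherwise fall back on a stock open question.
    if utterance.strip() and random.random() < 0.5:
        return f"Why do you say {mirror(utterance)}?"
    return random.choice(OPEN_PROMPTS)

if __name__ == "__main__":
    print(respond("I am worried about my memory"))
    # e.g. "Why do you say you are worried about your memory?"
```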

One area of great advance is the ability of machine learning to take advantage of huge bodies of data: for example, a significant proportion of the text of all the books ever published, or literally billions of phone text messages, or billions of voice phone conversations. It’s possible to program interactions with some sophistication, based on precedent: what is the usual kind of response to this kind of question? Unlike Eliza’s, the repertoire of speech doesn’t need to be predetermined and limited; it can be generated on the fly, in an open-ended manner, using AI techniques. But there’s still no experiencer there, and we (just about) recognise this lack. Even if we didn’t know it, and bots already passed among us incognito, they might still lack ‘consciousness’.

So, at what point does the artificial speaker become conscious? If the strictly biophysical view of consciousness is correct, the answer is never.

A chat bot will never “wake up” and recognise itself, because it lacks a brain stem, even an artificial one. Even if to an observer the chat-bot appears fully conscious, at least functionally, this will always be an illusion, because there is no felt experience of what it is like to be a chat bot, phenomenologically.

From the perspective of neo-Freudian neuropsychology, it is easy to see why Freud grew exasperated with Carl Jung. Quite apart from the notorious personality clashes, it seems Jung departed fundamentally from Freud’s desire to relate psychological processes to their physical determinants. For example, what possible biophysical process would be represented by the phrase “collective unconscious” (see Mills 2019)?

For Freud, consciousness was strongly influenced by the unconscious, which was his term for the more basic drives of the body. The Id, for example, was his term for the basic desires for food, for sex, to void, and so on. These were unconscious because the conscious mind receives this information as demands from a location beyond itself, demands which it finds itself mediating.

He saw terms such as the Id, the Ego and the Superego as meta-psychological. He acknowledged how much was not then known about the brain, such as where exactly the Id is located, but he denied that the Id was a metaphysical term. In other words, he claimed that the Id was located, physically, somewhere yet to be discovered. His difficulty was that he fully understood his generation lacked the tools to discover where.

Note that meta-psychology is explicitly not metaphysics. Freud had no more interest in the metaphysical than other scientists of his time, or indeed of ours. His terminology was a stopgap measure, meant to last only until the tools caught up with the programme.

The programme was always: to describe how the brain derives the mind.

Jung’s approach made a mockery of these aspirations. Surely no programme would ever locate the seat of the collective unconscious?

But perhaps this is a misunderstanding of the conflict between Freud and Jung. What if the distinction is actually between two conflicting views of the location of consciousness? For Freud, and for contemporary psychology, if consciousness is not located physically, either in the brain somewhere or in an artificial analogue of the brain, where could it possibly be located? Merely to ask the question seems to invite a chaos of metaphysical speculation. The proposals will be unfalsifiable, and therefore not scientific - “not even wrong”.

However, just as Mark Solms has proposed a re-evaluation of Freud’s project along biophysical lines, potentially acceptable in principle to materialists and empiricists (i.e. the entire psychological mainstream), perhaps it is possible for a re-evaluation of Jung’s programme along similar lines, but in a radically different direction.

If the brain is not the seat of consciousness, what possibly could be? This question reminds me of the argument in evolutionary biology about game theory. Prior to the development of game theory it was impossible to imagine what kind of mechanism, other than the biological, could possibly direct evolution. It seemed a non-question. Then along came John Maynard Smith’s application of game theory to ritualised conflict behaviour and altruism, which showed decisively that strategic, non-biological factors can shape evolutionary change.
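For readers who haven’t met it, the canonical example is the hawk-dove game: what spreads is determined by the expected payoffs of strategies, not by any particular piece of anatomy. Here’s a quick sketch of the standard payoff logic; the numbers are arbitrary illustrations, with V the value of the contested resource and C the cost of escalated fighting.

```python
# Hawk-Dove payoffs from Maynard Smith's model of ritualised conflict.
# V = value of the contested resource, C = cost of escalated fighting.
V, C = 4.0, 10.0

def payoff(me, other):
    """Expected payoff to `me` when meeting `other` ('hawk' or 'dove')."""
    if me == "hawk" and other == "hawk":
        return (V - C) / 2      # both escalate; half the time you win, half you're injured
    if me == "hawk" and other == "dove":
        return V                # the dove retreats, the hawk takes the resource
    if me == "dove" and other == "hawk":
        return 0.0              # retreat: no resource, but no injury either
    return V / 2                # two doves share the resource after a ritual display

# When C > V, neither pure strategy is stable; the evolutionarily stable
# mix plays hawk with probability V/C, i.e. ritualised restraint persists.
print(payoff("hawk", "hawk"), payoff("dove", "dove"), "ESS hawk frequency:", V / C)
```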

What if Jung’s terms could be viewed as being just as meta-psychological as Freud’s, but with an entirely different substantive basis? Lacking the practical tools to investigate, Jung resorted to terms that gestured towards the way language (and culture more generally), not biology, constructs consciousness.

What else is “the collective unconscious”, if not an evocative meta-psychological term for the corpus of machine learning?

Perhaps consciousness is just a facility with a representative subset of the whole culture.

I’m wary of over-using the term ‘emergence’. I don’t want to speak of consciousness as an emergent property, not least because every sentence with that word in it still seems to make sense if you substitute the word ‘mysterious’. In other words, ‘emergence’ seems to do no explanatory work at all. It just defers the actual, eventual explanation. Even the so-called technical definitions seem to perform this trick and no more.

However, it’s still worth asking the question, when does consciousness arise? As far as I can understand Mark Solms, the answer is, when there’s a part of the brain that constructs it biophysically, and therefore, perhaps disturbingly, when there’s an analogue machine that reconstructs it, for example, computationally.

My scepticism responds: knowing exactly where consciousness happens is a great advance for sure, but this is still a long way from knowing how consciousness starts. The fundamental origin of consciousness still seems to be shrouded in mystery. And at this point you might as well say it’s an ‘emergent’ property of the brain stem.

For Solms, feeling is the key. Consciousness is the theatre in which discernment between conflicting drives plays out. Let’s say I’m really thirsty but also really tired. I could fetch myself a drink but I’m just too weary to do so. Instead, I fall asleep. What part of me is making these trade-offs between competing biological drives? On Solms’s account, this decision-making is precisely what consciousness is for. If all behaviour were automatic, there would be nothing for consciousness to do.

As Solms claims in a recent paper (2022) on animal sentience, there is a minimal key (functional) criterion for consciousness:

The organism must have the capacity to satisfy its multiple needs – by trial and error – in unpredicted situations (e.g., novel situations), using voluntary behaviour.

The phenomenological feeling of consciousness, then, might be no more than the process of evaluating the success of such voluntary decision-making in the absence of a pre-determined ‘correct’ choice. He says:

It is difficult to imagine how such behaviour can occur except through subjective modulation of its success or failure within a phenotypic preference distribution. This modulation, it seems to me, just is feeling (from the viewpoint of the organism).

Then there’s the linguistic-cultural approach that I’ve fancifully been calling a kind of neo-Jungianism.1 When does consciousness emerge? The answer seems to be that the culture is conscious, and sufficient participation in its networks is enough for consciousness to arise. If this sounds extremely unlikely (and it certainly does to me), consider two factors that might minimise the task in hand: first, that most language is merely transactional, and second, that most awareness is not conscious.

As in the case of chat bots, much of what passes for consciousness is actually merely the use of transactional language, which is why Eliza was such a hit when it first came out. This transactional language could in principle be dispensed with, and bots could just talk to other bots. What then would be left? What part of linguistic interaction actually requires consciousness? Perhaps the answer is: not much. Furthermore, even complex human consciousness spends much of its time on standby. Not only are we asleep for a third of our lives, but even when we’re awake we are often not fully conscious. Much of our lives is effectively automatic or semi-automatic.

When we ask “what is it like…?”, the answer is often that it’s not really like anything.

The classic example is the feeling of having driven home from work, fully aware, presumably, of the traffic conditions, but with no recollection of the journey. It’s not merely that there’s no memory of the trip; it’s that, slightly disturbingly, there was no real felt experience of the trip to have a memory about. This is disturbing because of the suspicion that perhaps a lot of life is actually no more strongly experienced than this.

These observations don’t remove the task of explaining consciousness, but they do point to the possibility that the eventual explanation may be less dramatic than it might at first appear.

For the linguistic (neo-Jungian??) approach to consciousness, the task then is to devise computational interactions sufficiently advanced as to cause integrated pattern recognition and manipulation to become genuinely self-aware.

A great advantage of this approach is that it doesn’t matter at all if consciousness never results. Machine learning will still advance fruitfully.

For the biophysical (neo-Freudian) approach, the task is to describe the physical workings of self-awareness in the brain stem so as to make its emulation possible in another, presumably computational, medium.

A great advantage of this approach is that even if the physical basis of consciousness is not demystified, neuropsychology will still understand more about the brain stem.

As far as I can see, both of these tasks are monumental, and one or both might fail. However, the way I’ve described them they seem to be converging on the idea that consciousness can in principle be abstracted from the mammalian brain and placed somewhere else, whether physical or virtual, whether derived from the individual brain, analogue or digital, or collective corpus, physical or virtual.

I noticed in the latter part of Professor Solms’s book a kind of impatience for a near future in which the mysteries of consciousness are resolved. I wonder if this is in part the restlessness of an older man who would rather not accept that he might die before seeing at least some of the major scientific breakthroughs that his life’s work has prepared for. Will we work out the nature of consciousness in the next few years, or will this puzzle remain for a future generation to solve? I certainly hope we have answers soon!

References:

Mills, J. (2019). The myth of the collective unconscious. Journal of the History of the Behavioral Sciences, 55(1), 40-53.

Solms, Mark (2022) Truly minimal criteria for animal sentience. Animal Sentience 32(2) DOI: 10.51291/2377-7478.1711


Jules Verne could have told us AI is not a real person

Read more on A.I.


  1. To clarify, I’m claiming, with Solms, that Freud’s pursuit was meta-psychological, not metaphysical. In contrast, I’m going further than Solms and reading Jung against himself here. Jung seems to have taken a strongly metaphysical approach (Mills 2019), whereas I’m suggesting his programme may nevertheless be treated as a non-metaphysical but meta-psychological enquiry into the relationship between consciousness and human culture, not the brain. Mark Solms took part in a discussion on the differences between Freud and Jung. ↩︎

Ross Ashby's other card index

During the twentieth century, many thinkers used index cards to help them both think and write.

British cyberneticist Ross Ashby kept his notes in 25 journals (a total of 7,189 pages) for which he devised an extensive card index of more than 1,600 cards.

At first it looks as though Ashby used these notebooks to aid the development of his thought, and the card index merely catalogued the contents. But it turns out he used his card index not only to catalogue but also to develop the ideas for a book he was writing.

Cyberneticist Ross Ashby at work at his desk

In a journal entry of 20 October 1943 he explained his decision to switch from an alphabetical key-word index to ‘an index depending on meaning’.

He describes his method as follows:

“20 Oct ‘43 - Having seen how well the index of p.1448 works, & how well everything drops into its natural place, I am no longer keeping the card index which I have kept almost since the beginning. The index was most useful in the days when I was just amassing scraps & when nothing fitted or joined on to anything else; but now that all the points form a closely knit & jointed structure, an index depending on meaning is more natural than one depending on the alphabet. So I have changed to a (card) index with the points of p.1448 in order. Thus it can grow, & be rearranged, on the basis of meaning. Summary: Reasons for changing the form of the index.” - Ross Ashby, Journal, Vol. 7

What was he up to? Thanks to Ashby’s meticulous note-taking, and the fact that it has all been saved and digitized, you can trace his working methods. It also helps that his handwriting is very clear!

  • First Ashby made almost random notes in his notebooks, which he indexed alphabetically by key-word, using a card index. To aid referencing, he gave the notebooks a continuous page numbering across all 25 volumes.
  • Next, in April 1943 and based on his notes, he created an outline for a book manuscript (p.1234), then revised it six months later, on October 4th (p.1447).
  • Satisfied with the revised outline (p.1497), he created a completely new card index (the ‘Other’ index), arranged by subject, based on the outline headings, rather than key-words. This new index is what he describes in his note of October 20th 1943, which is reproduced above.
  • He deliberately kept this second index flexible, so that his notes could be re-arranged for as long as possible prior to the drafting of the actual manuscript.

This workflow is quite different from that of sociologist Niklas Luhmann, who, unlike Ashby, didn’t use notebooks to any great extent. In fact it highlights a particularly striking aspect of Luhmann’s approach: for Luhmann the card index is its own contents; they are one and the same. Put another way, Luhmann’s Zettelkasten is largely self-indexing.

Ashby didn’t do this. Instead he followed the more standard card index system, elaborated, for example, by R.B. Byles, in 1911. In this system, originally designed for business, all documents are filed away, typically in order of receipt or creation, and then accessed by means of a separate card index, which provides the key to the entire collection. Ashby’s innovation was to adapt the card index system to refer to key-words in his notebooks, referenced by page number. Luhmann, certainly, also used key-words. His first Zettelkasten had “a keyword index with roughly 1,250 entries”, while his second, larger Zettelkasten had “a keyword index with 3,200 entries, as well as a short (and incomplete) index of persons containing 300 names” (Schmidt, 2016: 292).

However, due to Luhmann’s meticulous cross-referencing of individual cards, the key-word index isn’t strictly essential to connect the ideas in the Zettelkasten in a meaningful way; Luhmann’s cards link directly to other cards.
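To put the contrast crudely in data-structure terms: Ashby’s first index was a map from key-words to notebook page numbers, while Luhmann’s cards carried their own links to other cards. Here’s a toy Python sketch; the key-words, card IDs and card texts are invented for illustration.

```python
# Ashby-style: a separate card index mapping key-words to notebook page numbers.
ashby_index = {
    "equilibrium": [212, 1448],
    "adaptation": [87, 530, 1234],
}

# Luhmann-style: each card carries its own links to other cards,
# so the collection is largely self-indexing.
luhmann_cards = {
    "21/3d": {"text": "On functional differentiation...", "links": ["21/3d1", "17/2a"]},
    "17/2a": {"text": "Systems and environments...", "links": ["21/3d"]},
}

def pages_for(keyword):
    """Look up a key-word the way Ashby's card index did: it returns page numbers, not ideas."""
    return ashby_index.get(keyword, [])

def follow(card_id):
    """Follow Luhmann-style links: each card leads directly to related cards."""
    return [luhmann_cards[c]["text"] for c in luhmann_cards[card_id]["links"] if c in luhmann_cards]

print(pages_for("adaptation"))
print(follow("21/3d"))
```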

Fast-forward two generations and it seems that in the Internet age it is Luhmann’s method that has won out. The online version of Ross Ashby’s journal includes both the notebooks and the index as a single hyperlinked body of work. This represents a tremendous effort on the part of those who have painstakingly digitized the collection. Today, like Luhmann’s Zettelkasten, Ashby’s notebooks, at least in their Web-based incarnation, are finally self-indexing.

And there’s another sense in which Luhmann’s method won out. While Luhmann published scores of books, Ashby published plenty of academic articles but only two full-length books. And neither of these books, as far as is evident, bears much relation to the manuscript outlines in his ‘other’ index. We can only speculate on whether Ashby might have produced more books had he used a system more like Luhmann’s.

Yet despite their differences, Ashby’s approach in creating his ‘other’ index is very consistent with Luhmann’s concern to keep the order of notes as flexible as possible for as long as possible.

The open drawer of Ross Ashby’s card index

ashby.info/journal/i…

References

The W. Ross Ashby Digital Archive

Ashby, Jill. 2009. W. Ross Ashby: A Biographical Essay. International Journal of General Systems, 38(2), 103-110. DOI: 10.1080/03081070802643402. (This is the source of the photo, above, of Ashby at his desk.)

Byles, R.B. 1911. The Card Index System: Its Principles, Uses, Operation, and Component Parts. London: Sir I. Pitman & Sons, Ltd.

Heylighen, F., C. Joslyn and V. Turchin (eds). Principia Cybernetica Web. Brussels: Principia Cybernetica. URL: cleamc11.vub.ac.be/ASHBBOOK….

Schmidt, J. 2016. Niklas Luhmann’s Card Index: Thinking Tool, Communication Partner, Publication Machine. pdfs.semanticscholar.org/88f8/fa9d…


Read more:

The Hashtags of a cyberneticist

Even the index is just another note

To illustrate that claim, here’s a dynamic index of my Zettelkasten articles

Soon we'll all be writing the books we want to read

To benefit from AI-assisted writing, look closely at how it’s transforming the readers.

Whenever new technologies appear, many changes in the economy happen on the consumer side, not the producer side.

As AI-assisted writing disrupts the writers, it will do so mainly by transforming the readers.

Reading Confessions of a viral AI writer in Wired magazine made me realise I had the future of AI-assisted writing the wrong way around. Vauhini Vara’s article shows how AI is already making a massive difference to our expectations of writing. She’s a journalist and author who has seen her working practices upended. But what about the readers? Sure: production is undergoing massive disruption.

But meanwhile the consumption of writing is on the cusp of a complete revolution.

In the old days you used to stand at the grocery store counter while the staff fetched all the groceries for you, and at the fuel station an attendant would fill up your vehicle’s tank on your behalf. Then new technology transferred these tasks from the seller to the buyer, and the buyer had no real say in the matter, so that by now it’s completely normal to walk down a grocery aisle filling the trolley yourself, or to operate the fuel hose on your own.

No one pays you to do this work that the employees used to insist on doing.

It’s the same for all kinds of office work. The managers do their own budgeting using spreadsheets, while the staff do all their own typing. No one expects to find a typing pool at work. In fact, few workers are even old enough to have seen one.

A black and white photo of a typing pool in the 1950s. Rows of women sit at rows of desks in an office with a typewriter each.

Move forward a few years and social media has completely adopted this labour-shifting approach.

All the work of social networks is done by the customer.

On YouTube, Instagram and TikTok, the users literally make their own entertainment.

The consumer is now the producer. And this is exactly how it’s going to be with AI-assisted writing.

In former times other people, professionals, wrote books for you. They were called ‘writers’ or ‘authors’, and they, in turn, called you a ‘reader’. But the new technology is shifting the workload to the consumers. We won’t really have a choice, no one will pay us, and eventually we’ll come to see it as completely normal.

“If there’s a book you want to read, but it hasn’t been written yet, then you must write it.” — Toni Morrison

From now on the readers will use AI to write the books they want to read.

‘Professional writer’ will be a job like ‘bowser attendant’ - almost forgotten. Certainly the books still need to be written, just as the fuel tank still needs filling, but why not just let the reader write the books themselves? Who better to decide what they want?

Soon we’ll all be authors, each of us writing for a single reader - ourselves.

These categories, reader and writer, used to be obviously distinct. But AI will result in only one category. Maybe we’ll even need a new name for it.

But as every marketer and advertiser knows, people are completely out of touch with their own taste - they need someone to show them. Fashion, celebrity - consumerism is an ideology that requires followers.

The writers will have a new job: advising people on how best to describe their own desires.

A further, more tentative prediction: AI will also assist the general public to write computer programs. The programmer’s job will shift towards advising the public on what software they actually want to write.

Footnote: I’ll revisit this article in five years to see how accurate my crystal-ball-gazing really is!

Image source: How it was: life in the typing pool



Even the index is just another note

Index cards from The Card System at the Office

It’s tempting to place your notes in fixed categories

At some point in your note-making journey you’ll notice that quite a few people like to place their notes in fixed categories according to some scheme or other. The ancient method of commonplaces held that knowledge was naturally organised according to loci communes (common places). Ironically, no one from Aristotle onwards could ever agree on what the commonly-agreed categories were. Assigning your notes to categories is consistent with the ‘commonplace’ tradition, but that’s not what the prolific sociologist Niklas Luhmann did with his Zettelkasten; indeed it runs exactly counter to Luhmann’s claim in ‘Communicating with Slipboxes’, where he said:

“it is most important that we decide against the systematic ordering in accordance with topics and sub-topics and choose instead a firm fixed place (Stellordnung).”

But there’s no need to despair: there is a way through the impasse! After all, what exactly is a subject or category? The subject or category index itself, it turns out, is nothing other than just another note. Here’s a real-life example:

Screenshot of a Zettelkasten index created in Obsidian

“i have this note that basically functions as an general index and entry point for my ZK: it has every index card plus a People index and every main card.” - u/Efficient_Earth_8773

When everything’s a note, even the categories are just notes

Why does this matter? If even the index is just a note, then you haven’t constrained yourself with pre-determined categories. Instead, you can have different and possibly contradictory index systems within a single Zettelkasten, and further, a note can belong not only to more than one category, but also to more than one categorization scheme. Luhmann says:

“If there are several possibilities, we can solve the problem as we wish and just record the connection by a link [or reference].”

When even the index is just a note, a reference to a ‘category’ takes no greater (or lesser) priority than any other kind of link. This is liberating. Where a piece of information ‘really’ belongs shouldn’t be determined in advance, but by means of the process itself.
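One way to picture this: if everything in the collection is a note, then an ‘index’ is simply a note whose content is links. Here’s a toy Python sketch, with note IDs and titles made up for illustration.

```python
# Toy model: every note, including the index, is just another note with links.
notes = {
    "index-writing": {"title": "Index: writing", "links": ["n12", "n47"]},
    "index-heroes": {"title": "Index: heroes", "links": ["n47"]},
    "n12": {"title": "Meandering story shapes", "links": ["n47"]},
    "n47": {"title": "The carrier bag theory of fiction", "links": []},
}

# A note can appear under more than one 'category', because a category
# is just a link from an index note, with no special status.
def categories_of(note_id):
    return [n["title"] for n in notes.values()
            if note_id in n["links"] and n["title"].startswith("Index:")]

print(categories_of("n47"))  # ['Index: writing', 'Index: heroes']
```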

A colourful diagram of the Dewey Decimal classification system

The Dewey Decimal System pigeonholes all knowledge, like cells in a prison.

Some people want an index, like folders in a filing cabinet, or subject shelves in the library. Well, they can have it: just write a note with the subjects listed and make them linkable. Some people don’t want this, and they can ignore it. I personally don’t understand why you’d want to set up a subject index that mimics Wikipedia or the Dewey Decimal system, or even the ‘common places’ of old. I’m neither an encyclopedist, a librarian, nor an archivist. What I’m trying to do is to create new work. I want to demonstrate my own irreducible subjectivity by documenting my own unique journey through the great forest of thought. My journey is subjective because it’s my journey. I’m pioneering a particular route and laying down breadcrumbs for others to follow, should they so choose. It’s unique, not because any of it is original, but because the small catalogue of items that attracts me is mine alone. As film-maker Jim Jarmusch said:

“Select only things to steal from that speak directly to your soul. If you do this, your work (and theft) will be authentic.” (I stole that from Austin Kleon).

But that’s just me (and Luhmann).

Make just enough hierarchy to be useful

Having thought a bit about this I’m inspired now to sketch my own workflow, to see how it… flows. In general, I favour just enough data hierarchy to be viable - which really isn’t very much at all. I’m inspired by Ward Cunningham’s claim that the first wiki was ‘the simplest online database that could possibly work’. Come to think of it, this may be one of the disadvantages of the way the Zettelkasten process is presented: perhaps it comes across as more complex than it really needs to be. As the computer scientist Edsger Dijkstra lamented,

“Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better.” - On the nature of Computing Science (1984).

If you must have hierarchies like lists and trees, remember that they’re both just subsets of a network.

image showing how a list and a tree are subsets of a network

Source: I don’t know. If you do, please tell me ;)
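The point is easy to demonstrate: a list is a graph in which each node links to at most one successor, and a tree is a graph in which each node links only to its children. Here’s a minimal sketch, using the same adjacency representation for all three; the node names are arbitrary.

```python
# All three structures below use the same representation: a mapping
# from each node to the nodes it links to (an adjacency list).

# A list: every node points to at most one successor.
a_list = {"a": ["b"], "b": ["c"], "c": []}

# A tree: every node points to its children; no node has two parents.
a_tree = {"root": ["left", "right"], "left": [], "right": ["leaf"], "leaf": []}

# A network: nodes can link freely, including back to earlier nodes.
a_network = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}

def is_graph(structure):
    """Every structure above is already a graph: nodes plus links."""
    return all(isinstance(targets, list) for targets in structure.values())

print(is_graph(a_list), is_graph(a_tree), is_graph(a_network))  # True True True
```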



I’m late to the party but just needed to say: yes, the web is fantastic. I actually love it 😍

📷 Moonlight over the headland. This time of year in Sydney is pretty magical.

Moonlight through clouds above a rocky headland. In the foreground the moonlight is reflected in the waves lapping the shore.