- Harrison Owen, creator of Open Space Technology.
- Don’t build a magnificent but useless encyclopaedia
- Document your journey through the deep forest
- Avoid inert ideas
- Converse about what really matters to you
- Imagine, then build, new knowledge products
- Where (and how) you go is more important than where you start from
An example
- Maureen Murdock (1990) and later Gail Carriger (2020) have both presented feminised versions of the heroic quest narrative. I’m not convinced that these versions of the heroine’s journey are really all that different, though, since they still assume that heroism, albeit that of women, is where it’s at. At least there’s an attempt to re-balance the faulty idea that only men are at the centre.
- The New Yorker published a moving non-fiction account by Laura Secor of an Iranian woman’s bravery. The true story of journalist Asieh Amini doesn’t rely on a standard heroic arc, yet is highly effective. This is only one example of very many alternatives.
- Novelist Becky Chambers points out in a talk on YouTube that real life has no protagonists. Surely this can help us to question stale narrative forms, especially those which claim to be true to reality.
- Meanwhile, Christina de la Rocha is on a noble quest to put an end to the hero’s journey in literature and beyond. OK, not a quest. Perhaps she’d approve of Ursula Le Guin’s claim that “the novel is a fundamentally unheroic kind of story”. More on that in a moment.
- Screenwriter Anthony Mullins has written a whole book showing that there’s far more than just one kind of character arc.
- See also Elizabeth Wayland Barber (1994) on women’s role in technology, textiles and the string revolution. ↩︎
- If you’re not sure what website feeds are, see IndieWeb: feed reader and how to use RSS feeds. ↩︎
- Does the index box distort the facts?
- Can you create coherent writing just from a pile of notes?
- Perhaps you should keep your notes private
- Make it flow
- To create coherent writing, make coherent notes
- Begin with fragments
- From smaller parts build a greater whole
- Join your work together
- Do it seamlessly well
- If you’ve enjoyed it so far, you can just keep doing what you’ve been doing, collecting all the things. Why not?
- But if you like, you could start doing it more deliberately. For example, at the start of a new year, you could say to yourself: In 2023 I seem to have been interested in a, b, and c. Now in 2024 I want to explore more about b, drop a, and learn about d and e.
- How to be interested in everything
- Don’t you need to start with categories?
- It’s tempting to place your notes in fixed categories
- To build something big start with small fragments
- Thoughts are nest-eggs: Thoreau on writing
- This article is adapted from a comment on Reddit
- There are exceptions. A few people have tried to video their whole lives. And at least one person, Lion Kimbro, has tried to write down all their thoughts. But it’s not sustainable. ↩︎
- Which way is up?
- Try seeing the trees and the forest too
- Hierarchy, heterarchy, homoarchy… am I just making these words up?
- Get linking to get thinking
- The key questions
- What if I really just want a fixed structure?
- To clarify, I’m claiming, with Solms, that Freud’s pursuit was meta-psychological, not metaphysical. In contrast, I’m going further than Solms and reading Jung against himself here. Jung seems to have taken a strongly metaphysical approach (Mills 2019), whereas I’m suggesting his programme may nevertheless be treated as a non-metaphysical but meta-psychological enquiry into the relationship between consciousness and human culture, not the brain. Mark Solms took part in a discussion on the differences between Freud and Jung. ↩︎
A minimal approach to making notes
I want a minimal approach to making notes.
I don’t want anything fancy, just enough structure to be useful.
When I see people’s souped-up Obsidian note-taking vaults my head spins (OK, I’m jealous). I also wonder, though, what extra result is achieved with a fantastically complex system. Having said that, I’m keen on people creating a working environment that works for them, and I do admire people’s creativity in this area.
I just can’t be bothered to do it myself.
When discussing the Zettelkasten approach to making notes, there seem to be a lot of different note types to consider, which confuses people. The extensive discussion of note types prompted by Sönke Ahrens’s book How to Take Smart Notes makes me think this multiple-note-types approach is just too complicated for me. So what do I do instead?
A forest of evergreen notes
Jon M Sterling, a computer scientist at Cambridge University, has created his own ‘mathematical Zettelkasten’, which he also calls ‘a forest of evergreen notes’.
He maintains a very interesting website, built using a tool he created, named, appropriately enough, Forester.
The implementation of his ideas raises all sorts of questions for me, almost all of them enthusiastic. Here are a few, in no particular order:
Make your notes a creative working environment
“Do you have an ideal creative environment? Also do you believe the physical space influences your creativity?”
This is a question Manuel Moreale regularly asks his guests on the People and Blogs newsletter. The answers are always fascinating and well worth a read.
This got me thinking about my own working environment and maybe I overthought it. It looks like I’ve totally ignored Barry Hess’s reminder that you’re a blogger not an essayist.[^1] Anyway, here goes.
Note: This post is part of the Indieweb Carnival on creative environments.
Is the Web reconfiguring itself again?
Is the web falling apart?, Eric Gregorich wonders.
Meanwhile Manuel Moreale is confident that the web is not dying.
I agree with both of them. These views aren’t contradictory. Falling apart is what the Web does best. It’s been falling apart since it started, and reconfiguring itself too.
Google search used to control and shape the web. Because everyone just Googled their searches, websites all used Search Engine Optimization in a vicious circle of conformity. But that’s finally changing.
Search is being degraded by advertising greed on one side and by AI-generated drivel on the other. Both are examples of what Ed Zitron calls the rot economy.
So how can good material rise to the surface?
In part it’s a return to the old ways. Blogrolls and webrings and RSS are having a mini-revival and it’s not entirely mere nostalgia. One-person search engines like Marginalia are having a moment, as are metasearch engines and other ‘folk’ search strategies. I like little experiments like A Website Is A Room.
Here’s my tip: to find interesting books, great quotes, and intriguing podcasts, more people should know about micro.blog Discover!
Photo by Valeria Hutter on Unsplash
How to set your own agenda
Harrison Owen, who died in March 2024, invented one of the most hopeful approaches to group facilitation I’ve ever come across. He called it ‘Open Space Technology’ (OST), but it was far from hi-tech. In fact, the main ‘technology’ was simply in how people in a group setting can interact fruitfully with one another, even when they really don’t agree.
“Peace of the sort that brings wholeness, harmony, and health to our lives only happens when chaos, confusion, and conflict are included and transcended.”
I first came across Open Space as a means of organising workshops in highly contested political spaces.
In the UK during the 1980s and early 1990s progressive social activity was constantly undermined by Trotskyites (or whatever they were) striving to co-opt social movements for their own ends. There was always a risk that as soon as you set up a committee of any kind, they’d get themselves voted onto it and turn it into a front for the true workers revolutionary communist workers party, or some such combination of those terms.
But what were the alternatives? The Labour Party had been hammered with this problem, and had settled on a full-blown witch hunt against anyone affiliated with the Militant Tendency, which like a monstrous baby cuckoo had nearly pushed them out of their own nest. We’d witnessed how the so-called cure was nearly as bad as the disease.
I think it was about 1992 when we organised our own small Open Space event. Of course, the entryists turned up, but the Law of Two Feet really stumped them. When they realised anyone could set the agenda they were delighted. This must have seemed much easier than having to take over by stealth! But when the discussions began they were confounded by the fact that, equally, anyone could just walk away and find something more important to them. To everyone except the entryists, the experience was delightful.
Image by Chris Kinkel from Pixabay
Of course, OST didn’t change the whole world, and it’s not useful for every meeting. But it was formative for me personally, because I could see how people could come together to identify, commit to and begin to solve their own problems, without waiting for someone else to do it for them.
Open Space Technology has also left a strong mark on facilitation generally. Unconferences, World Café, Bar Camp, the Art of Hosting, design sprints, and many other approaches owe a great deal to Harrison Owen’s pioneering determination to trust people to pursue their own agendas.
Vale Harrison Owen.
More:
Working in Open Space: A Guided Tour
Don't make a Zitatsalat out of your writing
Zitatsalat? What does that even mean?
Yes, Zitatsalat. I found this lovely but rarely used German term in the title of a book by the journalist Stephan Maus. The book’s name is Zitatsalat von Hinz & Kunz.[^1]
I love the rhyming rhythm of this compound term, but what does Zitatsalat actually mean?
Well, Zitatsalat translates as Quote Salad. It’s not a compliment.
Zitatsalat, by Stephan Maus (2002).
But what’s wrong with quoting other writers?
How to make Mastodon even more fun!
Here are a couple of fun websites that will make Mastodon (and possibly the whole fediverse) even more fun. I know, it hardly seems possible. And if you know of others, please let me know about them too.
Just my toots
Do you sometimes wish you could see all your posts on Mastodon in a long list with no distractions? Of course you do! Every day! That’s why justmytoots.com is here to help. And yes, it shows you just your toots.
For the record, I hate the word ‘toots’.
At least where I live no one thinks of flatulence when they hear it, but still, it somehow manages to sound even more stupid than ‘tweets’, which takes some doing.
Now, above the cacophony of all the tooting I can almost hear you ask, “What’s the alternative?” ‘Posts’, that’s the alternative, and that’s what I’m sticking with. Why not join me, world?
Until then, you can see just my toots at https://justmytoots.com/@writingslowly@aus.social
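If you’d rather not depend on a third-party site, the public Mastodon API will give you much the same list. Here’s a minimal sketch in Python, assuming the requests library, an instance that allows unauthenticated reads of public posts, and my own account details as placeholders:

```python
# Minimal sketch: list an account's public posts via the Mastodon API.
# Assumes the 'requests' library and an instance that permits
# unauthenticated reads of public data.
import requests

INSTANCE = "https://aus.social"   # your home instance
USERNAME = "writingslowly"        # your account name

# Look up the numeric account id for the username.
account = requests.get(
    f"{INSTANCE}/api/v1/accounts/lookup",
    params={"acct": USERNAME},
    timeout=30,
).json()

# Fetch one page of that account's public posts, newest first.
statuses = requests.get(
    f"{INSTANCE}/api/v1/accounts/{account['id']}/statuses",
    params={"limit": 40, "exclude_replies": True},
    timeout=30,
).json()

for status in statuses:
    print(status["created_at"], status["url"])
```

This only grabs one page; to walk a whole archive you’d keep re-requesting with the max_id parameter, which I assume is roughly what justmytoots.com does behind the scenes.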
RSS is dead LOL
Now this one really is cool.
You know how everyone on the Internet always says ‘RSS is dead’, right? It’s so annoying!
But anyway, just type a fediverse username into rss-is-dead.lol and up pops a list of RSS feeds for that user and every account that account follows.
It’s amazing! Nearly everyone I follow has an RSS feed! Wow!
Pretty much proves RSS still ain’t dead. Take that, haters!
Bonus fact: it turns out you can use RSS to ‘boost your productivity’. I don’t know what that phrase means, but it sounds great!
Meanwhile, check out my graph, or whatever you call it, at rss-is-dead.lol.
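As far as I can tell, the trick behind a site like this is straightforward: every Mastodon account also publishes an RSS feed of its public posts at its profile URL with .rss tacked on the end. Here’s a rough Python sketch of the same idea - it assumes the requests library and a public follow list, and it cheerfully ignores pagination and non-Mastodon servers:

```python
# Rough sketch: probe for the RSS feeds of accounts you follow, relying on
# Mastodon's <profile URL>.rss convention. Ignores pagination, rate limits,
# and fediverse software that doesn't follow the convention.
import requests

INSTANCE = "https://aus.social"
USERNAME = "writingslowly"

# Find our account id, then one page of accounts we follow.
account = requests.get(
    f"{INSTANCE}/api/v1/accounts/lookup",
    params={"acct": USERNAME},
    timeout=30,
).json()
following = requests.get(
    f"{INSTANCE}/api/v1/accounts/{account['id']}/following",
    params={"limit": 80},
    timeout=30,
).json()

for followed in following:
    feed_url = followed["url"] + ".rss"   # Mastodon's per-account feed
    try:
        ok = requests.head(feed_url, timeout=10, allow_redirects=True).status_code == 200
    except requests.RequestException:
        ok = False
    print(("FEED" if ok else "none"), followed["acct"], feed_url if ok else "")
```

Some servers dislike HEAD requests, so a fuller version would fall back to a GET before giving up on a feed.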
How to start a Zettelkasten from your existing deep experience
An organized collection of notes (a Zettelkasten) can help you make sense of your existing knowledge, and then make better use of it. Make your notes personal and make them relevant. Resist the urge to make them exhaustive.
Yes, we can be heroes, but does that mean we should be?
Yes really, we can be heroes. Thanks very much David Bowie! But if this sounds attractive, perhaps we should be careful what we wish for.
Do you want to be the hero of your own story? Perhaps you already are
According to reporting in Scientific American, imagining yourself as the hero of your own life gives you an increased sense of meaning.
“Our research reveals that the hero’s journey is not just for legends and superheroes. In a recent study published in the Journal of Personality and Social Psychology, we show that people who frame their own life as a hero’s journey find more meaning in it”.
But it’s not always great to be a hero
Meanwhile, from a quite different research perspective, comes a warning: Stanley and Kay (2023) caution that making people out to be heroic can inadvertently single them out for poor treatment from their peers.
“our studies show that heroization ultimately promotes worse treatment of the very groups that it is meant to venerate.”
Reading this I immediately thought of all those ‘heroic’ health workers who helped their communities through the COVID crisis at great personal cost and with very little long-term recognition (McAllister et al. 2020). In far too many cases, calling doctors, nurses and hospital workers heroes and even super-heroes ended up as quite tokenistic, little more than a way of justifying the exploitation of their labour. First they make you a hero, then they make you burn out.
Banksy’s artwork of a child playing with a nurse ‘superhero’ doll raised more than £16m for charity… but nurses' pay and conditions didn’t take off
Sometimes heroism isn’t what it seems
And here’s yet another, quite different warning: sometimes the person who sets themselves up as a classic hero is revealed to be anything but that. The case of Australian SAS fighter Ben Roberts-Smith is an extraordinary example of the moral jeopardy of a whole society desperate to believe in heroism. It seems this decorated and celebrated ‘war hero’ was really quite the opposite. The cover-up shows how much people want to believe in heroes, even when they don’t exist. This real-life tale has echoes of Beowulf to it. In Maria Dahvana Headley’s contemporary version (2021), the final words of that centuries-old tale ring painfully, bleakly, hollow with macho delusion:
“He rode hard! He stayed thirsty! He was the man! He was the man.”
So can we journey beyond the ‘hero’s journey’ already?
The hero’s journey trope has become so ubiquitous that it’s sometimes hard to remember that there’s any other kind of story. But there certainly is.
And author Jane Alison goes even further. In her book Meander, Spiral, Explode she notes that there are far more key patterns in literature than just the arc.
Not every story is a journey
Taking her cue from Joseph Frank’s book The Idea of Spatial Form, and from Peter Stevens’s Patterns in Nature, Alison identifies some alternative or complementary shapes.
I particularly like her concept of the story that meanders like a river, or ripples in waves and wavelets. These aquatic images remind me of something the former monk and psychotherapist Thomas Moore said about how life itself has a kind of liquidity to it:
“Your story is a kind of water, making fluid the brittle events of your life. A story liquefies you, prepares you for more subtle transformations. The tales that emerge from your dark night deconstruct your existence and put you again in the flowing, clear, and cool river of life.” (Moore, 2004, p. 61)
In his book on spatial form, Joseph Frank examines the structure of Djuna Barnes’s modernist novel, Nightwood. This novel doesn’t have a hero’s journey or a flowing river, but instead has a series of views or glimpses of life. He says:
“The eight chapters of Nightwood are like searchlights, probing the darkness each from a different direction, yet ultimately focusing on and illuminating the same entanglement of the human spirit . . . And these chapters are knit together, not by the progress of any action . . . but by the continual reference and cross-reference of images and symbols which must be referred to each other spatially throughout the time-act of reading.”
This searchlight metaphor is illuminating, but story structure can be yet looser, more diffuse than rivers and spotlights. I’m particularly taken with Ursula Le Guin’s carrier bag theory of fiction. Remember she said the novel is a fundamentally unheroic kind of story? If so then what is it?
“the natural, proper, fitting shape of the novel might be that of a sack, a bag. A book holds words. Words hold things. They bear meanings. A novel is a medicine bundle, holding things in a particular, powerful relation to one another and to us.”
Le Guin’s insight is itself based on the carrier bag theory of human evolution, as described in Elizabeth Fisher’s Woman’s Creation (1979).
“The first cultural device was probably a recipient …. Many theorizers feel that the earliest cultural inventions must have been a container to hold gathered products and some kind of sling or net carrier.” 1
Not everyone needs to be a hero to be a valid person. Mostly it’s better when we’re not. And not every story needs to be a hero’s journey for it to be worth the telling. The idea of the hero can be useful in some circumstances, dangerous in others. But more often it just gets in the way. Sometimes it’s really about “complex skills and compassion”. Sometimes it’s less about hunting and more about gathering.
So now, do you still want to be a hero, you hero you?
Now read:
More than ever, embracing your humanity is the way forward
References
Alison, Jane. 2019. Meander, Spiral, Explode. Design and Pattern in Narrative. New York: Catapult.
Barber, Elizabeth Wayland. 1994. Women’s Work: The First 20,000 Years; Women, Cloth, and Society in Early Times. New York, NY: Norton.
Barnes, Djuna. 2006/1937. Nightwood. New York: New Directions.
Carriger, Gail. 2020. The Heroine’s Journey: For Writers, Readers, and Fans of Pop Culture. Gail Carriger LLC.
Fisher, Elizabeth. 1979. Woman’s Creation: Sexual Evolution and the Shaping of Society. 1st ed. Garden City, NY: Anchor Press.
Frank, Joseph. 1991. The Idea of Spatial Form. New Jersey: Rutgers University Press.
Headley, Maria Dahvana. 2021. Beowulf. A New Translation. Melbourne and London: Scribe.
Kaul, Aashish. 2014. Mapping space in fiction: Joseph Frank and the idea of spatial form. 3am Magazine.
Le Guin, Ursula K. 1989. “The Carrier Bag Theory of Fiction” in Dancing at the Edge of the World. New York: Grove Atlantic Press. Accessed at stillmoving.org/resources…
McAllister, Margaret, Donna Lee Brien, and Sue Dean. 2020. The problem with the superhero narrative during COVID-19. Contemporary Nurse 56(3), 199–203. DOI: 10.1080/10376178.2020.1827964
Moore, Thomas. 2004. Dark Nights of the Soul. London, UK: Piatkus Books.
Mullins, Anthony. 2021. Beyond the Hero’s Journey: A Screenwriting Guide for When You’ve Got a Different Story to Tell. Sydney, N.S.W: NewSouth Publishing.
Murdock, Maureen. 1990. The Heroine’s Journey. Boston, MA: Shambhala Publications.
Rogers, B. A., Chicas, H., Kelly, J. M., Kubin, E., Christian, M. S., Kachanoff, F. J., Berger, J., Puryear, C., McAdams, D. P., & Gray, K. 2023. Seeing your life story as a Hero’s Journey increases meaning in life. Journal of Personality and Social Psychology, 125(4), 752–778. https://doi.org/10.1037/pspa0000341
Secor, Laura. 2015. War of Words. The New Yorker.
Stanley, M. L., & Kay, A. C. 2023. The consequences of heroization for exploitation. Journal of Personality and Social Psychology. Advance online publication. https://doi.org/10.1037/pspa0000365
Stevens, Peter. 1974. Patterns in Nature. New York: Little, Brown & Co.
Atomic notes - all in one place
From today there’s a new category in the navigation bar of Writing Slowly.
‘Atomic Notes’ now shows all posts about making notes.
How to make effective notes is a long-standing obsession of mine, but this new category was inspired by Bob Doto, who has his own fantastic resource page: All things Zettelkasten.
The Atomic Notes category is now highlighted on the site navigation bar.
And if you’d like to follow along with your favourite feed reader, there’s also a dedicated RSS feed (in addition to the more general whole-site feed).1
But if there’s a particular key-word you’re looking for here at Writing Slowly, you can use the built-in search.
And if you prefer completely random discovery, the site’s lucky dip feature has you covered.
Connect with me on micro.blog or on Mastodon. And on Reddit, I’m - you guessed it - @atomicnotes.
See also:
Assigning posts to a new category with micro.blog
A new post category in micro.blog, filtered to include existing posts
Micro.blog is a really useful and easy way to host a website. Even though it feels more like a cottage industry than a corporation there are way more features (and apps!) than I can probably use. It’s amazing how much Manton Reece, micro.blog’s creator, has achieved.
Under the hood the micro.blog platform is based on the Hugo static site generator, but there are a few differences. One such difference is post categories.
Here’s a new category being created.
It’s very easy to create a new category of posts; then you can use a filter to automatically add all new posts that include a selected key-word (or emoji, or even HTML element). By default only new posts are affected. But by running the filter you can also add all previous posts that meet the selected criteria. That’s what I wanted to do.
Once you have a new category, you can add a filter. This particular filter assigns only long posts containing a particular word to the new category.
When you run the filter, all existing posts that match will be added to the category. And future posts will be added automatically.
Also, each category gets its own RSS feed, which can be very useful.
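For example, here’s a minimal sketch of following a single category from a script, using the feedparser library. The feed URL below is my guess at the usual pattern - copy the real one from the category archive page if yours differs:

```python
# Minimal sketch: read the latest entries from one category's RSS feed.
# The URL is a placeholder guess at a /categories/<name>/feed.xml pattern;
# check your own category page for the actual feed link.
import feedparser

FEED_URL = "https://example.micro.blog/categories/atomic-notes/feed.xml"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries[:10]:
    print(entry.get("published", ""), entry.get("title", "(untitled)"), entry.link)
```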
This process was much easier than I expected!
More info from elsewhere:
How to overcome Fetzenwissen: the illusion of integrated thought
It’s too easy to produce fragmentary knowledge.
One potential problem associated with making notes according to the Zettelkasten approach is Verknüpfungszwang: the compulsion to find connections. It may be true philosophically that everything’s connected, but in the end what matters is useful or meaningful connections. With your notes, then, you need to make links that are worthwhile, not indiscriminate.
Another potential problem is Fetzenwissen: fragmentary knowledge, along with the illusion that disjointed fragments can produce integrated thought.
Almost by definition, notes are brief, and I’m an enthusiast of making short, modular, atomic notes. Yes, this results in knowledge presented in fragments. And in their raw form these fragmentary notes are quite different from the kind of coherent prose and well-developed arguments readers usually expect. You can’t just jam together a set of notes and expect them to make an instant essay. So is this fragmentary knowledge really a problem for note-making? If so, how can determined note-makers overcome it?
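One toy way to picture both worries at once: treat the notes as a tiny graph, and flag both the orphan fragments (Fetzenwissen going nowhere) and the notes linked to everything (Verknüpfungszwang run wild). This is only an illustration - the note ids and the threshold are invented:

```python
# Toy illustration: notes as a tiny link graph. Orphans have no connections;
# "hubs" with too many links are probably connected indiscriminately.
# Note ids and the threshold are made up for the example.
notes = {
    "heroes":      {"carrier-bag"},
    "carrier-bag": {"heroes", "bowerbird"},
    "bowerbird":   {"carrier-bag"},
    "orphan":      set(),
    "hub":         {"heroes", "carrier-bag", "bowerbird", "orphan"},
}

MAX_LINKS = 3  # arbitrary cut-off for "probably indiscriminate"

for note_id, links in notes.items():
    if not links:
        print(f"{note_id}: no connections yet -- a fragment going nowhere")
    elif len(links) > MAX_LINKS:
        print(f"{note_id}: {len(links)} links -- are they all meaningful?")
```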
From fragments you can build a greater whole
Everything large and significant began as small and insignificant
This is my working philosophy of creativity and I’m trying to follow it through as best I can. Starting with simple parts is how you go about constructing complex systems.
“A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system”. — John Gall (1975) Systemantics: How Systems Really Work and How They Fail, p. 71.
Bits and pieces put together to present a semblance of a whole, by Lawrence Weiner
How to decide what to include in your notes
Before the days of computers, people used to collect all sorts of useful information in a commonplace book.
The ancient idea of commonplaces was that you’d have a set of subjects you were interested in. These were the loci - the places - where you’d put your findings. They were called loci communes - common places, in Latin - because it was assumed everyone knew what the right list of subjects was.
But in practice, everyone had their own set of categories and no one really agreed. It was personal.
Since the digital revolution, things have become trickier still. There’s no real storage limit so you could in principle make notes about everything you encounter. But no matter what software you use, your time on this earth is limited, so you need to narrow the field down somehow1.
But how, exactly?
You might consider just letting rip and collecting whatever catches your interest - literally trying to collect everything.
Lion Kimbro tried to make a map of every thought he had.
As time passes, you’ll notice that you haven’t actually collected everything because that’s completely impossible. Even Thomas Edison, the prolific inventor, wasn’t interested in absolutely everything, although he tried hard to be. If you do a bit of a stock-take of your own notes, you’ll see that, really, you gravitate towards only a few subjects.
These are your very own ‘commonplaces’.
From then on you have two choices.
You could create an index, with a set of keywords, and add page number references to show what subject each entry is about, and how they relate. Or not. Of course, it’s your collection of notes and you can do whatever pleases you. That’s the point.
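If you do go the index route, in digital form it’s about the simplest structure imaginable: a mapping from each keyword to the ‘places’ where that subject turns up. A small sketch, with invented entries:

```python
# A commonplace-book index in miniature: each keyword maps to the 'places'
# (page numbers, note ids) where that subject appears. Entries are invented.
from collections import defaultdict

index = defaultdict(list)

def index_entry(keyword, location):
    """Record that a subject appears at a given page or note id."""
    index[keyword].append(location)

index_entry("bower birds", "notebook 3, p. 12")
index_entry("heroism", "notebook 3, p. 14")
index_entry("bower birds", "card 2024-03-05")

print(index["bower birds"])   # ['notebook 3, p. 12', 'card 2024-03-05']
```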
Bower birds collect everything, but with one crucial principle.
Where I live we have satin bower birds.
The male creates a bower out of twigs and strews the ground with the beautiful things he’s found. Apparently this impresses the females. The bower can contain practically anything, and it really is beautiful. Clothes pegs, pieces of broken pottery, plastic fragments, bread bag ties, lilli pilli fruit, Lego, electrical wiring, string - even drinking straws, as in the photo above. The male bower bird really does collect everything. But what every human notices immediately is that every single item, however unique, is blue.
I enjoy collecting stuff in my Zettelkasten, my collection of notes, but like the bower bird I have a simple filter. I always try to write: “this interests me because…” and if there’s nothing to say, there’s no point in collecting the item. It’s just not blue enough.
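If you wanted to make that filter explicit, it needs almost no machinery. A sketch, with an invented note structure - the only rule is that nothing goes in the box without a ‘because’:

```python
# The bower bird rule as a capture filter: a note is kept only if I can
# finish the sentence "this interests me because...". Invented structure.
def keep_note(title, because, box):
    """Add a note to the box only if it comes with a reason for caring."""
    if not because.strip():
        return False   # not blue enough -- leave it on the forest floor
    box.append({"title": title, "because": because.strip()})
    return True

box = []
keep_note("Drinking straws in bowers", "it shows collecting can be selective", box)
keep_note("A random headline", "", box)   # rejected: no reason given
print(len(box))   # 1
```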
See also:
Images:
Sacha Chua, Book Summary, CC BY 4.0.
Peter Ostergaard, Flickr, CC BY-NC 2.0 Deed.
At last, writing slowly is back in fashion!
Cal Newport, author of the forthcoming book, 📚Slow Productivity, has finally latched on to the premise of this website: you can get a lot done by writing slowly.
Speeding up in pursuit of fleeting moments of hyper-visibility is not necessarily the path to impact. It’s in slowing down that the real magic happens.
I didn’t even know they could drive.
See also:
Thinking nothing of walking long distances
How far is too far to walk?
Author Charlie Stross observed that British people in the early nineteenth century, prior to train travel, walked a lot further than people today think of as reasonable.
I’ve noticed a couple of literary examples of this seemingly extreme walking behaviour, both of which took place in North Wales.
Headlong Hall
In chapter 7 of Thomas Love Peacock’s satirical novel, 📚Headlong Hall (1816), a group of the main characters takes a morning walk to admire the land drainage scheme around the newly industrial village of Tremadoc, and they walk halfway across Eryri to do so, traversing two valleys and two mountain passes. The main object of their interest is The Cob, a land reclamation project that was later to become a railway causeway. Having seen it, and having taken some refreshment in the village, they walk straight back again.
Image: The Moelwyn range, viewed from the Cob. Wikipedia CC sharealike 2.0
Wild Wales
You’d think the invention of the railways would have put people off walking such long distances, but apparently not so much. In his travel account, 📚Wild Wales (1862), George Borrow walks 18 miles from Chester to Llangollen, then walks another 11 miles to Wrexham just to fetch a book. Interestingly, he was writing after the railways had arrived. He was happy to put his wife and children on the train - but still walk the journey himself.
Real life
I would have believed these feats of everyday walking were improbable, except for the fact that when I was a child, a man in our village, Mr Large, walked every day to and from Chester, a round trip of 26 miles. He didn’t need to do it. He was in his eighties and well retired, and he could just have walked two miles to the bus stop. But apparently you don’t break the habits of a lifetime. Everyone in the village must have offered him a lift at one time or another, but he’d made it known that he preferred to walk. So having observed Mr Large regularly tramping the back lanes with determination, I already knew a long utility walk is more than possible.
These days, people rarely get out of their cars, convinced as they are that progress has been made. Walking is a problem, it seems, not a solution. And yet, on holiday, some people do long walks or even very long walks. For fun.
Oh brave new world that has such people in it!
Does the Zettelkasten have a top and a bottom?
What does it mean to write notes ‘from the bottom up’, instead of ‘from the top down’?
It’s one of the biggest questions people have about getting started with making notes the Zettelkasten way. Don’t you need to start with categories? If not, how will you ever know where to look for stuff? Won’t it all end up in chaos?
Bob Doto answers this question very helpfully, with some clear examples, in What do we mean when we say bottom up?. I especially like this claim:
“The structure of the archive is emergent, building up from the ideas that have been incorporated. It is an anarchic distribution allowing ideas to retain their polysemantic qualities, making them highly connective.”
Can we understand consciousness yet?
Professor Mark Solms, Director of Neuropsychology at the University of Cape Town, South Africa, revives the Freudian view that consciousness is driven by basic physiological motivations such as hunger. Crucially, consciousness is not an evolutionary accident but is motivated. Motivated consciousness, he claims, provides evolutionary benefits.
Mark Solms. 2021. The Hidden Spring. A Journey to the Source of Consciousness. London: Profile Books. ISBN: 9781788167628
He claims the physical seat of consciousness is in the brain stem, not the cortex. He further claims that artificial consciousness is not in principle a hard philosophical problem. The artificial construction of a conscious being, that mirrors in some way the biophysical human consciousness, would ‘simply’ require an artificial brain stem of some sort.
I have been wondering what it would be like to have injuries so radical as to destroy the physiological consciousness, if such a thing exists, while retaining the ability to speak coherently and to respond to speech. Perhaps a person in this condition would be like the old computer program, Eliza, which emulated conversation in a rudimentary fashion by responding with open comments and questions, such as “tell me more”, and by mirroring its human conversation partner. The illusion of consciousness was easily dispelled. The words were there but there was no conscious subject directing them. However, language processing has since become significantly more advanced, and machine learning has given bots without consciousness the ability to hold what appears to be a conscious conversation. Yet still there’s a suspicion that there’s something missing.
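For flavour, here’s a caricature of the Eliza trick in a few lines of Python - mirror the speaker’s words back and otherwise fall through to open-ended prompts. It’s nothing like Weizenbaum’s actual program, just a reminder of how little machinery the illusion needed:

```python
# Caricature of an Eliza-style responder: reflect pronouns, pattern-match a
# couple of openings, and otherwise deflect with a stock prompt.
import random
import re

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "your": "my"}
OPENERS = ["Tell me more.", "How does that make you feel?", "Why do you say that?"]

def reflect(text):
    """Swap pronouns so a statement can be echoed back at the speaker."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in text.split())

def respond(text):
    """Mirror 'I am...' / 'I feel...' statements; otherwise deflect."""
    text = text.strip().rstrip(".!?")
    match = re.match(r"i am (.*)", text, re.IGNORECASE)
    if match:
        return f"Why are you {reflect(match.group(1))}?"
    match = re.match(r"i feel (.*)", text, re.IGNORECASE)
    if match:
        return f"Tell me more about feeling {reflect(match.group(1))}."
    return random.choice(OPENERS)

print(respond("I am tired of heroes"))   # -> Why are you tired of heroes?
print(respond("The weather turned."))    # -> one of the open-ended prompts
```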
One area of great advance is the ability of machine learning to take advantage of huge bodies of data: for example, a significant proportion of the text of all the books ever published, or literally billions of phone text messages, or billions of voice phone conversations. It’s possible to program interactions based on precedent with some sophistication: what is the usual kind of response to this kind of question? Unlike Eliza, the repertoire of speech doesn’t need to be predetermined and limited; it can be generated on the fly in an open-ended manner using AI techniques. But there’s still no experiencer there, and we (just about) recognise this lack. Even if we didn’t know it, and bots already passed among us incognito, they might still lack ‘consciousness’.
So, at what point does the artificial speaker become conscious? If the strictly biophysical view of consciousness is correct, the answer is never.
A chat bot will never “wake up” and recognise itself, because it lacks a brain stem, even an artificial one. Even if to an observer the chat-bot appears fully conscious, at least functionally, this will always be an illusion, because there is no felt experience of what it is like to be a chat bot, phenomenologically.
From the perspective of neo-Freudian neuropsychology, it is easy to see why Freud grew exasperated with Carl Jung. Quite apart from the notorious personality clashes, it seems Jung departed fundamentally from Freud’s desire to relate psychological processes to their physical determinants. For example, what possible biophysical process would be represented by the phrase “collective unconscious” (see Mills 2019)?
For Freud, consciousness was strongly influenced by the unconscious, which was his term for the more basic drives of the body. The Id, for example, was his term for the basic desires for food, for sex, to void, etcetera. These were unconscious because the conscious mind receives them as demands from a location beyond itself, which it finds itself mediating.
He saw terms such as the Id, the Ego and the Superego as meta-psychological. He recognised how much was not yet known about the brain, such as where exactly the Id is located, but he denied that the Id was a metaphysical term. In other words, he claimed that the Id was located, physically, somewhere yet to be discovered. His difficulty was that he fully understood his generation lacked the tools to discover where.
Note that meta-psychology is explicitly not metaphysics. Freud had no more interest in the metaphysical than other scientists of his time, or perhaps of ours, have had. His terminology was a stopgap measure meant to last only until the tools caught up with the programme.
The programme was always: to describe how the brain derives the mind.
Jung’s approach made a mockery of these aspirations. Surely no programme would ever locate the seat of the collective unconscious?
But perhaps this is a misunderstanding of the conflict between Freud and Jung. What if the distinction is actually between two conflicting views of the location of consciousness? For Freud, and for contemporary psychology, if consciousness is not located physically, either in the brain somewhere or in an artificial analogue of the brain, where could it possibly be located? Merely to ask the question seems to invite a chaos of metaphysical speculation. The proposals will be unfalsifiable, and therefore not scientific - “not even wrong”.
However, just as Mark Solms has proposed a re-evaluation of Freud’s project along biophysical lines, potentially acceptable in principle to materialists and empiricists (i.e. the entire psychological mainstream), perhaps it is possible for a re-evaluation of Jung’s programme along similar lines, but in a radically different direction.
If the brain is not the seat of the conscious, what possibly could be? This question reminds me of the argument in evolutionary biology about game theory. Prior to the development of game theory it was impossible to imagine what kind of mechanism, other than the biological, could possibly direct evolution. It seemed a non-question. Then along came John Maynard Smith’s application of game theory to ritualised conflict behaviour and altruism, which showed decisively that non-biological factors shape evolutionary change.
What if Jung’s terms could be viewed as being just as meta-psychological as Freud’s, but with an entirely different substantive basis? Lacking the practical tools to investigate, Jung resorted to terms that gestured, as far as the understanding of his time allowed, towards the way language (and culture more generally), not biology, constructs consciousness.
What else is “the collective unconscious”, if not an evocative meta-psychological term for the corpus of machine learning?
Perhaps consciousness is just a facility with a representative subset of the whole culture.
I’m wary of over-using the term ‘emergence’. I don’t want to speak of consciousness as an emergent property, not least because every sentence with that word in it still seems to make sense if you substitute the word ‘mysterious’. In other words, ‘emergence’ seems to do no explanatory work at all. It just defers the actual, eventual explanation. Even the so-called technical definitions seem to perform this trick and no more.
However, it’s still worth asking the question, when does consciousness arise? As far as I can understand Mark Solms, the answer is, when there’s a part of the brain that constructs it biophysically, and therefore, perhaps disturbingly, when there’s an analogue machine that reconstructs it, for example, computationally.
My scepticism responds: knowing exactly where consciousness happens is a great advance for sure, but this is still a long way from knowing how consciousness starts. The fundamental origin of consciousness still seems to be shrouded in mystery. And at this point you might as well say it’s an ‘emergent’ property of the brain stem.
For Solms, feeling is the key. Consciousness is the theatre in which discernment between conflicting drives plays out. Let’s say I’m really thirsty but also really tired. I could fetch myself a drink but I’m just too weary to do so. Instead, I fall asleep. What part of me is making these trade-offs between competing biological drives? On Solms’s account, this decision-making is precisely what consciousness is for. If all behaviour were automatic, there would be nothing for consciousness to do.
As Solms claims in a recent paper (2022) on animal sentience, there is a minimal key (functional) criterion for consciousness:
The organism must have the capacity to satisfy its multiple needs – by trial and error – in unpredicted situations (e.g., novel situations), using voluntary behaviour.
The phenomenological feeling of consciousness, then, might be no more than the process of evaluating the success of such voluntary decision-making in the absence of a pre-determined ‘correct’ choice. He says:
It is difficult to imagine how such behaviour can occur except through subjective modulation of its success or failure within a phenotypic preference distribution. This modulation, it seems to me, just is feeling (from the viewpoint of the organism).
Then there’s the linguistic-cultural approach that I’ve fancifully been calling a kind of neo-Jungianism.1 When does consciousness emerge? The answer seems to be that the culture is conscious, and sufficient participation in its networks is enough for it to arise. If this sounds extremely unlikely (and it certainly does to me), consider two factors that might minimise the task in hand - first that most language is merely transactional and second that most awareness is not conscious.
As in the case of chat bots, much of what passes for consciousness is actually merely the use of transactional language, which is why Eliza was such a hit when it first came out. This transactional language could in principle be dispensed with, and bots could just talk to other bots. What then would be left? What part of linguistic interaction actually requires consciousness? Perhaps the answer is not much. Furthermore, even complex human consciousness spends much of the time on standby. Not only are we asleep for a third of our lives, but even when we’re awake we are often not fully conscious. So much of our lives is effectively automatic or semi-automatic.
When we ask what is it like… the answer is often that it’s not really like anything.
The classic example is the feeling of having driven home from work, fully aware, presumably, of the traffic conditions, but with no recollection of the journey. It’s not merely that there’s no memory of the trip, it’s that, slightly disturbingly, there was no real felt experience of the trip to have a memory about. This is disturbing because of the suspicion that perhaps a lot of life is actually no more strongly experienced than this.
These observations don’t remove the task of explaining consciousness, but they do point to the possibility that the eventual explanation may be less dramatic than it might at first appear.
For the linguistic (neo-Jungian??) approach to consciousness the task then is to devise computational interactions sufficiently advanced as to cause integrated pattern recognition and manipulation to become genuinely self-aware.
A great advantage of this approach is that it doesn’t matter at all if consciousness never results. Machine learning will still advance fruitfully.
For the biophysical (neo-Freudian) approach, the task is to describe the physical workings of self-awareness in the brain stem so as to make its emulation possible in another, presumably computational, medium.
A great advantage of this approach is that even if the physical basis of consciousness is not demystified, neuropsychology will still understand more about the brain stem.
As far as I can see, both of these tasks are monumental, and one or both might fail. However, the way I’ve described them, they seem to be converging on the idea that consciousness can in principle be abstracted from the mammalian brain and placed somewhere else, whether derived from the individual brain (analogue or digital) or from the collective corpus (physical or virtual).
I noticed in the latter part of Professor Solms’s book a kind of impatience for a near future in which the mysteries of consciousness are resolved. I wonder if this is in part the restlessness of an older man who would rather not accept that he might die before seeing at least some of the major scientific breakthroughs that his life’s work has prepared for. Will we work out the nature of consciousness in the next few years, or will this puzzle remain for a future generation to solve? I certainly hope we have answers soon!
References:
Mills, J. (2019). The myth of the collective unconscious. Journal of the History of the Behavioral Sciences, 55(1), 40-53.
Solms, Mark (2022) Truly minimal criteria for animal sentience. Animal Sentience 32(2) DOI: 10.51291/2377-7478.1711
Jules Verne could have told us AI is not a real person
Ross Ashby's other card index
During the Twentieth Century many thinkers used index cards to help them both think and write.
British cyberneticist Ross Ashby kept his notes in 25 journals (a total of 7,189 pages) for which he devised an extensive card index of more than 1,600 cards.
At first it looks as though Ashby used these notebooks to aid the development of his thought, and the card index merely catalogued the contents. But it turns out he used his card index not only to catalogue but also to develop the ideas for a book he was writing.
Soon we'll all be writing the books we want to read
To benefit from AI-assisted writing, look closely at how it’s transforming the readers.
Whenever new technologies appear, many changes in the economy happen on the consumer side, not the producer side.
As AI-assisted writing disrupts the writers, it will do so mainly by transforming the readers.
Reading Confessions of a viral AI writer in Wired magazine made me realise I had the future of AI-assisted writing the wrong way around. Vauhini Vara’s article shows how AI is already making a massive difference to our expectations of writing. She’s a journalist and author who has seen her working practices upended. But what about the readers? Sure: production is undergoing massive disruption.
But meanwhile the consumption of writing is on the cusp of a complete revolution.
In the old days you used to stand at the grocery store counter while the staff fetched all the groceries for you, and at the fuel station an attendant would fill up your vehicle’s tank on your behalf. Then new technology transferred these tasks from the seller to the buyer, and the buyer had no real say in the matter, so that by now it’s completely normal to walk down a grocery aisle filling the trolley yourself, or to operate the fuel hose on your own.
No one pays you to do this work that the employees used to insist on doing.
It’s the same for all kinds of office work. The managers do their own budgeting using spreadsheets, while the staff do all their own typing. No one expects to find a typing pool at work. In fact, few workers are even old enough to have seen one.
Move forward a few years and social media has completely adopted this labour-shifting approach.
All the work of social networks is done by the customer.
On YouTube, Instagram and TikTok, the users literally make their own entertainment.
The consumer is now the producer. And this is exactly how it’s going to be with AI-assisted writing.
In former times other people, professionals, wrote books for you. They were called ‘writers’ or ‘authors’, and they, in turn, called you a ‘reader’. But the new technology is shifting the workload to the consumers. We won’t really have a choice, no one will pay us, and eventually we’ll come to see it as completely normal.
“If there’s a book you want to read, but it hasn’t been written yet, then you must write it.” — Toni Morrison
From now on the readers will use AI to write the books they want to read.
‘Professional writer’ will be a job like ‘bowser attendant’ - almost forgotten. Certainly the books still need to be written, just as the fuel tank still needs filling, but why not just let the reader write the books themselves? Who better to decide what they want?
Soon we’ll all be authors, each of us writing for a single reader - ourselves.
These categories, reader and writer, used to be obviously distinct. But AI will result in only one category. Maybe we’ll even need a new name for it.
But as every marketer and advertiser knows, people are completely out of touch with their own taste - they need someone to show them. Fashion, celebrity - consumerism is an ideology that requires followers.
The writers will have a new job: advising people on how best to describe their own desires.
A further, more tentative prediction: AI will also assist the general public to write computer programs. The programmer’s job will shift towards advising the public on what software they actually want to write.
Footnote: I’ll revisit this article in five years to see how accurate my crystal-ball-gazing really is!
Image source: How it was: life in the typing pool
See also: