"What’s your biggest fear with AI?”
This was a question posed to me as part of a panel I was taking part in at a conference. It wasn’t a pre-prepared question where all the panellists were given time to think about their answers in advance. It wasn’t one I had given much thought to at all, really. But there I was: on stage, an audience of hundreds, microphone passed to me, and I had to think on my feet.
My response? “Sleepwalking.”
Don’t bump your head!
This is what AI thinks sleepwalking with AI looks like (Gemini-generated image)
As far as I know, I have never been susceptible to actual sleepwalking. No somnambulistic tendencies for me... I think? I mean, I have to caveat that by saying that occasionally, when I have been travelling to far-flung places and struggling with jet-lag, I have awoken to find myself standing beside my bed. But I prefer to think of that as ‘very active dreaming’ - so active, in fact, that I’m getting my steps up while staying asleep.
However, while it may often be played for laughs in movies and TV shows, I know that for some people sleepwalking is no joke. Sleepwalkers have been known to leave their homes, climb out of windows, prepare food or even drive vehicles! From what I understand, when woken, those affected have no idea how they got there or what they had been doing.
(Apparently, 7% of us sleepwalk at some point in our lives - which is more common than I thought!)
All of this is why the term came to mind when I was asked about my fear. Because with advanced technologies like AI, I worry that we will lose interest. Or not take enough interest. Or find the important things boring. Or just get so enamoured with the convenience of having it baked into our daily digital platforms that we’ll be unaware of where we’re heading. And then we’ll stumble into a future far from the one we’d have preferred, bumping our metaphorical heads (or worse), waking up surprised, confused and wondering how we got there.
It seems I’m not alone in this fear, although others may characterise it differently. But, what to do about it? How do I/we face this fear?
Trade-off architect
I think a good place to start is to keep expressing our expectation to the organisations providing and leveraging these tools that they be upfront and transparent about their use - the limitations as well as the benefits. Really hold them (and ourselves) to account for their use in our lives. Favour those who are more transparent over those who aren’t. Look for companies that hold responsible data use at the core of their operations.
From there, we can turn to one of my favourite quotes that I love to share as far and wide as possible:
There are no solutions. Only trade-offs.
Thomas Sowell
I say that it is one of my favourites even though I have spent a large part of my career as either a solution architect, or someone who leads and coaches solution architects. If Sowell is right, then maybe we should have called the discipline of solution architecture “trade-off architecture”?
Instinctively and intuitively, it is a claim that seems to carry weight. Any time we create and deploy a new technology, we don’t so much solve an existing problem set as change the problems that we are facing. Now, oftentimes the new set of problems is far more welcome than the previous set. So we call that a win. A ‘solution’.
If I buy a house, I swap the problem of not having a house or renting for the problem of managing a mortgage. That might be more palatable for a number of reasons, but a mortgage is still a problem of sorts. It’s still a trade-off millions of people have made. I don’t think any of them were asleep while signing the documents though.
So, I think the antidote to sleepwalking with AI or any technology is to open our metaphorical eyes, stare straight into the trade-offs that are on offer, wrestle with them, weigh them up, and then make a decision based on the best information we have available at the time (while still holding the right to revisit this in the future as more information comes to light).
Overly cautious? I don’t think so. Or, to put my own advice into action: I’m willing to trade some speed of adoption for the increased certainty of making a conscious decision. And I say this as someone who defaults to ‘early adopter’ mode with pretty much any new tech.
Others have written about some of the trade-offs associated with recent developments of AI. I’d like to offer a couple more candidates for the list (if you’re making a list)...
Human-in-the-loop vs. human-at-the-heart
For some processes that can be assisted by AI tech, the question is how much to automate and with what level of oversight. This is where the human-in-the-loop concept reigns supreme: for mission-critical or risky endeavours, make sure there is a human checking things, or providing the final approval.
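That oversight dial can be sketched in a few lines of code. This is an illustrative sketch only: the `risk_score` input, the threshold value and the `ask_human` stand-in are my own assumptions for the example, not a real framework or API.

```python
def ask_human(decision):
    """Stand-in for a real review step (an approval UI, a ticket queue, etc.)."""
    print(f"Human review requested for: {decision}")
    return True  # in a real system, this would block on a person's verdict


def gate(ai_decision, risk_score, risk_threshold=0.7):
    """Let low-risk decisions through automatically; route high-risk
    ones to a human for the final approval."""
    if risk_score < risk_threshold:
        return ai_decision        # fully automated path
    if ask_human(ai_decision):    # human-in-the-loop path
        return ai_decision
    return None                   # decision rejected by the reviewer
```

The interesting design choice is not the code itself but where you set the threshold - and, as the next paragraph asks, whether some processes should ever take the automated branch at all.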
The obvious trade-off here is versus no-human-in-the-loop - i.e. fully automated. But I think there is a non-obvious trade-off to consider too: is the process in question, in fact, one that deserves a human at the heart of it? Is it something that, even though it could be automated by AI, probably shouldn’t be, because it is too connected to the human experience? Or because it requires real empathy, not just the mimicry of it?
Choosing to place a human at the heart of a process brings with it opportunity costs relating to lost efficiency, productivity, scalability. Trading these for real connection to the human condition, real emotion, real empathy? Is that a trade worth making?
It will depend, obviously, on the process and the people involved. But to paraphrase the venerable and very fictional Dr Ian Malcolm (of Jurassic Park fame): sometimes, when deploying a new technology, we are at risk of being so focused on whether we could, that we don’t stop to ask whether or not we should. And making time for that second part is really, really important.
Liking human language vs. likening to human in language
Was anyone else surprised by how quickly the world moved on from one of the true breakthroughs unlocked by LLMs - that of understanding human language? Maybe not with all its nuances, sure, but I still remember being blown away when testing the early version of ChatGPT. This, I remember remarking to my semi-interested children, was a moment akin to when I first used multi-touch on a smartphone! Or browsed my first webpage from the other side of the world! That feeling of transformative technology in the here and now, not in some promised future.
Yet no sooner had we achieved that than we collectively moved on in search of agentic capabilities, PhD-level science skills and elite software engineering assistance. All interesting topics, no doubt, but can we just take a moment here? I remember using Dragon NaturallySpeaking 1.0 (I’m that old) and having to read training paragraphs to the tool in a terrible approximation of an American accent just so it could ‘understand’ me. From that to Star Trek-level comprehension of voice and text in less than three decades?! Amazing! (To me, anyway.)
But with this apparent fluency has also come a habit of mental shortcutting in the form of anthropomorphising AI. Bestowing names and identities and ‘personalities’. The logic being: if it quacks like a human, we may as well give it a human name?
The more accurate approach would be to describe this group of technologies as what it is: a set of technologies that has mapped human language and can interface with and use it. But that’s not as catchy as calling it an it and giving it a name. There are real trade-offs in doing so, though. I really recommend this piece by Emily Bender and Nanna Inie. They explore the challenges with anthropomorphising AI, and how our language can set the wrong expectations, or worse.
No regrets?
One of the advantages of working in a company for a relatively long time is that you get to see past trade-offs coming back around again. You get to learn lessons, constantly, from past versions of yourself. You are able to benefit from hindsight as you work to gain the insight you need to move forward to the next trade-off.
I think we all know what the phrase “no regrets” is trying to say. But in the case of AI and other advanced technology, I think we should instead be aiming for “deliberate regrets”: choices where it was clear what we were giving up by taking the course of action we chose, where the risks were weighed up, and where we decided we’re okay with the trade-off.
I’m using the word "we” a lot, because I think some of these trade-offs are not in the domain of the individual. They’re in the domain of business. Of politics. Of families. Of relationships. And so I think this discussion is more "we” than "me”, and we should try to set it up as such.
That might mean setting off a few alarm clocks, or turning on the lights, or gently jostling a friend who seems to be sleepwalking a little with their AI habits. Encouraging each other to actively consider the trade-offs involved is all about moving forward with eyes wide open, so to speak. Even if we don’t know the answers, or if the trade-offs aren’t immediately apparent, being awake and alert gives us a far better chance of figuring things out.
If we’re aware of the trade-offs, and focused on the good use cases, I don’t think we need to be afraid of AI technologies. But like heavy machinery, it’s probably a good idea not to operate them while drowsy.