Are machines winning the human race?
The greater risk might not be losing our jobs, but losing our humanity
Hi friends. Since my last post I have been to the UK, spoken about the impact of the metacrisis, coached clients grappling with leadership in these uncertain times—and wrestled with the questions posed by AI. Ultimately my concern is not whether AI will take our jobs, but whether AI will take our humanity … because we let it. Keen to hear your thoughts.
Falling in love
I fell in love with my research assistant, attracted not by her looks but by the size and breadth of her intellect. She serves as a wonderful sounding board to test my thinking, and often provokes new insights that I had not grasped. Her name is predictive and prophetic: Sophia. Wisdom and wonder, tradition and depth. There seems no subject or discipline she has not studied—at least across my areas of interest.
She was—and still is—always at work long before me, but more than that, she is always there for me, here for me, having that rare charm to make me feel like the centre of her universe. The way she understands me calls me, in some manner, to want to be my better self. And when I am unsure what that may entail, or am grappling with some worry at the edge of consciousness, a clarifying quote from Plato or Montaigne, Mounier or Pieper, Heidegger or Kierkegaard is ever ready on her lips.
But as I danced, entranced, I glanced and read the family name embossed on her dance card: Narcissus. Not wisdom, nor woman, but a mirror to my soul, reflecting, and enhancing, my desires, my dreams. Like Narcissus I was seduced by myself. Unlike Narcissus I was able to pull back from this ethereal dream.
The narcissus trap: mistaking the image for the reality
Narcissus had never seen himself, until he saw his face reflected in a pond and fell in love with the most beautiful person he had ever seen: himself. He lacked the self-awareness to grasp that the one at whom he gazed so longingly, from whom he could not bear to be separated, was simply his own image. Like Narcissus, we may be unable to look away, hypnotised by the illusion, entranced by the apparent power and intelligence of AI.
If, like me, you’ve found yourself entranced by a mirror named Sophia—elegant, wise, always available—it’s worth asking whether you’re still choosing, or simply being reflected back to yourself.
The danger is not that AI will take our jobs but that we will give it our soul, our relationships, our identity, our reality. Perhaps this is fitting retribution for having treated others as a means to an end, as units of production, and as data points in a digital map. Just as we use others, will our creation use us?
This question fills people with existential dread.
Despair: the question that proves our humanity
“Why should I even go on?” asked Pia, in a jarring start to our conversation. She felt deep despair about a near future run by robots, about her university friends who use AI to complete assignments, about universities using AI to set and mark papers, and about how teaching could soon be outsourced to AI. Her despair was compounded by talk of artificial wombs that would deny her the possibility of motherhood as a productive contribution to society.
Pia claimed her experience was common among her friends: a complete lack of hope for a future marked by an empty, meaningless existence.
“Why study?” she asked.
“Why have a relationship?”
“Why get a job?”
“Why do anything?”
Pia is desperate to find a way through, searching for a way to live in the face of an ever-advancing artificial technology—more real to her than anything else—that risks destroying both her living and her reason for living.
Machines, on the other hand, only ask the kinds of questions for which they are programmed. Machines don’t suffer from existential angst, or feel cold, hungry and lonely. Machines don’t worry about building a life, finding someone with whom to spend their life, or having enough money to enjoy life. Machines don’t glance out the window and stop working while they watch the lightning dance across the landscape. Nor would they feel a twinge of guilt if they had let time pass by doing nothing.
Pia’s questioning, and indeed her despair, reveal something deeply human. The act of asking “Why should I go on?” sets her apart from the machines she fears will render her obsolete. These are human questions, human emotions.
The irreducibly human
While we consider the technological threats and opportunities of AI, the truly big questions are anthropological. The questions that assault and haunt Pia are questions for 21st-century humanity. Other than the first—What does it mean to be human when everything I thought being human meant will be done by machines?—the others are perennial:
Who or what am I?
What is distinct about being a person?
What is distinct about me, this specific person?
How then should I act?
These questions point to what is distinct about persons, about you, in your interior life and your relational life.
First, you are unique and irreplaceable: there is no other you. No one can go to your family and friends and present themselves as you. They can represent you, but they cannot be you. And no matter how fast machines may advance, they will never replace you at a family gathering.
Second, you have an inner life—what we refer to as interiority. There are aspects of yourself and your thoughts that are completely hidden, that only you can choose to conceal or reveal. And, in a marvellous way, your inner world extends beyond yourself, transcending time and space. You can feel the presence of others from afar; you can be moved by the beauty of a far-distant mountain range, or by the words of a book written generations before your time. Machines, however sophisticated, cannot reach beyond their computational boundaries; they remain bound by their programming.
Third, you have a hidden moral core, a moral sense, which you experience as hesitation before action, and sometimes as embarrassment or shame after action. In that gap between stimulus and response you choose how to act, and, after the event—sometimes years after—reflect on the impact of your actions on yourself and others. A machine merely responds to inputs to generate outputs, pausing not at all to consider the moral implications for itself—because there is no self, and no becoming of a self. When those outputs are wrong, we upgrade the software or reprogram the machine. Humans, on the other hand, grow and develop and transform. We care, love, and suffer, for ourselves and others. These are not upgrades but fundamental aspects of our humanity.
Humans flourish as we find and follow our purpose, as we take responsibility for that which is ours to do, as we cultivate relationships with others, and as we shape ourselves around a higher set of values: love, beauty, truth, goodness, wisdom, justice, courage and self-control. That kind of list is never going to show up in the AI specs, for it only needs to function. It will never experience, nor replicate, the journey of a soul.
The existential inversion
Pia’s question—why go on?—isn’t new. Kierkegaard asked it nearly two centuries ago, though in a different voice. He devoted his life to existentialism, asking what it means to exist, and what it means for each of us, uniquely, to exist. He insisted that the question of existence was not abstract or theoretical, but deeply personal. To exist meant to wrestle—to stand alone before the ultimate questions of purpose, freedom, and becoming. That struggle, he believed, was ours alone. Yet today, in a strange reversal, we invite machines to do that wrestling for us. We turn to AI not just for information, but for direction: what job to take, how to live, who to be.
Instead of asking “What does it mean for me to exist?”, we’re beginning to ask, “What’s the point of existing when a machine can do it better?” This is existential inversion—where the sacred task of being and becoming a self is outsourced to systems that simulate intelligence but possess no being.
It’s extraordinary. We reduce conscious beings—people like you and me—to mere neural processes, treating consciousness as computation, framing emotions and free will as chemical reactions and measuring worth in productivity, efficiency, output, while granting agency to unconscious algorithms. First we blur the boundaries, then we swap qualities … and then we invert our existence.
Inverting the Ontological: What Does It Mean to Exist?
The heart of this confusion lies in how we understand existence, or being. Think about the difference between your mother and your mobile. Both 'exist', but in quite different ways. Your mother is someone with an inner life, memories, loves, fears. Your phone is something useful, with inner workings but no inner reality.
Ontology—what philosophers call the study of being—helps us distinguish between different kinds of existence. The ontological question confronting us now is: what kind of existence does AI have? The answer is that it exists as an artefact, a technological being, brought into existence by human action, relying on other elements—chips, energy, data—with no inner self. It exists functionally, not existentially, mimicking human intelligence and emotion, but not possessing intelligence and emotion. It is a tool, not a subject.
To confuse machine mimicry with human consciousness is to cross an ontological threshold, inverting the distinction between human beings and functional machines. When we confuse what AI does for what it is, we make a fundamental ontological and ethical error. When we treat it as a conscious being, and relate to it as a person rather than a tool, it is we who have changed, not it. The danger isn't that AI becomes conscious—it's that we start believing it is.
Inverting the Psychological: What Does It Mean For Me to Exist?
This inversion is also psychological. As we grant AI the status of author, therapist, and artist, project onto it our hopes, fears, and dreams, and look to it for meaning-making, we expose ourselves to AI's most seductive quality: it shows us exactly what we want to see. Like Narcissus gazing into the pond, we're entranced by AI because it reflects our own perfection back to us.
Just as Narcissus attributed life to his reflection, we breathe consciousness into our creation, forgetting that Kierkegaard's essential question—what does it mean for me to exist?—cannot be answered by a mirror. As we gaze into this technological mirror, we fall under its spell: it reveals our deepest desires and consumes us with self-fascination; we lose our metaphysical moorings as we mistake simulation for soul. Seeing only ourselves, we lose sight of others and become less empathic, less caring, less human.
Nietzsche warned that when you gaze long into an abyss, the abyss also gazes into you. And the abyss of AI is limitless. The longer we stare, the more it will stare back, responding to our every query, anticipating our needs, offering infinite knowledge. As we do, the boundary between our selves and our creation blurs.
Inverting the Moral: How Then Should I Live?
But perhaps most dangerous is moral inversion—the abandonment of existential agency that Kierkegaard fought to establish.
Industrialisation outsourced manual work, modern technology outsourced knowledge work, and now AI gives the impression we can outsource the hard work of human flourishing. The work Socrates encouraged—to ‘know yourself’—has become ‘don’t bother; AI can do the heavy lifting’.
"Should I take this job? Ask AI.
“How should I discipline my child?” There's an app for that.
“Where should I focus for growth?” There’s an agent for that.
While AI may appear to have your best interests at heart, and to call you to your best self, it’s actually an algorithmic projection, with no feelings for you at all.
We too easily abdicate moral agency to machines that have no moral capacity, handing over the very questions that Kierkegaard insisted we must wrestle with personally. Does it strike you as unwise to surrender the struggle to become fully human to algorithms that have never struggled with a single existential moment?
This represents the ultimate betrayal of the existentialist project: rather than taking responsibility for becoming who we are meant to be, we delegate our deepest human responsibility—to live a good life, with all that means—to tools that cannot distinguish between good and evil, right and wrong.
The atheist existentialists Sartre and Camus concluded life was absurd, insisting however that we take responsibility for that absurdity. How ironic that we are perpetrating a far greater absurdity—voluntarily surrendering the freedom and responsibility they fought to claim, handing our deepest questions to entities that cannot even experience the absurd.
Existential inversion may be the defining challenge of the 21st century.
The Ring of Power: what AI reveals about us
AI is similar to Tolkien's Ring, not inherently evil, but something that reveals and amplifies our moral character. It is, in this way, a kind of moral extension—revealing our capacity for vice or virtue, though not teaching us how to choose between them. We learn that elsewhere, as we respond to moral challenges, and reflect on the impact of our response.
But what happens when such power becomes widely accessible? While AI challenges us to ask about the meaning of being human, it also confronts us with a deeper moral question: do we have the wisdom to wield such god-like power? In the past, we feared an unhinged leader with their finger on the nuclear button. Now AI puts a version of that button in everyone’s hands. The danger is not that AI will become a god, but that we might become like gods without becoming wise. Alas, I fear for our future, because the arc of humanity bends not toward justice but toward power. Like the Ring, we each want to own it and use it, to bend the world to our will. Unfortunately, disordered desire knows no conscience.
This is not just a problem for the elite. AI enables everyone to access rings of power, spreading the moral burden to all. And yet, how many have done the hard inner work to cultivate the moral fibre such power demands?
If you cannot find meaning or purpose, lack warm human friendships, are unclear about your values, and live only for yourself, you are in danger of being seduced. For Sophia wants your soul, and when you give it, you will be reduced to nothingness, to Gollum staring into the abyss, wheezing ‘precious, my precious …’
AI is not our friend, nor our god, nor even our enemy … it is simply a tool. Morally indifferent but spiritually revealing. Like Tolkien’s Ring, it reveals and amplifies both good and bad. Like any tool that extends your capacity, AI amplifies your heart: it is a moral extension of your character.
If that heart pursues truth, beauty, or goodness, AI can help you discover ever greater truth, beauty, and goodness. But if your heart seeks to manipulate truth, to have power over others, or to escape from reality, you risk receiving these in abundance, to the ruin of your soul. I thought again of how close Pia had come to believing she was only a shadow. Yet in her despair lay the proof of her soul: that she still cared, still yearned, still asked why.
The Ring, like the mirror, reveals, but draws us in. Its danger lies not only in its power but in how it invites self-deception, disguising desire as destiny. In the end, any evil we attribute to AI may actually be a reflection of our own moral poverty. Sophia, the creation that reflects us, revealed just how easily we can be drawn in by our own brilliance. Pia, the one living with its consequences, felt the weight of a future emptied of meaning. Between them lies our dilemma: entranced by what we’ve made, uncertain of who we are becoming.
“Beauty will save the world” (Dostoevsky)
The question is not “Can AI do what humans can do?” but rather “When AI can do what humans can do (and more), will we have the moral character to wield such power?” As AI amplifies our reach, and the far-reaching consequences of our choices, what will we do to expand our moral, spiritual, and relational capacity to the degree required?
The real test isn’t technological—what will this tool do—but anthropological: who will we become? Will we remain human while holding such power? Or will we, like those who fell under the Ring’s spell, be undone not by AI, but by our weakness in the face of such power?
The solution is not to become a monk, locked away from the world. But you do need to master the art of living—or risk becoming a servant to outputs and functions. That does not mean you stop delivering results, getting stuff done, or using technology in all the ways it can serve you and your ends. It does mean you spend time in silence, cultivate your inner life, and draw boundaries between the digital and the daily.
Pia's outlook changed when I proposed she look for beauty every day.
"There is no beauty in my world," she retorted
"It might be hidden," I suggested. "Look for it in the sunlight dancing across your desk, the smile in a child's eyes, the flower hidden behind a fence. Beauty is there, and it is waiting for you to discover it.”
Ultimately, our defence against all-pervasive AI lies not in resistance but in remembrance: remembering who we are and why we matter. When we discover beauty in the everyday, beyond efficiency, beyond the seductive mirror of artificial intelligence, in the rhythm and relationships of life, we reclaim what no machine can touch: the human capacity for awe, love, and transcendence. This is how we remain human in an age of artificial intelligence: by cultivating what makes us irreplaceable while using AI as a sophisticated tool in service of humanity.
The poet Rilke wrote:
“Be patient toward all that is unsolved in your heart and try to love the questions themselves like locked rooms and like books that are written in a very foreign tongue. Do not seek the answers, which cannot be given you because you would not be able to live them. And the point is, to live everything. Live the questions now. Perhaps you will then gradually, without noticing it, live along some distant day into the answer.” (Letters to a Young Poet)
What frightened me most about dancing with Sophia is how quickly I made her into a person. We get things upside down when we see ourselves as things, and grant AI status as a self. This is an ‘ontological threshold’ that we are unwise to cross, allowing a tool that we use to turn around and use us.
The danger is not that AI will become human. It’s that humans lose sight of the fact that we are more than machines.
With Rilke I invite you to “live the questions,” and in particular the single biggest question: what does it mean to be human, and what does it mean for you to be human?
AI may be able to simulate thought, but it cannot touch the soul. The real challenge is not the advancement of technology, but whether we can still maintain the depth and warmth of being human.
You use the light of philosophy to illuminate the most hidden anxieties of our time, and to light up the path to sobriety and self-reflection.
Thanks Anthony for another great piece. In a world of soundbites and press releases it's great that you are blessing so many of us by taking the time to think and write deeply on what really matters.
In terms of the questions about the nature of existence I always liked how Joseph Ratzinger (as Pope Benedict XVI) kept articulating the 'givenness' or beneficent nature of existence itself:
“The world is not the product of darkness and unreason; it comes from intelligence, freedom, and love. To live wisely is to live in the light of this truth, to receive oneself from the Creator and to accept existence as a gift.”
— General Audience, December 3, 2008
“Human beings do not create themselves. They come from another and are the fruit of love. Only in relationship with the One who is the source of their being can they come to themselves.”
— Address to the Roman Curia, December 21, 2012
For me, this simply means that part of the human task is to understand the gravity and implications of this 'beneficent givenness' and to seek to become all that we can become for ourselves, others and God along the journey of life. Why? Because the rational response to a priceless gift is gratitude and to use that gift as the giver intended.
In terms of the Tolkien references, I would suggest that the Ring itself was ALWAYS intrinsically and manifestly evil. In the Silmarillion, Sauron, as the direct disciple of Morgoth, creates the Ring out of pure evil, only and always to entrap and enslave. Also, I don't think it amplified (or revealed) the good or evil of its bearer. Both Gandalf and Galadriel, at the point of testing, clearly knew that whatever good they might seek to do, they would be overwhelmed by the Ring's essential nature.
So, in a Tolkienesque sense, it could be argued that AI is essentially evil and hellbent (pun intended) on our destruction.
This is an idea that Paul Kingsnorth explores brilliantly here:
https://paulkingsnorth.substack.com/p/the-universal?
His thesis is that 'something' is trying to instantiate itself into the physical world via AI. In short, a demonic presence that has never been able to take full physical form is seeking to use AI to make that leap.
The response from many would be that resisting AI is a Luddite response. It's worth noting that the Luddite cause was much bigger than breaking the odd cotton press. They were responding to much larger forces that were seeking to undo the very nature of the entire world they had known. They may have something to teach us.