
Artificial intelligence can already write, analyze, predict, and decide faster than we can. It is entering our relationships, our workplaces, our health systems, and even our moral decisions. But the uncomfortable question is not how powerful AI will become.
The question is: Are we psychologically mature enough to handle it?
Yes, this is another article about AI. You might wonder whether everyone has set aside their usual work to contemplate a future with artificial intelligence. Because AI is evolving so rapidly, we are all trying to understand what we are going to do with it: how it will shape us and how we will shape it.
As a curious human working at the intersection of technology, connection, and creativity, I am interested in the evolution of humanity through our interactions with AI.
My first encounter with AI was years ago during my college days. I was coding heuristic algorithms—the bedrock of what we broadly call AI today—to find good-enough, sub-optimal solutions to NP-complete problems in domains where computing the perfect answer was practically infeasible. At the time it felt exciting and abstract, something that mostly lived in textbooks.
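For readers curious what such a heuristic looks like, here is a minimal sketch (not my original college code): the classic nearest-neighbor approach to the NP-hard traveling salesman problem, which builds a fast but generally sub-optimal tour by always visiting the closest unvisited city next.

```python
import math

def nearest_neighbor_tour(points):
    """Greedy heuristic for the traveling salesman problem.

    Starts at the first city and repeatedly visits the closest
    unvisited city. Runs quickly, but the resulting tour is
    generally sub-optimal rather than the best possible route.
    """
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        # Pick the unvisited city with the smallest Euclidean distance.
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (0, 1), (2, 0), (2, 1)]
print(nearest_neighbor_tour(cities))  # a valid tour visiting every city once
```

The point of such heuristics is exactly the trade-off described above: exchanging a guarantee of optimality for an answer we can actually compute.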
It was not a part of everyday life.
Today it is everywhere—from writing our personal emails to suggesting whom we should date (The Shared Pulse), from legal and health counseling to pattern analysis and decision support.
Artificial intelligence has the capacity to analyze, learn, solve, and act extremely fast within well-defined domains of data. In many ways it expands our collective intellectual capacity. It can process information at a scale and speed no human mind could match.
But like any powerful tool, it also carries shadows.
AI learns from the data we provide. If the data contains bias, the system learns that bias. When deployed at scale, those biases can become amplified. AI can produce convincing answers without truly understanding the world it describes. It has no insight, no lived experience—only patterns extracted from massive streams of zeros and ones.
It does not take responsibility for its actions.
It does not feel guilt or grief the way humans do.
You unplug it—and everything disappears.
And yet, precisely because of its power, AI pushes us out of our comfort zone, just like every transformative tool before it.
When the first knife was invented thousands of years ago, people likely went through a similar process of discovery and anxiety. A knife enhances the capabilities of our hands: it can help prepare food, build shelter, and sustain life—or it can destroy. The tool itself is neutral. It is the consciousness of the person holding it that determines the outcome.
Later, the pen extended our cognitive abilities. It allowed us to represent, refine, and share ideas across time and space. As Edward Bulwer-Lytton famously wrote in his play Richelieu; Or the Conspiracy, the pen is mightier than the sword. Words can shape laws, inspire revolutions, or manipulate entire societies.
Artificial intelligence represents the next amplification: not of our hands like the knife, nor of our words like the pen, but of our thinking itself.
And that amplification forces new questions.
Who owns the responsibility when destruction happens at the press of a button?
What if the destruction was never intended?
Is it the programmer, the designer, the approver, the decision maker—or all of us who simply watch from the sidelines?
If AI learns about humanity through the data we produce, what picture of ourselves are we giving it?
When was the last time we shared our authentic selves on social media? Instead, we mostly share achievements and smiling photographs, rarely our vulnerabilities or the messy process behind success. We seldom post heartbreaks, doubts, or failures. The data AI learns from therefore reflects a skewed version of humanity.
Fortunately, literature still offers a richer portrait of the human condition.
These questions point to something deeper about human development itself. Developmental psychologist Robert Kegan describes human growth not as simply gaining more knowledge, but as transforming the way we make meaning of the world. In his work The Evolving Self and In Over Our Heads, Kegan explains development through what he calls the subject–object shift.
What is subject is something we are embedded in—something we are identified with. It shapes our thinking, identity, and emotions so closely that we cannot step back from it. What becomes object is something we can observe, question, and take responsibility for. Development happens when what once defined us becomes something we can reflect upon.
Kegan describes five progressively complex ways of making meaning.
In the Impulsive Mind, common in early childhood, individuals are largely subject to their impulses and perceptions.
In the Instrumental (or Imperial) Mind, people can step back from impulses but remain primarily guided by personal needs and exchanges: what benefits me, what can I trade? This is the stage where we ask, What is in it for me?
In the Socialized Mind, identity becomes shaped by relationships, expectations, and cultural norms. Many adults primarily operate at this level, where belonging and shared values define who we are.
In the Self-Authoring Mind, individuals develop an internal compass. Social expectations become object, and people organize their lives around self-defined principles and values.
Finally, in the rare Self-Transforming Mind, even one’s own belief system becomes object. Individuals can hold multiple perspectives simultaneously and remain open to revising their frameworks as complexity increases.
As we move through these stages, the subjects we once identified with become objects we can examine. By stepping back from parts of our identity, we gain broader perspective and deeper understanding.
Kegan suggests that most adults operate between the Socialized and Self-Authoring stages. In reality, we often show different levels of development in different areas of life.
What fascinates me is how AI seems to invite us toward the later stages. The systems we build today are deeply interconnected, global, powerful, and impactful. Understanding their implications requires the ability to see relationships between systems, question assumptions, and take responsibility beyond our immediate roles.
AI may therefore be doing something unexpected. Beyond optimizing efficiency or automating tasks, it may be pushing humanity toward greater maturity.
If the knife required responsibility and the pen required discernment, artificial intelligence may require something even deeper: a level of psychological maturity capable of holding complexity, uncertainty, and global consequence.
You can also read this article on Medium: https://medium.com/@eda.uzuncakara/is-ai-an-opportunity-to-get-wiser-6a40e50bb86a


