In Donella Meadows’ classic, brilliantly simple “places to intervene in a system,” she lists the second-best place as “the mindset or paradigm out of which the system — its goals, structure, rules, delays, parameters — arises” and the first as “the power to transcend paradigms.” It’s been on my mind as I’ve been getting deliberately lost in some of the discussions on Artificial Intelligence.
My friend Steve Done tells a story about riding out on the road with a friend whose horse was being spooked by the sound of a squeaky fan belt in an old van following them. The horse became increasingly unsettled until they turned it to face the van and the source of the noise. As soon as it identified the nature of the source, it calmed down.
That mirrors my (and I suspect many people’s) relationship with AI. So, I have been turning to face the source of the noise. I’ve done two things: first, I took out a subscription to OpenAI’s ChatGPT-4 (it offers a lot more than the free 3.5), and second, I started conversations with people who are much more familiar with the AI landscape than I am.
It’s early days yet, but I have learned two significant lessons from the encounter. The first is from my active engagement with it. As is my habit, I wanted to find out where its edges are. I selected what I thought of as relative polarities: psychotherapy and culture, poetry and strategy, using texts I am very familiar with.
First, I asked it to tell me the lessons we can learn about corporate culture from Gabor Maté’s “In the Realm of Hungry Ghosts”, his insightful book on addiction.
Here’s an extract:
Maté’s discussion about the roots of addiction in emotional pain and trauma can be translated into the corporate world, where high-stress environments, fierce competition, and intense pressure to perform can lead to harmful behaviors. Workaholism, the compulsion to overwork as a way to escape discomfort or to fulfill unmet emotional needs, parallels substance addiction in many ways. Both are used to manage stress and emotional pain, often at great personal cost.
Second, I asked it to tell me about the lessons we can learn about corporate strategy from Loris Malaguzzi’s poem “The Hundred Languages of Children” (an iconic piece on early years education pedagogy and a personal favourite).
Again, an extract (points 1-4 of 8):
Value Diverse Perspectives: Malaguzzi's poem celebrates the multitude of ways children express themselves and understand the world. In a corporate setting, this translates to valuing diverse perspectives and skills within a team. Recognizing that each employee brings a unique viewpoint can lead to more innovative solutions and a more inclusive company culture.
Encourage Creativity and Innovation: Just as the poem advocates for the creative potential in every child, businesses can foster an environment where creativity and innovation are at the forefront. Encouraging employees to think outside the box and challenge the status quo can lead to breakthrough products and solutions.
Adaptability and Learning: The poem suggests that children are adaptable, learning through different languages and modalities. Similarly, companies must remain flexible, continuously learning and adapting to changing market conditions and consumer needs. This involves not only leveraging new technologies and processes but also constantly refining strategic goals.
Integrated Communication: The concept of 'hundred languages' can be likened to the myriad ways businesses communicate—internally and externally. Effective communication strategies that integrate various modes (visual, verbal, digital, etc.) can enhance understanding and engagement with stakeholders, from employees to customers.
So far, so interesting. Then I started talking with those who are properly immersed in evaluating AI, as opposed to my paddling in the shallows. The insights that have emerged, so far, are for me profound (even if, on reflection, probably obvious).
Artificial Intelligence is not some ubiquitous “thing.” There are as many AIs as there are originators (and we can each customise, quite easily, our own), and each reflects the mindset, paradigms, biases, and intent of its originator. They may share a similar architecture in large language models, but then again, we share 60% of our DNA with bananas. Ask the same question of different AIs and, although they may start with similar responses, the more we refine and interrogate the question, the more divergent the responses become.
Here’s ChatGPT’s take:
In the tapestry of life’s grand plan,
Lies the humble curve of a banana clan,
Beneath peels, like clothes we shed,
Lies softness, not so different said.
We, like them, ripen with time’s kiss,
From green to gold, a fleeting bliss,
Each stage a phase, a new demeanor,
Softening hearts, as we grow keener.
Clustered close on life’s broad stem,
Family ties, our hidden gem,
In bunches we find strength, united,
Through storms and winds, together righted.
And should we fall from our collective tree,
We face the world, boldly, individually,
Yet underneath what eyes can see,
The same sweet pulse beats you, beats me.
So peel back layers and you will find,
In each of us, a kindred mind,
Humans and bananas, in essence akin,
A mirrored journey lies within.
An Emergent Craft?
I have a long way to go with this over the coming months, but already it is clear that AI is both very capable and potentially duplicitous.
First, if we have a good grasp of the subject we are tasking it with, it is a valuable assistant, and it challenges us to raise the stakes of our human creativity by starting where it leaves off. If we don’t have a good grasp, on the other hand, we are in a dangerous place, easily deceived by apparently fluent answers and finding ourselves in the hands of a sort of Digital Boris.
Secondly, there are many AIs, and we’re only just starting. Understanding that, and being able to navigate the landscape they create, will be valuable.
Thirdly, we talk about “prompt engineers”. I suspect there is far more to it than that.
For those willing to do the work, a craft is emerging where art and science meet. Prompt engineers are tools for AI, but those with a craft approach use AI as a tool.
My instinct is to be enthused. It comes with caveats, of which more later.
I am grateful for the edge work you are doing with AI. Through you, it confirms what I surmised: that AI represents the character of its creator. If we take the creator stories of the Bible (there are two in the early chapters of Genesis), we see something similar. We see a creator create man and woman in his image. There is a God-mark on them, we could say. There is also a boundary set for their relationship. That boundary is not eating from the Tree of the Knowledge of Good and Evil. Of course, they do eat, because as creatures made in the image of their creator, they want to be like him. Not just like him, but greater than him, so that God the Creator, in this instance, no longer matters.
This is one of the concerns about AI: that the creation will cast out the creator and become a god unto itself. We see that in leaders who openly advocate for and practice genocide. They have decided who is worthy to exist and who is not. They do not make this decision on an individual basis, but on a categorical basis. Certain categories are unworthy to remain members of the global human community.
This is a human dilemma. We are at the center of this historic drama. AI is just the latest system to confront us with our need to be as gods. I am not pessimistic about the future, but realistic. If we look at the full breadth of history over time, tyrannies are impossible to sustain. We are presently witnessing the end of one that has sought to control all aspects of life on our planet for a century and a half. They see themselves as gods, and create the tools that gods need. The reality is that what is sustainable is human community where our relationships of mutuality, respect, trust, and openness make it possible to manage change over time.
I suspect that at some time in the near future, we will begin to hear of conflict between different AI systems as they vie for control. When we do, let us remember that at the heart of this technology are human creators whose intentions are never fully expressed in their public statements. What then do we do? We talk and share about what we want for our lives and the future of our world. We establish boundaries for our own participation in these technologies. We don't give ourselves wholesale to a new system of control. We learn together and collaborate to create the new human systems that can sustain us through these times of innovation and change.
Yes, but what happens when other people’s posts on imposter syndrome are better than mine? 😂😂