
There’s an interesting article in this week’s Economist concerning the new role of “prompt engineering”: the task of being an AI “handler” and ensuring it gives us the responses we need. It suggests that…
“…ideally, the prompt should coax the model into complex reasoning: telling it to “think step by step” often sharply improves results. So does breaking instructions down into a logical progression of separate tasks. To prompt a clear explanation of a scientific concept, for example, you might ask an AI to explain it and then to define important terms used in its explanation. This “chain of thought” technique can also reveal a bit about what is going on inside the model.”
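To make that concrete, here is a minimal sketch of the two-step prompting the article describes, written against the OpenAI Python SDK. The model name, topic and prompt wording are my own illustrative assumptions, not anything the article prescribes:

```python
# A minimal sketch of "chain of thought" prompting: ask for a step-by-step
# explanation, then set a second, logically subsequent task (defining the
# terms the model itself used). Model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
MODEL = "gpt-4o"   # any chat-capable model would do

QUESTION = "Explain how photosynthesis works. Think step by step."

# Step 1: coax the model into explicit step-by-step reasoning.
first = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": QUESTION}],
)
explanation = first.choices[0].message.content

# Step 2: feed the model's own answer back as context and ask it to
# define the important terms it introduced.
second = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "user", "content": QUESTION},
        {"role": "assistant", "content": explanation},
        {"role": "user",
         "content": "Now define each important term used in your explanation."},
    ],
)

print(explanation)
print(second.choices[0].message.content)
```

The second call is the “logical progression of separate tasks” the article recommends: rather than one monolithic request, the model’s first answer becomes the briefing for the next.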
If your first reaction is that we may have to brief AI rather better than we brief many employees, you’re not alone. That, though, was not what struck me. What did was something I’ve come to think of as “algorithmic compounding”.
When it comes to data gathering, AI is faster than we are and has vastly more capacity. The benefit, though, comes from how we process it. In workplaces that prioritise efficiency, productivity and risk aversion, leadership and management training makes heavy use of algorithmic thinking: mental models, processes and other heuristics that categorise incoming information and decide how to act on it. In many ways, we become meat-based algorithms, shaving off outliers and anomalies so that our business models run smoothly and our performance is deemed satisfactory. The danger is that we end up with various shades of the culture we see in the Post Office today, or the banks in 2008.
I wonder what happens when those cultures find space for prompt engineers who serve the AI, which then applies its own “mental models”, heuristics and anomaly removal to ensure our business models run even more smoothly, regardless of how flawed they might be.
The more I think about AI, the more I advocate for its potential for good. What frightens me is management. When AI has so much potential, the idea of managing it rather than harnessing it seems full of danger.
Here’s what ChatGPT had to say on the matter:
Traditional bureaucratic, hierarchical companies face several cultural challenges in making the most effective use of artificial intelligence (AI). These challenges include:
Resistance to Change: Implementing AI technologies often requires significant changes to existing processes and workflows. Bureaucratic cultures may resist these changes due to entrenched norms and a preference for traditional methods.
Lack of Innovation Mindset: Bureaucratic cultures may stifle innovation and risk-taking, which are essential for effectively leveraging AI. Employees may be hesitant to explore new technologies or experiment with AI-driven solutions due to fear of failure or uncertainty about their roles.
Data Silos and Accessibility Issues: Effective AI implementation relies on access to high-quality data from across the organization. Bureaucratic structures may lead to data silos, where information is hoarded within specific departments or teams, making it difficult to access and integrate data for AI applications.
Limited Collaboration and Communication: Bureaucratic hierarchies can hinder collaboration and communication among different departments and teams. Effective AI implementation often requires cross-functional collaboration and knowledge sharing, which may be challenging in organizations with rigid structures and communication barriers.
Skills Gap and Training Needs: Successfully implementing AI technologies requires employees with the necessary skills and expertise in data analysis, machine learning, and AI development. Bureaucratic companies may struggle to attract and retain talent with these specialized skills, and they may also face challenges in providing adequate training and upskilling opportunities for existing employees.
Ethical and Regulatory Concerns: AI technologies raise complex ethical and regulatory concerns, including issues related to data privacy, algorithmic bias, and accountability. Bureaucratic organizations may struggle to navigate these challenges and establish robust governance frameworks to ensure responsible and ethical use of AI.
Lack of Leadership Support and Vision: Successful AI implementation requires strong leadership support and a clear strategic vision for how AI can drive value for the organization. Bureaucratic leaders may be resistant to change or may lack the vision and expertise needed to effectively lead AI initiatives.
To address these cultural challenges, traditional bureaucratic hierarchical companies may need to undergo cultural transformation initiatives that foster a more innovation-friendly environment, promote collaboration and knowledge sharing, invest in employee training and development, establish clear governance frameworks for AI implementation, and cultivate leadership support and vision for AI initiatives. Additionally, organizations may benefit from fostering a culture of experimentation and learning, where employees are encouraged to explore new technologies and approaches in a supportive and inclusive environment.
Given the history of “cultural transformation exercises” and putting them alongside the speed of AI development, it seems likely we are going to face a few challenges.
We do not know what the impact of AI will be. I suspect, though, that being a manager brought up in the ways of efficiency and productivity will be a high-risk occupation. The benefit of AI seems likely to be in the options it gives us, whilst the biggest risk is assuming it will do as we ask. When faced with this degree of uncertainty, many will, of course, just become ever more confident in their actions and, to go back to what we are learning from the Post Office, avoid the issue altogether.
We have several generations of management brought up with an aleatoric approach to uncertainty (from the Latin alea, a dice game): the belief that we can forecast and calculate the probabilities of events. They are poorly placed to deal with the epistemic uncertainty we face today, which stems from a lack of knowledge, where “unknown unknowns” suddenly become known, overwhelming any idea of robustness and instead demanding the sort of resilience for which large organisations are inherently unsuited.
Which brings me to Artisans. We are natural “prompt engineers”. Whether it is a piece of wood, a set of accounts or a start-up, an artisan’s first tool of choice is curiosity, not process. It is about observation and orientation, developing a relationship and sensing what is there beyond recordable data. Artisans are almost the polar opposite of the traditional manager; they are the equivalent of trespassers, poachers, heretics and others who question, challenge and exploit the systemic vulnerabilities of algorithmic certainty.
I think we need them, placed along the changing supply and decision-making chains, offering provocations and inconvenient truths that give early warning to those who would otherwise remain wilfully blind to emerging reality. Latter-day Fools at Court, speaking truth to power.
Maybe that is where today’s artisans sit: between management’s short-term focus on efficiency and leadership’s longer-term purpose and vision. I suspect that will be a bridge too far for many organisations, and terminal change will come as a surprise. It seems a shame, but…
So what about us? What do we do, where we are, with what we’ve got in order to be best positioned as these changes occur?
Three things, I think:
We should have a focus, a skill, a craft that we practise. It should be something tangible, not “leadership” or “management”, which are temporary, transient, context-dependent skills, and something we can practise daily.
An evidence base of that practice and references that support that focus.
An active practice working towards mastery of that focus, a demonstration of commitment and curiosity that goes far beyond what we might get paid for.
For my own part, that focus is on the nature of uncertainty, from philosophy to practice. It is about recognising we cannot use history to answer current uncertainty; there is no algorithm, and it requires every facet of who we are.
We can only meet it where we find it. That means being prepared before we know what it is, being curious about what we see and harnessing our uniqueness more than our conformity as we prepare to tackle it.
Being prepared to be a Fool at Court.
How we do that is my artisanal focus.
What is yours?