The Importance of Residue
What's left when the role description is reduced to its components?
In my previous post, I questioned the effect of organisations setting strategies within an invisible doctrine. We talk about strategy, its creativity and potential, but not about the invisible doctrine that creates a boundary between the organisation and its broader environment: the places we find scary or unacceptable, and which might affect the future in ways that would threaten the reliability of short-term, regular returns.
“If deep knowledge of changing landscapes cannot be acquired inside organisations constrained by invisible doctrine, how are we to develop it? And where are the spaces in which this learning can take place?”
There is a gap between what we know and how we use it that cannot be replicated by AI. In search of efficiency and productivity, our organisational instinct is to eliminate that gap in order to create uniformity. I think that’s a dangerous place to be.
When we find ourselves inside the invisible walls of unstated doctrine, our ability to learn what’s going on outside them is limited. Whilst shareholders can change their commercial environment with a click of a mouse, those working inside the organisation cannot.
They are constrained by practical considerations, including mortgages and the need for a regular income, as well as by non-compete and confidentiality agreements that make relocation difficult. In many areas, there’s a further complication: they work with proven legacy technologies that can be made efficient and productive, even as new technologies emerge outside the walls that may replace them, but that they don’t have time to learn.
As they do the work of delivering on KPIs and OKRs using the technology that they’re being told to use, they can find themselves, in remarkably short order, perfectly prepared for a world that no longer exists outside the walls of their organisation. Businesses that not long ago offered security and career development are now becoming liabilities, as they mine the seam they’re working to exhaustion, with no view of the longer term.
The shareholders may be able to move on to a new scene, but it’s much more difficult for us. As AI combines the depth and breadth of the roles that may be subverted, at the same time as it offers easy access to the knowledge of the world, it is easy to become, or at least feel, stranded.
That, though, I think, is to fall victim to wilful blindness: to view the world of work through the same lens as those who would organise it, treating technology as a blunt instrument and failing to recognise the nuances of the work that matters.
Technology may be effective at doing the work we already know how to do. It will not only help us but may eventually replace us in performing the calculations needed to make the best bets we can with what we have. It’s equally effective at accessing knowledge that we lack but need.
But it starts to run out of steam where judgment is required. Technology may be great for gathering information together and helping us to take better calculated risks, but when we get to other forms of uncertainty, the radical (the world of black swans) and the dynamic (the risks that emerge as the system responds to what we do), it’s of far less use.
These are the areas where there is no data. It’s the world of judgment, imagination, and spotting the emergent, and, like many of us, I find that capturing it and bringing it into process is more than difficult.
As an experiment, I took the line from my last New Artisans blog, shared at the top of this post, printed it out, and stuck it on my wall. Every time I walked past it, I’d put a pencilled thought on it, then, after a week, put the phrase and all my pencilled thoughts into Claude, and asked it to give me frameworks for thinking about it.
And there it was, James C. Scott and his book “Seeing Like a State”.
At the heart of the book is the idea that, to function, the state needs to make society legible: to arrange the population in ways that simplify the classic state functions of taxation, conscription, and the prevention of rebellion, as well as to align these with the interests of the economy. He, of course, was talking about the state, but a few years later, it applies equally to Google, Meta, Amazon, and the rest.
In analysing why, over the centuries, these attempts have not worked for more than short periods, he identified one key dynamic: mētis.
Mētis: the concept of practical, experiential knowledge that emerges from intimate engagement with specific environments. It offers a powerful lens for understanding what organisations systematically exclude. This is knowledge that cannot be captured in best practices or standard operating procedures. It’s the farmer who reads subtle changes in wind patterns, the craftsperson whose hands know when the wood is ready, the community organiser who senses when a moment is ripe for action. (“Seeing Like a State”)
What is your mētis?
Imagine for a moment that somebody has taken your job description or even your own view of what you do and written it down at every last level of detail they can imagine. Then imagine that they take that description and place it in a digital Petri dish, and heat it until every element in there has been burned off. The residue, the matter it cannot process, is what is really valuable about you and the way you see the world.
If, on the other hand, there’s nothing left, then AI can do your job. Not today, perhaps, but probably before long.
My search for a place to bring the idea of mētis into the work we do here took me back to the work of David Graeber, the anthropologist and author of “Bullshit Jobs,” whose work I really valued, and Larry Greiner, who showed that organisations grow through a series of predictable phases, each marked by a dominant management approach that eventually creates its own crisis.
Greiner’s central insight is that the structures, roles, and processes that enable success at one stage will, if left unchanged, become the very constraints that organisations must overcome in the next stage.
Graeber approached work from the standpoint of meaning, legitimacy, and lived experience, asking why so many people find themselves in roles they privately believe should not exist. Greiner, by contrast, examined organisations from the inside out, mapping how growth produces predictable crises and how roles thicken with layers of process, coordination, and managerial control as a rational response to scale.
It made me think that perhaps mētis is the difference between meaningful work and bullshit work.
Read together, their work traces a quiet arc. Greiner helps explain how such roles come into being, while Graeber shows how, over time, they can lose their social purpose while remaining structurally entrenched.
If we view AI from that perspective, it appears less as an external disruptor and more as a revealing agent, stripping away ritualised busyness, proxy activity, and organisational theatre.
What remains, if anything does, is judgment exercised in context, accountability that cannot be delegated, and trust built through relationships over time. Yet this opens a deeper line of questioning. If meaning only survives in the residue, what does that say about the way we have chosen to organise work in the first place? Is value something that can be designed into roles, or does it only emerge through lived responsibility and human presence? And if organisations increasingly prefer scalable approximation to situated judgment, are we witnessing a technological shift, or a philosophical choice about what kinds of work, and indeed what kinds of people, we are prepared to value?
What “Deep Knowledge” matters to us?
To go back to my original query:
“If deep knowledge of changing landscapes cannot be acquired inside organisations constrained by invisible doctrine, how are we to develop it? And where are the spaces in which this learning can take place?”
The unsettling thing about AI isn’t that it knows so much, but that it reveals the difference between processing and transformation. AI operates like an industrial refinery, efficiently recombining the vast archaeological record of human thought, but it cannot do what happens when this knowledge meets us as humans.
When knowledge enters us, we don’t just recombine; we transform it through embodied experience, through presence, through the weight of knowing there will be consequences.
I think it goes beyond the immediate examples of the nurse who senses infection before any test shows it, the craftsperson who knows wood through touch, and the teacher who feels the moment before a classroom tips into chaos, to something deeper. As is so often the case, when we’re ready to hear something, it appears. I was on a call yesterday morning with someone who had a bottle of a well-known whisky in front of them and asked what it was about it that didn’t resonate.
We discussed several areas. It appears to have had its cultural heritage quietly sanitised in favour of inoffensive design. The pack doesn’t give an indication of the age of the whisky, which, for those of us who like whisky, is really quite important. A brief exploration uncovers a deliberate choice rather than an omission. In Scotch whisky, any age statement must legally reflect the youngest spirit in the bottle, not an average or a promise of maturity. In this case, the whisky is built by marrying spirit drawn from different cask types and of different ages to achieve a particular flavour profile. By omitting the age statement, the focus shifts from time spent in wood to the brand’s heritage, while still meeting the legal requirement that all Scotch be matured for at least three years.
It’s a perfectly valid marketing strategy, but it seemed a powerful metaphor for the way that many people and companies are using AI, looking for speed and time to market rather than crafting content.
The McDonaldisation of a heritage product. Mediocrity at pace.
I think that when it comes to doing the work that matters, we are the wood in which the product is matured. Polanyi called it personal knowledge, passionate participation, where we don’t just use tools but become them. We develop, and want, skin in the game, as Taleb would say, while AI remains forever skinless. It cannot embody what it “knows”.
It processes; we become.
It raises uncomfortable questions: What knowledge really matters that can’t be scaled or transmitted digitally? How do we value the trembling heart before the crucial decision, when that trembling is part of what makes knowledge deep rather than merely comprehensive? And if transformation requires embodiment and presence, what does this mean for how we develop our craft in an increasingly mediated world?
In my Outside the Walls Reflections on Sunday, I argued that what we’re experiencing is a phase change that cannot be reversed.
I think we need to regard this not as some additional increment in a linear form of change, but as something altogether different; a phase change. As energy increases and the waters boil, the relationships among us, our organisations, our communities, and our economies change fundamentally. Some phase changes are reversible, for instance, changing water to ice and back. Others are not; we can’t uncook an egg.
The value we bring as New Artisans is the ability to find novel ways of thinking about a problem, rather than merely solving it using existing models. AI can do existing models, but it can’t do the alchemy that we talk about at The Athanor.
If we want to turn the base materials of AI into gold, mētis is a vital element.
We are valuable not so much for what we know as for how we use and experiment with what we discover together.
We should not be afraid of AI; we should welcome it, but we should also recognise its boundaries. That can be quite challenging in a world in which we have been raised, educated, and given gold stars for what we know, even as those gold stars become obsolete.
We’re in a different game.
The question isn’t whether AI will replace us; it’s whether we’ll remember what makes us irreplaceable. Not our ability to process information or follow procedures, but our capacity to transform through encounter, to know through presence, to judge through consequence.
Take your own work and subject it to the Petri dish test. Heat away everything that can be documented, proceduralised, or scaled. What remains? That residue, perhaps barely visible, certainly unscalable, definitely unmarketable, is your mētis. It’s what emerges when knowledge meets flesh, when information encounters mortality, when data confronts care.
This isn’t about protecting old ways of working or resisting new tools. It’s about recognising that in the rush to make everything legible to machines, we risk losing precisely what cannot be made legible: the trembling before decision, the intuition before analysis, the knowing that comes only from having skin in the game.
This phase change is irreversible. We cannot uncook this egg. But perhaps, in small rooms and patient conversations, in the space between private knowledge and shared understanding, we can remember how to be alchemists rather than processors. How to transform rather than merely recombine. And mētis, unlike data, multiplies rather than scales when practised together in small, patient groups.
What is your mētis? And more importantly, where will you practise it?
We will be meeting on Zoom this evening at 5:00pm UK. You are more than welcome to join us.
New Artisans is one of three blogs I write exploring the changing world of work, craft and community. You will find the others here:
A further thought, triggered by a comment from a friend this morning, on how Robert Pirsig’s work on Quality maps onto this.
Pirsig’s notion of Quality, that immediate, pre-intellectual recognition that something is right or wrong before we can explain why, maps directly onto mētis. He distinguished between “classical” understanding (specifications, procedures, the measurable) and “romantic” understanding (the felt, immediate experience), a divide that mirrors precisely what AI can and cannot do. AI masters the classical; it can process every specification of whisky-making, every documented procedure, but it cannot access Quality, that moment of knowing that this particular whisky has been rushed, that something essential has been sacrificed for speed.

Pirsig’s mechanic who truly cares, versus the spectator who merely follows procedures, parallels the distinction between alchemist and processor. Both Quality and mētis emerge only through patient, caring engagement. It is what Pirsig called “peace of mind” through attention, what we might recognise as becoming the wood in which knowledge matures. His “gumption traps”, those moments when enthusiasm drains because we’re following procedures rather than engaging with Quality, describe what happens to people trapped inside invisible doctrine, perfectly prepared for a world that no longer exists because they’ve lost touch with what actually matters.
The McDonaldisation of a heritage product is, at heart, the sacrifice of Quality for classical specifications, the confusion of meeting requirements with creating something good.
I have good friends :-)