We are living through a curious moment. Not the moment everyone is talking about, where AI supposedly revolutionises everything overnight, but the quieter, stranger moment underneath, where millions of workers have already decided the future isn’t coming through the front door with official approval, so they are letting it in through the back.
The statistics are remarkable not for what they say about technology, but for what they reveal about us. Companies that purchased official LLM subscriptions are seeing little return, even as the majority of their employees regularly use personal AI tools for work. They’re doing it multiple times a day, and on weekends: the average knowledge worker now puts more corporate data into AI tools on a Saturday or Sunday than they did mid-week a year ago.
This isn’t just adoption; it’s a survival tactic for employees whose employers demand productivity without providing the tools.
It feels like an insurgency, technology as a latter-day fifth column.
The Ghost of the Hacker
Those of us who remember Pekka Himanen’s The Hacker Ethic will recognise something familiar in this pattern. The hacker embraces seven values: passion, freedom, social worth, openness, activity, caring, and creativity. The original hacker ethic emerged from abundance, MIT labs, Bell Labs, places where curious people had access to tools and time. It was characterised by the “Hands-On Imperative”: the belief that essential lessons can be learned about systems and the world by taking things apart, seeing how they work, and using this knowledge to create new and more interesting things.
Shadow AI workers exhibit this same hands-on imperative. They’re not waiting for permission. Shadow AI adoption is creating a new class of workers: AI-native professionals who think in terms of human-AI collaboration rather than human-only work, and they aren’t just using AI tools; they’re developing entirely new approaches to problem-solving, creativity, and productivity. Technology appears to be disintermediating management. They are, in the original sense of the word, hacking. Not breaking in, but making things work.
The ghost, though, is troubling. The original hacker ethic was deeply communal. Caring describes hackers’ attitude towards others, particularly the exploited and marginalised. Knowledge was meant to be shared freely. The point was collective elevation, not individual advantage.
Today’s Shadow AI economy has kept the hacker’s autonomy but lost the hacker’s ethics. It is individualised, competitive, and precarious. The data tells the story: work patterns are changing, and relationships are becoming more transactional. The young are being sacrificed at the altar of “book knowledge” that AI can replicate, while those with tacit knowledge, the kind built through years of practice, remain safe, for now.
But where is the caring in that? Where is the collective good?
What’s Been Lost
Perhaps we need to think about a hacker/artisan hybrid.
An artisan doesn’t just acquire skills, she develops a practice, a way of working that integrates tool, material, environment, and self into something coherent. The artisan knows that mastery isn’t about speed or efficiency. It’s about developing a relationship with the work itself.
The ability to learn new things quickly and effectively, unlearn old habits, and adapt to new tools and paradigms as they emerge is what defines artisanal practice. It’s what the medieval guilds understood. Apprenticeship wasn’t about memorising facts. It was about internalising a way of being with tools and materials until they became extensions of thought itself.
Today’s shadow AI insurgents are developing exactly this kind of practice, but they’re doing it in secret. Natural-language interfaces democratise creation: LLM tools accept plain language input and convert context-specific knowledge into functioning outputs without coding skills. Iterative re-prompting accelerates private learning while leaving no visible trace in organisational memory.
They are becoming AI artisans in the shadows.
The challenge is that this learning remains invisible. The knowledge isn’t being transmitted. The guild isn’t forming. Everyone is learning in isolation, developing their own techniques, protecting their own competitive advantage.
We’re producing a generation of artisans with no guild hall.
Meanwhile, the old pathways, stable entry-level jobs that taught you the craft, companies that invested in apprenticeships, career ladders you could climb, are being systematically dismantled. Not because AI is replacing jobs (the data doesn’t really support that yet), but because firms overhired in professional services post-pandemic and are now dealing with larger-than-needed workforces. Companies are choosing not to replace staff as positions open. They’re experimenting with “AI first.”
Despite $30 billion to $40 billion invested in gen-AI initiatives, only 5% of organisations are seeing transformative returns. The vast majority, 95%, report zero impact on profit and loss statements from formal AI investments.
So we have individuals figuring out AI faster than institutions, while institutions cut entry-level positions and invest billions in AI initiatives that don’t work. This is not a recipe for collective flourishing.
The hacker tradition gave us individual agency, hands-on learning, bottom-up innovation, rapid iteration, playful experimentation. All present in Shadow AI.
The artisan tradition gave us sustained practice, knowledge transmission, collective standards, recognition of mastery, social purpose. Almost entirely absent from Shadow AI.
What we’re left with is a degraded hybrid. Ruthless competition, not collaboration. Individual optimisation, not collective advancement. Not caring for those left behind.
The security people will tell you Shadow AI is a risk, and they have a point: companies with high levels of Shadow AI see significantly higher average breach costs. But the deeper risk isn’t technical. It’s social. It’s the risk that we’re building a world where individuals optimise frantically for personal survival while the collective capacity for knowledge transmission and genuine craft development atrophies.
The Athanor: A Different Kind of Heat
The questions we’re asking are all wrong.
We’re asking: How do I stay employable? How do I not get left behind? How do I prove my worth?
These are defensive questions. Survival questions. They’re not wrong, exactly, but they produce a particular kind of heat, an anxious, unsustainable heat of constant reaction.
We need a different heat entirely. We need the heat of the athanor.
The athanor is an alchemical furnace, yes, but a very particular kind of furnace. It was called Piger Henricus, Latin for “Slow Henry”, because it was chiefly used for slower operations, and because once filled with coals it would keep burning for a long time. The Greeks referred to it as “giving no trouble”, since it did not need to be continually attended.
This is not the heat of panic. This is the heat of transformation.
The athanor represents what we’ve lost in our frantic adaptation to AI: the capacity for sustained, patient, deep work. This is the kind of work that doesn’t show immediate results. The type of work that compounds over time. Work that transforms not just your output but your being.
One of the most important aspects of the athanor is its ability to maintain a constant temperature for long periods. This thermal constancy is crucial for the success of certain alchemical operations.
What would it mean to maintain constant heat, not burning yourself out, not letting the fire die, for an entire year?
The Shadow AI economy shows us that workers have retained the hacker’s skills but lost the hacker’s ethics. They’ve also lost the artisan’s sensibilities. We need to regain them. Not as nostalgia for the medieval past. Not as romantic withdrawal from technology. But as a conscious choice to create structures for knowledge transmission, collective learning, mutual support, and the development of genuine craft.
Here’s the crucial insight: you can be a shadow AI user and an artisan. In fact, that’s the whole point. The athanor doesn’t replace your current practice, it transforms how you hold it. You do what you must, you work inside the Machine when you need to, but you make it optional for yourself by building something else alongside it. A barbell strategy: robust base on one end, asymmetric bets on the other.
Developing Hybrid Practice
Over the next twelve months, those of us choosing transformation over adaptation will be working with different questions:
Not: “What skills do employers want?” but “What craft do I want to develop?”
Not: “How do I stay competitive?” but “What knowledge do I want to transmit?”
Not: “How do I build a personal brand?” but “What community of practice do I belong to?”
Not: “How do I use AI to work faster?” but “How do I use AI to work more deeply?”
This is harder than learning ChatGPT prompts. It’s also more necessary. What’s at stake isn’t just individual employability. It’s whether we can build a world where work develops people rather than depleting them, where collective knowledge grows rather than fragments into isolated pools of competitive advantage.
The Practice of the Athanor
The athanor isn’t about intensity. It’s about constancy. Here’s what I think the work looks like:
Choosing your craft. Not “what AI skills to learn” but what practice you want to develop over a lifetime. What’s worth doing slowly? This isn’t about picking a niche or finding your personal brand. It’s about identifying the work that calls you, the work that deserves patient attention, the work that will still matter in twenty years.
Finding your community. Not your network, your community of practice. Who else is doing this work? How do you make knowledge visible and shareable? I know someone building a collective of AI-literate writers who meet weekly to share techniques, not to compete but to elevate the craft together. Another runs monthly workshops where developers teach non-coders how to think algorithmically. These aren’t networking events. They’re guild halls.
Maintaining constant heat. Not burning out, not coasting. What does steady, sustainable practice look like for you? The athanor burns for months because it’s tended properly. You can’t maintain that kind of constancy through willpower alone. You need structure. You need ritual. You need accountability.
Transmitting knowledge. What are you learning that’s worth teaching? Who needs to learn it? How do you document the craft? Make your learning visible. Write it down. Show your workings. Mentor deliberately. The knowledge you hoard today is the knowledge that dies tomorrow.
Caring for the margins. Who’s being left behind in this transition? How do you extend the craft to them? Those 22 to 25-year-olds seeing their employment prospects shrink, they’re not your competition. They’re your apprentices. The question isn’t whether they’ll learn AI, it’s whether they’ll learn it as isolated competitors or as practitioners in a guild.
The hackers gave us autonomy. The artisans gave us craft. The athanor gives us time.
The Shadow AI economy is revealing what happens when we keep the autonomy but lose the craft and refuse the time. We get frantic, isolated, precarious optimisation dressed up as liberation.
We can do better. But it requires choosing transformation over adaptation. Practice over panic. Guild over grind.
What Happens Next
Tomorrow, in The Athanor, I’ll share my thoughts on the first twelve months: how the practices connect, what the rhythms look like, how the community forms.
In November, the work begins. We may not feel ready, and that’s OK.
What craft are you choosing? Write it down. One sentence. That’s your raw material.
The athanor needs it.
The plan I will share is a draft. It will only be complete when those who choose to be part of it say so…
Sources
Shadow AI & Workplace Technology
Gartner Research: “Shadow AI in the Enterprise” (2024) - https://www.gartner.com/en/newsroom/press-releases
Cisco Cybersecurity Report on Shadow AI adoption rates - https://www.cisco.com/c/en/us/products/security/cybersecurity-reports.html
IBM Cost of a Data Breach Report 2024 - https://www.ibm.com/security/data-breach
Anthropic/McKinsey Research on GenAI ROI - search: “generative AI enterprise adoption McKinsey”
The Hacker Ethic
Himanen, Pekka. The Hacker Ethic: A Radical Approach to the Philosophy of Business (2001)
The Jargon File: Hacker Culture & History - http://www.catb.org/jargon/html/
Raymond, Eric S. “How To Become A Hacker” - http://www.catb.org/~esr/faqs/hacker-howto.html
Great guiding questions you are posing! I fully agree.
On people having retained hacker skills but lost hacker ethics: is it more like regaining hacker skills? The original hackers used the openings in the then-new, immature technologies to exploit them, some ethically and some unethically. The majority of people at that time stayed in their lanes. Then the technology matured and solidified, which made everything more organised and sophisticated, including finding the cracks that could be used for hacking.
There was another wave of that: the makers, who leveraged the new tools for making physical objects (3D printing and the like) to innovate and incubate different ways of thinking. And again there was “ethical abuse”, like printing out-of-production spare parts, and outright abuse, like 3D-printing guns.
Now, with AI, we have a new immature technology, with easy-to-find cracks. Again, abuse as well as ethical repurposing are becoming possible for the individual. Again, most people will stay in their lanes, and people like Sam Altman with OpenAI will try to incentivise them to do so (ChatGPT will have an erotic service now? After introducing shopping a few weeks ago, and apps inside ChatGPT… come on!). A minority will incubate new thinking.
A great parallel, though, and a great idea to tap into the original hacker ethics.
What I also wonder, and may explore: are these the only examples, or are there older ones? Was the printing press hacked? Was it open source to start with? Or was the printing press itself a hack? Does hacking need IP, because messing with what has been declared the “intellectual property” of others is an integral part of it?