I’ve always loved the way ideas bump into each other and ask questions of us. Over the weekend, I was reading “A Thousand Brains: A New Theory of Intelligence” by Jeff Hawkins. I like its honesty in that it proposes a radical idea without asserting that it has proof - compelling evidence, maybe, but not yet proof. Its main assertion is that our neocortex comprises around 150,000 cortical columns that all work in much the same way, attending to different areas - each of the senses, and then areas like thought. He proposes that our brains are prediction engines, using and updating “frames of reference” to work out what we see now and what comes next, based on what we have experienced. I’m interested because of the questions it asks about uncertainty: what do we do when we can’t find a frame of reference that works?
It makes me think about AI, whose frame of reference is currently based on “most of” rather than “best of” (it does not yet judge) and which, in order to function, has to find a frame of reference it can use to process what it encounters. We do that too, as humans, when under pressure to “perform”, but we can also pause and reflect for a while to find a new, perhaps speculative, frame of reference to experiment with, using our imagination in conversation with others. (See Ed Brenegar’s post today, a conversation with Robert Poynton, about conversation.)
There’s another thought that occupies me. The large language models that drive AI take much of their data from the Web to feed the Web, so we end up with a form of ouroboros, a snake eating its own tail, or perhaps a form of digital interbreeding to give us digital idiots, brought up on a force-fed diet of average.
The biggest risk, of course, is not AI itself at this point; it is the rapid, mimetic adoption by those, whether companies deploying it or people using it to post articles on social media or answer exams, who fail to consider the second-order consequences of their intellectual laziness.
The curiosity and purpose of the artisan mind have rarely been more important.
Interesting that you see the threat as mimetic. I agree with you. The problem is that our mimetic desire is not for greatness, or to be the best, but to be just good enough to beat the one I want to be better than. Mimetic desire in its worst form doesn’t create creative conflict, the friction of innovation and entrepreneurship, but destructive conflict and the laziness that lowers the bar of achievement to that of marginal benefit.