Beware the metaphor as a mental model

A few years ago I spoke to a founder working on generative machine learning. After the chat she sent me a follow-up, thanking me for the meeting, and said she particularly liked the "metaphors" I had used to talk about her segment.

I'm sure she meant it as a compliment, but the comment threw me. Had we not managed to have a real conversation about her work? Were my technical limitations starting to show (I don't have a CS degree, and the little programming I do could be generously described as "script kiddie" level)? What did she mean?

This was prior to my coaching days, so I didn't give her a call to ask, which would have been the obvious and straightforward thing to do. Instead I retreated - good introvert that I am - into dwelling on it. Which leads us to this blog post.

"Mental models" have become more popular recently because of a speech given by Charlie Munger in the 1990s. And perhaps also because the world is more connected and complex and we still have to live very human lives with very average intelligence dealing with a lot more ambiguity. That's before "The Merge" anyway, but I'm getting ahead of myself.

The concept of mental models has been around for a very, very long time. It's right there in Plato's cave, and it's rather the whole point of Kant: there is no direct access to things-in-themselves. Our perception and its concepts mediate between us and the world. You could argue that cave paintings are an abstraction, and that parietal artists knew they weren't making "real" buffalo on the wall, so the idea likely goes back as far as human history itself, or at least as far as the development of language.

Which leads me to my second point. Metaphors are a type of mental model. They're very good for communicating. They're powerful, memorable, evocative. All good things if you want a point to stick. Take the name "Big Bang" (a Fred Hoyle coinage). I wonder if "Primordial Singularity" would have had the same resonance.

But as mental models for critical thinking, metaphors can be sloppy. Their abstraction of the things-in-themselves relies on language supplying us with relevant images, which is a much worse way to go about building a model than starting from, say, the Pareto principle.
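To make the contrast concrete: a model like the Pareto principle makes a claim you can actually check against data, which a metaphor never does. A minimal, purely illustrative sketch (the sales figures below are made up):

```python
# Purely illustrative numbers: does the top ~20% of customers account
# for roughly 80% of revenue? A metaphor can't be checked like this;
# a model such as the Pareto principle can.
sales_by_customer = [6000, 3500, 1000, 500, 300, 250, 200, 150, 60, 40]

sales_sorted = sorted(sales_by_customer, reverse=True)
top_n = max(1, len(sales_sorted) // 5)  # top 20% of customers
top_share = sum(sales_sorted[:top_n]) / sum(sales_sorted)

print(f"Top {top_n} of {len(sales_sorted)} customers: {top_share:.0%} of revenue")
```

Whether or not the split lands exactly on 80/20, the model can be wrong in a checkable way; an evocative image can't.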

This, of course, leads to the imprecision of language and the realization that most of language is metaphor. But let's not go down that - wait for it - rabbit hole. Rather, I've found it helpful to note when I'm going down metaphor lane (sorry). It helps me question whether I'm taking a lazy shortcut (again, sorry) to abstraction, rather than doing it thoughtfully.