Mental models are the internal theories that we use to make sense of the world and understand our experiences. We construct mental models intuitively, without realizing that we’re even doing it. When I take two steaks out of the freezer and place them in the refrigerator to defrost overnight, I expect to find two steaks waiting for me in the morning. If I woke up to find three steaks in the refrigerator, I’d be startled and I’d suspect that someone had been in my kitchen without my knowledge. I would not think that a third steak had materialized out of thin air. That’s because one of my most trusted mental models tells me that objects do not appear or disappear by themselves.
I would also expect to find the steaks soft and wet on the outside but still frozen in the center. I’d be surprised and annoyed if they were rock hard or soft all the way through. While some of my mental models about thawing food may draw from my study of thermodynamics as a chemical engineer, most are constructed from years of personal experience in the kitchen. I know that thick steaks need more time to thaw than thin steaks, and items thaw faster in the summer unless I turn down the little dial in the refrigerator.
I also have mental models that tell me where to place food in my refrigerator. From watching cooking shows, I know not to place cooked food next to raw food. I’m not sure why that’s so bad, but I have a few guesses based on my mental models for microorganisms. I typically place my thawing meat on the bottom shelf in my refrigerator. Why? I’m nervous about yucky meat juices dripping down on my fresh produce. While I know that this mental model is naive (it’s not based on any data, and potential drops would be caught in the tray I use to defrost meat), I have to place those frozen steaks somewhere, and I have no other theory to fall back on.
Our brains have evolved to detect patterns and construct mental models because we need to make thousands of decisions based on limited information every day. The prehistoric human who could predict the location of game animals, the onset of winter, or whether a stranger was friend or foe had a much better chance of survival. Even if a model is founded on little more than folk wisdom and superstition—instead of careful observations and well-reasoned theories—we generally prefer to base our decision-making on something rather than nothing. The need to detect patterns is hard-wired, as is our need to explain and make sense of those patterns.
Recently, I’ve been doing a lot of writing about mental models, and—in the process—my brain has detected a curious pattern: I tend to use the same set of adjectives over and over again to characterize them. In one context, I might describe a mental model as sophisticated and grounded; in another, as grounded and robust; and in a third, as becoming more sophisticated and robust. I’m not entirely sure how I choose which adjectives from the set to use in which contexts, but there appears to be some internal logic to it. My brain wanted to investigate further, so I’ve begun to transition from a makeshift set of adjectives that I use frequently to an organized classification system.
I characterize a mental model as well-tested when it has been tested many times. A mental model is robust when it is well-tested and accurate. A mental model is sophisticated when it accounts for multiple variables and has enough nuance to predict different outcomes in different situations. A mental model is intuitive when it’s been used so often that we can apply it reflexively, without having to think about it. A mental model makes sense when it explains a pattern instead of merely describing it. A mental model is grounded when that explanation is constructed on top of underlying models that are robust. A mental model is concrete when those underlying models are also intuitive.
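Because the later adjectives build on the earlier ones (grounded presupposes making sense; concrete presupposes grounded), the classification can be sketched as a small data structure. This is purely an illustrative model of the definitions above: the field names, thresholds, and counts are my own assumptions, not part of the classification itself.

```python
from dataclasses import dataclass, field

@dataclass
class MentalModel:
    name: str
    times_tested: int = 0            # assumed proxy for "tested many times"
    accurate: bool = False
    variables: int = 1               # how many variables the model accounts for
    applied_reflexively: bool = False
    explains_pattern: bool = False   # explains (vs. merely describes) a pattern
    underlying: list["MentalModel"] = field(default_factory=list)

    @property
    def well_tested(self) -> bool:
        return self.times_tested >= 10   # arbitrary threshold, for illustration

    @property
    def robust(self) -> bool:
        # robust = well-tested and accurate
        return self.well_tested and self.accurate

    @property
    def sophisticated(self) -> bool:
        # accounts for multiple variables
        return self.variables > 1

    @property
    def intuitive(self) -> bool:
        return self.applied_reflexively

    @property
    def makes_sense(self) -> bool:
        return self.explains_pattern

    @property
    def grounded(self) -> bool:
        # grounded = the explanation rests on robust underlying models
        return (self.makes_sense
                and bool(self.underlying)
                and all(m.robust for m in self.underlying))

    @property
    def concrete(self) -> bool:
        # concrete = grounded, and the underlying models are also intuitive
        return self.grounded and all(m.intuitive for m in self.underlying)
```

For example, a thawing model that makes sense and rests on a robust, intuitive model of object permanence would count as both grounded and concrete under these definitions.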
I’m still not sure about my classification system. The adjectives in it don’t quite match up with how I’ve been using those adjectives in my writing, which tells me that either the classification system is imprecise or the way I’ve been writing about mental models has been. I feel like my brain needs more data before it can sort this out. But once it has, I’ll have a much better theory for understanding mental models.