There seem to be two extreme intuitions commonly held about how best to go about decision making: The first is to say "The hell with models - I can do just fine by myself!" and the second is "Sure, I can use some help, and the more sophisticated the better! And by sophisticated, I mean AI."
Both of these ideas reject using simple pencil-and-paper models, to their detriment! I'll explain why for each in turn.
Regarding the first idea - that we do just fine by ourselves - there are many factors militating against this notion, but in the interest of space I'll focus on just two: the recency effect and incomplete thinking.
The recency effect is our tendency to give whatever we were thinking about most recently greater weight than the other factors we want to influence our decision. So, for example, suppose we are deciding where to go for vacation and we care about affordability and location. If the last thing we considered was location, then affordability likely won't get as much weight as it deserves in our final decision.
Similarly, if we take a multi-lens approach - that is, looking at the decision from many different perspectives - we run the risk of giving the last lens we looked through more influence than it deserves.
Incomplete thinking can refer to not generating enough possibilities, i.e. failing at step one of the search-inference framework, but it also happens when we don't think through each of the possibilities we've generated thoroughly enough. Exacerbating this problem is that it usually feels like we're considering everything we need to - as if something more complicated is going on in our minds than actually is.
This feeling - that we are always doing a good job integrating the information available to us - is similar to the feeling that the light reaching our eyes delivers the full panoply of information about what lies in front of us. In reality we see only a small part of that information; our brains fill in the gaps in a way that is usually correct but fools us into thinking our perception of the world is much richer than it actually is.
We can overcome the recency effect, and to a large extent incomplete thinking, by using pencil-and-paper math models. Such models will maintain the proper weights because those weights will be contained in the model equations; moreover, the models will guide us to think all the way through the relevant possibilities.
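To make this concrete, here's a minimal sketch of what such a pencil-and-paper model might look like, transcribed into Python. It uses a simple weighted-sum model with the vacation example from earlier; the options, criteria weights, and scores are all hypothetical illustrations, not a prescribed method.

```python
# Fixed criterion weights, written down once in advance. Because they
# live in the model rather than in our heads, the last thing we happened
# to think about can't quietly inflate its own weight.
weights = {"affordability": 0.6, "location": 0.4}

# Score each vacation option on each criterion (1-10 scale).
# Listing every option against every criterion also guards against
# incomplete thinking: no cell of the table gets skipped.
options = {
    "beach resort": {"affordability": 4, "location": 9},
    "lake cabin": {"affordability": 8, "location": 6},
}

def total_score(scores):
    """Weighted sum: each criterion contributes its fixed weight."""
    return sum(weights[c] * scores[c] for c in weights)

best = max(options, key=lambda name: total_score(options[name]))
print(best)  # prints "lake cabin" (0.6*8 + 0.4*6 = 7.2 beats 6.0)
```

The point isn't the code - the same table fits on an index card - but that writing the weights down before scoring is what keeps them honest.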
Another important consideration is that decisions often seem to involve a lot of variables, and to get anywhere it helps to try to boil these dimensions down to just the few things that matter most; making a model often forces us to go through that process. It's true that AI can also help us simplify things this way...
...which brings us to the second of the extreme intuitions: That the only thing that will do better than we do is something computationally sophisticated like machine learning (a type of AI). For many decisions we don't have time to collect months' worth of data (or spend time trying to find and clean data that may already be out there somewhere) and run analytics on it. In fact, for many decisions we don't even start out knowing what data would be relevant or what questions we would want to ask of that data!
A middle approach is to do something that improves on our native mental abilities but that doesn't require a computer - in other words, pencil-and-paper modeling.
In the next few posts I'll share some of these models with you!