the decision bloghttps://calculensis.github.io/Sat, 06 Aug 2022 00:00:00 -0400simple tools, part 5: decision treeshttps://calculensis.github.io/decision%20trees.html<p><img align=right src="images/decision-tree.jpg" width="150"/></p>
<p>Now that we know, from <a href="https://www.thedecisionblog.com/probability%20and%20degrees%20of%20belief.html">the previous post</a>, how to translate back and forth between our degrees of confidence and subjective probabilities, we can learn a new tool that, unlike the <a href="https://www.thedecisionblog.com/linear%20model.html">linear model</a> or the <a href="https://www.thedecisionblog.com/pro-con%20list.html">weighted pro-con list</a>, takes into account the uncertainty of the outcomes of our actions.</p>
<p>Before continuing, it's important to say that we'll often be considering simple versions of our models, because it's easier to explain how to use them if we keep it simple. But once you know how to use them, you can add sophistication in all sorts of ways. For example, the three factor linear model can have more than three factors; the tree we're about to describe can get very complicated indeed if that's what it takes to faithfully describe the situation that you're in!</p>
<p>Now, then, say we're trying to consider whether to read that new book everyone is talking about. After all, reading it has an opportunity cost: Time you spend reading it is time that you could have spent doing something else, something perhaps more valuable to you. </p>
<p>The decision tree for whether to read the book is shown below; here is how it's constructed: The square on the left hand side is called a "choice point"; the branches leading from it (sometimes called "levers") are the different things you could choose to do. In this example there are two branches, "read it" and "don't read it", but in general there could be any number of branches depending on how many options you're considering.</p>
<p><img src="images/read-or-not.png" width="300"/></p>
<p>The circles represent points past which things are not under your control anymore: If you read the book, then you will either have learned something valuable or not (according to this model, though see below), each with a certain probability. Suppose you suspect that the book contains something valuable that you don't already know, maybe because a friend recommended it to you. </p>
<p>On the scale from the previous post, merely suspecting that you would learn something corresponds to a probability in the neighborhood of 0.6; so we label the "learn" branch with that probability. The probabilities on all the branches leading from a given circle have to sum to 1 (because one of those things has to happen); so the probability on the "not learn" branch is 0.4.</p>
<p>We have been assuming that only two branches lead from each of the circles to keep things simple, but they can be any number depending on the number of different outcomes we're considering. For example, we could have made three branches, one for "learn something of great value", one for "learn something moderately useful", and one for "learn nothing useful". Again, whatever numbers we put for the probabilities of these outcomes, they would all need to sum to 1.</p>
<p>The lower circle also has two possible outcomes: You didn't read it and missed out on something important, or you didn't read it and didn't miss out on anything. If the probability of learning something useful from the book is 0.6, it makes sense that the probability of missing out if you don't read it would be the same, 0.6, but the probabilities on the lower part of the tree don't always have to be the same as on the upper part. It just turns out that way for this example.</p>
<p>The last ingredients are the numbers at the ends of the branches. These are called "utilities" or "personal values", and they represent how good or how bad each outcome would be, with the worst outcome getting 0 and the best getting 100. </p>
<p>The worst outcome is spending the time to read the book and getting nothing out of it, so that branch gets 0. The best outcome, let's say, is reading the book and learning something valuable from it (the value of the thing you learned being much greater than the opportunity cost you incurred reading). This outcome gets 100. </p>
<p>Not reading it and missing out on what the book has to teach you is bad, but not as bad as reading it and getting nothing would be, so that branch gets a 10. This number you get by asking your gut how bad it would be on a scale from 0 to 100. Again consulting our guts, suppose we decide that not reading the book and not missing out on anything should get a 90. (It's not as good as if we had read it and learned something!)</p>
<p>After the tree is constructed, we calculate "expected utilities" for each choice as shown on the right hand side: For each branch leading from the choice point, multiply the probability times the personal value and add it all up. The choice that has the highest expected utility wins, in this case the "read it" branch.</p>
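<p>As a sketch, the expected-utility calculation above can be carried out in a few lines of Python; the probabilities and utilities are the example values from this post, and the function name is just an illustrative choice:</p>

```python
# A minimal sketch of the expected-utility calculation for the
# read-or-not tree; the numbers are the example values from the post.

def expected_utility(branches):
    """Sum probability * utility over the outcome branches of one choice."""
    total_prob = sum(p for p, _ in branches)
    assert abs(total_prob - 1.0) < 1e-9, "branch probabilities must sum to 1"
    return sum(p * u for p, u in branches)

choices = {
    "read it": [(0.6, 100), (0.4, 0)],        # learn / not learn
    "don't read it": [(0.6, 10), (0.4, 90)],  # miss out / don't miss out
}

scores = {name: expected_utility(b) for name, b in choices.items()}
best = max(scores, key=scores.get)
```

<p>The tally for "read it" (0.6 × 100 + 0.4 × 0 = 60) beats "don't read it" (0.6 × 10 + 0.4 × 90 = 42), matching the tree.</p>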
<p><a href="https://twitter.com/Estimatrix/status/1555693184977600512?s=20&t=YFPoxpEQ2Qp14U4FliD7fA">Discuss on Twitter</a></p>Kayla LewisSat, 06 Aug 2022 00:00:00 -0400tag:calculensis.github.io,2022-08-06:/decision trees.htmlbasicssimple tools, part 4: probability and degrees of beliefhttps://calculensis.github.io/probability%20and%20degrees%20of%20belief.html<p><img align=right src="images/dice.jpg" width="150"/></p>
<p>Before we start, an admission: The things in this post are not so much useful in themselves as prerequisites for things we're going to do later. For many of the tools that follow, we'll need to be able to assign numbers to our subjective degrees of confidence. By the end of this post, you'll know how we're going to do that!</p>
<p>Suppose we're trying to decide whether to found a startup, and that the probability of success of a randomly chosen startup is 10% (numbers like this often come from historical frequency data). To make the math work out right for the models I'll be introducing later, we'll divide by 100 and represent probabilities as numbers between 0 and 1 instead of between 0 and 100.</p>
<p>If our startup were like a randomly chosen one, then, we would estimate our probability of success at 0.1. This probability, which we get before taking into account the details of our own particular circumstances, is called the base rate or prior probability; it comes from Bayesian probability theory (on which more later).</p>
<p>At this point we could look up the most common reasons that startups fail and then take steps to make those outcomes less probable. For example, running out of money is a common reason; to mitigate that possibility, we could work extra hard securing investors. We might also make extra sure that we did our due diligence with market analysis, etc. Suppose we did all this. Now we can update our prior to... what?</p>
<p>What we need is a way to map our new (albeit squishy and gut-derived) degree of confidence to a probability; we can do it via the following reasoning: Consider flipping a fair coin. It has probability 0.5 of landing heads and 0.5 of landing tails. If I asked you, before flipping, "Do you think the coin will land heads?", an answer that would make tons of sense is "I have no idea, it's equally likely to come up either way!" From this we conclude that 0.5 corresponds to taking no position, that is, not believing either way about the truth of some possibility. </p>
<p>Now suppose that the coin is weighted so that it lands heads 60% of the time. In this case a reasonable answer to the question of which side it will land on is "I suspect it will land heads, because it's a little more likely to do that." By now, you see where this line of reasoning is going: Probabilities between 0.5 and 1 represent increasing degrees of confidence that a thing will happen, and between 0.5 and 0 represent degrees of confidence that it won't. </p>
<p>Returning to our problem: What is the probability corresponding to our degree of confidence of success now that we've taken precautions? Our starting value of 0.1 is a lot closer to zero than it is to 0.5, so it corresponds to being highly confident that the business will fail (i.e., highly confident that it won't succeed). How much should we increase that number for our particular case? It's a difficult question, about which much ink or, erm, pixels can be spilled. Let's say you only suspect failure. If 0.6 is suspecting success, then 0.4 should be suspecting failure, so our updated degree of confidence that the business will succeed is 0.4.</p>
<p>That's how it works! Here is a scale with suggested degrees of belief attached:</p>
<p><img src="images/probability-scale.png" width="400"/></p>
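<p>For readers who like code, a scale like this can be captured as a simple lookup table; the verbal labels and cutoff values below are illustrative assumptions in the spirit of the post, not a standard scale:</p>

```python
# A hypothetical mapping from verbal degrees of confidence to
# probabilities; labels and values are illustrative assumptions.

belief_to_probability = {
    "highly confident it won't": 0.1,
    "suspect it won't": 0.4,
    "no position either way": 0.5,
    "suspect it will": 0.6,
    "highly confident it will": 0.9,
}

# The startup example: from the 0.1 base rate to merely
# "suspecting failure" after taking precautions.
prior = belief_to_probability["highly confident it won't"]
updated = belief_to_probability["suspect it won't"]
```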
<p><a href="https://twitter.com/Estimatrix/status/1555693184977600512?s=20&t=YFPoxpEQ2Qp14U4FliD7fA">Discuss on Twitter</a></p>Kayla LewisFri, 05 Aug 2022 00:00:00 -0400tag:calculensis.github.io,2022-08-05:/probability and degrees of belief.htmlbasicssimple tools, part 3: the weighted pro-con listhttps://calculensis.github.io/pro-con%20list.html<p><img align=right src="images/journal-coffee.jpg" width="150"/></p>
<p>The linear model directs us to condense our decision down to the few most important things, but that can be difficult or not feasible; what then? One answer to that question is the weighted pro-con list. </p>
<p>It goes like this: For each possibility that you're considering, write down a list of good things that would happen (pros) and bad things (cons) underneath it. Now you've got an ordinary pro-con list.</p>
<p>The next step is, for each item you wrote as a pro, rate the goodness of that item on a scale from 0 (not good at all) to 10 (maximally good). Similarly, rate the cons, except this time use negative numbers, so something moderately bad would get, e.g., a -5. </p>
<p><img src="images/weighted-pro-con.png" width="350"/></p>
<p>Once all the pro-con items for an action under consideration have scores, you add them all up, both negative and positive; do that separately for each choice you're considering. An example is shown for the decision of "whether to leave my current job." The choice that ends up with the highest score is the one to choose, according to this model; in the example shown, the model says that you should leave your current job.</p>
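<p>If you want to automate the arithmetic, the tally is just a signed sum; the items and ratings below are invented for illustration and are not the exact ones pictured:</p>

```python
# A minimal sketch of a weighted pro-con tally; the items and scores
# are made up to illustrate the "leave my current job" example.

def pro_con_score(items):
    """Sum signed ratings: pros rated 0..10, cons rated -10..0."""
    return sum(items.values())

leave_job = {
    "I would love what I do": 9,
    "higher salary": 6,
    "lose current seniority": -4,
    "risk of worse coworkers": -3,
}
stay_put = {
    "comfortable routine": 5,
    "known coworkers": 4,
    "boredom continues": -7,
}

scores = {"leave": pro_con_score(leave_job), "stay": pro_con_score(stay_put)}
best = max(scores, key=scores.get)  # highest total wins under this model
```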
<p>Just like with the linear model, you may find yourself unhappy with the final result; again, you will have learned something about how you feel. This would also be an excellent opportunity to understand your feelings better by figuring out why the pro-con model ended up giving the "wrong" result. Maybe the weights aren't quite right. Maybe you missed an important pro or con.</p>
<p>For example, suppose you are unhappy with the verdict that you should change jobs. In this case I would first apply the status quo bias test, designed to make sure that we aren't valuing our current situation too much just because we happen to already be in it. This test asks you to imagine yourself, as vividly as possible, already at the new job and then ask "If there were a button I could press that would make things go back to the way they are now, would I press it?" Your answer to this question can reveal what your preferences would be subtracting out the status quo bias. After that, if the problem is still not resolved, you might return to the pros, cons, and their weights.</p>
<p>So far we've been treating the consequences of our decisions as certain. For example, the pro-con list for the example pictured above assumes that "I would love what I do" would definitely happen if you took the new job. But what if we want to incorporate uncertainty into our thinking? Stay tuned!</p>
<p><a href="https://twitter.com/Estimatrix/status/1555693184977600512?s=20&t=YFPoxpEQ2Qp14U4FliD7fA">Discuss on Twitter</a></p>Kayla LewisWed, 27 Jul 2022 00:00:00 -0400tag:calculensis.github.io,2022-07-27:/pro-con list.htmlbasicspro-con listsimple tools, part 2: the linear modelhttps://calculensis.github.io/linear%20model.html<p><img align=right src="images/arrow.jpg" width="150"/></p>
<p>For our first simple decision making tool, let's look at the 2 to 3 factor linear model. Suppose you're trying to decide which of two houses to buy, <span class="math">\(H_1\)</span> or <span class="math">\(H_2\)</span>, and you're really on the fence about it! </p>
<p>The linear model approach asks you first to consider what are the 2 or 3 most important attributes for you that a potential home could have. For example, let's say they are affordability of monthly payment (A), that feeling of charm when you first walk in (C), and typical level of quietness (Q). Granted, this last attribute might be hard to get at, but knowing that it's one of the most important qualities for you would still be valuable, and it could get you thinking of ways you might determine it, perhaps by interviewing some of your would-be new neighbors.</p>
<p>The next step is to decide how important these attributes are relative to one another and assign them weights; let's call them <span class="math">\(w_A\)</span>, <span class="math">\(w_C\)</span>, and <span class="math">\(w_Q\)</span>. The weights will be numbers between 0 and 1 such that when you add them together you get 1. For example, if each of the above attributes is equally important to you, then they all get weight 1/3. Or if, say, affordable monthly payment is twice as important as level of quietness, and quietness is just as important as the charm factor, you would have <span class="math">\(w_A=2/4\)</span>, <span class="math">\(w_C=1/4\)</span>, and <span class="math">\(w_Q=1/4\)</span>. You may have to play around to find the right values, but you can always just guess, see if the weights sum to 1, and adjust as needed. For the rest of our house example, let's use the weights <span class="math">\(w_A=2/4\)</span>, <span class="math">\(w_C=1/4\)</span>, and <span class="math">\(w_Q=1/4\)</span>.</p>
<p>Now you would rate homes <span class="math">\(H_1\)</span> and <span class="math">\(H_2\)</span> based on the attributes, where each attribute gets a score from 0 (worst) to 10 (best). For example, maybe <span class="math">\(H_1\)</span> is a super affordable house in a moderately quiet neighborhood; in that case, you might score it as A = 9 and Q = 7. Maybe it's not so high on charm, so C = 3. The linear model we've been constructing would then have us calculate a total score <span class="math">\(S(H_1)\)</span> for house 1 using the formula</p>
<div class="math">$$
S(H_1) = w_A A + w_C C + w_Q Q
$$</div>
<div class="math">$$
=\frac{2}{4}(9)+\frac{1}{4}(3)+\frac{1}{4}(7)=7.
$$</div>
<p>Suppose the second potential home is charming indeed (C=9) but not so affordable (A=3), and that it's in a medium-noise environment (Q=5). Then we get</p>
<div class="math">$$
S(H_2) = \frac{2}{4}(3)+\frac{1}{4}(9)+\frac{1}{4}(5)=5,
$$</div>
<p>and the model says to buy the first home, <span class="math">\(H_1\)</span>, because it scores higher.</p>
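<p>The whole calculation is small enough to sketch in Python, using the weights and ratings from the example (the function name is an arbitrary choice):</p>

```python
# A sketch of the 3-factor linear model from the post, with the
# weights and ratings given for houses H1 and H2.

def linear_score(weights, ratings):
    """Weighted sum of attribute ratings; weights should sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[a] * ratings[a] for a in weights)

weights = {"A": 2 / 4, "C": 1 / 4, "Q": 1 / 4}  # affordability, charm, quiet

h1 = {"A": 9, "C": 3, "Q": 7}
h2 = {"A": 3, "C": 9, "Q": 5}

s1 = linear_score(weights, h1)  # (2/4)(9) + (1/4)(3) + (1/4)(7) = 7
s2 = linear_score(weights, h2)  # (2/4)(3) + (1/4)(9) + (1/4)(5) = 5
```

<p>Here <span class="math">\(H_1\)</span> wins because its score is higher, as in the worked example.</p>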
<p>What if you find yourself not liking that result? Then you can try to figure out why your feelings and the model disagree. Maybe those weights weren't quite right? Maybe the attributes you chose weren't the most important ones to you after all? And if, no matter what you do, you keep finding yourself unhappy when <span class="math">\(H_1\)</span> wins, then the model helped you discover how you feel, and you still learned something significant. (Remember, you were on the fence at first!) </p>
<p>The next model I present will complement this one.</p>
<p><a href="https://twitter.com/Estimatrix/status/1555693184977600512?s=20&t=YFPoxpEQ2Qp14U4FliD7fA">Discuss on Twitter</a></p>
<script type="text/javascript">if (!document.getElementById('mathjaxscript_pelican_#%@#$@#')) {
var align = "center",
indent = "0em",
linebreak = "false";
if (false) {
align = (screen.width < 768) ? "left" : align;
indent = (screen.width < 768) ? "0em" : indent;
linebreak = (screen.width < 768) ? 'true' : linebreak;
}
var mathjaxscript = document.createElement('script');
mathjaxscript.id = 'mathjaxscript_pelican_#%@#$@#';
mathjaxscript.type = 'text/javascript';
mathjaxscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.3/latest.js?config=TeX-AMS-MML_HTMLorMML';
var configscript = document.createElement('script');
configscript.type = 'text/x-mathjax-config';
configscript[(window.opera ? "innerHTML" : "text")] =
"MathJax.Hub.Config({" +
" config: ['MMLorHTML.js']," +
" TeX: { extensions: ['AMSmath.js','AMSsymbols.js','noErrors.js','noUndefined.js'], equationNumbers: { autoNumber: 'none' } }," +
" jax: ['input/TeX','input/MathML','output/HTML-CSS']," +
" extensions: ['tex2jax.js','mml2jax.js','MathMenu.js','MathZoom.js']," +
" displayAlign: '"+ align +"'," +
" displayIndent: '"+ indent +"'," +
" showMathMenu: true," +
" messageStyle: 'normal'," +
" tex2jax: { " +
" inlineMath: [ ['\\\\(','\\\\)'] ], " +
" displayMath: [ ['$$','$$'] ]," +
" processEscapes: true," +
" preview: 'TeX'," +
" }, " +
" 'HTML-CSS': { " +
" availableFonts: ['STIX', 'TeX']," +
" preferredFont: 'STIX'," +
" styles: { '.MathJax_Display, .MathJax .mo, .MathJax .mi, .MathJax .mn': {color: 'inherit ! important'} }," +
" linebreaks: { automatic: "+ linebreak +", width: '90% container' }," +
" }, " +
"}); " +
"if ('default' !== 'default') {" +
"MathJax.Hub.Register.StartupHook('HTML-CSS Jax Ready',function () {" +
"var VARIANT = MathJax.OutputJax['HTML-CSS'].FONTDATA.VARIANT;" +
"VARIANT['normal'].fonts.unshift('MathJax_default');" +
"VARIANT['bold'].fonts.unshift('MathJax_default-bold');" +
"VARIANT['italic'].fonts.unshift('MathJax_default-italic');" +
"VARIANT['-tex-mathit'].fonts.unshift('MathJax_default-italic');" +
"});" +
"MathJax.Hub.Register.StartupHook('SVG Jax Ready',function () {" +
"var VARIANT = MathJax.OutputJax.SVG.FONTDATA.VARIANT;" +
"VARIANT['normal'].fonts.unshift('MathJax_default');" +
"VARIANT['bold'].fonts.unshift('MathJax_default-bold');" +
"VARIANT['italic'].fonts.unshift('MathJax_default-italic');" +
"VARIANT['-tex-mathit'].fonts.unshift('MathJax_default-italic');" +
"});" +
"}";
(document.body || document.getElementsByTagName('head')[0]).appendChild(configscript);
(document.body || document.getElementsByTagName('head')[0]).appendChild(mathjaxscript);
}
</script>Kayla LewisWed, 20 Jul 2022 00:00:00 -0400tag:calculensis.github.io,2022-07-20:/linear model.htmlbasicsAbouthttps://calculensis.github.io/about.html<p><img align=right src="images/me-summer-2022.jpg" width=150/></p>
<p>Hello, I'm Kayla Lewis, a professor in the New York City area who loves using and thinking about applied rationality, critical systems thinking (an approach that embraces many kinds of systemic perspectives), and artificial intelligence.</p>
<p>Herein I write about how we can use these tools and approaches to improve the quality of our decisions.</p>
<p>Got comments or questions? Contact me here:</p>
<p><a href="mailto:kaylalewis@thedecisionblog.com">kaylalewis@thedecisionblog.com</a></p>Kayla LewisSun, 17 Jul 2022 00:00:00 -0400tag:calculensis.github.io,2022-07-17:/about.htmlaboutsimple tools, part 1https://calculensis.github.io/simple%20tools.html<p><img align=right src="images/linear.jpg" width="200"/></p>
<p>It seems like there are two extreme intuitions that are commonly held about how best to go about decision making: The first is to say "The hell with models - I can do just fine by myself!" and the second is "Sure I can use some help, and the more sophisticated the better! And by sophisticated, I mean AI." </p>
<p>Both of these ideas reject using simple pencil-and-paper models, to their detriment! I'll explain why for each in turn.</p>
<p>Regarding the first idea - that we do just fine by ourselves - there are many factors militating against this notion, but in the interest of space I'll focus on just two: the recency effect and incomplete thinking. </p>
<p>The recency effect is our tendency to give whatever we were thinking about most recently a greater weight than other factors that we want to influence our decision. So, for example, to decide where we want to go for vacation, suppose we care about affordability and location. If the last thing we were considering is location, then affordability likely won't get as much weight as it deserves when we are making our final decision. </p>
<p>Similarly, if we are taking a multi-lens approach - that is, looking at the decision from many different perspectives - then we run the risk of giving the last lens we looked through more power than it deserves. </p>
<p>Incomplete thinking can refer to not generating enough possibilities, i.e. failing at step one of <a href="https://www.thedecisionblog.com/decisions%20and%20the%20search-inference%20framework.html">the search-inference framework</a>, but it also happens when we don't think through each of the possibilities we've generated thoroughly enough. Exacerbating this problem is that it usually feels to us like we're considering everything we need to, like something more complicated is going on in our minds than really is. </p>
<p>This feeling - that we are always doing a good job integrating the information available to us - is similar to the feeling that we experience the full panoply of information about what lies in front of us through the light that reaches our eyes: In reality we only see a small part of that information; our brains fill in the gaps in a way that is usually correct but fools us into thinking that we have a much richer perception of the world than we do.</p>
<p>We can overcome the recency effect, and to a large extent incomplete thinking, by using pencil-and-paper math models. Such models will maintain the proper weights because those weights will be contained in the model equations; moreover, the models will guide us to think all the way through the relevant possibilities.</p>
<p>Another important consideration is that decisions often seem to involve a lot of variables, and to get anywhere it helps to try and boil these dimensions down to just a few things that matter the most; making a model often forces us to go through that process. It's true that AI can also help us simplify things this way...</p>
<p>...which brings us to the second of the extreme intuitions: That the only thing that will do better than we do is something computationally sophisticated like machine learning (a type of AI). For many decisions we don't have time to collect months worth of data (or spend time trying to find and clean data that may already be out there somewhere) and run analytics on it. In fact, for many decisions we don't even start out knowing what data would be relevant or what questions we would want to ask of that data!</p>
<p>A middle approach is to do something that improves on our native mental abilities but that doesn't involve computation - in other words, pencil-and-paper modeling.</p>
<p>In the next few posts I'll share some of these models with you!</p>
<p><a href="https://twitter.com/Estimatrix/status/1555693184977600512?s=20&t=YFPoxpEQ2Qp14U4FliD7fA">Discuss on Twitter</a></p>Kayla LewisSun, 17 Jul 2022 00:00:00 -0400tag:calculensis.github.io,2022-07-17:/simple tools.htmlbasicsbasicssimple toolsdecisions and the search-inference frameworkhttps://calculensis.github.io/decisions%20and%20the%20search-inference%20framework.html<p><img align=right src="images/choosing.jpg" width="200"/></p>
<p>Hello everyone, welcome to my first blog entry!</p>
<p>This blog will be about all things related to effective decision making. </p>
<p>I'm going to start with simpler approaches or models that don't take very much time and build up to the more sophisticated ones.</p>
<p>A common theme will be that looking at a decision through as many lenses as possible, given our time constraints, is extremely useful; in this first post I want to explain why that is.</p>
<p>Making decisions is a kind of thinking, and one of my favorite models describing how thinking works is the search-inference framework. As applied to decision making, it says that thinking has three stages:</p>
<p>(1) We generate a space of possible actions or vantage points</p>
<p>(2) By using the evidence currently available to us, and in light of our goals, we evaluate the strength of each possibility</p>
<p>(3) We choose, or infer, the strongest possibility</p>
<p>The most common mistakes in thinking can be traced to failing at step (1) or (2). For step (1), the common failure is not generating a rich enough space of possibilities; for step (2), it's not weighing the evidence for each possibility fairly. I'll talk about this second way of failing in later posts.</p>
<p>Viewing a decision through multiple lenses or models is a way of lowering the probability that we fail at step (1): Each model gives us a different way of viewing the decision we want to make.</p>
<p>Very often, different models will tell us to take different courses of action. It might seem like this is a problem, but in reality it's a strength. Instead of hoping that all models will give the same answer, what we'll do is try to understand <strong><em>why</em></strong> the different approaches give the answers that they do; in so doing, it will usually become clearer which choice we ultimately want to make.</p>
<p>You can think of asking models what they "think" just as you would ask your friends what they think. Your friends might give conflicting answers, and that's a good thing! You are definitely expanding the space of possibilities to consider when that happens.</p>
<p><a href="https://twitter.com/Estimatrix/status/1555693184977600512?s=20&t=YFPoxpEQ2Qp14U4FliD7fA">Discuss on Twitter</a></p>Kayla LewisMon, 11 Jul 2022 00:00:00 -0400tag:calculensis.github.io,2022-07-11:/decisions and the search-inference framework.htmlbasicsbasicssearch-inference