Why do some people sometimes make mistakes? In higher stakes settings, do people make fewer mistakes? When the setting is more unfamiliar and complex, do people make more mistakes? If we know that we do not know the landscape we face, and we know that we are in an unfamiliar “place”, do we use new AI tools to guide us? If we do, do we make fewer mistakes because AI takes a complex problem, simplifies it, and leaves us with a manageable choice set that we can handle on our own terms?
In this substack, I want to focus on some themes I sketched out in this Tweet Thread.
This 2015 paper greatly interests me. McFadden, a Nobel Laureate, has argued that health insurance plan choice is really complicated, featuring perhaps millions of permutations of possible choices that leave people with “too much choice” such that they can regret the plan they choose. Here is an example of his writing.
Even Nobel Laureates can gloss over how markets operate. If enough people are making mistakes in choosing a plan, then entrepreneurs will innovate. Jonathan Kolstad has been one of those innovators. Here is an AER paper he wrote.
As I understand it, Jon has designed AI software that elicits a person’s demographic information and other relevant inputs, such as whether I have pre-existing conditions, my wealth, and my risk aversion, and then his algorithm tells me the right 10 plans for me to choose from and how they compare. This simplification of the choice set helps me make the right choice for me.
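To make the logic concrete, here is a minimal sketch of choice-set pruning. This is my own illustration, not Kolstad’s actual software: I assume each plan can be summarized by its premium, the person’s expected out-of-pocket cost, and the variance of that cost, and I score plans with a simple mean-variance penalty where a higher risk-aversion weight punishes riskier plans.

```python
# Hypothetical sketch of AI choice-set pruning (illustrative, not Kolstad's code).
# Score each plan by expected total cost plus a risk-aversion penalty on the
# variance of out-of-pocket spending, then keep only the top k plans.
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    premium: float        # annual premium
    expected_oop: float   # expected out-of-pocket spending for this person
    oop_variance: float   # variance of out-of-pocket spending

def score(plan: Plan, risk_aversion: float) -> float:
    # Lower is better: mean cost plus a variance penalty
    # (a mean-variance approximation to expected utility).
    return plan.premium + plan.expected_oop + risk_aversion * plan.oop_variance

def prune(plans: list[Plan], risk_aversion: float, k: int = 10) -> list[Plan]:
    # Return the k best plans for this person, best first.
    return sorted(plans, key=lambda p: score(p, risk_aversion))[:k]
```

The point of the sketch is that the algorithm does not choose for you; it shrinks millions of permutations down to a short, ranked list you can evaluate on your own terms.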
We also see this in the auto industry with tailored suggestions for vehicle choice. Click here to participate.
As more humble people admit to themselves that they know that they do not know what is the right decision to make, they will seek input from AI. How do different people adapt to this uncertainty? Steve Levitt wrote about flipping a coin to make a decision. This strikes me as not a great decision rule. An alternative is to seek independent information for calmly pruning your choice set. Assuming the AI is tuned to truthful information that you have provided, you are more likely to make a good choice if there is supply-side competition to be known as a great AI company. If the AI supplies garbage, then this substack’s logic is wrong.
I claim that, with the rise of AI, fewer people will make mistakes going forward, and the puzzles that behavioral economists point to will diminish in scope and scale. AI is a complement to our own skills in making decisions.
Consider an example from my work with Frank Wolak from 2013.
Thousands of residential electricity consumers didn’t understand their non-linear electricity bill. The complexity of the bill’s marginal rates meant that they were making mistakes. Through our 20-minute education course, we taught them about their bill’s implicit marginal price incentives. They subsequently made better choices: when their marginal price was high, they reduced their consumption. Our paper highlights how augmenting human capital “kills off” behavioral puzzles. AI will only accelerate this point. Behavioral economists will have fewer puzzles to work on going forward!
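To see why the bill is confusing, here is a toy tiered tariff. The block cutoffs and prices below are made up for illustration (they are not the tariff from our paper): the marginal price of the next kilowatt-hour jumps at each block boundary, so the total bill is non-linear in consumption, and a consumer who only looks at the average price gets the incentive wrong.

```python
# Illustrative tiered residential tariff (invented numbers, not the actual
# tariff studied in the Kahn-Wolak paper). Each tuple is
# (kWh upper bound of the block, price per kWh within that block).
TIERS = [(300, 0.10), (700, 0.20), (float("inf"), 0.35)]

def marginal_price(kwh: float) -> float:
    """Price of the next kWh at a given monthly consumption level."""
    for upper, price in TIERS:
        if kwh < upper:
            return price
    return TIERS[-1][1]

def bill(kwh: float) -> float:
    """Total bill: sum each block's usage times that block's price."""
    total, lower = 0.0, 0.0
    for upper, price in TIERS:
        used = min(kwh, upper) - lower
        if used <= 0:
            break
        total += used * price
        lower = upper
    return total
```

Under this toy schedule, a household consuming 400 kWh pays an average of about 12.5 cents per kWh but faces a 20-cent marginal price; it is the marginal price, not the average, that should govern the decision to run one more load of laundry.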
This could really have been helpful during COVID if the CDC had made the right information available: when to wear a mask or not, when to close a school (or a Tesla plant) or not.
I wonder if AI could be better at exercising fiduciary responsibility. Also, I wonder if AI could learn your risk preference by observing your decisions over a period of time (kind of like driver-monitoring technology that learns your driving style).
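Here is a speculative sketch of how that preference learning might work. Everything in it is an assumption of mine: I suppose the AI has observed a few of your choices between a safe payoff and a gamble, posit CRRA utility, and fit the risk-aversion coefficient by a simple grid search over which value best rationalizes the observed choices.

```python
# Speculative sketch: inferring a CRRA risk-aversion coefficient (gamma)
# from observed choices between a safe payoff and a gamble.
# All payoffs, choices, and the grid are made-up illustrations.
from math import log

def crra_utility(x: float, gamma: float) -> float:
    # CRRA utility; the gamma = 1 case is log utility.
    return log(x) if abs(gamma - 1.0) < 1e-9 else x ** (1 - gamma) / (1 - gamma)

def expected_utility(gamble, gamma: float) -> float:
    # gamble: list of (probability, payoff) pairs.
    return sum(p * crra_utility(x, gamma) for p, x in gamble)

def fit_gamma(observations, grid=None) -> float:
    """observations: (safe_payoff, gamble, chose_safe) triples.
    Returns the grid value of gamma that rationalizes the most choices."""
    grid = grid or [g / 10 for g in range(0, 51)]  # gamma in [0, 5]
    def matches(gamma):
        return sum(
            (crra_utility(safe, gamma) >= expected_utility(gamble, gamma)) == chose_safe
            for safe, gamble, chose_safe in observations
        )
    return max(grid, key=matches)
```

A real system would need many more observations and a noisier choice model, but the idea is the same as the driver-monitoring analogy: watch enough decisions and the preference parameter reveals itself.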