1. Introduction
In this post, I review the sparsity-based model of bounded rationality introduced in Gabaix (2011) and then extended in Gabaix (2012).
In the baseline framework presented in Gabaix (2011), a boundedly rational agent must choose an action to maximize his utility, where his optimal action may depend on many different variables. The agent then chooses an action by following a two-step algorithm: first, he chooses a sparse representation of the world whereby he completely ignores many of the possible variables that might affect his optimal action choice; second, he uses this endogenously chosen sparse representation of the world to choose a boundedly rational action. Gabaix (2012) then shows how to use this sparsity-based framework to solve a dynamic programming problem.
2. Illustrative Example
In this section, I work through a simple example showing how to build bounded rationality into a decision maker's problem in a tractable way. I begin by defining the problem. Consider the manager of a car factory, call him Bill, who gets to choose how many cars the factory should produce. Let $V(a, \mathbf{x})$ denote Bill's value function in units of dollars given an action $a$ over how many cars to produce:

(1)   $V(a, \mathbf{x}) = -\frac{q}{2} \left( a - \bar{a} - \sum_{n=1}^N \gamma_n x_n \right)^2$
Here, the vector $\mathbf{x} = (x_1, x_2, \ldots, x_N)$ denotes a collection of factors that should enter into Bill's decision process. For instance, he might worry about last month's demand, $x_1$, the current US GDP, $x_2$, the recent increase in the cost of brake pads, $x_3$, and the completion of the new St. Croix River bridge in Minneapolis, MN, $x_4$. Likewise, the vector $\boldsymbol{\gamma} = (\gamma_1, \gamma_2, \ldots, \gamma_N)$ denotes how much weight Bill should place on each of the $N$ different elements that might enter into his decision process. Each of the elements $\gamma_n$ is in units that convert decision factors into units of cars. So, for example, $\gamma_2$ would have units of cars per dollar of GDP while $\gamma_3$ would have units of cars per dollar increase in the cost of brake pads. $\bar{a}$ is a constant with units of cars which balances the equation.
Next, consider Bill's optimal and possibly sub-optimal action choices. If Bill is completely unconstrained and can pick any $a$ whatsoever, he should choose:

(2)   $a^* = \bar{a} + \sum_{n=1}^N \gamma_n x_n$

where $a^*$ has units of cars.
However, suppose that there were some constraints on Bill's problem and he could not fully adjust his choice of how many cars to produce in response to every little piece of information in the vector $\mathbf{x}$. Let $a(\mathbf{m})$ denote his choice of how many cars to produce where $m_n = 0$ for most $n$:

(3)   $a(\mathbf{m}) = \bar{a} + \sum_{n=1}^N m_n \gamma_n x_n$

For instance, if $m_2 = 1$ but $m_3 = 0$, then Bill will adjust the number of cars he produces in response to a change in the US GDP but not in response to a change in the price of brake pads. Thus, Bill's choice of how many cars to produce can be rewritten as a choice of how much weight $m_n$ he should place on each potential decision factor $x_n$.
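To make this concrete, here is a minimal sketch in Python of the optimal action in equation (2) and the sparse action rule in equation (3); all of the numbers below (the baseline $\bar{a}$, the loadings $\gamma_n$, and the realized factors $x_n$) are made up purely for illustration:

```python
import numpy as np

# Toy version of Bill's problem: a(m) = a_bar + sum_n m_n * gamma_n * x_n.
a_bar = 100.0                              # baseline production (cars)
gamma = np.array([0.5, 2.0, -0.1, 0.01])   # loadings on demand, GDP, brake pads, bridge
x = np.array([30.0, 1.5, 4.0, 1.0])        # realized decision factors

m_rational = np.ones(4)             # full attention: recovers the optimal action a*
m_sparse = np.array([1, 1, 0, 0])   # ignore brake pads and the bridge entirely

def action(m):
    """Number of cars to produce under representation m."""
    return a_bar + np.sum(m * gamma * x)

print(action(m_rational))  # 117.61
print(action(m_sparse))    # 118.0
```

Notice how little the sparse action differs from the fully rational one here: the ignored factors were, by construction, the low-impact ones.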
In order to complete the construction of Bill's boundedly rational decision problem, I now have to define a loss function for Bill which trades off the benefits of choosing a weighting vector $\mathbf{m}$ closer to the optimal choice $\mathbf{m} = \mathbf{1}$ against the cognitive costs of thinking about all of the nitty-gritty details of the $N$-dimensional vector of decision factors $\mathbf{x}$. As a benchmark, suppose that there were no cognitive costs. In such a world, Bill would choose $m_n = 1$ for all $n$, as he would suffer a quadratic loss from deviating from this optimal strategy, defined by the function $L_0(\mathbf{m})$ below, but no compensating "cognitive" gain from not having to think about how the construction of a bridge in Minneapolis should affect his production decision:

(4)   $L_0(\mathbf{m}) = \frac{q}{2} \sum_{n=1}^N \gamma_n^2 \sigma_n^2 (1 - m_n)^2$

where $q$ and $\sigma_n^2 = \mathrm{Var}(x_n)$ are placeholders which help balance the units in the equation (here I take the decision factors to be mean zero and uncorrelated). One method for incorporating cognitive costs into Bill's decision problem would be to charge him $\kappa$ units of utility per unit of emphasis on each decision factor. So, for example, if $m_2 = 1/2$, so that Bill increased his production by only $\gamma_2/2$ cars per dollar increase in GDP, then Bill would pay a cognitive cost of $\kappa/2$, where $\kappa$ has units of dollars. Below, I formulate this boundedly rational loss function as a function of $\mathbf{m}$, which I denote $L(\mathbf{m})$:

(5)   $L(\mathbf{m}) = \frac{q}{2} \sum_{n=1}^N \gamma_n^2 \sigma_n^2 (1 - m_n)^2 + \kappa \sum_{n=1}^N |m_n|^\alpha$

where, again, the term $q$ is a constant with units of dollars per squared car in order to balance the equation.
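Before working through the cases analytically, it may help to see equation (5) as code. The sketch below evaluates the per-factor loss on a grid for a few values of $\alpha$, normalizing $q = \gamma_n = \sigma_n = 1$ and picking an arbitrary $\kappa = 0.25$, and reports the loss-minimizing weight in each case:

```python
import numpy as np

# Per-factor loss from equation (5) with q = gamma_n = sigma_n = 1:
# quadratic disutility from m_n far from 1 plus a cognitive cost kappa * |m_n|^alpha.
def loss(m_n, kappa, alpha):
    return 0.5 * (1.0 - m_n) ** 2 + kappa * np.abs(m_n) ** alpha

# Compare the optimal weight on a grid; alpha controls convexity vs. sparsity.
grid = np.linspace(0.0, 1.0, 101)
for alpha in (2.0, 1.0, 0.5):
    m_star = grid[np.argmin(loss(grid, kappa=0.25, alpha=alpha))]
    print(alpha, round(float(m_star), 2))
```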
The key idea is that Bill's decision problem is both convex and sparse only when $\alpha = 1$. Below, I give 3 examples which illustrate this fact:
Example (Quadratic Complexity Costs): First, consider the quadratic case when $\alpha = 2$. Then, taking the partial derivative of $L(\mathbf{m})$ with respect to an arbitrarily chosen dimension $m_n$ and setting $q = \gamma_n = \sigma_n = 1$ for simplicity, we get the optimality condition:

(6)   $0 = -(1 - m_n) + 2 \kappa m_n$

This yields an internal optimum for Bill's choice of $m_n$:

(7)   $m_n = \frac{1}{1 + 2\kappa}$

While this is an easy solution to solve for, setting $\alpha = 2$ doesn't yield any sparsity as $m_n > 0$ whenever $\kappa < \infty$.
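A quick numerical check of the interior optimum in equation (7), using scipy's bounded scalar minimizer with a few illustrative values of $\kappa$:

```python
from scipy.optimize import minimize_scalar

# Minimize 0.5*(1 - m)^2 + kappa*m^2 over m in [0, 1] and compare with 1/(1 + 2*kappa).
for kappa in (0.1, 1.0, 10.0):
    res = minimize_scalar(lambda m: 0.5 * (1 - m) ** 2 + kappa * m ** 2,
                          bounds=(0, 1), method="bounded")
    print(kappa, round(res.x, 4), round(1 / (1 + 2 * kappa), 4))
```

Even at $\kappa = 10$ the weight is small ($\approx 0.048$) but strictly positive, which is exactly the no-sparsity point made above.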
Graphically, in the above example, the boundedly rational agent will choose a weight at the point on the $m_n$-axis in the figure below where the slope of the solid pink $\alpha = 2$ curve is exactly equal to the slope of the dashed black line.

[Figure: The solid colored lines represent the cognitive costs faced by a boundedly rational agent with $\kappa = 1$ when $\alpha = 2, 3, 4, 5$ respectively. The dashed black line represents the gain to the boundedly rational agent from increasing the weight $m_n$ on a particular decision factor when $\gamma = 1$.]
Example (Fixed Complexity Costs): On the other hand, let's now think about a case where $\alpha \to 0$, with the convention that $0^0 = 0$. Then I would want to again solve:

(8)   $\min_{m_n} \; \frac{1}{2}(1 - m_n)^2 + \kappa \cdot \mathbf{1}\{m_n \neq 0\}$

However, this problem is no longer convex as the costs of increasing $m_n$ a little bit away from $0$ will always outweigh the incremental benefits. Thus, I get a solution:

(9)   $m_n = \mathbf{1}\{\kappa \leq 1/2\}$

While well posed, this non-convex problem is computationally hard to solve in an extremely severe way, as the number of candidate on/off patterns expands combinatorially (there are $2^N$ of them) rather than linearly in the number of variables $N$; e.g., see the example in Boyd (2004) describing the regression selector problem.
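The sketch below illustrates this combinatorial flavor: it brute-forces all $2^N$ on/off patterns for a toy problem with hypothetical per-factor loss weights $\Lambda_n$. Here the problem happens to separate across variables, so the enumeration is overkill; but once the factors interact, something like it becomes the generic strategy:

```python
from itertools import product

# Fixed-cost attention: excluding factor n costs 0.5 * Lambda_n in quadratic loss,
# including it (m_n = 1) costs a flat kappa. Exact solution: enumerate 2^N patterns.
Lambda = [0.8, 0.6, 0.3, 0.1]  # hypothetical quadratic-loss weights, one per factor
kappa = 0.2

best = min(
    (sum(0.5 * L for L, on in zip(Lambda, pattern) if not on) + kappa * sum(pattern),
     pattern)
    for pattern in product([0, 1], repeat=len(Lambda))
)
print(best)  # (cost, inclusion pattern); 2^N = 16 candidates here, 2^100 for N = 100
```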
Graphically, we can see the intuition for the problem posed by non-convexity for $\alpha < 1$ in the figure below. For the blue solid line representing the $\alpha = 1/2$ case, an incremental increase in $m_n$ when $m_n$ is just marginally greater than $0$ will have a cognitive cost much greater than the benefit; however, for $m_n$ closer to $1$, the benefit of increasing the weighting factor $m_n$ will outweigh the cognitive cost. The $\alpha \to 0$ case can be seen as an even more extreme limit, as the blue line becomes increasingly kinked and eventually becomes a flat line at $\kappa = 1$.

[Figure: The solid colored lines represent the cognitive costs faced by a boundedly rational agent with $\kappa = 1$ when $\alpha = 1/2, 1/3, 1/4, 1/5$ respectively. The dashed black line represents the gain to the boundedly rational agent from increasing the weight $m_n$ on a particular decision factor when $\gamma = 1$.]
Looking at the previous examples, we can follow the Goldilocks logic and see that there is a particular parameterization of $\alpha$ which yields both sparsity and convexity… namely, $\alpha = 1$.
Example (Linear Complexity Costs): Finally, consider the case where $\alpha = 1$. Here, taking the subgradient of $L(\mathbf{m})$ with respect to $m_n$ (again setting $q = 1$), we find that:

(10)   $0 = -\gamma_n^2 \sigma_n^2 (1 - m_n) + \kappa \, \mathrm{sgn}(m_n) \quad \text{for } m_n \neq 0$

where $\mathrm{sgn}(\cdot)$ denotes the sign operator which returns the sign of a real constant. Thus, we have 2 different options for the optimal choice of $m_n$:

(11)   $m_n = \begin{cases} 1 - \kappa / (\gamma_n^2 \sigma_n^2) & \text{if } \gamma_n^2 \sigma_n^2 > \kappa \\ 0 & \text{otherwise} \end{cases}$

Thus, the setting where $\alpha = 1$ is both analytically tractable and sparse, as any variables with $\gamma_n^2 \sigma_n^2 \leq \kappa$ will be ignored by the decision maker.
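In code, the $\alpha = 1$ solution in equation (11) is just a soft-threshold rule; the loadings and volatilities below are hypothetical:

```python
import numpy as np

# Soft-threshold attention rule: m_n = max(1 - kappa / (gamma_n^2 * sigma_n^2), 0),
# which zeros out low-impact variables in one shot.
def sparse_weights(gamma, sigma, kappa):
    impact = gamma ** 2 * sigma ** 2
    return np.maximum(1.0 - kappa / impact, 0.0)

gamma = np.array([0.5, 2.0, 0.1, 0.01])
sigma = np.array([4.0, 1.0, 1.0, 1.0])
print(sparse_weights(gamma, sigma, kappa=0.5))  # [0.875 0.875 0.    0.   ]
```

The two high-impact factors get (dampened) attention; the brake-pad and bridge style factors are ignored entirely, exactly the behavior the fixed-cost case delivered but now from a convex problem.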
3. Static Problem Formulation
In the above example, I showed how to embed sparsity into Bill's decision problem using a linear cost function. However, Bill's problem was extremely simple in the sense that he had a linear-quadratic value function. Can the intuition developed in this simple example be extended to settings with more elaborate utility functions? In the simple example, I found that there was a clean knife-edge result regarding $\alpha = 1$ being the only power coefficient which delivered both sparsity and a tractable solution; however, this result depended on Bill's utility gain from increasing his weighting $m_n$ on decision factor $x_n$ being linear; e.g., look at the black dashed line in the 2 figures above. Will this same intuition regarding $\alpha = 1$ hold for more complicated utility functions?
Gabaix (2011) shows that the intuition indeed holds for a wide variety of utility specifications after using an appropriate quadratic approximation of the problem around a reference representation of the world and action $(\mathbf{m}^d, a^d)$. For instance, in Bill's problem above, this would be like allowing his value function $V(a, \mathbf{x})$ to be non-quadratic and then approximating his problem as quadratic around a reference $\mathbf{m}^d = (1, 1, 0, \ldots, 0)$, implying that his default decision is to ignore all variables except for last period's demand and the current value of GDP, together with a default number of cars to produce $a^d$.
In order to construct this approximation, I need to define 3 objects. First, for an arbitrary value function $V(a, \mathbf{x})$, define the partial derivatives $V_{aa}$ and $V_{ax}$ as:

(12)   $V_{aa} = \frac{\partial^2 V}{\partial a \, \partial a'} \quad \text{and} \quad V_{ax} = \frac{\partial^2 V}{\partial a \, \partial \mathbf{x}'}$, evaluated at the reference point $(\mathbf{m}^d, a^d)$

where $V_{aa}$ is negative definite, implying that $V(a, \mathbf{x})$ is strictly concave in the neighborhood of $(\mathbf{m}^d, a^d)$. I then use these 2 matrices to define a weighting matrix $\boldsymbol{\Lambda}$ which captures how much information is lost via the quadratic approximation:

(13)   $\boldsymbol{\Lambda} = V_{ax}' \left( -V_{aa} \right)^{-1} V_{ax}$
This matrix corresponds to the weighting matrix $\mathbf{W}$ used in GMM to differentially interpret the error terms from each of the moment equations, à la Cochrane (2005). For instance, when digesting a vector of errors from a set of pricing equations in GMM, a large $W_{i,j}$ term in the weighting matrix $\mathbf{W}$ means that a set of coefficients which produce large errors in equations $i$ and $j$ at the same time will be penalized more heavily. In the quadratic approximation below, $\boldsymbol{\Lambda}$ will be used to differentially interpret the loadings $m_n$ on different decision factors.
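As a concrete illustration, the sketch below builds the weighting matrix in equation (13) from hypothetical second derivatives for a problem with 1 action and 3 decision factors; all of the numbers are made up:

```python
import numpy as np

# Lambda = V_ax' (-V_aa)^{-1} V_ax from equation (13).
V_aa = np.array([[-2.0]])            # must be negative definite
V_ax = np.array([[1.0, 0.5, 0.1]])   # cross-partials d^2 V / (da dx_n)

Lambda = V_ax.T @ np.linalg.inv(-V_aa) @ V_ax
print(Lambda)  # 3x3 and positive semidefinite; off-diagonals price co-movement
```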
Step 1 (Choose Representation): The agent chooses his optimal sparse representation of the world $\mathbf{m}^*$ as the solution to the optimization problem:

(14)   $\mathbf{m}^* = \arg\min_{\mathbf{m}} \; \frac{1}{2} (\mathbf{1} - \mathbf{m})' \boldsymbol{\Lambda} (\mathbf{1} - \mathbf{m}) + \kappa \, C(\mathbf{m})$

where $C(\mathbf{m})$ captures the cognitive cost of a model and is defined as:

(15)   $C(\mathbf{m}) = \sum_{n=1}^N |m_n|$

In the formulation above, $\kappa$ plays the same role as it did in the motivating example and simply controls the units and scale of the agent's choice of representations. In general, $\kappa$ will not be problem specific in any material sense.
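When $\boldsymbol{\Lambda}$ is diagonal, the Step 1 problem separates across factors and reduces to the soft-threshold rule from the linear-cost example; here is a minimal sketch reusing the diagonal of the $\boldsymbol{\Lambda}$ computed above (a non-diagonal $\boldsymbol{\Lambda}$ would instead call for a generic L1-penalized quadratic solver):

```python
import numpy as np

# With diagonal Lambda, each weight solves min_m 0.5*Lambda_n*(1-m)^2 + kappa*m,
# giving m_n = max(1 - kappa / Lambda_n, 0).
def choose_representation(Lambda_diag, kappa):
    return np.maximum(1.0 - kappa / Lambda_diag, 0.0)

Lambda_diag = np.array([0.5, 0.125, 0.005])  # diagonal of Lambda from the sketch above
print(choose_representation(Lambda_diag, kappa=0.05))  # [0.9 0.6 0. ]
```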
Step 2 (Choose Actions): The agent then maximizes over his choice of the $A$ actions in the vector $\mathbf{a}$:

(16)   $\mathbf{a}^* = \arg\max_{\mathbf{a}} \; V(\mathbf{a}, \mathbf{m}^* \circ \mathbf{x}) - \kappa \, D(\mathbf{a})$

where the cognitive cost of deviating from the default action $\mathbf{a}^d$ is $D(\mathbf{a})$:

(17)   $D(\mathbf{a}) = \sum_{k=1}^A |a_k - a_k^d|$
In the motivating example, Bill did not face Step 2 of the algorithm at all, as his choice of car production levels was unconstrained after choosing a representation of the world $\mathbf{m}$. This second step allows for the physical difficulty of adjusting the action given the agent's understanding of the decision problem; e.g., consider a decision maker who makes a portfolio decision regarding how to allocate his firm's wealth. After choosing a representation of the world in Step 1, he might look at the facts $\mathbf{x}$ through the lens of this representation $\mathbf{m}^*$ and decide that he needs to lengthen the duration of his portfolio; however, since there are many different ways to do this in practice, he may face cognitive costs in executing such a change. Step 2 captures these sorts of costs.
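Here is a minimal sketch of Step 2 for a single action with a quadratic value function and the linear deviation cost in equation (17); the target, default, and $\kappa$ values are made up. The familiar soft-threshold logic reappears: the agent moves toward the target but stops $\kappa$ short of it, and does not move at all for small enough gaps:

```python
import numpy as np

# Maximize V(a) = -0.5*(a - a_target)^2 minus kappa * |a - a_default|: the optimum
# shrinks the move toward the target by kappa, and vanishes when |gap| <= kappa.
def choose_action(a_target, a_default, kappa):
    gap = a_target - a_default
    return a_default + np.sign(gap) * max(abs(gap) - kappa, 0.0)

print(choose_action(a_target=118.0, a_default=115.0, kappa=1.0))  # 117.0
print(choose_action(a_target=118.0, a_default=117.5, kappa=1.0))  # 117.5 (no change)
```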
In summary, a boundedly rational agent with a preference for sparsity first employs a quadratic approximation of his value function and then uses a linear cost function to price the cognitive cost of the complexity of his model of the world. What’s more, this same linear cognitive cost function can be used in a secondary step to incorporate settings where there are cognitive costs to executing a given action.
4. Application to Dynamic Programming
I conclude by showing how to use these tools to solve for a representative agent's optimal consumption choice in an infinite horizon economy by treating the terms in a Taylor expansion of the optimal consumption rule around the steady state as increasingly complicated decision factors. Consider a standard, discrete time, consumption-based model where a representative agent has power utility with risk aversion parameter $\gamma$ and time preference parameter $\delta$, described by the preferences below:

(18)   $U = \mathrm{E}\left[ \sum_{t=0}^\infty \delta^t \, \frac{c_t^{1-\gamma}}{1-\gamma} \right]$

Assume that there is a single risky asset with return $R_t$ where, for simplicity, $R_t = \bar{R}$ is constant. Then, I can write this representative agent's problem recursively as follows:

(19)   $V(w_t) = \max_{c_t} \left\{ \frac{c_t^{1-\gamma}}{1-\gamma} + \delta \, \mathrm{E}_t\!\left[ V(w_{t+1}) \right] \right\} \quad \text{s.t.} \quad w_{t+1} = \bar{R}(w_t - c_t) + \bar{y}$

where $V(w_t)$ is the agent's value function given wealth level $w_t$ and $\bar{y}$ is the constant endowment rate. In this world, it is easy to derive that the optimal consumption choice is given by:

(20)   $c(w_t) = \left( 1 - \delta^{1/\gamma} \bar{R}^{(1-\gamma)/\gamma} \right) \left( w_t + \frac{\bar{y}}{\bar{R} - 1} \right)$
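A quick sketch of this consumption rule in code, under illustrative parameter values (and assuming $\delta \bar{R}^{1-\gamma} < 1$ so that the marginal propensity to consume is positive):

```python
# Closed-form rule from equation (20): consumption is a constant fraction of total
# resources, i.e. wealth plus the present value y_bar / (R_bar - 1) of the endowment.
delta, gamma_ra, R_bar, y_bar = 0.96, 2.0, 1.04, 1.0

mpc = 1.0 - delta ** (1.0 / gamma_ra) * R_bar ** ((1.0 - gamma_ra) / gamma_ra)
def consume(w):
    return mpc * (w + y_bar / (R_bar - 1.0))

print(round(mpc, 4), round(consume(10.0), 4))
```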
In the remainder of this example, I assume that even the boundedly rational representative agent can solve this simple problem in closed form. However, when I tweak the problem and allow the true values to be $R_t = \bar{R} + \hat{r}_t$ and $y_t = \bar{y} + \hat{y}_t$, where the idiosyncratic terms $\hat{r}_t$ and $\hat{y}_t$ evolve according to the AR(1) processes below with mean $0$ shocks $\varepsilon_{r,t+1}$ and $\varepsilon_{y,t+1}$:

(21)   $\hat{r}_{t+1} = \rho_r \hat{r}_t + \varepsilon_{r,t+1} \quad \text{and} \quad \hat{y}_{t+1} = \rho_y \hat{y}_t + \varepsilon_{y,t+1}$

the representative agent will want to seek a sparse solution due to cognitive costs. Introducing these idiosyncratic terms yields a more complicated value function with 2 additional state variables when writing the problem recursively:

(22)   $V(w_t, \hat{r}_t, \hat{y}_t) = \max_{c_t} \left\{ \frac{c_t^{1-\gamma}}{1-\gamma} + \delta \, \mathrm{E}_t\!\left[ V(w_{t+1}, \hat{r}_{t+1}, \hat{y}_{t+1}) \right] \right\}$

where there will in general be no closed form solution for the optimal choice of $c_t$.
However, suppose that we took a Taylor expansion of the optimal $\ln c_t$ around the benchmark solution $c(w_t)$ to get:

(23)   $\ln c_t \approx \ln c(w_t) + b_r \hat{r}_t + b_y \hat{y}_t + (\text{higher order terms})$

I treat these coefficients as the boundedly rational representative agent's decision-factor loadings (the analogues of the $\gamma_n$'s in Bill's problem). Then, I can define his optimal log consumption choice as:

(24)   $\ln c_t = \ln c(w_t) + m_r b_r \hat{r}_t + m_y b_y \hat{y}_t$

where $m_r$ and $m_y$ are given by the rule:

(25)   $m_n = \max\left( 1 - \frac{\kappa}{\Lambda_n}, \, 0 \right) \quad \text{for } n \in \{r, y\}$

where $\Lambda_n$ denotes the quadratic loss weight on factor $n$ from the Step 1 problem above.
A reasonable choice of this constant might be $\kappa = \bar{\kappa} \cdot \mathrm{Var}(\ln c_t)$ for some small fraction $\bar{\kappa}$, as it would have the interpretation that the agent would set $m_n = 0$ whenever taking the interest rate or endowment level into account would change the standard deviation of log consumption by fewer than $\bar{\kappa}$ standard deviations, and otherwise set $m_n > 0$. Thus, a sparsity-seeking boundedly rational representative agent has a particular rule in mind when figuring out which higher order Taylor terms to ignore.
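Putting the pieces together, the sketch below applies the attention rule in equation (25) to hypothetical Taylor coefficients $b_r$ and $b_y$, proxying each $\Lambda_n$ by that factor's contribution to the variance of log consumption; every number here is made up for illustration:

```python
import numpy as np

# Sparse attention over Taylor terms: m_n = max(1 - kappa / Lambda_n, 0), with
# Lambda_n proxied by b_n^2 * Var(factor_n), each term's share of Var(ln c).
b = np.array([0.30, 0.05])            # hypothetical loadings on r_hat and y_hat
var_factors = np.array([0.02, 0.01])  # hypothetical stationary variances
kappa = 0.001

Lambda_n = b ** 2 * var_factors       # variance of log consumption due to each factor
m = np.maximum(1.0 - kappa / Lambda_n, 0.0)
print(m)  # the y_hat term is dropped entirely; the r_hat term is kept but dampened
```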