An Axiomatization of Stochastic Utility
Ricky Li † January 29, 2021
Abstract
I provide an axiomatization of stochastic utility over two periods, stating testable necessary and sufficient conditions under which an agent's choice behavior under exogenous menu selection can be modeled by a pair of random utility functions. Although static random utility is characterized by a single axiom, Block-Marschak nonnegativity, I demonstrate that an additional notion of marginal consistency is needed for the two-period axiomatization. In particular, when each period's choice set has size three, I restate the characterization using the simpler axiom of stochastic regularity. I conclude by stating several corollaries, including an axiomatization of stochastic utility with full support and an axiomatization of n-period stochastic utility.

As stated in Sen (1971), it is well-known how to characterize the deterministic choice functions that can be represented by a unique, strict preference relation. However, in economics, agents' choices often display some element of randomness. Instead of observing a mapping from each menu to an element of the menu, the analyst may observe a mapping from each menu to a probability distribution over the menu. Analogously, the analyst may wish to represent this stochastic choice function with a probability distribution over strict preference relations, also known as a random utility (RU) representation. Block et al. (1959) and Falmagne (1978) showed that a single axiom characterizes the existence of such a representation. (See Section 2 for the formal result.)

Given dynamic nondeterministic choice data, the analyst may similarly wish to microfound the data with a multiperiod random utility representation. Depending on the primitive, there are multiple variants of this model. One type is to treat menus as endogenous: at any given period, the agent chooses a lottery over the set of pairs of immediate consumption and a menu of lotteries for the next period. Given dynamic choice data of this type, Frick et al. (2019) obtained an axiomatization of dynamic random expected utility, as well as sharper sub-models in which agents are forward-looking.

However, there are also settings in which menus may be exogenously selected, such as research studies in which the authors present menus to the subjects. There are also settings in which the choice set is finite, ruling out lotteries. In particular, the analyst may wonder when this variant of dynamic choice data can be modeled by a stochastic process of random preferences. The main result of this paper is a characterization of these representations, which I name stochastic utility (SU).

The rest of the paper proceeds as follows.

† I am a senior at Harvard College, and my email is [email protected]. I thank Tomasz Strzalecki for introducing me to decision theory and for his invaluable guidance throughout my research career.
Section 2 provides an overview of RU and its axiomatization.
Section 3 formally defines SU and states its axiomatization.
Section 4 provides some corollaries to the main result.
Section 5 contains some relevant combinatorics results and all proofs.
Before introducing SU, I provide a brief overview of RU and its axiomatization. Let X be a finite choice set, and let M := 2^X \ {∅} be the set of all nonempty menus. Given a (exogenously-chosen) menu A ∈ M, the agent makes a choice x ∈ A. The agent's choice data for all nonempty menus is encoded in the following primitive:

Definition 1. A stochastic choice function is a mapping ρ : M → ∆(X) satisfying supp ρ(A) ⊆ A for all A ∈ M.

Stochastic choice functions must satisfy supp ρ(A) ⊆ A because the agent can only pick from choices within the menu. As in Strzalecki (2021), I use ρ to denote a stochastic choice function and ρ(x, A) to denote the probability that ρ(A) assigns x. Let P be the set of strict preference relations over X, and let C(x, A) := {≻ ∈ P : x ≻ A \ {x}}. Observe that {C(x, A)}_{x ∈ A} form a partition of P. (P can also be viewed as the set of permutations of X. The notation x ≻ A \ {x} denotes x ≻ y for all y ∈ A \ {x}.)

Definition 2. μ ∈ ∆(P) is a random utility (RU) representation of ρ if ρ(x, A) = μ(C(x, A)) for all x ∈ A ∈ M.

In the deterministic case, it is well-known that a choice function can be represented by a strict preference relation if and only if it satisfies Sen's α condition, as shown in Sen (1971). I will now state the analogous axiomatization of RU, first for |X| ≤ 3.

Axiom 1. ρ satisfies regularity if ρ(x, A) ≥ ρ(x, B) for all x ∈ A ⊆ B ∈ M.

As stated in Strzalecki (2021), regularity serves as the stochastic analog of Sen's α. Regularity is necessary for RU because C(x, A) ⊇ C(x, B) for all x ∈ A ⊆ B. In particular, when the choice set is of size three or less, regularity characterizes RU.

Lemma 1 (Block et al. (1959)). Suppose |X| ≤ 3. ρ has a RU representation if and only if it satisfies regularity.

If the choice set satisfies these cardinalities, Strzalecki (2021) shows that the RU representation is also unique.
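To make Definitions 1–2 concrete, the following sketch (a toy illustration, not part of the formal development; the uniform μ and the labels a, b, c are my own assumptions) induces ρ from a hypothetical RU representation and checks regularity:

```python
from itertools import permutations, combinations

X = ("a", "b", "c")
P = list(permutations(X))              # strict preferences, best element first
mu = {p: 1 / len(P) for p in P}        # hypothetical uniform RU representation

# M = 2^X \ {emptyset}: all nonempty menus
M = [frozenset(c) for r in range(1, len(X) + 1) for c in combinations(X, r)]

def rho(x, A):
    """rho(x, A) = mu(C(x, A)): mass of preferences ranking x above A \\ {x}."""
    return sum(w for p, w in mu.items() if min(A, key=p.index) == x)

# The cells {C(x, A)}_{x in A} partition P, so rho(A) is a distribution over A.
for A in M:
    assert abs(sum(rho(x, A) for x in A) - 1) < 1e-12

# Regularity (Axiom 1): rho(x, A) >= rho(x, B) whenever x in A and A ⊆ B.
for A in M:
    for B in M:
        if A <= B:
            for x in A:
                assert rho(x, A) >= rho(x, B) - 1e-12
```

Shrinking a menu can only raise each surviving item's choice probability, which is exactly the containment C(x, A) ⊇ C(x, B) in probability form.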
For higher cardinalities, a stronger axiom is needed to characterize RU.
Definition 3 (Chambers and Echenique (2016)). For any A ⊊ X and x ∈ A^C, define their Block-Marschak sum to be

M_{x,A} := ∑_{B ⊇ A^C} (−1)^{|B \ A^C|} ρ(x, B)

Axiom 2 (Chambers and Echenique (2016)). ρ satisfies Block-Marschak nonnegativity if M_{x,A} ≥ 0 for all x ∈ A^C ≠ ∅.

Lemma 2 (Block et al. (1959); Falmagne (1978)). ρ has a RU representation if and only if it satisfies Block-Marschak nonnegativity.

With these two axiomatizations in hand, I turn to the dynamic model.
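The Block-Marschak sums are directly computable by inclusion-exclusion. A minimal sketch, reusing the hypothetical uniform μ from before (my own assumption, not the paper's construction):

```python
from itertools import permutations, combinations

X = ("a", "b", "c")
P = list(permutations(X))
mu = {p: 1 / len(P) for p in P}        # hypothetical RU representation
M = [frozenset(c) for r in range(1, len(X) + 1) for c in combinations(X, r)]

def rho(x, A):
    return sum(w for p, w in mu.items() if min(A, key=p.index) == x)

def bm(x, A):
    """M_{x,A} = sum over menus B ⊇ A^C of (-1)^{|B \\ A^C|} rho(x, B)."""
    Ac = frozenset(X) - frozenset(A)
    return sum((-1) ** len(B - Ac) * rho(x, B) for B in M if Ac <= B)

# Block-Marschak nonnegativity (Axiom 2) holds for this RU-generated rho:
for A in [frozenset()] + M:
    if A != frozenset(X):
        for x in frozenset(X) - A:
            assert bm(x, A) >= -1e-12
```

For an RU-generated ρ, bm(x, A) recovers the μ-mass of preferences ranking A above x above everything else, which is why nonnegativity is necessary.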
I begin by generalizing the agent's static problem to two periods, denoted t = 1, 2, with finite choice sets X_t. As before, define M_t := 2^{X_t} \ {∅}. In period t, the agent is offered a (exogenously-chosen) menu A_t ∈ M_t and makes a choice x_t ∈ A_t. Importantly, in the dynamic case, the analyst sequentially observes the agent's choices. We can thus encode the agent's choice data as follows. As before, the analyst observes the (first-period) stochastic choice function ρ_1. As in Strzalecki (2021), define H_1 := {(A_1, x_1) ∈ M_1 × X_1 : ρ_1(x_1, A_1) > 0} as the set of observable choice histories. In addition to ρ_1, the analyst also observes a family of period-2 stochastic choice functions {ρ_2(·|h)}_{h ∈ H_1}, indexed by choice histories. Thus, the primitive is the vector ρ := (ρ_1, {ρ_2(·|h)}_{h ∈ H_1}). Since the analyst does not have access to data describing the agent's period-2 choices after making zero-probability period-1 choices, WLOG let ρ_2(·, A_2 | A_1, x_1) ∈ ∆(A_2) be an arbitrarily chosen probability distribution when ρ_1(x_1, A_1) = 0.

As before, let P_t be the set of strict preference relations over X_t. For x_t ∈ A_t, define C_t(x_t, A_t) := {≻_t ∈ P_t : x_t ≻_t A_t \ {x_t}} and C(x_t, A_t) := C_t(x_t, A_t) × P_{−t}.

Definition 4 (Strzalecki (2021)). μ ∈ ∆(P_1 × P_2) is a (two-period) stochastic utility (SU) representation of ρ if ρ_1(x_1, A_1) = μ(C(x_1, A_1)) for all x_1 ∈ A_1 ∈ M_1 and ρ_2(x_2, A_2 | A_1, x_1) = μ(C(x_2, A_2) | C(x_1, A_1)) for all x_2 ∈ A_2 ∈ M_2 and (A_1, x_1) ∈ H_1.

If μ is a SU representation of ρ, note that its marginal over P_1 is a RU representation of ρ_1 and its conditional μ(· | C(x_1, A_1)) is a RU representation of ρ_2(·|A_1, x_1) for (A_1, x_1) ∈ H_1.

Definition 5.
For each t = 1, 2 and x_t ∈ A_t^C ≠ ∅, define their joint Block-Marschak sum to be

M_{x_1,A_1;x_2,A_2} := ∑_{B_1 ⊇ A_1^C} ∑_{B_2 ⊇ A_2^C} (−1)^{|B_1 \ A_1^C| + |B_2 \ A_2^C|} ρ_2(x_2, B_2 | B_1, x_1) ρ_1(x_1, B_1)

and their joint upper edge set to be

E(x_1, A_1; x_2, A_2) := {(≻_1, ≻_2) ∈ P_1 × P_2 : A_t ≻_t x_t ≻_t A_t^C \ {x_t}, t = 1, 2}

Axiom 3. ρ satisfies stochastic Block-Marschak nonnegativity if M_{x_1,A_1;x_2,A_2} ≥ 0 for each t = 1, 2 and x_t ∈ A_t^C ≠ ∅.

(The convention above for zero-probability histories ensures that expressions like "ρ_2(x_2, A_2 | A_1, x_1) ρ_1(x_1, A_1) = 0" make sense when ρ_1(x_1, A_1) = 0, so that the forthcoming axioms are well-defined. As long as ρ_2(·, A_2 | A_1, x_1) is an honest-to-god probability distribution over A_2, its values will not affect the axiomatization. Throughout, let {t, −t} = {1, 2}. Joint upper edge sets are a generalization of what Chambers and Echenique (2016) define as upper contour sets.)
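The joint sums are as mechanical to compute as their static counterparts. The sketch below is a toy check (the uniform product measure over P_1 × P_2 and the binary choice sets are my own assumptions) that an SU-generated ρ satisfies Axiom 3:

```python
from itertools import permutations, combinations

X1, X2 = ("a", "b"), ("c", "d")
P1, P2 = list(permutations(X1)), list(permutations(X2))

# Hypothetical SU representation: any joint distribution over P1 x P2 works.
mu = {(p1, p2): 1 / (len(P1) * len(P2)) for p1 in P1 for p2 in P2}

def menus(X):
    return [frozenset(c) for r in range(1, len(X) + 1) for c in combinations(X, r)]

def best(p, A):
    return min(A, key=p.index)

def rho1(x1, A1):
    return sum(w for (p1, _), w in mu.items() if best(p1, A1) == x1)

def rho2(x2, A2, A1, x1):
    num = sum(w for (p1, p2), w in mu.items()
              if best(p1, A1) == x1 and best(p2, A2) == x2)
    return num / rho1(x1, A1)

def joint_bm(x1, A1, x2, A2):
    # M_{x1,A1;x2,A2}: double inclusion-exclusion over B_t ⊇ A_t^C
    A1c, A2c = frozenset(X1) - A1, frozenset(X2) - A2
    return sum((-1) ** (len(B1 - A1c) + len(B2 - A2c))
               * rho2(x2, B2, B1, x1) * rho1(x1, B1)
               for B1 in menus(X1) if A1c <= B1
               for B2 in menus(X2) if A2c <= B2)

subsets1 = [frozenset()] + [A for A in menus(X1) if A != frozenset(X1)]
subsets2 = [frozenset()] + [A for A in menus(X2) if A != frozenset(X2)]
for A1 in subsets1:
    for A2 in subsets2:
        for x1 in frozenset(X1) - A1:
            for x2 in frozenset(X2) - A2:
                assert joint_bm(x1, A1, x2, A2) >= -1e-12
```

Under an SU representation, joint_bm recovers the μ-mass of the joint upper edge set E(x_1, A_1; x_2, A_2), which is the content of Proposition 3 below's necessity direction.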
As its name suggests,
Axiom 3 serves as the two-period analog of
Axiom 2. Unlike the static case,
Axiom 3 is not sufficient, and another axiom that enforces consistency between periods is needed to complete the characterization.
Axiom 4. ρ satisfies marginal consistency if

P(x_2, A_2; A_1) := ∑_{x_1 ∈ A_1} ρ_2(x_2, A_2 | A_1, x_1) ρ_1(x_1, A_1)

is invariant in A_1 for all x_2 ∈ A_2 ∈ M_2.

The main result of this paper is that
Axioms 3 and 4 characterize two-period SU:
Theorem 1. ρ has a SU representation if and only if it satisfies stochastic Block-Marschak nonnegativity and marginal consistency.

The full proof of
Theorem 1 is in
Section 5, but I will provide a sketch here. First, I will state several helpful propositions.
Propositions 1 and 2 serve as useful identities for the joint Block-Marschak sums, and Proposition 3 characterizes SU representations as probability measures that assign each joint upper edge set its corresponding joint Block-Marschak sum.

Proposition 1.
For each t = 1, 2 and x_t ∈ A_t^C ≠ ∅,

ρ_2(x_2, A_2^C | A_1^C, x_1) ρ_1(x_1, A_1^C) = ∑_{B_1 ⊆ A_1} ∑_{B_2 ⊆ A_2} M_{x_1,B_1;x_2,B_2}

Proposition 2. For any x_1 ∈ A_1^C ≠ ∅ and ∅ ⊊ A_2 ⊊ X_2,

∑_{x_2 ∈ A_2^C} M_{x_1,A_1;x_2,A_2} = ∑_{y_2 ∈ A_2} M_{x_1,A_1;y_2,A_2\{y_2}}

Proposition 3. μ is a SU representation of ρ if and only if μ(E(x_1, A_1; x_2, A_2)) = M_{x_1,A_1;x_2,A_2} for each t = 1, 2 and x_t ∈ A_t^C ≠ ∅.

To prove the forwards direction of
Theorem 1 , note that
Proposition 3 immediately implies that stochastic Block-Marschak nonnegativity is necessary, since probability measures assign nonnegative probability to all events. Marginal consistency is necessary because of the Law of Total Probability; in Strzalecki (2021), this is stated as the LTP axiom. (Proposition 1 is the two-period analog of Lemma 7.4.I in Chambers and Echenique (2016), while Proposition 2 is a partial generalization of Lemma 7.4.II in the same book. Proposition 3 is the two-period analog of Proposition 7.3 in Chambers and Echenique (2016).) With Proposition 3 in hand, it follows that to prove the backwards direction, it suffices to construct a probability measure μ ∈ ∆(P_1 × P_2) that assigns each joint upper edge set its corresponding joint Block-Marschak sum. I do this in Section 5 via the following steps:
1. Using marginal consistency, prove the period-1 equivalent of
Proposition 2.

2. Using stochastic Block-Marschak nonnegativity, recursively define a "partial measure" ν. ν is "partial" in the following sense: it is not defined on all subsets of P_1 × P_2, but rather on pairs of subsets called t-cylinders.

3. Verify that ν satisfies two crucial additivity properties over the pairs of t-cylinders.

4. Use both additivity properties to define a probability measure μ that is an extension of ν, and verify that μ assigns each joint upper edge set its corresponding joint Block-Marschak sum.

The Case |X_1| = |X_2| = 3

At lower choice set cardinalities, we can restate stochastic Block-Marschak nonnegativity as a simpler axiom.
Axiom 5. ρ satisfies stochastic regularity if for each t =
1, 2 and x_t ∈ A_t ⊆ B_t,

ρ_1(x_1, A_1) [ρ_2(x_2, A_2 | A_1, x_1) − ρ_2(x_2, B_2 | A_1, x_1)] ≥ ρ_1(x_1, B_1) [ρ_2(x_2, A_2 | B_1, x_1) − ρ_2(x_2, B_2 | B_1, x_1)]

Stochastic regularity is necessary because x_t ∈ A_t ⊆ B_t for each t = 1, 2 implies

C(x_1, A_1) ∩ (C(x_2, A_2) \ C(x_2, B_2)) ⊇ C(x_1, B_1) ∩ (C(x_2, A_2) \ C(x_2, B_2))

When |X_1| = |X_2| = 3, stochastic regularity and marginal consistency characterize SU:

Proposition 4.
Suppose |X_1| = |X_2| = 3. ρ has a unique SU representation if and only if it satisfies stochastic regularity and marginal consistency.

(Analogous reasoning provides intuition for why Block-Marschak nonnegativity is necessary for static RU. The proof strategy for this direction is adapted from the proof of the static case in Chambers and Echenique (2016).)

Corollaries
As shown by Fishburn (1998), for arbitrary finite X, RU representations need not be unique. This also implies that, in general, SU representations need not be unique. Indeed, sometimes it may be desirable to represent ρ using a SU representation with full support over P_1 × P_2.

Definition 6. μ ∈ ∆(P_1 × P_2) has full support if μ(≻_1, ≻_2) > 0 for all (≻_1, ≻_2) ∈ P_1 × P_2.

It turns out that characterizing this case requires only a slightly stronger version of
Axiom 3.

Axiom 6. ρ satisfies stochastic Block-Marschak positivity if M_{x_1,A_1;x_2,A_2} > 0 for each t = 1, 2 and x_t ∈ A_t^C ≠ ∅.

Corollary 1. ρ has a SU representation with full support if and only if it satisfies stochastic Block-Marschak positivity and marginal consistency.

Observe that stochastic Block-Marschak positivity is necessary because probability measures with full support assign strictly positive probability to all nonempty events.
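Both hypotheses of Theorem 1 are mechanically checkable from choice data. The sketch below (a toy example with a hypothetical uniform μ; the labels and sizes are my own assumptions) verifies marginal consistency, Axiom 4: the implied period-2 marginal does not depend on the period-1 menu.

```python
from itertools import permutations, combinations

X1, X2 = ("a", "b"), ("c", "d", "e")
P1, P2 = list(permutations(X1)), list(permutations(X2))

# Hypothetical SU representation (uniform joint distribution for concreteness).
mu = {(p1, p2): 1 / (len(P1) * len(P2)) for p1 in P1 for p2 in P2}

def menus(X):
    return [frozenset(c) for r in range(1, len(X) + 1) for c in combinations(X, r)]

def best(p, A):
    return min(A, key=p.index)

def rho1(x1, A1):
    return sum(w for (p1, _), w in mu.items() if best(p1, A1) == x1)

def rho2(x2, A2, A1, x1):
    num = sum(w for (p1, p2), w in mu.items()
              if best(p1, A1) == x1 and best(p2, A2) == x2)
    return num / rho1(x1, A1)

def marginal(x2, A2, A1):
    # P(x2, A2; A1) = sum_{x1 in A1} rho2(x2, A2 | A1, x1) rho1(x1, A1)
    return sum(rho2(x2, A2, A1, x1) * rho1(x1, A1) for x1 in A1)

# Law of Total Probability: the marginal is invariant in A1.
for A2 in menus(X2):
    for x2 in A2:
        vals = [marginal(x2, A2, A1) for A1 in menus(X1)]
        assert max(vals) - min(vals) < 1e-12
```

The invariant value is exactly μ(C(x_2, A_2)), which is why a genuine SU representation can never violate Axiom 4.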
I am currently working on extending
Theorem 1 to more than two periods with multiperiod versions of both axioms. Fix n > 2. The primitive is now the vector

ρ^n := (ρ_1, {ρ_2(·|h_1)}_{h_1 ∈ H_1}, . . . , {ρ_n(·|h_{n−1})}_{h_{n−1} ∈ H_{n−1}})

where H_1 := {(A_1, x_1) : ρ_1(x_1, A_1) > 0} and H_t := {(A_t, x_t; h_{t−1}) ∈ M_t × X_t × H_{t−1} : ρ_t(x_t, A_t | h_{t−1}) > 0} for all t > 1. As before, WLOG let ρ_t(·, A_t | A_{t−1}, x_{t−1}; ···; A_1, x_1) ∈ ∆(A_t) be an arbitrary probability distribution if ρ_{t′}(x_{t′}, A_{t′} | A_{t′−1}, x_{t′−1}; ···; A_1, x_1) = 0. Define P_t, C_t(x_t, A_t), C(x_t, A_t) as before. Given t > 1 and h_t = (A_t, x_t; h_{t−1}), define C(h_t) := C(x_t, A_t) ∩ C(h_{t−1}).

(To see the non-uniqueness claim, let μ_1, μ_1′ ∈ ∆(P_1) be distinct RU representations of ρ_1, and let μ_2 ∈ ∆(P_2). Let μ = μ_1 × μ_2 and μ′ = μ_1′ × μ_2: it follows that μ(C(x_2, A_2) | C(x_1, A_1)) = μ_2(C_2(x_2, A_2)) = μ′(C(x_2, A_2) | C(x_1, A_1)). The existence of such a representation is equivalent to the existence of a distribution over R^{X_1} × R^{X_2} with positive density. Analogously, ρ has a RU representation with full support if and only if it satisfies the strict version of Axiom 2; the proof proceeds analogously to the proof of this corollary.)

Definition 7 (Strzalecki (2021)). μ ∈ ∆(×_{t=1}^n P_t) is a (n-period) stochastic utility (SU) representation of ρ^n if ρ_1(x_1, A_1) = μ(C(x_1, A_1)) for all x_1 ∈ A_1 ∈ M_1 and ρ_t(x_t, A_t | h_{t−1}) = μ(C(x_t, A_t) | C(h_{t−1})) for all x_t ∈ A_t ∈ M_t and h_{t−1} ∈ H_{t−1}.

Now, we generalize the axioms. Let (x, A) := (x_t, A_t)_{t=1}^n and (x_{−t}, A_{−t}) := (x_{t′}, A_{t′})_{t′≠t}. Let A^C := (A_t^C)_{t=1}^n, and say B ≥ A ⟺ B_t ⊇ A_t for each t =
1, . . . , n. Let j(x, A) := ρ_1(x_1, A_1) ∏_{t=2}^n ρ_t(x_t, A_t | A_{t−1}, x_{t−1}; . . . ; A_1, x_1).

Axiom 7. ρ^n satisfies (n-period) stochastic Block-Marschak nonnegativity if

M(x, A) := ∑_{B ≥ A^C} (−1)^{∑_{t=1}^n |B_t \ A_t^C|} j(x, B) ≥ 0

for all (x, A) satisfying x_t ∈ A_t^C ≠ ∅ for each t =
1, . . . , n.

Axiom 8. ρ^n satisfies (n-period) marginal consistency if for any (x, A) and t =
1, . . . , n − 1,

P(x_{−t}, A_{−t}; A_t) := ∑_{x_t ∈ A_t} j(x_{−t}, A_{−t}; x_t, A_t)

is invariant in A_t.

Corollary 2 (Conjecture). ρ^n has a SU representation if and only if it satisfies stochastic Block-Marschak nonnegativity and marginal consistency.
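The n-period sums in Axiom 7 can be checked the same way as the two-period ones. A minimal n = 3 sanity check under an assumed uniform product measure (all labels and sizes are my own assumptions); here j is computed directly as a path probability, which agrees with the product-of-conditionals definition whenever the conditionals are well-defined:

```python
from itertools import permutations, combinations, product

Xs = [("a", "b"), ("c", "d"), ("e", "f")]      # three periods, binary choice sets
Ps = [list(permutations(X)) for X in Xs]

# Hypothetical n-period SU representation: uniform product measure.
mu = {prs: 1 / (len(Ps[0]) * len(Ps[1]) * len(Ps[2])) for prs in product(*Ps)}

def menus(X):
    return [frozenset(c) for r in range(1, len(X) + 1) for c in combinations(X, r)]

def best(p, A):
    return min(A, key=p.index)

def j(xs, As):
    # j(x, A): probability of the full choice path (x_1, A_1; ...; x_n, A_n)
    return sum(w for prs, w in mu.items()
               if all(best(p, A) == x for p, x, A in zip(prs, xs, As)))

def M(xs, As):
    # n-period joint Block-Marschak sum: alternating sum over B_t ⊇ A_t^C
    Acs = [frozenset(X) - A for X, A in zip(Xs, As)]
    total = 0.0
    for Bs in product(*[[B for B in menus(X) if Ac <= B]
                        for X, Ac in zip(Xs, Acs)]):
        sign = (-1) ** sum(len(B - Ac) for B, Ac in zip(Bs, Acs))
        total += sign * j(xs, Bs)
    return total

# Axiom 7 holds for this SU-generated primitive.
for As in product(*[[frozenset()] + [A for A in menus(X) if A != frozenset(X)]
                    for X in Xs]):
    for xs in product(*[frozenset(X) - A for X, A in zip(Xs, As)]):
        assert M(xs, As) >= -1e-12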
Corollary 2 it will be helpful to have the following generalization of
Proposition 3 in hand.
Corollary 3 (Conjecture). μ is a SU representation of ρ^n if and only if μ(E(x, A)) = M(x, A) for all (x, A) satisfying x_t ∈ A_t^C ≠ ∅ for each t = 1, . . . , n, where

E(x, A) := {(≻_1, . . . , ≻_n) ∈ ×_{t=1}^n P_t : A_t ≻_t x_t ≻_t A_t^C \ {x_t}, t = 1, . . . , n}

Let (L, ≤) be a finite, partially ordered set (poset).
Möbius function m_L : L × L → Z is

m_L(a, b) = 1 if a = b; 0 if a ≰ b; −∑_{a ≤ c < b} m_L(a, c) if a < b

Lemma 3 (Van Lint et al. (2001), 25.5). Given a function f : L → R, define F(a) := ∑_{b ≥ a} f(b). Then

f(a) = ∑_{b ≥ a} m_L(a, b) F(b)

This is known as the
Möbius inversion. I close this section with two more lemmas that will help with the following proofs.
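Möbius inversion on the subset lattice — the case used repeatedly below — is easy to verify numerically. This sketch assumes the closed form m(A, B) = (−1)^{|B|−|A|} for A ⊆ B (this is exactly the content of the next lemma) and inverts upper sums of an arbitrary test function:

```python
from itertools import combinations

X = ("a", "b", "c")
subsets = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def m(A, B):
    # Mobius function of (2^X, ⊆): (-1)^{|B|-|A|} if A ⊆ B, else 0
    return (-1) ** (len(B) - len(A)) if A <= B else 0

# An arbitrary integer-valued f and its upper sums F(A) = sum_{B ⊇ A} f(B).
f = {A: 3 * len(A) + 1 for A in subsets}
F = {A: sum(f[B] for B in subsets if A <= B) for A in subsets}

# Mobius inversion (Lemma 3): f(A) = sum_{B ⊇ A} m(A, B) F(B).
for A in subsets:
    assert sum(m(A, B) * F[B] for B in subsets if A <= B) == f[A]
```

The Block-Marschak sums are precisely this inversion applied to F(B) = ρ(x, B) over menus containing A^C, which is why the sums recover the measure of upper edge sets.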
Lemma 4 (Van Lint et al. (2001), 25.1). Fix finite X and let L = 2^X, ≤ = ⊆. Then

m_L(A, B) = (−1)^{|B|−|A|} if A ⊆ B; 0 else

Lemma 5 (Godsil (2018), 3.1). Let L, S be posets with respective Möbius functions m_L, m_S. Then

m_{L×S}((a_L, a_S), (b_L, b_S)) = m_L(a_L, b_L) m_S(a_S, b_S)

Proof of Proposition 1

Proof.
Let L = 2^{X_1} × 2^{X_2} with (A_1, A_2) ≤ (B_1, B_2) ⟺ A_1 ⊆ B_1, A_2 ⊆ B_2. Then (L, ≤) is the (finite) product poset of (2^{X_1}, ⊆) and (2^{X_2}, ⊆). By Lemmas 4 and 5, its Möbius function is

m_L((A_1, A_2), (B_1, B_2)) = (−1)^{|B_1|−|A_1| + |B_2|−|A_2|} if A_1 ⊆ B_1 and A_2 ⊆ B_2; 0 else

For each t = 1, 2, fix any x_t ∈ A_t^C ≠ ∅ and define f : L → R as

f(B_1, B_2) := (−1)^{|B_1|−|A_1^C| + |B_2|−|A_2^C|} ρ_2(x_2, B_2 | B_1, x_1) ρ_1(x_1, B_1)

and F : L → R as

F(D_1, D_2) := ∑_{B_1 ⊇ D_1} ∑_{B_2 ⊇ D_2} f(B_1, B_2)

By Lemma 3,

f(D_1, D_2) = ∑_{B_1 ⊇ D_1} ∑_{B_2 ⊇ D_2} (−1)^{|B_1|−|D_1| + |B_2|−|D_2|} F(B_1, B_2)

⟹ f(A_1^C, A_2^C) = ρ_2(x_2, A_2^C | A_1^C, x_1) ρ_1(x_1, A_1^C) = ∑_{B_1 ⊇ A_1^C} ∑_{B_2 ⊇ A_2^C} (−1)^{|B_1|−|A_1^C| + |B_2|−|A_2^C|} F(B_1, B_2)

To see that

∑_{B_1 ⊇ A_1^C} ∑_{B_2 ⊇ A_2^C} (−1)^{|B_1|−|A_1^C| + |B_2|−|A_2^C|} F(B_1, B_2) = ∑_{D_1 ⊆ A_1} ∑_{D_2 ⊆ A_2} M_{x_1,D_1;x_2,D_2}

we can match terms as follows. Fix any D_1 ⊆ A_1 and D_2 ⊆ A_2. Then

(−1)^{|D_1^C|−|A_1^C| + |D_2^C|−|A_2^C|} F(D_1^C, D_2^C) = ∑_{B_1 ⊇ D_1^C} ∑_{B_2 ⊇ D_2^C} (−1)^{|D_1^C|−|A_1^C| + |D_2^C|−|A_2^C|} f(B_1, B_2)
= ∑_{B_1 ⊇ D_1^C} ∑_{B_2 ⊇ D_2^C} (−1)^{|B_1|−|D_1^C| + |B_2|−|D_2^C|} ρ_2(x_2, B_2 | B_1, x_1) ρ_1(x_1, B_1) = M_{x_1,D_1;x_2,D_2}

where the second equality follows by observing that (−1)^n = (−1)^{−n}.

Proof of Proposition 2

Proof.
Fix any x_1 ∈ A_1^C ≠ ∅ and ∅ ⊊ A_2 ⊊ X_2. I will use the notation ρ_2(D, B_2 | B_1, x_1) := ∑_{x_2 ∈ D} ρ_2(x_2, B_2 | B_1, x_1). We can write

∑_{x_2 ∈ A_2^C} M_{x_1,A_1;x_2,A_2} = ∑_{x_2 ∈ A_2^C} (∑_{B_1 ⊇ A_1^C} ∑_{B_2 ⊇ A_2^C} (−1)^{|B_1|−|A_1^C| + |B_2|−|A_2^C|} ρ_2(x_2, B_2 | B_1, x_1) ρ_1(x_1, B_1))
= ∑_{B_1 ⊇ A_1^C} ρ_1(x_1, B_1) (−1)^{|B_1|−|A_1^C|} (∑_{B_2 ⊇ A_2^C} (−1)^{|B_2|−|A_2^C|} ρ_2(A_2^C, B_2 | B_1, x_1))

and

∑_{y_2 ∈ A_2} M_{x_1,A_1;y_2,A_2\{y_2}} = ∑_{B_1 ⊇ A_1^C} ρ_1(x_1, B_1) (−1)^{|B_1|−|A_1^C|} (∑_{y_2 ∈ A_2} ∑_{B_2 ⊇ A_2^C ∪ {y_2}} (−1)^{|B_2|−|A_2^C|−1} ρ_2(y_2, B_2 | B_1, x_1))

Thus, to complete the proof it suffices to show

∑_{B_2 ⊇ A_2^C} (−1)^{|B_2|−|A_2^C|} ρ_2(A_2^C, B_2 | B_1, x_1) = ∑_{y_2 ∈ A_2} ∑_{B_2 ⊇ A_2^C ∪ {y_2}} (−1)^{|B_2|−|A_2^C|−1} ρ_2(y_2, B_2 | B_1, x_1)

To see this, observe that

∑_{B_2 ⊇ A_2^C} (−1)^{|B_2|−|A_2^C|} ρ_2(A_2^C, B_2 | B_1, x_1)
= ρ_2(A_2^C, A_2^C | B_1, x_1) − ∑_{B_2 = A_2^C ∪ {a_1}} ρ_2(A_2^C, A_2^C ∪ {a_1} | B_1, x_1) + . . . + (−1)^{|A_2|} ρ_2(A_2^C, X_2 | B_1, x_1)
= 1 − ∑_{B_2 = A_2^C ∪ {a_1}} (1 − ρ_2(a_1, A_2^C ∪ {a_1} | B_1, x_1)) + . . . + (−1)^{|A_2|} (1 − ρ_2(A_2, X_2 | B_1, x_1))

Since there are (|A_2| choose k) sets of the form B_2 = A_2^C ∪ {a_1, . . . , a_k} and, for |A_2| ≥ 1, ∑_{k=0}^{|A_2|} (−1)^k (|A_2| choose k) = 0, the above simplifies to

= ∑_{B_2 = A_2^C ∪ {a_1}} ρ_2(a_1, A_2^C ∪ {a_1} | B_1, x_1) − ∑_{B_2 = A_2^C ∪ {a_1, a_2}} ρ_2({a_1, a_2}, A_2^C ∪ {a_1, a_2} | B_1, x_1) + . . . + (−1)^{|A_2|+1} ρ_2(A_2, X_2 | B_1, x_1)

Observe that there is a bijection between nonempty D ⊆ A_2 and terms in this sum:

D ⟷ (−1)^{|D|+1} ρ_2(D, A_2^C ∪ D | B_1, x_1)

and a bijection between nonempty D ⊆ A_2 and terms in the following sum

∑_{y_2 ∈ A_2} ∑_{B_2 ⊇ A_2^C ∪ {y_2}} (−1)^{|B_2|−|A_2^C|−1} ρ_2(y_2, B_2 | B_1, x_1)

given by

D ⟷ ∑_{y_2 ∈ D} (−1)^{|A_2^C ∪ D|−|A_2^C|−1} ρ_2(y_2, A_2^C ∪ D | B_1, x_1) = (−1)^{|D|+1} ρ_2(D, A_2^C ∪ D | B_1, x_1)

Since both sums are comprised of precisely the same terms, they are equal.
Proof of Proposition 3

Proof.
Forwards direction: suppose µ is a SU representation of ρ . For each t =
1, 2, fix any x_t ∈ A_t^C ≠ ∅. Since x_t ≻_t A_t^C \ {x_t} if and only if B_t^C ≻_t x_t ≻_t B_t \ {x_t} for some B_t ⊇ A_t^C,

C(x_1, A_1^C) ∩ C(x_2, A_2^C) = ⋃_{B_1 ⊇ A_1^C} ⋃_{B_2 ⊇ A_2^C} E(x_1, B_1^C; x_2, B_2^C)

Furthermore, this union is disjoint, so

ρ_2(x_2, A_2^C | A_1^C, x_1) ρ_1(x_1, A_1^C) = ∑_{B_1 ⊇ A_1^C} ∑_{B_2 ⊇ A_2^C} μ(E(x_1, B_1^C; x_2, B_2^C))

By Lemmas 4 and 5, m((A_1, A_2), (B_1, B_2)) = (−1)^{|B_1|−|A_1| + |B_2|−|A_2|}. By Lemma 3 with f(B_1, B_2) = μ(E(x_1, B_1^C; x_2, B_2^C)) and F(A_1, A_2) = ρ_2(x_2, A_2 | A_1, x_1) ρ_1(x_1, A_1), we get

μ(E(x_1, A_1; x_2, A_2)) = ∑_{B_1 ⊇ A_1^C} ∑_{B_2 ⊇ A_2^C} (−1)^{|B_1|−|A_1^C| + |B_2|−|A_2^C|} ρ_2(x_2, B_2 | B_1, x_1) ρ_1(x_1, B_1) = M_{x_1,A_1;x_2,A_2}

as desired.

Backwards direction: suppose there exists μ ∈ ∆(P_1 × P_2) satisfying μ(E(x_1, A_1; x_2, A_2)) = M_{x_1,A_1;x_2,A_2} for all t = 1, 2 and x_t ∈ A_t^C ≠ ∅, and fix any y_t ∈ D_t ∈ M_t. As before, observe that y_t ≻_t D_t \ {y_t} if and only if B_t ≻_t y_t ≻_t B_t^C \ {y_t} for some B_t ⊆ D_t^C, so

C(y_1, D_1) ∩ C(y_2, D_2) = ⋃_{B_1 ⊆ D_1^C} ⋃_{B_2 ⊆ D_2^C} E(y_1, B_1; y_2, B_2)

Furthermore, this union is disjoint, so

μ(C(y_1, D_1) ∩ C(y_2, D_2)) = ∑_{B_1 ⊆ D_1^C} ∑_{B_2 ⊆ D_2^C} μ(E(y_1, B_1; y_2, B_2)) = ∑_{B_1 ⊆ D_1^C} ∑_{B_2 ⊆ D_2^C} M_{y_1,B_1;y_2,B_2} = ρ_2(y_2, D_2 | D_1, y_1) ρ_1(y_1, D_1)

where the second equality uses the hypothesis (valid since D_t ≠ ∅ implies B_t^C ⊇ D_t ≠ ∅), and the third equality follows from Proposition 1. Since

μ(C(y_1, D_1)) = ∑_{y_2 ∈ D_2} μ(C(y_1, D_1) ∩ C(y_2, D_2)) = ρ_1(y_1, D_1)

and

μ(C(y_2, D_2) | C(y_1, D_1)) = μ(C(y_1, D_1) ∩ C(y_2, D_2)) / μ(C(y_1, D_1)) = ρ_2(y_2, D_2 | D_1, y_1)

we conclude that μ is a SU representation of ρ. (The proof strategy for this direction is adapted from Strzalecki (2021).)

Proof of Theorem 1

Proof.
Suppose ρ satisfies stochastic Block-Marschak nonnegativity and marginal consistency. As outlined in Section 3, the proof rests on the following series of
Claims, whose proofs are included in the forthcoming subsections.
Claim 1.
For any x_2 ∈ A_2^C ≠ ∅ and ∅ ⊊ A_1 ⊊ X_1,

∑_{x_1 ∈ A_1^C} M_{x_1,A_1;x_2,A_2} = ∑_{y_1 ∈ A_1} M_{y_1,A_1\{y_1};x_2,A_2}

Claim 1 is the first-period analog of Proposition 2 and follows from a similar argument by using marginal consistency. Now, I define the t-cylinders.

Definition 9. Given an ordered, distinct k-sequence (x_t^1, . . . , x_t^k), its t-cylinder is

I_{(x_t^1,...,x_t^k)} = {≻_t ∈ P_t : x_t^1 ≻_t ··· ≻_t x_t^k ≻_t X_t \ {x_t^1, . . . , x_t^k}}

(This definition is the two-period analog of Chambers and Echenique (2016)'s definition of cylinders.)

Given a menu A_t, let π(A_t) denote the set of permutations of A_t. Let I_t(k) = {I_{(x_t^1,...,x_t^k)} : (x_t^1, . . . , x_t^k) ∈ π(A_t), A_t ∈ M_t, |A_t| = k} be the set of all t-cylinders induced by k-sequences, and let I_t = ⋃_{k=1}^{|X_t|} I_t(k). Observe that I_t contains all singletons, since

I_{(x_t^1,...,x_t^{|X_t|})} = {≻_t} ⟺ x_t^1 ≻_t ··· ≻_t x_t^{|X_t|}

Next, I recursively define a function ν : I_1 × I_2 → R_{≥0}. Define

ν(I_{x_1} × I_{x_2}) := M_{x_1,∅;x_2,∅} = ρ_2(x_2, X_2 | X_1, x_1) ρ_1(x_1, X_1)

Now, let (distinct) i, j ∈ {1, 2}. For any 1 ≤ k < |X_j| and (x_j^1, . . . , x_j^k, x_j^{k+1}), let A_j = {x_j^1, . . . , x_j^k} and define

ν(I_{x_i} × I_{(x_j^1,...,x_j^k,x_j^{k+1})}) := 0 if ∑_{τ_j ∈ π(A_j)} ν(I_{x_i} × I_{τ_j}) = 0;
ν(I_{x_i} × I_{(x_j^1,...,x_j^k,x_j^{k+1})}) := [ν(I_{x_i} × I_{(x_j^1,...,x_j^k)}) / ∑_{τ_j ∈ π(A_j)} ν(I_{x_i} × I_{τ_j})] · M_{x_i,∅;x_j^{k+1},A_j} else.

Similarly, for any 1 ≤ k < |X_1|, 1 ≤ ℓ < |X_2| and (x_1^1, . . . , x_1^k, x_1^{k+1}), (x_2^1, . . . , x_2^ℓ, x_2^{ℓ+1}), let A_1 = {x_1^1, . . . , x_1^k}, A_2 = {x_2^1, . . . , x_2^ℓ} and define

ν(I_{(x_1^1,...,x_1^k,x_1^{k+1})} × I_{(x_2^1,...,x_2^ℓ,x_2^{ℓ+1})}) := 0 if ∑_{τ_1 ∈ π(A_1)} ∑_{τ_2 ∈ π(A_2)} ν(I_{τ_1} × I_{τ_2}) = 0;
ν(I_{(x_1^1,...,x_1^k,x_1^{k+1})} × I_{(x_2^1,...,x_2^ℓ,x_2^{ℓ+1})}) := [ν(I_{(x_1^1,...,x_1^k)} × I_{(x_2^1,...,x_2^ℓ)}) / ∑_{τ_1 ∈ π(A_1)} ∑_{τ_2 ∈ π(A_2)} ν(I_{τ_1} × I_{τ_2})] · M_{x_1^{k+1},A_1;x_2^{ℓ+1},A_2} else.

(Again, my definition of ν is the two-period analog of Chambers and Echenique (2016), (7.4).)

Definition 10.
For any 0 ≤ k < |X_1|, 0 ≤ ℓ < |X_2|, the first additive property p_1(k, ℓ) holds if for all A_1, A_2 s.t. |A_1| = k, |A_2| = ℓ and all x_t ∈ A_t^C,

∑_{τ_1 ∈ π(A_1)} ∑_{τ_2 ∈ π(A_2)} ν(I_{τ_1,x_1} × I_{τ_2,x_2}) = M_{x_1,A_1;x_2,A_2}

For any 0 < k ≤ |X_1|, 0 < ℓ ≤ |X_2|, the second additive property p_2(k, ℓ) holds if for all A_1, A_2 s.t. |A_1| = k, |A_2| = ℓ and all τ_t ∈ π(A_t),

∑_{x_1 ∈ A_1^C} ∑_{x_2 ∈ A_2^C} ν(I_{τ_1,x_1} × I_{τ_2,x_2}) = ν(I_{τ_1} × I_{τ_2})

(Here (τ_t, x_t) denotes the sequence τ_t followed by x_t. These additive properties are the two-period analogs of Chambers and Echenique (2016), (7.2) and (7.3), respectively.)

Observe that ⋃_{τ_1 ∈ π(A_1)} ⋃_{τ_2 ∈ π(A_2)} (I_{τ_1,x_1} × I_{τ_2,x_2}) = E(x_1, A_1; x_2, A_2) and ⋃_{x_1 ∈ A_1^C} ⋃_{x_2 ∈ A_2^C} (I_{τ_1,x_1} × I_{τ_2,x_2}) = I_{τ_1} × I_{τ_2}, and these are disjoint unions, so these additive properties are necessary.

Claim 2. p_1(k, ℓ) holds for all 0 ≤ k < |X_1|, 0 ≤ ℓ < |X_2|.

Claim 3. p_2(k, ℓ) holds for all 0 < k ≤ |X_1|, 0 < ℓ ≤ |X_2|.

With these additive properties in hand, I am ready to define the candidate SU representation. Given (≻_1, ≻_2) ∈ P_1 × P_2, denote x_t^1 ≻_t ··· ≻_t x_t^{|X_t|} and define μ : 2^{P_1 × P_2} → R as

μ({(≻_1, ≻_2)}) = ν(I_{(x_1^1,...,x_1^{|X_1|})} × I_{(x_2^1,...,x_2^{|X_2|})}), μ(S) = ∑_{(≻_1,≻_2) ∈ S} μ({(≻_1, ≻_2)})

(This definition is the two-period analog of Chambers and Echenique (2016)'s definition of "ν".)

Claim 4. μ is a probability measure.

Claim 5. μ = ν on I_1 × I_2.

Claim 5 shows that μ is an extension of ν. Thus, I can leverage Claim 2 as follows. Recall from Proposition 3 that to show that μ is a SU representation of ρ, it suffices to show μ(E(x_1, A_1; x_2, A_2)) = M_{x_1,A_1;x_2,A_2} for all t = 1, 2 and x_t ∈ A_t^C ≠ ∅. For each t = 1, 2, fix any x_t ∈ A_t^C ≠ ∅ (note this implies that |A_t| < |X_t|): then

μ(E(x_1, A_1; x_2, A_2)) = ∑_{τ_1 ∈ π(A_1)} ∑_{τ_2 ∈ π(A_2)} μ(I_{τ_1,x_1} × I_{τ_2,x_2}) = ∑_{τ_1 ∈ π(A_1)} ∑_{τ_2 ∈ π(A_2)} ν(I_{τ_1,x_1} × I_{τ_2,x_2}) = M_{x_1,A_1;x_2,A_2}

where the penultimate equality follows because μ = ν on I_1 × I_2, and the last equality follows from Claim 2.

Proof of Claim 1

Proof.
We can write

∑_{x_1 ∈ A_1^C} M_{x_1,A_1;x_2,A_2} = ∑_{B_2 ⊇ A_2^C} (−1)^{|B_2|−|A_2^C|} (∑_{B_1 ⊇ A_1^C} (−1)^{|B_1|−|A_1^C|} (∑_{x_1 ∈ A_1^C} ρ_2(x_2, B_2 | B_1, x_1) ρ_1(x_1, B_1)))

and

∑_{y_1 ∈ A_1} M_{y_1,A_1\{y_1};x_2,A_2} = ∑_{B_2 ⊇ A_2^C} (−1)^{|B_2|−|A_2^C|} (∑_{y_1 ∈ A_1} ∑_{B_1 ⊇ A_1^C ∪ {y_1}} (−1)^{|B_1|−|A_1^C|−1} ρ_2(x_2, B_2 | B_1, y_1) ρ_1(y_1, B_1))

Since marginal consistency is satisfied, we can write

∑_{B_1 ⊇ A_1^C} (−1)^{|B_1|−|A_1^C|} (∑_{x_1 ∈ A_1^C} ρ_2(x_2, B_2 | B_1, x_1) ρ_1(x_1, B_1))
= ∑_{B_1 ⊇ A_1^C} (−1)^{|B_1|−|A_1^C|} (P(x_2, B_2) − ∑_{x_1 ∈ B_1 \ A_1^C} ρ_2(x_2, B_2 | B_1, x_1) ρ_1(x_1, B_1))
= P(x_2, B_2) ∑_{B_1 ⊇ A_1^C} (−1)^{|B_1|−|A_1^C|} + ∑_{B_1 ⊋ A_1^C} (−1)^{|B_1|−|A_1^C|+1} ∑_{x_1 ∈ B_1 \ A_1^C} ρ_2(x_2, B_2 | B_1, x_1) ρ_1(x_1, B_1)

Again observing that ∑_{B_1 ⊇ A_1^C} (−1)^{|B_1|−|A_1^C|} = ∑_{k=0}^{|A_1|} (−1)^k (|A_1| choose k) = 0, it remains to show

∑_{B_1 ⊋ A_1^C} (−1)^{|B_1|−|A_1^C|+1} ∑_{x_1 ∈ B_1 \ A_1^C} ρ_2(x_2, B_2 | B_1, x_1) ρ_1(x_1, B_1) = ∑_{y_1 ∈ A_1} ∑_{B_1 ⊇ A_1^C ∪ {y_1}} (−1)^{|B_1|−|A_1^C|−1} ρ_2(x_2, B_2 | B_1, y_1) ρ_1(y_1, B_1)

which immediately follows from matching terms.

Note that Proposition 2 and Claim 1 together imply that for any ∅ ⊊ A_t ⊊ X_t,

∑_{x_1 ∈ A_1^C} ∑_{x_2 ∈ A_2^C} M_{x_1,A_1;x_2,A_2} = ∑_{x_1 ∈ A_1^C} ∑_{y_2 ∈ A_2} M_{x_1,A_1;y_2,A_2\{y_2}} = ∑_{y_1 ∈ A_1} ∑_{x_2 ∈ A_2^C} M_{y_1,A_1\{y_1};x_2,A_2} = ∑_{y_1 ∈ A_1} ∑_{y_2 ∈ A_2} M_{y_1,A_1\{y_1};y_2,A_2\{y_2}}

Proof of Claim 2

Proof.
I will prove this via induction.
Base case: fix A_1 = A_2 = ∅ (these are the only menus satisfying |A_1| = |A_2| = 0) and any x_t ∈ X_t. Then, by definition,

∑_{τ_1 ∈ π(A_1)} ∑_{τ_2 ∈ π(A_2)} ν(I_{τ_1,x_1} × I_{τ_2,x_2}) = ν(I_{x_1} × I_{x_2}) = M_{x_1,∅;x_2,∅}

so p_1(0, 0) holds.

First inductive step: fix A_i = ∅, x_i ∈ X_i, and A_j = {x_j^1, . . . , x_j^k} for any k > 0. Our inductive hypothesis is that the property holds one level down ("p_1(0, k−1) holds" if (i, j) = (1, 2); "p_1(k−1, 0) holds" otherwise). Observe that

∑_{τ_j ∈ π(A_j)} ν(I_{x_i} × I_{τ_j}) = ∑_{y_j ∈ A_j} (∑_{(y_j^1,...,y_j^{k−1}) ∈ π(A_j \ {y_j})} ν(I_{x_i} × I_{(y_j^1,...,y_j^{k−1},y_j)})) = ∑_{y_j ∈ A_j} M_{x_i,∅;y_j,A_j\{y_j}} = ∑_{x_j ∈ A_j^C} M_{x_i,∅;x_j,A_j}

where the first equality follows because permuting A_j is equivalent to picking the last element and permuting the remaining k−1 elements, the second equality follows from the inductive hypothesis, and the third equality follows from Proposition 2 and Claim 1. There are two cases:

1. Suppose ∑_{τ_j ∈ π(A_j)} ν(I_{x_i} × I_{τ_j}) = 0. Then M_{x_i,∅;x_j,A_j} = 0 for all x_j ∈ A_j^C. Fix any x_j ∈ A_j^C: then we have

∑_{τ_i ∈ π(A_i)} ∑_{τ_j ∈ π(A_j)} ν(I_{τ_i,x_i} × I_{τ_j,x_j}) = ∑_{τ_j ∈ π(A_j)} ν(I_{x_i} × I_{τ_j,x_j}) = 0 = M_{x_i,∅;x_j,A_j}

where the penultimate equality follows by definition of ν. Thus, p_1(0, k) (resp. p_1(k, 0)) holds.

2. Suppose ∑_{τ_j ∈ π(A_j)} ν(I_{x_i} × I_{τ_j}) > 0. Then, by definition of ν, we have

∑_{τ_i ∈ π(A_i)} ∑_{τ_j ∈ π(A_j)} ν(I_{τ_i,x_i} × I_{τ_j,x_j}) = ∑_{τ_j ∈ π(A_j)} ν(I_{x_i} × I_{τ_j,x_j}) = ∑_{τ_j ∈ π(A_j)} (ν(I_{x_i} × I_{τ_j}) / ∑_{α_j ∈ π(A_j)} ν(I_{x_i} × I_{α_j})) M_{x_i,∅;x_j,A_j} = M_{x_i,∅;x_j,A_j}

Thus, p_1(0, k) (resp. p_1(k, 0)) holds.

Second inductive step: this step proceeds similarly to the first inductive step. For any 0 < k < |X_1| and 0 < ℓ < |X_2|, fix A_1 = {x_1^1, . . . , x_1^k}, A_2 = {x_2^1, . . . , x_2^ℓ}, and x_t ∈ A_t^C. Our inductive hypothesis is that p_1(k−1, ℓ−1) holds. Again, observe that

∑_{τ_1 ∈ π(A_1)} ∑_{τ_2 ∈ π(A_2)} ν(I_{τ_1} × I_{τ_2}) = ∑_{y_1 ∈ A_1} ∑_{y_2 ∈ A_2} (∑_{(y_1^1,...,y_1^{k−1}) ∈ π(A_1\{y_1})} ∑_{(y_2^1,...,y_2^{ℓ−1}) ∈ π(A_2\{y_2})} ν(I_{(y_1^1,...,y_1^{k−1},y_1)} × I_{(y_2^1,...,y_2^{ℓ−1},y_2)})) = ∑_{y_1 ∈ A_1} ∑_{y_2 ∈ A_2} M_{y_1,A_1\{y_1};y_2,A_2\{y_2}} = ∑_{x_1 ∈ A_1^C} ∑_{x_2 ∈ A_2^C} M_{x_1,A_1;x_2,A_2}

where the penultimate equality follows from the inductive hypothesis, and the last equality follows from Proposition 2 and Claim 1. Again, there are two cases:

1. Suppose ∑_{τ_1 ∈ π(A_1)} ∑_{τ_2 ∈ π(A_2)} ν(I_{τ_1} × I_{τ_2}) = 0. Then M_{x_1,A_1;x_2,A_2} = 0 for all x_t ∈ A_t^C. Fix any x_t ∈ A_t^C: by definition of ν, we have ∑_{τ_1} ∑_{τ_2} ν(I_{τ_1,x_1} × I_{τ_2,x_2}) = 0 = M_{x_1,A_1;x_2,A_2}, so p_1(k, ℓ) holds.

2. Suppose ∑_{τ_1 ∈ π(A_1)} ∑_{τ_2 ∈ π(A_2)} ν(I_{τ_1} × I_{τ_2}) > 0. Then, by definition of ν, it follows that ∑_{τ_1} ∑_{τ_2} ν(I_{τ_1,x_1} × I_{τ_2,x_2}) = M_{x_1,A_1;x_2,A_2} and p_1(k, ℓ) holds.

Proof of Claim 3

Proof.
Fix any 0 < k ≤ |X_1| and 0 < ℓ ≤ |X_2|, A_1 = {x_1^1, . . . , x_1^k} and A_2 = {x_2^1, . . . , x_2^ℓ}, and τ_t ∈ π(A_t). As before, there are two cases:

1. Suppose ∑_{α_1 ∈ π(A_1)} ∑_{α_2 ∈ π(A_2)} ν(I_{α_1} × I_{α_2}) = 0. Since ν ≥ 0, ν(I_{τ_1} × I_{τ_2}) = 0. Furthermore, by definition, for each x_t ∈ A_t^C, ν(I_{τ_1,x_1} × I_{τ_2,x_2}) = 0, so

∑_{x_1 ∈ A_1^C} ∑_{x_2 ∈ A_2^C} ν(I_{τ_1,x_1} × I_{τ_2,x_2}) = 0 = ν(I_{τ_1} × I_{τ_2})

as desired.

2. Suppose ∑_{α_1 ∈ π(A_1)} ∑_{α_2 ∈ π(A_2)} ν(I_{α_1} × I_{α_2}) > 0. Since 0 < k ≤ |X_1| and 0 < ℓ ≤ |X_2|, we have 0 ≤ k−1 < |X_1| and 0 ≤ ℓ−1 < |X_2|, so by Claim 2 we can apply p_1(k−1, ℓ−1) as before to write

∑_{α_1 ∈ π(A_1)} ∑_{α_2 ∈ π(A_2)} ν(I_{α_1} × I_{α_2}) = ∑_{x_1 ∈ A_1^C} ∑_{x_2 ∈ A_2^C} M_{x_1,A_1;x_2,A_2}

which implies

∑_{x_1 ∈ A_1^C} ∑_{x_2 ∈ A_2^C} ν(I_{τ_1,x_1} × I_{τ_2,x_2}) = ∑_{x_1 ∈ A_1^C} ∑_{x_2 ∈ A_2^C} (ν(I_{τ_1} × I_{τ_2}) / ∑_{α_1 ∈ π(A_1)} ∑_{α_2 ∈ π(A_2)} ν(I_{α_1} × I_{α_2})) M_{x_1,A_1;x_2,A_2}
= (ν(I_{τ_1} × I_{τ_2}) / ∑_{x_1 ∈ A_1^C} ∑_{x_2 ∈ A_2^C} M_{x_1,A_1;x_2,A_2}) (∑_{x_1 ∈ A_1^C} ∑_{x_2 ∈ A_2^C} M_{x_1,A_1;x_2,A_2}) = ν(I_{τ_1} × I_{τ_2})

Proof of Claim 4

Proof.
I have already shown that µ ≥
0. First, fix x_1 ∈ B_1 and observe that

∑_{x_2 ∈ X_2} ∑_{B_2 ⊇ {x_2}} (−1)^{|B_2|−1} ρ_2(x_2, B_2 | B_1, x_1) = ∑_{i=1}^{|X_2|} (−1)^{i−1} (|X_2| choose i) = 1

To see this, fix 1 ≤ i ≤ |X_2| and B_2 = {x_2^1, . . . , x_2^i}. Then the terms ∑_{k=1}^i (−1)^{i−1} ρ_2(x_2^k, B_2 | B_1, x_1) = (−1)^{i−1} appear exactly once in the sum above, and there are (|X_2| choose i) menus of size i. Thus,

∑_{≻_1 ∈ P_1} ∑_{≻_2 ∈ P_2} μ(≻_1, ≻_2) = ∑_{τ_1 ∈ π(X_1)} ∑_{τ_2 ∈ π(X_2)} ν(I_{τ_1} × I_{τ_2}) = ∑_{x_1 ∈ X_1} ∑_{x_2 ∈ X_2} M_{x_1,X_1\{x_1};x_2,X_2\{x_2}}
= ∑_{x_1 ∈ X_1} ∑_{x_2 ∈ X_2} ∑_{B_1 ⊇ {x_1}} ∑_{B_2 ⊇ {x_2}} (−1)^{|B_1|−1+|B_2|−1} ρ_2(x_2, B_2 | B_1, x_1) ρ_1(x_1, B_1)
= ∑_{x_1 ∈ X_1} ∑_{B_1 ⊇ {x_1}} (∑_{x_2 ∈ X_2} ∑_{B_2 ⊇ {x_2}} (−1)^{|B_2|−1} ρ_2(x_2, B_2 | B_1, x_1)) (−1)^{|B_1|−1} ρ_1(x_1, B_1)
= ∑_{x_1 ∈ X_1} ∑_{B_1 ⊇ {x_1}} (−1)^{|B_1|−1} ρ_1(x_1, B_1) = 1

Proof of Claim 5

Proof.
I will use induction.
Base case: for any $|X_1|$-sequence $(x_1^1, \ldots, x_1^{|X_1|})$ and $|X_2|$-sequence $(x_2^1, \ldots, x_2^{|X_2|})$, let $\succ_t$ be the (unique) preference satisfying $x_t^1 \succ_t \cdots \succ_t x_t^{|X_t|}$. Then, by definition,
\[
\mu\big(I_{(x_1^1, \ldots, x_1^{|X_1|})} \times I_{(x_2^1, \ldots, x_2^{|X_2|})}\big) = \mu\big(\{(\succ_1, \succ_2)\}\big) = \nu\big(I_{(x_1^1, \ldots, x_1^{|X_1|})} \times I_{(x_2^1, \ldots, x_2^{|X_2|})}\big).
\]

First inductive step: suppose $\mu\big(I_{(x_i^1, \ldots, x_i^{|X_i|})} \times I_{(x_j^1, \ldots, x_j^k)}\big) = \nu\big(I_{(x_i^1, \ldots, x_i^{|X_i|})} \times I_{(x_j^1, \ldots, x_j^k)}\big)$ for all $|X_i|$-sequences $(x_i^1, \ldots, x_i^{|X_i|})$ and $k$-sequences $(x_j^1, \ldots, x_j^k)$, where $k > 1$. Then, for any $|X_i|$-sequence $(x_i^1, \ldots, x_i^{|X_i|})$ and $(k-1)$-sequence $(x_j^1, \ldots, x_j^{k-1})$,
\[
\mu\big(I_{(x_i^1, \ldots, x_i^{|X_i|})} \times I_{(x_j^1, \ldots, x_j^{k-1})}\big)
= \sum_{y_j \in \{x_j^1, \ldots, x_j^{k-1}\}^C} \mu\big(I_{(x_i^1, \ldots, x_i^{|X_i|})} \times I_{(x_j^1, \ldots, x_j^{k-1}, y_j)}\big)
= \sum_{y_j \in \{x_j^1, \ldots, x_j^{k-1}\}^C} \nu\big(I_{(x_i^1, \ldots, x_i^{|X_i|})} \times I_{(x_j^1, \ldots, x_j^{k-1}, y_j)}\big)
= \nu\big(I_{(x_i^1, \ldots, x_i^{|X_i|})} \times I_{(x_j^1, \ldots, x_j^{k-1})}\big),
\]
where the first equality follows from $\mu$ being a probability measure, the second equality follows from the inductive hypothesis, and the third equality follows because $p(|X_i|, k-1)$ and $p(k-1, |X_i|)$ hold, by Claim 3.

Second inductive step: suppose $\mu\big(I_{(x_1^1, \ldots, x_1^k)} \times I_{(x_2^1, \ldots, x_2^\ell)}\big) = \nu\big(I_{(x_1^1, \ldots, x_1^k)} \times I_{(x_2^1, \ldots, x_2^\ell)}\big)$ for all $k$-sequences $(x_1^1, \ldots, x_1^k)$ and $\ell$-sequences $(x_2^1, \ldots, x_2^\ell)$, where $k, \ell > 1$. Then for any $(k-1)$-sequence $(x_1^1, \ldots, x_1^{k-1})$ and $(\ell-1)$-sequence $(x_2^1, \ldots, x_2^{\ell-1})$,
\[
\mu\big(I_{(x_1^1, \ldots, x_1^{k-1})} \times I_{(x_2^1, \ldots, x_2^{\ell-1})}\big)
= \sum_{y_1 \in \{x_1^1, \ldots, x_1^{k-1}\}^C} \sum_{y_2 \in \{x_2^1, \ldots, x_2^{\ell-1}\}^C} \mu\big(I_{(x_1^1, \ldots, x_1^{k-1}, y_1)} \times I_{(x_2^1, \ldots, x_2^{\ell-1}, y_2)}\big)
= \sum_{y_1 \in \{x_1^1, \ldots, x_1^{k-1}\}^C} \sum_{y_2 \in \{x_2^1, \ldots, x_2^{\ell-1}\}^C} \nu\big(I_{(x_1^1, \ldots, x_1^{k-1}, y_1)} \times I_{(x_2^1, \ldots, x_2^{\ell-1}, y_2)}\big)
= \nu\big(I_{(x_1^1, \ldots, x_1^{k-1})} \times I_{(x_2^1, \ldots, x_2^{\ell-1})}\big).
\]
Since every pair of cylinders in $\mathcal{I}_1 \times \mathcal{I}_2$ is induced by a pair of $k$- and $\ell$-sequences with $1 \le k \le |X_1|$ and $1 \le \ell \le |X_2|$, I have shown that $\mu = \nu$ on $\mathcal{I}_1 \times \mathcal{I}_2$. Now that each Claim has been verified, the proof is complete.

5.6 Proof of Proposition 4 (Backwards Direction)
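Before the formal proof, the three-element construction used below can be illustrated numerically. The following sketch, with a hypothetical joint distribution $\mu$ over preference pairs on $X_1 = \{a,b,c\}$ and $X_2 = \{d,e,f\}$, checks that the joint Block-Marschak sum $M_{y_1,\{x_1\};\, y_2,\{x_2\}}$ recovers $\mu(x_1 y_1 z_1, x_2 y_2 z_2)$; the menu and sign conventions mirror the expansion in the proof:

```python
from itertools import permutations
from math import isclose

X1, X2 = ('a', 'b', 'c'), ('d', 'e', 'f')

# Hypothetical joint distribution over preference pairs (orders listed best-first).
mu = {
    (('a', 'b', 'c'), ('d', 'e', 'f')): 0.5,
    (('b', 'c', 'a'), ('e', 'd', 'f')): 0.3,
    (('c', 'a', 'b'), ('f', 'e', 'd')): 0.2,
}

def best(order, menu):
    # The first element of `order` that lies in `menu`.
    return next(x for x in order if x in menu)

def rho1(x1, B1):
    # Period-1 choice probability: x1 is ranked best in B1.
    return sum(p for (s1, _), p in mu.items() if best(s1, B1) == x1)

def rho2(x2, B2, B1, x1):
    # Period-2 choice probability, conditional on x1 having been chosen from B1.
    denom = rho1(x1, B1)
    joint = sum(p for (s1, s2), p in mu.items()
                if best(s1, B1) == x1 and best(s2, B2) == x2)
    return joint / denom if denom > 0 else 0.0

def M(y1, z1, x1, y2, z2, x2):
    # Joint Block-Marschak sum M_{y1,{x1}; y2,{x2}} for three-element menus:
    # B_t ranges over {y_t, z_t} and X_t, with sign (-1)^(|B1| + |B2|).
    total = 0.0
    for B1 in ({y1, z1}, set(X1)):
        for B2 in ({y2, z2}, set(X2)):
            sign = (-1) ** (len(B1) + len(B2))
            total += sign * rho2(y2, B2, B1, y1) * rho1(y1, B1)
    return total

# M_{y1,{x1}; y2,{x2}} recovers mu(x1 y1 z1, x2 y2 z2) for every preference pair.
for s1 in permutations(X1):
    for s2 in permutations(X2):
        (x1, y1, z1), (x2, y2, z2) = s1, s2
        assert isclose(M(y1, z1, x1, y2, z2, x2), mu.get((s1, s2), 0.0), abs_tol=1e-9)
```

The design mirrors the proof: each term $\pm\, \rho_2 \cdot \rho_1$ is a joint choice probability, and the four-term inclusion-exclusion isolates the event that $y_t$ is ranked second in $X_t$ in each period.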
Proof.
Suppose $|X_t| = 3$ for each $t$; write $X_1 = \{a, b, c\}$ and $X_2 = \{d, e, f\}$. Suppose $\rho$ satisfies stochastic regularity and marginal consistency. Then in particular, for any $x_t \in X_t$, $A_t = \{x_t\}^C = \{y_t, z_t\}$, and $B_t = X_t$,
\[
\frac{\rho_1(y_1, A_1)}{\rho_1(y_1, B_1)} \ge \frac{\rho_2(y_2, A_2 \mid B_1, y_1) - \rho_2(y_2, B_2 \mid B_1, y_1)}{\rho_2(y_2, A_2 \mid A_1, y_1) - \rho_2(y_2, B_2 \mid A_1, y_1)}
\]
\[
\iff \rho_2(y_2, \{y_2, z_2\} \mid \{y_1, z_1\}, y_1)\, \rho_1(y_1, \{y_1, z_1\}) - \rho_2(y_2, X_2 \mid \{y_1, z_1\}, y_1)\, \rho_1(y_1, \{y_1, z_1\}) - \Big( \rho_2(y_2, \{y_2, z_2\} \mid X_1, y_1)\, \rho_1(y_1, X_1) - \rho_2(y_2, X_2 \mid X_1, y_1)\, \rho_1(y_1, X_1) \Big) \ge 0
\]
\[
\iff M_{y_1, \{x_1\};\, y_2, \{x_2\}} = \sum_{B_1 \supseteq \{y_1, z_1\}} \sum_{B_2 \supseteq \{y_2, z_2\}} (-1)^{|B_1| + |B_2|}\, \rho_2(y_2, B_2 \mid B_1, y_1)\, \rho_1(y_1, B_1) \ge 0.
\]
The hypotheses of Proposition 2 and Claim 1 are satisfied, so for any $i, j \in \{1, 2\}$,
\[
M_{x_i, A_i;\, y_j, \{x_j\}} + M_{x_i, A_i;\, z_j, \{x_j\}} = M_{x_i, A_i;\, x_j, \varnothing}, \qquad
M_{x_i, A_i;\, x_j, \{y_j, z_j\}} = M_{x_i, A_i;\, y_j, \{z_j\}} + M_{x_i, A_i;\, z_j, \{y_j\}},
\]
which implies that stochastic Block-Marschak nonnegativity is satisfied. Thus, by Theorem 1, $\rho$ has a SU representation. Of course, we can also directly verify this by defining
\[
\mu(x_1 y_1 z_1, x_2 y_2 z_2) := M_{y_1, \{x_1\};\, y_2, \{x_2\}} \ge 0,
\]
where $x_1 y_1 z_1$ denotes the preference $x_1 \succ_1 y_1 \succ_1 z_1$ (and similarly for period 2). Then
\[
\mu(x_1 y_1 z_1) := \sum_{\tau_2 \in \pi(X_2)} \mu(x_1 y_1 z_1, \tau_2) = \rho_1(y_1, \{y_1, z_1\}) - \rho_1(y_1, X_1)
\implies \sum_{\tau_1 \in \pi(X_1)} \sum_{\tau_2 \in \pi(X_2)} \mu(\tau_1, \tau_2) = 1,
\]
so $\mu$ is indeed a probability measure. Furthermore,
\[
E(x_i, A_i;\, y_j, \{x_j\}) \cup E(x_i, A_i;\, z_j, \{x_j\}) = E(x_i, A_i;\, x_j, \varnothing), \qquad
E(x_i, A_i;\, x_j, \{y_j, z_j\}) = E(x_i, A_i;\, y_j, \{z_j\}) \cup E(x_i, A_i;\, z_j, \{y_j\}),
\]
so by Proposition 3 it follows that $\mu$ is a SU representation. Finally, let $\mu'$ be a SU representation. Then by Proposition 3,
\[
\mu'(x_1 y_1 z_1, x_2 y_2 z_2) = \mu'\big(E(y_1, \{x_1\};\, y_2, \{x_2\})\big) = M_{y_1, \{x_1\};\, y_2, \{x_2\}} = \mu(x_1 y_1 z_1, x_2 y_2 z_2),
\]
so $\mu$ is unique.

For the full-support corollary, define $\nu$ as in the proof of Theorem 1. Since this definition is recursive and the base case is a joint Block-Marschak sum, it immediately follows that $\nu$ is strictly positive on $\mathcal{I}_1 \times \mathcal{I}_2$, so $\mu$ is strictly positive on $P_1 \times P_2$. All other parts of the proof of Theorem 1 still hold, so we conclude that $\mu$ is a SU representation of $(\rho_1, \{\rho_2(\cdot \mid h)\}_{h \in \mathcal{H}})$ with full support.

References
Amartya K. Sen. Choice functions and revealed preference. The Review of Economic Studies, 38(3):307-317, 1971.

Henry David Block, Jacob Marschak, et al. Random orderings and stochastic theories of response. Technical report, Cowles Foundation for Research in Economics, Yale University, 1959.

Jean-Claude Falmagne. A representation theorem for finite random scale systems. Journal of Mathematical Psychology, 18(1):52-72, 1978.

Mira Frick, Ryota Iijima, and Tomasz Strzalecki. Dynamic random utility. Econometrica, 87(6):1941-2002, 2019.

Tomasz Strzalecki. Stochastic Choice. 2021.

Christopher P. Chambers and Federico Echenique. Revealed preference theory, volume 56. Cambridge University Press, 2016.

Peter C. Fishburn. Stochastic utility. Handbook of utility theory, 1:273-318, 1998.

Jacobus Hendricus van Lint and Richard Michael Wilson. A course in combinatorics. Cambridge University Press, 2001.

Chris Godsil. An introduction to the moebius function. arXiv preprint arXiv:1803.06664.