Parallel Search for Information*

T. Tony Ke
Massachusetts Institute of Technology, [email protected]

Wenpin Tang
University of California, Los Angeles, [email protected]

J. Miguel Villas-Boas
University of California, Berkeley, [email protected]

Yuming Zhang
University of California, Los Angeles, [email protected]

April 2020

*We thank Andrej Zlatos for the helpful discussions regarding the proof of Proposition 6. We also thank Zuo-Jun (Max) Shen for helpful comments on an earlier version of this manuscript.

Abstract
We consider the problem of a decision-maker searching for information on multiple alternatives when information is learned on all alternatives simultaneously. The decision-maker has a running cost of searching for information, and has to decide when to stop searching for information and choose one alternative. The expected payoff of each alternative evolves as a diffusion process when information is being learned. We present necessary and sufficient conditions for the solution, establishing existence and uniqueness. We show that the optimal boundary where search is stopped (free boundary) is star-shaped, and present an asymptotic characterization of the value function and the free boundary. We show properties of how the distance between the free boundary and the diagonal varies with the number of alternatives, and how the free boundary under parallel search relates to the one under sequential search, with and without economies of scale on the search costs.

Keywords: Optimal Stopping, Free Boundary Problem, Search Theory, Brownian Motion

1. Introduction
In several situations a decision-maker (DM) has to decide how long to gain information on several alternatives simultaneously at a cost before stopping to make an adoption decision. An important aspect considered here is that the DM gains information on all alternatives at the same time and cannot choose which alternative to gain information on—which we call parallel search. This can be, for example, the case of a consumer trying to decide among several products in a product category and passively learning about the product category, or browsing through a web site that compares several products side by side.

If all the alternatives have a relatively low expected payoff, the DM may decide to stop the search and not choose any of the alternatives. If two or more alternatives have a similar and sufficiently high expected payoff, the DM may decide to continue to search for information until finding out which alternative may be the best. If the expected payoff of the best alternative is clearly higher than the expected payoff of the second best alternative, the DM may decide to stop the search process and choose the best alternative. We characterize the solution to this problem, consider some comparative statics, and compare it with the case in which there is sequential search for information.

The problem of the DM can be set up by a value function for the DM, which is the expected payoff for the DM going forward under the optimal policy. We give necessary conditions for the value function in Section 2: it is a viscosity solution to some partial differential equation (PDE) with at most linear growth. We then show in Section 3 that the condition derived is also sufficient by establishing the existence and uniqueness of the solution to the PDE. We obtain this result with unbounded domain, which is essential for our asymptotic results. (For similar results with a bounded domain, see, for example, Peskir and Shiryaev 2006.)

One important ingredient of the problem considered is that there is a free boundary where it is optimal to stop, and this boundary is determined by the solution to the PDE. In Section 4, we show a geometric property of the free boundary: it is star-shaped with respect to the origin. Moreover, and interestingly, how much is required from the best alternative in order to stop the process is increasing in the values of the other alternatives.

Although it is not possible to derive closed-form expressions for the value function or the free boundary, we can study the asymptotics of the value function as well as the free boundary when the expected payoff of all alternatives is large, which is presented in Section 5. We provide fine estimates of the distance from the free boundary to the diagonal line when all alternatives have the same high expected payoff for the case of two alternatives, while for the case of more than two alternatives we show that this distance is increasing in the number of alternatives, and is at most linear in the number of alternatives. To the best of our knowledge, this is one of the few results concerning the asymptotic geometry of the optimal stopping problem in dimension d ≥ 2. See Peskir and Shiryaev (2006), Guo and Zervos (2010), and Assing et al. (2014) for studies of optimal stopping problems for the case of two alternatives. The main difficulty in our problem is the lack of closed-form expressions for the value function. Here we rely heavily on the PDE machinery.

We also compare the stopping boundary with the boundary that results from the problem where alternatives can only be learnt sequentially—one alternative at each instance of time. If the cost of parallel search is just the cost of sequential search for one alternative times the number of alternatives (i.e., no economies of scale in the number of alternatives on which the DM is searching for information), then we can show that there is more search for information in the sequential search case than in the parallel search case, as sequential search for information can replicate parallel search. In this case, when the expected payoffs of the alternatives go to infinity (or, equivalently, when the outside option is sufficiently low), we can also show that the boundaries for search under sequential search converge to the boundaries for search under parallel search.

We also consider what happens if the DM can choose, at different costs, to gain information sequentially on one alternative at a time, or to gain information on all alternatives simultaneously, with the cost of parallel search being less than the cost of sequential search multiplied by the number of alternatives (i.e., economies of scale in search over multiple alternatives). This is an interesting case to consider as decision-makers may sometimes have a chance to get information on all alternatives at a lower cost via parallel search (for example, browsing a website that compares all alternatives, or reading a magazine with general information about the product category), but other times may choose to dive into getting information about a particular alternative via sequential search. We find in this case that, if the expected payoffs of the alternatives are high enough, then it is always optimal to do parallel search.

There is some literature on the case of learning about a single alternative in comparison to an outside option (e.g., Roberts and Weitzman 1981, Moscarini and Smith 2001, Branco et al. 2012, Fudenberg et al. 2018). (The case of learning a single alternative was considered with discrete costly sequential sampling in Wald 1945.) When there is more than one uncertain alternative the problem becomes more complex, as choosing one alternative means giving up potential high payoffs from other alternatives about which the decision-maker could also learn more. This paper can then be seen as extending this literature to allow for more than one alternative, which requires the solution to a partial differential equation. Another possibility, considered in Ke et al. (2016), is that the DM can choose which alternative to gain information on at each point in time, searching the alternatives sequentially. (Che and Mierendorff 2019 consider which type of information to collect in a Poisson-type model, when the decision-maker has to choose between two alternatives, with one and only one alternative having a high payoff. See also Nikandrova and Pancs 2018, Ke and Villas-Boas 2019, and Hébert and Woodford 2017. For problems where the DM gets rewards while learning see, for example, Bergemann and Välimäki 1996, Keller and Rady 1999.)
2. The Problem and Necessary Conditions for the Solution
Consider a consumer whose utility of product i, U_i, is the sum of the utility derived from each attribute of the product: U_i = x_i + Σ_{t=1}^T a_{it}, where x_i is the consumer's initial expected utility, and a_{it} is the utility of attribute t of product i, which is uncertain to the consumer before search. It is also assumed that a_{it} is i.i.d. across t and i, and, without loss of generality, E[a_{it}] = 0. There is an outside option which is worth zero.

Each time, by paying a search cost c, the consumer checks one attribute a_{it} for all products i = 1, ..., d. The consumer decides when to stop searching and, upon stopping, which product to buy, so as to maximize the expected utility. After checking t attributes, the consumer's conditional expected utility of product i is

X_i(t) = E_t[U_i] = x_i + Σ_{s=1}^t a_{is}.

Therefore, X_i(t) is a random walk, which converges to the Brownian motion B_i^{x_i}(t) when we scale a_{is} and the search cost c proportionally to infinitesimally small and take T to infinity. The problem of the consumer is to decide when to stop the process, and then choose the best alternative.

An alternative formulation of this problem has Bayesian learning with an evolving state. Suppose that the true value of the alternatives, X̂(t), follows the process dX̂(t) = σ dB(t), where σ is a diagonal matrix with general element σ_ii in the diagonal, and that the signal of X̂(t), S(t), a d-dimensional vector, follows dS(t) = X̂(t) dt + y dB̃(t), with B̃(t) a d-dimensional Brownian motion independent of B(t), and y a diagonal matrix with general element y_ii in the diagonal. Suppose also that the prior on X̂(0) is normal with mean X(0) and variance-covariance matrix ρ̂(0), with ρ̂(0) a diagonal matrix with general element ρ̂_ii(0) in the diagonal. Then the posterior mean of X̂(t), X(t), follows dX_i(t) = (ρ̂_ii(t)/y_ii) dB̄_i(t) for all i, with B̄(t) a d-dimensional Brownian motion, and dρ̂_ii(t)/dt = σ_ii² − ρ̂_ii(t)²/y_ii² for all i. So we have ρ̂(t) → σy as t → ∞. Then, if ρ̂(0) = σy, we have that X(t) is stationary, with dX_i(t) = σ_ii dB̄_i(t) for all i, and the analysis that follows would be done on the process X(t).

In both formulations of the problem, we can let the expected payoffs of the d alternatives at time t be B^x = (B_1^{x_1}(t), ..., B_d^{x_d}(t))_{t≥0}, a d-dimensional Brownian motion starting at x = (x_1, ..., x_d). Each component of this Brownian motion could be the value of the alternative if the process is stopped. In the consumer learning application, this would be the expected value of that product at the time when the consumer makes the purchase decision. In a financial option application, this would be the value of the asset when the option is exercised. Let T be a suitable set of stopping times with respect to the natural filtration of B^x. We aim to determine the following value function:

u(x) := sup_{τ∈T} E[ max{ B_1^{x_1}(τ), ..., B_d^{x_d}(τ), 0 } − cτ ],   (1)

where c > 0 is the cost per unit of time. The problem could also be considered with time discounting. (The cost per unit of time can be seen as the cost of processing information when learning about the different alternatives.)
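As a quick illustration of (1) (our sketch, not part of the paper), the value function can be bounded from below by simulating any fixed stopping rule with finite expected stopping time, since (1) takes a supremum over such rules. The threshold rule and all parameter values below are hypothetical choices for illustration only.

```python
import numpy as np

def mc_value_lower_bound(x, c=1.0, dt=0.01, horizon=20.0, n_paths=5000, margin=0.5, seed=0):
    """Monte Carlo lower bound on u(x) in (1): stop the first time the best of
    {alternatives, outside option 0} leads the runner-up by `margin`
    (a heuristic rule, not the optimal stopping time)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    d, n_steps = x.size, int(horizon / dt)
    X = np.tile(x, (n_paths, 1))          # Brownian paths, one row per simulation
    payoff = np.zeros(n_paths)
    active = np.ones(n_paths, dtype=bool)
    for step in range(1, n_steps + 1):
        if not active.any():
            break
        X[active] += np.sqrt(dt) * rng.standard_normal((active.sum(), d))
        vals = np.concatenate([X[active], np.zeros((active.sum(), 1))], axis=1)
        vals.sort(axis=1)                 # last column = best option incl. outside option
        stop = vals[:, -1] - vals[:, -2] >= margin
        idx = np.flatnonzero(active)[stop]
        payoff[idx] = vals[stop, -1] - c * step * dt
        active[idx] = False
    idx = np.flatnonzero(active)          # remaining paths stop at the horizon
    payoff[idx] = np.maximum(X[idx].max(axis=1), 0.0) - c * horizon
    return payoff.mean()

print(mc_value_lower_bound([0.3, 0.3]))   # lower bound on u(0.3, 0.3) for d = 2
```

Any such rule is feasible in (1), so its simulated payoff never exceeds the true value function; tightening `margin` and `dt` trades bias against runtime.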
2.2. General Framework

We start with the general framework of the optimal stopping problem (1). Let Ω ⊂ R^d be an open domain. Consider the following stochastic differential equation (SDE):

dX^x(t) = b(X^x(t)) dt + σ(X^x(t)) dB(t),   (2)

where the superscript x denotes X^x(0) = x ∈ Ω. Here (B(t); t ≥ 0) is a d-dimensional Brownian motion starting at 0, and b: R^d → R^d and σ: R^d → R^{d×d} satisfy:

• Lipschitz condition: there exists C > 0 such that |b(x) − b(y)| + |σ(x) − σ(y)| ≤ C|x − y|.

• Linear growth condition: there exists K > 0 such that |b(x)| + |σ(x)| ≤ K|x|.

It is well known that under these conditions the SDE (2) has a strong solution which is pathwise unique. See, for example, Karatzas and Shreve (1991), Section 5.2, for background on strong solutions to SDEs. The vector X(t) has as each element i the expected utility obtained if the DM were to decide to stop the search process at time t and choose alternative i. Let

J^x(τ) := E[ ∫_0^τ f(X^x(s)) ds + g(X^x(τ)) ],   (3)

where τ is a stopping time, and f, g are two continuous functions with polynomial growth, or simply Lipschitz continuous functions. We are interested in the value function

u(x) = sup_{τ∈T} J^x(τ),   (4)

where T is a suitable set of stopping times. Let L be the infinitesimal generator of the SDE (2). That is,

L h = Σ_{i=1}^d b_i ∂h/∂x_i + (1/2) Σ_{i,j=1}^d (σσ^T)_{ij} ∂²h/(∂x_i ∂x_j),

for any suitably smooth test function h: R^d → R. A standard dynamic programming argument shows that u is a viscosity solution to the following partial differential equation (PDE):

min( −L u − f, u − g ) = 0.   (5)

We state the definition of viscosity solutions (and the associated definitions of subsolutions and supersolutions) in the Appendix, and we also refer readers to Crandall and Lions (1983), Ishii (1987, 1989), and Crandall et al. (1992) for this notion. (See also Strulovici and Szydlowski 2015.)

Equation (5) is known as an obstacle problem, or a variational inequality (see Frehse 1972, Kinderlehrer and Stampacchia 1980). It exhibits two regimes:

• −L u = f when u > g,
• −L u ≥ f when u = g.

The set {x | u(x) = g(x)} is called the contact set, or coincidence set. In general, a solution u to (5) is of class C^{1,1} but not C², and the regularity depends on that of f and g. We refer to Caffarelli (1998) for details.

Furthermore, let g(x) have at most linear growth, that is, g(x) ≤ a Σ_{i=1}^d |x_i| for some a > 0, which is a condition satisfied by g(x) = max{x_1, ..., x_d, 0}, the function g in our application. Let also f(x) be bounded from above by a negative number, which is also satisfied in our application. Considering the optimal stopping problem (4), we can then obtain Lemma A1, presented in the Appendix, characterizing the value function u for this general case.

Specializing to the optimal stopping problem (1), which is the focus of the analysis in the next sections, we have

f(x_1, ..., x_d) = −c and g(x_1, ..., x_d) = max{x_1, ..., x_d, 0}.   (6)

(In terms of the SDE (2), this is the case when b = 0 and σ = I, where I is the identity matrix. Several of the results in the next section can also be obtained for the general SDE (2) under some conditions. This is a standard technical issue that is not central to the results presented here, and is therefore not considered, for ease of presentation.)

We can then get the following corollary.

Corollary 1: Let u be the value function defined by (1), with T := {τ is a stopping time : Eτ < ∞}. Then u is a viscosity solution to
min{ −(1/2)∆u + c, u − g } = 0,   (7)

where ∆ is the Laplacian operator, Σ_{i=1}^d ∂²/∂x_i². Moreover, we have, for some C > 0,

g ≤ u ≤ Σ_{i=1}^d |x_i| + C.   (8)

Corollary 1 asserts that the value function u satisfies the PDE (7), with at most linear growth. We will show in the next section that such a solution is unique. Once the value function u is determined, we construct an optimal strategy τ* by

J^x(τ*) = u(x).   (9)

More precisely, starting at a position x ∈ {u > g}, the search will continue until it enters the contact set:

τ* = inf{ t > 0 : B^x(t) ∈ {u = g} }.   (10)
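Numerically, (7) and the stopping rule (10) can be visualized by discretizing on a truncated grid. The sketch below is ours (the paper proposes no numerical scheme): projected Jacobi iteration for the discrete obstacle problem in d = 2, with the artificial boundary condition u = g on the edge of the truncated square.

```python
import numpy as np

def solve_obstacle(c=1.0, lo=-2.0, hi=2.0, n=121, n_iter=50000, tol=1e-7):
    """Projected Jacobi iteration for min{-0.5*Lap(u) + c, u - g} = 0 on a grid.

    Discretizing -0.5*Lap(u) = -c gives u_ij = (mean of 4 neighbors) - c*h^2/2;
    the obstacle constraint u >= g is enforced by pointwise projection."""
    xs = np.linspace(lo, hi, n)
    h = xs[1] - xs[0]
    X1, X2 = np.meshgrid(xs, xs, indexing="ij")
    g = np.maximum(np.maximum(X1, X2), 0.0)   # g(x1, x2) = max{x1, x2, 0}
    u = g.copy()                              # boundary rows/columns stay equal to g
    for _ in range(n_iter):
        nbr = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2])
        new = np.maximum(nbr - 0.5 * c * h * h, g[1:-1, 1:-1])
        diff = np.abs(new - u[1:-1, 1:-1]).max()
        u[1:-1, 1:-1] = new
        if diff < tol:
            break
    return xs, u, g

xs, u, g = solve_obstacle()
continuation = u > g + 1e-6   # approximates the search region {u > g}; its edge approximates Γ(u)
```

Truncation biases u downward near the artificial boundary, so only the interior of the grid should be read as an approximation of (1).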
3. Uniqueness
In this section, we prove that there exists a unique viscosity solution to the PDE (7). In the sequel, let B_R be the ball of radius R > 0. Suppose O ⊆ R^d is open; we write ∂O for its boundary and Ō := O ∪ ∂O. We first prove a comparison principle in bounded domains.

Lemma 1 (Comparison principle in B_R): Assume that u_1 is a supersolution to (7), and u_2 is a subsolution to (7). If u_1 ≥ u_2 on ∂B_R for some R > 0, then u_1 ≥ u_2 in B_R.

Proof: Let us consider the domain O := B_R ∩ {u_2 > g}. Since u_1 is a supersolution, u_1(x) ≥ g(x) for all x ∈ B_R, and therefore u_1 ≥ u_2 on B_R \ O. Applying the comparison principle (Theorem 3.3 of Crandall et al. 1992) in O, we have u_1 ≥ u_2 also on Ō, which completes the proof.

Next, we consider uniqueness and show that among continuous functions that have less than quadratic growth at infinity, the solution u obtained is unique.

Lemma 2 (Comparison principle): Let u_2 and u_1 be, respectively, a subsolution and a supersolution to (7) in an open subset Ω of R^d. Suppose there is a continuous function h: R_+ → R_+ such that lim_{R→∞} h(R) = 0, and for all R ≥ 1,

sup_{|x|≥R, x∈Ω} ( max{u_2(x), 0} + max{−u_1(x), 0} ) / |x|² ≤ h(R).   (11)

Suppose u_1 ≥ u_2 on ∂Ω (note ∂Ω = ∅ if Ω = R^d). Then we have u_1 ≥ u_2 in Ω.

We prove this lemma in the Appendix. With this lemma, we are able to compare sub- and supersolutions in R^d as long as condition (11) is satisfied. Combined with Lemma 1, we get a complete characterization of the value function u, which is presented in the next proposition. A significant new result is that this characterization is obtained for unbounded domains.

Proposition 1:
Let u be the value function defined by (1), with T := { τ is a stopping time : E τ < ∞} . Then u is the unique viscosity solution to (7) with at most linear growth. Proof:
By Corollary 1, u is a viscosity solution to (7) with linear growth at infinity. By the comparison principle (Lemma 2), this u is the unique viscosity solution to (7) among all continuous functions satisfying lim_{|x|→∞} |u(x)|/|x|² = 0.
4. Star-shapedness of the Free Boundary
Let u be a solution of (7). The free boundary of u is defined as the interface of the sets {x | u(x) > g(x)} and {x | u(x) = g(x)}, which we denote by Γ(u). Several regularity results for Γ(u) can be found in Caffarelli (1998). In this paper, we are interested in the global geometric property of Γ(u). In this section we prove the star-shapedness.

Recall "star-shapedness" of a subset S ⊆ R^d: S is star-shaped if there exists a point z such that for each point s ∈ S the segment connecting s and z lies entirely within S. We say that the free boundary Γ(u) = ∂{u > g} is star-shaped with respect to the origin if the set {u > g} is star-shaped with z = 0. The star-shapedness property of a set rules out holes in the set.

Proposition 2:
Let u be a solution to (7). The free boundary Γ(u) is star-shaped with respect to the origin.

Proof: To prove star-shapedness, we only need to show that if u(x) = g(x) for some x ∈ R^d, then u(tx) = g(tx) holds for all t ≥ 1.

Let v(x) := (1/t) u(tx). We first show that v is a subsolution to (7). In fact, for any x ∈ R^d, if v(x) > g(x), then u(tx) > t g(x) = t max{x_1, ..., x_d, 0} = g(tx). Thus,

−(1/2)(∆u)(tx) ≤ −c in the viscosity sense.   (12)

To show that −(1/2)∆v(x) ≤ −c in the viscosity sense, take any ϕ ∈ C² that touches v at x from above. Then ϕ_t(·) := tϕ(·/t) touches u at tx from above. It follows from (12) that

−(1/2)(∆ϕ_t)(tx) = −(1/(2t))(∆ϕ)(x) ≤ −c,

which implies −(1/2)∆ϕ(x) ≤ −tc ≤ −c. Therefore −(1/2)∆v(x) ≤ −c in the viscosity sense. So we conclude that v is a subsolution.

Now take x ∈ R^d such that u(x) = g(x). Since v is a subsolution with at most linear growth, the comparison principle gives v ≤ u, and hence u(tx) ≤ t u(x) = t g(x) = g(tx). On the other hand, u(tx) ≥ g(tx) by definition, so we must have u(tx) = g(tx).

Figure 1 shows the continuation and stopping regions, as well as the free boundary separating them, for the case of d = 2. The figure illustrates the star-shapedness of the free boundaries. As shown by Figure 1, the optimal search strategy is quite intuitive—roughly speaking, the DM should stop searching and adopt alternative i if and only if x_i is relatively high compared with both x_j and the outside option of 0, and she should stop searching and adopt the outside option when both x_1 and x_2 are relatively low. When x_j is relatively low, the DM will continue to search on the two alternatives if and only if x_i is near 0, so as to make a clear distinction between alternative i and the outside option. When both x_1 and x_2 are relatively high, the DM will continue to search if and only if x_1 and x_2 are close to each other, so as to make a clear distinction between the two alternatives 1 and 2.

[Figure 1: Optimal parallel search strategy in two dimensions. In the (x_1, x_2) plane, the plot shows the regions "Adopt 1", "Adopt 2", "Search", and "Take Outside Option".]
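Using the hypothetical solve_obstacle sketch from Section 2, the four regions in Figure 1 can be recovered by classifying grid points (the region labels below are ours):

```python
xs, u, g = solve_obstacle(c=1.0)
X1, X2 = np.meshgrid(xs, xs, indexing="ij")
search = u > g + 1e-6                      # continue searching
adopt1 = ~search & (X1 >= X2) & (X1 > 0)   # stop and adopt alternative 1
adopt2 = ~search & (X2 > X1) & (X2 > 0)    # stop and adopt alternative 2
outside = ~search & ~adopt1 & ~adopt2      # stop and take the outside option
```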
5. Asymptotics
In this section, we study the free boundary of the solution near x_1 = ... = x_d → ∞. We provide a detailed analysis for the case with d = 2, and compare it with the case in which the DM can only search sequentially, learning one alternative at a time. We also provide lower and upper bounds for the general case with d ≥ 3.

5.1. d = 2

In the case of d = 2, the PDE (7) specializes to
min{ −(1/2)∆u + c, u − max{x_1, x_2, 0} } = 0.   (13)

The PDE (13) does not have an explicit solution for the case of d = 2, so it is natural to ask about the properties of the solution, in particular those of the free boundaries. There are three interesting regimes of asymptotic behavior:

1. x_1 → 0 and x_2 → −∞,
2. x_1 → −∞ and x_2 → 0,
3. x_1 = x_2 → ∞.

Cases 1 and 2 boil down to the search problem with one alternative, since the other alternative has a large negative value and thus loses the competition to its counterpart. A classical smooth-pasting technique shows that the distance of the free boundaries to the x_2-axis (resp. x_1-axis) at −∞ is 1/(4c), as illustrated in Figure 1. Case 3 is subtle, since the values of the two products are close, so there is a competitive search. One interesting question is to determine the distance from the free boundary to the line x_1 = x_2 at infinity.

We start with the following change of coordinates: t = (x_1 + x_2)/√2 and s = (x_1 − x_2)/√2. Consider the domain t ≥ 0; the PDE (13) becomes

min{ −(1/2)∆ũ + c, ũ − g̃ } = 0 for all (t, s) ∈ R²,   (14)

where ũ(t, s) := u( (t+s)/√2, (t−s)/√2 ) and g̃(t, s) := max{ (t+|s|)/√2, 0 }.
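For completeness, here is the one-dimensional smooth-pasting computation behind the constant 1/(4c) (a standard calculation, only asserted in the text). With a single alternative and outside option 0, the value function solves −(1/2)u″ = −c on a continuation interval (−b, b), so it is the quadratic

u(x) = c (x + b)² on (−b, b),

with value matching and smooth pasting u(−b) = 0 and u′(−b) = 0 at the lower end, and u(b) = b and u′(b) = 1 at the upper end. The upper pasting condition u′(b) = 2c·(2b) = 4cb = 1 gives b = 1/(4c), and then u(b) = c(2b)² = 4cb² = b confirms value matching, so the continuation region is (−1/(4c), 1/(4c)). The same pasting argument in the s-variable produces the constants 1/(8θ) and 1/(2√2 θ) in the function η_θ of Lemma 3 below.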
We first prove a lower bound on the free boundary Γ(u) for t ≥ 0 by the following lemma.

Lemma 3 (Lower bound of the free boundary): For θ > 0, let

η_θ(t, s) := t/√2 + θs² + 1/(8θ) for |s| ≤ 1/(2√2 θ), and η_θ(t, s) := (t + |s|)/√2 for |s| > 1/(2√2 θ),   (15)

which is a C¹ function. Then we have ũ(t, s) ≥ η_c(t, s) in R². Moreover, for t ≥ 0, the free boundary Γ(u) lies inside { |x_1 − x_2| = √2 |s| ≥ 1/(2c) }.

Proof: Note that η_θ is an approximation of g̃ for t ≥ 0. Moreover, it is not hard to check that, when θ = c,

min{ −(1/2)∆η_c + c, η_c − ρ } = 0 for (t, s) ∈ R², where ρ := (t + |s|)/√2 ≤ g̃.

We know that η_c( (x_1+x_2)/√2, (x_1−x_2)/√2 ) is then a subsolution to (13), and the comparison principle yields ũ ≥ η_c. Observe that η_c = ρ = g̃ for |s| ≥ 1/(2√2 c) and t ≥ 0. Therefore, in the half plane t ≥ 0, the free boundary Γ(u) lies inside { |s| ≥ 1/(2√2 c) }.

The result that Γ(u) lies inside { |x_1 − x_2| ≥ 1/(2c) } can be viewed as a "lower bound" on the free boundary.
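As an empirical sanity check of Lemma 3 (ours, reusing the hypothetical solve_obstacle sketch from Section 2), the computed u should dominate η_c expressed in the original coordinates, at least away from the artificial truncation boundary:

```python
import numpy as np

c = 1.0
xs, u, g = solve_obstacle(c=c, lo=-3.0, hi=3.0, n=151)
X1, X2 = np.meshgrid(xs, xs, indexing="ij")
t, s = (X1 + X2) / np.sqrt(2.0), (X1 - X2) / np.sqrt(2.0)
s_star = 1.0 / (2.0 * np.sqrt(2.0) * c)          # pasting point of eta_c
eta_c = np.where(np.abs(s) <= s_star,
                 t / np.sqrt(2.0) + c * s**2 + 1.0 / (8.0 * c),
                 (t + np.abs(s)) / np.sqrt(2.0))
interior = (np.abs(X1) < 2.0) & (np.abs(X2) < 2.0)
print("min of u - eta_c on the interior:", (u - eta_c)[interior].min())
# expect a value >= 0 up to truncation and grid error
```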
Now we turn to look for an "upper" bound on the free boundary. We need the following result.

Lemma 4: For ε ∈ (0, c], let

ϕ_ε(t, s) := (1/(4c)) h(αt) + η_{c−ε}(t, s), where h(t) := (max{1 − t, 0})² and α := 2√(cε).

Then we have, for all t ≥ 0, ũ(t, s) ≤ ϕ_ε(t, s).

Proof: It follows from (ii) that

g̃(t, s) ≤ ũ(t, s) ≤ max{ (t+s)/√2, 0 } + max{ (t−s)/√2, 0 } + 1/(4c).

We get

|s|/√2 = g̃(0, s) ≤ ũ(0, s) ≤ |s|/√2 + 1/(4c).   (16)

Now we want to compare ϕ_ε with ũ in the half plane t > 0. On the boundary t = 0,

ϕ_ε(0, s) = 1/(4c) + η_{c−ε}(0, s) ≥ 1/(4c) + g̃(0, s)   (by definition of η_{c−ε})
≥ ũ(0, s)   (by (16)).

Also, it is not hard to check that ϕ_ε ∈ C¹ and ϕ_ε(t, s) ≥ g̃(t, s) for all t ≥ 0, s ∈ R. Moreover, when |s| ≤ 1/(2√2 (c−ε)), we have

∆ϕ_ε = α²/(2c) + 2(c − ε) ≤ 2c if α ≤ 2√(cε).

When |s| ≥ 1/(2√2 (c−ε)), there is ∆ϕ_ε ≤ 2ε ≤ 2c. Finally, note that both ũ and ϕ_ε have linear growth at infinity. The comparison principle (Lemma 2 with Ω = {t > 0}) yields ũ ≤ ϕ_ε for t > 0.

Based on Lemmas 3 and 4, we can obtain the asymptotic behavior of the solution u close to x_1 = x_2 → +∞. To provide a quantitative description of the convergence of the free boundary of u to the one of η_c as t = (x_1 + x_2)/√2 → ∞, we define the distance function

d_FB(T) := distance between Γ(u)|_{(x_1+x_2)/√2 ≥ T} and { |x_1 − x_2| = 1/(2c) }.

Note that { |x_1 − x_2| = 1/(2c) } is the free boundary of η_c. By symmetry of Γ(u) with respect to the line x_1 − x_2 = 0, we only need to consider the situation when x_1 − x_2 ≥ 0. The following proposition characterizes the asymptotic behavior of both the value function and the free boundary close to x_1 = x_2 → +∞, with the proof provided in the Appendix.

Proposition 3 (Upper bound of the free boundary):
For (x_1, x_2) in a neighborhood of x_1 = x_2 → +∞,

u(x_1, x_2) → η_c( (x_1+x_2)/√2, (x_1−x_2)/√2 )
= (x_1+x_2)/2 + c(x_1−x_2)²/2 + 1/(8c) for |x_1 − x_2| ≤ 1/(2c), and (x_1+x_2)/2 + |x_1 − x_2|/2 for |x_1 − x_2| > 1/(2c),

and Γ(u) → { |x_1 − x_2| = 1/(2c) }. Moreover, for all T ≥ 1/(2c),

d_FB(T) ≤ 1/(8√2 c³T²) + O(1/(c⁵T⁴)).

As for the limit η_c( (x_1+x_2)/√2, (x_1−x_2)/√2 ), the distance of the free boundary to the line x_1 = x_2 is always 1/(2c). Note that 1/(2c) > 1/(4c), which is the distance of the free boundaries to the x_2- or x_1-axis at −∞. This means that the search region is larger in the case of competition. In other words, people have a larger tolerance for search if two products are as good as each other.

One could consider a different technology for information search, as the one considered in Ke et al. (2016), where the DM searches costly and sequentially over multiple alternatives, learning about only one alternative at a time. Let the sequential search cost be c′.

Suppose c′ = c/2. That is, it costs twice as much to search two alternatives in parallel as to search one alternative at a time. Note that in the sequential search case the DM could replicate any parallel search strategy considered above by alternating infinitely fast between the two alternatives. Therefore, the region in x_1-x_2 space where it is optimal to continue to search (i.e., {u > g}) is larger for the case of sequential search compared with that for the case of parallel search. In other words, the contact set is further away from the origin for the case of sequential search.

Figure 2 illustrates the sequential and parallel search strategies for the case with d = 2. The black solid lines represent the free boundaries for the case of parallel search, the same as in Figure 1, while the gray solid lines represent the free boundaries for the case of sequential search. The gray dashed line represents x_1 = x_2. For the case of sequential search, when it is optimal for the DM to continue to search, the DM optimally searches alternative i if and only if x_i ≥ x_j (Ke et al. 2016). The figure illustrates that the gray lines are further away from the origin than the black lines.

[Figure 2: Comparison of the optimal parallel search strategy under search cost c with the optimal sequential search strategy under search cost c′ = c/2, in the (x_1, x_2) plane.]

One could also wonder how the asymptotic behavior of the free boundary compares between sequential and parallel search. One can obtain that, when the DM searches sequentially, the distance of the free boundary to the line x_1 = x_2 when x_1 = x_2 → +∞ converges to 1/(4c′) = 1/(2c) (Ke et al. 2016), which is the same as the distance in the case of parallel search. That is, the gray and black lines in Figure 2 converge to |x_1 − x_2| = 1/(2c) as x_1 and x_2 go to positive infinity.

It is also interesting to consider the case in which the DM has the option to either search only one alternative at cost c′ or search both alternatives at cost c, with economies of scale in the number of alternatives searched, that is, c′ ∈ (c/2, c).
Although the full-scale analysis of this problem is beyond the scope of this paper, one could expect that, in such a setting, when it is optimal to continue to search for information, the DM will choose to search for information on both alternatives simultaneously when the expected valuations of the two alternatives are relatively close, and choose to search for information on only one alternative otherwise.

It is interesting, however, that one can obtain a general result in that setting: for states close to x_1 = x_2 → ∞ it is always optimal to choose the search technology where both alternatives are being searched simultaneously.
Proposition 4: Consider a DM who can search either in parallel at cost c or sequentially at cost c′ ∈ (c/2, c). For x_1 and x_2 sufficiently high and close to each other, it is optimal for the DM to search in parallel.

Here we provide an intuitive sketch of proof for the proposition. A formal proof can be obtained by applying Lemma 6 below and invoking the dynamic programming principle, and is omitted. When x_1 and x_2 are high, the DM is most likely to choose one of the alternatives rather than the outside option, and just does not know which alternative to choose. The DM is then mostly concentrated on the difference x_1 − x_2, to see when this difference is high enough so that the DM makes a decision on which alternative to pick and stops the search process. As shown above, at the limit, when |x_1 − x_2| ≥ 1/(2c), the DM prefers to stop and choose one alternative rather than continue to search, either sequentially or in parallel. On the other hand, at the limit, when |x_1 − x_2| < 1/(2c), the DM will choose to continue to search. By searching the two alternatives in parallel over an infinitesimal time dt, the DM pays a search cost of c·dt and gets an update on x_1 − x_2 of dx_1 − dx_2, the variance of which is 2dt; on the other hand, by searching one alternative (say, alternative 1) sequentially over an infinitesimal time dt, the DM pays a search cost of c′·dt and gets an update on x_1 − x_2 of dx_1, the variance of which is dt. Therefore, parallel search yields variance per unit of search cost 2/c, which is greater than 1/c′, the variance per unit of search cost in the case of sequential search. To summarize, for c′ ∈ (c/2, c), it is less expensive to obtain a certain variation when searching two alternatives simultaneously than when searching just one alternative sequentially. This implies that it is more cost-effective for the DM to search in parallel.

Figure 3 presents an example of the DM's optimal search strategy in this context of economies of scale in search costs, and illustrates that, for x_1 and x_2 sufficiently high and close to each other, it is optimal to search the two alternatives in parallel.

[Figure 3: The DM's optimal search strategy when he can search either sequentially or in parallel, with c′ = 2c/3, in the (x_1, x_2) plane.]
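The variance-per-cost comparison in this sketch is a one-line computation. Over an interval dt, parallel search updates the payoff difference by dX_1 − dX_2 = dB_1 − dB_2 at cost c·dt, while sequential search of alternative 1 updates it by dB_1 at cost c′·dt, so

Var( d(X_1 − X_2) ) / cost = 2dt/(c·dt) = 2/c (parallel) versus dt/(c′·dt) = 1/c′ (sequential),

and 2/c > 1/c′ exactly when c′ > c/2, which is the economies-of-scale range assumed in Proposition 4.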
5.2. General d

Now we study the quantitative properties of the free boundary in the general-dimension case. We provide an "upper" and a "lower" bound on the free boundary. First, for the upper bound, we show that in the positive regime x_i ≥ 0 for all i, the free boundary cannot be too far away from the set {x_i = x_j for some i ≠ j}, and the distance grows at most linearly in the dimension d. For any γ > 0, define

N(γ) = { (x_1, ..., x_d) | x_i ≥ 0, |x_i − x_j| ≥ γ for all i ≠ j }.   (17)

The following proposition presents the first main result in this section, with the proof in the Appendix.

Proposition 5: Let u_d be the solution to (7) in R^d. There exists γ > 0, independent of d and c, such that Γ(u_d) lies inside the complement of N(γd/c); i.e., u_d(x) = g(x) for x ∈ N(γd/c).

Note that in the case of parallel search here, for fixed c, this result does not imply that the distance between the free boundary and {x_1 = ... = x_d} is bounded when d → ∞. We show that this distance is indeed unbounded when we investigate a "lower bound" of the free boundary in Proposition 6. In contrast, in the case of sequential search with cost c′, when d → ∞, that distance converges to 1/(4c′) (see Ke et al. 2016, p. 3591). However, if we set c′ = c/d for a fixed c, then as d → ∞ we would get c′ → 0, and correspondingly that distance for sequential search becomes unbounded. On the other hand, if we fix c′ and let c = dc′ grow linearly with d, then, by Proposition 5, the distance between the free boundary and {x_1 = ... = x_d} for parallel search is bounded. This can also be seen by a similar argument to the one used above: as we can replicate parallel search by alternating among alternatives in sequential search, the "search region" (i.e., {u > g}) in the case of parallel search with cost c must be a subset of that in the case of sequential search with cost c′ = c/d. As the distance between the free boundary and {x_1 = ... = x_d} is bounded for sequential search for a fixed c′, it must be that it is also bounded for parallel search for c = dc′. Note also that even though the free boundary is unbounded when d → ∞ for fixed c, the search process ends in finite time with probability one, as the state moves, over time, away from x_1 = ... = x_d. The question of whether the distance for parallel search increases in d for fixed c′ = c/d remains open.

Next we study the "lower bound" of the free boundary. Let us consider the following auxiliary problems: for each d ≥ 1, consider
min{ −(1/2)∆w_d + c, w_d − ρ } = 0 in R^d,   (18)

where ρ = max{x_1, x_2, ..., x_d}. The free boundary of w_d, Γ(w_d), is defined as the boundary of the set {w_d = ρ}.

When d = 1, 2, by direct computation, we have that w_1 = ψ_c and w_2 = η_c, where

ψ_c(x) = 0 for x ≤ −1/(4c);  ψ_c(x) = c(x + 1/(4c))² for x ∈ (−1/(4c), 1/(4c));  ψ_c(x) = x for x ≥ 1/(4c),

and η_c is given by (15). Since ρ ≤ g, w_d is a subsolution to the original PDE (7), and by comparison u (= u_d) ≥ w_d. We will show that w_d provides the full information on the behavior of u near x_1 = ... = x_d → ∞.

Let us introduce some notation. We write the positive x_1, ..., x_d directions as e_1, ..., e_d, respectively, and

τ_d := Σ_{i=1}^d e_i/√d,   H_{τ_d} := { v | v · τ_d = 0 },   and   t = Σ_{j=1}^d x_j/√d.   (19)

The following lemma shows that we can reduce the study of w_d to H_{τ_d}; the proof is provided in the Appendix.

Lemma 5:
The expression w_d − Σ_{j=1}^d x_j/d is constant in the τ_d direction. The free boundary of w_d is the surface of an infinite cylinder with τ_d as its longitudinal axis.

In the following lemma, we show that the free boundary of w_d can be arbitrarily close to the one of u if Σ_{j=1}^d x_j is large. Since we are only interested in the region near x_1 = ... = x_d, let us define the following open neighborhood of the diagonal (not to be confused with the set N(γ) defined in (17)):

Ñ(R) := { x ∈ R^d | dist( x, {sτ_d, s ∈ R} ) ≤ R }.

Lemma 6:
For any ε ∈ (0, 1) and R ≥ 1, the distance between the free boundaries of u and w_d is bounded by Rε in the set

Ñ(R) ∩ { Σ_{j=1}^d x_j ≥ (d/c)√(γ/ε) },

where γ is the universal constant given in Proposition 5.

We provide the proof of Lemma 6 in the Appendix. Though more complicated, the idea of the proof follows the one of Lemma 4.

We are interested in the most competitive region x_1 = ... = x_d → ∞, where the d products are close. From Proposition 5, we know that Γ(u) cannot be too far away from the axis x_1 = ... = x_d. Now we try to answer the question of how small this distance can be. By Lemma 6, we can identify Γ(u) asymptotically with Γ(w_d), and by Lemma 5, we only need to study Γ(w_d) ∩ H_{τ_d}.

We make the following definition: for each d ≥ 1, define r_d to be the smallest number such that there exists x ∈ H_{τ_d} satisfying |x| = r_d and w_d(x) = ρ(x). From the definition, whenever x ∈ H_{τ_d} and w_d(x) = ρ(x) = g(x), then |x| ≥ r_d. For example, when d = 1, 2, by the definition of ψ_c and η_c, we have that r_1 = 1/(4c) and r_2 = 1/(2√2 c).

Before the proposition, we need one technical lemma, which compares w_d and w_{d′} for d ≠ d′.

Lemma 7: For any k > j ≥ 1, let {i_1, i_2, ..., i_k} be a permutation of {1, ..., k}. Consider two solutions w_k(x_1, ..., x_k) and w_j(x_{i_1}, ..., x_{i_j}). We can view w_j as a function on R^k by trivial extension: w̃_j(x_{i_1}, ..., x_{i_k}) := w_j(x_{i_1}, ..., x_{i_j}). Then w_k ≥ w̃_j in R^k.

The lemma is a direct result of the comparison principle. With a slight abuse of notation, we still write w_j instead of w̃_j. Now we prove the second main result of this section, which provides a lower bound on the free boundary.

Proposition 6: Let u = u_d be the solution to (7) in dimension d, and let r_d be given as above. For any ε ∈ (0, 1), in the half plane { t ≥ (1/c)√(γd/ε) }, the distance from the free boundary of u to the ray {sτ_d, s ≥ 0} lies in the interval

[ r_d, r_d + (γd/c)ε ],

where τ_d is defined in (19) and γ is the universal constant given in Proposition 5. Furthermore, for each d ≥ 2,

( d − 1 − 1/d ) r_d² ≥ ( d − 1 ) r_{d−1}².

In particular, r_d is increasing in d. Furthermore, r_d → ∞ as d → ∞.
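The recursion in Proposition 6 can be iterated numerically (our illustration; the starting value uses the reconstructed r_2 = 1/(2√2 c)). It confirms that r_d increases in d; note, however, that the multiplicative gain (d−1)/(d−1−1/d) = 1 + 1/(d(d−1)−1) decays like 1/d², so the recursion alone has a finite limit—the divergence r_d → ∞ comes from the separate Gaussian-maximum argument in the proof below.

```python
import math

def r_lower_bounds(c=1.0, d_max=20):
    """Iterate r_d^2 >= (d-1)/(d-1-1/d) * r_{d-1}^2, seeded with r_2 = 1/(2*sqrt(2)*c)
    (a value reconstructed from the text, treated here as an assumption)."""
    r_sq = (1.0 / (2.0 * math.sqrt(2.0) * c)) ** 2
    bounds = {2: math.sqrt(r_sq)}
    for d in range(3, d_max + 1):
        r_sq *= (d - 1) / (d - 1 - 1.0 / d)
        bounds[d] = math.sqrt(r_sq)
    return bounds

for d, r in r_lower_bounds().items():
    print(d, round(r, 4))   # strictly increasing lower bounds on r_d
```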
Proof: Due to Proposition 5, r_d ≤ γd/c. We apply Lemma 6 with R = γd/c, and the first part of the result then follows from the definition of r_d.

For the second part, take k ≥ 2 and x ∈ H_{τ_k}. Without loss of generality, we assume

ρ_k(x) := ρ(x_1, ..., x_k) = max{x_1, ..., x_k} = x_1 > 0.

The inequality x_1 > 0 holds because x ∈ H_{τ_k}. Suppose w_k(x) = x_1 = ρ_k(x). Take any k − 2 different numbers {i_2, i_3, ..., i_{k−1}} ⊂ {2, ..., k}. If x_1² + x_{i_2}² + ... + x_{i_{k−1}}² < r_{k−1}², by Lemma 7 it follows that

w_k(x_1, ..., x_k) ≥ w_{k−1}(x_1, x_{i_2}, ..., x_{i_{k−1}}) > max{x_1, x_{i_2}, ..., x_{i_{k−1}}} = x_1 = ρ_k(x),

which cannot happen, due to our assumption that w_k = ρ_k at x. Thus we must have

x_1² + x_{i_2}² + ... + x_{i_{k−1}}² ≥ r_{k−1}².

We can vary the subscripts and add up all the inequalities with respect to the different combinations of {i_2, ..., i_{k−1}}. This ends up with

(k − 1) x_1² + (k − 2) Σ_{j=2}^k x_j² ≥ (k − 1) r_{k−1}².   (20)

Due to the facts that Σ_{j=1}^k x_j = 0 and x_1 = max{x_1, ..., x_k}, we can show

x_1 ≤ √((k−1)/k) |x|,

with equality obtained when

x_1 = √((k−1)/k) |x|,   x_2 = x_3 = ... = x_k = −|x|/√(k(k−1)).

Therefore (20) leads to

( k − 1 − 1/k ) |x|² ≥ ( k − 1 ) r_{k−1}².

According to the assumption w_k = ρ_k and the definition of r_k,

r_k² ≥ |x|² ≥ ( (k − 1)/(k − 1 − 1/k) ) r_{k−1}².

To prove that r_d → ∞ as d → ∞, suppose by contradiction that r_d is bounded as d → ∞. Then for each d ≥ 1 there exists (x_1^d, ..., x_d^d) ∈ {u = g} such that sup_{1≤i≤d} x_i^d ≤ K for some K independent of d. It is well known that for Z_1, ..., Z_d i.i.d. N(0, 1), E(max_{1≤i≤d} Z_i) ∼ √(2 log d) as d → ∞. This implies that, given any c > 0,

E( max( B_1^{x_1^d}(1), ..., B_d^{x_d^d}(1), 0 ) ) > max( x_1^d, ..., x_d^d, 0 ) + c

for d sufficiently large. Therefore,

u(x_1^d, ..., x_d^d) ≥ E( max( B_1^{x_1^d}(1), ..., B_d^{x_d^d}(1), 0 ) ) − c > g(x_1^d, ..., x_d^d).

This contradicts the fact that (x_1^d, ..., x_d^d) ∈ {u = g}, and concludes the proof.

APPENDIX

Definition of Viscosity Solutions
Definition A1: Let u be a continuous function and x ∈ R^d.

1. We say that −Lu ≤ f at x in the viscosity sense if for any ϕ ∈ C² which touches u at x from above, we have −(Lϕ)(x) ≤ f(x). We call u a subsolution to (5) if −Lu ≤ f in the viscosity sense at all points where u − g > 0.

2. We say that −Lu ≥ f at x in the viscosity sense if for any ϕ ∈ C² which touches u at x from below, we have −(Lϕ)(x) ≥ f(x). We call u a supersolution to (5) if u − g ≥ 0 and −Lu ≥ f in the viscosity sense in R^d.

3. We call u a viscosity solution to (5) if and only if u is both a subsolution and a supersolution to (5).

Lemma for the General Case
Lemma A1: Let u be the value function defined by (4), with T := {τ is a stopping time : Eτ < ∞}. Then u is a viscosity solution to

min{ −Lu − f, u − g } = 0.   (i)

Moreover, if there exist K_1, K_2 > 0 such that

Σ_{i=1}^d sup_{x∈R^d} |b_i(x)| < K_1 and sup_{i,j} sup_{x∈R^d} |σ_{ij}(x)| < K_2,

and there exist c_0 > K_1 and a > 0 such that

max_{x∈R^d} f(x) ≤ −a c_0 and g(x) ≤ a Σ_{i=1}^d |x_i| for all x ∈ R^d,

then we have, for some C > 0,

g ≤ u ≤ a Σ_{i=1}^d |x_i| + C.   (ii)

Proof: The fact that u is a viscosity solution to (i) follows from the dynamic programming principle (5). By taking τ = 0, we get u ≥ g(x_1, ..., x_d). Moreover,

g(X_1^{x_1}(τ), ..., X_d^{x_d}(τ)) ≤ a Σ_{i=1}^d |X_i^{x_i}(τ)|
≤ a [ Σ_{i=1}^d |x_i| + Σ_{i=1}^d | ∫_0^τ b_i(X^x(s)) ds + Σ_{j=1}^d ∫_0^τ σ_{ij}(X^x(s)) dB_j(s) | ]
≤ a [ Σ_{i=1}^d |x_i| + Σ_{i=1}^d ∫_0^τ |b_i(X^x(s))| ds + Σ_{i=1}^d Σ_{j=1}^d | ∫_0^τ σ_{ij}(X^x(s)) dB_j(s) | ].

Note that

E( Σ_{i=1}^d ∫_0^τ |b_i(X^x(s))| ds ) ≤ K_1 Eτ,

and there exists L > 0 such that, for any 1 ≤ i, j ≤ d,

E | ∫_0^τ σ_{ij}(X^x(s)) dB_j(s) | ≤(*) L E[ ( ∫_0^τ σ_{ij}²(X^x(s)) ds )^{1/2} ] ≤ L K_2 √(Eτ),

where the inequality (*) is due to the Burkholder-Davis-Gundy inequality (see Revuz and Yor 1999, Chapter IV). Consequently,

u ≤ a [ Σ_{i=1}^d |x_i| + sup_{τ∈T} { (K_1 − c_0) Eτ + L d² K_2 √(Eτ) } ] ≤ a Σ_{i=1}^d |x_i| + a L² d⁴ K_2² / (4(c_0 − K_1)),

which yields (ii).

Proof of Lemma 2:
Proof: Fix r ≥ 1. For any R > r + 1 and ε ∈ (0, 1/d), let

u^ε := (1 − dε) u_1 + cε ( Σ_{i=1}^d x_i² + d³/(4c²) ).

We claim that u^ε is a supersolution to

min{ −(1/2)∆u + c, u − g } = 0 for x ∈ B_R,   u = g for x ∈ ∂B_R.   (iii)

Since ∆u_1 ≤ 2c, we have

∆u^ε = (1 − dε) ∆u_1 + 2cdε ≤ 2c.

Also, because u_1 ≥ g, we get

u^ε = (1 − dε) u_1 + ε Σ_{i=1}^d ( c x_i² + d²/(4c) )
≥ (1 − dε) max{x_1, ..., x_d, 0} + dε Σ_{i=1}^d |x_i|
≥ max{x_1, ..., x_d, 0} = g.

Next, for any small ε > 0, if we pick R large enough (depending on h and ε), then by condition (11),

u_2 ≤ cεR² ≤ u^ε on ∂B_R ∪ ∂Ω.

By comparison, u^ε ≥ u_2 in B_R ∩ Ω, and in particular in B_r ∩ Ω. Consequently,

(1 − dε) u_1 + cε ( r² + d³/(4c²) ) ≥ u_2.

Since we can choose ε to be arbitrarily small and then r to be large, we conclude that u_2 ≤ u_1 in Ω.

Proof of Proposition 3:
Proof: Consider the line x_1 + x_2 = √2 t with fixed t ≥ 0. By Lemma 4,

u( (t+s)/√2, (t−s)/√2 ) = ũ(t, s) ≤ ϕ_ε(t, s).

By definition, using the notation in Lemma 4, when αt ≥ 1, i.e.,

ε ≥ 1/(4ct²),   (iv)

we have ũ ≤ ϕ_ε = η_{c−ε}. This, combined with the fact that ũ ≥ η_c (Lemma 3), implies

ũ(t, s) = u(x_1, x_2) ≥ max{x_1, x_2, 0} = g(x_1, x_2) if |x_1 − x_2| ≥ 1/(2c);
ũ(t, s) = u(x_1, x_2) ≥ (x_1 + x_2)/2 + c(x_1 − x_2)²/2 + 1/(8c) > g(x_1, x_2) if |x_1 − x_2| < 1/(2c);
ũ(t, s) = u(x_1, x_2) ≤ g(x_1, x_2) if |x_1 − x_2| ≥ 1/(2(c − ε)).

Thus,

u(x_1, x_2) > g(x_1, x_2) if |x_1 − x_2| < 1/(2c),
u(x_1, x_2) = g(x_1, x_2) if |x_1 − x_2| ≥ 1/(2(c − ε)).

We see that the free boundary is between |x_1 − x_2| ∈ ( 1/(2c), 1/(2(c − ε)) ) once t = (x_1 + x_2)/√2 satisfies (iv). Now take ε = 1/(4ct²); to have ε < c, we require t ≥ 1/(2c). Finally, we conclude that

d_FB(t) ≤ ( 1/(2(c − ε)) − 1/(2c) ) / √2 = ε / (2√2 c(c − ε)) = ε/(2√2 c²) + O(ε²/c³) = 1/(8√2 c³t²) + O(1/(c⁵t⁴)).

Proof of Proposition 5:
Proof: We first prove the following technical lemma.

Lemma A2: There exists a universal constant C such that, for all d ≥ 2,

∫_0^1 | (d/dR) e^{−1/(1−R²)} | R^{d−1} dR ≤ C (d − 1) ∫_0^1 e^{−1/(1−R²)} R^{d−1} dR.

Proof: Denote

J_d := ∫_0^1 e^{−1/(1−R²)} R^d dR.   (v)

Integration by parts gives

∫_0^1 | (d/dR) e^{−1/(1−R²)} | R^{d−1} dR = ∫_0^1 ( −(d/dR) e^{−1/(1−R²)} ) R^{d−1} dR = (d − 1) ∫_0^1 e^{−1/(1−R²)} R^{d−2} dR = (d − 1) J_{d−2}.

By the Cauchy-Schwarz inequality, we have J_{d−2}² ≤ J_{d−1} J_{d−3}. Thus,

∫_0^1 | (d/dR) e^{−1/(1−R²)} | R^{d−1} dR / ( (d − 1) ∫_0^1 e^{−1/(1−R²)} R^{d−1} dR ) = J_{d−2}/J_{d−1} ≤ J_{d−3}/J_{d−2} ≤ ... ≤ J_0/J_1 =: C.

Now we prove the main proposition. From previous arguments, we know that g is a subsolution and u ≥ g. We are going to construct a supersolution through g, which leads to an estimate of Γ(u) from above.

Consider a symmetric mollifier ϕ(x) = µ(|x|)/I_d, where

µ(R) = e^{−1/(1−R²)} if R ≤ 1, and µ(R) = 0 if R > 1.

The numerical constant I_d ensures normalization, i.e.,

I_d = ∫_{R^d} µ(|x|) dx = A_d J_{d−1},

where A_d is the surface area of the unit sphere in R^d and J_{d−1} is given by (v). Set ϕ_r := r^d ϕ(rx); then supp{ϕ_r} ⊂ {|x| ≤ 1/r}, and

∫_{R^d} |∇ϕ_r| dx = r ∫_{B_1} |∇ϕ| dx = (r/I_d) ∫∫_{B_1} |µ′(R)| R^{d−1} dR dω = (r A_d/I_d) ∫_0^1 |µ′(R)| R^{d−1} dR,   (vi)

where ∇ is the gradient operator. According to Lemma A2,

∫_{R^d} |∇ϕ_r| dx ≤ C (d − 1) r A_d J_{d−1}/I_d = C (d − 1) r.

We claim that Φ_r := ϕ_r ∗ g = ∫_{R^d} ϕ_r(x − y) g(y) dy is a supersolution for some r small enough. Let us check the following two conditions:

∆Φ_r ≤ 2c, and Φ_r ≥ g.

Since (by symmetry) ϕ_r ∗ x_i = x_i and g = max{x_1, ..., x_d, 0} is convex, we have Φ_r = ϕ_r ∗ g ≥ g. Next we compute

|∆Φ_r| = | ∇_x · ∫_{R^d} (∇ϕ_r)(x − y) g(y) dy | = | ∇_x · ∫_{R^d} (∇ϕ_r)(y) g(x − y) dy | ≤ ∫_{R^d} |∇ϕ_r|(y) |∇g|(x − y) dy.

By the fact that |∇g| ≤ 1 and by (vi), we obtain |∆Φ_r| ≤ C(d − 1) r. Thus, for some universal γ > 0, we have |∆Φ_r| ≤ 2c if r ≤ c/(γd). In all, we conclude that, with this choice of r, Φ_r is a supersolution and u ≤ Φ_r.

Fix any x⁰ ∈ N(γd/c). By definition, g(x) = x_k for some k, for all x ∈ B_{γd/c}(x⁰), and therefore Φ_r(x⁰) = (g ∗ ϕ_r)(x⁰) = x_k⁰. Hence in N(γd/c) we have u ≤ Φ_r = g. Since u ≥ g, we conclude that u = g for x ∈ N(γd/c).

Proof of Lemma 5:
Proof: Let us sketch the proof below. We are going to use the following cylindrical coordinates: for each x ∈ R^d, write

x = tτ_d + Σ_{j=1}^d s_j e_j, where Σ_{j=1}^d s_j e_j ∈ H_{τ_d}.

Then ω := w_d − t/√d solves

min{ −(1/2)∆ω + c, ω − (ρ − t/√d) } = 0 in R^d.   (vii)

Notice that shifts in the τ_d direction preserve the value of ρ − t/√d. Therefore, by uniqueness of solutions to (vii), the shifts also preserve ω, i.e., ω(x) = ω(x + sτ_d) for all s ∈ R.

Now we consider the free boundary property of w_d. Again, for any x ∈ R^d, write x = tτ_d + y with y ∈ H_{τ_d} and t(x) = Σ_{j=1}^d x_j/√d. From the above, w_d(y) = ρ(y) if and only if

w_d(y) + t/√d = ρ(y) + t/√d,

if and only if

w_d(x) = max{y_1, y_2, ..., y_d} + t/√d = max{x_1, x_2, ..., x_d} = ρ(x).

We used x = y + tτ_d in the second equality. Therefore Γ(w_d) equals {Γ(w_d) ∩ H_{τ_d}} × Rτ_d.

Proof of Lemma 6:
Proof: First, we give an upper bound on u − g on H_{τ_d}. From the proof of Proposition 5, u ≤ Φ := ϕ̃ ∗ g, where ϕ̃ is the mollifier from that proof, supported in the ball of radius r := γd/c. Because |∇g| ≤ 1 and (ϕ̃ ∗ g)(x) can be viewed as a weighted average of g over B_r(x), we have |Φ − g| ≤ r. In all, we find, for x ∈ H_{τ_d},

u − g ≤ Φ − g ≤ r.   (viii)

Second, let us construct a supersolution to (7). For ε ≪ min{c, 1/d}, set w_d^ε := (1/(1 − ε)) w_d((1 − ε)x), which solves

min{ −(1/2)∆w_d^ε + (1 − ε)c, w_d^ε − ρ } = 0.

Next, define a C¹ function

ϕ_d^ε(x) := r h(α x·τ_d) + w_d^ε(x),

where h(t) = (max{1 − t, 0})² and α = α(ε) is to be determined.

In the third step, we want to show that ϕ_d^ε is indeed a supersolution in the half hyperplane D := { t := Σ_{j=1}^d x_j/√d > 0 }. Since ρ = g in D, we have ϕ_d^ε ≥ g in D. On the boundary ∂D = H_{τ_d}, it follows from (viii) that

ϕ_d^ε = r h(0) + w_d^ε ≥ r + g ≥ u.

Also, by direct computation,

∆ϕ_d^ε = ∆(r h) + ∆w_d^ε ≤ 2rα² + 2(1 − ε)c.

To make ϕ_d^ε a supersolution, we only need rα² ≤ cε, which is equivalent to α ≤ c√(ε/(γd)). Finally, we conclude by comparison that ϕ_d^ε ≥ u in D.

When t ≥ 1/α = (1/c)√(γd/ε), we have ϕ_d^ε = w_d^ε ≥ u. Hence we know w_d^ε ≥ u ≥ w_d. Since w_d^ε(x) = (1/(1 − ε)) w_d((1 − ε)x), the free boundary of w_d^ε is the (1/(1 − ε))-dilation of that of w_d. Therefore the free boundary Γ(u) lies between Γ(w_d) and Γ(w_d^ε) when t is large. By Lemma 5, it is sufficient to compare Γ(w_d) and Γ(w_d^ε) on H_{τ_d}.

We consider an R-neighbourhood of the origin in H_{τ_d} (in view of Proposition 5, we may pick R = γd/c). Again, by the definition of w_d^ε, inside B_R(tτ_d) ∩ H_{τ_d} the distance between Γ(w_d) and Γ(w_d^ε) is bounded by Rε. We conclude that the distance between Γ(u) and Γ(w_d) is bounded by Rε in the set Ñ(R) ∩ { t ≥ (1/c)√(γd/ε) }.

References
Sigurd Assing, Saul Jacka, and Adriana Ocejo. Monotonicity of the value function for a two-dimensional optimal stopping problem. The Annals of Applied Probability, 24(4):1554-1584, 2014.

Dirk Bergemann and Juuso Välimäki. Learning and strategic pricing. Econometrica, 64(5):1125-1149, 1996.

Fernando Branco, Monic Sun, and J. Miguel Villas-Boas. Optimal search for product information. Management Science, 58(11):2037-2056, 2012.

M. Broadie and J. Detemple. The valuation of American options on multiple assets. Mathematical Finance, 7(3):241-286, 1997.

Luis Caffarelli. The obstacle problem revisited. Journal of Fourier Analysis and Applications, 4(4-5):383-402, 1998.

Yeon-Koo Che and Konrad Mierendorff. Optimal sequential decision with limited attention. American Economic Review, 108(8):2993-3029, 2019.

Michael Crandall and Pierre-Louis Lions. Viscosity solutions of Hamilton-Jacobi equations. Transactions of the American Mathematical Society, 277(1):1-42, 1983.

Michael Crandall, Hitoshi Ishii, and Pierre-Louis Lions. User's guide to viscosity solutions of second order partial differential equations. Bulletin of the American Mathematical Society, 27(1):1-67, 1992.

Jens Frehse. On the regularity of the solution of a second order variational inequality. Boll. Un. Mat. Ital. (4), 6:312-315, 1972.

Drew Fudenberg, Philipp Strack, and Tomasz Strzalecki. Speed, accuracy, and the optimal timing of choices. American Economic Review, 108:3651-3684, 2018.

Xin Guo and Mihail Zervos. π options. Stochastic Processes and their Applications, 120(7):1033-1059, 2010.

Benjamin Hébert and Michael Woodford. Rational inattention with sequential information sampling. Working paper, Stanford University and Columbia University, 2017.

Hitoshi Ishii. Perron's method for Hamilton-Jacobi equations. Duke Mathematical Journal, 55(2):369-384, 1987.

Hitoshi Ishii. On uniqueness and existence of viscosity solutions of fully nonlinear second-order elliptic PDEs. Communications on Pure and Applied Mathematics, 42(1):15-45, 1989.

H. Johnson. Options on the maximum or the minimum of several assets. Journal of Financial and Quantitative Analysis, 22(3):277-283, 1987.

Ioannis Karatzas and Steven Shreve. Brownian Motion and Stochastic Calculus, volume 113. Springer-Verlag, New York, second edition, 1991.

T. Tony Ke and J. Miguel Villas-Boas. Optimal learning before choice. Journal of Economic Theory, 180:383-437, 2019.

T. Tony Ke, Zuo-Jun Max Shen, and J. Miguel Villas-Boas. Search for information on multiple products. Management Science, 62(12):3576-3603, 2016.

Godfrey Keller and Sven Rady. Optimal experimentation in a changing environment. Review of Economic Studies, 66(3):475-507, 1999.

David Kinderlehrer and Guido Stampacchia. An Introduction to Variational Inequalities and Their Applications, volume 31. SIAM, 1980.

Giuseppe Moscarini and Lones Smith. The optimal level of experimentation. Econometrica, 69(6):1629-1644, 2001.

Arina Nikandrova and Roman Pancs. Dynamic project selection. Theoretical Economics, 13:115-144, 2018.

Goran Peskir and Albert Shiryaev. Optimal Stopping and Free-Boundary Problems. Springer, 2006.

Daniel Revuz and Marc Yor. Continuous Martingales and Brownian Motion, volume 293. Springer-Verlag, Berlin, third edition, 1999.

Kevin Roberts and Martin L. Weitzman. Funding criteria for research, development, and exploration projects. Econometrica, 49(5):1261-1288, 1981.

M. Rubinstein. Somewhere over the rainbow. Risk, 4:63-66, 1991.

Bruno Strulovici and Martin Szydlowski. On the smoothness of value functions and the existence of optimal strategies in diffusion models. Journal of Economic Theory, 159:1016-1055, 2015.

R.M. Stulz. Options on the minimum or the maximum of two risky assets: analysis and applications. Journal of Financial Economics, 10(2):161-185, 1982.

Abraham Wald. Sequential tests of statistical hypotheses. The Annals of Mathematical Statistics, 16(2):117-186, 1945.