Thomas Otter
Goethe University Frankfurt
Publications
Featured research published by Thomas Otter.
Foundations and Trends in Marketing | 2007
Sandeep R. Chandukala; Jaehwan Kim; Thomas Otter; Peter E. Rossi; Greg M. Allenby
Direct utility models of consumer choice are reviewed and developed for understanding consumer preferences. We begin with a review of statistical models of choice, posing a series of modeling challenges that are resolved by considering economic foundations based on constrained utility maximization. Direct utility models differ from other choice models by directly modeling the consumer utility function used to derive the likelihood of the data through Kuhn-Tucker conditions. Recent advances in Bayesian estimation make the estimation of these models computationally feasible, offering advantages in model interpretation over models based on indirect utility, and descriptive models that tend to be highly parameterized. Future trends are discussed in terms of the antecedents and enhancements of utility function specification.
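As a rough illustration of the constrained utility maximization underlying direct utility models (a sketch only, not the paper's specification; the utility form, parameter values, prices, and budget are hypothetical), the following Python snippet maximizes an additively separable, satiating utility subject to a budget constraint and checks the Kuhn-Tucker logic at the optimum, where a corner solution corresponds to zero demand for a good.

import numpy as np
from scipy.optimize import minimize

psi = np.array([2.0, 1.0, 0.2])      # baseline marginal utilities (hypothetical)
gamma = 1.0                           # satiation parameter
prices = np.array([1.0, 1.5, 2.0])
budget = 5.0

def neg_utility(x):
    # u(x) = sum_j psi_j * log(gamma * x_j + 1); negated for the minimizer
    return -np.sum(psi * np.log(gamma * x + 1.0))

res = minimize(neg_utility, x0=np.ones(3), method="SLSQP",
               bounds=[(0.0, None)] * 3,
               constraints=[{"type": "ineq",
                             "fun": lambda x: budget - prices @ x}])
demand = res.x
print("optimal demand:", np.round(demand, 3))

# Kuhn-Tucker logic: goods with positive demand equalize marginal utility
# per dollar spent; a good at the corner (zero demand) falls weakly below.
print("marginal utility per dollar:",
      np.round(psi * gamma / ((gamma * demand + 1.0) * prices), 3))

With these hypothetical values the third good is priced out entirely, so its demand is zero and its marginal utility per dollar at the optimum is strictly below that of the purchased goods.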
Journal of Business & Economic Statistics | 2004
Sylvia Frühwirth-Schnatter; Regina Tüchler; Thomas Otter
We consider Bayesian estimation of a finite mixture of models with random effects, which is also known as the heterogeneity model. First, we discuss the properties of various Markov chain Monte Carlo samplers that are obtained from full conditional Gibbs sampling by grouping and collapsing. Whereas full conditional Gibbs sampling turns out to be sensitive to the parameterization chosen for the mean structure of the model, the alternative sampler is robust in this respect. However, the logical extension of the approach to the sampling of the group variances does not further increase the efficiency of the sampler. Second, we deal with the identifiability problem due to the arbitrary labeling within the model. Finally, a case study involving metric conjoint analysis serves as a practical illustration.
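The data-generating process behind the heterogeneity model can be sketched in a few lines (this covers only the simulation side, not the authors' samplers; all parameter values are hypothetical): individual-level coefficients are drawn from a finite mixture of normals, and each individual's observations follow a linear model with those coefficients.

import numpy as np

rng = np.random.default_rng(0)
n_individuals, n_obs, n_coef = 200, 10, 2

# Two latent segments with different mean coefficient vectors
weights = np.array([0.6, 0.4])
means = np.array([[1.0, -2.0],
                  [3.0, -0.5]])
cov = 0.25 * np.eye(n_coef)

segment = rng.choice(2, size=n_individuals, p=weights)
beta = np.array([rng.multivariate_normal(means[s], cov) for s in segment])

# Individual-level regressions (e.g., metric conjoint ratings)
X = rng.normal(size=(n_individuals, n_obs, n_coef))
y = np.einsum("ijk,ik->ij", X, beta) + rng.normal(scale=0.5,
                                                  size=(n_individuals, n_obs))
print("realized segment shares:", np.bincount(segment) / n_individuals)

The labeling problem mentioned in the abstract is visible even here: permuting the two segments (their weights and means) leaves the distribution of the simulated data unchanged.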
Journal of Marketing Research | 2008
Thomas Otter; Greg M. Allenby; Trisha Van Zandt
Computer- and web-based interviewing tools have made response times ubiquitous in marketing research. These data are used as an indicator of data quality by practitioners, and of latent processes related to memory, attitudes, and decision making by academics. We investigate a Poisson race model with choice and response times as dependent variables. The model facilitates inference about respondents' preferences for choice alternatives, their diligence in providing responses, and the accessibility of attitudes/the speed of thinking. Thus, the model distinguishes respondents who think quickly from those who react quickly but without much thought. Empirically, we find support for the endogenous nature of response times and demonstrate that models that treat response times as exogenous variables may result in misleading inferences.
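A Poisson race can be simulated directly, which makes the joint nature of choices and response times concrete (a sketch under hypothetical rates and thresholds, not the estimated model from the paper): each alternative accumulates evidence counts at its own Poisson rate, the first accumulator to reach its threshold determines the choice, and its finishing time is the response time.

import numpy as np

rng = np.random.default_rng(1)

def poisson_race(rates, threshold, n_trials=100000):
    # Time to collect `threshold` Poisson counts at rate lambda is
    # Gamma(threshold, scale=1/lambda): a sum of exponential waiting times.
    finish_times = rng.gamma(shape=threshold,
                             scale=1.0 / np.asarray(rates),
                             size=(n_trials, len(rates)))
    choices = finish_times.argmin(axis=1)
    rts = finish_times.min(axis=1)
    return choices, rts

choices, rts = poisson_race(rates=[2.0, 1.2], threshold=5)
print("choice share of alternative 0:", round((choices == 0).mean(), 3))
print("mean response time:", round(rts.mean(), 3))

Raising an alternative's rate simultaneously raises its choice share and shortens response times, which is why treating response times as exogenous covariates can mislead.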
Marketing Science | 2013
Joachim Büschken; Thomas Otter; Greg M. Allenby
The canonical design of customer satisfaction surveys asks for global satisfaction with a product or service and for evaluations of its distinct attributes. Users of these surveys are often interested in the relationship between global satisfaction and attributes; regression analysis is commonly used to measure the conditional associations. Regression analysis is appropriate only when the global satisfaction measure results from the attribute evaluations; it is not appropriate when the covariance of the items lies in a low-dimensional subspace, such as in a factor model. Potential reasons for low-dimensional responses are that responses may be haloed from overall satisfaction and that there may be an unintended lack of item specificity. In this paper we develop a Bayesian mixture model that facilitates the empirical distinction between regression models and much lower-dimensional factor models. The model uses the dimensionality of the covariance among items in a survey as the primary classification criterion while accounting for the heterogeneous usage of rating scales. We apply the model to four different customer satisfaction surveys that evaluate hospitals, an academic program, smartphones, and theme parks, respectively. We show that correctly assessing the heterogeneous dimensionality of responses is critical for meaningful inferences by comparing our results to those from regression models.
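The classification idea can be illustrated with simulated data (a sketch with hypothetical values, not the paper's Bayesian mixture model): under a "formed" regression process the attribute items carry independent information and their covariance is full rank, whereas under a haloed, one-factor process a single dimension dominates the item covariance.

import numpy as np

rng = np.random.default_rng(2)
n, k = 2000, 5

# Formed responses: attribute items carry independent information
attrs_formed = rng.normal(size=(n, k))

# Haloed responses: every item reflects one latent satisfaction factor
factor = rng.normal(size=(n, 1))
attrs_haloed = factor + rng.normal(scale=0.3, size=(n, k))

for name, attrs in [("formed", attrs_formed), ("haloed", attrs_haloed)]:
    # eigvalsh returns eigenvalues in ascending order; reverse to descending
    eigvals = np.linalg.eigvalsh(np.cov(attrs, rowvar=False))[::-1]
    print(name, "- share of item variance on the first eigenvalue:",
          round(eigvals[0] / eigvals.sum(), 2))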
Marketing Science | 2013
Stephan Wachtel; Thomas Otter
We reanalyze endogenous sample selection in the context of customer scoring, targeting, and influencing decisions. Scoring relies on ordered lists of probabilities that customers act in a way that contributes revenues, e.g., purchase something from the firm. Targeting identifies constrained sets of covariate patterns associated with high probabilities of these acts. Influencing aims at changing the probabilities that individual customers act accordingly through marketing activities. We show that successful targeting and influencing decisions require inference that controls for endogenous selection, whereas scoring can proceed relatively successfully based on simpler models that provide local approximations, capitalizing on spurious effects of observed covariates. To facilitate the type of inference required for targeting and influencing, we develop a prior that frees the analyst from having to specify often arbitrary exclusion restrictions for model identification a priori or to explicitly compare all possible models. We cover exclusions of observed as well as unobserved covariates that may cause the successive selections to be dependent. We automatically infer the dependence structure among selection stages using Markov chain Monte Carlo-based variable selection, before identifying the scale of latent variables. The adaptive parsimony achieved through our prior is particularly helpful in applications where the number of successive selections exceeds two, a relevant but underresearched situation.
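Why ignoring endogenous selection is relatively harmless for ranking (scoring) but harmful for measuring covariate effects (targeting and influencing) can be seen in a small simulation (a sketch, not the authors' model or prior; all values are hypothetical): when the selection-stage error is correlated with the outcome-stage error, a naive regression on the selected sample recovers a biased slope, even though fitted scores may still order customers reasonably well, as the abstract notes.

import numpy as np

rng = np.random.default_rng(3)
n = 100000
x = rng.normal(size=n)

rho = 0.8                                   # correlation of stage errors
u, e = rng.multivariate_normal([0.0, 0.0],
                               [[1.0, rho], [rho, 1.0]], size=n).T
selected = (0.5 * x + u) > 0                # selection stage
y = 1.0 * x + e                             # outcome stage, true slope = 1.0

naive_slope = np.polyfit(x[selected], y[selected], 1)[0]
print("true slope: 1.0")
print("naive slope estimated on the selected sample:", round(naive_slope, 2))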
Archive | 2010
Joachim Bueschken; Thomas Otter; Greg M. Allenby
Identifying the drivers of overall customer satisfaction assumes that the component scores can be uniquely recalled and reported from memory. If the component scores are a reflection of an overall measure, such as with haloed responses, instead of containing independent information on its formation, then they should not be used in a driver analysis. There is likely a mixture of formed and haloed responses in all surveys of satisfaction, which potentially distorts inferences about the relationship between the component scores and the overall measure of satisfaction. In this paper we develop a Bayesian mixture model that effectively separates out the haloed responses and apply it to two customer satisfaction datasets. The proposed model results in improved fit to the data, stronger driver effects, and more reasonable inferences.
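The separation of haloed from formed responses can be previewed with simulated data (a simple diagnostic sketch with hypothetical values; the paper uses a Bayesian mixture model rather than this statistic): haloed respondents report component scores that cluster tightly around their overall rating, whereas formed respondents' component scores carry independent information.

import numpy as np

rng = np.random.default_rng(7)
n_formed, n_haloed, k = 500, 500, 6

# Formed: independent component scores; overall is their average plus noise
attrs_f = rng.normal(size=(n_formed, k))
overall_f = attrs_f.mean(axis=1) + rng.normal(scale=0.2, size=n_formed)

# Haloed: component scores are the overall rating plus small noise
overall_h = rng.normal(size=n_haloed)
attrs_h = overall_h[:, None] + rng.normal(scale=0.2, size=(n_haloed, k))

def spread_around_overall(attrs, overall):
    # Mean squared deviation of a respondent's component scores
    # from his or her overall rating
    return ((attrs - overall[:, None]) ** 2).mean(axis=1)

print("mean spread, formed respondents:",
      round(spread_around_overall(attrs_f, overall_f).mean(), 2))
print("mean spread, haloed respondents:",
      round(spread_around_overall(attrs_h, overall_h).mean(), 2))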
Journal of Marketing Research | 2018
Jeffrey P. Dotson; Jeff D. Brazell; John R. Howell; Peter Lenk; Thomas Otter; Steven N. MacEachern; Greg M. Allenby
Distributional assumptions for random utility models play an important role in relating observed product attributes to choice probabilities. Choice probabilities derived with independent errors have the IIA property, which often does not match consumer behavior and leads to inaccurate source-of-volume predictions. Correlated errors, such as in the correlated probit model, give more realistic results. However, the source of the correlation among the utility functions for different alternatives is often not well studied. In practice, covariance matrices are frequently associated with presentation order or brands in conjoint studies. However, other structures allow richer specifications of substitution patterns. In this paper, we parameterize the covariance matrix for probit models so that similar brands in the preference space have higher correlation than dissimilar brands, resulting in higher rates of substitution. We investigate alternative measures of similarity in the context of a conjoint model, and compare the resulting substitution patterns to those of standard choice models. The proposed model fits the data better and results in more realistic measures of substitution for a product line extension. The structured covariance matrix approach allows marketing managers to predict the substitution pattern for product profiles not included in the conjoint analysis.
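A structured covariance built from similarity can be sketched as follows (hypothetical brand positions and decay rate; not the parameterization estimated in the paper): utility errors of brands that are close in a preference space are highly correlated, so removing one brand shifts demand mostly toward its close substitute rather than proportionally, as an IIA model would predict.

import numpy as np

rng = np.random.default_rng(4)
positions = np.array([[0.0], [0.1], [2.0]])   # brands in a 1-d preference space
mean_utility = np.zeros(3)

# Error correlation decays with the distance between brand positions
dist = np.abs(positions - positions.T)
Sigma = np.exp(-dist)

def choice_shares(mu, cov, n=200000):
    u = rng.multivariate_normal(mu, cov, size=n)
    return np.bincount(u.argmax(axis=1), minlength=len(mu)) / n

full = choice_shares(mean_utility, Sigma)
reduced = choice_shares(mean_utility[1:], Sigma[1:, 1:])
print("shares with brands 0, 1, 2:", np.round(full, 3))
print("shares of brands 1, 2 after removing brand 0:", np.round(reduced, 3))
# Brand 1, the close substitute of brand 0, absorbs most of the freed share.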
Archive | 2015
Selin Akca; Thomas Otter
In this paper we study the identification of discrete choice models of dynamically optimizing consumers. We first provide additional formal results for existing identification solutions. We then investigate the ‘last in, last out’ (LILO) constraint on consuming from the inventory as a means of identifying the discount factor in these models. We find that the LILO constraint (over-)identifies the discount parameter in the absence of assumptions about consumers’ expectations and show how LILO results in efficient estimates in a parametric, maximum likelihood framework using simulated data. Finally, we report survey-based empirical evidence for the relevance of LILO strategies in four different categories.
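A heavily simplified two-period example conveys why intertemporal purchase behavior is informative about the discount factor (this illustrates only the general identification logic, not the paper's LILO argument or estimator; all prices and costs are hypothetical): a forward-looking consumer stockpiles at a promotional price only if the discounted saving exceeds the holding cost, so observed stockpiling bounds the discount factor.

# A consumer who needs one unit next period can buy it today at the
# promotional price (and pay a holding cost) or wait and pay the regular
# price next period.
p_promo, p_regular, holding_cost = 1.5, 2.5, 0.1

# Stockpiling is optimal iff beta * p_regular - p_promo > holding_cost
beta_threshold = (p_promo + holding_cost) / p_regular
print("stockpiling is optimal for discount factors above",
      round(beta_threshold, 3))

for beta in (0.5, 0.7, 0.9):
    stockpile = beta * p_regular - p_promo > holding_cost
    print(f"beta = {beta}: stockpile = {stockpile}")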
Archive | 2014
Keyvan Dehmamy; Thomas Otter
Models of consumer decision making often condition on attention to the different offers or alternatives to choose from. However, in many environments offers not only compete through their utility but also for the attention of decision makers. In this case it is important to distinguish between attention and utility: for optimal marketing control and for empirical measures of competition, it makes a difference whether an offering is overlooked or rejected conditional on awareness. We show how discrete-continuous choices, in contrast to multinomial choices, facilitate the empirical distinction between attention and utility and, more generally, the identification of two-stage decision models. In our illustrative application we analyze choices from simulated store shelves. We find that the number of facings of a brand on the shelf influences attention to, but not utility from, the brand. We then formulate a parametric model that identifies attention-based consideration sets and document clearly misleading inferences from a model that ignores attention and motivates choices from utility alone.
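The two-stage process described above can be mimicked in a short simulation (a sketch with hypothetical attention and utility parameters, not the parametric model developed in the paper): each brand enters the consideration set with a probability that increases in its number of facings, and the choice among considered brands depends on utility only; adding facings then raises a brand's share purely through attention.

import numpy as np

rng = np.random.default_rng(5)

def simulate_shares(facings, utilities, n=100000):
    facings = np.asarray(facings, dtype=float)
    utilities = np.asarray(utilities, dtype=float)
    # Stage 1: attention -- probability of noticing a brand grows with facings
    p_attend = 1.0 / (1.0 + np.exp(-(-1.0 + 0.8 * facings)))
    considered = rng.random((n, len(facings))) < p_attend
    # Stage 2: logit-style choice among considered brands (Gumbel errors)
    u = utilities + rng.gumbel(size=(n, len(facings)))
    u[~considered] = -np.inf
    chose = considered.any(axis=1)
    choices = u[chose].argmax(axis=1)
    return np.bincount(choices, minlength=len(facings)) / len(choices)

equal = simulate_shares(facings=[2, 2, 2], utilities=[0.5, 0.0, 0.0])
boosted = simulate_shares(facings=[6, 2, 2], utilities=[0.5, 0.0, 0.0])
print("shares with equal facings:         ", np.round(equal, 3))
print("shares with extra facings, brand 0:", np.round(boosted, 3))
# Brand 0 gains share although its utility is unchanged: pure attention.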
Archive | 2018
Max J. Pachali; Peter Kurz; Thomas Otter
Models of consumer heterogeneity play a pivotal role in marketing and economics, specifically in random coefficient or mixed logit models for aggregate or individual data and in hierarchical Bayesian models of heterogeneity. In applications, the inferential target often pertains to a population beyond the sample of consumers providing the data. For example, optimal prices inferred from the model are expected to be optimal in the population and not just optimal in the observed, finite sample. The population model, random coefficients distribution, or heterogeneity distribution is the natural and correct basis for generalizations from the observed sample to the market. However, in many if not most applications, standard heterogeneity models such as the multivariate normal or its finite mixture generalization lack economic rationality because they support regions of the parameter space that contradict basic economic arguments. For example, such population distributions support positive price coefficients or preferences against fuel efficiency in cars. Likely as a consequence, it is common practice in applied research to rely on the collection of individual-level mean estimates of consumers as a representation of population preferences, a practice that often substantially reduces the support for parameters in violation of economic expectations. To overcome the choice between relying on a mis-specified heterogeneity distribution and a collection of individual-level means that fails to measure heterogeneity consistently, we develop an approach that facilitates the formulation of more economically faithful heterogeneity distributions based on prior constraints. In the common situation where the heterogeneity distribution comprises both constrained and unconstrained coefficients (e.g., brand and price coefficients), the choice of subjective prior parameters is an unresolved challenge. As a solution to this problem, we propose a marginal-conditional decomposition that avoids the conflict between wanting to be more informative about constrained parameters and only weakly informative about unconstrained parameters. We show how to efficiently sample from the implied posterior and illustrate the merits of our prior, as well as the drawbacks of relying on means of individual-level preferences for decision making, in two illustrative case studies.
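A minimal version of a sign-constrained heterogeneity distribution can be written down directly (a sketch with hypothetical parameters; it does not reproduce the paper's marginal-conditional prior): the population distribution is specified on the brand coefficient and on the log of the negative price coefficient, so every draw implies a strictly negative price coefficient, whereas an unconstrained normal with matching moments places visible mass on positive price coefficients.

import numpy as np

rng = np.random.default_rng(6)
n = 100000

mu = np.array([1.0, 0.0])            # mean of (brand, log(-price)) coefficients
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])

draws = rng.multivariate_normal(mu, Sigma, size=n)
# draws[:, 0] is the (unconstrained) brand coefficient
beta_price = -np.exp(draws[:, 1])    # price coefficient: negative by construction

# Comparison: an unconstrained normal with the same mean and variance
unconstrained_price = rng.normal(beta_price.mean(), beta_price.std(), size=n)

print("share of positive price coefficients, constrained:  ",
      (beta_price > 0).mean())
print("share of positive price coefficients, unconstrained:",
      round((unconstrained_price > 0).mean(), 3))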