
Publication


Featured research published by Paul D. Berger.


Improving the User Experience Through Practical Data Analytics: Gain Meaningful Insight and Increase your Bottom Line | 2015

Can you relate in multiple ways? Multiple linear regression and stepwise regression

Mike Fritz; Paul D. Berger

Chapter 9 considers the relationship between two variables. Chapter 10 expands the perspective of Chapter 9 and considers the relationship between one dependent variable and several independent variables. For example, we might wish to predict an overall evaluation of a product (Y, the dependent variable) based on the knowledge of several X’s (independent variables), each of which is an evaluation of individual components of the product. When there is more than one independent (X) variable, we refer to the analysis as multiple regression. We also introduce a prominent variation of the traditional multiple regression process called stepwise regression. This technique was invented specifically to help the user determine the best model: in essence, when there are several independent variables, which ones should we include in the final prediction equation?
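The chapter works these techniques in Excel and SPSS; as a rough Python stand-in, the following sketch fits a multiple regression with NumPy and runs a toy forward-stepwise loop on hypothetical data. The data, the variable names, and the 5% improvement stopping rule are assumptions for illustration, not the book's procedure (real stepwise software uses an F-to-enter test).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))
# Hypothetical data: Y depends on X0 and X2 only; X1 is pure noise.
y = 2.0 + 1.5 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(scale=0.3, size=n)

def fit_sse(cols):
    """Least-squares fit of y on the chosen columns (plus an intercept)."""
    A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return float(resid @ resid), beta

# Forward stepwise: repeatedly add the predictor that lowers the sum of
# squared errors the most; stop when the relative improvement is small.
selected, remaining = [], [0, 1, 2]
sse_current, _ = fit_sse(selected)
while remaining:
    best = min(remaining, key=lambda c: fit_sse(selected + [c])[0])
    sse_new, _ = fit_sse(selected + [best])
    if (sse_current - sse_new) / sse_current < 0.05:  # crude stopping rule
        break
    selected.append(best)
    remaining.remove(best)
    sse_current = sse_new

sse_final, beta = fit_sse(selected)
print("selected predictors:", selected)
```

With this synthetic data the loop should keep the two informative predictors and drop the noise variable, which is exactly the "which X's belong in the final prediction equation" question the chapter poses.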


Improving the User Experience Through Practical Data Analytics: Gain Meaningful Insight and Increase your Bottom Line | 2015

Comparing two designs (or anything else!) using paired sample T-tests

Mike Fritz; Paul D. Berger

This chapter continues the basic issue addressed in the previous chapter, but considers the comparison of two means when you have dependent or paired samples. This usually means in practice that you have the same group of people comparing different “variables.” For example, you may be considering mean satisfaction level for two different designs, where each design is evaluated by the same sample of people. Or, the quantity of interest might be the time to complete a task, or a myriad of other variables. In all these cases, we use what is called the paired two-sample t-test. This chapter discusses and illustrates this technique and goes into detail instructing the reader on how to perform the test using Excel and SPSS, and how to interpret the results. It also contrasts this test with the two-sample t-test of the previous chapter.
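The chapter carries out this test in Excel and SPSS; an equivalent minimal sketch in Python, using hypothetical satisfaction ratings from the same ten participants, would be:

```python
import numpy as np
from scipy import stats

# Hypothetical data: the same 10 participants rate two designs (1-7 scale),
# so the samples are paired, not independent.
design_a = np.array([5, 6, 4, 5, 7, 5, 6, 4, 5, 6], dtype=float)
design_b = np.array([4, 5, 4, 3, 6, 4, 5, 3, 4, 5], dtype=float)

# Paired two-sample t-test: operates on the per-person differences.
t_stat, p_value = stats.ttest_rel(design_a, design_b)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

Running `stats.ttest_1samp(design_a - design_b, 0)` gives the identical result, which makes the pairing explicit: the test is simply a one-sample t-test on the differences.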


Improving the User Experience Through Practical Data Analytics: Gain Meaningful Insight and Increase your Bottom Line | 2015

Comparing more than two means: One factor ANOVA with a within-subject design

Mike Fritz; Paul D. Berger

This chapter extends the material in Chapter 6 to the case in which you do not have independent samples. Rather, you have one group of respondents who complete the same tasks, rate the same features for ease of use, or provide a variety of other measures. The idea in this chapter is similar to that of Chapter 3, when we had two designs or tasks and discussed “paired data.” That meant that the same person saw both designs or performed both tasks. When the same person is involved in each column’s data value (i.e., a complete row of data is from the same person), the fancy term that is used is “repeated measures.” Perhaps a more common expression in some fields is to call this situation a “within-subjects design.” With repeated measures, we use an F-test and ANOVA, just as in Chapter 6, but we need to follow a slightly different procedure.
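The book performs this analysis in Excel and SPSS; the sketch below computes the within-subjects F-test by hand with NumPy on hypothetical ratings (rows are people, columns are designs). The key difference from the independent-samples case is that between-person variability is removed from the error term.

```python
import numpy as np
from scipy import stats

# Hypothetical data: 8 participants each rate 3 designs (repeated measures).
data = np.array([
    [5, 6, 7], [4, 5, 7], [5, 5, 6], [3, 4, 6],
    [5, 6, 6], [4, 6, 7], [5, 5, 7], [4, 5, 6],
], dtype=float)
n, k = data.shape

grand = data.mean()
ss_total = ((data - grand) ** 2).sum()
ss_cond  = n * ((data.mean(axis=0) - grand) ** 2).sum()  # between conditions
ss_subj  = k * ((data.mean(axis=1) - grand) ** 2).sum()  # between people
ss_error = ss_total - ss_cond - ss_subj                  # residual

df_cond, df_error = k - 1, (n - 1) * (k - 1)
F = (ss_cond / df_cond) / (ss_error / df_error)
p = stats.f.sf(F, df_cond, df_error)
print(f"F({df_cond},{df_error}) = {F:.2f}, p = {p:.4f}")
```

Because the subject sum of squares is pulled out before forming the error term, the same raw data can yield a much larger F here than under the independent-samples ANOVA of Chapter 6.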


Improving the User Experience Through Practical Data Analytics: Gain Meaningful Insight and Increase your Bottom Line | 2015

Comparing more than two means: One factor ANOVA with independent samples. Multiple comparison testing with the Newman-Keuls test

Mike Fritz; Paul D. Berger

This chapter explains how to compare means when you have more than two. For example, you may be comparing the average ratings of ease-of-use for the different tasks that you just asked participants to complete in a usability test. Or, you could be comparing attractiveness of a specific design across different age brackets. Or, you may be comparing more than two task-completion times from a usability test. In all these cases, we use what is called a one-factor analysis of variance (ANOVA). This chapter discusses and illustrates this ANOVA technique, and goes into detail instructing the reader on how to perform the comparison tests using Excel and SPSS, and how to interpret the results. We then introduce the Newman–Keuls test; this test dissects the different means further, by comparing all pairs of them. The test allows us to determine which of the different “treatments” (e.g., designs) differ from one another.
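The chapter's worked examples use Excel and SPSS; a minimal Python sketch of the one-factor ANOVA, on hypothetical task-completion times for three independent groups, looks like this:

```python
from scipy import stats

# Hypothetical task-completion times (seconds), one independent group per design.
design_a = [38, 41, 35, 44, 40, 37]
design_b = [46, 50, 45, 52, 48, 47]
design_c = [39, 43, 36, 42, 41, 38]

# One-factor ANOVA: H0 says all three population means are equal.
F, p = stats.f_oneway(design_a, design_b, design_c)
print(f"F = {F:.2f}, p = {p:.4f}")
```

SciPy does not ship a Newman–Keuls procedure for the pairwise follow-up; `scipy.stats.tukey_hsd` (SciPy 1.8+) is a nearby alternative that likewise compares all pairs of means after a significant F.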


Improving the User Experience Through Practical Data Analytics: Gain Meaningful Insight and Increase your Bottom Line | 2015

Can you relate? Correlation and simple linear regression

Mike Fritz; Paul D. Berger

This chapter considers the relationship between two variables. An illustration might be to investigate whether there is a relationship between perceived sophistication of a design and the amount of previous experience buying consumer products online and, if so, what that relationship is. We introduce simple regression analysis, which explores the relationship between an interval-scale or ratio-scale variable and a second variable of any scale, although more often than not the second variable is also an interval-scale or ratio-scale variable. Examples could be two variables each measured on a Likert scale, or the above-mentioned example of perceived design sophistication and previous experience buying online. We also introduce in this chapter the correlation between two variables. The correlation is a dimensionless measure of the linear relationship between two variables, indicating its strength and direction, while regression analysis quantifies precisely how one variable changes value as the other variable changes value.
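As a Python stand-in for the chapter's Excel/SPSS walkthrough, the sketch below fits a simple linear regression and reports the correlation on hypothetical data (years of online-shopping experience vs. a 1-10 sophistication rating; both the data and scales are invented for illustration):

```python
import numpy as np
from scipy import stats

# Hypothetical data: X = years of online-shopping experience,
# Y = perceived design sophistication (1-10 scale).
x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([3.1, 3.9, 4.2, 5.1, 5.8, 6.2, 7.1, 7.4])

# Simple linear regression: y ≈ intercept + slope * x.
result = stats.linregress(x, y)
print(f"slope = {result.slope:.3f}, intercept = {result.intercept:.3f}")
# Pearson correlation r: dimensionless strength/direction of the linear relation.
print(f"r = {result.rvalue:.3f}")
```

This mirrors the chapter's distinction: `result.rvalue` tells you how strong and in which direction the linear relationship is, while `result.slope` quantifies how many rating points Y changes per additional year of X.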


Improving the User Experience Through Practical Data Analytics: Gain Meaningful Insight and Increase your Bottom Line | 2015

Comparing more than two means: Two factor ANOVA with independent samples: the important role of interaction

Mike Fritz; Paul D. Berger

This chapter considers the effect of two factors on any kind of metric you collect, including ease-of-use of a task, time it takes to complete a task, or the sophistication of a design. For each of the two factors, we separately test the null hypothesis that the mean for each level of the factor is the same, vs. the alternate hypothesis that the means are not the same. We also bring to bear the Student-Newman-Keuls (S-N-K) test for each factor to further determine what the differences are, if, indeed, we find that there are differences in the means. The chapter also introduces interaction effects, an important and often overlooked source of variability in the results in the real world. As in the past few chapters, analysis of variance is the statistical technique used.
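The chapter runs this analysis in Excel and SPSS; as a rough Python sketch, the following computes a balanced two-factor ANOVA by hand with NumPy, on hypothetical data built so the two designs reverse their ordering across age brackets, i.e., an interaction with no main effects. The factors, levels, and cell means are assumptions for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical balanced design: factor A = design (2 levels),
# factor B = age bracket (3 levels), n = 4 ratings per cell.
rng = np.random.default_rng(1)
a, b, n = 2, 3, 4
cell_means = np.array([[5.0, 5.5, 6.0],   # design 1 rises with age bracket
                       [6.0, 5.5, 5.0]])  # design 2 falls: pure interaction
data = cell_means[..., None] + rng.normal(scale=0.4, size=(a, b, n))

grand   = data.mean()
mean_a  = data.mean(axis=(1, 2))   # factor-A level means
mean_b  = data.mean(axis=(0, 2))   # factor-B level means
mean_ab = data.mean(axis=2)        # cell means

ss_a  = b * n * ((mean_a - grand) ** 2).sum()
ss_b  = a * n * ((mean_b - grand) ** 2).sum()
ss_ab = n * ((mean_ab - mean_a[:, None] - mean_b[None, :] + grand) ** 2).sum()
ss_e  = ((data - mean_ab[..., None]) ** 2).sum()

df_a, df_b, df_ab, df_e = a - 1, b - 1, (a - 1) * (b - 1), a * b * (n - 1)
ms_e = ss_e / df_e
F_ab = (ss_ab / df_ab) / ms_e
p_ab = stats.f.sf(F_ab, df_ab, df_e)
print(f"interaction: F({df_ab},{df_e}) = {F_ab:.2f}, p = {p_ab:.4f}")
```

Testing only the two main effects here would miss the story entirely, since both factor-level means are roughly equal; the interaction term is exactly the "often overlooked source of variability" the chapter warns about.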


Improving the User Experience Through Practical Data Analytics: Gain Meaningful Insight and Increase your Bottom Line | 2015

Chapter 2 – Comparing two designs (or anything else!) using independent sample T-tests

Mike Fritz; Paul D. Berger

This chapter considers the comparison of two means when you have independent samples. This usually means in practice that you have two different groups of people comparing either the same or different variables. For example, you may be considering mean satisfaction (the dependent variable) of a specific design for two different categories of people—e.g., experts vs. novices (the independent variable); or you may be comparing the mean satisfaction level for two different designs, with each design having its own evaluation sample of people. Or the quantity of interest might be the time to complete a task (e.g., make a purchase of a particular product on a particular Web site), or a myriad of other variables. In all these cases, we use what is called the two-sample t-test for independent samples. This chapter discusses and illustrates this technique, and goes into detail instructing the reader on how to perform the test using Excel and SPSS, and how to interpret the results.
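The chapter demonstrates the test in Excel and SPSS; the equivalent in Python, on hypothetical task times for an experts-vs.-novices comparison, is a one-liner with SciPy:

```python
import numpy as np
from scipy import stats

# Hypothetical task-completion times (seconds): two independent groups.
novices = np.array([62, 71, 58, 66, 75, 69, 64, 70], dtype=float)
experts = np.array([48, 55, 51, 46, 53, 50, 49, 52], dtype=float)

# Two-sample t-test for independent samples. Equal variances are assumed
# here; pass equal_var=False for Welch's version when that is doubtful.
t_stat, p_value = stats.ttest_ind(novices, experts)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Note the contrast with the paired test of Chapter 3: here each group contains different people, so the per-person differencing that powers the paired test is unavailable.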


Improving the User Experience Through Practical Data Analytics: Gain Meaningful Insight and Increase your Bottom Line | 2015

Will anybody buy? Logistic regression

Mike Fritz; Paul D. Berger

This chapter considers binary logistic regression. This is a regression analysis in which the Y (dependent variable) is binary. That is, the Y variable is not interval or ratio scale (such as a Likert scale), but a two-category variable. The two categories are usually recoded as “1” and “0,” in order to best be processed by a statistical software program. However, the two categories can be virtually anything, such as “adopted the search engine vs. did not do so,” “completed a task successfully or did not do so,” or, in the world of general database marketing, “responded to an offer (i.e., made a purchase) or did not respond to the offer.” In these situations, regular linear regression models (whether simple or multiple) are not appropriate.
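As a minimal sketch of the idea, the code below fits a binary logistic regression by gradient ascent on the log-likelihood, using only NumPy. The data-generating model (log-odds = -4 + 0.8 x, with x = number of site visits and y = purchased or not) is an assumption for this illustration; statistical packages fit the same model by maximum likelihood with more refined algorithms.

```python
import numpy as np

# Hypothetical data: X = number of site visits, y = 1 if the visitor purchased.
rng = np.random.default_rng(2)
n = 400
x = rng.uniform(0, 10, size=n)
p_true = 1.0 / (1.0 + np.exp(-(-4.0 + 0.8 * x)))       # assumed true model
y = (rng.uniform(size=n) < p_true).astype(float)

# Fit binary logistic regression by gradient ascent on the mean log-likelihood.
X = np.column_stack([np.ones(n), x])                   # intercept + one predictor
beta = np.zeros(2)
for _ in range(20000):
    p_hat = 1.0 / (1.0 + np.exp(-X @ beta))            # predicted P(y = 1)
    grad = X.T @ (y - p_hat) / n                       # score / n
    beta += 0.1 * grad                                 # ascent step
print(f"intercept = {beta[0]:.2f}, slope = {beta[1]:.2f}")
```

The fitted coefficients are on the log-odds scale, which is why ordinary linear regression is inappropriate here: a linear model would happily predict "probabilities" below 0 or above 1 for extreme visit counts.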


Improving the User Experience Through Practical Data Analytics: Gain Meaningful Insight and Increase your Bottom Line | 2015

Introduction to a variety of useful statistical ideas and techniques

Mike Fritz; Paul D. Berger

In this chapter, we discuss several basic statistical techniques. We begin by introducing the normal distribution, also called the bell curve, the bell-shaped curve, and (mostly in an engineering context) the Gaussian curve. This probability distribution is the root of several other probability distributions that you will encounter in this text, such as the t-distribution, chi-square distribution, and F-distribution. We then describe what needs to change when we wish to study the probability behavior of a mean or average of many data points (e.g., satisfaction ratings). Finally, we introduce the basic techniques of confidence intervals and hypothesis testing. These techniques, especially the latter, are used frequently in subsequent chapters.
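The two closing techniques can be sketched in a few lines of Python on hypothetical satisfaction ratings (the sample and the benchmark value 6.5 are invented for illustration; the book's examples use Excel and SPSS):

```python
import numpy as np
from scipy import stats

# Hypothetical sample of 25 satisfaction ratings (1-10 scale).
rng = np.random.default_rng(3)
ratings = rng.normal(loc=7.2, scale=1.1, size=25)

mean = ratings.mean()
se = ratings.std(ddof=1) / np.sqrt(len(ratings))

# 95% confidence interval for the true mean, using the t-distribution
# (appropriate when the population standard deviation is unknown).
lo, hi = stats.t.interval(0.95, len(ratings) - 1, loc=mean, scale=se)
print(f"mean = {mean:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")

# One-sample hypothesis test of H0: true mean rating = 6.5.
t_stat, p_value = stats.ttest_1samp(ratings, 6.5)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The standard error `se` is the "what needs to change" for means: averaging 25 ratings shrinks the spread of the sampling distribution by a factor of 5 relative to a single rating.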


Improving the User Experience Through Practical Data Analytics: Gain Meaningful Insight and Increase your Bottom Line | 2015

Pass or fail? Binomial-related hypothesis testing and confidence intervals using independent samples

Mike Fritz; Paul D. Berger

In this chapter we introduce what to do when we are looking at pass/fail data and simple preference data. Pass/fail is a common scenario for UX researchers, who are asked to draw conclusions from binomial data (i.e., two outcomes) rather than normally distributed data. For example, we use the logic of hypothesis testing to determine whether the (true) proportions who pass two or more tasks are the same or different, or whether the pass/fail rate for the same task differs across two or more Web sites. In addition, a UX researcher may need to find a confidence interval for the true proportion of people who will pass the task. The confidence-interval procedure is different for small samples (for which we introduce the adjusted Wald method) and for large samples (for which we use the more common approach based directly on the normal distribution, without adjustment). To perform hypothesis testing, we introduce the chi-square test of independence (sometimes called the chi-square contingency-table test) and, for specific cases, Fisher's exact test.
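Both techniques are short in Python. The sketch below runs the chi-square test of independence on a hypothetical 2x2 pass/fail table and computes an adjusted Wald interval by hand (the counts are invented for illustration; the adjustment shown is the common "add two successes and two failures" version):

```python
import numpy as np
from scipy import stats

# Hypothetical pass/fail counts for one task on two sites.
#                     pass  fail
observed = np.array([[34,   6],    # site A (n = 40)
                     [22,  18]])   # site B (n = 40)

# Chi-square test of independence: do the true pass rates differ by site?
chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# Adjusted Wald 95% CI for site A's true pass proportion: add 2 successes
# and 2 failures, then apply the usual normal-approximation formula.
x, n = 34, 40
p_adj = (x + 2) / (n + 4)
half = 1.96 * np.sqrt(p_adj * (1 - p_adj) / (n + 4))
print(f"adjusted Wald 95% CI: ({p_adj - half:.3f}, {p_adj + half:.3f})")
```

For tables with very small expected counts, `scipy.stats.fisher_exact` covers the Fisher's-exact-test case the chapter mentions.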
