Publication


Featured research published by Michael O. Finkelstein.


Archive | 2015

Comparing Multiple Proportions

Michael O. Finkelstein; Bruce Levin

Chi-squared is a useful and convenient statistic for testing hypotheses about multinomial distributions (see Section 4.2 at p. 103). This is important because a wide range of applied problems can be formulated as hypotheses about “cell frequencies” and their underlying expectations. For example, in Section 4.6.2, Silver “butterfly” straddles, the question arises whether the price change data are distributed normally; this question can be reduced to a multinomial problem by dividing the range of price changes into subintervals and counting how many data points fall into each interval. The cell probabilities are given by the normal probabilities attaching to each of the intervals under the null hypothesis, and these form the basis of our expected cell frequencies. Chi-squared here, like chi-squared for the fourfold table, is the sum of the squares of the differences between observed and expected cell frequencies, each divided by its expected cell frequency. While slightly less powerful than the Kolmogorov-Smirnov test—in part because some information is lost by grouping the data—chi-squared is easier to apply, can be used in cases where the data form natural discrete groups, and is more widely tabled.
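The binning approach described above can be sketched in a few lines. This is an illustrative example with simulated data, not the book's silver-price data; the null distribution here is fully specified (mean and variance taken as known), so no degrees of freedom are lost to parameter estimation.

```python
import numpy as np
from scipy import stats

# Simulate "price change" data that are in fact standard normal.
rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=500)

# Divide the real line into k cells with equal probability under the null,
# using normal quantiles as the interior cut points.
k = 10
interior = stats.norm.ppf(np.arange(1, k) / k)   # k - 1 interior boundaries
cells = np.searchsorted(interior, data)          # cell index 0..k-1 per point
observed = np.bincount(cells, minlength=k)
expected = np.full(k, len(data) / k)             # equal expected cell counts

# Chi-squared: sum of squared (observed - expected) differences,
# each divided by its expected cell frequency.
chi2 = ((observed - expected) ** 2 / expected).sum()
pvalue = stats.chi2.sf(chi2, df=k - 1)           # null fully specified: k-1 df
print(chi2, pvalue)
```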


Archive | 2015

More Complex Regression Models

Michael O. Finkelstein; Bruce Levin

When observations of the dependent variable form a series over time, special problems may be encountered. Perhaps the most significant difference from the models discussed thus far is the use as an explanatory variable of the value of the dependent variable itself for the preceding period. A rationale for lagged dependent variables is that they account for excluded explanatory factors. Lagged dependent and independent variables may also be used to correct for “stickiness” in the response of the dependent variable to changes in explanatory factors. For example, in a price equation based on monthly prices, changes in cost or demand factors might affect price only after several months, so that regression estimates reflecting changes immediately would be too high or too low for a few months; the error term would be autocorrelated. A lagged value of the dependent variable might be used to correct for this. Note, however, that inclusion of a lagged dependent variable makes the regression essentially predict change in the dependent variable because the preceding period value is regarded as fixed; this may affect interpretation of the equation’s coefficients. For previous examples, see Sections 13.6.2 and 13.6.3.
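A minimal sketch of a price equation with a lagged dependent variable, fit by ordinary least squares. The data-generating process and variable names are invented for illustration; they are not from the sections cited above.

```python
import numpy as np

# Hypothetical model: y[t] = b0 + b1*cost[t] + b2*y[t-1] + error,
# where the lagged value y[t-1] absorbs "sticky" price adjustment.
rng = np.random.default_rng(1)
T = 200
cost = rng.normal(10.0, 1.0, T)
y = np.empty(T)
y[0] = 20.0
for t in range(1, T):
    # true process: price responds to cost only partially each period
    y[t] = 2.0 + 0.5 * cost[t] + 0.6 * y[t - 1] + rng.normal(0.0, 0.5)

# Regress y[t] on an intercept, cost[t], and y[t-1];
# one observation is lost to the lag.
X = np.column_stack([np.ones(T - 1), cost[1:], y[:-1]])
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
print(beta)  # estimates of (b0, b1, b2)
```

Because y[t-1] appears on the right-hand side and is treated as fixed, the fitted equation is essentially predicting the period-to-period change in y, which is why the text cautions about interpreting the coefficients.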


Archive | 2015

Statistical Inference for Two Proportions

Michael O. Finkelstein; Bruce Levin

When comparing two proportions, it is common practice simply to quote a figure representing the contrast between them, such as their difference or ratio. Several such measures of association have already been introduced in Section 1.5, and we discuss others in Section 6.3. The properties of these measures and the choice of a “best” one are topics in descriptive statistics and the theory of measurement. There are interesting questions here, but what gives the subject its depth is the fact that the data summarized in the description may often be regarded as informative about some underlying population that is the real subject of interest. In such contexts, the data are used to test some hypothesis or to estimate some characteristic of that population. In testing hypotheses, a statistician computes the statistical significance of, say, the ratio of proportions observed in a sample to test the null hypothesis H0 that their ratio is 1 in the population. In making estimates, the statistician computes a confidence interval around the sample ratio to indicate the range of possibilities for the underlying population parameter that is consistent with the data. Methods for constructing confidence intervals are discussed in Section 5.3. We turn now to testing hypotheses.
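The two activities described above, testing H0 and constructing a confidence interval, can be sketched for the difference of two proportions. The counts are invented for illustration; the test uses the pooled standard error under the null, while the interval uses the unpooled standard error.

```python
import math

# Made-up counts: x1 successes out of n1 in group 1, x2 out of n2 in group 2.
x1, n1, x2, n2 = 30, 100, 45, 100
p1, p2 = x1 / n1, x2 / n2

# Z-test of H0: p1 = p2, using the pooled proportion under the null.
p_pool = (x1 + x2) / (n1 + n2)
se_null = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se_null
# Two-sided p-value from the normal cdf, Phi(z) = 0.5 * (1 + erf(z / sqrt(2))).
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 95% confidence interval for p1 - p2, using the unpooled standard error.
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
ci = (p1 - p2 - 1.96 * se, p1 - p2 + 1.96 * se)
print(round(z, 2), round(p_value, 4), ci)
```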


Archive | 2015

How to Count

Michael O. Finkelstein; Bruce Levin

The key to elementary probability calculations is an ability to count outcomes of interest among a given set of possibilities called the sample space. Exhaustive enumeration of cases often is not feasible. There are, fortunately, systematic methods of counting that do not require actual enumeration. In this chapter we introduce these methods, giving some applications to fairly challenging probability problems, but defer to the next chapter the formal theory of probability.
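The systematic counting methods referred to above, combinations, permutations, and factorials, are illustrated below with a standard card-deck example (the example is ours, not necessarily the chapter's).

```python
from math import comb, perm, factorial

# Combinations: unordered 5-card hands from a 52-card deck.
print(comb(52, 5))      # 2,598,960

# Permutations: ordered draws of 5 cards from 52.
print(perm(52, 5))      # 311,875,200

# Factorial: arrangements of 5 distinct items.
print(factorial(5))     # 120

# Counting applied to a probability: a 5-card hand that is all hearts.
p_all_hearts = comb(13, 5) / comb(52, 5)
print(p_all_hearts)
```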


Archive | 2015

Some Probability Distributions

Michael O. Finkelstein; Bruce Levin

Suppose an urn contains a certain number of chips, a proportion p of which are labeled ‘1,’ the rest labeled ‘0.’ Chips are withdrawn at random and replaced in the urn after each drawing, so that the contents of the urn remain constant. After n drawings, what is the probability of obtaining exactly r chips labeled ‘1’? An equivalent problem is to find the probability of obtaining exactly r heads among n tosses of a coin with probability of heads on a single toss equal to p. Both problems involve a sequence of n binary random variables, each identically distributed, with successive outcomes being statistically independent. Being “independent” means that probabilities of subsequent outcomes do not depend on prior outcomes. “Identically distributed” here means that the probability p remains constant from one observation to the next.
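The answer to the urn (or coin-tossing) question is the binomial probability, which can be computed directly: the number of ways to place the r successes among the n trials, times the probability of any one such sequence.

```python
from math import comb

def binomial_pmf(r: int, n: int, p: float) -> float:
    """P(exactly r successes in n independent trials, success prob p each)."""
    return comb(n, r) * p**r * (1 - p) ** (n - r)

# Probability of exactly 5 heads in 10 tosses of a fair coin.
print(binomial_pmf(5, 10, 0.5))  # 252/1024 ≈ 0.2461
```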


Archive | 2015

Combining Evidence Across Independent Strata

Michael O. Finkelstein; Bruce Levin

Quite often, a party seeking to show statistical significance combines data from different sources to create larger numbers, and hence greater significance for a given disparity. Conversely, a party seeking to avoid finding significance disaggregates data insofar as possible. In a discrimination suit brought by female faculty members of a medical school, plaintiffs aggregated faculty data over several years, while the school based its statistics on separate departments and separate years (combined, however, as discussed below).
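One standard way to combine evidence across independent strata without simply pooling the counts is the Cochran-Mantel-Haenszel statistic, which sums each stratum's deviation of the observed count from its expectation and compares the total against the summed hypergeometric variances. The 2x2 counts below are invented for illustration; they are not the medical-school data.

```python
# Each stratum (e.g. one department in one year) is a 2x2 table:
# (a, b, c, d) = (female hired, female not hired, male hired, male not hired).
strata = [(4, 16, 10, 20), (6, 14, 12, 18), (3, 17, 9, 21)]

num = 0.0  # running sum of (a - E[a]) across strata
var = 0.0  # running sum of hypergeometric variances of a
for a, b, c, d in strata:
    n = a + b + c + d
    row1, col1 = a + b, a + c
    num += a - row1 * col1 / n
    var += row1 * (n - row1) * col1 * (n - col1) / (n**2 * (n - 1))

# Under H0 (no association in any stratum), this is approximately
# chi-squared with 1 degree of freedom.
chi2_mh = num**2 / var
print(round(chi2_mh, 3))
```

Combining this way respects the stratification (separate departments and years) while still accumulating evidence across all strata, which is exactly the middle ground between the parties' aggregation and disaggregation strategies.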


Archive | 2015

Elements of Probability

Michael O. Finkelstein; Bruce Levin

Many probability puzzles can be made transparent with knowledge of a few basic rules and methods of calculation. We summarize some of the most useful ideas in this section.
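Two of the workhorse rules, the complement rule and the addition rule, are illustrated below with dice and card examples of our own choosing.

```python
# Complement rule: P(at least one six in four rolls of a fair die)
# = 1 - P(no six in any of the four rolls).
p_no_six = (5 / 6) ** 4
p_at_least_one_six = 1 - p_no_six
print(round(p_at_least_one_six, 4))  # ≈ 0.5177

# Addition rule for events that can overlap:
# P(A or B) = P(A) + P(B) - P(A and B),
# e.g. drawing a card that is a heart or an ace.
p_heart, p_ace, p_ace_of_hearts = 13 / 52, 4 / 52, 1 / 52
print(p_heart + p_ace - p_ace_of_hearts)  # 16/52 ≈ 0.3077
```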


Chance | 2005

Chance at the Bar: Statistical Evidence in an Education Finance Case

Michael O. Finkelstein; Bruce Levin

In the waves of litigation over the last 30 years that have restructured the financing of public education in many states, statistical evidence has been at the core of plaintiffs’ claims. A prime example is Campaign for Fiscal Equity v. The State of New York, decided by the New York Court of Appeals (New York’s highest court) in the summer of 2003 and now pending again before the trial court. In this landmark litigation, Chief Judge Judith Kaye, writing for the Court, held that the state had failed in its duty under the Education Article of the New York State constitution to provide school districts in New York City (NYC) with sufficient funds to give its students a “sound basic education.” This the Court equated with a meaningful high school education. The Court directed the state to determine the “actual cost” of providing the opportunity for such an education in NYC and then to come up with reforms to the system of school financing and management to provide the necessary funds to accomplish that result. The Court gave the state until July 30, 2004, to enact the necessary reforms. When the state, beset by legislative paralysis, failed to act by last July, the trial judge, J. Leland


Chance | 2004

Chance at the Bar: Stopping Rules in Clinical Trials

Michael O. Finkelstein; Bruce Levin

Sequential analysis of trial data is a big subject in statistics but not so in law. That may be changing with what seems to be an increase in litigation over the conduct of clinical trials of new medical therapies involving human subjects. There are issues here with regard to early stopping to make a successful new treatment available to persons in a control group, but the conduct most likely to produce claims is the failure to stop an experiment that is generating adverse effects. Since experimental therapies in clinical trials often fail to produce the hoped-for benefit, or any benefit compared to control therapy, the risks on the downside in such cases are painfully exposed; this can create a fertile field for lawsuits. An important recent case involving the failure to stop a clinical trial appears to be one of the first in which a statistician has testified. The case arose out of a blood-cancer trial of a new therapy conducted from 1981 to 1993 by the Fred Hutchinson Cancer Research Center in Seattle, Washington. The Hutchinson Center is a leader in transplant therapy, and performs hundreds of bone-marrow transplants for blood-cancer patients annually. The theory behind bone-marrow transplants is that blood-cancer patients will benefit from higher doses of chemotherapy and radiation, but those doses tend to destroy bone marrow that produces new blood cells; bone-marrow transplants replace
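A toy simulation (not from the case described above) shows why interim looks at accumulating trial data need special stopping boundaries: naively testing at the fixed 5% level at each of several interim analyses inflates the overall false-positive rate well above the nominal 5%, which is the core statistical problem that formal stopping rules address.

```python
import numpy as np

# Simulate many null trials (no treatment effect), each analyzed at 5
# interim looks with the naive fixed-level rule |z| > 1.96, stopping at
# the first "significant" look.
rng = np.random.default_rng(2)
n_trials, n_per_stage, n_stages = 5000, 20, 5

false_positives = 0
for _ in range(n_trials):
    data = rng.normal(0.0, 1.0, n_per_stage * n_stages)  # null is true
    for stage in range(1, n_stages + 1):
        x = data[: stage * n_per_stage]
        z = x.mean() * np.sqrt(len(x))  # z-statistic with known sd = 1
        if abs(z) > 1.96:               # naive unadjusted interim test
            false_positives += 1
            break

rate = false_positives / n_trials
print(rate)  # noticeably above the nominal 0.05
```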


Chance | 2004

Chance at the Bar: Epidemiologic Evidence in the Silicone Breast Implant Cases

Michael O. Finkelstein; Bruce Levin

Last October, an advisory panel of the Food and Drug Administration, after a two-day hearing, approved an application by Inamed Corporation to remarket silicone breast implants. The panel, by a nine-to-six vote, found that Inamed had provided reasonable assurances of safety for such implants. The dissenters had not concluded that silicone implants caused disease, but believed that Inamed's two-year study was not sufficient to demonstrate long-term safety. In January, the FDA, in a surprise move, in effect agreed with the dissenters and deferred a decision on the implants. The FDA found that while information developed over the past ten years had increased its assurance of implant safety, it needed more information on the rates and long-term effects of leaking or rupture for the product to pass the threshold for approval. Inamed responded by vowing to move forward diligently with new studies, in cooperation with the agency. If the FDA ultimately approves the implants, that would be a singular turn of events because, as everyone knows, the last time the implants were widely marketed the result was a legal disaster: the manufacturers were sued by tens of thousands of women who contended that the devices caused connective tissue disease and many other problems. This tsunami of litigation drove one manufacturer, Dow

Collaboration

Top Co-Author: David H. Kaye (Pennsylvania State University)