Did you know? This test can help us make an informed choice between two competing models!

In statistics, the likelihood-ratio test is a hypothesis testing method used to compare the goodness of fit of two competing statistical models. Of these two models, one is obtained by maximizing the likelihood over the entire parameter space, and the other by maximizing over a restricted subset of it. When the observed data support the more restricted model (i.e., the null hypothesis), the two maximized likelihoods should differ by no more than sampling error.

Thus, the purpose of the likelihood-ratio test is to check whether this likelihood ratio differs significantly from one, or, equivalently, whether its natural logarithm differs significantly from zero.

This test, also known as the Wilks test, is the oldest of the three classical hypothesis-testing approaches, the other two being the Lagrange multiplier test and the Wald test. The latter two can be viewed as approximations of the likelihood-ratio test and are asymptotically equivalent to it. When both hypotheses fully specify the distribution (i.e., both are simple hypotheses with no unknown parameters), the use of the likelihood-ratio test can be justified by the Neyman–Pearson lemma. It is worth mentioning that the lemma shows that, among all competing tests of the same level, this test has the greatest power.

General Definition

Suppose we have a statistical model with parameter space Θ. The null hypothesis usually states that the parameter θ lies in a specified subset Θ0, while the alternative hypothesis states that θ lies in the complement of Θ0, that is, θ ∈ Θ \ Θ0. The likelihood-ratio test statistic is computed as:

λLR = −2 ln [ supθ∈Θ0 L(θ) / supθ∈Θ L(θ) ]

Here sup denotes the supremum. Since all likelihoods are positive and the constrained maximum cannot exceed the unconstrained maximum, the likelihood ratio lies between zero and one. The likelihood-ratio test statistic is often expressed as a difference of log-likelihoods:

λLR = −2 [ ℓ(θ0) − ℓ(θ̂) ]

Here, the key to the likelihood-ratio test is how the two models relate to each other. If the models are nested (i.e., the more complex model can be reduced to the simpler one by imposing restrictions on its parameters), then many common test statistics can be viewed as approximate log-likelihood-ratio tests. These include the Z test, the F test, the G test, and Pearson's chi-squared test, among others.
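As a concrete sketch of the definition above, the snippet below (using hypothetical simulated data) fits a normal model twice: once with the mean restricted to zero under H0, and once with both mean and variance free, then forms λLR from the two maximized log-likelihoods.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=100)   # hypothetical sample

# Restricted model (H0: mu = 0): only the variance is estimated.
sigma0 = np.sqrt(np.mean(x**2))                # MLE of sigma when mu is fixed at 0
ll_null = stats.norm.logpdf(x, 0.0, sigma0).sum()

# Full model: both mean and variance estimated by maximum likelihood.
mu_hat = x.mean()
sigma_hat = x.std()                            # np.std uses ddof=0, i.e. the MLE
ll_full = stats.norm.logpdf(x, mu_hat, sigma_hat).sum()

# lambda_LR = -2 [ l(theta0) - l(theta_hat) ]
lam = -2 * (ll_null - ll_full)
```

Because the restricted fit can never beat the unrestricted fit, `lam` is always nonnegative, exactly as the "ratio between zero and one" remark implies.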

The simple-hypotheses case

In simple-versus-simple hypothesis testing, the distribution of the data is fully specified under both the null and alternative hypotheses. In that case the likelihood ratio itself can serve as the test statistic:

Λ(x) = L(θ0 | x) / L(θ1 | x)

If Λ > c, do not reject the null hypothesis H0; if Λ < c, reject H0. In this case, the Neyman–Pearson lemma further shows that this likelihood-ratio test is the most powerful among all level-α tests.
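A minimal sketch of this decision rule, assuming two fully specified normal hypotheses (H0: N(0, 1) versus H1: N(1, 1)) and a hypothetical critical value c, might look like this; the ratio is computed in log space to avoid numerical underflow:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=1.0, size=50)    # data actually drawn under H0

# Lambda(x) = L(theta0 | x) / L(theta1 | x), computed via log-likelihoods
log_lam = (stats.norm.logpdf(x, 0.0, 1.0).sum()
           - stats.norm.logpdf(x, 1.0, 1.0).sum())
Lam = np.exp(log_lam)

c = 1.0          # hypothetical threshold; in practice c is chosen to attain level alpha
reject = Lam < c
```

Since the data here really come from H0, Λ comes out large and the rule correctly does not reject. In practice, c is calibrated so that the probability of rejection under H0 equals the chosen level α.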

Understanding the Likelihood Ratio Test

The likelihood ratio is a function of the data and is an indicator of the performance of one model relative to another. If the value of the likelihood ratio is small, it means that the probability of the observed result under the null hypothesis is much lower than that under the alternative hypothesis, thus rejecting the null hypothesis. Conversely, a high likelihood ratio indicates that the observed result is almost as likely under the null hypothesis as it is under the alternative hypothesis, so the null hypothesis cannot be rejected.

A Worked Example

Suppose we have n samples from a normal distribution and wish to test whether the population mean μ equals a given value μ0. The null hypothesis is H0: μ = μ0, and the alternative is H1: μ ≠ μ0. Carrying out the maximizations yields the likelihood-ratio statistic:

λLR = n ln [ 1 + t^2 / (n - 1) ]

Here t is the ordinary one-sample t statistic. Its exact t distribution, or the asymptotic chi-squared approximation, then guides the subsequent inference.
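The formula above can be checked numerically. The sketch below (with hypothetical simulated data) computes the one-sample t statistic, plugs it into λLR = n ln[1 + t²/(n − 1)], and verifies that this agrees with evaluating −2[ℓ(θ0) − ℓ(θ̂)] directly, with the variance profiled out under each hypothesis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 30
x = rng.normal(loc=5.0, scale=2.0, size=n)     # hypothetical sample
mu0 = 5.0

# One-sample t statistic for H0: mu = mu0
t = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))

# Likelihood-ratio statistic from the closed-form expression in the text
lam = n * np.log(1 + t**2 / (n - 1))

# Direct computation: profile out sigma under each hypothesis
sigma0 = np.sqrt(np.mean((x - mu0) ** 2))      # MLE of sigma with mu fixed at mu0
sigma1 = np.sqrt(np.mean((x - x.mean()) ** 2)) # MLE of sigma with mu free
lam_direct = -2 * (stats.norm.logpdf(x, mu0, sigma0).sum()
                   - stats.norm.logpdf(x, x.mean(), sigma1).sum())
```

The two computations agree to floating-point precision, confirming that the closed-form expression is just the general definition specialized to the normal-mean problem.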

Asymptotic Distribution: Wilks' Theorem

Although the exact distribution of the likelihood ratio is difficult to determine in many cases, Wilkes's theorem states that if the null hypothesis is true and the sample size n tends to infinity, then The test statistic will asymptotically follow a chi-square distribution. This enables us to calculate the likelihood ratio and compare it to the desired significance level.
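Putting Wilks's theorem to use is then a one-liner: compare λLR to a chi-squared distribution whose degrees of freedom equal the number of restricted parameters. The sketch below uses a hypothetical statistic value of 4.2 from a test that restricts one parameter:

```python
from scipy import stats

lam = 4.2                        # hypothetical likelihood-ratio statistic
df = 1                           # one parameter restricted under H0
p_value = stats.chi2.sf(lam, df) # survival function: P(chi2_df > lam)
reject = p_value < 0.05
```

Here 4.2 exceeds the 5% critical value of the chi-squared(1) distribution (about 3.84), so the null hypothesis would be rejected at the 0.05 level.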

Is it possible to further improve the process of choosing between statistical models through other methods?
