Why do we need alternative hypothesis? - Cross Validated
Hey! Can someone explain to me in simple terms the definition of the null hypothesis? If you can use an example, that would be great! Also, if we reject the null hypothesis, does that mean the alternative hypothesis is true?
I understand the null hypothesis and how we can show it is wrong, but my textbook doesn't make it clear how the alternative hypothesis works.
It says things like: for an experiment we have $H_0: \mu = 3$ and $H_1: \mu = 4$.
OK, so we show $H_0$ is wrong. How does that support $H_1$? The real $\mu$ could be something like $\mu = 2311$, and then both hypotheses would be useless.
Also, why should we change our experiments when $H_0: \mu = 3$ and $H_1: \mu < 3$, or $H_1: \mu < 2$, or $H_1: \mu > 3$, if the purpose of the experiment is just to reject $H_0$?
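To see concretely why the stated alternative changes the test even though only $H_0$ gets rejected, here is a minimal sketch of a known-variance z-test for $H_0: \mu = 3$ (the values $\sigma = 1$, $n = 25$, and the sample mean $2.6$ are assumptions for illustration, not from the question):

```python
# Sketch: how the stated alternative shapes the rejection region of a
# z-test for H0: mu = 3 (sigma = 1 and n = 25 are assumed values).
import math

def z_stat(xbar, mu0=3.0, sigma=1.0, n=25):
    """Standardized distance of the sample mean from the null value."""
    return (xbar - mu0) / (sigma / math.sqrt(n))

def reject(xbar, alternative, alpha=0.05):
    z = z_stat(xbar)
    if alternative == "two-sided":   # H1: mu != 3 -> reject in both tails
        return abs(z) > 1.96         # z_{alpha/2} for alpha = 0.05
    if alternative == "less":        # H1: mu < 3 -> reject only in lower tail
        return z < -1.645            # z_{alpha}
    if alternative == "greater":     # H1: mu > 3 -> reject only in upper tail
        return z > 1.645

# A sample mean of 2.6 gives z = -2.0: it rejects under "two-sided" and
# "less", but not under "greater". The alternative decides which tail counts.
print(reject(2.6, "two-sided"), reject(2.6, "less"), reject(2.6, "greater"))
# -> True True False
```

The data and the null are identical in all three cases; only the alternative moves the rejection region, which is exactly why the experiment is designed differently for each $H_1$.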
There was, historically, disagreement about whether an alternative hypothesis was necessary. Let me explain this point of disagreement by considering the opinions of Fisher and Neyman, within the context of frequentist statistics, and a Bayesian answer.
Fisher - We do not need an alternative hypothesis; we can simply test a null hypothesis using a goodness-of-fit test. The outcome is a $p$-value, providing a measure of evidence against the null hypothesis.
Neyman - We must perform a hypothesis test between a null and an alternative. The test is such that it would result in type-1 errors at a fixed, pre-specified rate, $\alpha$. The outcome is a decision - to reject or not reject the null hypothesis at the level $\alpha$.
We need an alternative from a decision-theoretic perspective - we are making a choice between two courses of action - and because we should report the power of the test, $$ 1 - p\left(\textrm{Accept $H_0$} \, \middle|\, H_1\right). $$ We should seek the most powerful tests possible, to have the best chance of rejecting $H_0$ when the alternative is true.
To satisfy both these points, the alternative hypothesis cannot be the vague 'not $H_0$' one.
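The power calculation above only makes sense once the alternative is pinned down. A minimal numeric sketch, using the question's $H_0: \mu = 3$ vs $H_1: \mu = 4$ with assumed values $\sigma = 1$, $n = 25$, $\alpha = 0.05$ for a one-sided z-test:

```python
# Sketch: computing power = 1 - P(Accept H0 | H1) for a one-sided z-test.
# H0: mu = 3 and H1: mu = 4 are from the question; sigma = 1, n = 25, and
# alpha = 0.05 are assumed values for illustration.
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power(mu0=3.0, mu1=4.0, sigma=1.0, n=25, z_alpha=1.645):
    se = sigma / math.sqrt(n)
    cutoff = mu0 + z_alpha * se           # reject H0 when xbar exceeds this
    # Under H1, xbar ~ Normal(mu1, se); power = P(xbar > cutoff | H1)
    return 1.0 - norm_cdf((cutoff - mu1) / se)

print(round(power(), 4))          # near 1: almost always rejects when mu = 4
print(round(power(mu1=3.2), 4))   # much lower against a closer alternative
```

Note that without a specific $H_1$ there is no distribution to evaluate the rejection probability under, so "power" is simply undefined - which is Neyman's point.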
Bayesian - We must consider at least two models and update their relative plausibility with data. With only a single model, we simply have $$ p(H_0) = 1 $$ no matter what data we collect. To make calculations in this framework, the alternative hypothesis (or model, as it would be known in this context) cannot be the ill-defined 'not $H_0$' one. I call it ill-defined since we cannot write the model $p(\text{data}|\text{not }H_0)$.
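The Bayesian updating can be sketched with two fully specified point hypotheses. The setup below is an assumed toy example (one observation $x \sim \text{Normal}(\mu, 1)$, equal prior probabilities), not anything from the question:

```python
# Sketch of the Bayesian point: with two fully specified models we can
# update their relative plausibility by Bayes' rule. Assumed toy setup:
# one observation x ~ Normal(mu, 1), H0: mu = 3, H1: mu = 4, equal priors.
import math

def normal_pdf(x, mu, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_h0(x, mu0=3.0, mu1=4.0, prior0=0.5):
    # Bayes' rule over the two candidate models. This computation is
    # impossible for a vague 'not H0': p(x | not H0) cannot be written down.
    w0 = prior0 * normal_pdf(x, mu0)
    w1 = (1 - prior0) * normal_pdf(x, mu1)
    return w0 / (w0 + w1)

print(round(posterior_h0(3.0), 3))  # data near 3: posterior favors H0 (> 0.5)
print(round(posterior_h0(4.5), 3))  # data near 4: posterior favors H1 (< 0.5)
```

The key observation is that every line requires an explicit likelihood for the alternative; remove it and there is nothing to normalize against.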
I will focus on "If we do not talk about accepting alternative hypothesis, why do we need to have alternative hypothesis at all?"
Because it helps us to choose a meaningful test statistic and design our study to have high power---a high chance of rejecting the null when the alternative is true. Without an alternative, we have no concept of power.
Imagine we only have a null hypothesis and no alternative. Then there's no guidance on how to choose a test statistic that will have high power. All we can say is, "Reject the null whenever you observe a test statistic whose value is unlikely under the null." We can pick something arbitrary: we could draw Uniform(0,1) random numbers and reject the null when they are below 0.05. This happens under the null "rarely," no more than 5% of the time---yet it's also just as rare when the null is false. So this is technically a statistical test, but it's meaningless as evidence for or against anything.
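The arbitrary "test" described above can be simulated directly; this sketch just checks that its rejection rate sits near 5% while ignoring the data entirely:

```python
# Sketch of the arbitrary "test" above: reject whenever an independent
# Uniform(0,1) draw falls below 0.05. The rejection rate is about 5%
# whether or not the null is true, so the test has no power at all.
import random

def uniform_test_reject(rng):
    return rng.random() < 0.05  # never looks at any data

rng = random.Random(0)  # fixed seed so the simulation is reproducible
trials = 100_000
rate = sum(uniform_test_reject(rng) for _ in range(trials)) / trials
print(round(rate, 3))  # close to 0.05 regardless of the true state of nature
```

Since the rejection probability is 5% under every possible state of the world, its power equals its size: rejecting tells you nothing.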
Instead, usually we have some scientifically-plausible alternative hypothesis ("There is a positive difference in outcomes between the treatment and control groups in my experiment"). We'd like to defend it against potential critics who would bring up the null hypothesis as devil's advocates ("I'm not convinced yet---maybe your treatment actually hurts, or has no effect at all, and any apparent difference in the data is due only to sampling variation").
With these two hypotheses in mind, now we can set up a powerful test, by choosing a test statistic whose typical values under the alternative are unlikely under the null. (A positive 2-sample t-statistic far from 0 would be unsurprising if the alternative is true, but surprising if the null is true.) Then we figure out the test statistic's sampling distribution under the null, so we can calculate p-values---and interpret them. When we observe a test statistic that's unlikely under the null, especially if the study design, sample size, etc. were chosen to have high power, this provides some evidence for the alternative.
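The 2-sample t-statistic example can be illustrated with a small simulation. The group sizes, effect size, and seed below are all assumed values chosen for the sketch:

```python
# Sketch: a pooled two-sample t-statistic is typically far from 0 under an
# assumed treatment effect and near 0 under the null, which is what makes
# it a sensible, powerful statistic for this pair of hypotheses.
import math
import random
import statistics

def t_stat(a, b):
    """Pooled-variance two-sample t-statistic for mean(a) - mean(b)."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * statistics.variance(a)
              + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
        pooled * (1 / na + 1 / nb))

rng = random.Random(1)  # fixed seed for reproducibility
control = [rng.gauss(0.0, 1.0) for _ in range(50)]
null_group = [rng.gauss(0.0, 1.0) for _ in range(50)]  # no treatment effect
treated = [rng.gauss(1.0, 1.0) for _ in range(50)]     # assumed effect of 1

print(round(t_stat(null_group, control), 2))  # small: unsurprising under H0
print(round(t_stat(treated, control), 2))     # far from 0: points toward H1
```

This is the sense in which the alternative guides the choice of statistic: we picked one whose distribution separates cleanly under the two hypotheses.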
So, why don't we talk about "accepting" the alternative hypothesis? Because even a high-powered study doesn't rigorously prove that the null is false and the alternative is true. Rejecting the null is still a kind of evidence for the alternative, but weaker than some other kinds of evidence.