Hey! Can someone explain to me, in simple terms, the definition of a null hypothesis? If you can use an example, that would be great! Also, if we reject the null hypothesis, does that mean the alternative hypothesis is true?
The purpose of the null is to convert a problem from one of inductive reasoning to one of deductive reasoning. The alternative approach, and the method that preceded it, was the method of inverse probability. That method is now generally called Bayesian statistics.
Imagine that you had three scientific hypotheses, denoted a, b, and c. Imagine that the true model is d, but no one has discovered it yet. The world is still flat, white is still a color, and Mercury follows Newton's laws.
A Bayesian test would create three hypotheses, one for a, one for b, and one for c. For a data set that is large enough, you would end up with the hypothesis, or the combination of hypotheses, that is most likely true. However, since d wasn't tested, you may continue to be fooled by the idea yet to be thought of.
The Frequentist hypothesis testing regime would take one of them, say a, as the alternative hypothesis, and the null would be "not a." The null contains every hypothesis that is not the alternative.
The first example in the academic literature, though not the first null hypothesis, is the one where R.A. Fisher took as his null that Mendel's laws were false. If you discredit the null, then you exclude every explanation, including those not yet considered. The first null hypothesis was that Muriel Bristol (a colleague of Fisher's) could not correctly detect the difference between tea poured into milk and milk poured into tea. That was the very first statistical test.
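If it helps to see the arithmetic, here is a minimal sketch of that tea-tasting test, assuming the textbook design of eight cups (four of each kind) and a perfect set of guesses; scipy's fisher_exact is just a modern stand-in for the hand calculation.

```python
# A minimal sketch of the "lady tasting tea" test, assuming the textbook
# design: 8 cups, 4 milk-first and 4 tea-first, all identified correctly.
from scipy.stats import fisher_exact

#            called milk-first  called tea-first
observed = [[4, 0],   # actually milk-first
            [0, 4]]   # actually tea-first

# Null: she cannot tell the difference, so her calls are independent of
# how each cup was actually prepared. One-sided alternative: she can.
odds_ratio, p_value = fisher_exact(observed, alternative="greater")
print(p_value)  # 1/70 ≈ 0.014, the chance of a perfect score by pure guessing
```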
There is a slight difference between R.A. Fisher's idea of a null and Neyman and Pearson's idea of a null. Fisher felt there was a null but no alternative hypothesis: rejecting the null told you what was wrong, but it did not automatically direct you to what was correct. Neyman and Pearson championed acceptance and rejection regions based on frequencies, and they felt the method directed behavior.
The logic was that the method created a probabilistic version of modus tollens. Modus tollens is "if A then B; and not B; therefore, not A." Here: if the null is true, then the test statistic will fall in the acceptance region; if it does not, then you can reject the null.
A weakness of the methodology was illustrated by an author whom I cannot currently locate, in this somewhat tongue-in-cheek way: there are 535 elected members representing the states in the U.S. Congress, and there are 360 million Americans. Since 535/360,000,000 is less than .05, if you randomly sample a group of Americans and happen to pick a member of Congress, that person cannot be an American (p < .05).
While Fisher's "no effect" hypothesis is the most common version, because its implication is that something does have an effect under the alternative, it is not a requirement that a parameter equal zero, or that a set of parameters all equal zero.
What matters is that the null is the opposite of what you want to assert, stated before seeing the data.
That makes the null hypothesis method a potent tool. Think about it as a rhetorical device: your opponent opposes what you believe you have just discovered.
You do not assert that your own hypothesis is true. You assert that your opponent's position, the null, is true, and you build your probabilities on the assumption that you are the only person who is wrong. Everyone else is right, and you are wrong.
It is a powerful rhetorical tool to concede the argument from the beginning, but then ask, "what would the world look like if I am the one that is wrong?" That is the null. If you reject the null, then what you are really saying is that "nature rejects all other ideas except mine."
Now, as to your question: you want to show that college algebra matters, therefore your null hypothesis is that college algebra does not matter. We will ignore all the other methodological issues that would really be present, since people without college algebra may have self-selection issues, as may the people with college algebra.
Your null is that algebra does not matter. The alternative is that it does. If the p-value is less than your cutoff, chosen before collecting the data, then you can reject the null. If it is not, then you should behave as if it is true until you either do more research or find another way to come to a conclusion.
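As a concrete sketch of that decision rule, suppose (purely hypothetically) you had cross-tabulated whether students took college algebra against whether they passed a later course; a chi-square test of independence could then stand in for the test described above. The counts below are invented for illustration only.

```python
# Hypothetical 2x2 table: rows = took algebra / did not, cols = passed / failed.
from scipy.stats import chi2_contingency

observed = [[45, 15],   # took college algebra
            [30, 30]]   # did not take college algebra

alpha = 0.05  # cutoff chosen before collecting the data

# Note: scipy applies Yates' continuity correction to 2x2 tables by default.
chi2_stat, p_value, dof, expected = chi2_contingency(observed)

if p_value < alpha:
    print("Reject the null that college algebra does not matter.")
else:
    print("Do not reject; behave as if the null is true pending more research.")
```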
It would be dubious, ignoring the methodology issues, to assert that college algebra matters when you only have one sample; the method is intended for repetition. Nonetheless, by following the results of the test you would only be made a fool of no more than 100α percent of the time, again ignoring the methodological issues.
It appears you are asking for clarification.
A null, Ho, essentially predicts no effect (no difference between groups, no correlation/association between variables, etc.), whereas an alternative/experimental hypothesis, Ha or H1, predicts an effect.
So in your example, you have the gist of Ho and Ha (though the wording could be improved).
Your chi-square test gives you a chi-square value. You need to either a) compare this with a 'critical' chi-square value, or b) know the p-value associated with your chi-square value and compare this with an 'alpha' cutoff (typically .05 in psychology, for example).
These amount to the same kind of thing. For this example (1 degree of freedom), if your alpha/cutoff is .05, then your 'critical' chi-square is 3.841.
NHST requires that, if your p-value is LESS than your alpha/cutoff, then you reject the null.
Here's where the confusion might arise: As chi-square values increase, associated p values decrease.
So, if your chi-square value is SMALLER than the critical, your associated p-value would be LARGER than the alpha/cutoff. If p is larger, the null is NOT rejected.
If your chi-square value is LARGER than the critical, your associated p-value would be SMALLER than the alpha/cutoff. If p is smaller, the null IS rejected.
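If it helps, the equivalence between route a) and route b) can be checked directly. The sketch below assumes a chi-square test with 1 degree of freedom and the usual .05 alpha; the two example chi-square values are arbitrary.

```python
# Two views of the same cutoff: the critical value and the p-value.
from scipy.stats import chi2

alpha = 0.05
df = 1

print(chi2.ppf(1 - alpha, df))  # critical value ≈ 3.841

# A chi-square value above the critical value has a p-value below alpha:
print(chi2.sf(3.90, df))  # ≈ 0.048 -> p < .05, reject the null
print(chi2.sf(3.50, df))  # ≈ 0.061 -> p > .05, do not reject
```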
Your question starts out as if the statistical null and alternative hypotheses are what you are interested in, but the penultimate sentence makes me think that you might be more interested in the difference between scientific and statistical hypotheses.
Statistical hypotheses can only be those that are expressible within a statistical model. They typically concern values of parameters within the statistical model. Scientific hypotheses almost invariably concern the real world, and they often do not directly translate into the much more limited universe of the chosen statistical model. Few introductory stats books spend any real time considering what constitutes a statistical model (it can be very complicated) and the trivial examples used have scientific hypotheses so simple that the distinction between model and real-world hypotheses is blurry.
I have written an extensive account of hypothesis and significance testing that includes several sections dealing with the distinction between scientific and statistical hypotheses, as well as the dangers that might come from assuming a match between the statistical model and the real-world scientific concerns: A Reckless Guide to P-values
So, to answer your explicit questions:
• No, statisticians do not always use null and alternative hypotheses. Many statistical methods do not require them.
• It is common practice in some disciplines (and maybe some schools of statistics) to specify the null and alternative hypothesis when a hypothesis test is being used. However, you should note that a hypothesis test requires an explicit alternative at the planning stage (e.g. for sample size determination, as in the sketch after this list), but once the data are in hand that alternative is no longer relevant. Many times the post-data alternative can be no more than 'not the null'.
• I'm not sure of the mental heuristic thing, but it does seem possible to me that the beginner courses omit so much detail in the service of simplicity that the word 'hypothesis' loses its already vague meaning.
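To make the planning-stage point above concrete, here is a rough sketch of how an explicit alternative feeds a sample-size calculation. The normal-approximation formula and the assumed effect size of half a standard deviation are my illustrative choices, not anything from the answer itself.

```python
# Approximate per-group sample size for a two-sided, two-sample comparison of means,
# using the usual normal approximation:
#   n per group ≈ 2 * ((z_{1-alpha/2} + z_{power}) / effect_size)^2
from math import ceil
from scipy.stats import norm

alpha = 0.05        # planned type I error rate
power = 0.80        # planned power against the specific alternative
effect_size = 0.5   # the explicit alternative: a difference of 0.5 SDs (assumed)

z_alpha = norm.ppf(1 - alpha / 2)
z_power = norm.ppf(power)
n_per_group = ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

print(n_per_group)  # ≈ 63; with no explicit alternative there is nothing
                    # to plug in for effect_size
```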
You wrote
the declaration of a null and alternative hypothesis is the "first step" of any good experiment and subsequent analysis.
Well, you did put quotes around first step, but I'd say the first step in an experiment is figuring out what you want to figure out.
As to "subsequent analysis", it might even be that the subsequent analysis does not involve testing a hypothesis! Maybe you just want to estimate a parameter. Personally, I think tests are overused.
Often, you know in advance that the null is false and you just want to see what is actually going on.
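For what it's worth, a sketch of that "estimate rather than test" attitude might look like the following: report the estimated difference with a confidence interval instead of a p-value. The data here are simulated purely for illustration.

```python
# Estimate a mean difference with a 95% confidence interval instead of testing it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=10.0, scale=2.0, size=40)  # made-up data
group_b = rng.normal(loc=11.0, scale=2.0, size=40)

diff = group_b.mean() - group_a.mean()
se = np.sqrt(group_a.var(ddof=1) / len(group_a) + group_b.var(ddof=1) / len(group_b))
df = len(group_a) + len(group_b) - 2  # rough df; a Welch correction would differ slightly
t_crit = stats.t.ppf(0.975, df)

print(f"estimated difference: {diff:.2f}")
print(f"95% CI: ({diff - t_crit * se:.2f}, {diff + t_crit * se:.2f})")
```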