Hey! Can someone explain to me in simple terms the definition of a null hypothesis? If you can use an example, that would be great! Also, if we reject the null hypothesis, does it mean that the alternative hypothesis is true?
What is the difference between "testing of hypothesis" and "test of significance"? - Cross Validated
Significance testing is what Fisher devised and hypothesis testing is what Neyman and Pearson devised to replace significance testing. They are not the same and are mutually incompatible to an extent that would surprise most users of null hypothesis tests.
Fisher's significance tests yield a p value that represents how extreme the observations are under the null hypothesis. That p value is an index of evidence against the null hypothesis and is the level of significance.
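As a rough illustration of that idea (not part of the answer itself; the one-sample t-test and the numbers are invented for the example), here is how a Fisherian p value might be computed in Python:

```python
# Rough illustration of a Fisherian p value: how extreme are the observed
# data under the null hypothesis that the true mean is zero?
from scipy import stats

sample = [0.3, 1.1, -0.2, 0.8, 0.5, 1.4, 0.1, 0.9]   # invented measurements

t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)

# Fisher's reading: p is a graded index of evidence against the null;
# the smaller p is, the more extreme the data are under the null.
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```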
Neyman and Pearson's hypothesis tests set up both a null hypothesis and an alternative hypothesis and work as a decision rule for accepting or rejecting the null hypothesis. Briefly (there is more to it than I can put here), you choose an acceptable rate of false positive inference, alpha (usually 0.05), and either reject or accept the null depending on whether the p value falls below or above alpha. You have to abide by the statistical test's decision if you wish to protect against false positive errors.
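A minimal sketch of that decision rule in Python, assuming a two-sample comparison with invented data and the conventional alpha of 0.05:

```python
# Sketch of the Neyman-Pearson decision rule: fix alpha before looking at
# the data, then reject or accept the null purely on whether p < alpha.
from scipy import stats

control   = [5.1, 4.8, 5.5, 5.0, 4.9, 5.2]    # invented data
treatment = [5.9, 6.1, 5.7, 6.4, 5.8, 6.0]

alpha = 0.05                                    # pre-chosen false-positive rate
p_value = stats.ttest_ind(treatment, control).pvalue

decision = "reject H0" if p_value < alpha else "accept H0"
print(f"p = {p_value:.4f} -> {decision}")       # the decision, not p itself, is what the test reports
```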
Fisher's approach allows you to take anything you like into account in interpreting the result, for example pre-existing evidence can be informally taken into account in the interpretation and presentation of the result. In the N-P approach that can only be done in the experimental design stage, and seems to be rarely done. In my opinion the Fisherian approach is more useful in basic bioscientific work than is the N-P approach.
There is a substantial literature about inconsistencies between significance testing and hypothesis testing and about the unfortunate hybridisation of the two. You could start with this paper: Goodman, Toward evidence-based medical statistics. 1: The P value fallacy. https://pubmed.ncbi.nlm.nih.gov/10383371/
In many cases, these two terms mean the same thing. However, they can also be quite different.
Testing a hypothesis consists of first saying what you believe will occur with some phenomenon, then developing some kind of test for this phenomenon, and then determining whether or not the phenomenon actually occurred. In many cases, testing of a hypothesis need not involve any kind of statistical test. I am reminded of a quote attributed to the physicist Ernest Rutherford: "If your experiment needs statistics, you ought to have done a better experiment." That being said, testing of hypotheses normally does use some kind of statistical tool.
In contrast, testing of significance is a purely statistical concept. In essence, one has two hypotheses: the null hypothesis, which states that there is no difference between your two (or more) collections of data, and the alternative hypothesis, which states that there is a difference between your samples that did not occur by chance.
Based on the design of your study, you then compare the two (or more) samples using a statistical test. The test gives you a number (the test statistic), which you compare to a reference distribution (such as the normal, t, or F distribution); if the test statistic exceeds a critical value, you reject the null hypothesis and conclude that there is a difference between the samples. The criterion is normally that the probability of such a difference occurring by chance is less than one in twenty (p < 0.05), though other thresholds are sometimes used.
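A short sketch of that procedure in Python (the data and the choice of an equal-variance two-sample t-test are assumptions for illustration only):

```python
# Sketch of the procedure above: compute a test statistic, compare it to a
# critical value from the reference (t) distribution, reject at p < 0.05.
from scipy import stats

group_a = [12.1, 11.8, 12.6, 12.0, 11.5, 12.3]   # invented samples
group_b = [12.9, 13.2, 12.7, 13.5, 12.8, 13.1]

result = stats.ttest_ind(group_a, group_b)        # equal-variance two-sample t-test
df = len(group_a) + len(group_b) - 2              # degrees of freedom for that test
critical = stats.t.ppf(1 - 0.05 / 2, df)          # two-sided critical value at the 0.05 level

if abs(result.statistic) > critical:
    print(f"|t| = {abs(result.statistic):.2f} > {critical:.2f}: reject the null hypothesis")
else:
    print(f"|t| = {abs(result.statistic):.2f} <= {critical:.2f}: do not reject the null hypothesis")
```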