I'm doing a task for my Psych class that requires me to know what a one-tailed hypothesis is, what a two-tailed hypothesis is, and what the difference is between the two.
I've tried looking it up online, but it just gives me bell curve graphs and statistics jargon that I don't understand.
Actually, in the context of tests of mean differences, it tends to be the other way around: it is almost never appropriate to use a one-sided test. The reason for this is that we need to specify our objects of inference (e.g., hypothesis tests, confidence intervals) prior to seeing the data, or we will induce bias in those objects. When seeking to make inferences about two unknown quantities, it is generally best not to assume that the direction of interest is known a priori, and so it is usually best to test for a difference rather than for a difference in a specified direction. Others will argue that it is legitimate to use a one-sided test when you have specified a direction of interest a priori, but I am sceptical even in this case. I would counsel that you either avoid classical hypothesis testing altogether (e.g., report a confidence interval instead) or use a two-sided hypothesis test, even if you are interested in a relationship with a specified direction.
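To make the confidence-interval suggestion concrete, here is a minimal sketch (Python with NumPy/SciPy; the group names, values, and seed are made up for illustration) of reporting a two-sided Welch interval for a mean difference instead of a directional test:

```python
# Minimal sketch: a two-sided 95% Welch confidence interval for a mean difference.
# The data here are hypothetical, generated only for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=5.0, scale=1.0, size=30)   # hypothetical sample A
group_b = rng.normal(loc=5.5, scale=1.0, size=30)   # hypothetical sample B

diff = group_a.mean() - group_b.mean()

# Welch standard error and degrees of freedom (no equal-variance assumption)
va = group_a.var(ddof=1) / len(group_a)
vb = group_b.var(ddof=1) / len(group_b)
se = np.sqrt(va + vb)
df = (va + vb) ** 2 / (va ** 2 / (len(group_a) - 1) + vb ** 2 / (len(group_b) - 1))

crit = stats.t.ppf(0.975, df)   # two-sided 95% critical value
print(f"difference = {diff:.3f}, "
      f"95% CI = ({diff - crit * se:.3f}, {diff + crit * se:.3f})")
```

The interval reports both the direction and the plausible magnitude of the difference, which is usually the information people are really after when they reach for a directional test.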
In regard to this issue, it is worth noting that classical hypothesis tests have some unusual (and not very helpful) properties when you compare across different tests. One such property is that, for a symmetric test, the p-value of the two-sided test is exactly twice the p-value of the one-sided test when the data fall in the relevant tail. This means that if you perform a one-sided test for a difference in the direction indicated by the data, the p-value will be half that of the corresponding two-sided test. So, if you correctly guess the direction of the trend a priori, the reward for using the one-sided test is that your evidence looks twice as strong for the more specific hypothesis! This property of classical hypothesis tests is a good reason to avoid one-sided tests.
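If it helps to see the halving property numerically, here is a small sketch (Python with SciPy; the data and seed are made up, and the `alternative` argument to `ttest_ind` needs a reasonably recent SciPy) that computes both p-values for the same data:

```python
# Minimal sketch: for a symmetric t-test, the one-sided p-value in the tail
# matching the observed difference is exactly half the two-sided p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(loc=5.5, scale=1.0, size=30)   # hypothetical sample A
group_b = rng.normal(loc=5.0, scale=1.0, size=30)   # hypothetical sample B

# One-sided test in whichever direction the observed difference points
tail = "greater" if group_a.mean() > group_b.mean() else "less"

p_two = stats.ttest_ind(group_a, group_b, alternative="two-sided").pvalue
p_one = stats.ttest_ind(group_a, group_b, alternative=tail).pvalue

print(f"two-sided p = {p_two:.4f}")
print(f"one-sided p (data-matching tail) = {p_one:.4f}, exactly half the two-sided value")
```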
In any case, whether or not you agree with my view here, what you are proposing is definitely a bad idea. If you identify the direction of the test from the observed data, and then perform a one-sided test in the identified direction, you will bias your test towards rejection of the null hypothesis.
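A quick simulation makes the bias visible. In the sketch below (Python with SciPy; sample sizes, seed, and number of simulations are arbitrary), both groups are drawn from the same distribution, so the null hypothesis is true, yet choosing the direction of the one-sided test from the observed difference roughly doubles the false-positive rate relative to the nominal 5% level:

```python
# Minimal sketch: picking the direction of a one-sided test after looking at
# the data inflates the false-positive rate under a true null hypothesis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sims, alpha, rejections = 10_000, 0.05, 0

for _ in range(n_sims):
    a = rng.normal(size=20)   # both groups come from the same distribution,
    b = rng.normal(size=20)   # so there is no real difference to detect
    direction = "greater" if a.mean() > b.mean() else "less"   # direction chosen post hoc
    p = stats.ttest_ind(a, b, alternative=direction).pvalue
    rejections += p < alpha

print(f"false-positive rate: {rejections / n_sims:.3f} (nominal level {alpha})")
```

The simulated rejection rate comes out near 10%, twice the nominal level, which is exactly the bias towards rejection described above.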
A one-tailed test is appropriate if you only want to test whether there is a difference between your groups in a specific direction. You would use a two-tailed test if you want to determine whether there is any difference between the two groups you're comparing.
As user Mur1lo says in their comment, you should never design your analysis after the data have been collected. A two-tailed test is therefore often the more appropriate choice. A one-tailed test can only be justified if you made a prediction about the direction of the difference before collecting the data, and you are completely uninterested in the possibility that the opposite outcome could be true.