How to choose the null and alternative hypothesis? - Cross Validated
How do I determine the null and alternative hypothesis with this given information?
Did you find the idea of a null hypothesis to be confusing when you were first learning statistics? If you did, then what did you find confusing about it?
Yes. If you didn't find the idea of the null hypothesis confusing, you didn't understand it.
There's an article by Gigerenzer called "Mindless Statistics" here: http://library.mpib-berlin.mpg.de/ft/gg/GG_Mindless_2004.pdf where he talks about this. The problem is that hypothesis testing, as we think of it, is a mash-up of two (or three) very different ways of thinking about what p-values mean. (The three are Fisher's, Neyman-Pearson's, and Bayes'.) These people had arguments that went on for decades about these things, and now we act like those differences don't exist. The notion of a type I error doesn't make sense under Fisher's approach. Under the Neyman-Pearson approach, a p-value is either greater than 0.05 or it isn't; you don't report p = 0.035. So you can't report exact p-values, talk about type I and type II errors, and be logically consistent. But we try.
ELI5: Can you ever confirm a null hypothesis, or are you always limited to "failing to reject"?
Science isn't about proving things true, as that's rarely possible outside of pure logic or mathematics. The null hypothesis won't ever be demonstrably true in a philosophical sense. The aim is always to test it, and thereby show it false.
But yes, you can often demonstrate that the null is likely to be correct, given X, Y, and Z as assumptions. It can be tricky to know whether those assumptions actually hold, though. For practical reasons you're better off trying to reject it.
The rule for the proper formulation of a hypothesis test is that the alternative, or research, hypothesis is the statement which, to be accepted, must be strongly supported by the evidence furnished by the data.
The null hypothesis is generally the complement of the alternative hypothesis. Frequently, it is (or contains) the assumption that you are making about how the data are distributed in order to calculate the test statistic.
Here are a few examples to help you understand how these are properly chosen.
Suppose I am an epidemiologist in public health, and I'm investigating whether the incidence of smoking among a certain ethnic group is greater than the population as a whole, and therefore there is a need to target anti-smoking campaigns for this sub-population through greater community outreach and education. From previous studies that have been published in the literature, I find that the incidence among the general population is $p_0$. I can then go about collecting sample data (that's actually the hard part!) to test $$H_0 : p = p_0 \quad \mathrm{vs.} \quad H_a : p > p_0.$$ This is a one-sided binomial proportion test. $H_a$ is the statement that, if it were true, would need to be strongly supported by the data we collected. It is the statement that carries the burden of proof. This is because any conclusion we draw from the test is conditional upon assuming that the null is true: either $H_a$ is accepted, or the test is inconclusive and there is insufficient evidence from the data to suggest $H_a$ is true. The choice of $H_0$ reflects the underlying assumption that there is no difference in the smoking rates of the sub-population compared to the whole.
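A minimal sketch of such a one-sided binomial proportion test in Python, assuming SciPy is available; the baseline rate $p_0$ and the sample counts below are made up for illustration and are not from the text:

```python
# One-sided binomial proportion test:  H0: p = p0   vs   Ha: p > p0.
# The baseline rate p0 and the sample counts are hypothetical.
from scipy.stats import binomtest

p0 = 0.20   # smoking incidence in the general population (hypothetical)
n = 500     # size of the sample drawn from the sub-population (hypothetical)
k = 123     # number of smokers observed in that sample (hypothetical)

result = binomtest(k, n, p=p0, alternative="greater")
print(f"observed proportion = {k / n:.3f}, p-value = {result.pvalue:.4f}")
# Reject H0 (accept Ha) at the 5% level only if the p-value falls below 0.05;
# otherwise the test is inconclusive; we do not thereby "accept" H0.
```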
Now suppose I am a researcher investigating a new drug that I believe to be equally effective to an existing standard of treatment, but with fewer side effects and therefore a more desirable safety profile. I would like to demonstrate the equal efficacy by conducting a bioequivalence test. If $\mu_0$ is the mean existing standard treatment effect, then my hypothesis might look like this: $$H_0 : |\mu - \mu_0| \ge \Delta \quad \mathrm{vs.} \quad H_a : |\mu - \mu_0| < \Delta,$$ for some choice of margin $\Delta$ that I consider to be clinically significant. For example, a clinician might say that two treatments are sufficiently bioequivalent if there is less than a $\Delta = 10\%$ difference in treatment effect. Note again that $H_a$ is the statement that carries the burden of proof: the data we collect must strongly support it, in order for us to accept it; otherwise, it could still be true but we don't have the evidence to support the claim.
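In practice this kind of equivalence claim is often tested as two one-sided tests (TOST). A rough sketch, again assuming SciPy; the sample, the reference mean $\mu_0$, and the margin $\Delta$ are all invented for illustration:

```python
# Equivalence (TOST) sketch:  H0: |mu - mu0| >= Delta   vs   Ha: |mu - mu0| < Delta.
# The data, mu0, and Delta below are hypothetical.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
x = rng.normal(loc=10.2, scale=1.5, size=40)  # treatment effects under the new drug
mu0 = 10.0                                    # mean effect of the standard treatment
delta = 1.0                                   # equivalence margin (here 10% of mu0)

# Two one-sided tests: is mu above mu0 - delta, and is mu below mu0 + delta?
p_lower = ttest_1samp(x, popmean=mu0 - delta, alternative="greater").pvalue
p_upper = ttest_1samp(x, popmean=mu0 + delta, alternative="less").pvalue
p_tost = max(p_lower, p_upper)  # equivalence is supported only if both are small

print(f"TOST p-value = {p_tost:.4f}")
# A small p_tost (say < 0.05) supports Ha: the mean effects differ by less than Delta.
```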
Now suppose I am doing an analysis for a small business owner who sells three products $A$, $B$, $C$. They suspect that customers differ in their preferences among the three products. Then my hypothesis is $$H_0 : \mu_A = \mu_B = \mu_C \quad \mathrm{vs.} \quad H_a : \exists i \ne j \text{ such that } \mu_i \ne \mu_j.$$ All that $H_a$ says is that at least two of the means are unequal, which would then suggest that some difference in preference exists.
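This is the classic one-way ANOVA setup. A minimal sketch, assuming SciPy; the preference scores are fabricated purely for illustration:

```python
# One-way ANOVA:  H0: mu_A = mu_B = mu_C   vs   Ha: at least two means differ.
# The "preference scores" below are fabricated for illustration.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
scores_a = rng.normal(loc=7.0, scale=1.2, size=30)  # e.g. survey ratings for product A
scores_b = rng.normal(loc=7.4, scale=1.2, size=30)  # ratings for product B
scores_c = rng.normal(loc=6.5, scale=1.2, size=30)  # ratings for product C

stat, pvalue = f_oneway(scores_a, scores_b, scores_c)
print(f"F = {stat:.2f}, p-value = {pvalue:.4f}")
# A small p-value only says that *some* pair of means differs;
# post hoc comparisons are needed to say which products actually differ.
```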
The null hypothesis is nearly always "something didn't happen" or "there is no effect" or "there is no relationship" or something similar. But it need not be this.
In your case, the null would be "there is no relationship between CRM and performance."
The usual method is to test the null at some significance level (most often, 0.05). Whether this is a good method is another matter, but it is what is commonly done.
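One concrete way to test a "no relationship" null like that is a correlation test. A rough sketch, assuming SciPy; the CRM-usage and performance numbers are fabricated, and the variable names are mine, not from the question:

```python
# H0: no (linear) relationship between CRM usage and performance  vs  Ha: some relationship.
# The data are fabricated; in practice each pair would come from one firm or employee.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
crm_usage = rng.normal(size=50)
performance = 0.3 * crm_usage + rng.normal(size=50)  # relationship built in for illustration

r, pvalue = pearsonr(crm_usage, performance)
alpha = 0.05  # the conventional significance level mentioned above
print(f"r = {r:.2f}, p-value = {pvalue:.4f}")
print("reject H0" if pvalue < alpha else "fail to reject H0 (inconclusive)")
```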