Answer from Michael Lew on Stack Exchange:

I would say that the "alternative hypothesis" is usually NOT a "proposed hypothesis".
You do not define "proposed hypothesis" and it is not a common phrase. Presumably you mean that it is either a statistical hypothesis or it is a scientific hypothesis. They are usually quite different things.
A scientific hypothesis usually concerns something to do with the true state of the real world, whereas a statistical hypothesis concerns only conditions within a statistical model. The real world is usually more complicated and less well-defined than a statistical model, and so inferences regarding a statistical hypothesis need to be thoughtfully extrapolated before they become relevant to a scientific hypothesis.
For your example, a scientific hypothesis concerning the two drugs in question might be something like 'drug x can be substituted for drug y without any noticeable change in the results experienced by patients'. A relevant statistical hypothesis would be much more restricted, along the lines of 'drug x and drug y have similar potencies', 'drug x and drug y have similar durations of action', or perhaps 'doses of drug x and drug y can be found at which they have similar effects'. Of course, the required degree of similarity and the assays used to evaluate the statistical hypothesis would have to be defined. Beyond the enormous difference in scope between the scientific hypothesis and the potential statistical hypotheses, the former may require several or all of the latter to be true.
If you want to know whether a hypothesis is a statistical hypothesis, the test is simple: if it concerns the value of a parameter within a statistical model, or can be restated as being about a parameter value, then it is.
Now, the "alternative hypothesis". For the hypothesis testing framework there are two things that are commonly called 'alternative hypotheses'. The first is an arbitrary effect size that is used in the pre-data calculation of test power (usually for sample size determination). That alternative hypothesis is ONLY relevant before the data are in hand. Once you have the data the arbitrarily specified effect size loses its relevance because the observed effect size is known. When you perform the hypothesis test the effective alternative becomes nothing more than 'not the null'.
It is a bad mistake to assume that a rejection of the null hypothesis in a hypothesis test leads to the acceptance of the pre-data alternative hypothesis, and it is just about as bad to assume that it leads to the acceptance of the observed effect size as a true hypothesis.
Of course, the hypothesis test framework is not the only statistical approach and, I would argue, it is not even the most relevant to the majority of scientific endeavours. If you use a likelihood ratio test you can compare the support the data give to two specified parameter values within the statistical model, and the same comparison can be made within a Bayesian framework.
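As a hedged illustration of that likelihood-ratio comparison (a sketch only, assuming a normal model with known spread; the data and parameter values are invented):

```python
# Compare the data's support for two specified parameter values
# (mu = 3 versus mu = 4) via a likelihood ratio, under an assumed
# normal model with sigma fixed at 1. Simulated, hypothetical data.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
data = rng.normal(loc=3.4, scale=1.0, size=20)

loglik_mu3 = norm.logpdf(data, loc=3.0, scale=1.0).sum()
loglik_mu4 = norm.logpdf(data, loc=4.0, scale=1.0).sum()

# Values above 1 favour mu = 3; values below 1 favour mu = 4.
print(f"L(mu=3)/L(mu=4) = {np.exp(loglik_mu3 - loglik_mu4):.3f}")
```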
Statistical hypothesis tests, by their very construction, treat the null hypothesis H0 and the alternative H1 asymmetrically. This always needs to be taken into account. A test is able to tell you whether there is evidence against the null hypothesis in the direction of the alternative.
It will never tell you that there is evidence against the alternative.
The choice of the H0 determines what the test can do; it determines what the test can indicate against.
I share @Michael Lew's reservations about formal use of the term "proposed hypothesis"; however, let's assume for the following that you can translate your scientific research hypothesis into certain parameter values for a specified statistical model. Let's call this R.
If you choose R as H0, you can find evidence against it, but not in its favour. This may not be what you want, although it isn't out of the question: you may well wonder whether certain data contradict your R, in which case you can use it as H0. However, this has no potential, even in the case of non-rejection, to convince other people that R is correct.
There is, however, a very reasonable scientific justification for using R as H0: according to Popper, in order to corroborate a scientific theory you should try to falsify it, and the best corroboration comes from repeated attempts to falsify it (in a way that makes it likely the theory will be rejected if it is in fact false, which is what Mayo's "severity" concept is about). Apart from controlling statistical error probabilities, this is what testing R as H0 actually allows you to do, so there is a good reason for using R as H0.
If you choose R as H1, you can find evidence against the H0, which is not normally quite what you want, unless you interpret evidence against H0 as evidence in favour of your H1. That interpretation isn't necessarily granted (model assumptions may be violated for both H0 and H1, so both may technically be wrong, and rejecting H0 does not "statistically prove" H1), although many would interpret a test in this way. It has value only to the extent that somebody who opposes your R argues that H0 might be true (as in "your hypothesised real effect does not exist, it's all just due to random variation"). In this case a test with R as H1 at least has the potential to indicate strongly against that H0.

You can even go on to say that it gives you evidence that H0 is violated "in the direction of H1", but, as said before, there may be other explanations for this than H1 actually being true. Also, "the direction of H1" is rather imprecise and does not pin down any specific parameter value or its surroundings. How important that is may depend on the application area. A homeopath may be happy enough to show significantly that homeopathy does something good, rather than having its effect explained by random variation alone, regardless of how much good it actually does; however, precise numerical theories in physics or engineering, say, can hardly be backed up by merely rejecting a random-variation alternative.
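A rough sketch of that last point (Python; the tiny true effect of 0.02 and the sample size are invented for the purpose): with enough data, even a trivially small effect rejects a "no effect" H0, so the rejection alone says nothing about how much good is actually being done.

```python
# With a very large sample, a tiny true effect (0.02, hypothetical)
# still produces a significant test against H0: mu = 0. Rejecting the
# "random variation only" H0 therefore says nothing about whether the
# effect is large enough to matter.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
data = rng.normal(loc=0.02, scale=1.0, size=100_000)

result = ttest_1samp(data, popmean=0.0)
print(f"p-value: {result.pvalue:.2e}")      # typically tiny: H0 rejected
print(f"observed mean: {data.mean():.4f}")  # ...but the effect is minute
```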
The "equivalence testing" idea would amount to specifying a rather precise R (specific parameter value and small neighbourhood) as H1 and potentially rejecting a much bigger part of the parameter space on both sides of R. This would then be more informative, but has still the same issue with model assumptions, i.e., H0 and H1 may both be wrong. (Obviously model assumption diagnoses may help to some extent. Also even if neither H0 nor H1 is true, arguably some more distributions can be seen as "interpretatively equivalent" with them, e.g., two equal non-normal distributions in a two-sample setup where a normality-based test is applied, and actually may work well due to the Central Limit Theorem even for many non-normal distributions.)
So basically you need to choose what kind of statement you want your test to be able to back up. Choose R as H0 and the data can only reject it. Choose R as H1 and the data can reject the H0; how valuable that is depends on the situation (particularly on how realistic the H0 looks as a competitor, i.e., how informative it actually is to reject it). The equivalence test setup is special in allowing you to use a rather precise R as H1 and reject a big H0, so the difference between this and rejecting a "random variation/no effect" H0 lies in the precise or imprecise nature of the research hypothesis R to be tested.
Original question:

I understand the null hypothesis and how we can show it to be wrong, but at least in my textbook I do not find it clear how the alternative hypothesis works.

It says things like: for an experiment we have H0: μ = 3 and H1: μ = 4.

OK, so we show H0 is wrong. How does that support H1? The real μ could be something like μ = 2311, and then both hypotheses would be useless.

Also, why should we change our experiment depending on whether we pair H0: μ = 3 with H1: μ < 3, H1: μ < 2, or H1: μ > 3, when the point of the experiment is just to reject H0?
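A rough numerical sketch of the worry raised here (all numbers invented): data whose true mean is far from both hypothesised values will reject H0: μ = 3, yet give no comfort whatsoever to H1: μ = 4.

```python
# Data generated with a true mean of 2311 (hypothetical) reject
# H0: mu = 3, but a confidence interval shows that mu = 4 is just
# as untenable; rejecting H0 does not establish H1.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(3)
data = rng.normal(loc=2311.0, scale=5.0, size=30)

print(f"p-value vs H0 mu=3: {ttest_1samp(data, popmean=3.0).pvalue:.2e}")

mean, se = data.mean(), data.std(ddof=1) / np.sqrt(len(data))
print(f"rough 95% CI: {mean - 2*se:.1f} to {mean + 2*se:.1f}")
# The interval sits near 2311, so neither mu = 3 nor mu = 4 is tenable.
```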