The connection between the research hypothesis and the choice of null and alternative is not written in stone. I can't see any particular reason why one could not say (casting your phrase in plain English, so I won't get tangled up):
"We think the treatment should reduce reaction time" ...
... but then formulate a two-sided alternative, if that were appropriate. I don't think any great song and dance is required to use a two-tailed test, as long as you're clear that you want your hypothesis test to have power in both tails.
That is, I see no problem with discussing the properties of the hypothesis test as if the alternative were not the same thing as your research hypothesis, and then simply interpreting the results of the test back in terms of the research hypothesis.
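To make the distinction concrete, here's a minimal sketch (with simulated reaction-time data, not anything from the question) showing how the two-sided and one-sided versions of the same comparison relate, using scipy's `alternative` parameter:

```python
import numpy as np
from scipy import stats

# Illustrative simulated data: does a treatment reduce reaction time (ms)?
rng = np.random.default_rng(0)
control = rng.normal(300, 30, 40)
treated = rng.normal(285, 30, 40)   # true mean shifted down

# Two-sided alternative: power in both tails
t_stat, p_two = stats.ttest_ind(treated, control)

# One-sided alternative: treated mean is *less* than control
t_stat1, p_one = stats.ttest_ind(treated, control, alternative='less')

# Whichever direction the observed effect falls, the two p-values
# are linked: p_two = 2 * min(p_one, 1 - p_one)
print(p_two, p_one)
```

You run the two-sided test, and then nothing stops you from interpreting a significant result in whichever direction the data point, back in terms of the directional research hypothesis.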
Of course, I don't control how pointlessly dogmatic any particular journal, editor or referee may be. [Indeed, in my experience, my thoughts seem rarely to influence people whose minds are set on something being the case.]
The same attitude carries through to ANOVA; it's not "saving" you, since a multigroup test can be made "directional" (in an ANOVA-like situation, whether or not you still call it ANOVA):
With one-factor comparisons ($k$ groups), you have $k!$ possible orderings of the means. If you are interested in some particular ordered alternative, you can specify it clearly up front and simply use a test sensitive to that alternative (you could specify a contrast, for example, though there are other approaches to ordered alternatives).
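As a sketch of the contrast approach (the weights and data here are purely illustrative): with $k = 3$ groups and a hypothesized ordering $\mu_1 < \mu_2 < \mu_3$, a linear contrast with weights $(-1, 0, 1)$ gives a one-sided test sensitive to that ordered alternative, using the pooled within-group variance as in one-way ANOVA:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Three groups; illustrative data consistent with the ordering mu1 < mu2 < mu3
groups = [rng.normal(m, 10, 25) for m in (50, 55, 60)]

c = np.array([-1.0, 0.0, 1.0])   # contrast weights encoding the ordering
means = np.array([g.mean() for g in groups])
ns = np.array([len(g) for g in groups])

# Pooled (within-group) variance, exactly as in one-way ANOVA
df = ns.sum() - len(groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
s2 = ss_within / df

# t statistic for the contrast, with a one-sided p-value
# for the ordered alternative
t = (c @ means) / np.sqrt(s2 * (c ** 2 / ns).sum())
p_one_sided = stats.t.sf(t, df)
```

Other contrast weights (e.g. equally spaced scores for a monotone trend) work the same way; the point is simply that the directional machinery exists for more than two groups.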
So if a research hypothesis were "forcing" you to run a one-tailed test, it would, I think, equally "force" you to use some directional equivalent with more groups, since that's possible.
It is usual to run two-tailed tests even when your hypothesis is pretty much directional, at least in psychology.
For one thing, running a one-tailed test limits your ability to be surprised.
Changing your hypothesis after the data have been collected is cheating.