[Q] How do I calculate P-value?
How do I calculate the p-value from a test statistic?
To determine the p-value, you need to know the distribution of your test statistic under the assumption that the null hypothesis is true. Then, using the cumulative distribution function (cdf) of this distribution, you can express the probability that the test statistic is at least as extreme as its observed value x for the sample:
- Left-tailed test: p-value = cdf(x).
- Right-tailed test: p-value = 1 - cdf(x).
- Two-tailed test: p-value = 2 × min{cdf(x), 1 - cdf(x)}.
If the distribution of the test statistic under H0 is symmetric about 0, then the two-sided p-value simplifies to p-value = 2 × cdf(-|x|) or, equivalently, p-value = 2 - 2 × cdf(|x|).
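As a concrete illustration (my addition, not from the original answer), here is what those three formulas look like in R, taking the cdf of a t distribution with 30 degrees of freedom; the statistic value and the df are made up for the example:

x  <- 2.5   # hypothetical observed test statistic
df <- 30    # hypothetical degrees of freedom
pt(x, df)                                           # left-tailed:  cdf(x)
pt(x, df, lower.tail = FALSE)                       # right-tailed: 1 - cdf(x)
2 * min(pt(x, df), pt(x, df, lower.tail = FALSE))   # two-tailed
2 * pt(-abs(x), df)                                 # two-tailed, using symmetry about 0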
What does a low p-value mean?
A low p-value means that, under the null hypothesis, there is little probability that another sample would produce a test statistic at least as extreme as the one observed in the sample you already have. A low p-value is evidence in favor of the alternative hypothesis: it allows you to reject the null hypothesis.
What does a high p-value mean?
A high p-value means that, under the null hypothesis, there is a high probability that another sample would produce a test statistic at least as extreme as the one observed in the sample you already have. A high p-value doesn't allow you to reject the null hypothesis.
Is there some formula or something?
Use pt and make it two-tailed.
> 2 * pt(11.244, 30, lower.tail = FALSE)
[1] 2.785806e-12
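(Side note, not from the original answer: if you have the raw data rather than just the t statistic, t.test() reports the same two-sided p-value directly. The data below are simulated purely to show the call.)

set.seed(1)
x <- rnorm(31, mean = 1)    # simulated sample; a one-sample test on n = 31 has 30 df
t.test(x, mu = 0)$p.value   # two-sided p-value, same recipe as 2*pt(..., lower.tail = FALSE)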
I posted this as a comment, but when I wanted to add a bit more in an edit it became too long, so I've moved it down here.
Edit: Your test statistic and d.f. are correct. The other answer notes the issue with the calculation of the tail area in the call to pt(), and the doubling for a two-tailed test, which resolves your difference. Nevertheless, I'll leave my earlier discussion/comment because it makes points that apply more generally to p-values in extreme tails:
It's possible you could be doing nothing wrong and still get a difference, but if you post a reproducible example it might be possible to investigate further whether you have some error (say in the df).
These things are calculated from approximations that may not be particularly accurate in the very extreme tail.
If the two programs don't use identical approximations, they may not agree closely, but that lack of agreement shouldn't matter: for the exact tail area out that far to be a meaningful number, the required assumptions would have to hold to astounding degrees of accuracy. Do you really have exact normality, exact independence, exactly constant variance?
You shouldn't necessarily expect great accuracy out where the numbers won't mean anything anyway. To what extent does it matter whether the calculated approximate p-value is $2\times 10^{-12}$ or $3\times 10^{-12}$? Neither number measures the actual p-value of your true situation. Even if one of them did represent the real p-value, once it's below about $0.0001$, why would you care what the value actually was?
You need to use the ttail() function, which returns the reverse cumulative Student's t distribution, i.e., the probability that T > t:
display ttail(38,abs(_b[_cons]/_se[_cons]))*2
The first argument, 38, is the degrees of freedom (sample size less the number of parameters), while the second, 1.92, is the absolute value of the coefficient of interest divided by its standard error, i.e., the t-statistic. The factor of two comes from the fact that Stata is doing a two-tailed test. You can also use the stored degrees of freedom with
display ttail(e(df_r),abs(_b[_cons]/_se[_cons]))*2
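For comparison (my addition, not the original answer), here is the same "coefficient over standard error, doubled upper tail" computation in R, on made-up data:

set.seed(42)
d <- data.frame(x = rnorm(40))
d$y <- 1 + 0.3 * d$x + rnorm(40)   # hypothetical data and model
fit <- lm(y ~ x, data = d)
tstat <- coef(summary(fit))["(Intercept)", "t value"]
2 * pt(abs(tstat), df = fit$df.residual, lower.tail = FALSE)   # matches Pr(>|t|) in summary(fit)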
You can also do the integration of the t density by "hand" using Adrian Mander's integrate:
ssc install integrate
integrate, f(tden(38,x)) l(-1.92) u(1.92)
This gives you 0.93761229, but you want Pr(T>|t|), which is 1-0.93761229=0.06238771.
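The same integral can be cross-checked in base R (my addition, not part of the original answer):

inside <- integrate(function(x) dt(x, df = 38), lower = -1.92, upper = 1.92)$value
inside        # ~0.9376123, the central area
1 - inside    # Pr(T > |t|), ~0.0623877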
If you look at many statistics textbooks, you will find a table called the Z-table, which gives you the probability that Z lies beyond (or, in some versions, below) your test statistic. The table is essentially a tabulated cumulative distribution function of the standard normal curve.
When people went to school with four-function calculators, one or more of the questions on the statistics test would include a copy of this Z-table, and the dear students would have to interpolate between columns of numbers to find the p-value. In your example, you would find the tail probability for your test statistic falling between .06 and .07, see that it was closer to .06, and do a linear interpolation to come up with .062.
Today, the p-value is something that Stata or SAS will calculate for you.
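For the record, here is one way the software gets the exact figure for this example (shown with R's pt(), using the t statistic of 1.92 and 38 df from above):

2 * pt(1.92, df = 38, lower.tail = FALSE)   # 0.06238771, the .062 from the table lookup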
Here is another SO question that may be of interest: How do I calculate a p-value if I have the t-statistic and d.f. (in Perl)?
Here is a basic page on how to determine p-value "by hand": http://www.dummies.com/how-to/content/how-to-determine-a-pvalue-when-testing-a-null-hypo.html
Here is how you can determine p-value using Excel: http://ms-office.wonderhowto.com/how-to/find-p-value-with-excel-346366/
===EDIT===
My Stata text ("Microeconometrics using Stata", Revised Ed, Cameron & Trivedi) says the following on p. 402.
* p-values for t(30), F(1,30), Z, and chi(1) at y=2
. scalar y=2
. scalar p_t30 = 2 * ttail(30,y)
. scalar p_f1and30 = Ftail(1,30,y^2)
. scalar p_z = 2 * (1 - normal(y))
. scalar p_chi1 = chi2tail(1,y^2)
. display "p-values" " t(30)=" %7.4f p_t30
p-values t(30) = 0.0546
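As a sanity check (my addition, not from the book), the same four p-values can be reproduced in R. The t and F results agree because T^2 ~ F(1,30), and the Z and chi-squared results agree because Z^2 ~ chi2(1):

y <- 2
2 * pt(y, df = 30, lower.tail = FALSE)           # t(30):   0.0546
pf(y^2, df1 = 1, df2 = 30, lower.tail = FALSE)   # F(1,30): 0.0546
2 * pnorm(y, lower.tail = FALSE)                 # Z:       0.0455
pchisq(y^2, df = 1, lower.tail = FALSE)          # chi2(1): 0.0455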