P-value: the probability of observing a result at least as extreme as the one actually observed.
The lower that probability, the more significant the result. When the p-value falls below the chosen significance level, we take it as sufficient evidence for H1 (significant).
The logic resembles proof by contradiction: first assume H0, that A and B are unrelated.
If the observed result would then be extremely unlikely, almost impossible to occur, we reject the null hypothesis H0
and accept H1 (a significant relationship exists).
Significance cannot prove that anything is true; it can only reject the claim that the two are unrelated (and thereby show a significant relationship).
Significance cannot quantify the size of a difference.
Significance cannot show that a difference has practical importance.
Significance cannot explain why the difference exists.
Interpreting p < 0.05 (The Interpretation of the p-Value): if H0 is true, the chance of finding a result this extreme is less than 5%. That alone does not show that H0 is false, much less that H1 is true.
A value of p < 0.05 for the null hypothesis has to be interpreted as follows: if the null hypothesis is true, the chance to find a test statistic as extreme as or more extreme than the one observed is less than 5%. This is not the same as saying that the null hypothesis is false, and even less so, that an alternative hypothesis is true!
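In symbols, the quoted interpretation reads as follows (our notation, not the source's; T is the test statistic and t_obs its observed value):

```latex
% p-value: probability, computed under H0, of a test statistic
% at least as extreme as the one actually observed
p = P\bigl(T \text{ at least as extreme as } t_{\mathrm{obs}} \mid H_0\bigr) < 0.05
```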
http://www.360doc.com/content/15/0704/22/22175932_482657194.shtml
Common misconceptions about the p-value
Imagine that you want to be the new point guard of your basketball team, but before you try out for the position, you want to make sure you have, pun intended, a real shot at achieving your goal. You shoot 20 free throws and make 12 of them; that's a 60% accuracy rate. You want to know whether your accuracy rate (the observation) is about the same as or different from the team's accuracy rate (the population statistic), and different enough to justify replacing the old point guard.
You can do a test of significance to ascertain if your accuracy rate is significantly different from that of the team. A significance test measures whether some observed value is similar to the population statistic, or if the difference between them is large enough that it isn't likely to be by coincidence. When the difference between what is observed and what is expected surpasses some critical value, we say there is statistical significance.
A standard normal distribution curve represents all possible observations of a single random variable: the curve is highest near the mean, where values are most likely to be observed, and lowest in the tails, where values are least likely to be observed.
The p-value is the probability of finding the observed value relative to all other possible results for the same variable. If the observed value is among the values most likely to occur, there is not a statistically significant difference. If, on the other hand, it falls among the unlikely values, there is a statistically significant difference. The smaller the probability associated with the observed value, the more likely the result is to be significant.
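To make this concrete, here is a minimal Monte Carlo sketch of a p-value using the free-throw setup above. The team accuracy of 0.8 is our assumption for illustration; the lesson never states it:

```python
import random

# Assumed for illustration only: the lesson does not give the team's
# accuracy, so we take H0: your true accuracy equals 0.8.
p0 = 0.8          # hypothetical team accuracy under H0
n_shots = 20      # free throws attempted
observed = 12     # free throws made (60%)
n_sims = 100_000  # simulated tryouts under H0

# Fraction of simulated tryouts at least as extreme (as few makes
# or fewer) as the observed result.
extreme = sum(
    1
    for _ in range(n_sims)
    if sum(random.random() < p0 for _ in range(n_shots)) <= observed
)
print(f"estimated one-tailed p-value: {extreme / n_sims:.3f}")  # ~0.032
```

Under this assumed 0.8, the estimate lands near the 0.03 figure the lesson uses later.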
To find the p-value, or the probability associated with a specific observation, you must first calculate the z score, also known as the test statistic.
The formula for finding the test statistic depends on whether the data involves means or proportions; both formulas assume an approximately normal sampling distribution, and are written out symbolically after the two cases below.
When dealing with means, the z score is a function of the observed sample mean (x-bar), the population mean (mu), the standard deviation (s), and the number of observations (n).
When dealing with proportions, the z score is a function of the observed proportion (p-hat), the population proportion, i.e. the probability of a successful outcome (p), the probability of failure (q = 1 - p), and the number of trials (n).
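Written out symbolically (the lesson gives these only in words; the forms below are the standard one-sample z statistics):

```latex
% z score for a sample mean
z = \frac{\bar{x} - \mu}{s / \sqrt{n}}

% z score for a sample proportion, with q = 1 - p
z = \frac{\hat{p} - p}{\sqrt{pq / n}}
```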
After calculating the z score, you must look up the probability associated with that score on a Standard Normal Probabilities Table. This probability is the p-value or the probability of finding the observed value compared to all possible results. The p-value is then compared to the critical value to determine statistical significance.
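In code you can skip the table lookup entirely; here is a minimal sketch using the standard normal CDF via math.erfc (the helper name p_value_from_z is ours):

```python
from math import erfc, sqrt

def p_value_from_z(z: float, two_tailed: bool = True) -> float:
    # P(Z >= |z|) for a standard normal variable; erfc supplies the
    # upper-tail probability that a printed table would give.
    tail = 0.5 * erfc(abs(z) / sqrt(2.0))
    return 2.0 * tail if two_tailed else tail

print(p_value_from_z(1.96))  # ~0.050, the familiar 5% cutoff
print(p_value_from_z(2.58))  # ~0.010, roughly the 1% cutoff
```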
The critical value, or significance level, is established as part of the study design and is denoted by the Greek letter alpha. If we choose an alpha = 0.05, we are requiring an observed data point be so different from what is expected that it would not be observed more than 5% of the time. An alpha equaling 0.01 would be even more strict. In this case, a statistically significant test statistic beyond this critical value has less than a 1 in 100 probability of occurring by chance.
The last step in a significance test is to compare the p-value to alpha to determine statistical significance. If the p-value falls below alpha, then we can reject the idea that the observed value was a result found by chance.
So, let's say your free throw accuracy of 0.6 turns out to have a z score associated with a probability of 0.03, and your alpha is set at 0.05. Then p < alpha, and there is a statistically significant difference. We can reject the idea that there is no difference between your accuracy and the team's, and accept the alternative: your shooting accuracy is significantly different from the team's.
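Here is that comparison as a runnable sketch, again assuming a team accuracy of 0.8 (our illustration; the lesson's 0.03 is taken as given, and the normal-approximation numbers below come out in the same ballpark):

```python
from math import erfc, sqrt

p_hat, p, n = 0.6, 0.8, 20  # observed accuracy, assumed team accuracy, shots
q = 1 - p

z = (p_hat - p) / sqrt(p * q / n)      # z score for a proportion
p_value = erfc(abs(z) / sqrt(2.0))     # two-tailed: 2 * P(Z >= |z|)

alpha = 0.05
print(f"z = {z:.3f}, p-value = {p_value:.3f}")  # z = -2.236, p ~ 0.025
print("significant" if p_value < alpha else "not significant")
```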
If, on the other hand, the alpha is set at 0.01, then p > alpha and the result is not statistically significant. In this case, your coaches can say, 'Um, sorry. There simply isn't enough evidence to conclude you are way, way better.'
Significance does not: prove that a claim is true, quantify the size of a difference, show that a difference has practical importance, or explain why a difference exists.
A significance test measures whether some observed value is similar to the population statistic or if the difference between the observed value and the population statistic is large enough that it isn't likely to be a coincidence.
The p-value is the probability of finding an observed value or data point relative to all other possible results for the same variable. To find the p-value, you must first calculate the z score, also known as the test statistic. After calculating the z score, look up the probability associated with that score on a Standard Normal Probabilities Table. The last step in a significance test is to compare the p-value to an established critical value, called alpha, to determine statistical significance.
If the p-value is large, meaning the observed value is among the results most likely to occur, then there is not a statistically significant difference. If, on the other hand, the p-value is small, meaning the observed value falls among the unlikely results, then there is a statistically significant difference.