Statistical Significance: Definition & Levels

Instructor: Chris Clause
In this lesson, you will learn to define statistical significance as well as learn how it is determined. Following this lesson you will have the opportunity to test your knowledge with a short quiz.

Definition

Researchers in the field of psychology rely on tests of statistical significance to inform them about the strength of observed statistical differences between variables. Research psychologists understand that statistical differences can sometimes be the result of chance alone. Tests of statistical significance were developed to give researchers a way to determine whether experimental interventions produced real differences or whether the observed differences would likely have occurred by chance anyway.

Example

To better illustrate tests of statistical significance, let's look at an example. Say you have designed a simple experiment wherein you are interested in evaluating the effectiveness of a new therapeutic intervention introduced to people suffering from depression. In this example, you provide the experimental group with access to the new treatment. You decide to use a self-report depression index as your measure of post-treatment symptom severity. You compare the results with the results of a group of people who are also suffering from depression but have received no treatment.

Your research hypothesis is that the experimental group will report less severe symptoms following the intervention than the control group. Based on this scenario, the null hypothesis (or hypothesis of no relationship) would therefore be that the experimental group will not report less severe symptoms following the intervention than the control group.

Hypothesis Testing

After running the experiment, there are four possible outcomes. The null hypothesis can be rejected when it is actually true (a type I error), or it can be rejected when it is actually false (a correct decision). Likewise, the null hypothesis can be retained when it is true (a correct decision), or it can be retained when it is actually false (a type II error). Two of these outcomes are accurate and two are errors. Researchers are typically most concerned with avoiding a type I error. In the example above, a type I error would mean concluding that the intervention had an impact when it actually did not.
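The four outcomes described above can be sketched as a small decision function. This is an illustrative sketch only; the function name and its inputs are not part of the lesson:

```python
def classify_outcome(null_is_true: bool, null_rejected: bool) -> str:
    """Name the outcome of one hypothesis-test decision.

    Rejecting a true null hypothesis is a type I error; failing to
    reject a false one is a type II error; the other two cases are
    correct decisions.
    """
    if null_rejected:
        return "Type I error" if null_is_true else "correct decision"
    return "correct decision" if null_is_true else "Type II error"

# The two error cases from the lesson:
print(classify_outcome(null_is_true=True, null_rejected=True))    # type I
print(classify_outcome(null_is_true=False, null_rejected=False))  # type II
```

Walking through all four combinations of inputs reproduces the lesson's two correct decisions and two errors.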

Alpha

Prior to evaluating the results of the experiment, the researcher would have selected a confidence level with which to evaluate the results. The most common confidence level is 95%, which indicates 95% confidence that the statistical differences between the experimental and control groups are due to the intervention rather than to chance alone. When using a 95% confidence level, we are also saying that there is a 5% chance that a type I error will occur. Alpha is the term used to describe the probability of making a type I error. By choosing a 95% confidence level, we are saying that we have selected an alpha level of .05.

Although 95% is historically the most common confidence level, confidence levels can be set at just about any value. For instance, setting it at 90% yields an alpha of .10, 99% an alpha of .01, 97% an alpha of .03, and so on. Whatever value is selected, the same rule applies: it describes how confident the researcher wants to be that any statistical differences are not due to chance. You might wonder why it is not simply set at 100%. Because every statistical test runs some risk of yielding a false positive (a type I error), a confidence level of 100% is not attainable.

P-Value

Determining at which level to set alpha is only part of the equation; researchers must also have something to compare alpha against. Every experimental test statistic has what's called a corresponding p-value. The letter p refers to the word probability. The calculation used to derive the p-value differs depending on the statistical test (e.g., correlation, ANOVA) used to analyze the experimental results. The p-value represents the probability of obtaining a result at least as extreme as the one observed if chance alone were at work. So, a p-value of .07 would indicate a 7% chance of seeing a result that extreme by chance; because .07 is larger than an alpha of .05, the null hypothesis would not be rejected in that case.
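One concrete way to see what a p-value measures is a permutation test: repeatedly shuffle the group labels and count how often a mean difference at least as large as the observed one arises by chance. The sketch below uses hypothetical depression-index scores invented for illustration; it is not data from the lesson:

```python
import random

def permutation_p_value(treated, control, n_perm=10_000, seed=0):
    """Two-sided permutation test on the difference of group means.

    Returns the fraction of random relabelings whose absolute mean
    difference is at least as extreme as the observed one, i.e. an
    estimate of the p-value under the null hypothesis of no effect.
    """
    rng = random.Random(seed)
    observed = abs(sum(treated) / len(treated) - sum(control) / len(control))
    pooled = list(treated) + list(control)
    n = len(treated)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:n], pooled[n:]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            extreme += 1
    return extreme / n_perm

# Hypothetical post-treatment scores (lower = less severe symptoms).
treated = [12, 14, 11, 13, 10, 12, 11]
control = [16, 18, 15, 17, 19, 16, 14]

p = permutation_p_value(treated, control)
print("p-value:", p)
print("reject null at alpha = .05:", p < 0.05)
```

With these made-up scores the groups barely overlap, so the estimated p-value falls well below an alpha of .05 and the null hypothesis would be rejected.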
