Standard scores make it easy to compare results that were originally measured in different units or categories. Several IQ tests, for example, combine multiple subtests into a single IQ score.
What Is a Standard Score?
To compare numbers, they need to be expressed in the same language. For instance, suppose we describe someone's results on three separate tests: Test 1 was completed in 3 minutes, Test 2 had no mistakes, and Test 3 had 30 questions answered. On their own, these raw results don't mean much. But if we instead say that, based on these test results, the person has an IQ of 100, that makes more sense.
A standard score is a score from a set that shares the same mean and standard deviation, so the scores can be compared directly. We will use IQ as our running example to stitch all of this together. The mean is the average of all of the scores and sits at the very center of a plotted graph of scores (this will be explored later). The mean, or average, IQ is 100. The standard deviation measures how spread out the scores are around the mean, and it divides the scores into groups. This is easier to see in graph form.
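As a quick sketch, the mean and standard deviation can be computed from any list of raw scores. The scores below are invented purely for illustration:

```python
# Mean and (population) standard deviation of a hypothetical score set.
scores = [85, 100, 100, 115, 100]

mean = sum(scores) / len(scores)                              # 100.0
variance = sum((s - mean) ** 2 for s in scores) / len(scores)
sd = variance ** 0.5                                          # about 9.49

print(mean, round(sd, 2))
```

The mean gives the center of the set, and the standard deviation tells you how far a typical score falls from that center.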
The different colors in the graph seen here represent standard deviations. The standard deviation is a mathematical way of grouping people together based on how much variance there is in the set of scores. If you look at the red line at 100, the blue region just to its right covers one positive standard deviation and contains 34.1% of the population (you won't be tested on specific percentages). Combine it with the green region just to the left of the red line, and you have everything within one standard deviation of the average of 100, or 68.2% of the population. Standard deviations are a little tricky, but they allow for easy groupings and predictions.
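Those percentages can be checked with the normal distribution's cumulative distribution function. This sketch assumes IQ scores follow a normal curve with mean 100 and standard deviation 15, a common convention that the lesson itself does not state:

```python
import math

def normal_cdf(x, mean=100, sd=15):
    """Fraction of the population scoring at or below x on a normal curve."""
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

# 100 to 115 is one standard deviation above the mean: about 34.1%.
print(round((normal_cdf(115) - normal_cdf(100)) * 100, 1))
# 85 to 115 is within one standard deviation of the mean: about 68.3%
# (the lesson's 68.2% comes from doubling the rounded 34.1%).
print(round((normal_cdf(115) - normal_cdf(85)) * 100, 1))
```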
Calculating Standard Scores
Standard scores are calculated by taking a raw score and transforming it onto a common scale. There is no single formula that converts every raw score into a standard score; each formula depends on the raw measure and on what the score is meant to express. Back to the IQ example: Test 1 was completed in 3 minutes, Test 2 had no mistakes, and Test 3 had 30 questions answered. These raw results are compared against other people's results, and that comparison translates into an IQ of 100.
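One widely used transform is the z-score, which expresses a raw score as a number of standard deviations from the mean and can then be rescaled to the IQ convention (mean 100, SD 15). The norms below are invented purely for illustration; real norms come from large standardization samples:

```python
def z_score(raw, mean, sd):
    """How many standard deviations a raw score lies from the mean."""
    return (raw - mean) / sd

def to_iq(z):
    """Rescale a z-score to the IQ convention: mean 100, SD 15."""
    return 100 + 15 * z

# Hypothetical norms for the three tests (invented for illustration).
z_time = -z_score(3, mean=3, sd=1)       # 3-minute completion; faster is better
z_errors = -z_score(0, mean=2, sd=2)     # no mistakes; fewer is better
z_answered = z_score(30, mean=30, sd=5)  # 30 questions answered

composite = (z_time + z_errors + z_answered) / 3
print(round(to_iq(composite), 1))  # 105.0 under these invented norms
```

Note the sign flips for time and errors: when a smaller raw number means better performance, the z-score is negated so that higher always means better.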
Standard scores can also be used to show how well someone did in comparison to others. Percentile scores describe the relative standing of a person's standard score. For example, an IQ of 100 means the person scored above 50% of people. This is determined using the same graph we looked at earlier. If you scored 115 on an IQ test, that is one standard deviation above the mean. You would then add up all of the percentages below that point: one standard deviation is 34.1%, and everything below the mean is 50%. So an IQ of 115 means you scored above 84.1% of people.
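Under the same normal-curve assumption (mean 100, SD 15), a percentile can be computed directly instead of being read off the graph:

```python
import math

def percentile(iq, mean=100, sd=15):
    """Percent of people scoring below the given IQ, assuming a normal curve."""
    z = (iq - mean) / sd
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(round(percentile(100), 1))  # 50.0: the mean sits at the 50th percentile
print(round(percentile(115), 1))  # 84.1: 50% below the mean plus 34.1%
```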
Percentile scores can also be used to determine what percentage of people scored between two scores. For instance, how many people have scores between 85 and 100? About 34.1% of people have an IQ score in that range.
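The share of people between two scores falls out of the same model: subtract the cumulative fraction at the lower score from the fraction at the higher one (again assuming a normal curve with mean 100 and SD 15):

```python
import math

def cdf(iq, mean=100, sd=15):
    """Fraction of people scoring at or below the given IQ."""
    return 0.5 * (1 + math.erf((iq - mean) / (sd * math.sqrt(2))))

between = cdf(100) - cdf(85)
print(round(between * 100, 1))  # about 34.1% score between 85 and 100
```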
Let's review. A standard score belongs to a set of scores with the same mean and standard deviation. Converting raw scores into standard scores allows them to be compared. The mean gives a central reference point showing what most people score, and the standard deviation divides everyone else into groups depending on how far above or below the average they scored. Using the standard deviation and statistical procedures, it is possible to determine a percentile score for a standard score. A percentile score tells you what percentage of people the person scored above.