sensitivity and specificity

medical statistics

sensitivity and specificity, two measures used to determine the validity of a test, typically in a clinical or healthcare setting. Sensitivity is a measure of how well a given test identifies the disease or trait in question (i.e., how well it avoids false negatives), while specificity is a measure of how well a given test identifies the absence of the condition being tested (i.e., how well it avoids false positives). Combined, these two measures are vital for determining the predictive value of a test and how well it will perform in practice.

In healthcare settings, some tests are diagnostic in nature, providing definitive information about the presence or absence of an illness, but it is often less expensive, faster, safer, or otherwise more practical to perform screening tests. However, screening tests, which may also be used when a diagnostic test is not available, provide answers that are less certain than those of diagnostic tests. A screening test sometimes yields false negatives, instances in which the test categorizes individuals as not having the condition when they actually do. Alternatively, it may yield false positives, in which the test categorizes individuals as having the condition when they actually do not. Often, the same test produces both false negatives and false positives, generally at different rates.

Sensitivity is calculated by comparing the number of persons correctly identified as having a condition in a test population with the true number of individuals who have the condition in the same test population. The equation can be stated as: sensitivity = number of true positives / (number of true positives + number of false negatives).

Specificity is calculated by comparing the number of individuals correctly identified as not having a condition in a test population with the true number of individuals who do not have the condition in the same population. The equation can be stated as: specificity = number of true negatives / (number of true negatives + number of false positives).
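
These two formulas translate directly into code. The following is a minimal sketch in Python (the language and function names are illustrative, not part of the original article); the four counts are assumed to come from comparing test results against a reliable reference standard.

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Proportion of people who have the condition whom the test correctly flags."""
    return true_positives / (true_positives + false_negatives)


def specificity(true_negatives: int, false_positives: int) -> float:
    """Proportion of people without the condition whom the test correctly clears."""
    return true_negatives / (true_negatives + false_positives)
```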

For example, suppose a group of 10,000 subjects undergoes a screening test for condition X. The rate of X in the test population is 0.5 percent, meaning that 50 of the 10,000 subjects have the condition. If the test has a sensitivity of 0.86, or 86 percent, it will correctly identify 86 percent of the 50 persons who actually have the condition, returning 43 true positives and 7 false negatives. If the test has a specificity of 0.92, or 92 percent, it will correctly identify 92 percent of the 9,950 subjects who do not have the condition, returning 9,154 true negatives and 796 false positives.
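
The arithmetic of this example can be checked with a short sketch that reuses the illustrative functions above; the population size, prevalence, sensitivity, and specificity are those stated in the example.

```python
population = 10_000
prevalence = 0.005                                      # 0.5 percent have condition X
sens, spec = 0.86, 0.92

with_condition = round(population * prevalence)         # 50 subjects
without_condition = population - with_condition         # 9,950 subjects

true_positives = round(with_condition * sens)           # 43
false_negatives = with_condition - true_positives       # 7
true_negatives = round(without_condition * spec)        # 9,154
false_positives = without_condition - true_negatives    # 796

# Reusing the illustrative functions from the sketch above recovers the stated rates.
print(sensitivity(true_positives, false_negatives))     # 0.86
print(specificity(true_negatives, false_positives))     # 0.92
```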

In some cases, the number of false positives may dwarf the number of true positives, despite what seems to be a very high specificity. It is important for healthcare providers to have a clear understanding of a test's sensitivity and specificity, as well as the underlying prevalence of a trait in a population, to know how to proceed after a screening test. In the example above, a test subject who is screened for condition X and receives a positive result may be one of the 43 true positives, though it is far more likely that they are one of the 796 false positives. In contrast, someone who receives a negative result may be one of the 7 false negatives, but it is vastly more likely that they are one of the 9,154 true negatives. It may be that a positive result justifies further testing for condition X, while a negative result is enough to essentially rule out the condition.
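
The proportions described here are the test's positive and negative predictive values. A minimal sketch, continuing the assumed counts from the example above:

```python
# Positive predictive value: the chance that a positive result is a true positive.
ppv = true_positives / (true_positives + false_positives)   # 43 / 839 ≈ 0.05

# Negative predictive value: the chance that a negative result is a true negative.
npv = true_negatives / (true_negatives + false_negatives)   # 9,154 / 9,161 ≈ 0.999
```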

However, such decisions depend on the underlying prevalence of the condition as well as on the test's sensitivity and specificity. If a test is conducted because of symptoms or other risk factors, the condition being tested for may be very prevalent in the tested population. In that case, false negatives become relatively more common, and a much higher sensitivity is required before a negative result can be used to rule out the condition.
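
To illustrate how prevalence shifts these probabilities, the sketch below (the function name is hypothetical) recomputes the predictive values at several assumed prevalence levels, keeping the example's 86 percent sensitivity and 92 percent specificity:

```python
def predictive_values(prevalence: float, sens: float = 0.86, spec: float = 0.92):
    """Return (positive, negative) predictive values for a given prevalence."""
    tp = prevalence * sens                 # true positives per person tested
    fn = prevalence * (1 - sens)           # false negatives
    tn = (1 - prevalence) * spec           # true negatives
    fp = (1 - prevalence) * (1 - spec)     # false positives
    return tp / (tp + fp), tn / (tn + fn)


for p in (0.005, 0.05, 0.30):
    ppv, npv = predictive_values(p)
    print(f"prevalence {p:.1%}: PPV {ppv:.2f}, NPV {npv:.3f}")
# prevalence 0.5%: PPV 0.05, NPV 0.999
# prevalence 5.0%: PPV 0.36, NPV 0.992
# prevalence 30.0%: PPV 0.82, NPV 0.939
```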

Both specificity and sensitivity are required for a test to have high validity. A test that always returned a positive result would have perfect sensitivity but no specificity, and thus no predictive value, while a test that always returned a negative result would have perfect specificity but no sensitivity.
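
These two degenerate cases can be checked with the illustrative helper functions sketched earlier, using the 50 affected and 9,950 unaffected subjects from the example:

```python
# A test that always returns "positive" catches every true case but clears no one.
sensitivity(true_positives=50, false_negatives=0)        # 1.0 (perfect sensitivity)
specificity(true_negatives=0, false_positives=9_950)     # 0.0 (no specificity)

# A test that always returns "negative" clears everyone but catches no true cases.
sensitivity(true_positives=0, false_negatives=50)        # 0.0 (no sensitivity)
specificity(true_negatives=9_950, false_positives=0)     # 1.0 (perfect specificity)
```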

Stephen Eldridge