Testing – Sensitivity and Specificity

Takeaways:

  • There is a trade-off to where you set the sensitivity for a test.
    • You’ll either have more false positives or false negatives.
  • When you test for a disease, a more sensitive test means:
    • A better chance of catching a positive signal (more “true positives”).
    • A lower chance of thinking you’re safe when you’re not (fewer costly “false negatives”).
    • But you’ll catch some noise (more, not-so-costly, “false positives”).
  • When penetration of a disease is low, test predictability suffers.
    • False positives can quickly outnumber true positives.
    • So if you test positive, it is hard to tell whether the result is a true or a false positive.
  • When you are testing for a “good thing” (immunity), false positives become costly.
    • False positive = thinking you’re immune when you’re not = not good…

Car alarm analogy:

  • 4 possibilities (analogy borrowed from MedCram YouTube video):
    • Thief and alarm goes off = true positive.
    • Random noise and alarm goes off = false positive.
    • Thief and alarm does not go off = false negative.
    • Random noise and alarm does not go off = true negative
                Thief              Noise
    Alarm       True Positive      False Positive
    No Alarm    False Negative     True Negative
  • False positives.
    • A false alarm…
    • You come out, no thief, your car is still there.
    • Moderate cost.
    • (see also “The Happiness Hypothesis”, “Thinking in Bets” and “Factfulness” on the evolutionary benefits of recognizing low-cost false positives, thinking that you spot a threat where there is none.)
  • False negatives.
    • Car is stolen but alarm does not go off.
    • Much higher cost.
    • What you want to avoid.
  • High alarm sensitivity:
    • Higher chance of a false positive.
      • The alarm is more easily triggered by random noise.
    • Lower chance of a false negative.
      • Less likely to miss an actual thief.
  • Low alarm sensitivity:
    • Lower chance of a false positive.
      • The alarm is less easily triggered by random noise.
    • Higher chance of a false negative.
      • More likely to miss an actual thief.
  • So there is a trade-off to where you set the sensitivity for a test.
    • You will either get more false positives or more false negatives.
  • The right sensitivity depends on what you are testing for and on the testing environment.
    • In a high-crime environment: high sensitivity makes sense.
    • In a low-crime environment: lower sensitivity suffices.

Calculating sensitivity

  • Sensitivity: what percentage of actual thefts is captured.
    • Total actual theft = true positive + false negative.
                Thief                   Noise
    Alarm       True Positive (A)       False Positive
    No Alarm    False Negative (B)      True Negative
    Total       Actual Theft (C)        Actual Noise
  • Sensitivity %:
    • True positives / (true positives + false negatives).
      • A / (A + B).
    • True positives / actual positives.
      • A / C.
  • High sensitivity.
    • -> high true positives = catch every thief.
    • -> low false negatives = if there is no alarm, there is likely no thief.
    • -> but often comes with high false positives = many false alarms.
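The sensitivity formula above can be sketched in a few lines of Python (the counts below are made up for illustration):

```python
def sensitivity(true_positives, false_negatives):
    """A / (A + B): the share of actual thefts that set off the alarm."""
    return true_positives / (true_positives + false_negatives)

# Illustrative counts: 95 thefts triggered the alarm (A), 5 did not (B).
print(sensitivity(95, 5))  # 0.95, i.e. 95% sensitivity
```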

Calculating specificity

  • Specificity: what percentage of actual noise is correctly identified (no alarm goes off).
    • Total actual noise = true negative + false positive.
                Thief                   Noise
    Alarm       True Positive           False Positive (D)
    No Alarm    False Negative          True Negative (E)
    Total       Actual Theft            Actual Noise (F)
  • Specificity %:
    • True negatives / (true negatives + false positives).
      • E / (D + E).
    • True negatives / total negatives.
      • E / F.
  • High specificity.
    • -> high true negatives = the alarm stays silent when there is just noise.
    • -> low false positives = if the alarm goes off, there is likely a thief.
    • -> but often comes with high false negatives = many thieves may not trigger the alarm.
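The specificity formula mirrors the sensitivity one; a minimal Python sketch (illustrative counts):

```python
def specificity(true_negatives, false_positives):
    """E / (D + E): the share of actual noise events that leave the alarm silent."""
    return true_negatives / (true_negatives + false_positives)

# Illustrative counts: 900 noise events ignored (E), 100 false alarms (D).
print(specificity(900, 100))  # 0.9, i.e. 90% specificity
```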

Testing for critical disease: high sensitivity to avoid false negatives, but high false positive rates can reduce the predictive value of a positive result.

  • You want a high sensitivity test.
    • This means that if you have the disease, you will very likely test positive.
    • The false negatives will be low, which is what you want.
      • Meaning, you want to avoid receiving a negative test if there actually is disease.
  • But, high sensitivity tests may come with many false positives.
    • Since the test is more easily “triggered”, there is a higher chance of receiving a positive test when you don’t actually have the disease.
  • The false positives may outnumber the true positives.
    • So if you receive a positive test, it may be more likely a false positive than a true positive.
    • This is especially relevant when the occurrence of the disease is quite low.

Testing for infectious disease immunity: false positives and low predictability may become an issue.

  • False positives become very costly.
    • True positive = immunity = no threat of spreading the disease.
    • False positive = test says immune but you’re not = threat.
  • As before, false positives may outnumber true positives if penetration is low.
    • Becomes very difficult to rely on positive test as an indicator of immunity.
    • A positive test may more likely be a false positive than a true one.
  • A remedy: increase specificity at the expense of sensitivity.
    • This lowers the number of false positives.

Example – high sensitivity:

  • Population of 1,000,000.
  • Penetration % of disease varies: 1%, 10% or 50%.
  • Sensitivity of test is 99.5%.
  • Specificity of test is 90%.
  • Looking more closely at penetration of 1%:
    • There are 10,000 people with the disease.
      • Most of them will test positive: 9,950 people (99.5% sensitivity).
    • There are 990,000 that don’t have the disease.
      • 10%, or 99,000 people, will still test positive (90% specificity).
    • If you receive a positive test:
      • Chance of actually having the disease: 9,950 / (99,000 + 9,950) = only 9%.
  • If penetration % of disease is 50%:
    • There are 500,000 people with the disease.
      • 497,500 of them will test positive.
    • There are 500,000 people without the disease.
      • 50,000 of them (10%) will still test positive.
    • If you receive a positive test:
      • Chance of actually having the disease is 497,500 / (50,000 + 497,500) = 91%.
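The worked example above can be reproduced with a short Python sketch computing the chance that a positive test is a true positive (the function name `ppv`, for positive predictive value, is my own choice, not from the source):

```python
def ppv(population, penetration, sensitivity, specificity):
    """Chance that a positive test is a true positive."""
    diseased = population * penetration
    healthy = population - diseased
    true_positives = diseased * sensitivity        # diseased and caught by the test
    false_positives = healthy * (1 - specificity)  # healthy but still test positive
    return true_positives / (true_positives + false_positives)

# 99.5% sensitivity, 90% specificity:
print(round(ppv(1_000_000, 0.01, 0.995, 0.90), 3))  # 0.091 -> only ~9% at 1% penetration
print(round(ppv(1_000_000, 0.50, 0.995, 0.90), 3))  # 0.909 -> ~91% at 50% penetration
```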


Example – increasing specificity:

  • Sensitivity of test is 90%.
  • Specificity of test is 99.5%.
  • Now, at penetration of 1%:
    • Predictive power increases from 9% to 65%.
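The 65% figure can be checked with the same population arithmetic (a quick sketch):

```python
# Population 1,000,000; penetration 1%; sensitivity 90%; specificity 99.5%.
true_positives = 10_000 * 0.90      # 9,000 diseased people test positive
false_positives = 990_000 * 0.005   # 4,950 healthy people test positive
print(round(true_positives / (true_positives + false_positives), 2))  # 0.65
```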

