## Number needed to treat

Another way of looking at the risk difference is the number needed to treat (NNT). Where we are trying to prevent an event, and the risk difference is less than 0 (i.e. the intervention reduces the risk of the event), NNT is the inverse of the risk difference:

NNT = 1 / risk difference

(dropping any minus sign from the risk difference). NNT describes the number of patients you would need to treat with the experimental treatment rather than the control treatment in order to prevent a single event. In other words, if the risk difference is 0.76, then for every 100 people treated with the intervention, 76 more will benefit than would have benefited on control. So how many would we need to treat to help one person? 100/76, or about 1.3.

We always round the NNT up to the next whole number, so in this case we need to treat two women with antibiotics to cure one additional woman (over and above those who would have been cured anyway, i.e. those cured in the control group). It is important to link an NNT to a time frame, so here we would need to treat 2 women with antibiotics *for 6 weeks* to cure one *extra* woman.
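The calculation above can be sketched in a few lines of Python, using the risk difference of 0.76 from the worked example (the function name `nnt` is just for illustration):

```python
import math

def nnt(risk_difference):
    """Number needed to treat: the inverse of the absolute risk
    difference, rounded up to the next whole number."""
    return math.ceil(1 / abs(risk_difference))

print(nnt(0.76))  # 1/0.76 ≈ 1.3, rounded up to 2
```

Note that `abs()` drops the minus sign, so the same function works whether the risk difference is expressed as a reduction (negative) or a benefit (positive).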

Where the risk difference is greater than 0 (i.e. the risk of the event we are trying to prevent actually increases), the same calculation produces a number known as the NNH – the number needed to harm. This is the number of participants who need to be treated for a given length of time for one extra person to have the event.

While NNTs are easy to interpret, making them popular with consumers and clinicians, they cannot be used for performing a meta-analysis because of their mathematical properties. RR, OR and RD are therefore used for meta-analysis, and all may later be converted to NNTs as a way of communicating results in some Cochrane reviews. In later modules, we’ll look in more depth at interpreting and applying the results of analyses.

### Summary to date

Here is a reminder of the statistics we have covered so far in this module:

- The **risk** describes the number of participants having the event in a group divided by the total number of participants
- The **odds** describe the number of participants having the event divided by the number of participants not having the event
- The **risk ratio (relative risk)** describes the risk of the event in the intervention group divided by the risk of the event in the control group
- The **odds ratio** describes the odds of the event in the intervention group divided by the odds of the event in the control group
- The **risk difference** describes the absolute change in risk that is attributable to the experimental intervention
- The **number needed to treat** (NNT) gives the number of people you would have to treat with the experimental intervention (compared with the control) to prevent one event.
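All six statistics can be computed from a single 2×2 table of counts. A minimal sketch, using hypothetical counts (30 events out of 100 in the intervention group, 60 out of 100 in the control group – not data from any trial discussed here):

```python
import math

# Hypothetical 2x2 trial: events / total in each group.
events_int, n_int = 30, 100   # intervention group
events_ctl, n_ctl = 60, 100   # control group

risk_int = events_int / n_int                   # 0.30
risk_ctl = events_ctl / n_ctl                   # 0.60
odds_int = events_int / (n_int - events_int)    # 30/70
odds_ctl = events_ctl / (n_ctl - events_ctl)    # 60/40

risk_ratio = risk_int / risk_ctl                # 0.50
odds_ratio = odds_int / odds_ctl                # ≈ 0.29
risk_difference = risk_int - risk_ctl           # -0.30
nnt = math.ceil(1 / abs(risk_difference))       # 1/0.3 ≈ 3.3, rounded up to 4
```

With these counts the intervention halves the risk (risk ratio 0.50), and roughly 4 people would need to be treated to prevent one extra event. Note how the odds ratio (≈ 0.29) differs from the risk ratio when events are this common.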

### Uncertainty

All of these statistics are based on observations in a sample of participants who are randomly split into treatment and control groups. On average randomisation will generate two groups who would have the same event rates if treated identically – so that any observed difference in outcome must be due to the different effects of the treatment and control interventions. However, this comparability is not guaranteed in any particular trial. It is possible that, by chance, the treated group may have a few more people who would naturally do well or badly than the control group, even if they had all received identical treatment.

This means that the observed treatment effect (OR, RR, RD) may actually be an over- or underestimate of the real effect of treatment. A confidence interval (CI) can be calculated as a way of representing the uncertainty in the estimate of treatment effect. The interval contains a range of values above and below the calculated treatment effect within which we can be reasonably certain (usually specified as 95% certain) that the real effect lies. The result is said to be statistically significant if the 95% CI does not include the risk in the two groups being the same (i.e. 1 for risk ratio or odds ratio, 0 for risk difference).

Another way of thinking about CIs is that they give us the range in which the effect estimate would fall a fixed percentage of times if we repeated the study many times. Choosing a 95% CI means that in 5% of all possible trials the effect estimate would fall outside the interval (2.5% above and 2.5% below). In some situations, you may want your CI to capture more of the possible trial results, to be more confident that the quoted interval contains the real effect. You can choose, for example, a 99% CI. The resulting interval will then be wider than a 95% CI, making your interpretation more conservative.
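As an illustration of how such an interval is obtained, here is a sketch of the usual large-sample 95% CI for a risk ratio, calculated on the log scale and then transformed back. The counts are hypothetical (not from the text), and the standard-error formula is the standard approximation for the log risk ratio:

```python
import math

# Hypothetical counts: events / total in each group.
a, n1 = 30, 100   # intervention group
b, n2 = 60, 100   # control group

rr = (a / n1) / (b / n2)

# Approximate standard error of log(RR).
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)

# 95% CI: log(RR) +/- 1.96 standard errors, then back-transform.
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```

Here both limits fall below 1, so the result would be called statistically significant: the interval excludes "no difference in risk". For a 99% CI you would replace 1.96 with 2.576, widening the interval.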