Comparing two groups
In the section above we talked about risk and odds as they apply to a single group of people. In clinical trials, to assess the effect of an intervention over and above the natural course of a disease, we usually compare how people respond in an experimental group to how they respond in a control group (i.e. we compare two groups of people). When we are dealing with a dichotomous outcome, we can either compare the risk of having the event between the two groups, or compare the odds between the two groups.
Relative Risk or Risk Ratio
Relative risk or risk ratio (they mean the same thing and are both abbreviated as RR) is simply the risk of the event in one group divided by the risk of the event in the other group.
The most common way to calculate the risk ratio (and nearly all other statistics from dichotomous data) is to start by presenting your results in a 2×2 table, where each cell contains the number of participants in that category.
                              Event              No event               Total
                              (still infected)   (not still infected)
Intervention (antibiotics)          14                119                 133
Control (placebo)                  128                 20                 148
Now, if you think through what you are comparing (risk in the treated group with risk in the control group), the risk ratio is easy to calculate.
RR = risk in the treated group / risk in the control group
   = (no. with event in treatment group / no. in treatment group)
     / (no. with event in control group / no. in control group)
   = (14/133) / (128/148)
   = 0.105 / 0.865
   = 0.12
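The calculation above can be sketched in a few lines of Python (the helper name `risk_ratio` is ours, purely for illustration; the numbers come from the UTI trial table):

```python
def risk_ratio(events_tx, total_tx, events_ctrl, total_ctrl):
    """Risk ratio: risk of the event in the treated group divided by
    risk of the event in the control group."""
    risk_tx = events_tx / total_tx        # risk in the treated group
    risk_ctrl = events_ctrl / total_ctrl  # risk in the control group
    return risk_tx / risk_ctrl

# Event = 'still infected': 14/133 in the antibiotic group, 128/148 on placebo
rr = risk_ratio(14, 133, 128, 148)
print(round(rr, 2))  # → 0.12
```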
If an experimental intervention has an identical effect to the control, the risk ratio will be 1. If it reduces the chance of having the event, the risk ratio will be less than 1; if it increases the chance of having the event, the risk ratio will be bigger than 1. The smallest value the risk ratio can take is zero when there are no events in the treated group.
Odds Ratios
Just as odds are an alternative way of expressing how ‘likely’ events are in a single group, an odds ratio is an alternative way of comparing how ‘likely’ events are between two groups.
The odds ratio is simply the odds of the event occurring in one group divided by the odds of the event occurring in the other group. If we take the same data from our 2×2 table above,
OR = odds in the treated group / odds in the control group
   = (no. with event in treatment group / no. without event in treatment group)
     / (no. with event in control group / no. without event in control group)
   = (14/119) / (128/20)
   = 0.118 / 6.40
   = 0.018
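As with the risk ratio, this is easy to sketch in Python (again, the helper name `odds_ratio` is ours for illustration):

```python
def odds_ratio(events_tx, no_events_tx, events_ctrl, no_events_ctrl):
    """Odds ratio: odds of the event in the treated group divided by
    odds of the event in the control group."""
    odds_tx = events_tx / no_events_tx        # odds in the treated group
    odds_ctrl = events_ctrl / no_events_ctrl  # odds in the control group
    return odds_tx / odds_ctrl

# Event = 'still infected': odds 14/119 on antibiotics, 128/20 on placebo
or_bad = odds_ratio(14, 119, 128, 20)
print(round(or_bad, 3))  # → 0.018
```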
If an experimental intervention has an identical effect to the control, the odds ratio will be 1. If it reduces the chance of having the event, the odds ratio will be less than 1; if it increases the chance of having the event, the odds ratio will be bigger than 1. The smallest value an odds ratio can take is zero.
The difference between good and bad outcomes
Most dichotomised outcomes are a dichotomy between a good and a bad event. When we describe risk, it can refer to the risk of having the good event or the risk of having the bad event, so ‘reducing risk’ could be a good or a bad thing. It matters whether we define ‘the event’ as the good outcome or the bad outcome, because the results change if we swap the good and bad outcomes around.
Taking the UTI example again, suppose we decide to define the event as cure. The risk in the antibiotic group is now 119/133 = 0.895 (i.e. 119 women were no longer infected) and in the placebo group it is 20/148 = 0.135 (i.e. 20 women were no longer infected). The risk ratio is therefore 0.895/0.135 = 6.6.
Remember we previously calculated the risk ratio for remaining infected as 0.12. By swapping the good and bad outcomes we have changed the risk ratio from 0.12 to 6.6, but there is no simple relationship between these numbers. This makes it difficult to calculate one from the other without going back to the original data.
This means there are essentially two risk ratios: the risk ratio for a good outcome and the risk ratio for a bad outcome. There is quite a lot of work being done on this issue in the Cochrane Collaboration at the moment, but the general rule is that for outcomes which we aim to prevent (e.g. death, recurrence or worsening of symptoms), it is best to report the event as the bad outcome, which is usually the intuitive choice. For outcomes where we are trying to improve health (e.g. healing, resolution of symptoms, clearance of infection), we still do not know which option is best, but you should be very clear in your results section which outcome you are presenting. These rules are based on analysing which statistic is the most consistent – an issue we will discuss in more detail shortly.
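The lack of a simple relationship between the two risk ratios can be checked numerically; in particular, one is not the reciprocal of the other (a quick Python sketch using the UTI figures):

```python
# Risk ratios for the two complementary ways of defining 'the event'
rr_bad = (14 / 133) / (128 / 148)    # event = still infected → ≈ 0.12
rr_good = (119 / 133) / (20 / 148)   # event = cured          → ≈ 6.6

# If the two were reciprocals, 1/rr_bad would equal rr_good;
# in fact 1/0.12 ≈ 8.2, not 6.6.
print(round(rr_bad, 2), round(rr_good, 1), round(1 / rr_bad, 1))
```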
For odds ratios, you still need to choose which outcome is the most appropriate to present, but it is easier to convert from ‘good’ to ‘bad’ outcomes or vice versa. From the example above you will see that the odds ratio for the ‘good event’ of cure is
OR = odds in the treated group / odds in the control group
   = (119/14) / (20/128)
   = 8.50 / 0.156
   = 54
From above we know the odds ratio when using ‘still infected’ as the event is 0.018, and the odds ratio when using ‘infection cleared’ as the event is 54. These numbers are exact reciprocals (working with more accurate numbers, we find 1/0.01838 = 54.4 and 1/54.4 = 0.01838), and this is always the case with odds ratios. So in some senses it doesn’t matter whether we choose good or bad outcomes if we use odds ratios. Whichever we choose, it is vitally important that the results are very clearly reported so that those using the review are clear which outcome they are looking at.
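The reciprocal relationship holds exactly, because swapping the good and bad outcomes simply inverts each group's odds. A minimal Python check with the UTI figures:

```python
# Odds ratios for the two complementary definitions of 'the event'
or_bad = (14 / 119) / (128 / 20)    # event = still infected  → ≈ 0.018
or_good = (119 / 14) / (20 / 128)   # event = cured           → ≈ 54.4

# Swapping outcomes inverts each group's odds, so the two ORs
# multiply to exactly 1 (i.e. they are exact reciprocals).
print(round(or_good, 1), round(1 / or_bad, 1))
```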
When do risk ratios and odds ratios differ?
It is really important to make clear whether the statistic you are presenting is an odds ratio or a risk ratio. As we saw when we looked at using odds and risks to summarise events in a single group, the risk and odds can be very different. So too with risk ratios and odds ratios.
In general an odds ratio will always be further from the point of no effect (where OR=1, RR=1) than a risk ratio. If the event rate increases in the treatment group, the OR and RR will both be greater than 1, but the OR will be bigger than the RR. If the event rate decreases in the treatment group, both the OR and the RR will be smaller than 1, but the OR will be smaller than the RR.
Odds ratios and risk ratios will be similar when the event is rare, but will differ (often by a lot) when the event is common. When events are common, odds and odds ratios can be misleading, because people tend to interpret an odds ratio as if it were a risk ratio. Trials often study common events, so this is a very real issue.
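This divergence is easy to see with made-up numbers (the scenarios and the helper name `rr_and_or` below are ours, purely for illustration): two hypothetical trials with the same risk ratio of 0.5, one with a rare event and one with a common one.

```python
def rr_and_or(e_tx, n_tx, e_ctrl, n_ctrl):
    """Return (risk ratio, odds ratio) from event counts and group sizes."""
    rr = (e_tx / n_tx) / (e_ctrl / n_ctrl)
    orr = (e_tx / (n_tx - e_tx)) / (e_ctrl / (n_ctrl - e_ctrl))
    return rr, orr

# Rare event (1% vs 2%): RR = 0.50, OR ≈ 0.49 — nearly identical
print(rr_and_or(10, 1000, 20, 1000))

# Common event (40% vs 80%): RR = 0.50 again, but OR ≈ 0.17 —
# much further from the point of no effect
print(rr_and_or(400, 1000, 800, 1000))
```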
Later on in this module we will discuss how to choose the appropriate statistic in your review.
Another word of caution about both these measures: because the result is expressed as a proportion of the event rate in the control group, it is not possible to determine the actual number of participants who benefited. For example, a RR of 0.5 can mean a risk decreased from 40% in one group to 20% in the other, or from 2% in one group to 1% in the other. In both cases the risk is halved by the intervention, but the actual change in the number of events is very different. Because of this, it may also be useful to express results in absolute terms. One way of doing this is to report a risk difference; another is to report the number needed to treat.
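The contrast between relative and absolute effects can be sketched as follows (hypothetical numbers; `risk_difference` is our own helper name, and number needed to treat is computed here simply as the reciprocal of the absolute risk difference):

```python
def risk_difference(e_tx, n_tx, e_ctrl, n_ctrl):
    """Absolute risk difference: risk in treated minus risk in control."""
    return e_tx / n_tx - e_ctrl / n_ctrl

# Both scenarios have RR = 0.5, but very different absolute effects.
rd_large = risk_difference(20, 100, 40, 100)  # 40% → 20%: RD = -0.20
rd_small = risk_difference(1, 100, 2, 100)    #  2% →  1%: RD = -0.01

nnt_large = 1 / abs(rd_large)  # ≈ 5: treat 5 people to prevent one event
nnt_small = 1 / abs(rd_small)  # ≈ 100: treat 100 people to prevent one event
```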