The strategies that you might consider using with skewed data depend on the way the original trialists analyse and report results. The options you might encounter include:
(a) The trialists have ignored (or not noticed) the skewness and simply report means, standard deviations, and sample sizes.
This appears to be the simplest situation, as you can enter these numbers directly into RevMan. However, as we have noted, these 'improperly' analysed data may be misleading, so we cannot be sure of the validity of our findings.
(b) The trialists have log-transformed the data for analysis, and report geometric means.
When a positively skewed distribution is log-transformed, the skewness is reduced. This is a recommended method of analysis for skewed data, and in some fields, such as the analysis of antibody concentrations after vaccination, it is the norm. The data we wish to analyse in RevMan should also be on the log scale: the mean of the logged data is the log of the geometric mean. The standard deviation can be obtained from the confidence interval for the geometric mean, as described in the Cochrane Handbook for Systematic Reviews of Interventions (in the section on obtaining standard deviations from standard errors and confidence intervals).
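The calculation described above can be sketched as follows. This is an illustrative Python sketch, not part of the module: it assumes the reported confidence interval was computed on the log scale (the usual case), so that the logged interval spans 2 × 1.96 standard errors, and the function name and example figures are invented for the example.

```python
import math

def log_scale_mean_sd(geo_mean, ci_lower, ci_upper, n, z=1.96):
    """Recover the mean and SD of log-transformed data from a reported
    geometric mean and its confidence interval (assumed computed on the
    log scale), for entry into a meta-analysis on the log scale."""
    mean_log = math.log(geo_mean)  # mean of logged data = log of geometric mean
    # logged CI spans 2*z standard errors of the mean of the logged data
    se_log = (math.log(ci_upper) - math.log(ci_lower)) / (2 * z)
    sd_log = se_log * math.sqrt(n)  # SD = SE * sqrt(n)
    return mean_log, sd_log

# hypothetical trial arm: geometric mean 20 (95% CI 15 to 26.7), n = 30
mean_log, sd_log = log_scale_mean_sd(20, 15, 26.7, 30)
```

The mean and SD returned here are the values you would enter into RevMan for that arm, alongside those from other studies analysed on the same log scale.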
(c) The trialists use non-parametric tests (e.g. Mann-Whitney) and describe averages using medians.
Non-parametric tests are a satisfactory alternative for analysing skewed data in trials. However, as we cannot obtain means and standard deviations from them, we cannot include the results of such analyses directly in a meta-analysis. This is, of course, unsatisfactory, especially since the inappropriately analysed results described in (a) can be included while these cannot. One suggestion is to report the results of all studies in a table in your review, regardless of the method of analysis used in the trials. This way the data are not lost from the review, and their results can be considered when drawing conclusions, even if they cannot be formally pooled.
Statistical methods do exist for combining p values from non-parametric tests, but not for estimating effects or detecting heterogeneity.
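As an illustration of one such method, the sketch below implements Fisher's method for combining independent p values; this is one well-known approach, offered here as a hypothetical example rather than a method named in the module. It uses only the standard library, exploiting the closed-form chi-squared survival function for even degrees of freedom.

```python
import math

def fisher_combined_p(p_values):
    """Fisher's method: under the null, X = -2 * sum(ln p_i) follows a
    chi-squared distribution with 2k degrees of freedom. For even df the
    survival function has a closed form:
    P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!"""
    k = len(p_values)
    x = -2.0 * sum(math.log(p) for p in p_values)
    half = x / 2.0
    term, total = 1.0, 1.0
    for i in range(1, k):       # accumulate (x/2)^i / i! incrementally
        term *= half / i
        total += term
    return math.exp(-half) * total

# three hypothetical trials reporting p = 0.04, 0.10 and 0.07
p_combined = fisher_combined_p([0.04, 0.10, 0.07])
```

Note that, as the text says, the combined p value tells you nothing about the size of the effect or about heterogeneity between studies.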
Fixed effect and random effects for continuous data
In Module 11 we covered the differences between fixed effect and random effects meta-analysis of dichotomous data, and the issues are similar for continuous data. In a fixed effect inverse variance meta-analysis, the assumption is that all included studies are estimating one true, fixed effect, so that variation between studies is due to random error. Studies are weighted by the inverse of their variance, which depends on the within-study standard deviation and the sample size. A potential problem, therefore, is that studies with restrictive eligibility criteria will have less variance (a smaller standard deviation) and so will be given greater weight.
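The inverse-variance weighting described above can be sketched in a few lines. This is a minimal illustration with invented study data, not a replacement for RevMan's calculations.

```python
import math

def fixed_effect_pool(estimates, std_errors):
    """Fixed effect inverse-variance pooling: each study is weighted by
    1 / SE^2, so studies with smaller standard errors dominate."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))  # SE of the pooled estimate
    return pooled, pooled_se

# three hypothetical studies: mean differences with their standard errors
pooled, pooled_se = fixed_effect_pool([2.0, 3.5, 1.0], [0.5, 1.0, 2.0])
```

Here the first study (SE 0.5, weight 4) receives sixteen times the weight of the third (SE 2.0, weight 0.25), illustrating how a tightly controlled trial can dominate the pooled result.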
A random effects meta-analysis of continuous data assumes that the included studies are estimating different, though related, effects (as they differ in population, setting and so on), and that these effects are distributed according to a particular pattern. In the absence of heterogeneity, the estimated between-study variance is zero and a random effects meta-analysis gives the same result as a fixed effect meta-analysis. Weights are attributed slightly differently under a random effects model; however, studies with restrictive eligibility criteria will again tend to be given greater weight.