It is striking that there are more data on the animals used in experimentation than on the humans (patients or otherwise) who take part in clinical trials. Certainly, in both cases the regulations are strict and different bodies ensure the safety of the participants in experimentation.
Data on the experimental use of animals in Spain were recently published. Overall, animal uses exceeded 808,827 during 2014: 526,553 rodents (mostly mice), about 190,354 fish (more than a third of them zebrafish), 44,169 birds and 23,881 rabbits, to name the most used species. It should be noted that a quarter of these animals, most of them mice, are genetically modified. The vast majority (75%) are used for what is called basic research and for translational and applied research.
Is this too many or too few? What are the latest trends? Despite recent changes in the way the information is collected, the data show an increase over previous years, which does not seem to sit well with the principles that should inspire animal testing, set out in Royal Decree 53/2013: the so-called 3 Rs of replacement, reduction and refinement.
Aside from quantity, quality also matters, and there is a remarkable lag, compared to human clinical research, in initiatives to improve data collection and the review of experimental studies. We are referring to CAMARADES (Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies), whose systematic reviews are essential before starting a new study, and to the ARRIVE guidelines (Animal Research: Reporting of In Vivo Experiments), which aim to improve the design and publication of animal experimentation and, ultimately, to reduce the risk of bias.
One might wonder how many biomedical research funding agencies, in their peer review processes, call for or require the use of these guidelines when assessing projects involving animal experimentation. Surely we could have the same discussion about the implementation of the CONSORT (Consolidated Standards of Reporting Trials) and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines in the case of clinical trials in humans.
It seems clear that the higher the risk of bias, the greater the overestimation of effects, so it is not surprising that subsequent attempts to translate such findings into human experimentation end up being disappointing.
The field of neuroscience is full of such cases of failed translation, usually involving imperfect animal models or study designs that are insufficiently careful and too prone to bias.
A recent paper by Malcolm R. Macleod of the Centre for Clinical Brain Sciences, University of Edinburgh, published in PLoS Biology, insisted on these qualitative shortcomings often found in animal research. It also underlined that the reporting of risk-of-bias measures bears no relation to a journal's impact factor, which once again exposes that metric as a poor indicator of research quality.
Post written by Joan MV Pons.