Paula Adam, Head of Research Assessment, AQuAS
The growing number of agents interested in evaluating the impact of research conducted in R&D organizations share a common concern: the need to define a set of result or impact indicators that are comparable and internationally accepted. In 2009, a Canadian panel of experts put forward one of the few proposals with a global scope: a set of indicators accompanied by a theoretical model and an appeal to the global community to help advance its improvement and refinement. Since then, others have reported similar interests, but we are not yet aware of any successful wide-ranging initiative. In the literature, the point of view that prevails favours a mixed vision of quantitative metric indicators combined with qualitative approaches.
One way to measure research is to consider it as a production process with inputs, process variables, outputs, outcomes and benefits. The activity and resources dedicated to R&D can be measured in quantitative terms (human resources, competitive and non-competitive fundraising, etc.). The research process can also be measured quantitatively (clinical trials, patents, clinical practice guidelines, spin-offs and start-ups, etc.), as can primary outputs (publications, citations, etc.). In Catalonia, the UNEIX information system (the information system of the Catalan universities) has collected data since 2000 from twelve public and private universities in Catalonia, allowing quantitative monitoring of needs, means and some results in the fields of teaching and research. The SIRECS (information system for research in health sciences) complements the information gathered by UNEIX with individual data for the 19 biomedical research centres that receive public funding from the Autonomous Government.
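To make the production-process framing concrete, here is a minimal sketch of how a centre's indicators could be grouped along the input, process and output stages of such a model. The indicator names and figures are purely illustrative assumptions, not the actual UNEIX or SIRECS schema:

```python
# Hypothetical grouping of research indicators along the stages of a
# production-process model. Names and values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class CentreIndicators:
    centre: str
    inputs: dict = field(default_factory=dict)    # resources: staff, funding
    process: dict = field(default_factory=dict)   # activity: trials, patents
    outputs: dict = field(default_factory=dict)   # primary results: papers, citations

centre = CentreIndicators(
    centre="Example Biomedical Research Centre",
    inputs={"researchers_fte": 120, "competitive_funds_eur": 4.5e6},
    process={"clinical_trials": 18, "patents_filed": 3, "spin_offs": 1},
    outputs={"publications": 210, "citations": 5400},
)

# A simple derived metric: publications per full-time researcher.
print(centre.outputs["publications"] / centre.inputs["researchers_fte"])
```

Derived ratios of this kind (output per unit of input) are what make the quantitative part of the model comparable across centres; the harder question, addressed below, is what to do beyond this stage of the chain.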
The Central de Resultats (Results Centre) brings together, in comparative and individual terms, the UNEIX-SIRECS data on resources, processes and scientific results, along with quantitative indicators. The question is how to continue and complete the analysis with non-academic or social impact indicators at a nominal level (centre by centre), i.e. the impact on capacity building (of the system, the professionals, and the patients and their families); the impact on decision-making; the impact on health gains; and the economic and social impact.
In Britain, the Research Excellence Framework (REF) is an exercise applied to universities and is perhaps the information system that collects non-academic impacts in the most systematic and nominal way. It does so by complementing the quantitative data with a very large number of case studies, assessed periodically through peer review across all universities and scientific disciplines.
Non-academic impact is defined in this case as: “any effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia”. In the REF, each case study is a four-page document with a narrative description of non-academic impacts, assessed against two criteria: reach and significance. The impasse between these narrative files and the metric objectives has recently been addressed in a report by King’s College London and Digital Science using the methodological possibilities of ‘big data’ (i.e. the analysis of the large volumes of data that the new generation of computers allows for): 6,679 case studies from British universities (submitted under the REF 2014 methodology) were used to apply a quantitative approach to a high volume of qualitative data. This mixed method, known as text-mining, serves to synthesize the content of a high volume of qualitative narrative information. One of the study’s findings is striking: “The quantitative evidence supporting claims for impact was diverse and inconsistent, suggesting that the development of robust impact metrics is unlikely”. Once again, then, metric indicators combined with qualitative approaches are suggested as perhaps the best way to capture an overall assessment.
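As an illustration of what such a text-mining step can look like in practice, the sketch below applies TF-IDF weighting and non-negative matrix factorisation to a handful of invented case-study snippets to surface latent “impact themes”. This is a minimal stand-in for the approach; the actual King’s College London / Digital Science analysis used 6,679 REF case studies and a far richer pipeline:

```python
# Minimal text-mining sketch: TF-IDF + topic extraction over a tiny,
# invented corpus of impact-case-study snippets. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

case_studies = [
    "New diagnostic test adopted in clinical practice guidelines, improving patient outcomes.",
    "Research findings informed national public health policy on screening programmes.",
    "Spin-off company commercialised a patented imaging device, creating jobs.",
    "Clinical trial results changed prescribing guidelines for cardiovascular patients.",
    "Evidence cited in parliamentary policy debate on health service funding.",
    "Start-up licensed university patents to develop medical software products.",
]

# Weight terms by how distinctive they are within the corpus.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(case_studies)

# Factorise the term matrix into a small number of latent themes.
nmf = NMF(n_components=3, random_state=0)
nmf.fit(X)

terms = tfidf.get_feature_names_out()
for k, topic in enumerate(nmf.components_):
    top = [terms[i] for i in topic.argsort()[::-1][:4]]
    print(f"theme {k}: {', '.join(top)}")
```

On a real corpus, the recovered themes and their prevalence across centres would be the starting point for the kind of synthesis the report describes, rather than a finished impact metric.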
Returning to Catalonia, the pilot study performed for two centres, based on a sectoral input-output tables approach, suggests the possibility of extending the quantitative metrics into the economic sphere (a minimal sketch of this approach closes this section). At a more qualitative level, the evaluation of the 47 CERCA centres performed in 2012-13, through a survey reviewed by international expert panels, provides quantitative and qualitative data and metrics that are very valuable for assessing each centre’s impact in light of its missions and strategies. What is the next step? How can we move towards a mixed quantitative-qualitative method in order to obtain an overall evaluation of the impact of biomedical research centres in Catalonia?
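As a closing illustration, the sketch below shows the core calculation behind an input-output tables approach: a Leontief inverse that translates a change in final demand (e.g. new research spending) into total output effects across sectors. The coefficients and sectors are invented for the example; they are not figures from the Catalan pilot study:

```python
# Core of a sectoral input-output (Leontief) calculation with invented
# numbers; not the coefficients used in the Catalan pilot study.
import numpy as np

# Technical coefficients A[i, j]: input from sector i needed per unit
# of output of sector j (three hypothetical sectors).
A = np.array([
    [0.10, 0.20, 0.05],
    [0.15, 0.10, 0.20],
    [0.05, 0.10, 0.10],
])

# Exogenous change in final demand: e.g. 10 M EUR of new research
# spending landing entirely in sector 0.
delta_f = np.array([10.0, 0.0, 0.0])

# The Leontief inverse (I - A)^-1 propagates the demand shock through
# the chain of inter-sector purchases.
leontief = np.linalg.inv(np.eye(3) - A)
delta_x = leontief @ delta_f

print("output change per sector:", np.round(delta_x, 2))
print("total economic effect:", round(delta_x.sum(), 2))
```

The total effect exceeding the initial spending is the familiar multiplier logic; this is the kind of quantitative extension into the economic sphere that the pilot study points towards.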