Altmetrics: complementary metrics focused on the article

19 Apr
Ernest Abadal

The traditional system for assessing the quality of a scientific publication (a journal article, for example) has fundamentally been based on counting the citations it generates. In an article published in Science (1955), Eugene Garfield (1925–2017) proposed a citation index as a system to help authors find articles on a subject. It was, without doubt, a great innovation. Later, with the creation of the Institute for Scientific Information (today the Web of Science) and the Journal Citation Reports, this system became very prominent and centred its work on the assessment of journals, because it helped authors decide in which journal to publish (based on the impact factor calculated for each one). It is a system that has been criticised by the humanities and social sciences, and also because it does not focus on the article itself but instead gives the reference value to the journal in which it is published (presupposing that an article should “inherit” the journal’s impact factor).
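
For reference, the impact factor that this system popularised is a journal-level ratio; in its usual two-year form it can be written as

\[
\mathrm{JIF}_{Y} = \frac{\text{citations received in year } Y \text{ by items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]

Nothing in this ratio refers to any individual article, which is precisely the limitation the criticism points at.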

From 2010 onwards, people started talking about altmetrics: a set of indicators (for example, how often an article is shared or re-disseminated, the comments it generates, mentions and likes, etc.) that measure the presence of a publication on social and academic networks and considerably complement citation indexes. Altmetrics therefore assess the repercussion of the article itself, not that of the journal as a whole (the way impact factors do, for example).

At present, several scientific publishers have taken this information into consideration. One of the first examples was PLOS, followed by Nature and others. Its use has also spread to databases (e.g. Scopus) and to academic networks (e.g. ResearchGate). The altmetric data accompanying an article tend to include the sections shown in figures 1 and 2, although there can be small differences depending on the tool used (ImpactStory, PLUM, Article-Level Metrics, Altmetric.com, etc.).

Figure 1. Example of the altmetrics of an article in PLOS

We can see, then, that the data include not only statistics on presence in social networks (mentions, blogs, etc.) but also usage data (views and downloads) as well as the citations the article has received (in Scopus, CrossRef, PubMed, Google Scholar, etc.). This is very complete quantitative information for the reader and also for the author of the article.
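
As an aside for readers who want to gather this kind of article-level information themselves, here is a minimal sketch (my own illustration, not the workflow of PLOS, Nature or any of the tools mentioned above). It assumes the public CrossRef REST API for the citation count and the free Altmetric API for attention counts; the endpoints and JSON field names are given to the best of my knowledge and may differ or change, so treat them as assumptions.

```python
import requests


def article_level_metrics(doi: str) -> dict:
    """Collect a few article-level indicators for one DOI (a sketch, not exhaustive)."""
    metrics = {}

    # Citation count as reported by the CrossRef REST API (assumed field name)
    r = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if r.ok:
        metrics["crossref_citations"] = r.json()["message"].get("is-referenced-by-count")

    # Social attention as reported by the free Altmetric API
    # (returns 404 when the DOI is not tracked; field names are assumptions)
    r = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if r.ok:
        data = r.json()
        metrics["altmetric_score"] = data.get("score")
        metrics["tweets"] = data.get("cited_by_tweeters_count")
        metrics["blog_posts"] = data.get("cited_by_feeds_count")
        metrics["news_stories"] = data.get("cited_by_msm_count")

    return metrics


print(article_level_metrics("10.1234/example-doi"))  # hypothetical DOI, replace with a real one
```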

Figure 2. Example of the altmetrics of an article in Nature

In the case of Nature (figure 2), there is also a graphic representation in the form of a circle or “ring” in which each colour corresponds to a type of channel (Twitter, blogs, Facebook, Wikipedia, etc.). A contextualised percentage is given in relation to articles of a similar age, and the article’s precise presence in the general media (“news articles”) and in scientific blogs is also indicated.
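
That age-based contextualisation is, in essence, a percentile-style comparison. The sketch below is my own illustration of the idea (not Altmetric’s or Nature’s actual method), assuming we already have the attention scores of articles published at around the same time:

```python
from bisect import bisect_left


def age_contextualised_rank(score: float, peer_scores: list[float]) -> float:
    """Percentage of same-age articles whose attention score falls below `score`."""
    ranked = sorted(peer_scores)
    return 100 * bisect_left(ranked, score) / len(ranked)


# Hypothetical scores of articles published in the same week as ours
peers = [0, 1, 1, 2, 3, 5, 8, 13, 40, 120]
print(age_contextualised_rank(25, peers))  # -> 80.0, i.e. higher than 80% of its contemporaries
```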

Let us make a quick assessment of altmetrics. Their main strengths are that they measure the impact of publications beyond strictly academic circles, that they can be applied to all types of documents (be it an article, a book or a doctoral thesis), that the results are immediate (there is no need to wait for the annual value of the impact factor) and that they focus on the article (and not on the journal).

As for their weak points, it should be said that the indicators need to be collected very quickly (they are highly volatile), that it is difficult to compare indicators with one another (which is worth more, a retweet or a “like”?), that data collection is hard to normalise and homogenise (a problem that does not arise with citations) and that different measuring tools produce different results (e.g. ImpactStory or Altmetric).

Altmetrics, therefore, help to measure the impact of a specific publication in social networks. This is why we should define them as complementary rather than alternative metrics. In contrast to the traditional impact factor – which is applied to a journal – altmetrics are centred on the article, and this is a significant innovation. Despite having some weak points, they are in a consolidation phase and have long-term potential.

From a researcher’s perspective, it is clear that publishing an article in a journal is no longer enough: one needs to be fully involved in its dissemination on social networks (especially Twitter, blogs, etc.) and on academic networks (ResearchGate, Mendeley, etc.) to give visibility to the published content. In this new scenario, altmetrics are fundamental because they can measure this impact on the networks and offer authors (and readers) an overall view of the dissemination of their publications.

Post by Ernest Abadal, Faculty of Library and Information Science, University of Barcelona.

Open access predatory journals

7 Dec
Joan MV Pons

Anyone who has published and is identifiable by an email address will not be surprised to receive a lot, and I mean a lot, of emails inviting them to publish in what are apparently (by their names) scientific journals, to participate in congresses or conferences on subjects that seem of interest, or to join some editorial board. These emails come in constantly, and I always mark them as junk mail so as not to waste any more time on them.

And it is true that this type of business, which is purely that, has proliferated in recent times, largely due to the human species’ inherent zeal for lining its pockets, but perhaps also because of the great proliferation of researchers and research institutes. There is a lot of money at stake, and it is well known that, with minimal effort, one can end up publishing anything one desires. If journal publishers in the past strove for readers and subscribers, these open access journals are now also looking for contributors, people who will publish in their pages …. in exchange for a small (and not so small) fee. There is no need to dwell on the advantages of these open access journals, or on how some of them have attained a fairly high impact factor within a short period of time. Here the impact factor is an appropriate measure, because it gives an approximation of the citations received by the articles published in these journals; it is a mistake, as we know, to use a journal’s impact factor as an approximate measure or substitute (proxy or surrogate) for the value of an article.

Jeffrey Beall, a librarian, is the person who introduced this term and who compiles and periodically updates a list of journals that fit this typology. According to Wikipedia’s definition, predatory journals are open access publications whose business model is based on exploiting open access publishing by charging authors a publication fee without providing the editing and publishing services of journals considered legitimate (whether open access or not). As of December 2016, Beall’s List – a good example of how up to date Wikipedia keeps itself on some subjects – included some 1,155 journals.

The same universal online encyclopaedia lists a series of characteristics associated with this type of predatory journal (predators, after all, hunt to survive).

Post written by Joan MV Pons