By Tara Newby, SAGE Publishing
How can the impact of an academic article be measured? It seems that everyone wants to find an answer to this question – from the researcher and author teams that create research articles, to the editors and peer reviewers who curate them, to the societies and publishers who ensure that the articles are released to the world. It is true that all peer-reviewed research contributes to knowledge and thus has inherent value. But in this internet age, when a publisher can easily see which articles are the most downloaded and which articles are not downloaded at all, the question of impact remains. If an article is researched, reviewed, produced, and published, but very few read it – does it have an impact? If an article is negatively cited, what is its impact? There is no simple answer to the question of defining and measuring impact.
Despite the difficulty of determining what true impact is for an academic article, dozens of metrics have emerged that claim to assess it. The impact factor published yearly by Clarivate Analytics is arguably the most influential metric of impact in scholarly publishing. Many authors choose the journal to which they want to submit based on the strength of that journal’s impact factor. Yet the impact factor metric – the average number of citations during a volume year to articles published in a journal during the previous two volume years – has been widely critiqued. Some argue that it overemphasizes an arbitrary selection of articles, that it fails to account for the long-term impact of research, and that it is too influential in determining publication decisions. Because tenure committees place such emphasis on publishing in high-impact-factor journals, many authors are incentivized to submit to the journal with the highest impact factor whether or not it is the right fit for their research. Editors, in turn, seeking to raise their journal’s impact factor, are similarly motivated to publish controversial content likely to attract citations rather than replication studies that add genuine value to the literature.
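To make the definition concrete, the two-year impact factor can be written as a simple ratio; the figures below are purely hypothetical, chosen only to illustrate the arithmetic:

```latex
% Two-year impact factor for year Y (hypothetical figures for illustration only)
\[
\mathrm{IF}_{Y} \;=\;
\frac{\text{citations in year } Y \text{ to items published in years } Y{-}1 \text{ and } Y{-}2}
     {\text{citable items published in years } Y{-}1 \text{ and } Y{-}2}
\]
\[
\text{e.g.,}\quad \mathrm{IF}_{2024} \;=\; \frac{600 \text{ citations}}{150 \text{ articles}} \;=\; 4.0
\]
```

A journal that published 150 citable items across the two prior volume years and received 600 citations to them in the current year would thus report an impact factor of 4.0.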
On the other side of impact metrics is the Attention Score, put forward by Altmetric.com. The Altmetric Attention Score addresses some of the problems of the impact factor, particularly by providing an article-level metric rather than a journal-level one, and by measuring the online conversation around an individual article rather than just recent citations. However, even the Altmetric Attention Score has its limitations. In measuring the online conversation around an article, it tends to highlight “catchy” articles rather than scientifically important ones that change practice or advance research. It also excludes traditional citations, making it difficult to compare an Altmetric Attention Score directly with an impact factor or similar citation-based metric.
While it is limited as a metric of impact, one advantage of the Attention Score is that publishers, editors, and authors can see the direct influence of online promotional efforts on the reach of a particular article. Any tweets, blog posts, or news articles “count” toward the Altmetric Attention Score, and any mentions can be easily tracked via Altmetric.com. In the past, citations were one of the few indications of impact, and authors had little control over those outcomes apart from selecting a journal with a high impact factor to publish their work. Now, it is clear that any promotional effort that authors themselves make on behalf of their articles can have a positive effect on the reach of their content. Tools like Kudos have been developed that make it increasingly simple for authors to share articles on their own social media. Content that is not already open access can often be shared during a publisher’s free trial period, making the full article free to read. As social media shares proliferate, it has become increasingly clear that just as it takes a village to publish an academic article, it also takes a village to promote it vigorously. Many types of promotional efforts – from mass email campaigns to individual tweets – contribute to the impact of an article by increasing its readership.
A comprehensive metric for impact has yet to be determined, but the tools that we have – though limited – are useful for understanding the value of published content. In the future, some productive integration of the impact factor, the Altmetric Attention Score, and other metrics may be possible. For now, though, it is important to ask first: what impact is being evaluated? The person or group doing the evaluating will view an article’s impact differently based on their own priorities, and impact will differ across audiences (e.g. the academic publishing community vs. the everyday lives of average people). Altmetric.com addresses this issue succinctly in its post “What are Altmetrics?”:
It is important to bear in mind that metrics (including citation-based metrics) are merely indicators–they can point to interesting spikes in different types of attention, etc. but are not themselves evidence of such.
To get at true evidence of impact, you need to dig deeper into the numbers and look at the qualitative data underneath: who’s saying what about research, where in the world research is being cited, reused, read, etc., and so on.
This is a conversation that we must continue to have as new metrics for measuring impact arise. In the meantime, it is essential to resist relying entirely on any single metric of impact, and instead to dig a little deeper into the narrative of an article’s reach and influence before making that evaluation.
Tara Newby is a Publishing Editor at SAGE and works on the STM Society Journals team.