
ALTMETRICS BINGO

Purpose

Altmetrics describes the analytical tools that can illuminate the impact of scholarly work both in academia and throughout society. In the past, the only measure of a work's worth was whether it was valued by other academics. In the digital age, with many openly accessible routes for distributing ideas, there is a need to look at how scholars communicate across many platforms. Altmetrics is a way to capture that information.


Background to Impact Measurement

The concepts behind altmetrics were first pioneered by Eugene Garfield and the organization he founded in the late 1950s, the Institute for Scientific Information (ISI). Garfield developed a product named the Science Citation Index, followed by the Social Sciences Citation Index and the Arts and Humanities Citation Index.


These indexes were built on the idea of formal citation of scholarly production. In other words, an article's impact was based on how many times it was cited in subsequently written articles. The method was to index not only the author(s), title, abstract, bibliographic information (journal, volume, date, pages), and descriptive terms about the article in hand, but also the articles it cited in its reference list. From this, a list could be generated of the most-cited authors and journals (a minimal sketch of the idea appears below). Another major feature of the index was that it recorded the institutional affiliation of each author, which allowed entire institutions to measure their reputation in particular fields.
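To make the mechanics concrete, here is a minimal sketch, assumed purely for illustration and not a description of ISI's actual system, of how tallying the reference lists of indexed records yields a ranking of the most-cited works:

```python
from collections import Counter

# Hypothetical indexed records: each stores its own bibliographic data
# plus the works that appear in its reference list.
records = [
    {"author": "Smith, J.", "journal": "J. Example Stud.",
     "cites": ["Doe 1970", "Lee 1968"]},
    {"author": "Doe, A.", "journal": "J. Example Stud.",
     "cites": ["Lee 1968"]},
]

# Counting every reference across all records gives the basic output of a
# citation index: which works (and, by extension, which authors and
# journals) are cited most often.
citation_counts = Counter(ref for rec in records for ref in rec["cites"])
print(citation_counts.most_common())  # [('Lee 1968', 2), ('Doe 1970', 1)]
```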


Most importantly, the citation indexes allowed librarians to judge the reputation and quality of particular journals and to answer the question of which journals should be in a library's collection. This led to a measure called the Impact Factor. The measure is still widely used, and ISI's successor, Clarivate, continues to publish the online database Web of Science and the companion Journal Citation Reports, which ranks journals by their impact factor scores.
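For reference, the two-year journal impact factor reported in Journal Citation Reports is, in outline, the average number of recent citations per recent article:

\[
\mathrm{IF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ by items published in years } Y-1 \text{ and } Y-2}{\text{citable items published in years } Y-1 \text{ and } Y-2}
\]

So a journal whose articles from the previous two years were cited 200 times this year, out of 100 citable articles published in that window, would have an impact factor of 2.0.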


The problem with the Impact Factor is that it is a strong quantitative measure (how many times a work is cited) but a poor qualitative one (it does not indicate, for instance, whether a work was cited for negative reasons).
