Posted by: bluesyemre | June 18, 2015

How to navigate the world of citation metrics by Jenny Neophytou


There are many factors beyond academic quality that can influence the rate at which an article is cited. The purpose of this post is to provide guidance on the various types of citation metrics available: the background to each metric, what it tells us, and, crucially, what it does not tell us. The following factors should be considered before metrics are used in any decision-making process:

  • Discipline. In particular, social science and humanities disciplines tend to cite more slowly, and cite a larger proportion of books (as opposed to journals) compared with scientific disciplines. Citation metrics should not be compared across disciplines unless this is accounted for (e.g. via the SNIP metric (see below)).
  • Document type. Review papers tend to attract the most citations; case studies tend to attract the fewest citations. That is not necessarily a comment on the research quality – just the type of research produced. Usually, ‘non-substantive’ papers, such as meeting abstracts and editorials, are excluded from the denominator of citation metrics.
  • Age of research cited. Older articles will have had more time to accumulate citations. If using a metric based on total citation counts, keep in mind that it will be skewed towards older papers, or towards academics who are further into their careers.
  • The data source. There are many sources of citation information (e.g. Web of Science, Scopus, Google Scholar), and the citation score for a single article is likely to be highest in the largest database (Google Scholar). Most citation metrics are tied to a single database; however, not all are. In these instances, it is important to note the data source.

Overview of Key Metrics:

  • 5-Year Impact Factor Data source: Web of Science. Published in the annual Journal Citation Reports. Average citations in the JCR year to substantive papers (articles, proceedings papers, reviews) published in the previous 5 years.
  • Altmetrics Metrics based on a broad spectrum of indicators, such as tweets, blog mentions, social bookmarking, etc. For more details see this blog posting on Wiley Exchanges: http://exchanges.wiley.com/blog/2013/05/20/article-level-metrics-painting-a-fuller-picture/.
  • Eigenfactor Data source: Web of Science. Published in the annual Journal Citation Reports. Based on weighted citations in the JCR year (excluding journal self-citations) to papers published within the previous 5 years. Citations are weighted according to the prestige of the citing journal (i.e. citations from top journals ‘mean more’ than citations from lesser journals). The mathematics of the calculation are akin to the PageRank calculations that Google uses in its ranking algorithms.
  • Google Scholar Metrics Data source: Google Scholar. These are ‘rolling metrics,’ i.e. based on a continually changing dataset. The main Google Scholar journal metric is the H5 index. This is very similar to the H-Index (explained below) but limited to papers published within the past 5 years.
  • H-index Data source: Any. A measure designed to evaluate individual authors, computed from article-level citation counts, but which can be extended to any set of papers. The H-index indicates the number of papers, H, that have been cited at least H times, e.g. an H-index of 15 means that 15 papers have each been cited at least 15 times. This metric does not control for the age of documents or citations, and can be calculated from any citation database. Caution is advised, as the same group of articles will yield a different H-index in different databases.
  • Immediacy Index Data source: Web of Science. Published in the annual Journal Citation Reports. Average citations in the JCR year to substantive papers published in the same year. This is really an indication of how rapidly research is cited. Journals with a high Immediacy Index will usually be journals representing a fast-paced research environment.
  • Impact Factor Data source: Web of Science. Published in the annual Journal Citation Reports. Average citations in the JCR year to substantive papers published in the previous two years.
  • SJR Data source: Scopus. Published in the SCImago journal and country rank reports.
    The SCImago Journal Rank (SJR) is based on weighted citations in Year X to papers published in the previous 3 years. Citations are weighted by the ‘prestige’ of the citing journal, so that a citation from a top journal will ‘mean more’ than a citation from a low-ranked journal. As with the Eigenfactor, the calculation is broadly similar to the Google PageRank algorithm.
  • SNIP Data source: Scopus. Published twice yearly on CWTS Journal Indicators.
    The Source Normalized Impact per Paper (SNIP) measures average citations in Year X to papers published in the previous 3 years. Citations are weighted by the ‘citation potential’ of the journal’s subject category, thereby making the metric more comparable across different disciplines.
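As a worked example, the H-index described above reduces to a short calculation over a list of per-paper citation counts. This is a minimal sketch; the function name and the sample counts are illustrative, not drawn from any real author's record:

```python
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each.

    citations: list of per-paper citation counts, in any order.
    """
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times: four papers have at
# least four citations each, but not five with five, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Because the calculation depends only on the citation counts supplied, running it against Web of Science, Scopus and Google Scholar data for the same author will generally give three different answers, as the post cautions.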
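The Impact Factor (and, with a longer window, the 5-Year Impact Factor) is at heart simple division: citations received in the JCR year to recent papers, divided by the number of substantive papers published in the window. A sketch with invented numbers for a hypothetical journal:

```python
def impact_factor(citations_in_jcr_year, substantive_papers_in_window):
    """Average citations per substantive paper.

    citations_in_jcr_year: citations received in the JCR year to papers
        published in the window (previous 2 years for the standard
        Impact Factor, previous 5 for the 5-Year Impact Factor).
    substantive_papers_in_window: articles, reviews and proceedings
        papers published in that window (editorials, meeting abstracts
        and other 'non-substantive' items are excluded).
    """
    return citations_in_jcr_year / substantive_papers_in_window

# Hypothetical journal: 480 citations in the JCR year to the
# 200 substantive papers it published in the previous two years.
print(impact_factor(480, 200))  # → 2.4
```

Note how the denominator rule from the post matters in practice: including non-substantive items would inflate the denominator and depress the score.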
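The prestige weighting behind the Eigenfactor and the SJR can be illustrated with a toy power iteration over a citation matrix, the same basic mechanism as Google's PageRank. This is a simplified sketch over an invented three-journal network; the real metrics add damping factors, per-article normalisation and other refinements not shown here:

```python
import numpy as np

# Toy citation matrix C: C[i, j] = citations from journal j to journal i.
# Journal self-citations are set to zero on the diagonal, as in the
# Eigenfactor calculation. All counts are invented for illustration.
C = np.array([
    [0, 4, 2],
    [3, 0, 1],
    [1, 2, 0],
], dtype=float)

# Column-normalise so each citing journal distributes one unit of influence.
M = C / C.sum(axis=0)

# Power iteration: repeatedly re-weight each journal's score by the
# prestige of the journals citing it, until the vector stabilises.
prestige = np.full(3, 1 / 3)
for _ in range(100):
    prestige = M @ prestige

print(prestige)
```

The fixed point of this iteration gives each journal a score in which a citation from a high-prestige journal 'means more' than one from a low-ranked journal, which is exactly the intuition the post describes for both metrics.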

http://exchanges.wiley.com/blog/2014/05/15/how-to-navigate-the-world-of-citation-metrics/

