As an early career researcher, you may find it difficult to keep up with industry lingo. That's why, as of late, eContent Pro International® has provided free nuggets of wisdom on a variety of topics – such as Open Access and ORCID iD.
Today, let's examine impact factor, the system used to rank journals globally in terms of overall quality and usage.
An impact factor (IF) measures how frequently, on average, articles from an indexed academic journal are cited in a given year. These factors are then used to gauge a journal's impact on the academic community, providing a measure of effectiveness by which journals can be ranked.
Essentially, citations act as currency in the realm of academic publishing – the more a journal's content is cited, the more valuable the journal is perceived to be. This industry standard has caught flak for a multitude of reasons, predominantly because it offers such a narrow, limited view of an output's overall influence. Many scholars agree these factors don't accurately reflect a journal's overall quality, but in the absence of superior alternatives, IF is viewed as the best evaluation technique available today.
How to Calculate Impact Factor
Impact factors can be found in Journal Citation Reports (JCRs) maintained by Clarivate Analytics. Clarivate calculates global IF rankings, tracking over 2 million outputs—spanning 80 countries/regions—from 11,500+ indexed journals.
Thomson ISI (Institute for Scientific Information) created the formula for calculating impact factor. This equation is pretty straightforward – A/B.
Within this equation…
"A = the number of times articles published in 2017-2018 were cited in indexed journals during 2019
B = the number of articles, reviews, proceedings or notes published in 2017-2018"
As an example, let's calculate our own hypothetical journal's IF for 2019.
Let's say our journal published 100 articles, reviews, proceedings, and notes in 2017-2018, and that those items accumulated 250 citations in indexed journals during 2019. That leaves us with the following equation – 250/100. Hence, our hypothetical journal's IF for 2019 is 2.5.
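The A/B formula above can be sketched in a few lines of Python. This is purely illustrative – the function name and numbers are our own, not part of any Clarivate tool:

```python
def impact_factor(citations: int, items_published: int) -> float:
    """Compute a journal's impact factor for a given year.

    citations: times the journal's items from the two prior years
        were cited in indexed journals this year (A)
    items_published: articles, reviews, proceedings, and notes
        published in the two prior years (B)
    """
    if items_published <= 0:
        raise ValueError("journal must have published at least one item")
    return citations / items_published

# Hypothetical journal: 250 citations to 100 published items.
print(impact_factor(250, 100))  # 2.5
```

The division is all there is to it; the hard part in practice is the counting, which is why IFs are compiled centrally in the Journal Citation Reports rather than computed ad hoc.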
For reference, in 2017, the JCR tracked 12,061 journals. Of those 12,061, two-thirds—8,074 journals—had an IF greater than or equal to 1, 4.6% of journals—553—had an IF greater than or equal to 6, and just 1.7% of journals—213—had an IF greater than or equal to 10.
As stated earlier, many scholars agree these factors don't accurately reflect a journal's overall quality. Here are a few limitations associated with the current impact factor model:
IFs only account for traditional citations, not instances when output is referenced in nontraditional ways. Should journals be credited when their contents are, for example, featured in non-academic publications like newspapers and magazines, or mentioned by scholars on social media platforms – like LinkedIn and Twitter? As of now, any noteworthy references outside the norm remain a nonfactor when calculating a journal's impact factor.
Only indexed journals receive IFs. This distinction negates the importance of output from journals that aren't indexed by databases like Scopus or Web of Science.
There’s this preconceived notion that non-indexed journals are inferior to indexed journals, but "non-indexed" doesn’t exclusively refer to suspect, predatory publications. Inclusion in an index isn't automatic, as journals need to apply and be approved before joining any database.
Brand-new journals, for example, may not have enough published content under their belts to qualify for high-end indices. Other journals cover disciplines that don't even qualify for indices – like Arts & Humanities. Either way, plenty of promising candidates have been left on the outside looking in, only to be mislabeled as just another "predatory" operation.
Indirect Influence
What if a researcher reads an article, experiences some sort of epiphany, and publishes their own study directly inspired by the original source material?
Here’s an example of why many scholars argue that “impact” isn't strictly tangible. Indirect influence cannot be cited, at least not in a traditional sense, but it can greatly impact the industry. This oversight leaves output, as well as entire journals, susceptible to being criminally undervalued due to IF limitations.
Niche Content Coverage
When a research area is just emerging, it can take several years for its content to disseminate, especially if the findings have a narrow scope and only a handful of scholars are currently researching the niche subject in question. In that case, it may take even longer to attract interest and garner citation impact.
Language Barriers
It's no secret that top indices are partial to English language journals. After all, English is the unofficial, yet universal, standard for all things scientific. The Atlantic researched this phenomenon in-depth, confirming that "The top 50 journals are published in English and originate from either the U.S. or the U.K." This has led to an industry precedent – seemingly, only research composed in English is viable. Such widespread standards have kept foreign language journals, as well as foreign language research, from appearing in top indices.
No IFs, Ands, or Buts
Alternative metrics do exist—SCImago, Eigenfactor, Google Scholar—but none vastly exceed impact factor, at least not in the court of public opinion. For all its shortcomings, IF remains the measuring stick for industry-wide journal quality, and it’s difficult to foresee another metric usurping it anytime soon.