In the past few years, the field of science policy has placed an increasing emphasis on "societal value and value for money, performance-based funding and on globalization of academic research." There has also been a growing need for internal research assessment and for broad research information systems. Alongside this shift toward greater reliance on research indicators, the computerization of the research process and the move to social media for academic communication have led research assessment to depend heavily on data metrics. These include citation indexes, electronic databases, publication repositories, usage analytics from publishers' sites, and other metrics such as Google Analytics (Moed, 2017).
According to ASPIRE, a research performance scheme, four indicator categories go into measuring the activity and quality of one's research (Edith Cowan University, 2016):

- Research income: measures the monetary value of grants and research income, whether private or public.
- Research training: measures research activity by the number of supervised doctoral completions and supervised master's research completions, and measures quality by timely completions (four years or less for doctorates, two years or less for master's research degrees).
- Research outputs: measures research activity by the number of publications (books, book chapters, journal articles, conference papers) and the number of creative works, such as live performances, exhibitions, and events. Quality is measured by the number of outputs in ERA-listed citation databases and by high-quality creative works identified through an internal peer review process.
- Research engagement: measures research activity by the number of accepted invention disclosures, granted patents, and commercialization income. Quality is measured by externally co-authored or co-created outputs and external collaboration grants.
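As a rough illustration only, the four ASPIRE categories could be modeled as a simple data structure. The field and metric names below are paraphrased for the sketch and are not part of the official scheme:

```python
# Hypothetical sketch: modeling ASPIRE-style indicator categories.
# Field and metric names are illustrative, not official ASPIRE terminology.
from dataclasses import dataclass, field


@dataclass
class IndicatorCategory:
    name: str
    activity_metrics: list[str] = field(default_factory=list)
    quality_metrics: list[str] = field(default_factory=list)


aspire_categories = [
    IndicatorCategory(
        "Research income",
        activity_metrics=["grant and research income (private or public)"],
    ),
    IndicatorCategory(
        "Research training",
        activity_metrics=["supervised doctoral completions",
                          "supervised master's research completions"],
        quality_metrics=["timely doctoral completions (4 years or less)",
                         "timely master's completions (2 years or less)"],
    ),
    IndicatorCategory(
        "Research outputs",
        activity_metrics=["publications", "creative works"],
        quality_metrics=["outputs in ERA-listed citation databases",
                         "peer-reviewed high-quality creative works"],
    ),
    IndicatorCategory(
        "Research engagement",
        activity_metrics=["accepted invention disclosures", "granted patents",
                          "commercialization income"],
        quality_metrics=["external co-authored or co-created outputs",
                         "external collaboration grants"],
    ),
]

# Each category pairs activity metrics with quality metrics.
for cat in aspire_categories:
    print(cat.name, len(cat.activity_metrics), len(cat.quality_metrics))
```

Separating activity metrics from quality metrics mirrors the scheme's own distinction: volume of research done versus how well it was done.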
Due to the increasing use of such indicators, many different measures have been developed to better understand research impact. For example, one approach is to measure outcomes, such as dollars saved, lives saved, and crashes avoided, and combine them with other outputs. This is the method used in the Research Performance Measures System (National Academies of Sciences, Engineering, and Medicine, 2008).
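To make the idea of combining outcome measures concrete, here is a minimal sketch. The outcome values and weights are invented for illustration and are not taken from the Research Performance Measures System itself:

```python
# Illustrative only: a weighted composite of outcome measures.
# The figures and dollar-equivalent weights below are hypothetical,
# not values from the actual Research Performance Measures System.

outcomes = {
    "dollars_saved": 1_200_000,   # direct savings attributed to the research
    "lives_saved": 3,             # estimated lives saved
    "crashes_avoided": 45,        # estimated crashes avoided
}

# Hypothetical weights that convert each outcome to a common dollar scale.
weights = {
    "dollars_saved": 1.0,
    "lives_saved": 9_000_000.0,
    "crashes_avoided": 50_000.0,
}

# Composite value: the weighted sum of all outcomes on one scale.
composite = sum(outcomes[k] * weights[k] for k in outcomes)
print(f"Composite outcome value: ${composite:,.0f}")
```

The point of the sketch is simply that heterogeneous outcomes must be put on a common scale before they can be combined, and the choice of weights is itself a judgment call.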
When deciding which measurement system to adopt, there are a few things to consider to make sure you are not depending on the wrong indicators. There is now more criticism of how indicators are used in research assessment, since indicators may be biased and fall short of measuring what they are expected to measure. Most studies also have a limited time horizon, which can make some indicators unreliable. In addition, there have been discussions about how indicators may be manipulated and how their perceived societal impact can be flawed. That is why many believe that using indicators on their own at the level of the individual, and making determinations based on them, results in faulty measurements. A valid and fair assessment of an individual's research can only be done properly with sufficient background knowledge of the particular role that researcher played in their publications, while also taking into account other types of information affecting their performance (Moed, 2017).
Although contributing to scientific-scholarly progress is a notion with a long history, it is argued that this kind of impact can only be measured in the long term. This is why some current indicators of scientific-scholarly contribution may reflect less the actual contribution and more "attention, visibility, and short-term impact." Societal value, in turn, is almost impossible to assess in a politically neutral manner, because it is usually measured within a particular policy domain (Moed, 2017).
However, the importance of indicators should not be underestimated because of these factors. Instead, pay attention to how these influences may shape your perception and try to eliminate any biases. For one, an assumption underlying the use of indicators in assessing academic research has been that it is not the "potential influence or importance of research but the actual influence or impact" that matters to policymakers and researchers. Another bias you can correct for is treating citations as indicators of a work's importance, when they may equally reflect effective communication strategies. Viewing citations in this way discourages their use as a major indicator of importance (Moed, 2017).
That said, do not let the shortcomings of the research indicator system stop you from reaping its best qualities. There are many ways to use these indicators to inform your decisions and improve your outcomes. For example, you can use citations as an indicator of your social media presence and of whether you are effectively getting your work out into the world. Although how accurate indicators are remains a hotly contested issue, balancing your trust in indicators with some skepticism will lead to better outcomes in the future. The only way to fully understand these biases and calibrate your expectations is to examine different systems and dig into studies of the specific indicators involved.
Since indicators depend on a wide variety of outcomes, one way to improve those outcomes is by maximizing your work's impact and visibility. You can find tips on this process in eContent Pro's blogs, which can help raise your visibility and strengthen your impact. Also, since indicators rely on citations and similar measures, you may want to look at eContent Pro's publishing services to see what works best for you, so that your work reaches the right audience and becomes an important piece of scholarship in your academic field. Whether you represent libraries and open access organizations, university presses and commercial publishing houses, or individual academics and researchers, eContent Pro's services will ensure that your work gets the visibility and recognition it deserves.
As publishing is an important pillar of building a standout research profile, it is paramount to make use of available author services, many of which can be found on eContent Pro's website. Services such as English Language Copyediting, Scientific and Scholarly Editing, and Journal Selection will greatly improve your chances of getting your manuscript accepted by a journal. You will find eContent Pro's author services to be high quality, turned around exceptionally quickly, and affordably priced. Here is what our satisfied customers are saying about eContent Pro's Author Services:
“The feedback on the technical content was beyond my expectations as it included additional reference suggestions and questions provoking further thought. The chapter has improved as a result.”
Dr. Sheron Burns, University of the West Indies, Jamaica
“This is not my first experience with eContent Pro. I was extremely pleased with the communication between me and office staff/professionals.”
Dr. Theresa Canada, Western Connecticut State University, USA
- Edith Cowan University. (2016). Research performance analytics. https://intranet.ecu.edu.au/__data/assets/pdf_file/0005/720374/Research-Performance-Analytics.pdf
- Moed, H. (2017). How can we use research performance indicators in an informed and responsible manner? The Bibliomagician. https://thebibliomagician.wordpress.com/2017/11/03/how-can-we-use-research-performance-indicators-in-an-informed-and-responsible-manner-guest-post-by-henk-moed/
- National Academies of Sciences, Engineering, and Medicine. (2008). Performance measurement tool box and reporting system for research programs and projects. The National Academies Press. https://nap.nationalacademies.org/read/23093/chapter/5