This ranking system employs bibliometric methods to analyze and rank the scientific paper performance of the top 500 universities in the world. The selection of universities is based on information obtained from the Essential Science Indicators (ESI). From the more than 4,000 research institutions listed in ESI, the ranking system first selected the top 700 institutions by number of published journal articles and number of citations. Non-university institutions were removed from the list, and the remaining universities were then compared with those included in other ranking programs (such as ARWU, THE, QS, and U.S. News). In total, 828 universities were selected as assessment targets for the HEEACT ranking system. The data for assessing university performance are drawn from ISI's ESI, from Web of Science (WOS), which includes SCI and SSCI, and from Journal Citation Reports (JCR).
The concept of authority control is employed to retrieve data indexed under different forms of a university's name in the aforementioned databases, i.e., the official name, abbreviations, and other possible forms of the name. This ranking system also accounts for mergers and divisions of universities (or of branch campuses within a university system) and includes the publications of a university's affiliated institutions, such as research centers and university hospitals. This effort ensures the accuracy of each university's count of published journal articles and of the subsequent citation statistics.
Some branch campuses within a particular university system are commonly recognized as individual institutions, yet they are indexed only under the name of the university system in ESI. For example, the University of Illinois at Urbana-Champaign, the University of Illinois at Chicago, and the University of Illinois at Springfield are not differentiated in ESI (all are indexed under “University of Illinois”) even though they are often regarded as three separate universities. This ranking system corrects the flaw by manually searching SCI/SSCI to identify the actual number of articles, and their citations, produced by each branch campus. The same manual searching procedures are applied to the measurement of highly cited papers, so that the results fairly represent the research performance of each individual campus.
The 2011 performance measurement is composed of eight indicators, which together represent three criteria of scientific paper performance: research productivity, research impact, and research excellence. Table 1 lists the indicators and their respective weightings.
Table 1: The Criteria, Indicators, and Their Respective Weightings Used for the Overall Performance-Based Ranking

| Criteria | 2011 Overall Performance Indicators |
| --- | --- |
| Research productivity | Number of articles of the last 11 years (2000-2010) |
| | Number of articles of the current year (2010) |
| Research impact | Number of citations of the last 11 years (2000-2010) |
| | Number of citations of the last 2 years (2009-2010) |
| | Average number of citations of the last 11 years (2000-2010) |
| Research excellence | h-index of the last 2 years (2009-2010) |
| | Number of Highly Cited Papers (2000-2010) |
| | Number of articles of the current year in high-impact journals (2010) |
The number of articles published in peer-reviewed academic journals is frequently used to indicate the productivity of a research institution. To represent both a university's long-term and current research productivity objectively, this ranking system employs two indicators: the number of articles of the last 11 years (2000-2010) and the number of articles of the current year (2010).
“Number of articles of the last 11 years” draws data from ESI, which covers 2000-2010 statistics of articles published in journals indexed by SCI and SSCI. “Number of articles of the current year” relies on 2010 data obtained from SCI and SSCI, retrieved between January 1 and January 31, 2011.
The number of citations to a particular academic article within a specific time frame is a commonly accepted indicator of that article's impact. This ranking system considers both the long-term and the short-term impact of research and seeks to represent a university's research impact fairly, regardless of its size and faculty numbers. Thus, research impact is measured by three indicators: the number of citations of the last 11 years, the number of citations of the last 2 years, and the average number of citations of the last 11 years.
“Number of citations of the last 11 years” draws 2000-2010 citation statistics from ESI. “Number of citations of the last 2 years” draws 2009-2010 citation statistics from SCI and SSCI in WOS, which include citation statistics updated to the dates of retrieval. “Average number of citations of the last 11 years” is the number of citations in the last 11 years divided by the number of articles in the last 11 years.
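As a worked example of the third indicator, the average is simply the 11-year citation total divided by the 11-year article total; the figures below are invented, not real ESI data.

```python
# Illustrative figures only; real values come from ESI and WOS.
citations_11yr = 250_000  # citations, 2000-2010 (ESI)
articles_11yr = 20_000    # articles, 2000-2010 (ESI)

# "Average number of citations of the last 11 years":
avg_citations_11yr = citations_11yr / articles_11yr
print(avg_citations_11yr)  # 12.5
```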
This ranking system assesses each university's research excellence by the following indicators: the h-index of the last 2 years, the number of Highly Cited Papers from ESI, and the number of articles of the current year in high-impact journals. “h-index of the last 2 years” measures both the quantity and quality of a university's research, using 2009-2010 data from SCI and SSCI. Following Hirsch's (2005) definition, a university has index h if h of its Np papers in the last two years have at least h citations each and the other (Np − h) papers have no more than h citations each.
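Hirsch's definition lends itself to a short computation; the sketch below, with an invented citation list, shows one common way to compute h from a set of papers' citation counts.

```python
# A minimal sketch of the h-index computation, assuming citation
# counts per paper are already available; the list below is invented.

def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Six papers: three papers have at least 3 citations each, but there
# are not four papers with at least 4 citations each, so h = 3.
print(h_index([25, 8, 5, 3, 3, 1]))  # → 3
```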
“Number of Highly Cited Papers” utilizes data from ESI, which include statistics on Highly Cited Papers from 2000 to 2010. ESI defines a Highly Cited Paper as an SCI/SSCI-indexed paper whose citation count ranks in the top 1% of all papers indexed in the same year, within the last 11 years.
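The top-1% idea can be illustrated with a small sketch; the per-year selection below is a simplification of ESI's actual thresholding procedure and uses invented data.

```python
import math

# Simplified sketch: within each publication year, keep the papers
# whose citation counts fall in the top 1% of that year's papers.
# (ESI's real procedure uses precomputed citation thresholds; the
# data and the ceil-based cutoff here are illustrative assumptions.)

def highly_cited(papers_by_year: dict[int, list[int]]) -> dict[int, list[int]]:
    """Map year -> the top-1% citation counts among that year's papers."""
    top = {}
    for year, cites in papers_by_year.items():
        k = max(1, math.ceil(len(cites) * 0.01))
        top[year] = sorted(cites, reverse=True)[:k]
    return top

# 200 papers published in 2005 -> the 2 most cited count as Highly Cited.
print(highly_cited({2005: list(range(200))})[2005])  # → [199, 198]
```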
“Number of articles of the current year in high-impact journals” employs data from JCR, which supplies the impact factor of each journal in its subject field. The impact factor of a journal is the number of citations in a given year to the papers that journal published in the previous two years, divided by the number of papers it published in those two years. A higher impact factor means a journal's articles are more frequently cited by other journals, suggesting higher scholarly value. This ranking system defines high-impact journals as those whose impact factors rank in the top 5% of all journals within a specific subject category. With high-impact journal lists derived from JCR, this ranking system counts each university's articles published in high-impact journals by subject.
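Both the impact-factor arithmetic and the top-5% cutoff can be sketched in a few lines; the journal names, figures, and the rounding rule for the cutoff below are assumptions made for illustration, not JCR's exact procedure.

```python
# Sketch of the impact-factor arithmetic and the top-5% cutoff.
# Journal names, figures, and the rounding rule are illustrative
# assumptions, not JCR's exact procedure.

def impact_factor(citations_to_prev_2yr: int, papers_prev_2yr: int) -> float:
    """Citations to the previous two years' papers / papers in those years."""
    return citations_to_prev_2yr / papers_prev_2yr

def high_impact(journals_by_if: dict[str, float]) -> list[str]:
    """Journals whose impact factors rank in the top 5% of the category."""
    k = max(1, round(len(journals_by_if) * 0.05))
    ranked = sorted(journals_by_if, key=journals_by_if.get, reverse=True)
    return ranked[:k]

# 500 citations to 200 papers from the previous two years:
print(impact_factor(500, 200))  # → 2.5

# A category of 40 journals -> the 2 with the highest impact factors.
category = {f"Journal {i}": float(i) for i in range(40)}
print(high_impact(category))  # → ['Journal 39', 'Journal 38']
```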
Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102(46), 16569–16572.
Score Calculation and Sorting
The procedures for data processing are as follows: First, the project staff conducted authority control on the various forms of a university's name and inspected all the SCI/SSCI bibliographic records whose address field contained one of those forms. An accurate count of the university's total articles was obtained after removing duplicate records containing different forms of the name. Second, using SCI/SSCI, this ranking system obtained the total number of citations by summing the citations of each of that university's articles, counted from the article's inclusion in SCI/SSCI to the date of retrieval.
Based on these measurement procedures, the ranking system calculated a university's score for each of the eight indicators. For each indicator, the university with the highest number received the maximum score of 100; every other university's number was divided by the highest number and scaled proportionally. For example, if University A had the highest number M for Indicator Y, it received 100 for that indicator, while University B with a number of N received N/M × 100. Finally, the ranking system calculated each university's final score by applying the indicator weightings presented in Table 1, and sorted the universities by their final scores; universities with the same score were sorted alphabetically. It should be noted that many universities obtained similar scores, and slight differences in final scores must be interpreted carefully: a marginally higher score does not necessarily indicate superiority in scientific research, because the two universities occupy essentially the same position in the ranking.
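The normalization and weighting just described can be sketched as follows; the two universities, their raw numbers, and the two example weights are invented (the real system uses all eight indicators with the weightings of Table 1).

```python
# Sketch of the scoring scheme described above. The two universities,
# their raw numbers, and the two example weights are invented; the
# real system uses all eight indicators with the Table 1 weightings.

raw = {
    "Univ A": {"articles_11yr": 50_000, "h_index_2yr": 120},
    "Univ B": {"articles_11yr": 25_000, "h_index_2yr": 90},
}
weights = {"articles_11yr": 0.10, "h_index_2yr": 0.20}  # assumed weights

def final_ranking(raw, weights):
    """Normalize each indicator so the best university gets 100,
    apply the weights, and sort by score (ties alphabetically)."""
    best = {ind: max(u[ind] for u in raw.values()) for ind in weights}
    finals = {
        uni: sum(vals[ind] / best[ind] * 100 * w for ind, w in weights.items())
        for uni, vals in raw.items()
    }
    return sorted(finals.items(), key=lambda kv: (-kv[1], kv[0]))

for uni, score in final_ranking(raw, weights):
    print(uni, round(score, 1))
```

Univ A leads on both indicators, so it receives the full weighted points (here 30.0 of the 0.30 total weight used), while Univ B's numbers are scaled against Univ A's.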