This ranking system employs bibliometric methods to analyze and rank the scientific paper performance of the top 500 universities in the world. The selection of universities for inclusion was based on information obtained from the Essential Science Indicators (ESI). Of the more than 4,000 research institutions listed in ESI, this ranking system first selected the top 700 institutions based on their numbers of published journal articles and citations. Non-university institutions were then removed from the list, and the project staff compared the remaining universities with those included in other ranking programs such as ARWU, THE-QS, and U.S. News; this process yielded a pool of 820 universities for this ranking system. Data used to assess the performance of the universities were drawn from ISI’s ESI, from the Web of Science (WOS), which includes SCI and SSCI, and from Journal Citation Reports (JCR).
The concept of authority control was employed to retrieve data indexed under different forms of a university’s name in the aforementioned databases – i.e., the official name, abbreviated forms, and other possible variants of the name. This ranking system also accounted for mergers and splits of universities (or of different campuses within a university system) and included publications by a university’s affiliated institutions, such as research centers and university hospitals. This effort ensured the accuracy of each university’s number of published journal articles and the subsequent citation statistics.
Some university systems have several campuses. Campuses within a university system are often perceived as individual institutions, yet they are indexed in ESI only under the university system name. For example, the University of Illinois at Urbana-Champaign, the University of Illinois at Chicago, and the University of Illinois at Springfield are not differentiated in ESI (they are all indexed under “University of Illinois”) even though they are often perceived as three individual universities. This ranking system corrects this flaw by manually searching SCI/SSCI to identify the actual number of articles produced by each individual campus and the citations to those articles. Likewise, this ranking system employed the same manual searching procedures to ensure that the measurement of each university’s Highly Cited Papers fairly represents the research performance of each individual campus.
The 2010 performance measures are composed of eight indicators. The indicators together represent three different criteria of scientific paper performance: research productivity, research impact, and research excellence. Table 1 lists the indicators and shows the respective weightings for each indicator.
Table 1: The Criteria, Indicators, and Their Respective Weightings Used for the Overall Performance Based Ranking
| Criterion | 2010 Overall Performance Indicators |
|---|---|
| Research productivity | Number of articles of the last 11 years (1999-2009) |
| | Number of articles of the current year (2009) |
| Research impact | Number of citations of the last 11 years (1999-2009) |
| | Number of citations of the last 2 years (2008-2009) |
| | Average number of citations of the last 11 years (1999-2009) |
| Research excellence | h-index of the last 2 years (2008-2009) |
| | Number of Highly Cited Papers (1999-2009) |
| | Number of articles of the current year in high-impact journals (2009) |
The number of articles published in peer-reviewed academic journals is frequently used to indicate the productivity of a research institution. To objectively represent a university’s on-going and current research productivity, this ranking system employs two indicators: the number of articles of the last 11 years (1999-2009), and the number of articles of the current year (2009).
“Number of articles of the last 11 years” draws data from ESI, which include 1999-2009 statistics of articles published in journals indexed by SCI and SSCI. “Number of articles of the current year” relies on the 2009 data obtained from SCI and SSCI, which were searched between January 1 and January 31, 2010.
The number of citations to a particular academic article within a specific time frame is a commonly accepted indicator of that article’s impact. This ranking system considers both the long-term and short-term impact of a particular piece of research and seeks to provide a fairer representation of a university’s research impact regardless of its size and number of faculty. Thus, this ranking system measures research impact by: the number of citations of the last 11 years, the number of citations of the last 2 years, and the average number of citations of the last 11 years.
“Number of citations of the last 11 years” draws 1999-2009 citation statistics from ESI. “Number of citations of the last 2 years” draws 2008-2009 citation statistics from SCI and SSCI in WOS, which include citation statistics updated to the dates of retrieval. “Average number of citations of the last 11 years” is the number of citations in the last 11 years divided by the number of articles in the last 11 years.
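The average-citations indicator is a simple ratio of the two preceding totals. A minimal sketch, using made-up totals for one hypothetical university:

```python
# Hypothetical 1999-2009 totals for one university (illustrative only)
citations_11y = 120_000
articles_11y = 9_500

# Average number of citations of the last 11 years:
# total citations divided by total articles over the same period
average_citations = citations_11y / articles_11y
print(round(average_citations, 2))  # → 12.63
```

Because it is a per-article figure, this indicator is less sensitive to institutional size than the raw totals.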
This ranking system assesses each university’s research excellence by the following indicators: the h-index of the last 2 years, the number of Highly Cited Papers from ESI, and the number of articles of the current year in high-impact journals (Hi-Impact journal articles). “The h-index of the last 2 years” measures both the quantity and quality of a university’s research using the 2008-2009 data from SCI and SSCI. Following Hirsch’s (2005) definition, a university has index h if h of its Np papers published in the last two years have at least h citations each and the other (Np − h) papers have no more than h citations each.
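Hirsch’s definition can be sketched as follows; the citation counts here are hypothetical, standing in for a university’s 2008-2009 papers:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    # Sort citation counts in descending order, then find the deepest
    # position where the count is still >= its 1-based rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for a university's 2008-2009 papers
print(h_index([10, 8, 5, 4, 3]))  # → 4
print(h_index([25, 8, 5, 3, 3]))  # → 3
```

Note how the second list yields a lower h-index despite its larger total citation count: a single heavily cited paper cannot raise h on its own, which is why the indicator captures breadth as well as depth of impact.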
“Number of Highly Cited Papers” utilizes data from ESI, which include statistics of “Highly Cited Papers” from 1999 to 2009. ESI defines Highly Cited Papers as SCI /SSCI-indexed papers that are cited most (in the top 1% of the total papers indexed in the same year) within the last 11 years.
“Number of articles of the current year in high-impact journals” employs data from JCR, which supplies the impact factor of each journal in its subject field. The impact factor of a journal is the number of citations to the papers published in that journal within the previous two years divided by the number of that journal’s papers within the previous two years. A higher impact factor means a journal’s articles are more frequently cited by other journals, suggesting greater scholarly value. This ranking system defines high-impact journals as those whose impact factors rank in the top 5% of journals within a specific subject category. With high-impact journal lists derived from JCR, this ranking system is able to count each university’s articles published in high-impact journals by subject.
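The impact-factor ratio and the top-5% cutoff can be sketched as below. The journal names and impact factors are invented for illustration, and the rounding rule at the cutoff is an assumption, not something the source specifies:

```python
def impact_factor(citations_prev_two_years, papers_prev_two_years):
    """Citations to a journal's papers from the previous two years,
    divided by the number of papers it published in those two years."""
    return citations_prev_two_years / papers_prev_two_years

def high_impact_journals(journals, top_fraction=0.05):
    """Return the journals whose impact factor falls in the top 5%
    of a subject category (journals: {name: impact_factor})."""
    cutoff = max(1, round(len(journals) * top_fraction))  # assumed rounding rule
    ranked = sorted(journals, key=journals.get, reverse=True)
    return set(ranked[:cutoff])

# Hypothetical 40-journal subject category with made-up impact factors
category = {f"Journal {i}": 0.5 + 0.25 * i for i in range(40)}
print(sorted(high_impact_journals(category)))  # top 5% of 40 → 2 journals
```

A university’s indicator value is then the count of its current-year articles appearing in any journal on these per-subject lists.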
Score Calculation and Sorting
The procedures for data processing are as follows: First, the project staff conducted authority control on the various forms of a university name and inspected all the SCI/SSCI bibliographic records in which the address field contained one of the forms of the university name. An accurate number of the total articles from a university was obtained after removing duplicate records containing different forms of that university’s name. Second, using SCI/SSCI, this ranking system obtained the total number of citations by adding the number of citations of each article from that university, starting from the article’s inclusion in SCI/SSCI to the date of our retrieval.
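The deduplication step above can be sketched as follows. The name variants, record identifiers, and address strings are all hypothetical; real SCI/SSCI records are more complex:

```python
# Hypothetical name variants for one university, as they might appear
# in the address field of SCI/SSCI bibliographic records
NAME_VARIANTS = {"Univ Illinois", "University of Illinois", "U Illinois"}

def count_unique_articles(records, variants):
    """Count distinct articles whose address field contains any name
    variant, deduplicating by a unique record identifier."""
    seen = set()
    for record in records:
        if any(v in record["address"] for v in variants):
            # The same article retrieved under two name variants counts once
            seen.add(record["id"])
    return len(seen)

records = [
    {"id": "WOS:1", "address": "Univ Illinois, Urbana, IL USA"},
    {"id": "WOS:1", "address": "University of Illinois, Urbana, IL USA"},  # duplicate
    {"id": "WOS:2", "address": "U Illinois, Chicago, IL USA"},
]
print(count_unique_articles(records, NAME_VARIANTS))  # → 2
```

Without the deduplication by record identifier, the first article would be counted twice, once per name variant, inflating the university’s total.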
Based on these measurement procedures, this ranking system calculated a university’s score for each of the eight indicators. For each indicator, the university with the highest number received the maximum points (100); each other university’s number was divided by the highest number and multiplied by 100 to obtain its score. For example, if University A had the highest number M for Indicator Y, it received 100 for that indicator, while University B with a number of N received (N/M×100). Finally, the ranking system calculated the final score of each university by applying the indicator weightings presented in Table 1 and sorted the universities by their final scores. Universities with the same score were sorted alphabetically. It should be noted that many universities obtained similar scores, and slight differences in final scores must be interpreted carefully: when two universities sit in such close proximity in the ranking, a marginally higher score does not necessarily indicate superiority in scientific research.
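The normalization and weighted aggregation can be sketched as follows. The raw numbers and the weights here are invented for illustration and are not the weightings of Table 1:

```python
def indicator_scores(values):
    """Scale each university's raw value so the maximum maps to 100."""
    top = max(values.values())
    return {univ: raw / top * 100 for univ, raw in values.items()}

def final_ranking(scores_by_indicator, weights):
    """Weighted sum of per-indicator scores; ties broken alphabetically."""
    finals = {}
    for indicator, scores in scores_by_indicator.items():
        for university, score in scores.items():
            finals[university] = finals.get(university, 0.0) + weights[indicator] * score
    # Sort descending by final score, then ascending by name
    return sorted(finals.items(), key=lambda item: (-item[1], item[0]))

# Hypothetical two-indicator example with made-up weights
raw = {"articles":  {"Univ A": 200, "Univ B": 100},
       "citations": {"Univ A": 500, "Univ B": 1000}}
weights = {"articles": 0.4, "citations": 0.6}
scaled = {ind: indicator_scores(vals) for ind, vals in raw.items()}
ranking = final_ranking(scaled, weights)
print(ranking)
```

In this toy example Univ B ranks first (final score 80) despite publishing fewer articles, because the more heavily weighted citation indicator dominates.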
Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102(46), 16569–16572.