Posted by: bluesyemre | January 24, 2022

20 years on, what have we learned about global rankings?

In 2019, several ‘famous’ people went to prison for criminal conspiracy to influence undergraduate admission decisions. Thirty-three parents of college applicants were accused of paying more than US$25 million between 2011 and 2018 in what became known as the Operation Varsity Blues bribery scandal.

Two years later, the former dean of Temple University’s business school, along with two co-conspirators, was convicted of fraud for falsifying data provided to US News and World Report. He faces a maximum possible sentence of 25 years in prison, followed by three years of supervised release and a US$500,000 fine.

Both events tell a tale of status-seeking behaviour – how university rankings continue to bamboozle the public, students and parents, influence university, government and investment strategies, and captivate media headlines and audiences around the world.

Launched in 2003, global rankings captured the zeitgeist of accelerating globalisation and the global battle for talent and increased policy and public focus on performance, quality and accountability.

On the eve of their 20th anniversary, the Research Handbook on University Rankings: Theory, methodology, influence and impact – in 37 chapters – provides a comprehensive review and analysis of their influence and impact.

Three themes are highlighted below.

Geopolitical reshaping of the higher education landscape

The success of rankings lies in the way they showcase international comparability between inherently diverse and unequal systems and institutions. As Brendan Cantwell of Michigan State University argues, the global higher education system is characterised by asymmetrical exchange and collaboration as well as by conflict and competition within and between countries.

Excellence initiatives aim to alter that narrative by seeking to position a few universities at the top of the global hierarchy.

China’s path is well documented. Its rise in the Academic Ranking of World Universities (ARWU) has been remarkable: from no universities in the top 100 in 2003 to seven in 2021. In comparison, the United States declined by 31%, from 58 universities in the top 100 in 2003 to 40 in 2021.

This also explains why the French celebrated when the University of Paris-Saclay was ranked 13th on the ARWU in 2021. A process of consolidation had brought together 10 faculties, four grandes écoles, the Institut des Hautes Etudes Scientifiques, two member-associated universities and shared laboratories with the main national French research organisations.

Too much focus on the top 100 ignores the more noteworthy expansion in scientific output and capacity coming from a pipeline of universities and scholars in a more diverse set of countries, as described by Simon Marginson, and by Jeongeun Kim and Michael Bastedo. This multi-polarity portrays an open and dynamic higher education and knowledge system – different from the static core-periphery model which has characterised global system theory.

Yet, it is also one in which elite universities, and their nations, seek to reinforce and extend their influence and advance their objectives through international networks, says Angel Calderon. Competition and collaboration go hand in hand.

But there are many ‘losers’. Professor Akiyoshi Yonezawa explains that the arms race for investment in world-class universities became more expensive than Japan, with its already-mature higher education system, could afford. A similar tale is told by Tara K Ising and James D Breslin of the “fallacy of status prioritisation” which nearly crippled the University of Louisville, United States, when the economic tide went out.

These differing outcomes highlight the substantial investment, underpinned by favourable policy, that success requires – alongside the built-in bias of rankings methodology, which favours older, high-performing universities, research measures and reputation. As such, they tell us almost everything we need to know about geopolitical tensions today.

The business of rankings

Increased attention on international comparability and accountability, along with open science systems and the desire for digital platforms, has fostered growing alignment between rankings, publishing and big data. This is generating a global intelligence business with huge repositories of higher education and scientific data held behind paywalls.

Hamish Coates evidences deepening integration between a small number of global publishers and online systems, including “online programme management” firms. Using Elsevier as a case study, George Chen and Leslie Chan map the development of end-to-end publishing, data analytics and research intelligence platforms, which extend the company’s visible role as a service provider as well as its invisible role in public governance.

Publishing firms intersect with rankings and sophisticated end-to-end software to accumulate and manage data, monetise and create new assets, and leverage analytics products across the entire academic knowledge production cycle – from conception to publication and distribution, and on to subsequent evaluation and reputation management.

In turn, they arguably generate perverse incentives for universities and researchers to use those very same products for competitive and strategic purposes.

Too little attention has focused on corporate integration and economic concentration between rankings, publishing and big data. Indeed, the uncritical ease with which universities and scholars provide portfolios of data is illustrated by the mountains of material submitted to the Times Higher Education Impact Rankings for assessment behind closed doors.

The recent announcement of the acquisition of Inside Higher Ed by Times Higher Education has the potential to further blur the line between independent commentator on higher education and promoter of rankings.

Questions are only beginning to be asked about data ownership, governance and regulation – in the same way such questions are being asked about big tech.

Meaningful indicators and measuring performance

One of the most – if not the most – regularly critiqued rankings issues concerns the methodology and choice of indicators. The growing number of rankings and new audiences has hastened the creation of vast data-lakes, but these tell us little about the missions and outcomes of higher education.

We still have a poor understanding of what constitutes high quality higher education or how to assess quality in teaching and learning, internationalisation, EDI (equality, diversity and inclusion), societal engagement and impact, innovation, etc. We agree higher education institutions should be more socially responsive, but we lack a common understanding of what that means – and we’re too quick to prioritise global reputation.

Academics and universities are as guilty as their governments in this regard. Take the staff-student ratio: it is readily used but, as John Zilvinskis et al and Kyle Fassett and Alexander McCormick argue, it does not correlate with teaching quality. Measuring learning gain, says Camille Howson, is a noble ambition, but there is “no simple ‘silver bullet’ metric that accurately and effectively measures student learning comparatively across subjects of study and institutional types”.

While some governments and universities remain under the influence of rankings, others are more circumspect. Rankings may be a motivator, but as Sebastian Stride et al, Andrée Sursock, and Cláudia Sarrico and Ana Godonoga argue, benchmarking and quality assurance can play more sustainable roles in shedding light on weaknesses, adopting new approaches and improving quality, governance and framework conditions.

There is too much evidence, warns Robert Kelchen, that we simply value what is measured, not what matters.

Still relevant?

All this focus on world-class excellence poses a basic question as to whether our students and graduates are better citizens and if our institutions make meaningful contributions to the well-being and sustainability of their communities.

A recent piece in The Atlantic identifies graduates of US global top 20 universities as being at the centre of Donald Trump’s coup attempt on 6 January 2021 – zealously undermining the basic values and structures of democratic society because their historic or assumed status protects them from any “significant consequences of their failures”.

After nearly 20 years of rankings, there is little evidence that they make any meaningful impact on improving quality. Nor is there any correlation between rising in the rankings and making a significant contribution to society or the public good.

Ellen Hazelkorn is a partner at BH Associates and professor emerita of the Technological University Dublin, Ireland, as well as joint editor of Policy Reviews in Higher Education. Georgiana Mihut is assistant professor in the department of education studies, University of Warwick, United Kingdom. Research Handbook on University Rankings: Theory, methodology, influence and impact is edited by Ellen Hazelkorn and Georgiana Mihut.
