Ranking not the safest path to the best programme

For a student, consulting rankings will not necessarily lead to, say, the country’s best programme in medicine: in several cases, the same programmes are ranked differently depending on who did the ranking. A thesis from Linköping University demonstrates this.

Quality at universities and colleges is reviewed, assessed, and ranked the world over. The reviews are done by both Swedish and international entities, often by several at the same time. 

“My research shows that the ranking systems are not reliable. A programme can rank high in one system, and low in another,” says Brita Bergseth, whose licentiate thesis discusses quality measurement and ranking of universities and colleges, with a focus on educational programmes within medicine and nursing.

The thesis compares the results from three models for quality assessment at Swedish higher education institutions. For example, she compares two reviews: the ranking U-Rank, produced by an independent association, and the evaluation by the Swedish Council for Higher Education. The results of the two reviews show only a haphazard connection, and in several cases the differences are significant. Among the 24 higher education institutions reviewed, the rankings could differ by up to 16 places; the assessments agreed in only one case.

The differences arise because the two rankings are built on different systems. The Council evaluates degree programmes on a four-step quality scale, while U-Rank is built on statistics from organisations such as the Council and Statistics Sweden. Common to both bodies, however, is that they base their assessments on the Swedish Higher Education Act and use the same overall definition of quality in higher education.

“But views differ on which factors are regarded as constituting quality: for example, whether to look at teacher competence across the institution as a whole or within the programme, or whether to look at the connection between research and instruction. Likewise, the choice of data and measurement methods varies. The purpose of the rankings is to provide guidance to future students, but I think there’s a risk in claiming that they measure the quality of the programmes,” Ms Bergseth says.

So far, no internationally accepted instrument for measuring quality in higher education has been developed. Despite this, ‘league tables’ have had an impact, both as sources of information and as governing instruments in the development of universities and colleges. Ms Bergseth’s research highlights the difficulty of finding valid, reliable, and relevant models for quality assessment and ranking, and she calls for continued methodological development in order to establish more robust review models.

Thesis: Vägledande eller vilseledande? Kvalitetsmätning och ranking av universitet och högskolor (“Guiding or misleading? Quality measurement and ranking of universities and colleges”)


Published 2015-11-03