
Covering annual college rankings: Reporting tips

A 2015 tip sheet on covering college-ranking systems, plus a collection of research exploring issues such as how the publication of annual college rankings affects student behavior and university spending.

Each fall, education reporters nationwide prepare for the release of U.S. News & World Report’s newest college rankings. While other organizations have started developing their own systems for ranking American colleges and universities, the U.S. News reports generally are the most widely recognized — and anticipated. For decades, the publication has distributed its lists of “Best Colleges” to help parents and students compare schools in areas such as graduation rates, class sizes, test scores and faculty quality.

As reporters plan their coverage of U.S. News’ latest rankings – or coverage of any college ranking system – they should keep in mind that people want to know not only what the new report says but also what it means. For example, if a local university did not make the Top 100, does that mean it’s not good? If a school is named among the Top 25, will its graduates get better jobs? How much weight should students give this information as they debate where to send their applications? Should parents listen to critics who say college rankings should be ignored? Do university leaders think their campuses were adequately assessed? Answering these sorts of questions is critical when writing about this popular topic.

It’s also important to help audiences make sense of the various efforts to evaluate institutions. It might be puzzling to learn that U.S. News picked Princeton University as the top national university in late 2014 while Pomona College was No. 1 in Forbes magazine’s “America’s Top Colleges” list in 2015. Neither school was among the Top 5 featured in the “2014-15 World University Rankings” from Times Higher Education, a London-based publication. Without explanation, national rankings and ratings from the Princeton Review, PayScale, Washington Monthly, Niche and other organizations could add to the confusion.

Journalists who want help explaining the evolution of this trend, its benefits and consequences should look to academic researchers as key sources. Insights gained by scholars, who continue to study this topic, can add considerable depth and context to a reporter’s work. Such research also can provide reporters with information that local college officials may not be willing to talk about or share. Some higher-education administrators, for instance, might be hesitant to acknowledge how much college rankings matter to them and whether these reports affect how money is spent on campus.

Some studies to consider:

 

“Modeling Change and Variation in U.S. News & World Report College Rankings: What Would It Really Take to be in the Top 20?”
Gnolek, Shari L.; Falciano, Vincenzo T.; Kuncl, Ralph W. Research in Higher Education, December 2014, Vol. 55. doi: 10.1007/s11162-014-9336-9.

Abstract: “University administrators may invest significant time and resources with the goal of improving their U.S. News & World Report ranking, but the real impact of these investments is not well known since, as other universities make similar changes, rankings become a moving target. This research removes the mystique of the U.S. News ranking process by producing a ranking model that faithfully recreates U.S. News outcomes and quantifies the inherent ‘noise’ in the rankings for all nationally ranked universities. The model developed can be a valuable tool to institutional researchers and university leaders by providing detailed insight into the U.S. News ranking process. It allows the impact of changes to U.S. News sub-factors to be studied when variation between universities and within sub-factors is present. Numerous simulations were run using this model to understand the effect of each sub-factor individually and to determine the amount of change that would be required for a university to improve its rank or move into the Top 20. Results show that for a university ranked in the mid-30s it would take a significant amount of additional resources, directed in a very focused way, to become a top-ranked national university, and that rank changes of up to ± 4 points should be considered ‘noise.’ These results can serve as a basis for frank discussions within a university about the likelihood of significant changes in rank and provide valuable insight when formulating strategic goals.”
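
The “noise” finding is easier to picture with a toy simulation. The sketch below, in Python, is not the authors’ model and does not use U.S. News’ actual sub-factors or weights; the number of schools, the score scale and the noise level are arbitrary assumptions chosen only to show how measurement noise alone can shift a school a few places in a published ordering.

    import numpy as np

    # Purely illustrative: 100 hypothetical universities with fixed "true"
    # composite scores, plus small year-to-year noise in the observed score.
    # The score scale and noise level are made-up numbers, not U.S. News data.
    rng = np.random.default_rng(0)
    n_schools, n_years = 100, 1000
    true_scores = np.sort(rng.normal(70, 10, n_schools))[::-1]  # index 0 = strongest

    rank_paths = np.empty((n_years, n_schools), dtype=int)
    for year in range(n_years):
        observed = true_scores + rng.normal(0, 1.5, n_schools)  # noisy composite
        # Convert observed scores to ranks (1 = highest observed score)
        rank_paths[year] = observed.argsort()[::-1].argsort() + 1

    school = 34  # the school with the 35th-highest true score
    print("Published rank ranged from", rank_paths[:, school].min(),
          "to", rank_paths[:, school].max(), "across", n_years,
          "simulated years, even though its underlying score never changed.")

Even in this stripped-down setup, a mid-table school’s published rank typically drifts a few positions from year to year although nothing about the school has changed, which is the intuition behind treating small rank movements as noise.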

 

“True for Your School? How Changing Reputations Alter Demand for Selective U.S. Colleges”
Alter, Molly; Reback, Randall. Educational Evaluation and Policy Analysis, 2014. doi: 10.3102/0162373713517934.

Abstract: “There is comprehensive literature documenting how colleges’ tuition, financial aid packages, and academic reputations influence students’ application and enrollment decisions. Far less is known about how quality-of-life reputations and peer institutions’ reputations affect these decisions. This article investigates these issues using data from two prominent college guidebook series to measure changes in reputations. We use information published annually by the Princeton Review — the bestselling college guidebook that formally categorizes colleges based on both academic and quality-of-life indicators — and the U.S. News and World Report — the most famous rankings of U.S. undergraduate programs. Our findings suggest that changes in academic and quality-of-life reputations affect the number of applications received by a college and the academic competitiveness and geographic diversity of the ensuing incoming freshman class. Colleges receive fewer applications when peer universities earn high academic ratings. However, unfavorable quality-of-life ratings for peers are followed by decreases in the college’s own application pool and the academic competitiveness of its incoming class. This suggests that potential applicants often begin their search process by shopping for groups of colleges where non-pecuniary benefits may be relatively high.”

 

“Control by Numbers: New Managerialism and Ranking in Higher Education”
Lynch, Kathleen. Critical Studies in Education, 2015, Vol. 56. doi: 10.1080/17508487.2014.949811.

Abstract: “This paper analyses the role of rankings as an instrument of new managerialism. It shows how rankings are reconstituting the purpose of universities, the role of academics and the definition of what it is to be a student. The paper opens by examining the forces that have facilitated the emergence of the ranking industry and the ideologies underpinning the so-called ‘global’ university rankings. It demonstrates how rankings are part of a politically inspired, performativity-led mode of governance, designed to ensure that universities are aligned with market values through systems of intensive auditing. It interrogates how the seemingly objective character of rankings, in particular the use of numbers, creates a facade of certainty that makes them relatively unassailable: numerical ordering gives the impression that what is of value in education can be measured numerically, hierarchically ordered and incontrovertibly judged. The simplicity and accessibility of numerical rankings deflects attention from their arbitrariness and their political and moral objectives.”

 

“Framing the University Ranking Game: Actors, Motivations, and Actions”
Dearden, James A.; Grewal, Rajdeep; Lilien, Gary L. Ethics in Science and Environmental Politics, 2014, Vol. 13. doi: 10.3354/esep00138.

Abstract: “Any formulation of the university ranking game involves the perspectives of the 3 key actors: (1) graduating high-school students, (2) universities, and (3) ranking publications. These university rankings are developed and maintained by for-profit publications or magazines, which must balance 2 potentially conflicting motives: (1) to provide students with information to help them decide which university to attend and (2) to increase the revenues of the publication. The actions of the students involve their decision on which universities to apply to and which university to attend among those they are admitted to. The universities seek to attract the best students and seek to improve their ranking to do so. We frame these diverse motives and the ensuing actions of these 3 sets of actors as the university ranking game and discuss the potential inefficiencies in the game and the possibility for unethical behavior by publications and universities.”

 

“University Rankings in Critical Perspective”
Pusser, Brian; Marginson, Simon. Journal of Higher Education, July/August 2013, Vol. 84.

Abstract: “This article addresses global postsecondary ranking systems by using critical-theoretical perspectives on power. This research suggests rankings are at once a useful lens for studying power in higher education and an important instrument for the exercise of power in service of dominant norms in global higher education.”

 

“The University Rankings Game”
Grewal, Rajdeep; Dearden, James A.; Lilien, Gary L. American Statistician, 2008, Vol. 62. doi: 10.1198/000313008X332124.

Abstract: “With university rankings gaining both in popularity and influence, university administrators develop strategies to improve their rankings. To better understand this competition for ranking, we present an adjacent category logit model to address the localized nature of ranking competition and include lagged rank as an independent variable to account for stickiness of ranking. Calibrating our model with data from U.S. News and World Report from 1999-2006 shows persistence in ranking and identifies important interactions among university attributes and lagged rank. The model provides (lagged) rank-specific elasticities of ranks with respect to changes in university characteristics, thereby offering insight about the effect of a university’s strategy on its rank.”
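
For readers unfamiliar with the method, an adjacent-category logit models the odds of a school landing in one rank category versus the next one over. A generic specification with a lagged-rank term, written here in illustrative notation (the paper’s own variables, controls and coefficient labels may differ), looks like:

    \log\frac{\Pr(R_{it}=j)}{\Pr(R_{it}=j+1)} = \alpha_j + \beta\,R_{i,t-1} + \mathbf{x}_{it}^{\top}\boldsymbol{\gamma}

where R_it is university i’s rank in year t, R_{i,t-1} is its rank the previous year (the “stickiness” term), and x_it collects university attributes such as test scores and graduation rates. The coefficient on lagged rank captures how strongly last year’s position predicts this year’s, holding those attributes fixed.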

 

Key words: public university, community college, liberal arts college, financial aid, college application, higher education, Ivy League
