Scott Jaschik | Inside Higher Ed
March 21, 2011
Within hours of the release of the National Research Council's ratings of doctoral programs last year, individual departments and universities were stating objections to the methodology, the data or both.
Six months later, the American Sociological Association has released one of the most detailed analyses of the ratings -- and the panel of sociologists was unsparing in its critique. It found fault with the methodology, the way it was executed and the broad approach used by the NRC. The sociology panel's report, released with the endorsement of the council of the association, urges sociologists to take actions "discouraging faculty, students, and university administrators from using the core 2010 NRC rankings to evaluate sociology programs," and "encouraging them to be suspicious of the raw data accompanying the 2010 NRC report." Many of the criticisms are grounded in particular qualities of sociology, but may apply to other fields as well.
The NRC rankings were released in September, years later than called for in the original plan and with an unusual methodology in which two separate sets of rankings were released and individual programs received ranges rather than a single ordinal rank. The sociology panel found numerous flaws in the way the project was run.
* Books were not counted in evaluating faculty productivity or quality for social science disciplines. The NRC included books in its humanities evaluations, but not in other fields, noting the norm in fields such as economics of focusing on journal articles. The sociologists' report says that books play an important role in the discipline's scholarship and should not have been excluded. "The decision to exclude books understates the productivity of departments in which faculty tend to work on qualitative, historical, and theoretical research, and on research aimed at a broad or interdisciplinary audience," the report says.
* Analysis of faculty research also excluded book citations. "Many of the most important and influential works in the history of sociology and in recent sociology have been books. Examples are Theda Skocpol’s States and Social Revolutions and William Julius Wilson’s The Truly Disadvantaged, citations to which would be excluded in the assessments of their departments," the report says.
* Calculations of journal articles did not take into account whether the publication was well-regarded or obscure. "By disregarding differences in journal quality, the rankings penalize departments in which faculty tend to publish fewer papers of higher quality in stronger journals. Many scholars would consider the NRC procedure a perverse incentive to produce larger quantities of less important scholarship."
* No distinction is made between single-author and co-authored papers. Sociology is a field in which some subfields tend toward single- or two-author papers while others publish with large groups of colleagues. The NRC formula, the report notes, counts one paper with six authors from the same department as six articles, favoring departments with a tradition of extensive co-authorship.
* Admissions quality was judged by quantitative GRE scores alone. For many sociology departments, verbal scores are as significant a measure, the report says. It explains: "This decision privileges sociology departments with quantitative oriented programs over departments with theoretical, historical, ethnographic, or other nonquantitative orientations."
Beyond these issues (most of which the sociology report says could be fixed), there are additional problems posed by the nature of sociology today in ranking departments in their entirety, as the NRC does. One issue is that there are subfields -- and the report notes that U.S. News & World Report ranks sociology in seven subfields. (Given that the sociology report makes clear elsewhere that the authors don't hold the U.S. News rankings in particularly high regard, comparing them favorably to the NRC's on one measure would appear to be a notable dig at the NRC.) And even subfields don't capture all of the issues, the report says, since there are also significant methodological differences among departments.
"For example, a department can excel in urban sociology by having a largely ethnographic or a largely quantitative program," the report says.
While offering numerous examples of flaws in the rankings, the report says that they are already having an influence -- one that could grow in dangerous ways.
"In the short term, the NRC ranking system can have detrimental effects on departments whose rankings are lower than their actual quality. University administrators seem to have taken the results seriously," the report says. It notes that universities have rushed to issue press releases about highly ranked departments, and that an "attitude" of respecting the NRC review "may result in short-term punitive actions by administrators against departments whose rankings, unfairly or not, are low."
Further, the report says its authors fear that some universities may attempt to alter departments to score better -- regardless of the flaws in the methodology. "Departments may also have reason to worry that, in their next review, administrators may direct them to change their programs to improve their status in future NRC rankings, for example, by hiring faculty who write articles, rather than books."
Jeremiah P. Ostriker, chair of the NRC committee that prepared the rankings, and a professor of astronomy and former provost at Princeton University, said in an interview that there is "a great deal of validity in many of the points" raised in the report. Any rankings system, Ostriker said, "will seem and be unfair to some group or another."
On the question of including books in calculations of faculty productivity, Ostriker said that the NRC committee included social scientists, and that the issue never came up. He defended, however, the decision not to evaluate the quality of journals in which articles appeared. He said that by measuring both number of articles and their citations, the methodology reflected the reality that some journals are more influential than others. "A paper in Nature is much more cited than one in some out-of-the-way journal," he said.
Ostriker also said that, based on modeling done by the committee prior to adopting the final methodology, the NRC found that many tweaks don't change the actual rankings in significant ways.
The tone of the sociologists' report did not surprise him, Ostriker said. "When our study first came out, it was greeted with shock and horror because it wasn't what people expected and they were very, very nervous," he said. As time has passed, "the tone has changed," and he thinks the same will be true of the sociologists' criticisms.
Still, Ostriker stressed that he believed the most important part of the process was assembling the data, not the methodology. He noted that groups could use the data for their own rankings or on a program-by-program basis. "What's most important is finding ways that the data can be used and made helpful to universities," he said.