There’s an ongoing argument about how well (or badly) American students are doing in developing their critical thinking capacities. (“Less Academically Adrift”: http://www.insidehighered.com/news/2013/05/20/studies-challenge-findings-academically-adrift). A lot of the argument is statistical, but there are political implications as well. Many on the political right would like nothing better than to show that American higher education is doing a miserable job. They saw in Richard Arum and Josipa Roksa’s book Academically Adrift (Chicago, 2011) http://www.press.uchicago.edu/ucp/books/book/chicago/A/bo10327226.html a chance to bash colleges and universities. That’s a distortion of the book, whose goal, I believe, was to point out ways to improve student learning, not to trash colleges and universities.
Inevitably, there has been pushback, some of it with a political agenda of its own.
The evidence comes largely from the Collegiate Learning Assessment (CLA), not a perfect assessment instrument but, I believe, the best measure available of student achievement in a cluster of capacities (not just critical thinking, but analytical and post-formal reasoning, and writing skills) that liberal education has often claimed to help students develop. So how well do students in institutions that use the CLA do in improving those capacities during their undergraduate years?
That’s where the numbers start to roll. The CLA data show that the overall gain (“effect size”) over four years is 0.73 of a Standard Deviation (SD), as reported by Roger Benjamin in “Three Principal Questions about Critical Thinking Tests” (http://cae.org/images/uploads/pdf/Three_Principal_Questions_About_Critical_Thinking_Tests.pdf). Unless you are a right-wing ideologue, that number doesn’t mean much all by itself. You ask what the highest performing institutions achieve (and how badly the poor performers do). In the most recent year the top among the CLA schools was a gain of 2.30 SD and the bottom a negative, -0.76 SD. That is a spread of over three standard deviations, a lot when you consider that the mean gain is 0.73 SD. Those numbers come from the extremes, of course, so what about the 25th percentile compared to the 75th? The spread there is about 1.5 SDs. That seems to me still a huge variation. It suggests that at many institutions there is a lot of room for improvement. No institution should be content with the overall gain of 0.73 SD when others are achieving twice that.
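The arithmetic behind that spread is simple enough to sketch. Here is a minimal illustration using only the figures quoted above; the gains are expressed in SD units, and the variable names are mine, not CLA’s:

```python
# Four-year gains in standard-deviation (SD) units, as quoted in the text.
mean_gain = 0.73      # overall gain across CLA institutions
top_gain = 2.30       # best-performing institution, most recent year
bottom_gain = -0.76   # worst-performing institution (a net decline)

# Spread between the extremes: just over three SDs.
spread = top_gain - bottom_gain
print(round(spread, 2))               # 3.06

# That spread is more than four times the mean gain itself.
print(round(spread / mean_gain, 1))   # 4.2
```

Seen that way, the institutional variation dwarfs the average gain, which is why the mean alone tells you so little.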
A more sophisticated approach to the data looks at “value added,” that is, results compared to predictions based on entering SAT scores and the like. Those figures contain a real shocker: the 25th percentile is negative (-0.56 SD). That is, at a quarter of institutions students actually slip back from the trajectory they were on when entering college.
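The “value added” idea can be sketched the same way. This is an illustrative stand-in, not CLA’s actual statistical model: only the -0.56 SD figure comes from the data above, and the actual and predicted gains below are invented for illustration.

```python
def value_added(actual_gain, predicted_gain):
    """Actual gain minus the gain predicted from entering scores (SD units).

    Positive: students gained more than their entering profile predicted.
    Negative: they gained less, i.e. fell behind their predicted trajectory.
    """
    return actual_gain - predicted_gain

# Hypothetical school at the 25th percentile of value added (-0.56 SD, per
# the text): actual gain falls 0.56 SD short of the predicted gain.
# The 0.40 and 0.96 inputs are invented for illustration.
print(round(value_added(actual_gain=0.40, predicted_gain=0.96), 2))  # -0.56
```

The point of the measure is that a school can post a positive raw gain and still subtract value, if its students would have been predicted to gain more.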
So who does well and who badly? Is the top all schools with high snob value? Big endowments? Top US News rankings? The scuttlebutt I hear alleges that the high performing institutions are not always the most prestigious ones. And among the bottom performers there may be some well-known names. Since CLA keeps the individual institutional reports confidential, there is no way to be sure, except for faculty to insist on seeing their institutional report and taking a hard look at it with one crucial question in mind: How can my institution do a better job for its students? That may involve making comparisons with peer institutions, all quiet and confidential, of course, but focused on the crucial question: What works and what doesn’t, and for whom? You’ll have to drill down and find what is happening in the various majors, and among groups of students: men, women, minorities, athletes, etc. My hunch is that at most institutions there will be success stories and problem areas.
But one thing is already clear, and agreed upon by both sides in the debate: “…students who majored in … the arts and sciences, including the humanities, foreign languages, physical and natural sciences, mathematics, and engineering did better than academic majors in applied professional fields such as health, education, and business” (Roger Benjamin, cited above).
It’s important not to stop there. Why do students in the liberal arts and sciences outperform those in other fields? What works and what doesn’t? Roger Benjamin writes, “One hypothesis is that there is more writing and analysis required of students in those fields.“ That’s what we humanists do (most of us, most of the time). But in the current climate, academic and political, humanists need not just to assert that but to do the numbers, with CLA or with other methods of assessing these and other capacities. That’s why this debate is important for everyone who cares about the classics, the humanities, and liberal education.