When I was asked to speak to you, Bob Froh suggested that you might be curious to find out why a classicist and a humanist became so interested in the systematic assessment of student learning. There are a lot of answers to that question, but the basic one is this: I realized, belatedly, that if I had known when I was teaching what I now know about student learning, I could have taught more effectively and my students could have brought their learning to a higher level. Assessment is only part of that new knowledge about student learning, but it is a very important part. So the message of this talk is, I am afraid, "Do as I say, not as I did."
Of course, we can teach better now, and students can learn better than just a few years ago. I say that in self-exculpation, but also to get us to start thinking about four propositions I'd like to discuss with you today:
Higher education is in a new ball game.
Higher education must not miss the opportunity to bring student learning to a much higher level.
There is a serious problem, deriving from the pace of change.
New know-how can surmount that problem.
The New Ball Game: That higher education is now in a new ball game is perhaps most evident in demands for greater accountability, reflected most recently in the report of the National Commission on the Future of Higher Education appointed by Mrs. Spellings of the Federal Department of Education. Such pressure, coming as it does from outside academia, has produced anxiety, resistance, and the hope that it will simply go away. It's not going away, not any time soon, and certainly not as long as increases in college costs outpace growth in median family income. That, I believe, is the biggest factor driving the demand for accountability in higher education, and there is no reason to think those increases are about to abate. As long as they continue, the worries of students and parents about financing higher education will find expression in the political process, for example in legislation or regulations imposing strict accountability measures on colleges and universities.
Those external pressures are one indicator of the new setting now confronting higher education. But there are also developments inside higher education that seem to me equally important. I want to mention three of them today:
Changing Goals
New Knowledge
Assessment
First, changing goals. Over the past few decades many colleges and universities have rethought and restated their goals for undergraduate education. Mastery of a specific field has continued—quite properly—to be a prime objective, but when it comes to general education, the emphasis has shifted—often quite dramatically—from "exposing" students to bodies of knowledge that admitted them "to the company of educated men"—as Harvard used to say—to the development of certain cognitive and personal capacities—critical thinking, analytical reasoning, clarity of written and oral expression, moral reasoning, global awareness, civic engagement, etc. There's plenty of room for continuing debate about which goals are appropriate and reachable, but the shift from "content to cognition," as it is sometimes called, is well underway.
Second, new knowledge. We know a lot more now than we did just a few years ago about how students learn and what really works in teaching and learning. Some of this is seemingly small change, but it adds up. For example, we now know that students bring into class interpretive schemata, often naïve or misleading ones, and will use them to organize new information and ideas unless other, better approaches are consciously developed. In a physics course, students may assume that objects once set into motion naturally slow down and stop. They will use that schema until they are led to see that Newton was right, that objects will continue in motion until some countervailing force—friction, wind resistance, or the like—stops them.
Such misconceptions may seem a minor matter, until one notices that students are indeed misled by them. At a grander level, there is emerging knowledge about cognition and the brain. We now know that the brain continues to develop through late adolescence and early adulthood. A friend who is a developmental psychologist recently reminded me that "the late adolescent body may look physically mature but the nervous system is still developing, particularly about self-regulation and risk assessment abilities." Any parent of adolescent children, or even of young adults, will recognize the truth of what the neuroscientists have now confirmed. The brain, moreover, does not just add capacity; it also develops—it would seem—new capabilities, ones that need to be nurtured, developed, and challenged through the years traditionally associated with undergraduate education. We still have a long way to go in understanding the brain. In particular, as my friend tells me, there's very little research so far relating brain change during college years to student learning.
In between these two extremes, naïve misconceptions and neuroscience, there is a lot of new know-how about what works and what doesn't in helping students learn. As Derek Bok has amply demonstrated in his very useful book, Our Underachieving Colleges (Princeton University Press, 2006), active learning works; lectures often fail. (I fear I may be demonstrating that to you even as I speak!) There are "social pedagogies" in which students work with one another, and learn from one another, about a foreign culture or historically remote situation. The "Reacting to the Past" project developed by Mark Carnes of Barnard College is a good example. Service learning works, especially when properly integrated into the curriculum, as Project Pericles shows. Our Teagle website has numerous other examples of practices that help students learn. The problem is not so much a lack of knowledge; it's slowness in putting that knowledge to work.
Let me turn then to the third development from inside academia that is deeply changing things. We now have new ways of determining whether students are really making progress toward the cognitive goals that have become so central to liberal education. As long as liberal education was equated with exposure to certain content, the traditional course exam was an adequate means of assessment. But when the goal is to develop, over several years and many courses, cognitive capabilities such as analytical reasoning, or even post-formal reasoning, something else is needed. Fortunately, various instruments, such as the Collegiate Learning Assessment, have been developed and now make it possible to see where students are upon entering college, and how far they have come by the time of graduation.
Let me tell you what I think is the secret about the CLA and other such instruments. It's not that they are perfect, by any means. In fact, they seem to me rather like the computers of the 1980s—cumbersome, expensive, not very user-friendly, not interconnected, limited in range and application, etc. There are a lot of problems, theoretical and practical, with such instruments. Hence they are open to all sorts of objections, naysaying, and foot-dragging. But, as with computers, they have immense potential, and even before that potential is fully realized they confer one great, unspoken benefit on those who use them wisely. The secret benefit is this: You have to be clear about goals before you can assess. Good assessment drives clarity about goals, and that clarity can help us structure our assignments, design our courses, rethink our majors. In short, it can help us teach better and our students learn better. So, please, do as I say and not as I did.
Having sketched out these three new developments inside higher education, let me turn to the opportunity they present to us.
The Opportunity: The three points—1 + 2 + 3—don't just add up; they transform. That's the opportunity and, as you can guess, the problem. If a college sets clear cognitive goals for itself, communicates them clearly, finds effective ways to reach those goals, and assesses progress towards them in appropriate ways, student learning can reach much higher levels. By "higher level," I mean several intersecting things—a higher level of interest, curiosity, and engagement; a greater mastery of the processes of reasoning and argument; higher levels of transference from one problem or field to another, hence better analytical reasoning and critical thinking; and finally a richer understanding of problems that can't be solved, or don't have a single right answer. That's a lot, I know, but liberal education is, or should be, an ambitious and rigorous process.
All this is now within reach: it is possible to bring student learning to a much higher level. If we can, we must. So please, do as I say, not as I did! If you don't believe me about the opportunity, look at the canonical Gospel according to St. Derek, Our Underachieving Colleges. Don't be put off by the title. American colleges and universities are not doing a bad job. It's just that they can do so much better. That's the sense in which they are "underachieving," and the sense in which we can all do better. If we can, we must. Do as I say, not as I did.
The Problem: So what's the problem? It's not that American higher education is doing a bad job, or that it won't change. It's the pace, and in particular, the disparity between the pace at which external developments are emerging and the pace of internal change.
In higher education we are accustomed to one kind of change—additive change. Got a problem? Add a program, a requirement, a staff person, and some parking spaces. If you add and never subtract nobody gets hurt or very angry. But the redefinition of goals, the serious use of new knowledge about student learning, and systematic assessment add up to transformational change, not additive change. And that is very scary. No wonder there's resistance on all these fronts; I'd say "glacial" change if the glaciers weren't melting so rapidly and depriving me of one of my favorite metaphors.
In the meantime, the freight train has left the station in Washington, DC. The nature of the problem we face came home to me last week when two reports came across my desk. One was Doug Lederman's article "Consensus (or Not) about Comparability" in Inside Higher Ed (November 30, 2006). From that article, it appears that Mrs. Spellings is moving rapidly to implement the National Commission's desire for comparable (and public) outcome measures. These, it appears, may soon become a basic requirement for reaccreditation.
Many people outside academia will ask, "What's the matter with that? Why shouldn't we expect higher levels of accountability from higher education?" The fearful answer is that this kind of accountability, this "high stakes" assessment, will turn out to be "low yield" for student learning. It may produce quite misleading figures for public consumption and resource allocation while doing very little to help us improve student learning. This fear is likely to become reality when there is a rush to come up with an assessment method: "The accreditors are coming; the accreditors are coming; quick, let's come up with something." In such a scramble the whole assessment process quickly becomes distorted, inappropriate assessment measures may well be adopted, and, worst of all, the spotlight may be pushed away from the most valuable use of assessment—to increase student learning.
The second report came from Ross Miller at AAC&U after he visited a number of the colleges that are members of Teagle's ad hoc collaboratives in value-added assessment. Ross and his colleague went not to evaluate these projects but to develop and to convey to us at the Foundation a hands-on sense of the status of assessment on various campuses. The AAC&U team found change was indeed taking place, but slowly and often haltingly. My friend Todd Glass summed up the situation on some campuses by saying there was "more fear of assessment than understanding of it."
Still it's dramatic change if one looks back a decade or two. In 1989, for example, a friend recollects that "I...visited campuses…in the service of a FIPSE funded 'assessing learning in the major' project, I was told that faculty regarded it as 'treason' for their…colleagues to collude in this malevolent effort. (And the selective New England liberal arts faculty flatly refused to participate at all, their deans' prior assent notwithstanding.)" In fact one college responded: "There is no significant question that assessment can answer. That being the case, we will not be part of the FIPSE assessment project."
The report we have just received from AAC&U paints a very different picture, revealing a growing campus interest in "creating assessments that can improve student learning. In the best cases, campuses are developing common expectations for assessing liberal education outcomes across programs, collecting data about student learning, and making improvements informed by evidence. The universal thread in liberal arts institutions of strong faculty commitment to student achievement provides hope that enthusiasts will continue to enlist others in their efforts to assess systematically how well students have learned and how effective their programs are" (p.11). (See note 1 below.)
That's a big difference from 1989, but the pace of change is still very slow, especially when compared to the likely pace of federally imposed changes in accreditation. The freight train is coming down the track from Washington, DC, full throttle, and a lot of colleges, I fear, have strolled onto the railroad crossing and spread out the picnic blanket while they chat about whether they should be doing anything about systematic assessment on their campus. When the train comes around the bend, these folks will scramble and in their rush come up with bad assessment plans—poorly executed—and no improvements in student learning.
Getting There: The solution, clearly, is to get out ahead of the freight train, now, while there is still time to do it right. Get out in front. Get an educationally valid assessment plan in place before one is forced on you. That means building from the ground up, not the top down. (See note 2 below.) That, at least, is what we are learning from the more than one hundred colleges and universities that are currently participating in Teagle-funded projects in assessment. The most productive starting place, we are hearing, is with the questions that faculty have about their students and how they are learning. The questions may vary from campus to campus, even from course to course. A group of faculty may be concerned about the quality of writing on campus. There are good ways of assessing that, then of improving instruction in subsequent years, and then of assessing again to see if satisfactory progress is being made. On another campus, the issue may be the sophomore year, or overseas study programs, or capstone courses. In every case there are ways of finding out more, and of using that knowledge to improve student learning.
There is nothing very radical in this. Such ground-up assessment builds on the culture of evidence that already prevails on most campuses: we don't publish articles and books unless they are based on evidence; we don't hire, promote, award tenure, build new buildings, or establish new programs without evidence of effectiveness. Why exempt student learning from that process?
There is, however, something more radical in most of our Teagle assessment projects—comparability. I use that word not in the sense that prevails in the Department of Education, that is, as publicly available, quantified, and standardized comparisons of the overall quality of institutions. I mean something simpler and, in my view, more useful. We are finding from these Teagle projects that there is much to be gained when an institution compares itself to its peers on some educationally significant topic. This can be done with anonymity, since all the major instruments of assessment provide strict confidentiality. With the CLA, for example, you see a graph showing how your first-year students compare to those at over one hundred other institutions, and how your seniors compare.
The Teagle collaboratives, however, take this one step further. They have reached the point of trust and confidence at which it becomes possible to sit down with colleagues from peer institutions and compare results with a view to learning from one another how to improve student learning. This takes time, courage, trust, and, above all, a determination to improve student learning. That is, after all, the name of this new ball game; it's the goal of the transformational change we have been talking about, and the reason that this old classicist feels confident that we really can teach better and help our students learn better. And if we can, we must. So, please, do as I say, not as I did.
Note 1: Despite these promising beginnings, campuses will need to take further steps so that their enthusiasm, hope, and interest are turned into action. What is needed to improve student learning—to take it to a higher level—is systematic assessment of student learning (this starts not only with identifying learning outcomes, but also with having a concrete notion of them and of what they "look like") and then systematic use of the data gathered. Without a systematic approach, campuses will find that change is slow and less effective.
Note 2: Though written for journalists and media, the Hechinger Institute's primer on value-added assessment, "Beyond the Rankings: Measuring Learning in Higher Education" (funded by the Teagle Foundation) can be helpful for faculty and campus administrators as well. Another resource to tap is the Center of Inquiry in the Liberal Arts at Wabash College. Through another Teagle grant, the Center is offering support services for liberal arts institutions that are undertaking assessment initiatives on their campus.