Tuesday, November 20, 2007

A Place at the Table

Here's a suggestion on alternative forms of assessment from the perspective of a teacher - one of the people who actually know the students we code as numbers. A quote that stuck out to me while reading: "Education has a long red-pen tradition of how we measure achievement." -Patricia

By Susan Graham | Teacher Magazine
November 18, 2007
This Is a Test, This Is Only a Test….

Or is it an assessment?

Assessing and testing issues are on my mind a lot lately. They’ve been hot topics at school, on the Web, in Education Week and around the dinner table at our two-teacher home.

But this brain dump is not brought on by first-quarter grades and the parent conferences we are having tonight at my middle school. You may think that I’m going to talk about No Child Left Behind, but I’m not. Nor is this about the recent release of National Board for Professional Teaching Standards assessment scores, or the new “Trial Urban District Assessment” results from NAEP - the Nation's Report Card. It’s more big-picture than any of that.

Testing and assessment are terms we throw around a lot in education. We have a love/hate relationship with them. But are TEST and ASSESSMENT interchangeable terms and tools?

Well, you’ll have to take my test.

Type TEST and then right-click to find a synonym. Microsoft Word will offer you: examination, experiment, check, analysis, trial, assessment and ordeal. So TEST and ASSESSMENT mean the same thing.

Now try this: Type ASSESSMENT and right-click. The synonyms offered are: appraisal, estimation, measurement, judgment, review, consideration, or opinion. TEST isn’t an option. So ASSESSMENT and TEST don't mean the same thing.

Trick question? No. The problem is that too many people are trying to dress up some pretty dull graduate school reports and policy white papers by using the MS Word thesaurus to find synonyms. While a TEST may be a form of ASSESSMENT, an ASSESSMENT is more than just a TEST. They are two different words that may, on occasion, be correctly interchanged. Here is how I see it:

A TEST measures what the test designer chooses to find out about what the test taker knows. Testing is negative in that it identifies what is not known about a definable body of content. It tells what has been mastered, where there are gaps, and can be analyzed to identify patterns for improved instruction. The underlying assumption, of course, is that the test maker knows what is critical information and has the authority to determine the correct answers.

An ASSESSMENT is a more complex process that attempts to capture what the assessment taker knows or can do. It is a positive model that tries to determine how effective the assessed person is at identifying critical information and communicating a justification of how and why his response addresses the question. The assessor is not empowered to impose his interpretation of what the assessment-taker implied or meant but did not state. An assessment is not about what is wrong; it is about (and only about) what the assessment taker sees as right. While it gives more power and control to the assessment taker, it also demands more. The primary responsibility lies with the person taking the assessment.

Education has a long red-pen tradition of how we measure achievement. What most of us remember of our own school assessment process was the opportunity to demonstrate what we did or did not remember about what we were asked to learn. It was safer. It was faster. And it was more defensible. It required less from both parties. Determining real achievement is more complex. It involves more risk on both sides. As the assessment-taker, I am taking the risk that I can demonstrate my achievement effectively. As the assessment-creator, I am going out on a limb and saying that I can recognize your achievement if you demonstrate it effectively.

This same discussion about tests and assessments that my science-teacher husband and I have at the dinner table is taking place at policy tables as well. In a recent Education Week commentary, Accountability Tests’ Instructional Insensitivity: The Time Bomb Ticketh, assessment expert James Popham describes the current testing process as an accountability time bomb because it is instructionally insensitive.

How could American educators let themselves get into a situation in which the tests being used to evaluate their instruction are unable to distinguish between effective and ineffective teaching? The answer, though simple, is nonetheless disquieting. Most American educators simply don’t know that their state’s NCLB tests are instructionally insensitive. Educators, and the public in general, assume that because such tests are “achievement tests,” they accurately measure how much students have learned in schools. That’s just not true.

Dr. Popham is right, in part. Testing as an assessment of student achievement is inaccurate. But I would argue that Dr. Popham is also wrong about one thing. He has made an inaccurate assumption that teachers don’t get it. Teachers deal with living, breathing children who are our nation's favorite test subjects, and we are very sensitive to the limitations of trying to capture 200 days of learning with a single multiple-choice, end-of-course test. We get it! But whenever we point out this abuse of good assessment practice, we are accused of being unwilling to be held accountable because we are (a) lazy; (b) well intended but incompetent; and/or (c) unwilling to believe all children can learn.

I respectfully point out that Dr. Popham's biography indicates that he has been out of the K-12 classroom for more than 30 years. While testing, rather than assessing, may have been standard procedure during his years in the high school classroom, things have changed in most schools. Just as psychometrics and learning theory have evolved, teachers' practices have also evolved to include multiple measures in differentiated formats. Many teachers know what good assessment looks like -- and we practice it. The fact that -- when accountability time comes around -- we are not judged by "instructionally sensitive" tools dismays us, but it is not our fault.

As professionals, our hands have been tied by decision-making processes that, to a great extent, have excluded practicing classroom teachers from the conversation on accountability. I'll make an offer: Invite me to the policy table and I'll be more than happy to describe ways in which we and our students might be more fairly and accurately assessed. In return, I'll invite you to my dinner table, where we can continue the conversation over pie and coffee.

Susan Graham has taught family and consumer science (formerly "home ec") for 25 years. She is a National Board-certified teacher, a former regional Virginia teacher of the year, and a Fellow of the Teacher Leaders Network. She invites readers to pull a chair up to her virtual table as she offers her voice-of-experience perspective on teaching today, with a special focus on teacher leadership and continuous professional growth.

1 comment:

  1. Because really assessing means that you would then remedy the problems in understanding, especially with performance assessments and authentic assessments. The problems would be revealed immediately, providing a prime opportunity to correct any misunderstanding of the learning. But standardized testing that is NOT connected to the immediate remediation of the problems the learner faces serves some other purpose, and certainly not remediating that student.
    So we want to show that kids are not learning and schools are failing, but we don't want to do anything about it, like using best practice in teaching.
