FairTest
Issue: Jul 2008
Researchers continue to find that high-stakes testing has damaging consequences, such as decreased graduation rates, especially for low-income and minority-group students. Survey results show that employers do not find test scores meaningful for evaluating job candidates. And a study probed the sources of achievement gaps in part by asking students how they chose their answers on a fifth-grade science exam.
* In January, Rice University’s Linda McNeil and colleagues published “Avoidable Losses: High Stakes Accountability and the Dropout Crisis,” which found that Texas’s accountability system has depressed graduation rates. Most severely affected were African Americans, Hispanics, and English language learners. (A summary and link to the article are available HERE.)
Now Julian Vasquez Heilig and Linda Darling-Hammond have published a complementary study, "Accountability Texas-style: The Progress and Learning of Urban Minority Students in a High-Stakes Testing Context," in the June issue of Educational Evaluation and Policy Analysis. This study of one Texas district demonstrates how high-stakes testing prompted significant increases in grade 9 retention (grade 10 is tested, 9 is not). Up to 30% of the students were retained in 9th grade; of those "only 12% ever took the TAAS, and only 8% passed it." Up to 40% of each grade cohort "withdrew." Most were not labeled “dropouts” but simply disappeared from the rolls; only 8% transferred to other schools.
The fraudulent result was a soaring reported graduation rate, though the authors' analysis showed only one-third of a cohort graduated in five years or less. African Americans, Latinos and English language learners had the lowest completion rates. This study covers years before the federal No Child Left Behind (NCLB) law was enacted but when the Texas model for NCLB was already in full force. The accountability system did not help low-income and minority-group children, but it helped the state pretend its policies were working.
-- Educational Evaluation and Policy Analysis, 2008; 30: 75-110
* Exit exams have no positive impact on academic achievement, according to a peer-reviewed study published in Educational Policy in June. The study, by Eric Grodsky, Demetra Kalogrides and John Robert Warren, compared reading and math scores of students in exit exam states with those in states without exit exams. It found that even the most difficult exams failed to improve performance in reading and math. The researchers analyzed the scores of 13- and 17-year-olds on National Assessment of Educational Progress (NAEP) exams.
-- “State High School Exit Examinations and NAEP Long-Term Trends in Reading and Mathematics, 1971-2004”
* Businesses evaluating potential employees generally do not place a high value on test results, according to a survey of 301 employers by Peter D. Hart Research Associates. The survey, released in January, reported that employers see multiple-choice tests as ineffective. They prefer to judge potential workers based on “assessments of real-world and applied-learning approaches,” such as evaluations of supervised internships, community-based projects, and comprehensive senior projects.
-- “How Should Colleges Assess and Improve Student Learning?” is available.
--A related FairTest fact sheet, “Why Graduation Tests/Exit Exams Fail to Add Value to High School Diplomas” is available.
* “Left Behind by Design” found differential results for low-, middle-, and high-performing Chicago students on the Iowa Test of Basic Skills and the Illinois State Achievement Test. Derek Neal and Diane Whitmore Schanzenbach, both University of Chicago economists, examined the performance of Chicago public school students to see how test-based accountability systems influenced the achievement of students at different ability levels. They found that No Child Left Behind and similar district reforms failed to generate score gains for the lowest-performing students. There were gains for students in the middle of the pack, but only mixed evidence of gains among the highest-achieving students. The authors said that because of requirements to have more students score proficient, "Schools may find it optimal to ignore students who have little or no chance of reaching proficiency without intensive and costly intervention … and to limit services for gifted children who are likely already proficient.” In addition, "raising standards may actually increase the number of low-achieving children who are ‘left behind’ by increasing the number for whom the standard is out of reach." The authors did not discuss to what extent the middle students’ gains reflected score inflation from a narrowed curriculum and teaching to the test, consequences documented in other studies.
-- “Left Behind by Design: Proficiency Counts and Test-Based Accountability” is available.
* A study of why students chose particular answers on the Massachusetts Comprehensive Assessment System (MCAS) 5th grade state science test provides broader insights into achievement gaps and the limitations of standardized tests, particularly for low-income students and English language learners. The test appears to underestimate how much these students know.
The researchers studied measurement error that arises when students who know the material nevertheless choose the wrong answer ("false negatives") or when students who do not know the material get the answer right ("false positives"). They then interviewed students about why they chose the answers they did.
Question wording seemed to cause both false positives and false negatives. On questions and answers with long or complicated wording, more students chose the wrong answer even though they knew the science behind the question. Low-income and English language learner students primarily fell into the false negative error pattern.
The authors wrote that when students, classes or schools do poorly on a test, there is a tendency to assume that the student, teacher or school is at fault. “The results reported here, while preliminary, suggest a far more complicated picture that at the very least casts doubt on the use of scores on MCAS and similar high-stakes tests to make consequential decisions about students, teachers, administrators, and schools.”
-- For more information on the study, “Making Sense of Children’s Performance on Achievement Tests: The Case of the 5th Grade Science MCAS,” contact TERC at www.terc.edu.