Problems with Test Prep, Related to "Disaggregating Education"?

I’ve already commented on this before, here and elsewhere, but I’d be a bit cautious about separating certification assessments from learning support (“disaggregating education”). The example I gave before was programming certifications, which some people simply cram for (test preparation); that cramming has devalued those certifications, to the detriment of the people (with a CS degree or not) who need those jobs and the employers who need those people.

This is already happening to some degree in other areas of K-12 and higher education, however. Dan Hickey (professor at Indiana) explains why test preparation in K-12 schools “is educational malpractice, because that knowledge is useless for any other purpose” in his post What participatory assessment is NOT on a new “re-mediating assessment” blog devoted to assessment issues:

Test prep programs raise achievement scores by training students to recognize hundreds or even thousands of specific associations that might appear on tests. Because of the way our brain works, we don’t need to “understand” an association to recognize it. All test prep programs have to do is help students recognize a few more associations as being “less wrong” or “more correct” to raise scores. Because of the way tests are designed, getting even a handful of the more difficult items correct can raise scores. A lot. And this is the root of the problem this blog is dedicated to solving. We believe that the way knowledge is remediated for tests makes that knowledge entirely worthless for teaching, and mostly worthless for classroom assessment. Specifically, we believe that training kids to recognize a bunch of isolated associations is mostly worthless for anything other than raising scores on the targeted tests. Test preparation practices and the politically motivated lowering of passing scores (“criteria”) on state achievement tests are why scores on state tests have gone up dramatically under No Child Left Behind, while scores on non-targeted tests (like the National Assessment of Educational Progress) and lots of other educational outcomes (like college readiness) have declined. Here is an article referencing some of the earlier studies. We are particularly distressed that so many schools find their computer laboratories locked up and their technology budgets locked down by computer-based test preparation and interim “formative” testing. Despite a decade of e-rate funding, many students in many schools still don’t have access to networked computers to engage in networked technology practices that are actually useful.

There is a lot of debate about the consequences of test preparation for achievement and its impact on other outcomes. We think that any program that directly trains students in specific associations on targeted tests is educational malpractice, because that knowledge is useless for any other purpose. This is because we think that knowledge is more about successful participation in social practices. And these practices have very little to do with test scores. So, in summary, test preparation is the epitome of what participatory assessment is not. Our next post will try to explain what it is.

That’s not to say ‘disaggregating education’ is wrong or a bad idea, necessarily, but there are some tough issues such as these. One solution might be to include performance assessments in the disaggregated certification and assessment services, but I’m not sure how to formalize performance assessments across all areas of K-12 and higher education (or perhaps formalizing is itself part of the difficulty). Another issue is that most recent research shows that assessment works best for students when it is integrated with instruction and is part of learning support, as in formative assessment. Perhaps formal certification plus, for example, a portfolio of work related to one’s learning is a step toward solving that issue. As the Carnegie Foundation and Lee Shulman have stated, the first step toward improving teaching is making it public (creating a ‘teaching commons’), and perhaps the same is true for learning and instruction as well.

Alfred Thompson struggled with a related issue on his blog, too, with regard to AP tests in high school computer science education:

I see a lot of great success in the computer science field on the part of students who did not even pass the Advanced Placement Computer Science exam. The students who did get 4s and 5s on the exam have also done well. So what does the APCS exam tell me about my students’ future success? Nothing.

…for the teachers who are not readers, and for the teachers who worry about the multiple choice questions, I’m not sure they get a lot of value from their students taking the exam. And there is that nagging problem of “teaching to the test” that gets to some of us.

I’ll leave you with one more thought. Real life is an open book test. I strongly believe that. It is one of the great lessons I have learned in my life. Some people never do well on the “read and regurgitate” sort of test that makes up so much of standardized testing. It is just not the way their minds work. They learn well. They know how to find things out. They are willing to work hard to find a way. They’re just not test takers. And for the kids who do well on standardized tests, so what? If the real world is really an open book test, how do standardized closed book tests reflect how the test takers will do in real life?

Also on the re-mediating assessment blog is a post by Jenna McWilliams about mediation and re-mediation. Mediation “refers to communication technologies that we use to mediate, frame, and scaffold our social relations with one another and with our material worlds,” while re-mediation “is a complete reorganization of thinking–new ideas that are mediated in new ways.”

Posted in education, research
