
3.09.2006

Standards: How Can We Measure Schools?

This post has been swimming around in my head in different incarnations for the past week, and it is in response to three different posts. It started when I read David Warlick's post about his shock at finding out that many of his son's peers have their college application essays professionally edited:

“Wait a minute!” I said. “Kids are having their essays edited by professional writers, and then submitting them as part of their application packet?”
“Well, yes!” she said.
“But that’s cheating! But that’s cheating?”
Do the universities know that students are doing this? Do they care? How much does it cost? Would I encourage my son to take advantage? (No!)
I'll get back to this in a bit. A few days later, I read Tim Stahmer's very insightful and accurate critique of Jay Mathews's (Washington Post) Challenge Index, which is the source for Newsweek's annual list of the top high schools in America, based on a recent Education Sector report:
Creating a list like this wouldn’t be so bad if it wasn’t given such high credibility by the news media. School quality is a complex issue and the flurry of publicity that surrounds the Challenge Index masks many other factors that need to be addressed.

[From the report] Using publicly available student performance data, we found that many schools on Newsweek’s 2005 ranking have glaring achievement gaps and high dropout rates. By presenting them as America’s best, Newsweek is misleading readers and slighting other schools that may in fact be better than those on Mathews’ list.
Finally, Chris Lehmann just posted a really beautiful piece on what schools are, and are not:
We are not:
Only a sorting mechanism for colleges
A target market
The next great money-making scheme.
A business
A way to create the next generation of workers.
With these three pieces in the back of my head, I've been thinking a lot about how we should and can measure schools. As much as I hate to admit it, I think the spirit of NCLB is on the right path: schools and teachers should be held accountable for the achievement of their students, and this achievement should be measured not just in aggregate, but also for sub-populations within the school based on race, gender, and eligibility for special education services (I would add class to the list). In its implementation, though, NCLB is so wrong in so many ways, but those problems have all been well explained elsewhere.

When I first read Tim's piece, I thought about what a "Challenge Index" or other such ranking would look like if it actually responded to what the community wants for its students, i.e., what a community believes its students should know and be able to do upon graduating. One of the obvious answers that came to mind was to measure schools by what their students are doing 1, 5, 10, etc. years after graduating. In suburban schools, for example (the ones that look the best in Mathews' Challenge Index), the primary concern of parents and students is college admissions. Wouldn't it then make sense to measure schools by where their students are admitted to college?

However, this raises another problem, one which David ignores in his piece: unequal access to college essay editors, test prep, tutoring, etc. When I started off at Brown, one of the biggest shocks for me was that it seemed that every middle- and upper-class student from one of the coasts had taken private SAT prep courses, and many (the majority?) had private tutors. In the community where I grew up, a middle/upper-middle-class suburb in Ohio, the only students who took SAT prep courses were those who had difficulties with test taking. The ability to be tutored for the SAT and AP tests, as well as to hire private editors for college essays, gives the already advantaged yet another advantage over most students. So would it really be fair to measure a school on college admissions?

Which leads me to a quandary. How can we measure schools in a way that satisfies the (unfortunate) cultural demand for numerical measurement and comparison while staying true to what a school is (or at least should be)? How can we measure in a way that avoids becoming what Chris very eloquently does not want us to become?
