The Virtue Of Boring In Education
The College Board recently released the latest SAT results, for the first time combining this release with data from the PSAT and AP exams. The release of these data generated the usual stream of news coverage, much of which misinterpreted the results in one of two ways: treating the year-to-year change in SAT scores as evidence of a lack of improvement, even though the data are cross-sectional and the test-taking sample has been changing; or treating the percentage of test takers who scored above the “college ready” line as a national measure of college readiness, even though the tests are not administered to a representative sample of students.
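To see why flat averages can coexist with genuine improvement when the test-taking pool is changing, consider a minimal sketch with entirely hypothetical numbers (the group sizes and mean scores below are made up for illustration, not drawn from College Board data): even if every group of students scores higher than its counterpart did the year before, the overall average can still fall when a lower-scoring group becomes a larger share of an expanding pool.

```python
# Hypothetical illustration of a composition effect: each group's mean score
# rises from one year to the next, yet the overall average declines because
# the lower-scoring group makes up a larger share of a growing pool.

def overall_average(groups):
    """groups: list of (number_of_test_takers, mean_score) pairs."""
    total = sum(n for n, _ in groups)
    return sum(n * mean for n, mean in groups) / total

# Year 1 (made-up numbers): 1.0 million test takers in two groups.
year1 = [(800_000, 520), (200_000, 440)]

# Year 2 (made-up numbers): 1.5 million test takers; both groups score
# 10 points higher, but the lower-scoring group has grown much faster.
year2 = [(900_000, 530), (600_000, 450)]

print(overall_average(year1))  # 504.0
print(overall_average(year2))  # 498.0, lower overall despite gains in both groups
```

In other words, a flat or falling average drawn from an expanding, shifting test-taking pool tells us very little about whether individual students or schools are actually improving.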
It is disheartening to watch this annual exercise, in which the most common “take-home” headlines (e.g., “no progress in SAT scores” and “more, different students take the SAT”) are in many important respects contradictory. In past years, much of the blame had to be placed on the College Board’s presentation of the data. This year, to their credit, the roll-out is substantially better (hopefully, this will continue).
But I don’t want to focus on this aspect of the organization's activities (see this post for more); instead, I would like to discuss briefly the College Board’s recent change in mission.
The Board’s heavily publicized plans to strengthen the design of their exams, as well as several new programs to increase participation in the PSAT, SAT, and AP exams, are clearly laudable. Yet the organization, which is under new leadership, has committed itself not only to administering a better test to more students, but also to improving student performance (e.g., “college readiness”).
David Coleman, the new president and CEO of the College Board, explains:
For a long time institutions like ours have been reporting that too many students aren’t ready for college and career workforce training. It’s time to do something about it… Offering the same old test in the face of lasting problems is just not good enough.

I’m not so sure.
Look, I’m all for advocacy and programs geared toward improving student performance. And nobody disagrees that students could and should be better prepared. But, putting aside the fact that the SAT and PSAT, like all assessments, are imperfect instruments, organizations that stick around for decades and focus exclusively on administering exams designed to gauge students’ knowledge and skills are not very sexy. But they are necessary, and their value rests precisely on the independence and rigor of their measurement.
And this need for independence and rigor is particularly salient in the case of the College Board, which is entrusted with administering an exam that often has a tremendous influence on young people’s lives.
So, my concern, put simply, is that the College Board’s efforts to improve “college readiness” will not remain independent of the design, administration, and (most importantly) presentation of the results of their assessments. The idea of an organization trying to improve outcomes on tests that it designs and administers is fraught with complications. For example, how will the new advocacy efforts focused on improvement be evaluated? If they’re planning to use simple average SAT scores and AP passing rates as their primary measures of success or failure, this is a huge problem – not only is it the wrong approach, but, again, it is also somewhat “in conflict” with their concerted efforts to expand the test-taking sample.
(Side note: No matter what the Board does, it's a pretty sure bet that, every year, a bunch of reporters and advocates will present flat scores as evidence of these efforts’ failure, and increasing scores as evidence of their success. Either inference would be totally incorrect, but it bears mentioning that this move is almost certain to generate this kind of attention and pressure.)
And, even if the Board decides to evaluate their improvement programs in a valid manner (e.g., RCTs), the type of program would matter in terms of interpreting the effects – i.e., the extent to which positive estimated impacts would represent “improving college readiness,” rather than simply improving scores on the SAT, is unclear. There’s a big difference between generating meaningful increases in students’ knowledge/skills and coaching them to score more highly on the SAT (an endeavor that is notoriously possible).
To reiterate, most of the programs announced thus far by the College Board, particularly those aimed at reducing costs and increasing participation, are beyond reproach. In addition, to be clear, the College Board’s branching out into “improvement-oriented advocacy” could very well be a good thing (most of the specific programs that will embody this goal have yet to be announced, so I don’t want to prejudge). I think, for example, their new focus on following PSAT test takers to see how they do once they eventually take the SAT is an excellent idea. And, certainly, the Board could ramp up programs that help educators use these tests for diagnostic purposes.
My point, rather, is that being boring and measurement-oriented is sometimes as noble and helpful as being a frontline crusader. It is indeed a crusade in itself. And I for one wouldn’t mind there being a few more smart, boring voices out there.
- Matt Di Carlo