A recent article in USA Today discussed standardized testing in D.C. public schools. It mentioned that some high-performing schools have very high numbers of erased answers on tests (when you erase an answer on the Scantron sheet and fill in a new one, the machine can detect the residue). The article claimed that this could be evidence that teachers tampered with the tests prior to submitting them for grading.

The following quote appeared:

"Noyes is one of 103 public schools here that have had erasure rates that surpassed D.C. averages at least once since 2008. That's more than half of D.C. schools."

This statement is not too surprising. If the distribution of erasure rates were symmetrical, then in any given year half of the schools would be above average. Since the schools above average will change from year to year (if only due to random variation), over a 3-year period more than half of the schools will be above average in at least one of those years. (For instance, if the erasure rates are random and independent, then each year each school has a 1/2 probability of being above average, so each school has a 1 - (1/2)^3 = 7/8 probability of being above average in at least one of the three years.)
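The 7/8 figure is easy to check by simulation. This is a minimal sketch of the idealized model in the paragraph above (each school independently above average with probability 1/2 each year); the school count and seed are arbitrary choices for illustration:

```python
import random

random.seed(42)

n_schools = 10_000  # arbitrary; D.C. has far fewer, but more trials = less noise
n_years = 3
above_at_least_once = 0

for _ in range(n_schools):
    # Each year the school is above average with probability 1/2,
    # independently of other years and other schools.
    if any(random.random() < 0.5 for _ in range(n_years)):
        above_at_least_once += 1

frac = above_at_least_once / n_schools
print(f"Fraction above average at least once: {frac:.3f}")
print(f"Theoretical value: {1 - 0.5**n_years:.3f}")
```

The simulated fraction lands close to the theoretical 0.875, so "more than half of schools above average at least once" is exactly what pure chance predicts.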

Aside from this sentence, the rest of the article was actually fairly good statistically. It mentioned that this particular school had erasure rates so much higher than average that they couldn't plausibly be due to chance, and included a lengthy discussion of possible alternative explanations for the data.

## Monday, March 28, 2011


## 3 comments:

This reminds me of a problem I once saw on a test. Assume you have 10 mutual funds. Without a doubt, one of them will have the highest return next year. And without a doubt it will advertise that it was the best mutual fund. But how can you know? Someone is always going to be tops that year even if it is just random variation. So (assuming you have no information about prior years' performance, but you do know the distribution)... the question is, how high of a return does the top fund have to have before you believe (1) that it really is better than average, and (2) that it really is the best mutual fund?

The sentence you mention is ridiculous but the rest of the article is good. It makes me really suspicious.

For part (1) this is a standard multiple comparisons correction. If you want, say, 95% confidence then since there are 10 mutual funds you require a return that's in at least the top 0.5% of the distribution. This means that if all the mutual funds were average there would be less than a (0.5% * 10) = 5% chance you would get one that high by chance (this is what a 95% confidence interval means).
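The Bonferroni arithmetic in that comment can be made concrete with a quick calculation. This is a hedged sketch, not a statement about any real fund: the mean return of 8% and standard deviation of 5% are made-up numbers assumed purely for illustration, and the null hypothesis is that every fund is average:

```python
from statistics import NormalDist

# Hypothetical null model: every fund's annual return ~ Normal(8%, 5%).
# These numbers are invented for illustration only.
n_funds = 10
alpha = 0.05            # overall false-positive rate we're willing to accept
mu, sigma = 0.08, 0.05

# Bonferroni correction: split alpha evenly across the 10 funds,
# so each fund must clear the top alpha/10 = 0.5% of the distribution.
per_fund_alpha = alpha / n_funds
z = NormalDist().inv_cdf(1 - per_fund_alpha)
threshold = mu + z * sigma

print(f"Per-fund cutoff: top {per_fund_alpha:.1%} of the distribution")
print(f"z = {z:.3f}, required return > {threshold:.1%}")
```

Under these assumed numbers the top fund would need roughly a 21% return before you'd reject "all funds are average" at the 95% level, versus about 16% if there were only one fund and no correction.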

For part (2) it would be more complicated, and I'm not sure how to solve it off the top of my head.
