I escaped (to Virginia) from New York just in the nick of time before the threat of Hurricane Sandy led Bloomberg to completely shut things down (a whole day in advance!) in expectation of the looming “Frankenstorm”. Searching for the latest update on the extent of Sandy’s impacts, I noticed an interesting post on statblogs by Dr. Nic: “Which type of error do you prefer?”. She begins:
Mayor Bloomberg is avoiding a Type 2 error
As I write this, Hurricane Sandy is bearing down on the east coast of the United States. Mayor Bloomberg has ordered evacuations from various parts of New York City. All over the region people are stocking up on food and other essentials and waiting for Sandy to arrive. And if Sandy doesn’t turn out to be the worst storm ever, will people be relieved or disappointed? Either way there is a lot of money involved. And more importantly, risk of human injury and death. Will the forecasters be blamed for over-predicting?
Given that my son’s ability to travel back here is on hold until planes fly again (not to mention that snow is beginning to swirl outside my window), I definitely hope Bloomberg was erring on the side of caution. However, I think that type 1 and type 2 errors should generally be put in terms of the extent and/or direction of the errors that are or are not indicated or ruled out by test data. Criticisms of tests very often harp on the dichotomous type 1 and type 2 errors, as if a user of tests lacks the latitude to infer the extent of discrepancies that are or are not likely. At times, attacks on the “culture of dichotomy” reach fever pitch, leading some to call for the overthrow of tests altogether (often in favor of confidence intervals), as well as for the creation of task forces seeking to reform, if not “ban,” statistical tests (which I spoof here).
Dr. Nic continues:
Types of error
There are two ways to get this sort of decision wrong. We can do something and find out it was a waste of time, or we can do nothing and wish that we had done something. In the subject of statistics these are known as Type 1 and Type 2 errors. Teaching about Type 1 and Type 2 errors is quite tricky and students often get confused. Does it REALLY matter if they get them around the wrong way? Possibly not, but what really does matter is that students are aware of their existence. We would love to be able to make decisions under certainty, but most decisions involve uncertainty, or risk. We have to choose between the possibility of taking an opportunity and finding out that it was a mistake, and the possibility of turning down an opportunity and missing out on something.
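Dr. Nic’s two error types can be made concrete with a quick simulation. The sketch below is purely illustrative and not from her post: the one-sided z-test of H0: mu = 0 versus H1: mu > 0, the alternative mu = 0.5, and all the other numbers are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: test H0: mu = 0 vs H1: mu > 0 with known sigma = 1, n = 25,
# rejecting when the standardized sample mean exceeds 1.645 (one-sided 5% cutoff).
n, sigma = 25, 1.0
z_crit = 1.645

def z_stat(sample_mean):
    """Standardized sample mean under H0: mu = 0."""
    return sample_mean / (sigma / np.sqrt(n))

trials = 100_000
means_h0 = rng.normal(0.0, sigma, size=(trials, n)).mean(axis=1)  # H0 true
means_h1 = rng.normal(0.5, sigma, size=(trials, n)).mean(axis=1)  # mu really 0.5

# Type 1 error rate: rejecting H0 when H0 is true (should be near 0.05).
type1 = np.mean(z_stat(means_h0) > z_crit)
# Type 2 error rate: failing to reject when mu is actually 0.5.
type2 = np.mean(z_stat(means_h1) <= z_crit)

print(f"Type 1 rate ~ {type1:.3f}")
print(f"Type 2 rate ~ {type2:.3f}")
```

Notice that the two rates trade off: lowering the cutoff shrinks the type 2 rate but inflates the type 1 rate, which is the dilemma Dr. Nic is pointing to.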
Granted, in some contexts there may be “acts” associated with test results, e.g., to evacuate or not. But in using tests for finding things out and scrutinizing given evidence, we may wish instead to quantify the discrepancies that are ruled out, and with what stringency or severity. (This is related to, yet different from, confidence intervals.) Quantitative errors of interest with storm Sandy might be: inferring (claiming, supposing) that winds will be no greater than 60 miles an hour when they will go to 90 or 100. Likewise with erroneous claims about the extent of power loss, flooding, snow, etc. That is a much more nuanced construal of tests, and far more appropriate for learning contexts, or so I have argued. But in any case, there are other interesting examples in Dr. Nic’s post, and I do hope she is right in thinking Bloomberg was being precautionary. It surely felt that way yesterday, experiencing the calm before the storm, but much less so now, watching Atlantic City under water, and the swaying trees and thickening snowflakes outside my round window.
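The idea of asking which discrepancies a result rules out, and with what severity, can also be sketched numerically. Every number below (sigma, n, the 60 mph hypothesis, the observed mean of 62) is a hypothetical choice of mine for illustration, not a claim about Sandy’s actual winds:

```python
from math import sqrt
from statistics import NormalDist

# Assumed setup: one-sided test of H0: mu <= 60 (say, peak wind in mph)
# vs H1: mu > 60, with known sigma and sample size, both invented here.
sigma, n = 10.0, 16
se = sigma / sqrt(n)

def severity(claim_mu, observed_mean):
    """Severity for the claim 'mu <= claim_mu', given the observed mean:
    the probability of getting a sample mean larger than the one observed,
    were the true mean actually as large as claim_mu."""
    return 1 - NormalDist(mu=claim_mu, sigma=se).cdf(observed_mean)

# With an observed mean of 62 mph (no rejection of H0), how well do the
# data rule out winds as high as 70 mph on average?
print(f"SEV(mu <= 70) = {severity(70, 62):.3f}")
```

A high value says: had the mean wind really been 70, we would almost surely have observed something larger than 62, so the data warrant the claim mu <= 70 with high severity; repeating the calculation across values of `claim_mu` traces out which discrepancies are and are not well ruled out, rather than a bare accept/reject.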
Pretty clearly, Bloomberg was not underestimating the risks of Sandy! It’s unbelievable to see so much of NYC without power!