Some argue that generating and interpreting data for purposes of risk assessment invariably introduces ethical (and other value) considerations that may not only go beyond, but may even conflict with, the “accepted canons of objective scientific reporting.” This thesis (call it the thesis of ethics in evidence and inference) is thought by some to show that an ethical interpretation of evidence may warrant violating canons of scientific objectivity, and even that a scientist must choose between the norms of morality and those of objectivity.
The reasoning is that since the scientists’ hands must invariably get “dirty” with policy and other values, they should opt for interpreting evidence in a way that promotes ethically sound values, or maximizes public benefit (in some sense).
I call this the “dirty hands” argument, alluding to a term used by philosopher Carl Cranor (1994).1
I cannot say how far its proponents would endorse taking the argument.2 However, it seems that if this thesis is accepted, it may be possible to regard as “unethical” the objective reporting of scientific uncertainties in evidence. This consequence is worrisome: in fact, it would conflict with the generally accepted imperative for an ethical interpretation of scientific evidence.
Nevertheless, the “dirty hands” argument as advanced has apparently plausible premises, one or more of which would need to be denied to avoid the conclusion which otherwise follows deductively. It goes roughly as follows:
- Whether observed data are taken as evidence of a risk depends on a methodological decision as to when to reject the null hypothesis of no risk H0 (and infer the data are evidence of a risk).
- Thus, in interpreting data that feed into policy decisions with potentially serious risks to the public, the scientist is actually engaged in matters of policy (what is generally framed as an issue of evidence and science is actually an issue of policy values, ethics, and politics).
- The public funds scientific research and the scientist should be responsible for promoting the public good, so scientists should interpret risk evidence so as to maximize public benefit.
- Therefore, a responsible (ethical) interpretation of scientific data on risks is one that maximizes public benefit–and one that does not do so is irresponsible or unethical.
- Public benefit is maximized by minimizing the chance of failing to find a risk. This leads to the conclusion:
- CONCLUSION: In situations of risk assessment, the ethical interpreter of evidence will maximize the chance of inferring there is a risk, even if this means there is a high probability of inferring a risk when there is none (or at least a probability much higher than is normally countenanced).
The argument about ethics in evidence is often put in terms of balancing type I and type II errors.
Type I error: test T finds evidence of an increased risk (H0 is rejected), when in fact the risk is absent (false positive).
Type II error: test T does not find evidence of an increased risk (H0 is accepted), when in fact an increased risk δ is present (false negative).
Some argue that the traditional balance of type I and type II error probabilities, wherein the type I error probability is kept small, is unethical. Rather than prioritizing control of type I errors, it might be claimed, an “ethical” tester should minimize type II errors.
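To see the tradeoff concretely, here is a small simulation sketch (my own illustration, not drawn from Cranor or the post's sources): a one-sided z test of H0: no increased risk, where raising the significance level α shrinks the type II error rate but inflates the type I error rate. The sample size, effect size δ, and trial count are arbitrary assumptions chosen for illustration.

```python
import random
import statistics
from math import sqrt

def one_sided_z(sample, mu0=0.0, sigma=1.0):
    """z statistic for H0: mu = mu0 vs H1: mu > mu0 (sigma known)."""
    n = len(sample)
    return (statistics.fmean(sample) - mu0) / (sigma / sqrt(n))

def error_rates(alpha, delta, n=30, trials=5000, seed=1):
    """Estimate type I and type II error rates by Monte Carlo simulation."""
    rng = random.Random(seed)
    # critical value for a one-sided z test at level alpha
    z_crit = statistics.NormalDist().inv_cdf(1 - alpha)
    # type I: reject H0 when data are generated with no increased risk
    type1 = sum(
        one_sided_z([rng.gauss(0.0, 1.0) for _ in range(n)]) > z_crit
        for _ in range(trials)
    ) / trials
    # type II: fail to reject H0 when an increased risk delta is present
    type2 = sum(
        one_sided_z([rng.gauss(delta, 1.0) for _ in range(n)]) <= z_crit
        for _ in range(trials)
    ) / trials
    return type1, type2

for alpha in (0.05, 0.25):
    t1, t2 = error_rates(alpha, delta=0.3)
    print(f"alpha={alpha:.2f}  type I ~ {t1:.3f}  type II ~ {t2:.3f}")
```

Raising α from 0.05 to 0.25 cuts the chance of missing the (assumed) risk substantially, but only by accepting a fivefold higher chance of declaring a risk that is not there, which is exactly the balance the argument above puts in ethical terms.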
I claim that at least three of the premises, while plausible-sounding, are false. What do you think?
(1) Cranor (to my knowledge) was among the first to articulate the argument in philosophy, in relation to statistical significance tests (it is echoed by more recent philosophers of evidence-based policy):
Scientists should adopt more health protective evidentiary standards, even when they are not consistent with the most demanding inferential standards of the field. That is, scientists may be forced to choose between the evidentiary ideals of their fields and the moral value of protecting the public from exposure to toxins, frequently they cannot realize both (Cranor 1994, pp. 169-70).
Kristin Shrader-Frechette has advanced analogous arguments in numerous risk research contexts.
(2) I should note that Cranor is aware that properly scrutinizing statistical tests can advance matters here.
Cranor, C. (1994), “Public Health Research and Uncertainty”, in K. Shrader-Frechette (ed.), Ethics of Scientific Research, Rowman and Littlefield, pp. 169-186.
Shrader-Frechette, K. (1994), Ethics of Scientific Research, Rowman and Littlefield.
I really hate the Type I, Type II error framework here. In many important cases, the question is not whether a risk is zero but rather how large the risk is, or, more generally, what the magnitudes of different effects are.
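One way to make this last point concrete (a sketch of my own, with hypothetical numbers): instead of a binary reject/accept decision about a zero-risk null, report an interval estimate of the risk increase itself. The sketch below uses a simple Wald confidence interval for a difference in event rates; the group sizes and counts are invented for illustration.

```python
import statistics
from math import sqrt

def risk_difference_ci(x_exposed, n_exposed, x_control, n_control, level=0.95):
    """Wald confidence interval for the difference in event rates
    (exposed minus control): reports a magnitude, not just zero/nonzero."""
    p1 = x_exposed / n_exposed
    p0 = x_control / n_control
    se = sqrt(p1 * (1 - p1) / n_exposed + p0 * (1 - p0) / n_control)
    z = statistics.NormalDist().inv_cdf((1 + level) / 2)
    d = p1 - p0
    return d, (d - z * se, d + z * se)

# hypothetical data: 18/200 events in the exposed group, 9/200 among controls
d, (lo, hi) = risk_difference_ci(18, 200, 9, 200)
print(f"estimated risk increase: {d:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

With these made-up counts the interval happens to include zero while still indicating the plausible size of the increase, information that a bare reject/accept verdict about "no risk" discards.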