Junk Science (as first coined).* Have you ever noticed in wranglings over evidence-based policy that it’s always one side that’s politicizing the evidence—the side whose policy one doesn’t like? The evidence on the near side, or your side, however, is solid science. Let’s call those who first coined the term “junk science” Group 1. For Group 1, junk science is bad science that is used to defend pro-regulatory stances, whereas sound science would identify errors in reports of potential risk. (Yes, this was the first popular use of “junk science”, to my knowledge.) For the challengers—let’s call them Group 2—junk science is bad science that is used to defend the anti-regulatory stance, whereas sound science would identify potential risks, advocate precautionary stances, and recognize errors where risk is denied.
Both groups agree that politicizing science is very, very bad—but it’s only the other group that does it!
A given print exposé exploring the distortions of fact on one side or the other routinely showers wild praise on its own side's—its science's and its policy's—objectivity, its adherence to the facts, just the facts. How much more impressed might we be with a text or a group that admitted to its own biases?
Take, say, global warming, genetically modified crops, electric-power lines, medical diagnostic testing. Group 1 alleges that those who point up the risks (actual or potential) have a vested interest in construing the evidence that exists (and the gaps in the evidence) accordingly, which may bias the relevant science and pressure scientists to be politically correct. Group 2 alleges the reverse, pointing to industry biases in the analysis or reanalysis of data and pressures on scientists doing industry-funded work to go along to get along.
When the battle between the two groups is joined, issues of evidence—what counts as bad/good evidence for a given claim—and issues of regulation and policy—what are “acceptable” standards of risk/benefit—may become so entangled that no one recognizes how much of the disagreement stems from divergent assumptions about how models are produced and used, as well as from contrary stands on the foundations of uncertain knowledge and statistical inference. The core disagreement is mistakenly attributed to divergent policy values, at least for the most part.
Over the years I have tried my hand at sorting out these debates (e.g., Mayo and Hollander 1991). My account of testing actually came into being to systematize reasoning from statistically insignificant results in evidence-based risk policy: no evidence of risk is not evidence of no risk! (see October 5). Unlike the disputants who get the most attention, I have argued that the current polarization cries out for critical or meta-scientific (or meta-statistical) scrutiny of the uncertainties, assumptions, and risks of error that are part and parcel of the gathering and interpreting of evidence on both sides. Unhappily, the disputants tend not to welcome this position—and are even hostile to it. This used to shock me when I was starting out—why would those who were trying to promote greater risk accountability not want to avail themselves of ways to hold the agencies and companies responsible when they bury risks in fallacious interpretations of statistically insignificant results? By now, I am used to it.
This isn't to say that there's no honest self-scrutiny going on, but only that all sides are so used to anticipating conspiracies of bias that my position is likely viewed as yet another politically motivated ruse. So what we are left with is scientific evidence having less and less of a role in constraining or adjudicating disputes. Even to suggest an evidential adjudication risks being attacked as a paid insider.
I agree with David Michaels (2008, 61) that “the battle for the integrity of science is rooted in issues of methodology,” but winning the battle would demand something that both sides are increasingly unwilling to grant. It comes as no surprise that some of the best scientists stay as far away as possible from such controversial science.
What about the recent case of some scientists asking Obama to prosecute “global warming skeptics”? Science is being politicized but on which side (or both)?
*Just as relevant now as when I first blogged this 4 years ago (under “objectivity”).
Mayo, D. and Hollander, R. (eds.). 1991. Acceptable Evidence: Science and Values in Risk Management. Oxford: Oxford University Press.
Mayo, D. 1991. "Sociological versus Metascientific Views of Risk Assessment," in D. Mayo and R. Hollander (eds.), Acceptable Evidence: 249-79.
Michaels, D. 2008. Doubt Is Their Product. Oxford: Oxford University Press.
Mayo,
Thanks for your thoughts. It would seem that the retreat from an evidence-based world view can be found on both sides of the aisle. From the left, just take a look at the science writing in Mother Jones. From the right, look at the latest pronouncements of the Republican-chaired House Science Committee.
Tracey Brown delivered this year's annual lecture for Sense about Science, which you may find interesting. Brown takes the researcher community to task for generally failing to acknowledge the uncertainty in their positions. The Guardian has her lecture available for download at:
http://www.theguardian.com/science/audio/2015/oct/02/sense-sensibility-untrustworthy-nature-ugly-truth-tracey-brown-podcast
Of course, the politicization of science pushes most everyone in the opposite direction, to overstate the validity and certainty of scientific claims. In my business, specious claiming is so common that it becomes the norm in the courtroom.
Nathan
Nathan: It's true that "the retreat from an evidence-based world view can be found on both sides of the aisle". Maybe it's the offshoot of years of promulgating social constructivism, radical relativism, dada-ism, and anarchy about science, but I think there's more to it. I've begun to take the view that any "side" that discounts and vilifies genuine arguments, questions, or criticism of a favored view V, resorting to ad hominem attacks on the critics as driven by politics, despite the V-skeptics giving legitimate reasons, should, for that reason alone, be considered as promoting bad science (not to mention squashing free speech). That entails opposing those who resort to such bad arguments even if we think V is warranted and well tested. I, for one, find such behavior incredibly offensive, and hurtful, even when I too agree with view V. As I say in my last sentence, people are increasingly unwilling to do that these days. The only cure, as I see it, is to join forces against such behavior, and on a very, very routine basis, regardless of issue. I'm not saying there's no limit to how far we ought to grant a critical hearing even to those who are raising sincere questions, but nowadays criticism in the public realm is cut off almost immediately and charged as a sign of bad (or "__ist") motives.
I don’t know if people watched Bill Maher and Richard Dawkins last night, by the way.
As for law, I tend to consider it as in a different category, at least to some degree, because lawyers are supposed to go just as far as what’s legally permissible to win the case. Aren’t they?
(Just to be clear, this is a reply to Mayo’s reply, not a direct reply to Nathan’s comment.)
On Twitter, you ( = Mayo) characterized this as “de-politicizing science.” But I wouldn’t say it’s depoliticizing; you’re talking about virtues that are necessary to do politics well, especially charity and honesty (about the limits of one’s position and the strengths of the other side’s), and maybe also a certain degree of humility (science isn’t perfect and so there’s always a chance that my policy predictions are erroneous).
Dan: Sure, but then the original, pejorative meaning goes by the board. One may well see it as a political or ethical stance to subscribe to methods that would help de-politicize science and its discussion.
It would be interesting to ponder how such an activity could be promulgated; I don’t think it’s far-fetched at all.
However, I don’t know that these are virtues that are necessary to do politics well. Maybe I’m just jaded.
I would add data access. If the data are not public or available to a trusted third party so that data and methods can be examined, then it is not really science. It is only human to make a case favorable to one's cause (cheat) if there is no oversight.
Hi Stan: I agree. I know you've said in the past that the EPA hasn't shared data on air pollution. Are they not required to?
“no one recognizes how much of the disagreement stems from divergent assumptions about how models are produced and used …. The core disagreement is mistakenly attributed to divergent policy values, at least for the most part.”
Disagreements about how models are produced and used, the acceptability of different kinds of evidence, and other epistemic disagreements can themselves be influenced by policy disagreements.
To make this concrete, consider the formaldehyde case discussed in the linked 2011 post. There are at least two epistemic issues at play here, and which side different parties take on those epistemic issues is easy to predict based on their policy interests.
The first issue is the status of animal feeding experiments. Do these provide good evidence for claims about the effects of formaldehyde in humans? Environmentalists and public health advocates argue yes, pointing to qualitative and quantitative interspecies similarities. Chemical industry representatives and animal rights advocates argue no, pointing to qualitative and quantitative interspecies differences. Notably, these positions can be switched in particular cases based on the policy implications. When industry uses estrogen-insensitive strains of lab rats to test bisphenol-A [BPA], they claim that these experiments provide good evidence, while environmentalists and public health advocates claim that they don’t. (For more discussion of this case, see this paper: http://www.sciencedirect.com/science/article/pii/S0039368108001155; and I have a paper making an analogous point about the yield benefits of genetically modified crops: http://www.sciencedirect.com/science/article/pii/S1369848615000321)
The second issue is over the burden of proof, or how to set error tolerance thresholds and which kinds of errors count as type I vs. type II. Which error would be worse in the formaldehyde case?
(1) incorrectly concluding that formaldehyde is safe, or
(2) incorrectly concluding that formaldehyde is carcinogenic?
Environmentalists and public health advocates would presumably say that (1) is worse (or, more precisely, that (1) is so much worse than (2) that our error tolerances should be set to avoid (1) even if that means a substantial chance of error (2)). Chemical industry representatives would presumably say that (2) is worse (or, more precisely, that our error tolerances should be set so that the chances of both kinds of errors are about the same).
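To make the tradeoff concrete, here is a minimal sketch of the arithmetic for a hypothetical two-group bioassay; the background tumor rate, elevated rate, and group size are illustrative assumptions, not real formaldehyde data:

```python
# How the choice of alpha trades off type I against type II error in a
# hypothetical two-group carcinogenicity bioassay (one-sided two-proportion
# z-test, normal approximation). All numbers are made up for illustration.
from statistics import NormalDist

def type2_error(p0, p1, n, alpha):
    """Approximate chance of missing a true rise in tumor rate from p0 to p1."""
    z = NormalDist()
    se0 = (2 * p0 * (1 - p0) / n) ** 0.5           # SE under the null
    se1 = (p0*(1-p0)/n + p1*(1-p1)/n) ** 0.5       # SE under the alternative
    crit = z.inv_cdf(1 - alpha) * se0              # critical rate difference
    return z.cdf((crit - (p1 - p0)) / se1)

p0, p1, n = 0.05, 0.15, 50   # assumed background rate, elevated rate, group size
for alpha in (0.01, 0.05, 0.10):
    print(f"alpha = {alpha:.2f} -> type II error ~ {type2_error(p0, p1, n, alpha):.2f}")
```

With these inputs, tightening alpha from 0.10 to 0.01 (guarding against error (2)) roughly doubles the type II error (the chance of error (1)), which is exactly the tradeoff the two camps are fighting over.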
Dan: Thanks for your comment. It's true that political and other values seep through at every stage, but we can critically evaluate the protectiveness or non-protectiveness of various choices. I call these risk assessment policy (RAP) values. It's methods that aren't open to such critical appraisal that are most problematic. Ironically, failure to understand the capabilities of methods, especially statistical ones, can and does lead advocates of a certain degree and kind of protectiveness (policy A, say) to argue for a method or interpretation that is opposed to policy A.
Mayo,
Lawyers are ethically charged with representing their clients zealously. Whatever that means, it doesn't include misrepresenting the evidence. Courts speak of the lawyers' right to give "fair comment" on the evidence, with a great deal of latitude. The opportunities for distorting the evidence within the adversarial framework are substantial, but probably not much more than what I see in the media, in the legislatures, in regulatory agencies, and even in journals and in textbooks, not to mention among the expert witnesses themselves, who are drawn from the scientific community. The "promise" of the Daubert case is that neutral judges will rein in both the expert witnesses and the lawyers, but the promise is often broken.
Nathan: But surely you are right to have pressed the letter of the law in defense of Harkonen, given a precedent (on statistical significance, post-data subgroups, and other things)–even if you thought the precedent was not entirely sound on evidential grounds. Never mind that case, which I don't want to argue about; my point is simply that it would be fair, legally, to exploit whatever is permissible, given language, precedents, and all those technicalities that non-lawyers rarely know about–perhaps it is even required. Well, I'm bound to be in trouble arguing this with you, but at the very least you'll agree that burdens of proof are rather special in law.
I will try to avoid rearguing the Harkonen case itself, but it is an interesting example of what a lawyer may argue. In that case, after the conviction, the defense counsel became aware of the position that the government took in the Matrixx Initiatives case, the one in which Ziliak and McCloskey filed their amicus brief. The government also filed an amicus brief, in which the Solicitor General, acting on behalf of the FDA, took the position that statistical significance was unimportant in determining whether there was medical causation between the use of Zicam and anosmia. It was a crazy position, drawn out no doubt because the defendant, Matrixx Initiatives, argued that causation was essential to be pleaded in the securities fraud case, and that statistical significance was essential to causation. (Recall that all that existed were some case reports that did not permit an analysis of significance probability, but the defense still made the argument, and the plaintiffs, with the government's support, countered. Statistical significance vel non was never really at issue.) The Supreme Court ultimately ruled that causation need not be pleaded under the circumstances, and this holding made the statistical significance issue immaterial to the outcome. Still, the Court improvidently pressed on and said some silly things about the issue, in language we lawyers call "dicta."
The Harkonen prosecution turned, however, on whether an outcome, pre-specified with some particularity, was statistically significant. The government argued an even more extreme position, which I think would be widely rejected in the clinical trial community: that if a trial failed on its primary outcome, it had "failed," and thus could offer no evidential support for any claim. The government's position in Harkonen was quite at odds with its position in Matrixx Initiatives.
Now lawyers, even government lawyers, are not allowed to argue out of both sides of their mouths, even if it advances their interests in each instance. So I offer this up as an example that lawyers can sometimes argue issues more opportunistically than scientists, but there are limits even for lawyers. Matrixx/Harkonen illustrates an instance in which the government, qua litigant, clearly crossed the line.
I will offer up another example, more in line with your suggestion that lawyers have greater latitude than scientists. Suppose a defendant funded a study, which provided important “exculpatory” evidence that its product did not, and could not, have contributed to the harm alleged by plaintiff. The fact of the funding was disclosed. Suppose further that this study was extremely well done, and had few if any threats to validity, internal or external. On the other hand, the studies relied upon by plaintiff were truly junk, of poor quality in terms of data integrity, collection, and analysis. The plaintiffs’ lawyer would be free to argue that the defendant-funded study should be totally disregarded solely because of the bias of its funding source. This position would, I hope, be totally rejected in the scientific community. This hypothetical fairly represents what happened in the silicone gel breast implant litigation until the defense was able to show that many of the plaintiffs’ studies actually involved research misconduct.
Nathan
First, for readers wishing to read about the interesting Harkonen case, please look at:
https://errorstatistics.com/2012/02/08/distortions-in-the-court-philstock-feb-8/
https://errorstatistics.com/2012/12/13/bad-statistics-crime-or-free-speech/
https://errorstatistics.com/2012/12/17/philstatlawstock-multiplicity-and-duplicity/
https://errorstatistics.com/2013/10/09/bad-statistics-crime-or-free-speech-ii-harkonen-update-phil-stat-law-stock/
Or any one of them.
Second: I knew you couldn't resist relitigating Harkonen—just kidding. Is the man in jail, by the way? Or back on the biopharm streets?
On your second point, I am totally sympathetic (even if it wasn't quite the kind of case I had in mind regarding burdens of proof). In fact you were the one who got me to realize that conflict of interest should not be simply a matter of financial connections: political axes to grind are every bit as likely, often even more likely, to bias. In any event, the evidence should be scrutinized on its merits. So are they ever going to fix that?
Haha; you were baiting me, but I couldn’t resist. Dr Harkonen’s sentence was 6 months of house arrest. He has served his sentence, but he is challenging the conviction, post-judgment, on grounds of ineffective assistance of counsel. I won’t go into details here and now, but the challenge has to do with his lawyers’ failure to call an expert witness. (Recall that after conviction, Steve Goodman and Don Rubin both filed supporting affidavits on the merits, but it was too late to litigate the issues.)
As for "fixing" the problem I identified, the Daubert gatekeeping process can, in theory, shut down an extreme case such as the breast implant litigation. Daubert was decided in 1993, but it took some time for judges to summon the courage to reverse the trend of letting the cases be submitted to juries. Ultimately, the lead judge appointed a panel of expert witnesses who ruled for the defense on the merits, and then the Institute of Medicine issued a 600-page report that further supported the defense position. Only then did courts stop the litigation on a TKO of plaintiffs' claims.
But the more general problem of lawyers arguing invalid or fallacious arguments remains. In several cases I know of, prosecutors committed what has come to be known as the prosecutor's fallacy, interpreting the random match probability as the probability of innocence, and appellate courts blinked, holding that the argument was "fair comment" on the trial evidence.
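A minimal numerical sketch shows why that inference is fallacious; the match probability and the size of the pool of alternative suspects are made-up numbers:

```python
# The prosecutor's fallacy: P(match | innocent) is not P(innocent | match).
# Both the random match probability and the suspect pool are assumptions.
match_prob = 1e-6        # P(match | innocent): the random match probability
pool = 5_000_000         # assumed number of plausible alternative sources

# Expected number of innocent people in the pool who would also match:
innocent_matches = match_prob * pool                 # = 5

# If the defendant was identified only by the match (no other evidence),
# he is roughly one of six matchers, only one of whom is the true source:
p_innocent_given_match = innocent_matches / (1 + innocent_matches)
print(f"P(match | innocent)  = {match_prob:.0e}")    # one in a million
print(f"P(innocent | match) ~ {p_innocent_given_match:.2f}")  # about 0.83
```

On these assumptions, a one-in-a-million match probability coexists with an 83% chance of innocence, which is why equating the two is "fair comment" only in the loosest sense.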
Nathan: I'd forgotten some of the details, but I don't see how he can challenge after losing at the Supreme Court! I wonder what expert witness he thought would save him. I'll bet he's back doing the same thing. You know I sometimes have mixed feelings regarding biotech—after all, I'm as capable as anyone of being convinced (biased?) that a company's got a great drug, especially if I own the stock. But I've no sympathy for Mr. Harkonen (who is vaguely similar to a Mr. Hack in my new book).
Quick answer. The Supreme Court declined to grant certiorari, which is no decision on the merits. Dr. Harkonen is not challenging the merits after exhausting his appeals; rather, he is challenging his counsel for having provided ineffective assistance at trial.
Dr. Mayo: Do you know that the person leading the call to prosecute climate skeptics is being dubbed a "climate profiteer" making hundreds of thousands a year, and that his letter has been rescinded? http://www.climatedepot.com/2015/09/20/update-leader-of-effort-to-prosecute-skeptics-under-rico-paid-himself-his-wife-1-5-million-from-govt-climate-grants-for-part-time-work/
e.b. Thanks for the link. Yes, someone sent me articles about the ringleader (Shukla?) being investigated. I don’t know if the other signers were part of his group.
Moral appeals for better scientific behavior are needed, but may not be very effective for dealing with problems beyond obvious transgressions of simple, generally accepted principles.
My view of personal and group biases is highly relativistic: It seems usual, or at least commonplace, for people to take themselves as the moral center in matters for which they feel free to do so (as in typical academic debates), or else to take as their referent the moral center they perceive (however poorly) of the groups with which they identify. This tendency subsumes financial interests, and can result in spectacular heterogeneity in methodologic as well as theoretical preferences, whether in academia or the courtroom (although of course it is financially more intense in litigation). I think Neyman touched on this issue in his 1977 Synthese paper, recognizing the subjectivity it represents as an inescapable element of his own system (see pp. 104-106).
I think it safe to generalize his observation to one that no formal statistical method is free of value bias – the bias is just hidden when its loss function is left implicit or is unrecognized. This will affect choice and perception of methods when those differ in their tendency toward reaching preferred inferences or decisions. To deal with that and related problems will require bringing loss functions and value biases in methods to the fore, and a new component of statistical science involving cognitive psychology. Developing that component and getting it into teaching is a challenging project that nonetheless seems to me in dire need, compared to the extensive philosophical and enormous mathematical components already in place. A good place to start would be by covering cognitive biases as part of statistical training (as is done by some instructors).
Sander: Neyman said there, as he often did, that the error deemed most important to avoid may be a matter of the "subjective attitudes" of the researcher, but he was also clear that this was irrelevant to the objective critique of the inferences warranted, given the choice of test specifications. So, for example, he applied a power analysis in order to critically evaluate a negative result from a test with low power to detect discrepancies of concern. It is not warranted to take a negative result as "confirming" a claim that the discrepancy < d if the test had low power to detect d if present. Neyman himself assumed the type I error would be the one "first" in importance, but no such judgments are needed to run and then critique a test.
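Here is a minimal sketch of that power-analytic reasoning (not Neyman's own calculation); the standard deviation, discrepancy of concern, and sample sizes are made-up numbers:

```python
# Can a nonsignificant result warrant the claim "the discrepancy is < d"?
# Only if the test had high power to detect d. All inputs are illustrative.
from statistics import NormalDist

z = NormalDist()

def power_at(d, sigma, n, alpha=0.05):
    """Power of a one-sided z-test of H0: mu = 0 to detect a discrepancy d."""
    return 1 - z.cdf(z.inv_cdf(1 - alpha) - d / (sigma / n ** 0.5))

sigma, d = 10.0, 2.0         # assumed SD and discrepancy of concern
for n in (25, 100, 400):
    print(f"n = {n:3d}: power to detect d = {d} is {power_at(d, sigma, n):.2f}")
# n =  25: power ~ 0.26 -> a negative result says little about the discrepancy
# n = 400: power ~ 0.99 -> a negative result does warrant ruling out d
```

With n = 25, the test would miss a discrepancy of size d about three times in four, so its failure to reject cannot be taken as confirming that the discrepancy is below d.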
If your claim is that teaching cognitive biases, and other kinds of statistical fallacies, would advance the critical scrutiny of methods and inferences, then you agree with Neyman and with me. But if you consider that methods invariably contain hidden biases that cannot be disinterred and cannot be taken account of in critically appraising an inference, then what would be the purpose of learning cognitive biases?
I see no disagreement in principle among us, although of course the devil is in the details. We (you, me, Neyman) want to detect and delineate biases, and learning about cognitive biases should aid that goal. As I think you might agree, I just don’t think the process can be captured by a general algorithm, in no small part because bias is recurrent: our critical analyses will be biased just as is the target of criticism.
I also don’t think everyone shares these goals all the time, especially when there are material stakes riding on overlooking biases – although as you noted at the outset, the biases to highlight or overemphasize and the biases to hide or deny will vary across groups.
Sander: We should not let "bias" lose its meaning. An inference or interpretation is biased when factors distort the warranted interpretation/inference, not merely when interests or background enter (as they must). If "our critical analyses will be biased just as is the target of criticism", then there'd be no purpose in the critical analysis—it would be self-sealing, not self-correcting.
The meaning of bias is clear enough here: use of methods or reasoning that favor one inference over another, whether the factors leading to that usage are financial, ideological, or accidents of tradition. And yes, in some medical as well as legal reports you will see that the purpose of what is offered as critical analysis is self-sealing, not self-correcting, albeit often buried in contextual jargon.
Just because bias will infect all stages of (even) an honest and open process of criticism does not mean we can’t learn to cope with it. But effective coping will have to start with accepting its existence. Instead, the notion of ubiquitous bias seems to encounter resistance similar to that encountered in the 19th century to the notion of ubiquitous microbial pathogens. Resistance to the idea is unsurprising because it would imply that each of us is biased in some relevant way (just as ubiquity of microbial pathogens was resisted because it implied physicians had been killing patients via nonsterile procedures).
Sander: I maintain it is logically inconsistent to claim both that all methods are (equally?) biased, and also that there are methods for critically assessing and correcting the bias.
I said nothing about equally biased, which is unlikely. Any bias difference can be exploited to estimate and account for bias, even if that accounting is also subject to error and bias. Also, I maintained that bias is relative to a chosen origin or reference point, which will alter the process depending on the analysts and their loss functions. There is nothing logically inconsistent about any of this; in fact, for familiar methodologic biases there are statistical theories for dealing with the problem (e.g., see Gustafson & Greenland, Stat Sci 2009). Their extension to account for cognitive and other user biases is a research frontier, one I was making a case for concentrated effort on, considering its importance to the legal and policy issues raised earlier in this thread.
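To give a flavor of what such an accounting looks like, here is a minimal sketch of a simple quantitative bias analysis, correcting an observed odds ratio for nondifferential exposure misclassification; all counts and the sensitivity/specificity values are hypothetical:

```python
# Simple bias analysis: back-correct a 2x2 table for nondifferential
# exposure misclassification, given assumed sensitivity and specificity.
# Counts and se/sp are made up for illustration; real analyses would also
# propagate uncertainty in these bias parameters.

def true_exposed(observed_exposed, total, se, sp):
    """Back-calculate the true exposed count from the misclassified one."""
    return (observed_exposed - (1 - sp) * total) / (se - (1 - sp))

def odds_ratio(exp_cases, exp_controls, n_cases=100, n_controls=100):
    return (exp_cases / (n_cases - exp_cases)) / (exp_controls / (n_controls - exp_controls))

se, sp = 0.80, 0.95                      # assumed sensitivity, specificity
a = true_exposed(40, 100, se, sp)        # cases: 40 of 100 observed exposed
b = true_exposed(25, 100, se, sp)        # controls: 25 of 100 observed exposed

print(f"Observed OR : {odds_ratio(40, 25):.2f}")   # 2.00
print(f"Corrected OR: {odds_ratio(a, b):.2f}")     # 2.41, adjusted under se/sp
```

Here the nondifferential misclassification biased the observed odds ratio toward the null; differences in such bias parameters across studies are the kind of thing that can be exploited, and also argued over.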
Sander: Although I added “equally” as a parenthetical remark, in case one would want to try and appeal to “degrees of bias”, I don’t really know how that works for inference in general. I can imagine, I suppose, one party using M1 that ensures with probability 1 that H is saved from challenge, say; whereas another method M2 only does so 90% of the time. But I wouldn’t regard those as bias correcting methods, would you? I’d be interested in the idea of exploiting bias differences to correct biases (if I knew just what is meant). On the other hand, if Greenland’s “interest” is in getting it right, then I don’t see that as a “bias” in the pejorative sense we are using it.
I'll check out your paper; I don't think it's one you've sent.
My ultimate concern is with distortion of results, regardless of its origin. Thus it is unsurprising if I am using “bias” more generally than you are, subsuming not only personal biases (which again everyone has to some degree) but also statistical biases – which are usually unintentional and occasionally can even be useful (as in variance/MSE reduction).
The topic of bias analysis and hence sound approaches to it is complex even in the most innocent cases, and does not get simpler when we try and deal with cases where a presumption of innocence is unwarranted on scientific grounds (however nobly it fits into ideals of criminal justice). This leads to a close connection to forensic statistics.
Again, I think cognitive sciences (including behavioral economics) will have to play a major role in sorting out and dealing with issues of personal and social (group) biases. I also think that, by and large, the field of statistics has yet to address these issues seriously beyond basic preventives like randomization and masking. Thus its inferential theories seem woefully incomplete for applications in which the issues arise – these “schools” are so far only theories of ideal agents acting in ideal circumstances (I suppose this is not unlike complaints lodged against classical economics).
Sander: What happens when behavioral economics and "bias research" are based on (possibly) artificial lab settings and various presumptions about "rationality"–both within the study design and its analysis? Some of the famous work showing people aren't naturally Bayesians is typically analyzed using non-Bayesian methods. The literature on biases shows how many different ways there are to explain away failure to live up to someone's view of "valid" inference. I've seen some rather naive uses of regression in behavioral econ and experimental econ (though I'm obviously not dismissing all of it). Even the recent work appealing to science-wise screening rates (e.g., Ioannidis) makes many assumptions about the statistical biases that you and others have questioned. We know how politicized statistics is now. I wouldn't want someone who declares significance tests "invalid" because they do not give posterior probabilities to be the one determining what counts as legitimate or biased inference. Yet there are journal editors who say just that (and try to ban them). https://errorstatistics.com/2015/03/05/a-puzzle-about-the-latest-test-ban-or-dont-ask-dont-tell/
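To see how much those assumptions drive the conclusion, here is a minimal sketch of the science-wise screening arithmetic; the prior, power, and alpha values are illustrative assumptions:

```python
# Ioannidis-style screening model: P(effect is real | significant result).
# The conclusion hinges on the assumed prior fraction of true hypotheses.
def ppv(prior_true, power, alpha):
    true_pos = prior_true * power            # real effects reaching significance
    false_pos = (1 - prior_true) * alpha     # nulls reaching significance
    return true_pos / (true_pos + false_pos)

for prior in (0.5, 0.1, 0.01):               # assumed fraction of true hypotheses
    print(f"prior = {prior:4.2f}: PPV = {ppv(prior, power=0.8, alpha=0.05):.2f}")
# prior = 0.50: PPV = 0.94
# prior = 0.10: PPV = 0.64
# prior = 0.01: PPV = 0.14
```

The alarming "most findings are false" rates only emerge when one assumes the vast majority of tested hypotheses are false, which is precisely one of the contested statistical assumptions.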
Until at least some of the more egregious confusions are exposed, pretending some particular statistically based field can improve things in statistics is to paper over genuine abuses.
Now maybe it appears I’ve changed places with you and I am the skeptical one. I’m not skeptical of the ability to root out statistical bias (which of course is a small part of the focus of the blog), but I’m very skeptical that even the most sagacious statisticians are willing to call out the offenders (when significance test bashing is so fashionable). But this requires a blogpost of its own.
Mayo:
Sander put this clearly for those who have been working in these areas: "I also think that, by and large, the field of statistics has yet to address these issues seriously beyond basic preventives like randomization and masking. Thus its inferential theories seem woefully incomplete for applications in which the issues arise – these 'schools' are so far only theories of ideal agents acting in ideal circumstances"
The "theories of ideal agents acting in ideal circumstances" get taken too simplistically as knowing how to avoid bias in actual research. To avoid bias in research we would have to get outside of ourselves and see reality directly, which you surely agree we can't do; we can only continually strive to lessen recalcitrant experiences of being wrong.
As for "pretending some particular statistically based field can improve things in statistics is to paper over genuine abuses": we all have to choose the community we wish to engage with.
Keith O’Rourke
Phanerono: I don’t agree that “to avoid bias in research we would have to get outside of ourselves and see reality directly which you surely agree we can’t do”. This is redolent of a certain philosophical idea that the “world-in-itself” or “the view from nowhere” would somehow be the most objective, but in fact it would be utterly irrelevant, even if we could imagine it.
I really don't see the Neyman-Pearson-Fisher-Cox theories, or whatever you want to call them, as "theories of ideal agents acting in ideal circumstances", and they are very clear about that. That is why they hesitate to try to formalize the informal qualitative issues. But it doesn't mean they say nothing about them. Bayesian theories have historically been about ideal agents, but perhaps not as much now. This issue, I think, is reflected in contrasting views about what needs to be formalized, and how to formalize it. I think that things can be systematized and scrutinized without formalizing their entry into inference. The majority of science is not statistical, and good science manages to be self-critical without feeling the need to consult the latest theory in cognitive science. (I'm not saying it can't have a role; people have been trying to scientize science in different ways for a long time.)
But the real issues are likely to be more in line with the general ones in this post: Is the problem really to be solved by understanding why people politicize or otherwise bias science or inference? That's of interest in its own right, but scarcely an avenue for making researchers more responsible. An analogy might be the difference between how to understand criminal behavior and how to block it. Even if you could "incentivize" people to behave in a certain way, there's a question as to whether this ought to be done.