Monthly Archives: June 2021

June 24: “Have Covid-19 lockdowns led to an increase in domestic violence? Drawing inferences from police administrative data” (Katrin Hohl)

The tenth meeting of our Phil Stat Forum*:

The Statistics Wars
and Their Casualties

24 June 2021

TIME: 15:00-16:45 (London); 10:00-11:45 (New York, EDT)

For information about the Phil Stat Wars forum and how to join, click on this link.


“Have Covid-19 lockdowns led to an increase in domestic violence? Drawing inferences from police administrative data” 

Katrin Hohl

Abstract: This applied paper reflects on the challenges in measuring the impact of Covid-19 lockdowns on the volume and profile of domestic violence. The presentation has two parts. First, I present preliminary findings from analyses of large-scale police data from seven English police forces that disentangle longer-term trends from the effect of the imposition and lifting of lockdown restrictions. Second, I reflect on the methodological challenges involved in accessing, analysing and drawing inferences from police administrative data.
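The kind of disentangling described in the abstract is often framed as an interrupted time-series (segmented) regression, in which a long-term trend term is separated from level and slope changes at the intervention point. The sketch below is only a generic illustration of that idea on simulated weekly counts; it is not Dr Hohl's analysis, and every variable name and number in it is invented for the example.

    # Generic interrupted time-series (segmented regression) sketch.
    # NOT the analysis presented in the talk: the data are simulated and
    # the model is a textbook formulation of the general idea.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    weeks = np.arange(104)                        # two years of weekly data
    lockdown = (weeks >= 60).astype(int)          # hypothetical lockdown start
    trend = 200 + 0.5 * weeks                     # underlying long-term trend
    reports = rng.poisson(trend + 15 * lockdown)  # simulated weekly report counts

    df = pd.DataFrame({"week": weeks, "lockdown": lockdown, "reports": reports})

    # 'week' captures the pre-existing trend, 'lockdown' a level shift at the
    # intervention, and 'week:lockdown' a change in slope after it.
    fit = smf.ols("reports ~ week + lockdown + week:lockdown", data=df).fit()
    print(fit.params)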

Katrin Hohl (Department of Sociology, City University London). Dr Katrin Hohl joined City University London in 2012 after completing her PhD at the LSE. Her research has two strands. The first revolves around various aspects of criminal justice responses to violence against women, in particular: the processes through which complaints of rape fail to result in a full police investigation, charge, prosecution and conviction; the challenges rape victims with mental health conditions pose to criminal justice; and the use of victim memory as evidence in rape complaints. The second strand focusses on public trust in the police, police legitimacy, compliance with the law, and cooperation with the police and courts. Katrin has collaborated with the London Metropolitan Police on several research projects on the topics of public confidence in policing, police communication and neighbourhood policing. She is a member of the Centre for Law, Justice and Journalism and the Centre for Crime and Justice Research.


Readings:

Journal article:
Piquero et al. (2021). Domestic violence during the Covid-19 pandemic – Evidence from a systematic review and meta-analysis. Journal of Criminal Justice, 74 (May-June). (PDF)
Blog post:
Hohl, K. and Johnson, K. (2020). A crisis exposed – how Covid-19 is impacting domestic abuse reported to the police.

Slides & Video Links:


*Meeting 18 of the general Phil Stat series, which began with the LSE Seminar PH500 on May 21

Categories: Error Statistics

At long last! The ASA President’s Task Force Statement on Statistical Significance and Replicability

The ASA President’s Task Force Statement on Statistical Significance and Replicability has finally been published. It found a home in The Annals of Applied Statistics, after everyone else they looked to, including the ASA itself, refused to publish it. For background, see this post. I’ll comment on it in a later post. There is also an editorial, “Statistical Significance, P-Values, and Replicability,” by Karen Kafadar.

THE ASA PRESIDENT’S TASK FORCE STATEMENT ON STATISTICAL SIGNIFICANCE AND REPLICABILITY

BY YOAV BENJAMINI, RICHARD D. DE VEAUX, BRADLEY EFRON, SCOTT EVANS, MARK GLICKMAN, BARRY I. GRAUBARD, XUMING HE, XIAO-LI MENG, NANCY REID, STEPHEN M. STIGLER, STEPHEN B. VARDEMAN, CHRISTOPHER K. WIKLE, TOMMY WRIGHT, LINDA J. YOUNG AND KAREN KAFADAR (for affiliations see the article)

Over the past decade, the sciences have experienced elevated concerns about replicability of study results. An important aspect of replicability is the use of statistical methods for framing conclusions. In 2019 the President of the American Statistical Association (ASA) established a task force to address concerns that a 2019 editorial in The American Statistician (an ASA journal) might be mistakenly interpreted as official ASA policy. (The 2019 editorial recommended eliminating the use of “p < 0.05” and “statistically significant” in statistical analysis.) This document is the statement of the task force, and the ASA invited us to publicize it. Its purpose is two-fold: to clarify that the use of P-values and significance testing, properly applied and interpreted, are important tools that should not be abandoned, and to briefly set out some principles of sound statistical inference that may be useful to the scientific community.

P-values are valid statistical measures that provide convenient conventions for communicating the uncertainty inherent in quantitative results. Indeed, P-values and significance tests are among the most studied and best understood statistical procedures in the statistics literature. They are important tools that have advanced science through their proper application.

Much of the controversy surrounding statistical significance can be dispelled through a better appreciation of uncertainty, variability, multiplicity, and replicability. The following general principles underlie the appropriate use of P-values and the reporting of statistical significance and apply more broadly to good statistical practice.

Capturing the uncertainty associated with statistical summaries is critical. Different measures of uncertainty can complement one another; no single measure serves all purposes. The sources of variation that the summaries address should be described in scientific articles and reports. Where possible, those sources of variation that have not been addressed should also be identified.

Dealing with replicability and uncertainty lies at the heart of statistical science. Study results are replicable if they can be verified in further studies with new data. Setting aside the possibility of fraud, important sources of replicability problems include poor study design and conduct, insufficient data, lack of attention to model choice without a full appreciation of the implications of that choice, inadequate description of the analytical and computational procedures, and selection of results to report. Selective reporting, even the highlighting of a few persuasive results among those reported, may lead to a distorted view of the evidence. In some settings this problem may be mitigated by adjusting for multiplicity. Controlling and accounting for uncertainty begins with the design of the study and measurement process and continues through each phase of the analysis to the reporting of results. Even in well-designed, carefully executed studies, inherent uncertainty remains, and the statistical analysis should account properly for this uncertainty.
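As a minimal illustration of the multiplicity point (not part of the Task Force statement), the sketch below applies the Benjamini-Hochberg false discovery rate adjustment to a set of hypothetical p-values; the numbers are invented for the example.

    # Hypothetical example of adjusting for multiplicity with the
    # Benjamini-Hochberg procedure (statsmodels).
    from statsmodels.stats.multitest import multipletests

    p_values = [0.001, 0.008, 0.020, 0.041, 0.049, 0.120, 0.470]  # hypothetical
    reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

    for p, p_adj, r in zip(p_values, p_adjusted, reject):
        print(f"raw p = {p:.3f}   adjusted p = {p_adj:.3f}   reject at FDR 0.05: {r}")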

The theoretical basis of statistical science offers several general strategies for dealing with uncertainty. P-values, confidence intervals and prediction intervals are typically associated with the frequentist approach. Bayes factors, posterior probability distributions and credible intervals are commonly used in the Bayesian approach. These are some among many statistical methods useful for reflecting uncertainty.
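As an illustration of that paragraph (again, not part of the statement), the sketch below computes a frequentist confidence interval and a Bayesian credible interval for the same hypothetical binomial data; the counts and the flat Beta(1, 1) prior are assumptions made only for the example.

    # Frequentist and Bayesian interval estimates for the same hypothetical data.
    from scipy import stats
    from statsmodels.stats.proportion import proportion_confint

    successes, trials = 37, 100                   # hypothetical binomial data

    # 95% frequentist confidence interval (Wilson score method)
    ci_low, ci_high = proportion_confint(successes, trials, alpha=0.05, method="wilson")

    # 95% Bayesian credible interval under a flat Beta(1, 1) prior
    posterior = stats.beta(1 + successes, 1 + trials - successes)
    cred_low, cred_high = posterior.ppf([0.025, 0.975])

    print(f"95% confidence interval: ({ci_low:.3f}, {ci_high:.3f})")
    print(f"95% credible interval:   ({cred_low:.3f}, {cred_high:.3f})")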

Thresholds are helpful when actions are required. Comparing P-values to a significance level can be useful, though P-values themselves provide valuable information. P-values and statistical significance should be understood as assessments of observations or effects relative to sampling variation, and not necessarily as measures of practical significance. If thresholds are deemed necessary as a part of decision-making, they should be explicitly defined based on study goals, considering the consequences of incorrect decisions. Conventions vary by discipline and purpose of analyses.
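As a small illustration of a pre-specified threshold used as part of a decision rule (not part of the statement), the sketch below runs a two-sample t-test on invented measurements, reports the exact p-value, and compares it to a threshold fixed before the analysis.

    # Hypothetical two-sample comparison with an explicit, pre-specified threshold.
    from scipy import stats

    group_a = [12.1, 11.8, 12.4, 12.0, 12.3, 11.9]   # invented measurements
    group_b = [12.6, 12.9, 12.5, 12.8, 12.7, 13.0]

    alpha = 0.05                                      # threshold chosen before analysis
    t_stat, p_value = stats.ttest_ind(group_a, group_b)

    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < alpha:
        print("statistically significant at the pre-specified alpha = 0.05")
    else:
        print("not statistically significant at the pre-specified alpha = 0.05")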

In summary, P-values and significance tests, when properly applied and interpreted, increase the rigor of the conclusions drawn from data. Analyzing data and summarizing results are often more complex than is sometimes popularly conveyed. Although all scientific methods have limitations, the proper application of statistical methods is essential for interpreting the results of data analyses and enhancing the replicability of scientific results.

“The most reckless and treacherous of all theorists is he who professes to let facts and figures speak for themselves, who keeps in the background the part he has played, perhaps unconsciously, in selecting and grouping them.” (Alfred Marshall, 1885)

Categories: ASA Task Force on Significance and Replicability


The F.D.A.’s controversial ruling on an Alzheimer’s drug (letter from a reader)(ii)

I was watching Biogen’s stock (BIIB) climb over 100 points yesterday because its Alzheimer’s drug, aducanumab [brand name: Aduhelm], received surprising FDA approval.  I hadn’t been following the drug at all (it’s enough to try and track some Covid treatments/vaccines). I knew only that the FDA panel had unanimously recommended not to approve it last year, and the general sentiment was that it was heading for FDA rejection yesterday. After I received an email from Geoff Stuart[i] asking what I thought, I found out a bit more. He wrote: Continue reading

Categories: PhilStat/Med, preregistration
