Summary
Three amici briefs filed by the Crime Prevention Research Center (CPRC), the State of Arizona, and law enforcement groups in support of the petitioners in New York State Rifle & Pistol Association v. Bruen rely extensively on empirical data and research in support of their claim that loosening concealed carry laws has not increased crime. Their position rests on three key arguments:
- that permit revocation data shows that concealed carry permit holders are extremely law abiding,
- that the empirical academic literature and a majority of academics who study gun violence support the conclusion that loosening concealed carry laws actually reduces crime (or at worst has no effect on crime), and
- that defensive gun use by law-abiding citizens is extremely common and outweighs the number of crimes committed with firearms.

As this document will demonstrate, each of these claims is false.
Arguments relying on the rate of permit revocations ignore multiple investigations revealing that the data is fatally flawed. A 2011 investigation of two large Michigan counties uncovered that 77% and 79% of permit holder convictions went unreported, meaning that many permit holders who should have had their permits revoked did not. Another 2011 investigation of North Carolina permit revocation data discovered that, in more than half of felony convictions, authorities failed to revoke or suspend permits. These investigations demonstrate that at the very least, revocation data cannot be relied on to determine how law-abiding concealed carry permit holders are.
The CPRC brief argues that the academic literature demonstrates that loosening concealed carry laws reduces violent crime, and that a majority of academics support that viewpoint. Both assertions are incorrect, and the CPRC’s literature review suffers from fatal flaws. A more accurate analysis reveals the following:
From 1997-2021, 65 academic studies were conducted on concealed carry laws and their impact on violent crime. Eighteen studies found a decrease, 21 found no effect (or mixed), and 26 found an increase. In summary, 40% of studies find that loosening concealed carry laws has a detrimental effect on crime, which is a plurality; only 28% found a beneficial impact.
Since the 2005 NRC report, thirty-five studies were conducted on loosening concealed carry laws. Five found a decrease, seven found no effect (or mixed), and 23 found an increase in crime. In summary, 66% of the modern academic literature finds that loosening concealed carry laws has a detrimental effect on crime, while only 14% finds a beneficial impact.
Reviewing the concealed carry studies, 18 academics wrote studies that found loosening concealed carry laws decrease crime, 52 academics wrote studies that found no effect on crime, and 46 wrote studies that found an increase in crime. When looking at the literature since 2005, the results shift decisively in favor of academics arguing that loosening concealed carry laws increases crime. Only seven academics published work finding a decrease in crime, 13 published work showing no effect on crime, and 46 published work showing an increase in crime (meaning every author who has published work showing an increase in crime has authored at least one study since 2005).
Further, a July 2014 Harvard Injury Control Research Center survey about concealed carry laws was completed by 140 researchers (larger than the CPRC’s cited survey of 120 academics). A majority of researchers agreed that more permissive concealed carry laws have not reduced crime rates.
A series of surveys conducted by Gary Kleck and Marc Gertz in the early 1990s is the origin of the erroneous claim that hundreds of thousands, or perhaps millions, of defensive gun uses (DGUs) occur yearly in the United States. Subsequent validity tests reveal that the survey numbers are mathematically impossible, as surveys of rare events such as DGUs exaggerate their frequency due to false positives. The Kleck and Gertz surveys suffer from severe false-positive issues, telescoping, and social desirability bias, in addition to other methodological problems that render the data useless. The reliable empirical evidence from the Gun Violence Archive reveals approximately 2,000 verified DGUs annually, a far cry from millions.
Overall, the amici briefs rely on decades-old research and findings that have been refuted by the overwhelming majority of more recent scholarship and empirical data.
Revocation Rates
Comparing crime rates among permit holders to those of the general public, and even police officers, is highly misleading with respect to whether concealed carry laws increase crime. All of the evidence indicates that permit holders commit fewer crimes per capita than the general population, but this is a byproduct of the fact that permit holders are required to pass a background check in the first place. A population of individuals who can pass a criminal background check will have a lower conviction rate than a population that includes individuals who have a criminal history and therefore could not have passed that background check. The real issue is whether permit holders are more or less law-abiding than the rest of the population who could pass a background check but do not obtain a permit. And the comparison with police officers is a tangent: law enforcement officers frequently encounter situations with the potential for criminal conduct, such as the misuse of force, whereas permit holders as a group will almost never face similar situations.
Multiple in-depth investigations have uncovered that permit revocation data is notoriously inaccurate. A 2011 report by MLive analyzed revocation data in Michigan and found that in two large counties, 77% and 79% of the convictions of permit holders were unreported, meaning the permits should have been revoked for many people but were not. Many counties file incomplete reports, or none at all, making statewide revocation data useless in determining crime rates among permit holders. Frequently, the boards overseeing permits are not notified when a permit holder is convicted.
A 2011 investigation by The New York Times examined concealed carry permit data in North Carolina. They found:
“More than 2,400 permit holders were convicted of felonies or misdemeanors, excluding traffic-related crimes, over the five-year period, The Times found when it compared databases of recent criminal court cases and licensees. While the figure represents a small percentage of those with permits, more than 200 were convicted of felonies, including at least 10 who committed murder or manslaughter. All but two of the killers used a gun….
More than 200 permit holders were also convicted of gun- or weapon-related felonies or misdemeanors, including roughly 60 who committed weapon-related assaults.
In addition, nearly 900 permit holders were convicted of drunken driving, a potentially volatile circumstance given the link between drinking and violence.
The review also raises concerns about how well government officials police the permit process. In about half of the felony convictions, the authorities failed to revoke or suspend the holder’s permit, including for cases of murder, rape and kidnapping. The apparent oversights are especially worrisome in North Carolina, one of about 20 states where anyone with a valid concealed handgun permit can buy firearms without the federally mandated criminal background check. (Under federal law, felons lose the right to own guns.)
…The Times also found scores of people who kept their permits after convictions for violent misdemeanors. They included more than half of the roughly 40 permit holders convicted in the last five years of assault by pointing gun and nearly two-thirds of the more than 70 convicted of a common domestic violence charge, assault on a female.”
It is worth mentioning that the Crime Prevention Research Center’s brief cites the revocation data from both states as evidence to bolster their claims about permit holders being extremely law abiding.
Any analysis that relies on permit revocation data without significant caveats is fundamentally flawed, and this is before considering other issues with the concealed carry permitting system: background checks on permit applicants that are not properly conducted, states such as Florida pushing to allow incomplete background checks (or none at all in “Constitutional Carry” states), the many states relying on an honor system for permit revocation (physical confiscation of licenses is rare), and the majority of states in which permit holder data is not available to the public (preventing researchers from checking the veracity of the claim that permit holders are indeed law-abiding). Further, in their 2017 book Punishment and Crime, Gary Kleck and Brion Sever indicate that only approximately one in five violent crimes results in the offender’s arrest and conviction. If this ratio is applied to permit revocations, which are already a substantial undercount of crime, revocations could be capturing less than one-tenth of permit holder crime. Until additional higher-quality data becomes available, revocation data cannot reliably be used to determine criminality among permit holders.
The question of whether concealed carry permit holders are more law-abiding than their peer citizens who could also pass a background check is murkier. No accurate measure of criminality among permit holders exists and no data are available on the entire subset of the population who could pass a background check. More data is needed before any concrete conclusions are drawn.
That being said, some evidence suggests that permit holders may engage in more firearm-related crime than other law-abiding citizens. A 2013 study analyzed conviction rates of permit holders in Texas for a variety of crimes relative to the general population from 2001 to 2009. Rather than a direct comparison, however, the authors examined the ratio of crimes within each group. The underlying assumption of this research is that permit holders and other law-abiding citizens should have the same distribution of crimes if obtaining a permit has no effect on criminality (and that firearm-related crimes should not be disproportionately large for permit holders). Yet 22.3% of convictions of permit holders involved “weapons offenses” and 13.3% were for “deadly conduct.” For the general population, those percentages were 9% and 2.8% respectively. It is important to note that the general population had a higher rate of both crimes than permit holders, but that when permit holders were convicted, the odds were greater that the conviction was related to a weapons charge or deadly conduct.
Further, permit holders had a disproportionately high rate of gun-related crimes relative to other, non-firearm-related crimes. While this difference could purely be a result of demographics in the state of Texas, the evidence tentatively indicates that concealed carry leads otherwise law-abiding citizens to engage in more firearm-related crime.
It is important to again stress that the conviction data is likely an undercount (due to the reporting errors investigations have found in other states), and that good data does not yet exist for a comparison group of individuals who could pass a background check but do not carry firearms.
Academic Literature
The brief filed by the Crime Prevention Research Center states: “A survey of the empirical academic regression literature finds that 25 studies have found that Right-to-Carry laws reduce violent crime, 15 studies find no significant effect, and 12 find that Right-to-Carry laws increase violent crime.” In other words, the brief argues that academic evidence indicates that loosening concealed carry laws reduces crime.
However, a closer analysis of the literature review reveals concerns about many of the included studies:
- Some studies are not analyzing the impact of concealed carry laws
- Several studies are misclassified
- The findings of at least two papers have been rejected by their own authors
- The lone dissent of the National Research Council Report is given the same weight as the main conclusion supported by the 15 other academics
- One of the studies misreads its own results
- A large number of germane studies were ignored
Correcting all of these substantial errors completely reverses the conclusion of the review.
The following are five of the 25 studies purported to show that loosening concealed carry laws decreases crime, but which did not measure the effect of concealed carry laws on crime as a key variable. Instead, they measured safe storage laws, private security contractors, juvenile handgun possession, abortion laws, and unmarried fertility rates:
- John Lott and John Whitley, “Safe-Storage Gun Laws: Accidental Deaths, Suicides, and Crime,” Journal of Law and Economics, October 2001
- Bruce Benson and Brent Mast, “Privately Produced General Deterrence,” Journal of Law and Economics, October 2001
- Thomas Marvell, “The Impact of Banning Juvenile Gun Possession,” Journal of Law and Economics, October 2001
- John Lott and John Whitley, “Abortion and Crime: Unwanted children and out-of-wedlock births,” Economic Inquiry, October 2007
- Todd Kendall and Robert Tamura, “Unmarried Fertility, Crime, and Social Stigma,” The Journal of Law and Economics, 2010
One of the 15 studies which found no significant effect measured the impact of abortion on crime, not concealed carry:
- John Donohue and Steven Levitt, “The Impact of Legalized Abortion on Crime,” Quarterly Journal of Economics, 1999
The CPRC brief also cites John Lott’s book, More Guns, Less Crime, as a study, despite it being neither a study nor appearing in a peer-reviewed or law review journal.
An additional two studies are misclassified. Both Sabbath et al. (classified as decreasing crime) and Durlauf et al. (classified as having no effect) show an increase in crime.
- Erika Sabbath, Summer Hawkins, and Christopher Baum, “State-level changes in firearm laws and workplace homicide rates: United States, 2011 to 2017,” American Journal of Public Health, 2020
- Steven Durlauf, Salvador Navarro, and David Rivers, “Model Uncertainty and the Effect of Shall-Issue Right-to-Carry Laws on Crime,” European Economic Review, 2016
The authors of two studies that found loosening concealed carry laws reduce crime no longer support their own findings because they relied on flawed Uniform Crime Report (UCR) data. In a paper about the unreliability of county-level crime data, Michael Maltz stated Lott and Mustard used flawed UCR crime statistics to reach their counter-intuitive conclusion that more guns means less crime. Maltz stated that “counties in those states with the most coverage gaps have laws permitting the carrying of concealed weapons” and “in their current condition, county-level UCR crime statistics cannot be used for evaluating the effects of changes in policy.”
- David Olson and Michael Maltz, “Right-to-Carry Concealed Weapon Laws and Homicide in Large U.S. Counties: The Effect on Weapon Types, Victim Characteristics, and Victim-Offender Relationships,” Journal of Law and Economics, October 2001
Florenz Plassmann and John Whitley’s 2003 study “Confirming More Guns, Less Crime” relied on the same flawed county data set created by Lott for his 2003 book The Bias Against Guns. Ayres and Donohue uncovered multiple errors in that data set and, after correcting them, found no evidence that RTC laws reduced crime. In a September 2003 email exchange with Tim Lambert, a computer scientist at the University of New South Wales, Plassmann admitted that correcting the coding errors changed the paper’s conclusions.
- Florenz Plassmann and John Whitley, “Confirming More Guns, Less Crime,” Stanford Law Review, 2003
The lone dissent cited by CPRC is from the late James Q. Wilson, a member of the National Research Council panel that examined gun laws. Out of a panel of 16 scholars, Wilson was the only one who believed the research showed murder rates declined after states adopted RTC laws, a view explicitly rejected by the remaining panelists. CPRC strangely counts Wilson’s lone dissent as one study, with weight and status equal to the majority opinion supported by the 15 scholars who disagreed with the claim that more guns means less crime.
- James Q. Wilson, Firearms and Violence: A Critical Review, National Academies Press, 2005
One study by Carlisle Moody, a board member of CPRC, and several colleagues stated: “Once corrected for omitted variables, the most robust result, confirmed using both county and state data, is that RTC laws significantly reduce murder. There is no robust, consistent evidence that RTC laws have any significant effect on other violent crimes, including assault.”
However, Moody and his co-authors misread their own analysis.
As Table 3 on page 7 of Moody’s paper (pictured below) demonstrates, the increase in aggravated assault for county-level data is statistically significant, yet it is not bolded by the authors like the other statistically significant findings. In statistics, a result is conventionally considered significant if there is less than a 5% chance that it is due to random chance, which for large samples corresponds to a “t-statistic” greater than 1.96 in absolute value. A significant result means the authors of a study can place a higher degree of confidence in their finding. As the table shows, the “stat” for the “post-law trend” for “Assault” (highlighted with a red box) has t-statistics of 2.8 and 2.25 for the general and specific models respectively, comfortably above the 1.96 threshold (see the sketch after the citation below). Further, the coefficient itself is positive, indicating an increase in assault. Therefore, this study should not be included in the “decreases crime” category, and should instead be moved to the “no effect or mixed” category.
- Carlisle Moody, “The Impact of Right-to-Carry Laws on Crime: An Exercise in Replication,” Review of Economics and Finance, 2014
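To make the significance threshold concrete: a t-statistic can be converted into an approximate two-sided p-value using the large-sample normal approximation. The sketch below (the approximation and code are ours; the t-values are those reported in Moody’s Table 3) confirms that both assault results clear the conventional 5% bar:

```python
from math import erf, sqrt

def two_sided_p(t):
    """Approximate two-sided p-value for a t-statistic using the
    large-sample normal approximation."""
    return 2 * (1 - 0.5 * (1 + erf(abs(t) / sqrt(2))))

# Post-law trend t-statistics for assault reported in Moody's Table 3
# (general and specific models).
for t in (2.80, 2.25):
    print(f"t = {t}: p = {two_sided_p(t):.3f}")  # 0.005 and 0.024, both < 0.05

# The conventional 5% significance threshold corresponds to |t| > 1.96.
```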
Finally, the CPRC brief fails to include 23 empirical academic regression analyses. Of these, four find that loosening concealed carry standards decreases crime, seven find no effect on crime, and 12 find an increase in crime. All of the studies in this analysis are listed in Exhibit A.
We wish to provide the full scope of work relied upon by Arizona and the Law Enforcement briefs. To do so requires that our analysis below include unpublished regression analyses by academics. As discussed below, excluding these papers does not alter the overall conclusions.
From 1997-2021, 65 academic studies on concealed carry laws and their impact on violent crime have been published: 18 find a decrease, 21 find no effect (or mixed), and 26 find an increase. Forty percent of studies find that loosening concealed carry laws has a detrimental effect on crime, which is a plurality. Only 28% find a beneficial impact.
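These percentages follow directly from the study counts; as a quick check, a minimal sketch of the arithmetic:

```python
# Study counts, 1997-2021 (from Exhibit A).
decrease, no_effect, increase = 18, 21, 26
total = decrease + no_effect + increase          # 65 studies

print(f"decrease:  {decrease / total:.0%}")      # 28% (beneficial impact)
print(f"no effect: {no_effect / total:.0%}")     # 32% (no effect or mixed)
print(f"increase:  {increase / total:.0%}")      # 40% (plurality)
```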
The publication of the National Research Council (NRC) report in 2005 marked a key turning point in the literature on concealed carry. After reviewing the existing literature (through 2003) and conducting their own statistical analysis, the panel concluded:
“The literature on right-to-carry laws summarized in this chapter has obtained conflicting estimates of their effects on crime. Estimation results have proven to be very sensitive to the precise specification used and time period examined. The initial model specification, when extended to new data, does not show evidence that passage of right-to-carry laws reduces crime. The estimated effects are highly sensitive to seemingly minor changes in the model specification and control variables. No link between right-to-carry laws and changes in crime is apparent in the raw data, even in the initial sample; it is only once numerous covariates are included that the negative results in the early data emerge. While the trend models show a reduction in the crime growth rate following the adoption of right-to-carry laws, these trend reductions occur long after law adoption, casting serious doubt on the proposition that the trend models estimated in the literature reflect effects of the law change. Finally, some of the point estimates are imprecise. Thus, the committee concludes that with the current evidence it is not possible to determine that there is a causal link between the passage of right-to-carry laws and crime rates.”
The NRC chapter on concealed carry laws stands out from the rest of the report in a number of respects. First, the findings on concealed carry drew one dissenter out of a panel of sixteen academics; the lone dissenter argued that RTC laws might reduce murder, and in their own summary the remaining fifteen empaneled academics strongly disputed his conclusion. Second, the panel used its own regression analysis to analyze the effect of RTC laws, in stark contrast to the remainder of the report, which operated as a broad literature review, tepidly weighing in on studies concerning gun violence and various policies. Third, perhaps in part because the fifteen academics employed regression analysis of their own, they were quite pessimistic about future research finding conclusive results if the same methodology was employed as in the existing literature.
Statistical best practices have also improved significantly since the publication of the NRC report in 2005, improvements that are absent from the statistical analysis in the Crime Prevention Research Center brief. For example, earlier studies often did not use clustered standard errors, were significantly biased by the crack epidemic, failed to properly consider the problems of multicollinearity when choosing control variables, did not select appropriate criminal justice variables, and did not account for major structural flaws in the underlying county-level data. Given these facts and the NRC’s skepticism about the existing literature’s ability to identify the causal impact of concealed carry laws, it is useful to look separately at the modern academic literature since the 2005 report.
Since the 2005 NRC report, 35 studies have been conducted on loosening concealed carry laws: five find a decrease, seven find no effect (or mixed), and 23 find an increase in crime. In summary, 66% of the modern academic literature finds that loosening concealed carry laws has a detrimental effect on crime, while only 14% finds a beneficial impact.
Excluding studies which have not yet been published in an academic journal does not alter the overall conclusion of the academic literature (although it would substantially weaken the Arizona and Law Enforcement briefs). Without unpublished papers, there are 15 studies that find loosening concealed carry laws decreases crime, 20 that find no effect (or mixed) on crime, and 22 that find an increase in crime. Since 2005, five published studies find a decrease, six find no effect (or mixed), and 19 find an increase.
After correcting for classification errors and excluding studies not germane to the debate over concealed carry laws, the results are clear: since the 1990s a plurality of studies find that loosening concealed carry laws increases crime, and that conclusion shifts to a clear majority among modern studies published after the National Research Council report in 2005.
Surveys of Academics
The survey of 120 experts cited by the Crime Prevention Research Center is not the largest of its kind. A July 2014 Harvard Injury Control Research Center survey about concealed carry laws was completed by 140 researchers, with a majority believing more permissive concealed carry laws have not reduced crime rates.
Lott’s survey results are more pro-gun than the New York Times and Harvard surveys because Lott and Mauser use dubious search terms to define their sample. The economists surveyed by Lott were found by searching JSTOR for the term “gun control” which is not a neutral term and is primarily used by advocates of weaker gun laws. A more representative sample of researchers studying the relationship between gun ownership and crime could have been found by searching JSTOR for “guns” and “crime.” For example, the sample of public health researchers was created by searching PubMed for the terms “firearms,” “violence,” “clinical trial,” “study,” “comparative study,” and “journal article.” This might help explain why the views of the public health researchers were more consistent with other surveys of experts than Lott’s surveys of economists and criminologists.
The sample of economists includes scholars who published as early as January 1997, the same month Lott’s major article about concealed carry was published. Lott included the views of economists who wrote one article about gun violence nearly two decades prior to the study, but excluded other researchers with more recent publications that fell outside the “gun control” search term. The long window (from 1997 to 2013) allows Lott to include all of his own studies, but excludes economists who published studies finding that gun ownership is a risk factor for murder and suicide. In contrast, the Harvard survey focused on current researchers who were the first author on at least one peer-reviewed journal article from 2011-2014.
An impact of Lott’s time frame and search term is that it includes ten articles that appeared in a 2001 issue of the Journal of Law & Economics (JLE). This special JLE issue featured pro-gun articles presented at a conference sponsored by the conservative think tank American Enterprise Institute (AEI) and organized by Lott while he was an AEI resident scholar.
Lott’s sample of criminologists is problematic. Unlike the sample of economists and public health researchers, the authors do not disclose which search terms they used. Lott states that “A total of 80 articles were identified,” written by 80 different criminologists, after searching peer-reviewed criminology journals for articles about firearms and violence between January 2000 and December 2013. Lott implies that only 80 peer-reviewed articles about gun violence by 80 criminologists were found, but a simple database search covering those 14 years yields hundreds of hits. Only 39 criminologists responded to his survey, and top firearm researchers who were published in criminology journals during this period were not sampled by Lott and Mauser.
GVPedia maintains the largest publicly available database of gun violence studies with 1,118 studies as of December 2020. GVPedia’s study database includes 35 researchers who have published ten or more peer-reviewed studies about gun violence. Only four of the 35 (11%) researchers wrote articles consistently concluding that gun control laws do not reduce violence: John Lott, Gary Kleck, Thomas Marvell, and Carlisle Moody. In contrast, 30 (86%) researchers generally find that gun ownership does not make people safer. We classified the remaining researcher, Mark Gius, as neutral because his findings do not consistently fall on either side of the question. This unscientific review of the most published researchers on gun violence supports the findings of the Harvard and New York Times surveys, but not the Lott survey.
A final way to examine the views of scholars on concealed carry laws is to tally the authors who have published on this exact topic. Reviewing the concealed carry studies listed earlier, 18 academics wrote studies finding that loosening concealed carry laws decreases crime, 52 wrote studies finding no effect on crime, and 46 wrote studies finding an increase in crime (all of the researchers are listed in Exhibit B). Fifteen of the 52 academics who wrote studies finding no effect on crime were part of the National Research Council report in 2005. When looking at the literature since 2005, the results shift decisively in favor of academics arguing that loosening concealed carry laws increases crime. Only seven academics published work finding a decrease in crime, 13 published work showing no effect on crime, and 46 published work showing an increase in crime. Every author who has published work showing an increase in crime has authored at least one study since 2005.
Further, some academics have published multiple studies that have come to different conclusions over the years. Two researchers published separate papers showing a crime decrease and no effect, five published separate papers showing no effect and a crime increase, and one published three separate papers finding a decrease, no effect, and an increase. Using each researcher’s most recent publication to indicate their views on concealed carry, only four academics since 2005 found a decrease in crime, 11 found no effect, and 46 found an increase.
Just as with the review of the academic literature, once poor methodology and errors are corrected, the overwhelming view of academics who have published modern research on gun violence is that loosening concealed carry laws does not reduce crime, and in fact likely increases it.
Defensive Gun Use
A DGU occurs when a citizen fires, brandishes, or reveals a firearm in an attempt to stop an assailant from committing or completing a crime. This action can be in defense of oneself, others, or even property. The rigorous empirical work on the relationship between DGUs and crime, unfortunately, is fraught with definitional challenges; researchers have not agreed on which behaviors are sufficient to constitute a genuine defensive gun use. For example, a potential victim might merely announce that they have a gun (without physically brandishing it), and this announcement results in the perceived deterrence of a crime. Is that situation considered a DGU? If a legitimate gun holder misperceives a situation as dangerous – such as a few teenagers crowding around his car – and brandishes his weapon, is that considered a deterrence?
Regardless of how generally or specifically one defines a DGU, these borderline cases are unlikely to be sufficient to materially impact actual crime rates. As far as direct deterrence is concerned, the most conservative definitions of DGU are more meaningful, as they represent cases that, had they not occurred, would have actually contributed to crime rates.
The three major estimates of the yearly frequency of defensive gun use (DGU) are:
- Approximately 2.5 million (according to Gary Kleck and other private surveys)
- Fewer than 100,000 (according to the National Crime Victimization Survey)
- Approximately 2,000 (according to the Gun Violence Archive, based on police and media reports)
While the FBI reported 331 justifiable homicides in 2016, this number is necessarily a subset of overall DGUs (which include cases where the attacker was not killed), and is therefore of limited use in determining the overall prevalence of DGUs. Further, given that FBI data is incomplete for many states because reporting is voluntary, this number is likely an underestimate of the true justifiable homicide total. The Gun Violence Archive records 611 total deaths from DGUs in 2016 (including 460 instances in which the defender killed the attacker), which is a more accurate assessment. All of these numbers involve civilian DGUs, not law enforcement shootings.
Gary Kleck and 2.5 million
The most frequently cited figure by those advancing the direct deterrence hypothesis is 2.5 million DGUs annually. That number comes from Dr. Gary Kleck and Mark Gertz’s “Armed Resistance to Crime: The Prevalence and Nature of Self-Defense with a Gun” published in 1995. The survey itself was administered in 1993 to 4,977 individuals who were directly asked whether they had used a firearm in self-defense over the past year. The survey also asked whether such a use had occurred over the past five years, but the main focus of the study and subsequent articles has been on the yearly total. Of these roughly 5,000 individuals, 66 indicated they had indeed used a firearm in self-defense in the past year (1.33% of the sample), which extrapolated to 2.5 million for the entire country. Similar telephone surveys, referred to as one-shot surveys, produce DGU estimates ranging from 760,000 to 3.6 million.
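The extrapolation itself is simple; a minimal sketch, assuming a 1993 U.S. adult population of roughly 190 million (an approximation on our part, not a figure from the survey):

```python
# Kleck and Gertz's past-year extrapolation, approximately reproduced.
sample_size = 4977
positives = 66                       # respondents reporting a past-year DGU
rate = positives / sample_size       # ~1.33% of the sample
adult_population = 190_000_000       # approx. 1993 US adult population (our assumption)

print(f"reported DGU rate: {rate:.2%}")
print(f"implied annual DGUs: {rate * adult_population:,.0f}")  # ~2.5 million
```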
One-shot telephone surveys have a number of advantages. Assuming the participants accurately describe their experiences, the primary benefit is that they allow people to report legitimate incidents that for various reasons were not reported to the police or media. (Unlike the National Crime Victimization Survey, which asks a broad, open-ended question about what action the participants took in response to an attempted or completed crime, one-shot surveys directly ask respondents about defensive gun use.) In addition, anonymity (and the lack of any connection to the government) allows respondents to relate incidents they might otherwise be hesitant to report out of fear of repercussions.
The primary disadvantages of these surveys are a lack of verification, relatively small size, and a high risk of false positives. Because the surveys are anonymous, researchers cannot verify accounts against police or media reports or interview witnesses. Kleck makes much of the fact that participants who respond in the affirmative to whether they had a DGU then go on to answer 19 more questions (whereas those who answer “no” get no follow-ups). However, researchers did not make the follow-ups necessary to filter out false stories. Furthermore, these questions are not independent of each other, so a respondent can readily discern the intent of the researcher’s questions and perhaps manufacture or embellish stories to satisfy research demands (desirability bias). Compared with the NCVS, which surveys 135,000 households twice a year, the one-shot surveys have sample sizes ranging from fewer than 1,000 to 5,000 participants. Finally, in surveys of statistically rare events, false positives are likely to dominate false negatives. Even the most aggressive DGU survey estimate still represents less than 2% of the US adult population in a year.
A false positive occurs when a respondent claims that he or she had a defensive gun use, but in fact did not during the time period allotted by the survey. A false negative occurs when a respondent claims that he or she did not have a defensive gun use during the allotted time period, but did have one in reality.
False positives could arise as a result of social desirability bias, strategic answering, lying, or telescoping. Social desirability bias occurs when respondents want to portray themselves positively to the interviewer, or to justify their purchase of firearms; with approximately 30% of all households owning firearms, the potential exists that some respondents would exaggerate or fabricate incidents in a survey, moving the DGU tally upwards. Telescoping occurs when a respondent reports a DGU that actually occurred outside the time frame of the survey.
False negatives could arise as a result of social undesirability, i.e. not wanting to report criminality, and forgetting. Some respondents could feel that the interviewer would look negatively on a DGU incident, and therefore decide to not report it. Some gun owners could be fearful that the interviewer or the government would find out that they own firearms, and seek to hide the fact that they had used a firearm at all. Some respondents may fear that their DGU was in fact an illegal action (or that they were carrying the firearm illegally at the time) and therefore not want to disclose the incident. Further, the longer the time horizon, the more likely it will be that someone could forget a DGU, particularly if the respondent wasn’t directly assaulted.
Kleck contends respondents have stronger incentives to give false negatives than false positives, and that his and similar surveys are therefore likely to underestimate DGUs. He estimates that anywhere from 36% to 64% of respondents were either illegally using or carrying the firearm during their DGU, and that surveys of illegal activity consistently underestimate said activity. For example, surveys of illicit drug use among juvenile arrestees and individuals using walk-in clinics find that they markedly underreport their own drug usage, false negatives that are subsequently revealed by medical tests.
The connection, however, between admitting cocaine use and disclosing a defensive gun use strains credulity. For starters, cocaine use among arrestees is not a rare event, which is the most important aspect when considering whether a survey might have a false positive problem. While using cocaine is obviously illegal in any context, Kleck’s survey on DGUs states: “We made no effort to assess either the lawfulness or morality of the respondents’ defensive actions.” Kleck consistently downplays any suggestion that respondents might think their DGU was heroic, yet 46.1% of the respondents indicate that their action at least “might have” saved someone’s life, with 15.7% saying it “almost certainly would have” saved someone’s life. It is not hard to imagine that at least some survey respondents would think saving someone’s life would appear heroic and socially desirable, thereby leading to potential false positives.
Even without contesting Kleck’s core points about incentives, his dismissal of false positives as an overriding concern commits the base rate fallacy. In reply to a critique by Dr. David Hemenway of Harvard, Kleck wrote the following:
“In our view, a more realistic version of Hemenway’s hypothetical scenarios, one more in tune with research on errors in surveys of illegal behavior, might have 48 true positives, 48 false negatives (and thus 96 persons with a genuine DGU), 18 false positives, and 4,886 true negatives in a sample of 5,000 cases, implying 50% test sensitivity (the percent of true positives accurately detected) and 99.6% test specificity. Under this alternative set of hypothetical assumptions, the true DGU prevalence would be 1.92%, while the measured rate would be 1.32%, as was obtained in the NSDS, implying that the true DGU rate was actually 45% higher than the one estimated.”
Kleck’s postulation that the false negative rate might be 135 times larger than the false positive rate is completely unrealistic. Furthermore, this example unintentionally highlights just how unrealistic assumptions of false positive and negative rates must be for Kleck’s survey not to suffer from a false positive problem. Even assuming a true DGU incidence rate of 2% (or approximately 5 million DGUs at the time of the survey, which is higher than even the most aggressive estimate), the false negative rate would have to be at least 50 times larger than the false positive rate for the survey not to overestimate DGUs. Assuming a true DGU incidence the exact same as Kleck’s results (2.5 million annually), the false negative rate would need to be nearly 100 times larger than the false positive rate for the survey not to overestimate DGUs. For the NCVS estimate of roughly 50,000 DGUs, the false negative rate would have to be nearly 5,000 times larger than the false positive rate to not overestimate.
There is no defensible justification available to explain a differential this extreme between the false positive and false negative rates. Indeed, the difference between the Gun Violence Archive’s finding of 2,000 verified DGUs annually and Kleck’s 2.5 million DGUs can be explained by a false positive rate of just slightly more than 1%. With 2,000 true DGUs, an accurate survey of 5,000 individuals would find no true positives on average. The 66 DGU cases that Kleck’s survey found among its roughly 5,000 respondents would then all be false positives, a false positive rate of 1.32% (66/5,000). Meanwhile, police and media reports would have to miss more than 99.9% of all DGUs for Kleck’s survey to be accurate.
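A minimal sketch of the base-rate arithmetic behind this paragraph (the roughly 190 million adult population is our approximation; the remaining figures come from Kleck’s survey and the GVA):

```python
# If the true annual DGU count is ~2,000 (GVA's verified total), the true
# prevalence among roughly 190 million adults (our approximation) is tiny.
adult_population = 190_000_000
true_dgus = 2_000
true_prevalence = true_dgus / adult_population
sample = 5_000

# Expected true positives in a 5,000-person survey: effectively zero.
print(f"expected true positives: {true_prevalence * sample:.3f}")  # ~0.05

# Kleck's survey found 66 positives; under this scenario essentially all
# would be false positives, implying a false positive rate of:
print(f"implied false positive rate: {66 / sample:.2%}")  # 1.32%

# A survey overestimates prevalence whenever false positives outnumber
# false negatives: (1 - p) * FPR > p * FNR. With p this small, even a
# tiny false positive rate swamps any plausible false negative rate.
```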
External validation checks of the survey reveal the false positive problem is more than hypothetical:
According to Kleck’s estimates, guns were used defensively in 845,000 burglaries. We know, however, from reliable victimization surveys that there were fewer than 1.3 million burglaries in the United States at the time of the survey; that 33% of burglaries had occupants in the home who were awake at the time; and that of those households broken into, only 42% owned firearms. Even if burglars only targeted households that owned firearms, and those gun owners used guns in self-defense every time they were awake, Kleck’s number would still be impossible to reach. At most, only about 182,000 defensive uses in response to burglaries would even be possible.
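Using the rounded figures above, the upper bound works out as follows (a sketch; the slightly higher 182,000 figure cited in the text presumably reflects less rounded inputs):

```python
# Upper bound on defensive gun uses against burglary, per the validity check.
burglaries = 1_300_000     # annual US burglaries at the time (upper bound)
occupant_awake = 0.33      # share of burglaries with an awake occupant present
owns_firearm = 0.42        # share of burglarized households owning firearms

max_defensive_uses = burglaries * occupant_awake * owns_firearm
print(f"{max_defensive_uses:,.0f}")  # ~180,000 -- versus Kleck's 845,000
```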
Kleck’s analysis also implies hundreds of thousands of criminals are getting shot every year, yet medical records do not support this claim. While Kleck has claimed the majority of criminals simply never seek medical attention, data indicates more than 90% of criminals who have been wounded prior to incarceration seek medical attention.
Respondents indicated that police were informed of or found out about 64.2% of their DGUs. This would mean empirical evidence of 1,605,000 DGUs should exist. Yet police and media reports collected by the Gun Violence Archive find fewer than 2,000 verified DGUs. Even assuming that only the 23.9% of cases where shots were fired (according to Kleck’s survey) are recorded, that would still mean 597,500 cases, yet empirical evidence exists for only 0.3% of them.
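A minimal sketch of this validity check, using the survey percentages quoted above:

```python
# Implications of Kleck's 2.5 million estimate, given his own survey's
# reporting percentages, versus GVA's verified count.
kleck_dgus = 2_500_000
police_aware = 0.642       # share of DGUs police reportedly learned about
shots_fired = 0.239        # share of DGUs in which shots were fired
gva_verified = 2_000       # DGUs verified via police and media reports

print(f"police-known DGUs implied: {kleck_dgus * police_aware:,.0f}")  # 1,605,000
print(f"shots-fired DGUs implied:  {kleck_dgus * shots_fired:,.0f}")   # 597,500
print(f"share actually verified:   {gva_verified / (kleck_dgus * shots_fired):.1%}")  # 0.3%
```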
In Kleck’s survey, 73.4% of the reported DGU incidents were allegedly against strangers. Yet according to the NCVS: “In 2010, strangers committed about 38% of nonfatal violent crimes, including rape/sexual assault, robbery, aggravated assault, and simple assault.”
Those are just a few of the many questionable results that Kleck’s survey and other one-shot surveys produce.
In addition, Kleck argues that crime estimates from the NCVS cannot be used as a validity check to his own results. Yet in his own study, when trying to make the point for why the NCVS is not well equipped to measure DGUs, Kleck admits that: “Its survey instrument has been carefully refined and evaluated over the years to do as good a job as possible in getting people to report illegal things which other people have done to them.”
Kleck further contends that the overall DGU number should not be judged by subsets of DGUs due to their size in the survey: “Our estimates of total DGUs are likely to be fairly reliable partly because they are based on a very large sample (n=4977), while any estimates one might derive pertaining to one specific crime type are necessarily less reliable because they rely partly on a far smaller subsample, i.e., the 194 reported DGU incidents, of which about 40 were linked to burglaries.” However, it is highly unlikely for a survey to fail every attempt at external validation, ranging from empirical data to other, larger criminological surveys, and yet still have its total number be accurate. Furthermore, any survey plagued by even a small rate of false positives becomes less (not more) accurate when extrapolated to a larger population – the number of false positives simply scales with the population size.
This problem of false positives is not limited to surveys of DGUs. Surveys measuring NRA membership, the number of people abducted by UFOs, the prevalence of rare diseases, and so on are all plagued by false positives by the very nature of attempting to measure a statistically rare event. It would be highly unusual for one-shot surveys to be the singular exception to the false positive problem, particularly when all of the available empirical data indicates that their numbers are not accurate.
William English’s 2021 survey cited in Arizona’s amicus brief offers no improvement over Gary Kleck’s original work, and is therefore subject to the same flaws that render Kleck’s numbers unusable.
NCVS and 50,000
The National Crime Victimization Survey (NCVS) is conducted semi-annually by the Bureau of Justice Statistics and surveys a nationally representative sample of 135,000 households “on the frequency, characteristics, and consequences of criminal victimization in the United States.” From 2007 to 2011, the NCVS extrapolated 235,700 DGUs in response to violent nonfatal victimizations, which translates to slightly more than 47,000 annually. During the same period, the NCVS estimated 103,000 victims of property crime used a firearm in self-defense, or approximately 20,000 annually. These figures represent approximately 1% of all violent nonfatal victimizations and 0.1% of property crime victimizations (86% of property crime victims were not present during the incident, which indicates that 0.7% of victims who were present used a firearm defensively). At the time of Kleck’s survey (1993), the NCVS estimated 83,000 DGUs annually.
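A minimal sketch of how these annualized figures follow from the five-year totals:

```python
# NCVS five-year DGU totals (2007-2011), annualized.
violent_dgus_5yr = 235_700
property_dgus_5yr = 103_000
years = 5

print(f"violent-crime DGUs per year:  {violent_dgus_5yr / years:,.0f}")   # ~47,140
print(f"property-crime DGUs per year: {property_dgus_5yr / years:,.0f}")  # ~20,600

# 86% of property-crime victims were not present during the incident, so the
# 0.1% overall defensive-use rate implies ~0.7% among victims who were present.
print(f"{0.001 / 0.14:.1%}")  # 0.7%
```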
The primary advantages of the NCVS are its size, frequency, and sophistication. With 135,000 households surveyed, the NCVS dwarfs one-shot surveys like Kleck’s, which had a sample size of roughly 5,000. Unlike one-shot surveys, which are conducted once (hence the name), the NCVS has been conducted since 1973, usually semi-annually. Larger, more frequent surveys give researchers a significant opportunity to analyze and revise methodology to enhance a survey’s accuracy.
The primary disadvantage of the NCVS with regard to DGUs is that the survey does not explicitly ask about them, instead asking participants about attempted or completed crimes against them more generally. Unlike Kleck’s survey, which directly asks participants whether they used a firearm in self-defense, the NCVS asks the open-ended “What did you do?” and “Anything else?” to determine whether the participants took any defensive action. Without a direct prompt, participants might fail to specifically disclose that a firearm was used. Participants also would not have the opportunity to disclose a DGU if they used their firearm to stop a crime happening to someone else, leaving these types of incidents uncounted.
Kleck and others postulating widespread defensive gun use also argue that the ordering of NCVS questions is problematic. Respondents must first indicate they were the victim of a crime before they disclose a DGU. Hence if a participant doesn’t think they were the victim of a crime (i.e. they stopped the perpetrator before a crime was committed), they won’t have an opportunity to report their firearm usage.
This concern, however, is not as problematic as it first appears. The NCVS is careful to ask about attempted crimes and threats as well as completed crimes. If a respondent wasn’t at least on the receiving end of a threat, justifying the resulting firearm use as “defensive” is difficult. A more coherent methodology, then, is to first ask whether the participant was the victim of an attempted or completed crime (or threat). That methodology is more likely to counter false positives and socially undesirable firearm uses (namely aggressive rather than defensive gun use).
Despite measures to combat false positives, the NCVS still faces the same structural problems that plague one-shot surveys conducted by Kleck and others. As stated in the discussion of Kleck’s results, assuming a true DGU incidence of roughly 50,000 (in line with NCVS estimates), the false negative rate would have to be nearly 5,000 times larger than the false positive rate to not overestimate. The order of the questions (and not asking a direct priming question on firearm use) is unlikely sufficient to combat such a differential, especially in light of strong social incentives to embellish, fabricate, or mis-remember defensive gun use details.
While the NCVS defensive gun use estimates are within the realm of plausibility, the foundational problem of false positives in surveys of rare events, combined with the empirical evidence, indicates that the NCVS also produces an overestimate.
Comparing DGU Numbers to NCVS Crime Rates
The brief filed by the state of Arizona quotes a fragment of a 2013 National Research Council Report stating that: “[a]lmost all national survey estimates indicate that defensive gun uses by victims are at least as common as offensive uses by criminals, with estimates of annual uses ranging from about 500,000 to more than 3 million … in the context of about 300,000 violent crimes involving firearms in 2008.” Yet the brief excludes the very next two sentences of the report, which state: “On the other hand, some scholars point to a radically lower estimate of only 108,000 annual defensive uses based on the National Crime Victimization Survey…. The variation in these numbers remains a controversy in the field.”
Further, the claim that “defensive gun uses by victims are at least as common as offensive uses by criminals” is completely false, as the following chart from Dr. David Hemenway and Dr. Mary Vriniotis reveals:
As they explain:
“For both types of surveys, the number of estimated criminal gun uses are far higher than the number of self-defense gun uses. Indeed, no survey that has used the same methodology for estimating both criminal and self-defense (i.e., comparing Box A with Box B or Box C with Box D) has found anywhere near the number of self-defense gun uses compared to criminal gun uses.”
“So how can anyone claim that there are more self-defense gun uses than criminal gun uses? They do so by comparing different types of surveys. They compare the results of Box B with Box C! However, it is completely inappropriate to compare estimates which come from two radically different survey methodologies. An appropriate assessment of the data is that the overwhelming evidence from both types of surveys is that guns in the United States are used far more in crime than in self-defense.”
And indeed, comparing NCVS results to NCVS results yields a very different picture from the one the brief paints: more than nine times as many people are victimized by guns as are protected by them.
Respondents in two Harvard surveys experienced more than three times as many offensive gun uses against them as defensive gun uses. Another study focusing on adolescents found 13 times as many offensive gun uses. Yet another study focusing on gun use in the home found that a gun was more than six times more likely to be used to intimidate a family member than in a defensive capacity. The evidence is unanimous that criminal gun uses outnumber defensive gun uses, as long as one compares like with like.
Gun Violence Archive and 2,000
The Gun Violence Archive (GVA) was founded in 2012 with the purpose of comprehensively tracking gun violence incidents. Utilizing more than 2,000 media and police sources across the country, GVA tracks, in near real-time, all forms of gun violence except suicides. (Suicide is tallied after the CDC publishes its data, with a lag of more than a year.) GVA classifies defensive gun uses as: “The reported use of force with a firearm to protect and/or defend one’s self or family. Only verified incidents are reported.” The totals for each full year of GVA data are:
2014: 1,478
2015: 1,396
2016: 1,993
2017: 2,119
2018: 1,875
2019: 1,598
2020: 1,478
These verified defensive gun use totals are necessarily an undercount of the actual prevalence of overall DGUs. As GVA itself notes under its methodology: “There are sometimes questions about Defensive Gun Uses which are not reported to police. GVA can ONLY list incidents which can be verified. Our policies do not take into account stories not reported, ‘I can’t believe this happened to me’ scenarios or extrapolations from surveys. Our position is that if an incident is significant enough that a responsible gun owner fears for their life and determines a need to threaten lethal force it is significant enough to report to police so law enforcement can stop that perpetrator from harming someone else.”
Proponents of the widespread DGU theory could contend that police and media reports won’t capture the overwhelming majority of DGUs, and that the verified DGU total is therefore just as misleading and limited as the FBI’s count of justifiable homicides when trying to determine the overall number of DGUs. This explanation directly contradicts the results of Kleck’s survey and the NCVS – both of which find that more than 60% of respondents claim police found out about their DGU – but if those surveys are decisively flawed, then the ratio of reported to unreported DGUs is unknowable.
However, unless police reports miss more than 99.9% (which is necessary for Kleck’s estimate to be accurate) or 96% (for the NCVS estimate to be accurate) of DGUs, the widespread DGU theory lacks any empirical support. Under reasonable assumptions about the ratio of unreported to reported DGUs, GVA data will provide a better benchmark than surveys.
It is important to note that GVA’s tally is not a total of socially desirable gun uses. Many of the cases involve shootouts where neither side can reasonably claim the mantle of “good guy,” or cases where one drug dealer may break into another drug dealer’s home and is fought off with a gun.
However, it is dubious to suggest that significant numbers of socially desirable DGUs aren’t captured by this tally. Failure to report an incident to the police suggests one of two scenarios: either the DGU was probably illegal, or the DGU was justified but the defender was irresponsible in not reporting a dangerous criminal to police. Therefore, arguing that a massive number of DGUs aren’t reported is synonymous with admitting that defensive gun use, in the aggregate, is likely not beneficial for society. Indeed, Kleck’s own research finds that between 36% and 64% of DGUs were likely illegal. This concession is mirrored in research by Harvard researchers, who found in a series of surveys that a majority of reported DGUs are likely illegal.
GVA’s data represents the first comprehensive attempt to track DGUs nationally. An earlier study in 2004 attempted to track DGU incidents in the Phoenix metropolitan area over the course of three and a half months using newspaper reports, supplemented by police and court records. The results: “Two DGUs involving killing assailants and one involving firing at an assailant were found. The three DGUs stemmed from cases of ‘mutual combat’ or exposed bystanders to gunfire.” As the authors concluded: “These findings cast doubt on rates of DGUs reported in an influential study by Kleck and Gertz, which predict that the police should have known about 98 DGU killings or woundings and 236 DGU firings at adversaries during the time the newspaper was surveyed. The findings reported here were closer to predictions based on the National Crime Victimization Survey, which suggest that the police should have known about eight DGU killings or woundings and 19 DGU firings at adversaries.”
GVA’s verified DGU numbers are a significant improvement over this early attempt both in scope and sophistication, but confirm the same hypothesis: both NCVS and one-shot surveys overestimate DGU prevalence.
When survey data and empirical data fundamentally contradict each other, the best course is to rely on the hard data, particularly when there is significant evidence that the surveys in question suffer from foundational flaws. While GVA’s verified DGU count of approximately 2,000 annually is an undercount of overall DGUs, the best available evidence and logic dictate that it is closer to the true number than the one-shot surveys and the NCVS. Further, the higher the ratio of unreported to reported DGUs, the more likely it is that the sum of these firearm incidents is actually detrimental to society.
Are DGUs beneficial?
A 2000 study by Lawrence Southwick in the Journal of Criminal Justice found that more victims being armed “would reduce both losses and injuries from crime as well as both the criminals’ incentives to commit violent crimes and to be armed.” A 2006 study by Gary Kleck and Jongyeon Tark argued that “A variety of mostly forceful tactics, including resistance with a gun, appeared to have the strongest effects in reducing the risk of injury…”
Yet more recent analyses have cast doubt on these findings. As Phillip Cook and Kristin Goss point out in their book The Gun Debate: What Everyone Needs to Know, Kleck and Tark’s study actually finds that using any weapon other than a firearm is more likely to reduce injury than a defensive gun use (see the chart below). Further, none of the results are dramatically different from each other, meaning that even if using a gun was more effective than alternative means of self defense, the impact is minor.
A 2015 study by David Hemenway and Sara Solnick published in Preventive Medicine provides a more updated look at the question. Using NCVS data, the study found that in incidents where a victim used a gun in self-defense, the likelihood of suffering an injury was 10.9 percent. Had the victim taken no action at all, the risk of injury was virtually identical: 11 percent. Having a gun also didn’t reduce the likelihood of losing property: 38.5 percent of those who used a gun in self-defense had property taken from them, compared to 34.9 percent of victims who used another type of weapon, such as a knife or baseball bat.
What’s more, while the likelihood of injury after a victim used a gun in self-defense fell to 4.1 percent, that rate was similar to the rate after using any other weapon (5.3 percent) and still greater than the rate after running away or hiding (2.4 percent) or calling the police (2.2 percent). Overall, the evidence demonstrates that, on average, there is no major benefit to using a gun in self-defense.
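Collecting the study’s figures in one place makes the point plain. Below is a minimal tabulation assuming the rates quoted above; the category labels are paraphrases, not the study’s exact variable names:

```python
# Injury rates from Hemenway & Solnick's NCVS analysis, as quoted above.
# The takeaway: differences between protective actions are small.
injury_after_action = {
    "used gun in self-defense": 0.041,
    "used another weapon":      0.053,
    "ran away or hid":          0.024,
    "called the police":        0.022,
}
injury_overall = {
    "used gun at some point": 0.109,
    "took no action at all":  0.110,
}

for label, rates in [("injury after protective action", injury_after_action),
                     ("injury at any point in incident", injury_overall)]:
    print(label)
    for action, rate in sorted(rates.items(), key=lambda kv: kv[1]):
        print(f"  {action:<26} {rate:.1%}")
```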
As Hemenway notes, however, the one point at which having a gun is associated with significantly lower injury rates is before the individual takes any defensive action. This counterintuitive result can be explained in one of two ways. People carrying a gun may be more vigilant and aware of their surroundings, and therefore better able to avoid an initial attack. Alternatively, the types of crime stopped by a defensive gun use may be substantively different from the types of crime stopped when some other protective action is taken: rather than situations where a victim is attacked by surprise, and is thus more likely to be injured before taking protective action, incidents where a gun is used in self-defense may mainly involve mutually hostile confrontations that end in verbal or physical aggression.
Further, the briefs’ reliance on Kleck’s defensive gun use surveys actually cuts against their other argument that permit holders are overwhelmingly law-abiding. A pair of private surveys conducted by Hemenway in 1996 and 1999 asked respondents to describe DGUs in their own words. These responses were then submitted to a panel of five judges, who determined whether each action was more likely legal or illegal while taking the respondents’ descriptions at face value. The panel concluded that in a majority of cases the defensive gun uses were illegal, casting severe doubt on their social benefit. Across these two large national samples of randomly selected telephone numbers, the conclusion was overwhelming: “Guns are used to threaten and intimidate far more often than they are used in self-defense. Most self-reported self-defense gun uses may well be illegal and against the interests of society.”
The surveys also found that when someone uses a gun in self-defense, it is often part of an escalating hostile interaction, one in which both participants are likely to share responsibility for the event that initially prompted the DGU. One male respondent who reported a defensive gun use described an incident as follows: “I was watching a movie and he interrupted me. I yelled at him that I was going to shoot him and he ran to his car.” Another respondent pulled out a gun to resolve a conflict with his neighbor: “I was on my porch and this man threw a beer in my face so I got my gun.”
Because a majority of self-reported DGUs appear to be illegal, if one-shot survey estimates of millions of DGUs are accurate, then right-to-carry laws actually expand the use of firearms in a crime-increasing fashion. Lott emphasizes that “…state police agencies would have to be missing around 99.4% to 99.83% of violent crimes committed by permit holders” for permit holders to be driving crime increases of any meaningful magnitude. Yet Kleck’s estimates require those very agencies to be missing 99.9% of DGUs, more than half of which are likely illegal based on the review of survey participants’ responses. The briefs’ claims about defensive gun use and the law-abiding nature of permit holders directly contradict each other and render their overall argument suspect.
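This tension can be made concrete with simple arithmetic. The sketch below uses GVA’s roughly 2,000 verified DGUs per year (cited above) together with an assumed figure of roughly 2.5 million annual DGUs, the estimate commonly attributed to Kleck’s survey work; the 2.5 million number is supplied here only for illustration:

```python
# Back-of-the-envelope check of the contradiction described above.
verified_dgus = 2_000        # GVA's annually verified DGU count (cited above)
kleck_estimate = 2_500_000   # assumed survey-based estimate (illustrative)

missing_share = 1 - verified_dgus / kleck_estimate
print(f"Share of DGUs reporting systems must miss: {missing_share:.2%}")
# Prints 99.92%, i.e. the ~99.9% figure above, which sits beyond the
# 99.4%-99.83% range Lott treats as implausibly high for permit-holder crimes.
```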
Exhibit A: List of Academic Research on Concealed Carry
Reduce Violent Crime
- John Lott and David Mustard, “Crime, Deterrence, and Right-to-Carry Concealed Handguns,” Journal of Legal Studies, 1997
- William Bartley and Mark Cohen, “The Effect of Concealed Weapons Laws: An Extreme Bound Analysis,” Economic Inquiry, 1998
- Stephen Bronars and John Lott, “Criminal Deterrence, Geographic Spillovers, and Right-to-Carry Concealed Handguns,” American Economic Review, 1998
- John Lott, “The Concealed‐Handgun Debate,” Journal of Legal Studies, 1998
- William Bartley, “Will Rationing Guns Reduce Crime?” Economics Letters, 1999
- John Lott, “More Guns, Less Crime: A Response to Ayres and Donohue,” SSRN, 1999
- John Lott and William Landes, “Multiple Victim Public Shootings, Bombings, and Right-to-Carry Concealed Handgun Laws: Contrasting Private and Public Law Enforcement,” Chicago Working Paper, 2000
- Carlisle Moody, “Testing for the Effects of Concealed Weapons Laws: Specification Errors and Robustness,” Journal of Law and Economics, 2001
- David Mustard, “The Impact of Gun Laws on Police Deaths,” Journal of Law and Economics, 2001
- Florenz Plassmann and T. Nicolaus Tideman, “Does the Right to Carry Concealed Handguns Deter Countable Crimes? Only a Count Analysis Can Say,” Journal of Law and Economics, 2001
- John Lott and John Whitley, “Measurement Error in County-Level UCR Data,” Journal of Quantitative Criminology, 2003
- Eric Helland and Alexander Tabarrok, “Using Placebo Laws to Test ‘More Guns, Less Crime,’” Advances in Economic Analysis and Policy, 2004
- John Lott, “Right-to-Carry Laws and Violent Crime Revisited: Clustering, Measurement Error, and State-by-State Breakdowns,” SSRN, 2004
- Carlisle Moody and Thomas Marvell, “The Debate on Shall-Issue Laws,” Econ Journal Watch, 2008
- Carlisle Moody and Thomas Marvell, “The Debate on Shall Issue Laws, Continued,” Econ Journal Watch, 2009
- Mark Gius, “An examination of the effects of concealed weapons laws and assault weapons bans on state-level murder rates,” Applied Economics Letters, 2014
- Mehdi Barati, “New evidence on the impact of concealed weapons laws on crime,” International Review of Law and Economics, 2016
- Wei Shi and Lung-fei Lee, “The effects of gun control on crimes: a spatial interactive fixed effects approach,” Empirical Economics, 2017
No/Mixed Effect on Violent Crime
- Dan Black and Daniel Nagin, “Do Right-to-Carry Laws Deter Violent Crime?” The Journal of Legal Studies, 1998
- Hashem Dezhbakhsh and Paul Rubin, “Lives saved or lives lost? The effects of concealed-handgun laws on crime,” The American Economic Review, 1998
- Jens Ludwig, “Concealed-Gun-Carrying Laws and Violent Crime: Evidence from State Panel Data,” International Review of Law and Economics, 1998
- Ian Ayres and John Donohue, “Nondiscretionary Concealed Weapons Laws: A Case Study of Statistics, Standards of Proof, and Public Policy,” American Law and Economics Review, 1999
- MV Hood and Grant Neeley, “Packin’ in the Hood?: Examining Assumptions of Concealed-Handgun Research,” Social Science Quarterly, 2000
- Mark Duggan, “More Guns, More Crime,” Journal of Political Economy, 2001
- Grant Duwe, Carlisle Moody, and Tomislav Kovandzic, “The Impact of Right-to-Carry Concealed Firearm Laws on Mass Public Shootings,” Homicide Studies, 2002
- Tomislav Kovandzic and Thomas Marvell, “Right-to-Carry Concealed Handguns and Violent Crime: Crime Control Through Gun Decontrol?” Criminology & Public Policy, 2003
- Paul Rubin and Hashem Dezhbakhsh, “The effect of concealed handgun laws on crime: beyond the dummy variables,” International Review of Law and Economics, 2003
- Lisa Hepburn et al., “The effect of nondiscretionary concealed weapon carrying laws on homicide,” The Journal of Trauma, 2004
- Firearms and Violence: A Critical Review, National Research Council, 2005
- Tomislav Kovandzic, Thomas Marvell, and Lynne Vieraitis, “The Impact of ‘Shall-Issue’ Concealed Handgun Laws on Violent Crime Rates: Evidence From Panel Data for Large Urban Cities,” Homicide Studies, 2005
- Matthew Rosengart et al., “An evaluation of state firearm regulations and homicide and suicide death rates,” Injury Prevention, 2005
- Robert Martin Jr. and Richard Legault, “Systematic Measurement Error with State-Level Crime Data: Evidence from the ‘More Guns, Less Crime’ Debate,” Journal of Research in Crime and Delinquency, 2005
- Jeff Strnad, “Should Legal Empiricists Go Bayesian?,” American Law and Economics Review, 2007
- Carlisle Moody, Thomas Marvell, Paul Zimmerman, and Fasil Alemante, “The Debate on Shall-Issue Laws,” Review of Economics & Finance, 2014
- Mark Gius, “The effects of state and Federal gun control laws on school shootings,” Applied Economics Letters, 2018
- Mark Hamill et al., “State Level Firearm Concealed-Carry Legislation and Rates of Homicide and Other Violent Crime,” Journal of the American College of Surgeons, 2018
- Carlisle Moody and Thomas Marvell, “The Impact of Right-to-Carry Laws: A Critique of the 2014 Version of Aneja, Donohue, and Zhang,” Econ Journal Watch, 2018
- Carlisle Moody and Thomas Marvell, “Do Right to Carry Laws Increase Violent Crime? A Comment on Donohue, Aneja, and Weber,” Econ Journal Watch, 2019
- William English, “The Right to Carry Has Not Increased Crime,” SSRN, 2021
Increase Violent Crime
- Ian Ayres and John Donohue, “Shooting Down the ‘More Guns, Less Crime’ Hypothesis,” Stanford Law Review, 2003
- Ian Ayres and John Donohue, “The Latest Misfires in Support of the ‘More Guns, Less Crime’ Hypothesis,” Stanford Law Review, 2003
- John Donohue, “Guns, Crime, and the Impact of State Right-to-Carry Laws,” Fordham Law Review, 2004
- Benjamin French and Patrick J Heagerty, “Analysis of longitudinal data to evaluate a policy change,” Statistics in Medicine, 2008
- Patricia Grambsch, “Regression to the Mean, Murder Rates, and Shall-Issue Laws,” The American Statistician, 2008
- Ian Ayres and John Donohue, “More Guns, Less Crime Fails Again: The Latest Evidence from 1977–2006,” Econ Journal Watch, 2009
- Ian Ayres and John Donohue, “Yet Another Refutation of the More Guns, Less Crime Hypothesis – With Some Help From Moody and Marvell,” Econ Journal Watch, 2009
- Abhay Aneja, John Donohue, and Alexandria Zhang, “The Impact of Right-to-Carry Laws and the NRC Report: Lessons for the Empirical Evaluation of Law and Policy,” American Law and Economics Review, 2011
- Abhay Aneja, John Donohue, and Alexandria Zhang, “The Impact of Right to Carry Laws and the NRC Report: The Latest Lessons for the Empirical Evaluation of Law and Policy,” Stanford Law and Economics Olin Working Paper, 2014
- Paul Zimmerman, “The deterrence of crime through private security efforts: Theory and evidence,” International Review of Law and Economics, 2014
- Steven Durlauf, Salvador Navarro, and David Rivers, “Model Uncertainty and the Effect of Shall Issue Right-to-Carry Laws on Crime,” European Economic Review, 2016
- Jeremy Carter and Michael Binder, “Firearm Violence and Effects on Concealed Gun Carrying: Large Debate and Small Effects,” Journal of Interpersonal Violence, 2016
- Marjorie McElroy and Will Wang, “Seemingly Inextricable Dynamic Differences: The Case of Concealed Gun Permit, Violent Crime and State Panel Data,” SSRN, 2017
- Cassandra Crifasi, Molly Merrill-Francis, Alexander McCourt, and Jon Vernick, “Association between Firearm Laws and Homicide in Urban Counties,” Journal of Urban Health, 2018
- John Donohue, Abhay Aneja, and Kyle Weber, “Right-to-carry laws and violent crime: A comprehensive assessment using panel data and a state-level synthetic control analysis,” Journal of Empirical Legal Studies, 2019
- John Donohue, Abhay Aneja, and Kyle Weber, “Right-to-Carry laws increase violent crime: Moody and Marvell have missed the target,” Econ Journal Watch, 2019
- Michael Siegel et al., “The Impact of State Firearm Laws on Homicide and Suicide Deaths in the USA, 1991–2016: a Panel Study,” Journal of General Internal Medicine, 2019
- Mitchell Doucette, Cassandra Crifasi, and Shannon Frattaroli, “Right-to-carry laws and firearm workplace homicides: A longitudinal analysis,” American Journal of Public Health, 2019
- Mark Gius, “Using the synthetic control method to determine the effects of concealed carry laws on state-level murder rates,” International Review of Law and Economics, 2019
- Anita Knopov et al., “The impact of state firearm laws on homicide rates among the black and white populations in the United States,” Health and Social Work, 2019
- Michael Siegel et al., “The Impact of State Firearm Laws on Homicide Rates in Suburban and Rural Areas Compared to Large Cities in the United States, 1991-2016,” The Journal of Rural Health, 2020
- Emma Fridel, “Comparing the impact of household gun ownership and concealed carry legislation on the frequency of mass shootings and firearms homicide,” Justice Quarterly, 2020
- Erika Sabbath et al., “State-Level Changes in Firearm Laws and Workplace Homicide Rates: United States, 2011-2017,” American Journal of Public Health, 2020
- Terry Schell et al., “Changes in firearm mortality following the implementation of state laws regulating firearm access and use,” PNAS, 2020
- Stephen B. Billings, “Smoking Gun? Linking Gun Ownership to Neighborhood Crime,” SSRN, 2021
- Jonathan Colmer and Jennifer Doleac, “Do Right-To-Carry Laws Mitigate or Exacerbate the Temperature-Homicide Relationship?” Under Review, 2021
Exhibit B: List of Concealed Carry Authors
| Overall Decrease | Overall No Effect | Overall Increase |
|---|---|---|
| Barati | Alemante | Aneja |
| Bartley | Ayres | Ayres |
| Bronars | Azrael | Baum |
| Cohen | Bailey | Billings |
| Gius | Black | Binder |
| Helland | Boruch* | Carter |
| Landes | Cottler* | Cefalu |
| Lee | Crutchfield* | Colmer |
| Lott | Cummings | Crifasi |
| Marvell | Dezhbakhsh | Cronin |
| Moody | Donohue | Doleac |
| Mustard | Duggan | Donohue |
| Olsen | Duwe | Doucette |
| Shi | English | Durlauf |
| Tabarrok | Gius | Fleegler |
| Tideman | Hamill | Frattaroli |
| Whitley | Heagerty | French |
| Wilson* | Hemenway | Fridel |
| | Hepburn | Gius |
| | Hernandez | Grambsch |
| | Hood | Griffin |
| | Horowitz* | Hawkins |
| | Johnson* | Heagerty |
| | Kovandzic | Hemenway |
| | Legault | Knopov |
| | Levitt* | McCourt |
| | Ludwig | McElroy |
| | Maier | Merrill-Francis |
| | Martin | Morral |
| | Marvell | Navarro |
| | Matos | Pahn |
| | Miller | Rivers |
| | Moffitt* | Rothman |
| | Moody | Sabbath |
| | Murphy* | Schell |
| | Nagin | Siegel |
| | Nathens | Smart |
| | Neeley | Solomon |
| | Norberg* | Vernick |
| | Reuter* | Wang |
| | Rivara | Weber |
| | Rosenfeld* | Webster |
| | Rosengart | Wintemute |
| | Rubin | Xuan |
| | Schiller | Zhang |
| | Strnad | Zimmerman |
| | Vieraitis | |
| | Waldfogel* | |
| | Wellford* | |
| | Winship* | |
| | Zielinski | |
| | Zimmerman | |
*Author is part of the National Research Council report