Here the researchers offer insight into whether perfect law enforcement is a good thing, and discuss related issues from an automation perspective.
Woody Hartzog: One of the big questions, and I think the one that goes to the heart of our talk today, is whether we want perfect enforcement of the law. And I would like to go ahead and say now that we need to dispel the notion that the goal of law should be to achieve perfect enforcement. I think that to be effective, laws need only be enforced to the extent that violations are kept to a socially acceptable level. We don’t enforce jaywalking 100% of the time; we maybe enforce it 0.1% of the time, and we’re ok with that. We know that it’s a rule, and as long as everyone more or less keeps it together, we’re fine with that. The goal shouldn’t be perfect enforcement, and that’s one thing we’d like to make clear in this talk.
Besides, another question about perfect enforcement arises with minor violations in particular: we might violate the same law many times in a row. So, for example, if the speed limit is 55, you may go 57, then drop back down to 53, then go 60, all in the same trip. And so, if you violate the speed limit 17 different times in one trip, do you get 17 tickets, or do you get one ticket? These are difficult decisions that have to be made, particularly if the goal is to perfectly enforce the law.
Greg Conti: Woody, I’d add that we will violate the law; even if we try scrupulously not to, it’s just a matter of minutes, perhaps an hour, before you do something wrong.
Woody Hartzog: Absolutely, I mean, we’re all violators.
Greg Conti: Even with the best intentions.
Woody Hartzog: Absolutely. Another problem that comes with automated enforcement is the loss of human discretion. So, while Greg talked about the fact that discretion can be bad because it can lead to unjust results, discretion can also be very good. It allows us to be compassionate, it allows us to follow the spirit of the law instead of the letter of the law, and it allows law enforcement officers to prioritize enforcement. I’m not going to investigate a minor case like stolen sneakers very hard, because we’ve got a murder over here, and so we prioritize where we want to spend our energies. And when you take discretion out of it, I think there are some significant problems.
It also leads to the phenomenon known as automation bias. There’s a fair amount of research out there showing that we as humans, as a group, tend to irrationally trust judgments made by computers, even when we have reason to doubt them. The idea is: “Well, that didn’t look like the guy, but the computer says that’s the guy, so that’s probably the guy.” There’s a fair amount of that in the literature, so if you’re going to automate the system, you’ve got to find a way to combat automation bias.
It’s one thing to exercise your opinion and your right to freedom of expression, and it’s another thing to do it when there’s a government camera right in your face. With the ubiquity of sensors and surveillance around, I fear that there will be some serious chilling effects on freedom of expression in the US, and that’s precisely what the First Amendment was created to guard against. I think that any automated system should take measures to make sure that there are no undue chilling effects on speech and our First Amendment rights.
Also, imagine if, let’s say, the number of speeding violations increases 700% when you automate the system, and we all decide to appeal simultaneously; we would crash the system. You have to make sure, before you implement any system, that the infrastructure can handle both the burden of the initial violations being issued and the appeals process that comes after.
And finally, there’s the issue of societal harm. Automating a law enforcement system to achieve perfect enforcement says one thing in particular: “We don’t trust you. We don’t trust you to do what’s right, and we’re going to go ahead and enforce the law automatically, particularly when we engage in preemptive enforcement.” And so the risk is eroding the necessary trust between citizens and their governments, which is critical for any kind of effective governance.
There are also moral implications of doing our best to make sure that nobody can violate any law through preemptive enforcement. What does it say about a society that takes away all accountability for violations? “Don’t worry, you can do anything you want, because if it was bad for you, we wouldn’t let you do it in the first place.” And so I think those are the significant questions that have to be answered in any automated law enforcement scheme.
So, what can we do about it? Well, first of all, we can ensure that there are procedural safeguards, that basic fundamental due process rights are respected: the rights to notice and a hearing. We need to know when we have violated the law, and we need an opportunity to be heard. Privacy rights: we need better Fourth Amendment jurisprudence, we need to solve the problem of privacy in public, which I think we’re headed toward a conflict over sooner rather than later, and we need better electronic surveillance laws.
The necessity defense: all of us probably understand that if you’re headed to the emergency room, it’s probably ok to speed. If you need to go 75–80 mph to get to the emergency room, we’ll let you pass on this one. There are many instances you can imagine where we need to go ahead and violate the law, because the costs of not violating it are greater. Transparency: who sees the source code? Is it going to be a trade secret, or do we all get to see it? We can look to a lot of the e-voting disputes to learn from here. But I think that open source and transparency in the code are absolutely critical in any automated law enforcement system.