New ordinance lets police robots use lethal force
San Francisco's police department is getting closer to being able to use robots for lethal force against criminal suspects after the city's Board of Supervisors yesterday voted 8-3 to approve an ordinance on the matter.
What's happening: The legislation, if enacted, would allow police to use remote-controlled robots for deadly force "when risk of loss of life to members of the public or officers is imminent and outweighs any other force option available," per the draft policy.
- During the more than two-hour discussion at the Board of Supervisors meeting, SFPD assistant chief David Lazar cited the October 2017 mass shooting in Las Vegas as an example of a potential use case for remote-controlled robots using lethal force.
- The lengthy discussion produced two amendments: police must try or consider alternative force methods before using robots for lethal force, and only the police chief, assistant chief of operations, or deputy chief of special operations can authorize robots as a deadly force option.
- Supervisors Dean Preston, Hillary Ronen, and Shamann Walton were the three dissenting votes.
- The ordinance also defines how SFPD is allowed to use assault rifles, machine guns and other military-style weapons.
Context: As part of a state bill signed into law last year, the Board of Supervisors has the authority to reject or accept rules annually around how SFPD can use military-style weapons.
- SFPD currently has 12 human-controlled robots for purposes such as gaining situational awareness, defusing potential bombs or helping in hostage negotiations.
- The Board of Supervisors' Rules Committee, chaired by Aaron Peskin, recommended the ordinance for approval earlier this month.
- Yes, but: Peskin told Axios, "This should not be construed as a green light to have robots wantonly kill people," conceding they "could be misused."
What they're saying: The SFPD does not own or operate "robots outfitted with lethal force options" and has no plans to outfit any robot with a "firearm," department spokesperson Allison Maxie told Axios via email.
- Reality check: If the ordinance is fully approved, the SFPD would be authorized to equip existing robots with explosives "to breach fortified structures containing violent, armed, or dangerous subjects or used to contact, incapacitate, or disorient" a suspect who poses "a risk of loss of life to law enforcement or other first responders," Maxie said.
- The SFPD considers explosives "an intermediate force option," but acknowledges they "could potentially cause injury or be fatal," she said.
The other side: Beyond the public safety questions, these new policing tools carry racial and ethical implications.
- Yoel Haile, director of the criminal justice program at the American Civil Liberties Union of Northern California, does not think police departments should "have killer robots" in part because police kill Black people at more than twice the rate of white people, per a Washington Post analysis.
- "Remote-controlled killing machines will not make San Francisco safer," Haile said. "This is a terrible idea."
Preston, in a statement to Axios, said, "It is shocking that just two years after the nation collectively recognized that police were using unjustified deadly force against people, we are having a conversation about letting SFPD adopt a policy that would allow them to use robots to kill."
- Ronen said at yesterday's meeting that using robots for lethal force "creates this false distance that makes killing the individual easier."
By the numbers: SFPD's budget grew to about $714 million for fiscal year 2023.
- The department spent more than $10.5 million to acquire the robots and expects to spend about $1,445 per year to maintain the robots, per the draft policy.
- SFPD, however, may spend up to $10 million on the replacement of equipment without the approval of the Board of Supervisors.
The big picture: The idea of weaponized robots is controversial and conjures images of a dystopian society.
- In October, robot maker Boston Dynamics signed an open letter alongside other firms arguing the weaponization of robots "raises new risks of harm and serious ethical issues."
- One ethical issue: who is at fault if a robot accidentally kills someone?
- The Electronic Frontier Foundation argues the weaponization of robots would push society toward "letting autonomous artificial intelligence determine whether or not to pull the trigger."
Zoom in: This is the second controversial ordinance SFPD has put before the Board of Supervisors for consideration in recent months.
- The first, passed in September, allowed police to access live surveillance technology from private parties.
- Haile said the passage of that ordinance "encouraged [police] to come back with more and more extreme asks."
Zoom out: Oakland's police department similarly asked its city council to allow the use of robots for lethal force.
- OPD, however, last month decided not to seek approval for that use case, Mission Local reports.
What's next: The Board of Supervisors still must give the legislation final approval, but that process is typically perfunctory.
- Mayor London Breed, a sponsor of the measure, would then have to sign off within 10 days.