1 big thing: Secret government AI
The criminal justice system has eagerly taken up AI tools for surveillance, policing and sentencing — software that can track people's faces, deploy patrols where crime appears most likely, and recommend whether to grant bail.
Kaveh reports: But these tools are often cloaked in secrecy, making it impossible to judge their accuracy — or even to know where and how they are being used. Critics say this opens the door to misuse and discrimination.
Driving the news: San Francisco yesterday approved the most restrictive government surveillance regulations in the U.S.
- The new measure — which must pass a second vote next week to take effect — entirely bans official facial recognition in the city (federal agencies are exempt) and requires every department that wants to use surveillance technology to apply for permission.
- At the other extreme, across the Pacific, China is implementing the most Orwellian surveillance system on the planet, leaning especially hard on facial recognition to identify and track its Uighur minority.
Why it matters: When poorly coded or deployed, AI systems can make huge mistakes or harm some groups more than others. But where faulty facial recognition in Snapchat might mean some people can't use a fun filter, flawed police software can land the wrong people in jail.
- Because these systems are tightly guarded, outside experts can't check them for bias and accuracy, and the public doesn't know how well they perform.
- Read this: London police, responding to a freedom of information request, said this month that its facial recognition system misidentified people as criminals a whopping 96% of the time.
- What's more, experts and watchdogs say they don't actually know where such systems have been deployed around the United States, and defendants are often in the dark about whether advanced surveillance tech was used against them.
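Eye-popping error rates like London's can emerge even from a system with seemingly decent per-scan accuracy, because actual watchlist targets are vastly outnumbered by innocent passersby. The sketch below illustrates the base-rate effect with made-up numbers — the crowd size, watchlist size, and error rates are assumptions for illustration, not figures from the London report:

```python
# Hypothetical base-rate illustration: how most flagged "matches" can be
# misidentifications even when per-face error rates look small.
# All numbers below are assumptions, not data from any real deployment.

crowd_size = 100_000        # faces scanned at an event
targets_present = 50        # people actually on the watchlist
sensitivity = 0.80          # chance a real target is correctly flagged
false_positive_rate = 0.01  # chance an innocent face is wrongly flagged

true_positives = targets_present * sensitivity                        # 40.0
false_positives = (crowd_size - targets_present) * false_positive_rate  # 999.5

# Share of all flags that point at the wrong person
share_wrong = false_positives / (true_positives + false_positives)
print(f"{share_wrong:.0%} of flagged matches are misidentifications")  # 96%
```

With these assumptions, roughly 96% of the system's alerts are false alarms — the innocent crowd is so much larger than the watchlist that even a 1% per-face error rate swamps the true hits.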
"You can't meaningfully build up a criminal defense, or change policies, if you don't know how these tools are being used," says Alice Xiang, a researcher at the Partnership on AI.
San Francisco will soon have its first-ever complete public list of surveillance technology currently in use, says Lee Hepner, legislative aide to San Francisco Supervisor Aaron Peskin, who introduced the measure.
- "Communities have a right to know whether their governments use dangerous surveillance technology to track their daily lives," says Matt Cagle, an attorney at the ACLU of Northern California who advocated the measure.
- Several other cities — including Oakland and Somerville, a city in the Boston area — are considering similar legislation.
The big picture: The uptake of AI in criminal justice mirrors a broad push to automate difficult or sensitive decisions, like hiring and diagnosing diseases from medical scans. But these systems are often implemented without proper safeguards, says Peter Eckersley, research director at the Partnership on AI.
- The predictive systems used by nine police departments may have relied on biased data focused disproportionately on minority populations, according to a March report from the AI Now Institute and New York University. If the report is accurate, that bias may be enshrined in new predictive policing systems.
- Last month, the Partnership on AI studied risk-assessment tools used to inform bail decisions and found every system currently deployed to be flawed and unfit for use.
What's next: Facial recognition is the most publicly controversial of the various AI tools governments use, and it's the one most likely to be regulated. Companies have asked the federal government to put rules in place for law enforcement use of the technology.
2. The Uber harbinger
As we reported before Uber's massive IPO, the company's two business areas — ride-hailing and Uber Eats — are both experiencing slowing growth and drained coffers.
- Uber intended to slash labor costs by using autonomous cars to eliminate drivers, but the technology is proving very difficult to develop — let alone commercialize.
- Drivers across the country are striking against low wages and alleged mistreatment, and they're suggesting that Uber could burn through the entire pool of workers willing to drive for it.
The company had a disappointing debut. And now, four days into trading, investors are still in the red — today Uber shares were trading 10% below their opening price on IPO day.
Axios' Dan Primack reports: That has got to worry other money-losing "unicorns" that operate in similar sectors.
In descending order of fear factor:
- Micro-mobility: Bird, Lime
  - Judging by the treatment of Lyft and Uber, the public market appears to be skeptical of the "ride" story.
- Other ride-hail: Didi Chuxing, Grab, Ola Cabs
  - Some of these foreign companies have significant diversification into other areas (such as Grab Finance).
- On-demand delivery: DoorDash, Instacart, Postmates
  - These companies insist they have better unit economics than ride-hail, because their marketplaces are three-sided instead of two-sided. Their big question is how much value the public markets are ascribing to Uber Eats.
The bottom line: If these companies' most recent financings were benchmarked to Uber's valuation, at least in part, then we could be in for a series of high-profile down-rounds. Or orphaned unicorns. These things have a tendency to feed on themselves.
Go deeper: Shorting Uber
3. Mailbox: Superforecasters
Several readers wrote in about our post on the U.S. intelligence community's effort to find new "superforecasters." Here is one:
If you can’t make useful predictions, you aren’t really an expert. That said, I think results like this are overstated (and I love Philip Tetlock’s work). Policymakers aren’t asking what price gold will be or even how many missile strikes will happen. Essentially that tests gambling skill, or the ability to find a central tendency among many future outcomes. That seems to me to be a more useful skill for policymakers who need to judge risk than intelligence analysts. I never saw any questions that were job relevant during previous prediction contests, at least in my field (science and technology).
Prediction in intelligence work requires identifying key drivers that policymakers can affect, or predicting really specific outcomes.
Counterargument: This work is great for impersonal epidemics, possibly to include criminal cyber at scale. — Christopher Porter, CTO, Global Cybersecurity Policy, FireEye, Reston, Va.
4. Worthy of your time
5. 1 fun thing: The newest U.S. embassies
If you're vacationing in Vienna and you lose your passport, don't panic. Just go get a Big Mac.
Erica writes: Per a new agreement with the State Department, McDonald's is becoming a quasi-U.S. embassy. All 194 locations in Austria have been given special access to a 24-hour embassy hotline, Fast Company reports.
- The idea came from U.S. Ambassador to Austria Trevor Traina, who told the BBC that the goal is to increase the number of ways Americans can get in touch with the homeland.
- Now McDonald's quite literally symbolizes America.