January 11, 2019
Welcome back to Future. Thanks for subscribing.
Situational awareness: U.S. retailers today lost $34 billion in market value after Macy's issued a profit warning, FT reports. Macy's shares plunged by 17.7%.
Consider inviting your friends and colleagues to sign up. And if you have any tips or thoughts on what we can do better, just hit reply to this email or message me at [email protected]. Email my colleagues Kaveh Waddell at [email protected] and Erica Pandey at [email protected].
Okay, let's start with ...
1 big thing: AI's accountability gap
Applicants usually don't know when a startup has used artificial intelligence to triage their resume. When Big Tech deploys AI to tweak a social feed and maximize scrolling time, users often can't tell, either. The same goes when the government relies on AI to dole out benefits — citizens have little say in the matter, Kaveh Waddell reports.
What's happening: As companies and the government take up AI at a delirious pace, it's increasingly difficult to know what they're automating — or hold them accountable when they make mistakes. If something goes wrong, those harmed have had no chance to vet their own fate.
Why it matters: AI tasked with critical choices can be deployed rapidly, with little supervision — and it can fall dangerously short.
The big picture: Researchers and companies are subject to no fixed rules or even specific professional guidelines regarding AI. Hence, companies have tripped up but suffered little more than a short-lived PR fuss.
- Last February, MIT researchers found that facial recognition systems often misidentified the gender of women of color. Some of the companies involved revised their software.
- In October, Amazon pulled an internal AI recruiting tool when it found that the system favored men over women.
- "Technology is amplifying the inequality built into the current market," says Frank Pasquale, an expert on AI law at the University of Maryland.
The absence of rules of the road is in part because industry hands have cast tech regulation as troglodytic, says Meredith Whittaker, co-founder of the AI Now Institute at New York University. In addition, many AI systems and the companies that make them are opaque. "Technocratic smokescreens have made it difficult or intimidating for a lot of people to question the implications of these technologies," Whittaker tells Axios.
These and other tech sector behaviors have deepened some people's suspicions.
- The industry has made matters worse by testing rough-around-the-edges products on unsuspecting people: pedestrians in the case of autonomous vehicles, patients in the case of health care AI, and students in the case of educational software.
- In 2016, Cambridge Analytica quietly used Facebook data to sway Americans’ political opinions.
- In 2012, Facebook researchers quietly manipulated some users' newsfeeds — emphasizing positive posts for one group and negative ones for another — and monitored for an emotional response.
"This is a repeated pattern when market dominance and profits are valued over safety, transparency, and assurance," write Whittaker and her co-authors in an AI Now report published last month.
2. Who really wants AI?
AI is mushrooming in business and government, but several groups of Americans aren't so excited about it, Kaveh reports.
- Overall, 41% of American adults support the development of AI (chart), according to a new survey.
- But that leaves a lot of other people opposing it — a lot of women, low-wage earners, people without a college education and people without coding experience. The same goes for Republicans and people 73 or older.
- Essentially, those unsupportive of AI are those least likely to be involved in designing it — and the most likely to be adversely affected.
Why it matters: Without broad buy-in, what AI’s supporters hope for — better health care, safer cars — could be delayed, mired in a legitimacy crisis, or even result in a public revolt.
The data comes from a nationwide survey released Tuesday. It was carried out last June by a pair of social scientists — Yale's Baobao Zhang and Oxford's Allan Dafoe — and published by the Center for the Governance of AI.
- The division in support for AI may reflect the accountability gap (see 1 big thing above): AI makers — mostly rich, educated men — may not be very attuned to the perceptions and preferences of the people their creations will affect.
- "These people are least likely to have had experiences that would lead them to be wary of the actions of for-profit companies," Whittaker of the AI Now Institute tells Axios.
"It's in the public interest to build AI well, but everyone has to be convinced," says Dafoe.
- "Consensus isn't there, and there's a real risk that there could be a political backlash against the development and deployment of AI."
3. The Dark Overlord is hiring
When the luster wears off and the first wave of talent packs up, a once-dazzling tech startup looks for dependable, experienced new workers. It's the story of countless late-stage startups — not to mention a few less official outfits.
Axios' Joe Uchill writes: A recent job listing for a "goal-oriented" candidate with a "winning attitude" who is "used to objectives and achieving them" is not for a Bay Area tech firm but rather a cybercriminal collective called The Dark Overlord.
- The group is known for targeting flashy, high-profile victims. It famously threatened to leak "Orange is the New Black" and "Game of Thrones" content if networks didn't pay up. More recently it has begun leaking insurance documents related to 9/11.
- The job ad, archived by the threat intelligence firm Digital Shadows, is more "The Office" than "Mr. Robot."
"If you saw that ad pop up on Indeed, you’d think it was an average tech company," said Rick Holland, chief information security officer and vice president of strategy at Digital Shadows.
- The job requires 10 years' experience in software design, network management or systems administration, with 5 years working in a "team-based cooperation environment."
- Applicants will submit to certification and skills testing. ("We do that too!" notes Holland.)
- The Dark Overlord offers either a salary or commission payment structure, with the potential for a raise after a year — if the candidate passes a 90-day probationary period.
Save for the higher figures (the job pays a starting salary of as much as 50,000 British pounds a month), the salary option would fit in any office park.
The bottom line: Holland says business has been down for The Dark Overlord. Deposits into the group's cryptocurrency accounts have dwindled; meanwhile, there has been turnover at the office.
- Digital Shadows notes that the group allegedly lost some of its talent to arrests recently — a problem less common, though not entirely unheard of, in the more conventional tech industry.
4. Worthy of your time
5. 1 disruption thing: The newest bot jobs
Robots are getting incrementally sharper. Once gimmicky and clumsy, robo-vacuums are hoovering up hard-to-reach kitchen corners, and warehouse bots can move around massive amounts of merchandise.
Erica Pandey writes: Just this week, we have news of two more complex tasks: