Situational awareness: Pinterest has set IPO terms that value the company lower than its most recent investment round, Axios' Dan Primack reports. It plans to begin trading late next week on the New York Stock Exchange.
Congrats to the Baylor Lady Bears who won the NCAA women's hoops championship and to "Josh_Boulder" who edged me out to win this year's Login women's bracket challenge.
Josh, if you are reading, drop me an email — we have some Axios swag for you.
1 big thing: Microsoft out-savvies Google on AI ethics
Illustration: Sarah Grillo/Axios
While Google's AI ethics outreach efforts are mired in controversy, Microsoft has managed to engender significantly less animosity through a more systematic approach.
Microsoft's approach: The company began by soliciting a wide range of input, laid out its principles in a book, and is now incorporating those principles into its product development process.
CEO Satya Nadella penned an op-ed back in 2016 talking about shared responsibilities around AI.
A few months later, at the company's Build developer conference, he laid out the potential for an Orwellian future if AI isn't handled right.
That summer, Microsoft created Aether, an internal committee that advises on and evaluates AI ethics issues. The group, which includes more than 100 Microsoft employees, is led at the executive level by president Brad Smith and AI and research head Harry Shum.
Microsoft has stood fast against internal and external critics and defended its work with the U.S. government, including the military, while still pledging to evaluate each project to make sure it meets the company's ethical standards.
With some of the thorniest issues, such as facial recognition, the company has also called on legislators to create rules of the road.
Most recently, Microsoft has moved to make sure ethical considerations are incorporated into product release cycles in the same way that the company added security and privacy reviews in the past.
Google's approach: The company has taken what appears to be a more case-by-case approach, even though it, too, has published AI guidelines.
For example, the company agreed to take part in Project Maven, a U.S. military project that applies AI to analyze drone footage, only to drop the contract amid an employee outcry.
Similarly, the company appointed an outside advisory committee only to disband it a week later, following protests, in particular over the inclusion of the president of the Heritage Foundation, someone known for views perceived as anti-transgender, anti-gay and anti-immigrant.
The process of coming up with the committee itself was flawed, some insiders say, with many of the company's own experts not consulted in the group's formation.
Meanwhile, Google actually has an internal committee to advise on AI issues, but it has kept a far lower profile than Microsoft's Aether. (Bloomberg had to do a story reminding people that it exists.)
The bottom line: It's not clear that Google's positions are any more controversial than Microsoft's, but Google's haphazard execution has hampered its AI ethics effort. By stating its principles and sticking to them, even when taking some unpopular stances, Microsoft has displayed more political savvy.
2. U.K. unveils sweeping plan to rein in Big Tech
A new plan for regulations released by the U.K. government Monday puts legal responsibility on tech companies for any harmful or unlawful content that appears on their properties, Axios' Sara Fischer writes.
Why it matters: This means tech giants could face big fines if they don't remove things like terrorist videos or hate speech in a timely fashion.
If passed, the proposed laws would force tech companies to operate with much more rigor when policing content on their properties.
While the law only extends to the treatment of content within the U.K., it could have major implications for how tech companies operate and are regulated globally.
Details: The proposed regulations would apply to any company that allows users to share or discover user-generated content or interact with each other online.
That means the rules would apply to most of the internet's biggest players, including social media sites like Facebook, public discussion forums like Reddit, messaging services like WhatsApp, and search engines like Google or Bing.
The plan includes a mandatory "duty of care" provision, which requires companies to take reasonable steps to tackle illegal and harmful activity on their services.
It recommends even stricter requirements for companies to take tougher action around content related to terrorism, child sexual exploitation and abuse.
The plan calls for a new, independent regulator with powerful enforcement tools to oversee compliance with the new laws.
Be smart: Such sweeping measures, if passed, would likely set an international standard for how tech companies should be policed.
Right now, a similar dynamic is playing out globally around privacy. Europe's major privacy law, which took effect in May 2018, is serving as a model for the U.S. and other countries as they craft their own privacy policies.
The EU recently passed a major copyright law that requires sites like Facebook and Google to pay a fee when they summarize news stories and link to them.
Yes, but: If the rules are as far-reaching as proposed, it's conceivable that some global internet companies would simply write off their U.K. presence rather than comply.
What's next: A 12-week consultation on the proposals begins Monday, and will be followed by final proposals for actual legislation.
3. What people-finder sites know about you
We've already looked at what Google, Facebook and Tesla know about you. Now my colleague Gerald Rich has a look at the details widely available from various "people finder" sites, such as White Pages and Spokeo.
Not every site contains the same detailed information. But a quick search can reveal more information about you than you’re comfortable sharing — like:
Court records (like marriage, divorce, or arrest records)
Relatives (based on shared last names) and their info
Roommates (based on shared addresses) and their info
Additional information from data breaches. Check your address here.
How it works: There are more than a dozen people-finder sites that act as data brokers, vacuuming up public and private records, like court and motor vehicle records, or the Postal Service's change-of-address database.
These sites can glean information from subscriptions to magazines or groups and brands you’ve engaged with on social media.
In addition to the larger companies, between 100 and 200 smaller ones scrape that data from more prominent people-finder sites.
The backstory: People-finder sites, which have been publicly available since the 1990s, make money in various ways: ads, subscriptions and wholesale data brokerage.
They operate in a shadowy legal area similar to credit reporting agencies, but without as much regulation.
Go deeper: Gerald has more here, including what you can do to reduce your footprint.
4. Teespring's rough ride to profitability
Illustration: Aïda Amer/Axios
On-demand merchandise startup Teespring is profitable (and has been for a year) after 7 years in business, but it didn’t come easy, Axios' Kia Kokalitcheva reports.
The path included layoffs, CEO changes and a recapitalization that slashed its valuation to a mere $11 million.
The big picture: Teespring's saga inverts the usual garage-band-to-billionaires startup story. The company began as a Silicon Valley darling, attracting the early backing of Y Combinator, other top investors and media attention. Then the trouble began.
Millennials face a tough economic landscape — they're working for relatively low pay, are often saddled with student debt, and will be the first to confront the full impact of the new age of automation. (Axios)