The Republican-controlled Senate Judiciary Committee has said it won't accept the witness Google offered for a Wednesday subcommittee hearing on free speech online, according to multiple sources.
Why it matters: Facebook and Twitter will be represented at the hearing, chaired by Sen. Ted Cruz (R-Texas), but Google may not be there to respond to expected criticism that it is biased against conservatives.
The race is on — between American and Chinese companies alike — to get U.S. consumers to pay for everything with their phones. But as the chart below makes plain, that market barely exists.
Data: CivicScience; Chart: Andrew Witherspoon/Axios
Anti-Semitic and Islamophobic comments and other hateful speech caused YouTube to disable its official live chat for a Tuesday House Judiciary Committee hearing on the role of social media in the rise of white nationalism and related hate crimes.
The big picture: These YouTube comments underscore how difficult it is for Big Tech companies to police what happens on their platforms. Comments continued in unofficial live chats even after YouTube disabled its official stream of the hearing. During the hearing, representatives from Facebook and Google discussed how the companies are addressing white nationalist content, such as the real-time videos of the Christchurch massacre.
"Today’s Democratic Party is increasingly perceived as dominated by its 'woke' left wing. But the views of Democrats on social media often bear little resemblance to those of the wider Democratic electorate," write Nate Cohn and Kevin Quealy of the N.Y. Times.
"The outspoken group of Democratic-leaning voters on social media is outnumbered, roughly 2 to 1, by the more moderate, more diverse and less educated group of Democrats who typically don’t post political content online, according to data from the Hidden Tribes Project."
Why it matters: The more moderate group "has the numbers to decide the Democratic presidential nomination in favor of a relatively moderate establishment favorite, as it has often done in the past."
Sens. Mark Warner (D-Va.) and Deb Fischer (R-Neb.) will debut a measure Tuesday that cracks down on manipulative design features that major web platforms like Google, Facebook and Amazon use to capture users' consent or data.
Why it matters: Lawmakers are trying to put checks on the fundamental design choices that Silicon Valley uses to attract and retain users. Those “dark patterns” targeted by the new legislation can get users to agree to data collection or other practices they would not consent to if they understood that’s what they were doing.
The EU has published a set of ethical guidelines for "trustworthy AI" — a long wishlist of idealistic principles, many still technically out of reach, meant to keep unwanted harms from the powerful technology at bay.
Why it matters: It's an early, earnest attempt to get countries to buy into general ethical principles. But without an enforcement tool, it is unlikely to result in safe AI.
Twitter has reduced the maximum number of accounts a user can follow per day from 1,000 to 400, it said on Monday.
Why it matters: Spammers commonly follow hundreds of accounts in an attempt to get them to follow back and juice their follower count, often by writing software scripts that automate the behavior. Twitter is under pressure to limit the extent of fraud, but slowing down the bots won't eradicate the behavior.
While Google's AI ethics outreach efforts are mired in controversy, Microsoft has managed to engender significantly less animosity through a more systematic approach.
Driving the news: Google appointed a controversial outside advisory board, drew an onslaught of protest and disbanded the group a week later, succeeding only in antagonizing people of many different perspectives.
A new plan for regulations released by the U.K. government Monday puts legal responsibility on tech companies for any harmful or unlawful content that appears on their properties. This means tech giants could face big fines if they don't remove things like terrorist videos or hate speech in a timely fashion.
Why it matters: If passed, the proposed rules would force tech companies to operate with much more rigor when policing content on their properties. While the rules would only apply to the treatment of content within the U.K., they could have major implications for how tech companies operate and are regulated globally.