Meta and Microsoft join AI standards group on "synthetic media"
Meta and Microsoft joined a group working on a framework to promote responsible practices in the development, creation and sharing of media created by AI, per an announcement Wednesday shared first with Axios.
Driving the news: The two tech giants, both pushing ahead with their respective generative AI projects, are joining the Partnership on AI's work on the framework, with plans to meet later this month to discuss recommendations and case studies.
- The group is examining the technical, legal and social implications of AI-generated work.
- "Meta is excited to join the cohort of supporters of Partnership on AI’s Responsible Practices for Synthetic Media and to work with PAI on developing this into a nuanced approach to educating people about generated media,” Nick Clegg, president of global affairs at Meta, said in a statement.
- "We're optimistic about the developments in this space and about using this technology to bring more tools for creative expression to our community."
What they're saying: "Meta and Microsoft reach billions of people daily with creative content that is rapidly evolving," Claire Leibowicz, head of AI and media integrity at the Partnership on AI, said in a statement.
- "These companies have both the expertise and the access needed to reach users all around the world and help them learn to discern AI-generated images, video, and other media as synthetic media’s prevalence grows."
Be smart: Founding members of the framework, first launched in February, include Adobe, Bumble, OpenAI, TikTok, BBC, the Canadian Broadcasting Corporation and WITNESS, a human rights and technology group.
- Adobe has its own Content Authenticity Initiative (CAI), launched four years ago, that allows the provenance of an image to be tracked over time, including how and when it was altered, Axios previously reported.
- CAI and the new framework from Partnership on AI are "separate but complementary initiatives," said Aimee Bataclan, spokesperson for the Partnership on AI, with the framework giving recommendations for anyone creating and distributing media.
- "It incorporates several recommendations related to CAI in its sections on disclosure. However, the framework’s interventions go beyond disclosure — including an emphasis on responsible and harmful use cases for synthetic media, an emphasis on informed consent, and broader transparency."
Our thought bubble: Tech industry groups often band together to establish best practices and guidelines in hopes of heading off more formal regulation.
- The companies involved here are well aware that regulation is likely coming for AI, and in most cases they're embracing that. But they're also seizing the chance to push their own ideas forward in the name of responsibility while lawmakers decide what to do.