Meta, OSI tussle over definition of open source AI
Illustration: Lindsey Bailey/Axios
Meta took issue with a new definition of open source AI that would require model creators to detail their sources of training data, among other rules, to meet the standard.
Why it matters: Meta makes its Llama models freely available for others to use, but doesn't provide full disclosure of all of the elements that go into them.
Driving the news: The Open Source Initiative on Monday published its definition of what constitutes a truly open source AI model, outlining four freedoms that it says must apply to AI, just as they do to software, for it to count as open source.
The organization says people should be able to:
- Use the system for any purpose, without needing permission;
- Study how the system works and see its components;
- Modify the system for any purpose and without restrictions;
- Freely share the system, again with or without modifications and for any purpose.
The other side: Meta, for its part, said it disagrees with the OSI's definition.
- "There is no single open source AI definition, and defining it is a challenge because previous open source definitions do not encompass the complexities of today's rapidly advancing AI models," a spokesperson told The Verge, adding that the company will continue to work with the OSI and other industry groups.
Between the lines: The OSI can't stop companies from calling their products "open source," but its new definition gives ammo to advocates of fuller disclosure of the weights and training data that differentiate one model from another.
Go deeper: "Open" software needs an AI rethink
Editor's note: This story has been corrected with the name of the Open Source Initiative.
