Meta is changing the way it labels AI-generated images across its platforms, replacing the ‘Made with AI’ label with an ‘AI info’ label after users complained that the original label was being wrongly applied to their media.
Meta in April announced its intention to more widely flag AI-generated content and manipulated media on its platforms, in addition to asking people who used AI to disclose their use of the technology.
However, several users were shocked and frustrated to find that even lightly edited photographs and digital art were labelled as being made with AI. Others speculated that the label was triggered by third-party photo editing tools used to modify posts before uploading them.
Meta acknowledged a lack of alignment when it came to classifying some kinds of media, but said it would continue to work on this in order to give users the context they needed.
“Like others across the industry, we’ve found that our labels based on these indicators weren’t always aligned with people’s expectations and didn’t always provide enough context,” admitted Meta in an update to an earlier statement about flagging AI content.
“For example, some content that included minor modifications using AI, such as retouching tools, included industry standard indicators that were then labeled “Made with AI.” While we work with companies across the industry to improve the process so our labeling approach better matches our intent, we’re updating the “Made with AI” label to “AI info” across our apps, which people can click for more information,” said the social media giant.
While AI art has taken off following the release of numerous text-to-image generators and photo editors, many artists and creators boycott such content over data scraping concerns.
Meta’s ‘Made with AI’ label triggered a reputation crisis for some creators, who insisted their works were largely or wholly human-made.