Meta is tagging real photos as ‘Made with AI,’ say photographers
In February, Meta said it would start labeling photos created with AI tools on its social networks. Since May, the company has regularly tagged some photos with a “Made with AI” label on its Facebook, Instagram and Threads apps.
But the company’s approach has drawn ire from users and photographers, because the “Made with AI” label is also being attached to photos that were not created using AI tools.
There are plenty of examples of Meta automatically attaching the label to photos that were not generated with AI, such as this photo of the Kolkata Knight Riders winning the Indian Premier League cricket tournament. Notably, the label is only visible on the mobile apps and not on the web.
Plenty of other photographers have raised concerns over their images being wrongly tagged with the “Made with AI” label. Their point is that simply editing a photo with a tool should not be enough to trigger the label.
Former White House photographer Pete Souza said in an Instagram post that one of his photos was tagged with the new label. Souza told TechCrunch in an email that Adobe changed how its cropping tool works, and that you now have to “flatten the image” before saving it as a JPEG. He suspects that this step is what triggered Meta’s algorithm to attach the label.
“What’s annoying is that the post forced me to include the ‘Made with AI’ even though I unchecked it,” Souza told TechCrunch.
Meta would not answer TechCrunch’s questions on the record about Souza’s experience, or about other photographers who said their posts were incorrectly tagged.
In a February blog post, Meta said it relies on image metadata to decide when to apply the label.
“We’re building industry-leading tools that can identify invisible markers at scale — specifically, the ‘AI generated’ information in the C2PA and IPTC technical standards — so we can label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools,” the company said at the time.
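Meta has not published the details of its detection pipeline, but the IPTC standard it cites records AI involvement as a “digital source type” value embedded in an image’s metadata. The sketch below is a minimal illustration, not Meta’s code, of how a service could scan an uploaded JPEG for those markers; the file name and function are hypothetical.

```python
# Illustrative only: Meta's actual pipeline is not public. This sketch scans a
# JPEG's embedded XMP/IPTC metadata for the IPTC "digital source type" values
# that generative tools are expected to write.
from pathlib import Path

# IPTC Digital Source Type terms associated with generative AI:
#   trainedAlgorithmicMedia              -> image fully generated by an AI model
#   compositeWithTrainedAlgorithmicMedia -> image edited/composited with AI output
AI_SOURCE_TYPES = (
    b"trainedAlgorithmicMedia",
    b"compositeWithTrainedAlgorithmicMedia",
)

def has_ai_metadata(image_path: str) -> bool:
    """Return True if the file's raw bytes contain an AI-related
    IPTC digital source type marker (hypothetical helper)."""
    data = Path(image_path).read_bytes()
    return any(term in data for term in AI_SOURCE_TYPES)

if __name__ == "__main__":
    # "photo.jpg" is a placeholder file name for illustration.
    print(has_ai_metadata("photo.jpg"))
```

A production system would parse the XMP packet properly and verify C2PA manifests rather than substring-matching raw bytes; the point here is simply to show the kind of marker the standards define.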
As PetaPixel reported last week, Meta seems to be applying the “Made with AI” label when photographers use tools such as Adobe’s Generative Fill to remove objects.
While Meta hasn’t clarified when it automatically applies the label, some photographers have sided with Meta’s approach, arguing that any use of AI tools should be disclosed.
For now, Meta provides no separate labels to distinguish a photo a photographer merely cleaned up with an AI tool from one generated entirely by AI, so it can be hard for users to tell how much AI was actually involved. The label’s fuller wording, “Generative AI may have been used to create or edit content in this post,” only appears if you tap on the label.
Despite this approach, plenty of clearly AI-generated photos on Meta’s platforms carry no label at all. With U.S. elections only a few months away, social media companies are under more pressure than ever to handle AI-generated content correctly.