Meta to Label AI-Generated Images on Facebook and Instagram

Meta Platforms plans to detect invisible markers embedded in AI-generated images and label such content across its platforms, signaling its digital origin. The labeling system will also apply to content from OpenAI, Microsoft, Adobe, Midjourney, Shutterstock, and Google.

Meta will start labeling AI-generated images from third-party providers, the company said Tuesday.
(Image Credit: Terry Schmitt/UPI)

Meta to Label AI-Generated Images: Tackling Deepfakes and Misinformation

The growing power of artificial intelligence (AI) brings both excitement and concern. Among its capabilities, AI can generate incredibly realistic images, blurring the line between reality and fiction. This poses a challenge for social media companies like Meta (formerly Facebook), which aim to provide a trustworthy and authentic user experience. Recognizing this potential pitfall, Meta announced a significant step toward transparency: starting this year, it will label AI-generated images shared on Facebook, Instagram, and Threads.

This initiative, explained by Meta’s president of global affairs Nick Clegg, utilizes “invisible markers” embedded within AI-generated images. These markers, developed by various tech companies including Meta, Google, OpenAI, and Adobe, enable the platform to identify and subsequently label such content. The label, likely resembling existing disclaimers for edited photos, informs users that the image is not a natural photograph but a digital creation.

Meta’s move addresses a vital need in the digital age. With AI-generated images becoming increasingly sophisticated, differentiating them from real photos can be a significant challenge even for tech-savvy users. This can fuel the spread of misinformation and “deepfakes,” manipulated videos used to make someone appear to say or do something they never did. The company’s effort builds upon existing practices of collaborating with other platforms to tackle shared challenges like child exploitation and violent content.

However, concerns remain. While Clegg expressed confidence in labeling images, he acknowledged that audio and video remain a hurdle. Further, written content generated by AI tools like the popular ChatGPT currently lacks viable labeling mechanisms. Additionally, questions surround Meta’s encrypted messaging service, WhatsApp, which might not be included in the labeling initiative despite similar risks of misinformation.

The recent rebuke by Meta’s independent oversight board regarding its policy on misleadingly doctored videos also highlights the challenges. The board argued that labeling, rather than complete removal, should be the approach for such content. Clegg acknowledged this need for improvement and sees the upcoming labeling system as a step toward addressing the board’s concerns.

Overall, Meta’s initiative represents a proactive approach to navigating the complex landscape of AI-generated content. While limitations and unanswered questions remain, this step is crucial in fostering trust and transparency on social media platforms. As AI continues to evolve, so too must our methods for responsible usage and clear communication of its outputs. Whether other platforms follow suit and how effectively the labeling system functions will be key to mitigating the potential harms of synthetic media in the digital age.

