Adobe wants to make it easier for artists to blacklist their work from AI scraping
Content credentials are based on C2PA, an internet protocol that uses cryptography to securely label images, video, and audio with information clarifying where they came from—the 21st-century equivalent of an artist’s signature.
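The real C2PA standard uses X.509 certificates and a structured manifest format embedded in the asset itself, but the core idea of a tamper-evident provenance label can be illustrated with a toy sketch. Everything below (the key, field names, and functions) is hypothetical and simplified for illustration; it is not the C2PA wire format.

```python
# Toy sketch of a tamper-evident provenance label. Real C2PA manifests are
# signed with X.509 certificates; this example substitutes a simple HMAC
# over a JSON manifest that is bound to the asset's content hash.
import hashlib
import hmac
import json

SIGNING_KEY = b"creator-private-key"  # hypothetical key, for illustration only

def attach_credential(image_bytes: bytes, metadata: dict) -> dict:
    """Bind provenance metadata to the asset's content hash and sign it."""
    manifest = dict(metadata, content_hash=hashlib.sha256(image_bytes).hexdigest())
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify_credential(image_bytes: bytes, credential: dict) -> bool:
    """Return True only if neither the image nor its metadata was altered."""
    manifest = credential["manifest"]
    if manifest.get("content_hash") != hashlib.sha256(image_bytes).hexdigest():
        return False  # the image itself was edited or swapped
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

image = b"\x89PNG...fake image bytes..."
cred = attach_credential(image, {"creator": "Jane Artist", "ai_training": "not allowed"})
assert verify_credential(image, cred)             # untouched asset verifies
assert not verify_credential(image + b"x", cred)  # edited asset fails
```

The point of the sketch is the binding: because the signature covers both the metadata (including a "do not train" preference) and a hash of the pixels, neither can be changed without the credential failing verification.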
Although Adobe had already integrated the credentials into several of its products, including Photoshop and its own generative AI model Firefly, the new Adobe Content Authenticity app allows creators to apply them to their work regardless of whether it was created using Adobe tools. The company is launching a public beta in early 2025.
The new app is a step in the right direction toward making C2PA more ubiquitous and could make it easier for creators to start adding content credentials to their work, says Claire Leibowicz, head of AI and media integrity at the nonprofit Partnership on AI.
“I think Adobe is at least chipping away at starting a cultural conversation, allowing creators to have some ability to communicate more and feel more empowered,” she says. “But whether or not people actually respond to the ‘Do not train’ warning is a different question.”
The app joins a burgeoning field of AI tools designed to help artists fight back against tech companies by making it harder for those companies to scrape copyrighted work without consent or compensation. Last year, researchers from the University of Chicago released Nightshade and Glaze, two tools that let users add an invisible poison attack to their images: Nightshade causes AI models to break when the protected content is scraped, while Glaze conceals an artist's style from AI models. Adobe has also created a Chrome browser extension that allows users to check website content for existing credentials.