Adobe Plans Robots.txt-Like Feature to Protect Images from AI Models

Adobe has launched a new web-based tool aimed at giving content creators more control over how their images are used, especially in the age of artificial intelligence. Drawing inspiration from the long-standing robots.txt file used by websites to guide web crawlers, Adobe wants to establish a similar standard for images that tells AI companies which files should not be used to train their models.
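
For context, robots.txt is a plain-text file of per-crawler rules that well-behaved bots consult voluntarily before fetching pages. The minimal sketch below uses Python's standard urllib.robotparser module to show how a compliant crawler is supposed to check those rules; the “ExampleAIBot” user agent and the rules themselves are hypothetical.

```python
# Minimal sketch of how a compliant crawler consults robots.txt, using
# Python's standard library. "ExampleAIBot" and the rules below are
# hypothetical; nothing technically enforces this check.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: ExampleAIBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A bot that honors the file is blocked; one that never checks is not.
print(parser.can_fetch("ExampleAIBot", "https://example.com/photo.jpg"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/photo.jpg"))  # True
```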

However, getting AI firms to follow this standard could be a significant hurdle: many AI crawlers already ignore the robots.txt rules that websites publish today.

The tool is built around “content credentials,” metadata embedded in image files that helps verify the authenticity and ownership of the content. The approach aligns with the broader goals of the Coalition for Content Provenance and Authenticity (C2PA), the industry group behind an open standard for tracing the provenance of digital content.

Adobe’s new platform, known as the Adobe Content Authenticity App, allows creators to attach these credentials to their images, even if the files weren’t created using Adobe’s software. The app enables users to tag up to 50 JPG or PNG images at a time with personal details like their name and social media profiles. Additionally, users can opt to include a signal requesting that the images not be used for AI model training.
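
To make that signal concrete, here is an illustrative sketch of what such a credential payload could look like, written as a Python dictionary. The structure loosely echoes the training-and-mining assertion described in C2PA materials, but the labels, schema, and creator fields shown here are assumptions for illustration, not Adobe’s actual format.

```python
# Illustrative only: a credential payload in the spirit of C2PA's
# training-and-mining assertion. Labels and schema are assumptions,
# not Adobe's actual on-disk format.
credential = {
    "creator": {
        "name": "Example Artist",  # hypothetical creator identity
        "social": ["https://www.linkedin.com/in/example-artist"],
    },
    "assertions": [
        {
            "label": "c2pa.training-mining",  # assumed assertion label
            "data": {
                "entries": {
                    "c2pa.ai_training": {"use": "notAllowed"},
                    "c2pa.ai_generative_training": {"use": "notAllowed"},
                },
            },
        },
    ],
}
```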

Adobe has also partnered with LinkedIn to strengthen identity verification: users can show that the name attached to their credentials matches the verified name on their LinkedIn profile. Instagram and X (formerly Twitter) profiles can also be attached, but there is currently no verification support for those platforms.

Despite the tool’s promising features, Adobe hasn’t yet secured formal commitments from any AI companies to honor the standard. According to the company, discussions with major AI developers are underway to encourage adoption.

The success of this initiative largely depends on whether AI companies respect the embedded metadata. Without industry-wide compliance, creators may still find their work used in AI training datasets against their wishes.

This issue is not new. Last year, Meta (formerly Facebook) faced backlash after it automatically applied a “Made with AI” label to photos that had only been lightly edited, prompting complaints from photographers. Although Meta later softened the label to “AI info,” the controversy exposed inconsistencies among C2PA member companies, a group that includes both Adobe and Meta.

Andy Parsons, Senior Director of Adobe’s Content Authenticity Initiative, emphasized that the app was developed based on feedback from creators. With global copyright and AI training regulations still fragmented, the tool is Adobe’s way of giving artists a clear voice in how their work is used.

“Content creators want a straightforward method to indicate that their work should not be used for generative AI training,” Parsons told TechCrunch. “We’ve heard from independent artists and agencies who are asking for more control over how their content is used.”

In addition to the web app, Adobe is rolling out a Chrome extension that can detect content credentials on supported platforms. The extension combines digital fingerprinting, cryptographic metadata, and open-source watermarking so that credentials can be recovered even after an image is edited or modified. Where credentials are found, users see a small “CR” (Content Credentials) symbol on the image, including on platforms like Instagram that don’t natively support the standard.
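
As a rough picture of that layered approach, the sketch below checks for embedded metadata first and falls back to a fingerprint lookup when the metadata has been stripped. It is a conceptual sketch, not Adobe’s implementation, and every name in it is hypothetical.

```python
# Conceptual sketch of layered credential recovery; not Adobe's code.
import hashlib
from typing import Optional


def find_credentials(image_bytes: bytes,
                     fingerprint_index: dict) -> Optional[dict]:
    # Layer 1: embedded cryptographic metadata. A real verifier would parse
    # and signature-check a C2PA manifest; a crude byte scan stands in here.
    if b"c2pa" in image_bytes:
        return {"source": "embedded-metadata"}

    # Layer 2: fingerprint lookup. Production systems use perceptual hashes
    # that survive edits and recompression; an exact SHA-256 digest is used
    # only to keep this sketch self-contained and runnable.
    digest = hashlib.sha256(image_bytes).hexdigest()
    return fingerprint_index.get(digest)


# Usage: a tiny index that maps one known fingerprint back to its creator.
photo = b"\x89PNG...image-with-stripped-metadata..."
index = {hashlib.sha256(photo).hexdigest(): {"creator": "Example Artist"}}
print(find_credentials(photo, index))  # {'creator': 'Example Artist'}
```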

While the tool currently supports only images, Adobe plans to expand it to video and audio content in the future.

As debates around the intersection of AI and digital art intensify, Adobe’s initiative could play a pivotal role in preserving artists’ rights and ensuring proper attribution, even if it doesn’t automatically guarantee copyright protection.