C2PA: Photoshop Developer Adobe's Plan to Counter Visual Misinformation
Attribution information is typically embedded in the metadata of digital assets, where it can easily be changed.
Adobe has released three open source tools intended to make it easier to verify the authenticity of visual content and trace where it came from.
These tools are part of Adobe's Content Authenticity Initiative (CAI), first announced in 2019 to counter the spread of visual misinformation (much of which is produced with Adobe's own editing software, Photoshop and Premiere Pro).
The company hopes that these tools will help developers embed this technology into their applications. The widespread adoption of this standard might also help the people who create original visual content.
What's Adobe's plan? How does this technology work? Here's all you need to know.
What's Adobe's Plan?
While efforts to tackle visual misinformation have largely focused on using AI to detect deepfakes and other altered media, Adobe is thinking about the problem from another angle.
It wants to make it easy for the public to find out who created the original visual media, and how these photos and videos were altered over time.
To this end, a group of companies led by Adobe, Microsoft, and the BBC established a technical standard that gives publishers, creators, and consumers the ability to trace the origin of different types of media.
This standard, developed by the Coalition for Content Provenance and Authenticity (C2PA), could also make it easier for creative professionals, artists, and photojournalists to receive credit for their work.
The tools released by Adobe are a way to push more developers to adopt this technology.
What Would C2PA Look Like in Action?
Attribution information – the who, what, and how of asset creation and modification – is typically embedded in the metadata of digital assets, where it can easily be changed or erased.
C2PA data, in contrast, "is cryptographically sealed and verifiable by an individual or organization along the path from creation to consumption".
If you view a photo or a video through an app or website that supports C2PA, you can see "content credentials", that is, data about the creation and alteration of the file.
In practice, using the tools provided by Adobe, a social media platform could let its users see the content credentials of all its images and videos simply by hovering their mouse over an icon.
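The tamper-evident idea behind content credentials can be illustrated with a toy sketch. This is not the real C2PA manifest format (which uses X.509 certificate signatures and a binary container embedded in the asset); here a Python HMAC stands in for a proper digital signature, purely to show why sealed attribution data, unlike ordinary metadata, cannot be quietly changed:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # toy stand-in for a creator's private signing key


def seal_credentials(asset_bytes: bytes, credentials: dict) -> dict:
    """Bind attribution data to the exact bytes of an asset."""
    payload = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "credentials": credentials,
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    # Real C2PA uses certificate-based signatures; HMAC is a simplification.
    payload["signature"] = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return payload


def verify_credentials(asset_bytes: bytes, sealed: dict) -> bool:
    """Return True only if neither the asset nor its credentials changed."""
    claimed = {k: v for k, v in sealed.items() if k != "signature"}
    if claimed["asset_sha256"] != hashlib.sha256(asset_bytes).hexdigest():
        return False  # the pixels were altered after sealing
    blob = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sealed["signature"])


photo = b"...original pixels..."
sealed = seal_credentials(photo, {"creator": "Jane Doe", "tool": "Photoshop"})

print(verify_credentials(photo, sealed))            # True: nothing changed
print(verify_credentials(photo + b"edit", sealed))  # False: asset was altered
sealed["credentials"]["creator"] = "Someone Else"
print(verify_credentials(photo, sealed))            # False: credentials altered
```

Any change to either the image bytes or the attribution record breaks the seal, which is what makes C2PA data verifiable "along the path from creation to consumption" in a way conventional metadata is not.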
What Are These Tools?
Adobe’s open source tools are:
C2PA Tool – A command-line utility that lets developers' applications interact with the C2PA standard to create, verify, and explore content credentials.
Rust SDK – A library of pre-compiled code from Adobe that helps developers build custom apps that create, verify, and display content credentials directly.
JavaScript SDK – A toolkit for displaying content credentials on the web, so sites can surface provenance data to their users.
App makers often use software development kits (SDKs) – essentially ready-made software kits from third-party developers – to cut down on time and effort.
Adobe told TechCrunch that the C2PA standard is receiving a "surprising amount of inbound interest" from companies that produce synthetic images and videos, such as deepfakes.
Deepfake technology involves using artificial intelligence (AI) to generate convincing images or videos of made-up or real people. It is surprisingly accessible and has been put to various uses, including in entertainment, misinformation, harassment, propaganda and pornography.
A recent study published in the Proceedings of the National Academy of Sciences found that people have just a 50 percent chance of correctly guessing whether a face was generated by artificial intelligence. AI-synthesised faces were found to be indistinguishable from real faces and, strikingly, were rated as more trustworthy.