
Hugging Face empowers users with deepfake detection tools

Image: a collage representing digital identity, showing a woman with a pixelated rectangle hovering over her face.

Hugging Face wants to help users fight back against AI deepfakes.

The company, which develops machine learning tools and hosts AI projects, also offers resources for the ethical development of AI. Those resources now include a collection called “Provenance, Watermarking and Deepfake Detection,” featuring tools for embedding watermarks in audio files, LLM output, and images, as well as tools for detecting deepfakes.

The widespread availability of generative AI technology has led to a proliferation of audio, video, and image deepfakes. Not only does the deepfake phenomenon contribute to the spread of misinformation, but it also leads to plagiarism and copyright infringement of creative works. Deepfakes have become such a threat that President Biden’s AI executive order specifically called for the watermarking of AI-generated content. Google and OpenAI have recently launched tools for embedding watermarks in images created by their generative AI models.

The resources were announced by Margaret Mitchell, researcher and chief ethics scientist at Hugging Face and a former Google employee. Mitchell and others focused on AI’s social impact created the collection of what she called “state-of-the-art technology” to address “the rise of AI-generated ‘fake’ human content.”

Some of the tools in the collection are geared toward photographers and designers who want to keep their work from being used to train AI models. Fawkes, for example, “poisons” publicly available photos so that facial recognition software can make only limited use of them. Other tools, like WaveMark, Truepic, Photoguard, and Imatag, protect against unauthorized use of audio or visual works by embedding watermarks that can be detected by certain software. A specific Photoguard tool in the collection makes an image “immune” to generative AI editing.
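For a rough sense of how invisible image watermarking works, here is a minimal sketch using the open source invisible-watermark Python package, which is not one of the tools named in the collection; the file names and the "HF24" payload are placeholders. It hides a short byte string in the image’s frequency data and reads it back with a matching decoder.

```python
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

# Placeholder file names and payload, for illustration only.
payload = "HF24"  # 4 bytes -> 32 bits

# Embed: hide the payload in the image's frequency domain (DWT + DCT).
image = cv2.imread("original.png")
encoder = WatermarkEncoder()
encoder.set_watermark("bytes", payload.encode("utf-8"))
watermarked = encoder.encode(image, "dwtDct")
cv2.imwrite("watermarked.png", watermarked)

# Detect: a matching decoder recovers the hidden bits from the pixels,
# even though the change is invisible to the eye.
decoder = WatermarkDecoder("bytes", 32)
recovered = decoder.decode(cv2.imread("watermarked.png"), "dwtDct")
print(recovered.decode("utf-8"))  # -> "HF24"
```

The tools in the collection are more sophisticated than this, but the embed-then-detect round trip is the same basic idea.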

Adding watermarks to media created by generative AI is becoming critical for protecting creative works and identifying misleading information, but it isn’t foolproof. Watermarks embedded in metadata are often stripped automatically when a file is uploaded to a third-party site like a social media platform, and nefarious users can find workarounds, such as simply taking a screenshot of a watermarked image.
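To illustrate the metadata problem, here is a minimal sketch with hypothetical file names: a provenance label stored in EXIF metadata survives only as long as the file itself does, and re-saving just the pixels, roughly what many platforms do when they reprocess an upload, silently drops it.

```python
from PIL import Image

# Hypothetical file names, for illustration only.
original = Image.open("labeled_as_ai_generated.jpg")
print(dict(original.getexif()))   # EXIF fields, where metadata-based labels live

# Copy only the pixels into a fresh image and re-encode it, roughly what
# happens when a platform reprocesses an upload or a user takes a screenshot.
pixels_only = Image.new(original.mode, original.size)
pixels_only.putdata(list(original.getdata()))
pixels_only.save("reuploaded.jpg", "JPEG")

print(dict(Image.open("reuploaded.jpg").getexif()))  # -> {} : the label is gone
```

Watermarks hidden in the pixels themselves, like the sketch above, are designed to survive this kind of re-encoding, though as noted, determined users can still find ways around them.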

Nonetheless, freely available tools like the ones Hugging Face shared are way better than nothing.

Mashable