Google just open-sourced its AI text detection tool for everyone

“PCWorld is on a journey to delve into information that resonates with readers and creates a multifaceted tapestry to convey a landscape of profound enrichment.” That drivel may sound like it was AI-generated, but it was, in fact, written by a fleshy human — yours truly.

Truth is, it’s hard to know whether a particular chunk of text is AI-generated or actually written by a human. Google is hoping to make such text easier to spot by open-sourcing its new software tool.

Google calls it SynthID, a method that “watermarks and identifies AI-generated content.” SynthID was previously limited to Google’s own language and image generation systems, but the company has announced that it’s being released as open-source code that can be applied to other AI text generation setups as well. (If you’re more comp-sci literate than I am, you can check out all the details in the prestigious journal Nature.)

But in layman’s terms (at least to the degree that this layman can actually understand them), SynthID hides specific patterns in images and text that are generally too subtle for humans to notice, along with a scheme for detecting those patterns later.
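
To make that idea concrete, here’s a toy sketch in Python. This is emphatically not Google’s actual algorithm, just an illustration of the general watermarking concept: the generator holds a secret key and quietly prefers word choices that score high under a keyed hash, and a detector holding the same key re-scores the finished text.

```python
import hashlib

KEY = b"shared-secret"  # hypothetical key known to both generator and detector

def keyed_score(prev_word: str, word: str) -> float:
    """Keyed hash of a word pair, mapped to a value in [0, 1)."""
    digest = hashlib.sha256(KEY + prev_word.encode() + word.encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def detect(words: list[str]) -> float:
    """Average keyed score across the text.

    Ordinary text averages around 0.5; text whose generator
    consistently preferred high-scoring words averages noticeably higher.
    """
    scores = [keyed_score(a, b) for a, b in zip(words, words[1:])]
    return sum(scores) / len(scores)

print(detect("the quick brown fox jumps over the lazy dog".split()))
```

The key point is that the detector never proves anything outright; it just measures how strongly the text leans toward the keyed pattern, which is why detection comes back as a confidence level rather than a yes-or-no answer.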

SynthID can “encode a watermark into AI-generated text in a way that helps you determine if text was generated from your LLM without affecting how the underlying LLM works or negatively impacting generation quality,” according to a post on the open-source machine learning database Hugging Face.
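
For developers, the Hugging Face post shows the hook living in the transformers library’s generation API. Here’s a rough sketch of what that looks like, assuming a recent transformers release with SynthID Text support; the model name and key values below are placeholders, not anything Google ships:

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

# Placeholder model; any causal LM supported by transformers should work.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")

# The watermark is keyed: these integers are made-up placeholder values,
# and ngram_len controls how many tokens each hidden pattern spans.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],
    ngram_len=5,
)

inputs = tokenizer("Write a short product blurb.", return_tensors="pt")
output = model.generate(
    **inputs,
    watermarking_config=watermarking_config,  # watermark applied during sampling
    do_sample=True,
    max_new_tokens=100,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

As the Hugging Face post notes, the underlying model isn’t changed at all; the watermark is woven in as the text is sampled.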

The good news is that Google says these watermarks can be integrated with pretty much any AI text generation tool. The bad news is that detecting those watermarks still isn’t an exact science; the detector offers degrees of certainty, not a definitive verdict.

While SynthID watermarks can survive some of the basic tricks used to get around auto-detection, like “don’t call it plagiarism” word-swapping, the system can only indicate the presence of a watermark with varying degrees of certainty. That certainty drops sharply for “factual responses” (some of the most important and problematic uses of generative text) and for big batches of text that have been run through automatic translation or other rewriting.

“SynthID text is not designed to directly stop motivated adversaries from causing harm,” says Google. (And frankly, even if Google had made a panacea against LLM-generated misinformation, I think it would be hesitant to frame it as such for liability reasons.) The watermarking system also has to be integrated into a text generation tool before that tool is ever used, so there’s nothing stopping someone from simply choosing not to integrate it, as malicious state actors or more explicitly “free” tools like xAI’s Grok might well do.

And I should point out that Google isn’t exactly being altruistic here. While the company is pushing its own AI tools on both consumers and businesses, its core Search product is threatened by a web that seems to be rapidly filling up with auto-generated text and images. Competitors like OpenAI might elect not to adopt these kinds of tools simply as a matter of doing business, hoping instead to establish a standard of their own and drive the marketplace toward their own products.
