Google open-sourced its watermarking tool for AI-generated text


An LLM generates text one token at a time. These tokens can represent a single character, word or part of a phrase. To create a sequence of coherent text, the model predicts the next most likely token to generate. These predictions are based on the preceding words and the probability scores assigned to each potential token.
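The prediction step described above can be sketched as turning the model's raw scores into a probability distribution and picking the most likely token. The logit values and token set below are made-up illustrations, not real model outputs:

```python
import math

def softmax(logits: dict) -> dict:
    # Convert raw model scores (logits) into a probability distribution.
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical logits for the token following "My favorite tropical fruits are"
logits = {"mango": 2.0, "lychee": 1.5, "papaya": 1.3, "durian": 1.0}
probs = softmax(logits)
next_token = max(probs, key=probs.get)  # greedy choice: the highest-probability token
```

In practice models usually sample from this distribution rather than always taking the top token, which is what gives a watermarking scheme room to nudge the choice.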

For example, take the phrase "My favorite tropical fruits are __." The LLM might complete the sentence with the tokens "mango," "lychee," "papaya," or "durian," each assigned a probability score. When there's a range of different tokens to choose from, SynthID can adjust the probability score of each predicted token, in cases where doing so won't compromise the quality, accuracy, or creativity of the output.
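A minimal sketch of that kind of adjustment is below. It is not SynthID's actual algorithm; it assumes a hypothetical keyed function that marks roughly half the vocabulary as "favored" at each position and gives those tokens a small, arbitrary boost before renormalizing:

```python
import hashlib

def favored(token: str, context: tuple, key: str = "demo-key") -> bool:
    # Hypothetical keyed pseudo-random function: hash a secret key, the
    # recent context, and the candidate token; roughly half of all tokens
    # come out "favored" at any given position.
    digest = hashlib.sha256(f"{key}|{context}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def adjust(probs: dict, context: tuple) -> dict:
    # Nudge probability mass toward favored tokens, then renormalize.
    # The 1.5x boost is an illustrative choice, not SynthID's parameter.
    boosted = {tok: p * (1.5 if favored(tok, context) else 1.0)
               for tok, p in probs.items()}
    total = sum(boosted.values())
    return {tok: p / total for tok, p in boosted.items()}

# Made-up distribution for "My favorite tropical fruits are __"
raw = {"mango": 0.4, "lychee": 0.25, "papaya": 0.2, "durian": 0.15}
adjusted = adjust(raw, ("fruits", "are"))
```

Because the boost is small and applied only where several plausible tokens compete, the ranking of strongly preferred tokens is largely preserved, which is how quality can be maintained.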

This process is repeated throughout the generated text, so a single sentence might contain ten or more adjusted probability scores, and a page could contain hundreds. The combined pattern of the model's word choices and the adjusted probability scores is considered the watermark.
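Detection can then replay the same keyed function over a finished text and check how often its tokens land on the favored side. The sketch below is a simplified illustration under that assumption (SynthID's real detector is more sophisticated): unwatermarked text should score near 0.5, while biased sampling pushes the score higher.

```python
import hashlib

def favored(token: str, position: int, key: str = "demo-key") -> bool:
    # The same hypothetical keyed function a generator would use to bias
    # its sampling; the detector replays it over the candidate text.
    digest = hashlib.sha256(f"{key}|{position}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_score(tokens: list) -> float:
    # Fraction of tokens that fall in the "favored" half at their position.
    hits = sum(favored(tok, i) for i, tok in enumerate(tokens))
    return hits / len(tokens)

score = watermark_score("my favorite tropical fruits are mango and lychee".split())
```

With only a handful of tokens any single score is noisy; over hundreds of adjusted positions on a full page, the statistical signal becomes much more reliable.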




