The lawsuit alleges OpenAI crawled the web to amass huge amounts of data without people’s permission.

  • EnglishMobster@kbin.social

    The way generative AI works is by using things called “tokens”. Usually 1 word == 1 token, but compound words would be 2 tokens, punctuation would be a token, things like “-ed” or “-ing” could be tokens, etc.
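
    To make that concrete, here's a rough sketch of tokenization using OpenAI's open-source tiktoken library (assuming it's installed); the exact splits depend entirely on which encoding you load, so treat the output as illustrative only.

    ```python
    # Rough illustration of tokenization with tiktoken (pip install tiktoken).
    # Text becomes a list of integer IDs; decoding each ID shows how words,
    # sub-words, and punctuation get split up.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # encoding used by newer GPT models

    text = "The quick-thinking fox jumped."
    token_ids = enc.encode(text)

    pieces = [enc.decode([tid]) for tid in token_ids]
    print(token_ids)   # a list of integers
    print(pieces)      # the text pieces those integers stand for
    ```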

    When you give an AI a prompt, it breaks the prompt down into tokens. It then predicts which tokens are statistically most likely to come next, given those tokens, and gives them back as a response, one token at a time.
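
    A toy version of that "predict the statistically likely next token" loop, using a tiny bigram model over made-up text (real models use huge neural networks over enormous corpora, but the generate-one-token-then-repeat loop is the same idea):

    ```python
    # Toy next-token predictor: count which token follows which, then
    # repeatedly append the most likely follower.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    next_counts = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        next_counts[current][nxt] += 1

    def generate(start, length=5):
        out = [start]
        for _ in range(length):
            followers = next_counts.get(out[-1])
            if not followers:
                break
            # Greedy choice: the single most frequent follower.
            out.append(followers.most_common(1)[0][0])
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat on the cat"
    ```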

    This has been the approach for a while, but the modern breakthroughs have come from stacking many layers of processing on top of each other inside one model. So in our example, the first layer produces an output. A second layer then takes that output and applies its own learned transformation to it - it might weigh different relationships between tokens, or apply a different kind of statistical rule. That result gets passed to a third layer, and so on, often through dozens of layers.
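
    A very loose sketch of what "layers stacked on top of each other" means: each layer consumes the previous layer's output and applies its own transformation. The numbers here are hypothetical stand-ins; real layers hold millions of learned weights.

    ```python
    # Each "layer" is just a function that transforms the previous output.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_layer(in_dim, out_dim):
        weights = rng.normal(size=(in_dim, out_dim))
        # ReLU non-linearity so stacking layers adds expressive power.
        return lambda x: np.maximum(0, x @ weights)

    layers = [make_layer(8, 16), make_layer(16, 16), make_layer(16, 4)]

    x = rng.normal(size=(1, 8))     # stand-in for an embedded token
    for layer in layers:            # each layer consumes the previous output
        x = layer(x)

    print(x.shape)                  # (1, 4) -- final output of the stack
    ```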

    You can “train” these AIs by looking at their output and telling them whether it was good or bad. They adjust their internal statistical models accordingly, giving more weight to some things and less weight to others. Over time, they tend towards giving results that the humans overseeing them rate as “good”. This is loosely similar to how the human brain strengthens and weakens connections as it learns (and neural networks were inspired by that).
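
    Here's a toy version of "adjust the internal weights based on good/bad feedback": a single weight vector nudged toward outputs a human rated good and away from outputs rated bad. Real training (including RLHF) is far more involved; the feature vectors and ratings below are made up purely for illustration.

    ```python
    # Nudge weights so "good" outputs score higher and "bad" outputs score lower.
    import numpy as np

    rng = np.random.default_rng(1)
    weights = rng.normal(size=4)

    # (feature vector describing an output, human rating: +1 good / -1 bad)
    feedback = [
        (np.array([1.0, 0.0, 0.5, 0.2]), +1),
        (np.array([0.1, 1.0, 0.0, 0.9]), -1),
        (np.array([0.8, 0.1, 0.6, 0.0]), +1),
    ]

    learning_rate = 0.1
    for features, rating in feedback:
        score = weights @ features          # how much the model "likes" this output
        weights += learning_rate * rating * features

    print(weights)
    ```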

    Eventually, the results of all these layers get combined and given as an output. Depending on what the model was trained to do, this could be a sentence/response (ChatGPT) or a grid of color values that humans see as a picture (DALL-E, Midjourney, etc.).

    Because there are so many layers of processing, it’s hard to say “this word came from this source.” Everything the AI produces is drawn from the whole of its training data, and as long as that data was sufficiently large you can’t really pinpoint “yeah, it was inspired by this.” It’s like how when you think of a dog, you draw on all the dogs you’ve experienced in your lifetime and settle on one idea of “dog” that’s a composite of all of them.

    Interestingly, you can sometimes see artifacts of this process where the AI learned the “wrong” thing. One example: if you ask an AI what 3 + 4 is, it knows from its training data that statistically it should say “7”. Except people started asking things like what “Geffers + HippoLady” was, and the bot would consistently reply “13”.

    It turned out there were certain odd tokens that the bot kept interpreting as numbers. Usually they looked like gibberish, but sometimes you could make out what seemed to be two separate words mashed together and treated as a single token.

    If you googled these strings, they led you to one specific subreddit: /r/counting. The tokens were actually the usernames of people who posted often in /r/counting. This is one way it was determined that the bot was trained on Reddit’s data - because those usernames appeared near numbers so often, the bot assumed they were numbers and treated the tokens accordingly.
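
    If you're curious, here's a rough way to poke at this yourself with tiktoken's older GPT-2/GPT-3 style encoding: strings the tokenizer learned as a single unit come back as one ID, while ordinary phrases get split into several. The candidate strings below are just the examples from the comment above; whether any of them is actually a single token depends entirely on the encoding, so run it and see.

    ```python
    # Check how many tokens each candidate string maps to in an older encoding.
    import tiktoken

    enc = tiktoken.get_encoding("r50k_base")  # older GPT-2/GPT-3 style encoding

    for candidate in [" Geffers", " HippoLady", " ordinary words"]:
        ids = enc.encode(candidate)
        print(f"{candidate!r} -> {len(ids)} token(s): {ids}")
    ```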