The lawsuit alleges OpenAI crawled the web to amass huge amounts of data without people’s permission.

  • Hick@lemmy.world · 1 year ago

    Scraping social media and Reddit posts doesn’t sound like stealing; they’re public posts.

    • sudneo@lemmy.world · 1 year ago

      This isn’t just scraping, though; it’s also using that data to create other content and potentially republishing it (we have no way of knowing whether ChatGPT will spit out any of that data, nor where it got what it is spitting out).

      The expectation that social media data will be read by anybody is fair, but the data was written to be read, not to be resold and republished elsewhere.

      It is similar for blog articles. My blog is public and anybody can read it, but that data is not there to be repackaged and sold. The fact that something is public does not mean anyone can do whatever they want with it.

      • seasick@lemmy.world · 1 year ago

        I could read your blog post and write my own, using yours as inspiration. I could quote your post, add a link back to it, and even add affiliate links to mine. I could be hired to do exactly that all day long.

        • sudneo@lemmy.world · 1 year ago

          ChatGPT doesn’t get inspired; the process is different, and it could very well spit out the content verbatim. You can do all the rest (depending on the license) without issues, but once again that is not what ChatGPT does, as it doesn’t provide attribution.

          It’s exactly the same with software, in fact.

    • SamB@lemmy.world · 1 year ago

      I doubt it’s only about some Reddit posts. The scraping was done on the whole web, capturing everything it could. So besides taking data and presenting it as its own, OpenAI seems to have collected some even more problematic data that wasn’t properly protected.

      • zekiz@lemmy.world · 1 year ago

        But that really isn’t OpenAI’s fault. Whoever was in charge of securing the patients’ data really fucked up.

        • krellor@kbin.social · 1 year ago

          Leaving your front door open isn’t prudent but doesn’t grant permission to others to enter and take/copy your belongings or data.

          The security teams may have royally screwed up, but OpenAI has a legal obligation to respect copyright and laws regarding data ownership.

          Likewise, they could have scraped pages that included terms of use, copyright, disclaimers, etc., and failed to honor them.

          All parties can be in the wrong for different reasons.

          • Dran@lemmy.world · 1 year ago

            But does leaving your front door open allow one to legally take a picture of the inside from across the street? I’d say scraping is more akin to that than to theft. Nothing is removed in scraping; it’s just copied.

            • BradleyUffner@lemmy.world · 1 year ago

              Bad analogy. This is like leaving your couch out on the sidewalk, then complaining when someone takes a picture of it.

          • zekiz@lemmy.world · 1 year ago

            It’s more like leaving an important letter out in the open for everyone to read. It’s certainly your fault for leaving it exposed like that.

          • conditional_soup@lemm.ee · 1 year ago

            I think it’s a little closer to being mad that the Google street car drove by and snapped a picture of the front of your house, tbh.

        • Apathy Tree@lemmy.dbzer0.com · 1 year ago

          It’s certainly their fault that they used it, though.

          If they cared, they could have ensured they weren’t using sensitive or otherwise highly problematic information, but they chose not to. That’s on them.

      • tallwookie@lemmy.world · 1 year ago

        If it was unsecured, it’s basically public. Whoever put that data on a publicly accessible server is at fault.

        • priapus@sh.itjust.works · edited · 1 year ago

          That’s not necessarily true. Even if a company makes the mistake of not securing data correctly, those that make use of this data can still be at fault.

          If a company leaves a server wide open, you still can’t legally steal information from it.

      • sik0fewl@kbin.social · 1 year ago

        Just because something is posted online doesn’t mean it can be taken and resold. Copyright law prevents that. Of course, the intersection of copyright law and generative AI is a new and gray area.

      • Protegee9850@lemmy.world · 1 year ago

      The worst rush to legislation is done in the name of stopping terrorists and saving the children. Always.

  • nH95sp@lemmy.world · 1 year ago

    Likely unpleasant or possibly infeasible to implement, but designing the AI to always be able to “show the receipts” for how it formulates any given response could be helpful. I suppose that could lead to a micro-royalties industry cropping up for sourced data, akin to movies or TV shows paying royalties when they use music.

    • EnglishMobster@kbin.social · 1 year ago

      The way generative AI works is by using things called “tokens”. Usually one word is one token, but a compound word might be two tokens, punctuation marks are their own tokens, pieces like “-ed” or “-ing” can be tokens, etc.

      When you give an AI a prompt, it breaks your prompt down into tokens. It then finds which tokens were statistically most likely to appear near that content and gives them as a response.
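
      As a toy illustration of that token-and-statistics idea (a made-up miniature corpus and a whitespace “tokenizer”, not any real model’s vocabulary):

```python
from collections import Counter, defaultdict

# Toy corpus split on whitespace -- real models use learned subword
# vocabularies (BPE), but the statistical idea is the same.
corpus = "the dog barks . the dog runs . the cat sleeps".split()

# Count which token follows each token (a simple bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prompt_token):
    """Return the statistically most likely next token."""
    return follows[prompt_token].most_common(1)[0][0]

print(next_token("the"))  # "dog" follows "the" twice, "cat" only once
```

      Real models predict over tens of thousands of tokens with deep neural networks rather than raw counts, but the output is still “the statistically likely next token”.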

      This has been the approach for a while, but the modern breakthroughs have come from layering AIs inside of each other. So in our example, the first AI would give an output. Then a second AI would take that output and apply some different rules to it - this second AI could have a different idea of what a “token” is, for example, or it could apply a different kind of statistical rule. This could be passed to a third AI, etc.
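
      That layering can be caricatured as plain function composition (purely illustrative; real model layers are learned matrix operations, not hand-written rules like these):

```python
# Toy "layered" pipeline: each stage transforms the previous stage's output.
def stage_one(text):
    return text.lower()                         # e.g. normalize

def stage_two(text):
    return text.split()                         # e.g. re-tokenize differently

def stage_three(tokens):
    return [t for t in tokens if t.isalpha()]   # e.g. filter non-words

def pipeline(text):
    out = text
    for stage in (stage_one, stage_two, stage_three):
        out = stage(out)  # each layer consumes the previous layer's output
    return out

print(pipeline("The DOG barks 42 times"))  # ['the', 'dog', 'barks', 'times']
```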

      You can “train” these AI by looking at their output and telling it if it was good or bad. The AIs will adjust their internal statistical models accordingly, giving more weight to some things and less weight to others. Over time, they will tend towards giving results that the humans overseeing the AI say are “good”. This is very similar to how the human brain learns (and it was inspired by how humans learn).
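
      The good/bad feedback loop can be sketched like this (a toy lookup table standing in for millions of learned parameters; real training adjusts weights via gradient descent, not a dictionary):

```python
# Toy reward feedback: "good" feedback nudges a token's weight up,
# "bad" feedback nudges it down.
weights = {"dog": 1.0, "cat": 1.0}

def feedback(token, good):
    weights[token] *= 1.1 if good else 0.9

# Overseers liked responses containing "dog" and disliked "cat".
feedback("dog", good=True)
feedback("cat", good=False)

print(weights)  # "dog" is now weighted higher than "cat"
```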

      Eventually, the results of all these AI get combined and given as an output. Depending on what the AIs were trained to do, this could be a sentence/response (ChatGPT) or it could be a collection of color values that humans see as a picture (DALL-E, Midjourney, etc.).

      Because there are so many layers of processing, it’s hard to say “this word came from this source.” Everything the AI did came from a collection of experiences, and generally as long as the training data was sufficiently large you can’t really pinpoint “yeah it was inspired by this.” It’s like how when you think of a dog, you think of all the dogs you’ve experienced in your lifetime and settle on one idea of “dog” that’s a composite of all those dogs.


      Interestingly, you can sometimes see some artifacts of this process where the AI learned the “wrong” thing. One example: if you asked an AI what 3 + 4 is, it knows from its experiences that statistically it should say “7”. Except people started doing things like asking for what “Geffers + HippoLady” was, and the bot would reply “13”, consistently.

      It seemed there were these random tokens that the bot kept interpreting as numbers. Usually they were gibberish, but sometimes you could make out what looked like two separate words being treated as a single token.

      It turned out that if you googled these words, you’d get redirected to a subreddit - specifically /r/counting. The tokens were actually the usernames of people who contributed often to /r/counting. This is one way it was determined that the bot was training on Reddit’s data - because these usernames appeared near numbers a lot, the bot assumed they were numbers and treated those tokens accordingly.

  • Eggyhead@kbin.social · 1 year ago

    The lawsuit alleges OpenAI crawled the web to amass huge amounts of data without people’s permission.

    So who exactly is keeping people’s data where it can be easily accessed by a crawler without anybody’s permission? Maybe we should be paying attention to that as well.

  • Protegee9850@lemmy.world · 1 year ago

    Scraping is protected. GPT and the like are more akin to fair-use machines than plagiarism machines. This is a lot of hot air going nowhere. Rage bait.

  • Craynak_Zero@kbin.social · 1 year ago

    I would rather an AI have that data, instead of any of the Demons of Google, Twitter, Youtube, etc. The AI won’t abuse me the way those companies do on a daily basis.

    • SkierniewiceBoi@kbin.social · 1 year ago

      @Craynak_Zero
      But that’s why they need this AI trained with an enormous amount of data. Such an AI can be much better at understanding how to keep you engaged, how to convince you to buy something, etc. As long as it’s (not so) OpenAI connected with Microsoft, I’d say it’s exactly the same.
      @L4s

      • Craynak_Zero@kbin.social · 1 year ago

        Can’t convince me of anything. I’m completely immune to such manipulation. Only subhumans are vulnerable, and I don’t really care about subhumans or what happens to them, except to laugh at their misery.

        What I do care about, though, is money being wasted on frivolous lawsuits. The article seems pretty vague on who started this lawsuit, too, probably to protect themselves from public scrutiny. The cowards.