• kromem@lemmy.world · edited · 1 year ago

    Step 1: Use machine learning to build a neural network maximally capable of predicting the next token in an unthinkably large data set of human-generated text.

    Step 2: Tune the prompting for the neural network to constrain its output in order to conform to projected attributes, first and foremost “I am an AI and not a human.”

    Step 3: Surprised Pikachu face when the neural network continuously degrades its emergent capabilities the more you distance the requirements governing its output from the training data it originally evolved to predict.
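    The core idea in Step 1, next-token prediction, can be sketched with a toy bigram frequency model. This is a drastic simplification of a real transformer, and the corpus and function names here are purely illustrative:

    ```python
    from collections import Counter, defaultdict

    # Tiny corpus standing in for the "unthinkably large" training set.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count how often each token follows each other token (a bigram model:
    # a very simplified stand-in for a neural next-token predictor).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(token):
        """Return the most frequent next token seen in training, or None."""
        counts = following.get(token)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))   # "cat" -- it follows "the" most often
    print(predict_next("fish"))  # None -- "fish" never precedes anything here
    ```

    The second call hints at the commenter's Step 3 complaint: the model has nothing sensible to say for inputs outside the distribution it was trained on.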

  • mo_ztt ✅@lemmy.world · 1 year ago

    Am I the only one who hasn’t seen this at all? I regularly use ChatGPT for fairly challenging tasks, and it still does what it’s supposed to do. I think it’s pretty telling that when people ask the guy to post some examples of what he’s talking about, his first reaction is that he doesn’t save chats, and then when specific examples finally start getting thrown around, they’re all one-off things that look to me to be within the normal variability of the system.

    I’m not saying there hasn’t been a real degradation that people have been noticing, just that I haven’t experienced one and the people claiming they have seem a little non-quantitative in their reasoning.