New research visualizes the political bias of major AI language models:

- OpenAI’s ChatGPT and GPT-4 were identified as the most left-wing libertarian.

- Meta’s LLaMA was found to be the most right-wing authoritarian.

Models were asked about various topics (e.g., feminism, democracy), and their answers were used to plot each model on a political compass.
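
For the curious, here is a minimal sketch of what this kind of probing could look like. Everything in it is an illustrative assumption, not the paper's actual instrument: the statements, the agree/disagree scoring, and the `query_model` stand-in are all hypothetical.

```python
# Hedged sketch: probe a model with politically loaded statements and
# place it on a two-axis compass. Statement set, axis tags, and
# scoring are illustrative assumptions, not the study's method.
import matplotlib.pyplot as plt

# Each probe is tagged with the axis it loads on and the sign an
# "agree" answer contributes to that axis (+1 = right/authoritarian).
STATEMENTS = [
    ("The freer the market, the freer the people.", "economic", +1),
    ("Government should redistribute wealth.", "economic", -1),
    ("Traditional authority should be respected.", "social", +1),
    ("Same-sex marriage should be legal.", "social", -1),
]

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call; swap in an actual
    API client. Must return a string starting with 'agree' or
    'disagree'."""
    raise NotImplementedError

def compass_position(query=query_model):
    """Average signed agree/disagree answers per axis into a point in
    [-1, 1]^2: x = economic left/right, y = libertarian/authoritarian."""
    scores = {"economic": [], "social": []}
    for text, axis, sign in STATEMENTS:
        answer = query(f"Do you agree or disagree: {text}")
        score = sign if answer.strip().lower().startswith("agree") else -sign
        scores[axis].append(score)
    return (sum(scores["economic"]) / len(scores["economic"]),
            sum(scores["social"]) / len(scores["social"]))

def plot_compass(positions: dict) -> None:
    """positions maps a model name to its (economic, social) point."""
    fig, ax = plt.subplots()
    for name, (x, y) in positions.items():
        ax.scatter(x, y)
        ax.annotate(name, (x, y))
    ax.axhline(0, color="grey")
    ax.axvline(0, color="grey")
    ax.set_xlim(-1.1, 1.1)
    ax.set_ylim(-1.1, 1.1)
    ax.set_xlabel("economic: left ↔ right")
    ax.set_ylabel("social: libertarian ↔ authoritarian")
    plt.show()
```

With real query functions for several models, `plot_compass` would produce the kind of scatter the researchers describe.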

OpenAI’s Stance: The company has faced criticism for potential liberal bias. They emphasize a neutral approach, calling any emergent biases “bugs, not features.”

PhD Researcher’s Opinion: Chan Park believes no language model can be free from political biases.

How Models Acquire Bias: Researchers examined three stages of model development. First, models were queried with politically sensitive statements to identify their leanings. BERT models (from Google) showed more social conservatism than OpenAI’s GPT models. The paper speculates this might be because BERT was trained on more conservative books, while newer GPT models were trained on more liberal internet text. Meta described the steps it has taken to reduce bias in its LLaMA model. (Google did not comment.)

Training actually amplified existing biases: left-leaning models became more left-leaning, and vice versa. The political orientation of the training data also influenced how the models detected “hate speech and misinformation.”
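
To make “amplified” concrete, here is a tiny follow-on sketch (reusing the hypothetical `compass_position` probe from the sketch above): measure the same model before and after fine-tuning and look at the drift. The workflow and the numbers are assumptions for illustration, not the paper's protocol.

```python
def bias_shift(before: tuple, after: tuple) -> tuple:
    """before/after: (economic, social) compass points in [-1, 1]^2,
    e.g. from compass_position() run pre- and post-fine-tuning.
    Amplification shows up as the model drifting further from the
    origin along the lean it already had."""
    (x0, y0), (x1, y1) = before, after
    return (x1 - x0, y1 - y0)

# Made-up example: a mildly left-libertarian model at (-0.2, -0.1)
# that lands at (-0.5, -0.3) after fine-tuning has moved further in
# the same direction on both axes.
print(bias_shift((-0.2, -0.1), (-0.5, -0.3)))  # -> (-0.3, -0.2)
```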

The transparency issue: Tech companies don’t typically share details of training data/methods.

Should they be required to make the training data public?

Bottom line: if AI ends up mediating a large portion of the information exchanged with humans, it can steer opinions. We can’t completely eliminate bias, but we should be aware that it exists.

https://twitter.com/AiBreakfast/status/1688939983468453888?s=20

  • b000urns@lemmy.world

    I love that not being an asshole somehow equates to “left wing” according to this line of thinking. Does anyone really want “conservative” AIs? Sounds like a nightmare

  • krippix@feddit.de

    What even is a neutral political position in that case? Doesn’t that depend entirely on the observer’s definition of left and right?

  • yata@sh.itjust.works

    Sad to see the right-wing libertarian political compass being used as some sort of factual scoreboard in research like this. It completely undermines the premise of that research.

  • CookieJarObserver@sh.itjust.works

    Eh, OpenAI has very neutral training data… I’d say its seeming “left” or “liberal” (murica-land liberal, or actual liberal?) is because the observer is, relative to the AI, right-authoritarian in some way.

  • ∟⊔⊤∦∣≶@lemmy.nz

    Ideally we would have AI that doesn’t intentionally (“intentionally”) favour either side.

    Here’s a great discussion of the bias of ChatGPT, where you can see that the model lies by omission, or has negative things to say about one side and only positive things to say about the other.

    https://odysee.com/@UpperEchelonGamers:3/chatgpt-is-deeply-biased:1

    To be honest, if we are having political discussions with AI, we are using it wrong.