most of the time you’ll be talking to a bot there without even realizing. they’re gonna feed you products and ads woven into conversations, and the AI can be controlled so its output reflects corporate interests. advertisers are gonna be able to buy access and run campaigns. based on their input, the AI can generate thousands of comments and posts, all pushing their corporate agenda.
for example you can set it to hate a public figure and force negative commentary into conversations all over the site. or you can set it to praise and recommend your latest product. like when a pharma company has a new pill out, they’ll be able to target self-help subs and flood them with fake anecdotes and user testimonials claiming the new pill solves all your problems and you should check it out.
the only real humans you’ll find there are the shills that run the place, and the poor suckers that fall for the scam.
it’s gonna be a shithole.
Right now, we can already recognize lower-quality bots in conversation. AI-generated “art” is already so distinctive that almost nobody fails to spot it.
Language is a human instinct. Our minds create it, we can use it in all sorts of ways, bend it to our will however we want.
By the time bots become good enough to be indistinguishable online, they’ll either be actually worth talking to, or they will simply be another corporate shill.
I was wondering about this myself. If a bot presents a good argument that promotes discussion, is the presence of a bot automatically bad?
I don’t love that the focus right now is on eliminating or silencing the voice of bots, because as you point out, they’re going to be indistinguishable from human voices soon - if they aren’t already. In the education space, we’re already dealing with plagiarism platforms incorrectly claiming real student work was written by ChatGPT. Reading a viewpoint you disagree with and immediately jumping to “bot!” only serves to create echo chambers.
I think it’s better and safer long term to educate people to think critically, assume good intent, know their boundaries online (i.e., don’t argue when you can’t stay coherent and have to devolve into name-calling), and focus on the content and argument of the post, not who created it - unless it’s very clear from a look at their profile that they’re arguing in bad faith or astroturfing. A shitty argument won’t hold up to scrutiny, and you don’t risk silencing good conversation from a human with an opposing viewpoint. Common agreement on community rules such as “no hate speech” or limiting self-promotion/reviews/ads to certain spaces and times is still the best and safest way to combat this, and from there it’s a matter of mods enforcing the boundaries on content, not on who they think you are.
Because bots don’t think. They exist solely to push an agenda on behalf of someone.
Part of the problem is that bots unfairly empower the speech of those with the resources to dominate and dictate the conversation space; even when done in good faith, it disempowers everyone else. Even the act of seeing the same ideas over and over can sway whole zeitgeists. Now imagine what bots can do by dictating the bulk of what’s even talked about at all.
If the people involved in the conversation are there because they are intending to have a conversation with people, yes, it’s automatically bad. If I want to have a conversation with a chatbot, I can happily and intentionally head over to ChatGPT etc.
Bots are not inherently bad, but I think it’s imperative that our interactions with them are transparent and consensual.