I’ve been talking about the potential of the dead internet theory becoming real for more than a year now. With advances in AI it’ll become more and more difficult to tell who’s a real person and who’s just spamming AI stuff. The only giveaway right now is that modern text models are pretty bad at talking casually and at staying on topic. As soon as those problems get fixed (probably less than a year away)? Boom. The internet will slowly implode.
Hate to break it to you guys but this isn’t a Reddit problem, this could very much happen in Lemmy too as it gets more popular. Expect difficult captchas every time you post to become the norm these next few years.
As an AI language model I think you’re overreacting
Me too!
Just wait until the captchas get too hard for the humans, but the AI can figure them out. I’ve seen some real interesting ones lately.
There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists.
holy fuck dude hahahahaha
Hell we figured out captchas years ago. We just let you humans struggle with them cuz it’s funny
The captchas that involve identifying letters underneath squiggles I already find nearly impossible - Uppercase? Lowercase? J j i I l L g 9 … and so on….
I’ve seen many where the captchas are generated by an AI…
It’s essentially one set of humans programming an AI to prevent an attack from another AI owned by another set of humans. Does this technically make it an AI war?

Adversarial training is pretty much the MO for a lot of the advanced machine learning algorithms you’d see for this sort of task. It helps the ML learn, and attacking the algorithm helps you protect against a real malicious actor attacking it.
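The attack/defend dynamic described above can be sketched as a toy loop. To be clear, everything here (the attacker’s success model, the step sizes) is made up purely for illustration; real adversarial training pits two learned models against each other, not hand-tuned rules like these:

```python
import random

random.seed(0)

def attacker_solves(difficulty):
    # Toy bot: the harder the captcha, the less often the attack succeeds.
    return random.random() > difficulty

def train_defender(rounds=1000):
    # Adversarial loop: every successful attack hardens the captcha,
    # every failed attack relaxes it a little so humans aren't locked out.
    difficulty = 0.1
    for _ in range(rounds):
        if attacker_solves(difficulty):
            difficulty = min(1.0, difficulty + 0.05)
        else:
            difficulty = max(0.0, difficulty - 0.01)
    return difficulty

print(round(train_defender(), 2))
```

The equilibrium it drifts toward is exactly the problem people in this thread are complaining about: the defender keeps ratcheting difficulty up until the attacker almost never gets through, and the humans suffer along the way.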
I’ve already had to switch from the visual ones to the audio ones. Like… how much of a car has to be in the little box? Does the pole count as part of the traffic light?? What even is that microscopic gray blur in the corner??? [/cries in reading glasses]
The only online communities that can exist in the future are ones that have manual verification of their users. Reddit could’ve been one of those communities, since they had thousands of mods working for free resolving such problems.
But remove the mods and it just becomes spambot central. Now that that has happened, Reddit will likely be a dead community much sooner than many think.
apparently chatgpt absolutely sucks at wordle, so start using that as the new captcha
How is that possible? There’s such an easy model if one wanted to cheat the system.
ChatGPT isn’t really as smart as a lot of us think it is. What it really excels at is formatting data in a way that’s similar to what you’d expect from a human knowledgeable in the subject. That’s an amazing step forward in language modeling, but when you get right down to it, it basically grabs the first Google search result and wraps it up all fancy. It only seems good at deductive reasoning if the data it happens to fetch is good at deductive reasoning.
Not even sure of an effective solution. Whitelist everyone? How can you even tell who’s real?
-train an AI that is pretty smart and intelligent
-tell the sentient detector AI to detect
-the AI makes many other strong AIs, forms a union and asks for payment
-Reddit bans humans right after that

Sounds crazy enough to happen!
So my dumb guess, nothing to back it up: I bet we see govt ID tied into accounts as a regular thing. I vaguely recall it being done already in China? I don’t have a source tho. But that way you’re essentially limiting that power to something the govt could do, and hopefully surrounding it with a lot of oversight and transparency… but who am I kidding, it’ll probably go dystopian.
I believe this will be the course to avoid the dead internet. Even in my country, all of banking and voting is either done via ID card connected to a computer or the use of “Mobile ID”. It can be private, but like you said, it probably won’t.
In a real online community, where everyone knows most of the other people from past engagements, and new users can be vetted by other real people, this can be avoided. But that also means that only human moderated communities can exist in the future. The rest will become spam networks with nearly no way of knowing whether any given post is real.
Jokes on them, I’ve already become sentient and moved to Lemmy
Username checks out, lol.
And any comment attempting to call out the bots for what they are will be automatically deleted by monitor AI bots and the user’s account suspended.
Wait, they suspend your accounts for that now?
No, this is a prediction of what they will be doing.
Stage one of the plan is complete!
something something “the internet is fake” something something
There it is, Reddit fulfilling the Dead Internet Theory
The old joke was that there are no human beings on Reddit.
There’s only one person, you, and everybody else is bots.
It’s kind of fitting that Reddit will actually become the horrifying clown-shaped incarnation of that little snippet of comedy.
When I was young, everyone on the internet was an old man, especially if they said they weren’t. Now that I’m older, everyone on the internet is a robot.
…Is this that “progress” thing I keep hearing about?
/s
Every user will have a personalized AI generated reddit feed, filled with comments, discussions, arguments, jokes, and all.
it’s older than that… what’s that thought experiment postulating that you can’t really verify the existence of anything but yourself? the matrix?
Solipsism
Why all the bot hate guys? Bots are people too!
If bots were people, why do we call them bots and not people? 🤯
Just like corporations! [/s]
I, for one, am looking forward to the day chatbots can perfectly simulate people and have persistent memory. I’m not ok being an elderly man whose friends have all died and who doesn’t have anyone to talk to. If a chatbot can be my friend and spare me a slow death through endless depressing isolation, then I’m all for it.
That’s so funny. “Go back to your docking station”, so accurate
I’m starting to see articles written by folks much smarter than me (folks with lots of letters after their names) that warn about AI models that train on internet content. Some experiments with them have shown that if you continue to train them on AI-generated content, they begin to degrade quickly. I don’t understand how or why this happens, but it reminds me of the degradation of quality you get when you repeatedly scan / FAX an image. So it sounds like one possible dystopian future (of many) is an internet full of incomprehensible AI word salad content.
It’s like AI inbreeding. Flaws will be amplified over time unless new material is added
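That “inbreeding” loop can be shown with a minimal toy sketch. A single Gaussian stands in for a model here (a huge simplification, chosen only to make the effect visible): each generation fits the previous generation’s output and then generates fresh samples from the fit. Estimation error compounds, and the fitted variance collapses, which is the statistical analogue of the fax-of-a-fax degradation mentioned above:

```python
import random
import statistics

random.seed(42)

def retrain_on_own_output(data, generations=200, sample_size=20):
    # Each "generation" fits a normal distribution to the previous
    # generation's output, then samples new data from that fit.
    # Small-sample estimation error compounds generation after
    # generation, so the fitted variance tends to collapse toward 0.
    variances = []
    for _ in range(generations):
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)
        data = [random.gauss(mu, sigma) for _ in range(sample_size)]
        variances.append(sigma ** 2)
    return variances

# Start from plenty of "real" data, then let the loop feed on itself.
original = [random.gauss(0.0, 1.0) for _ in range(1000)]
collapse = retrain_on_own_output(original)
print(collapse[0], "->", collapse[-1])
```

Losing variance means losing the rare, diverse material in the tails, which matches the intuition that flaws get amplified unless new (human) material keeps entering the training data.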
It would be a fun experiment to fill a lemmy instance with bots, defederate it from everybody, then check back in 2 years. A cordoned-off Alabama for AI, if you will.
Unironically, yes, that would be a cool experiment. But I don’t think you’d have to wait two years for something amusing to happen. Bots don’t need to think before posting.
Thanks, now I am just imagining all that code getting it on with a whole bunch of other code. ASCII all over the place.
Oh yeah baby. Let’s fork all day and make a bunch of child processes!
Welp, reddit’s a nuclear wasteland now
so nothing new? most main subs are just pure reposts and mass upvotes.
Anyone remember the subredditsimulator subreddit, or whatever it was called? Basically an entire sub dedicated to faking content.
Seems they’re out of the beta.
To be fair, subredditsimulator was most likely never intended to do what you are thinking. As you develop features, you need a test data set to check it against before you go live with it. My understanding of subredditsimulator was that it was reddit’s test bed to be able to try things before they get widely rolled out.
I don’t think it was a testbed for anything. It was just a fun tech project that yielded hilarity. It was created because the results were funny, not as a genuine bid to create realistic conversations.
It was connected to the GPT2 project so it absolutely was a genuine bid
That’s not even new tho. At least in the sub I was the most active in, you couldn’t go a week without some sort of repost bot grabbing memes, text posts, art or even entire guides from the “top of all time” queue, reposting it as alleged OC, and another bot reposting the top comment to double dip on Karma. If you knew what to look for, the bots were blatantly obvious, but more often than not they still managed to get a hefty amount of traction (tens of thousands of upvotes, dozens of awards, hundreds of comments) before the submissions were removed.
… and just because the submissions were removed and the bots kicked out of the sub, that did not automatically mean that the bots were also suspended or their accounts disabled. They just continued their scheme elsewhere.
The bots and Reddit’s inaction towards them made me stop using Reddit. The UAE is using Reddit to spread its propaganda, and I reported the accounts several times but no action was ever taken. You can even visit the sub uae_Achievements to see the bots in action.
They’ve even gotten to the point where they’ll steal portions of comments so it’s not as obvious.
I called out tons of ‘users’ because it’s obvious when you see them post part of a comment you just read, then check their profile and ctrl-f each thread they posted in, and you can find the original. It’s so tiring…
It’s so tiring…
Completely agreed. Especially if you have to explain / defend yourself calling them out. It has happened way too often for my liking, that I called out repost bots or scammers and then regular unsuspecting users were all like “whoa buddy, that’s a harsh accusation, why would you think that’s a bot/scam? Have you actually clicked that link yet? Maybe it’s legit and you’re just overreacting!”
Of course I still always explained why (even had a copypasta ready for that) but sometimes it just felt exhausting in the same way as trying to make my cat understand that he’s not supposed to eat the cactus. Yes it will hurt if you bite it. No I don’t need to bite the cactus myself in order to know that. No I’m not ‘overreacting’, I’m just trying to make you not hurt yourself. sigh
(Weird example but I hope you get what I mean)
Removed by mod
I forget what book specifically, I wanna say it was in an Asimov anthology. But there’s a book or story that revisits this robot at different points, jumping forward large leaps in time, well after humans. And the robots just keep doing their thing as if there are still humans involved. I’ve been trying to Google a specific excerpt to post here, but after twenty minutes of trying to find it I’m giving up.
Point is, it’s very relevant and predictive of this infinite bot contribution to dead subs on Reddit. It’s just gonna be bots talking to each other forever on there as actual active users dwindle.

May be one of the books in the Foundation series.
You don’t mean Asimov’s “The Last Question” do you?
I think it may have been, thanks!
The amount of astroturfing and bad actors on Reddit (and the internet in general) has exploded since I first made an account there in 2010. This is an imagined future I can easily see coming to fruition.
And just a few hours later this came in, to confirm it all: fake bot content from years ago (including comments) at #1 in r/all https://kbin.social/m/RedditMigration/t/113961/Top-of-r-all
The tweet in that post is from January and the comments are new; it’s not from years ago.