Jimmy Wales Says Wikipedia Could Use AI. Editors Call It the 'Antithesis of Wikipedia'
Paywall bypass: https://archive.is/oWcIr
Wales’s quote isn’t nearly as bad as the headline makes it out to be:
That being said, it still reeks of “CEO Speak.” And trying to find a place to shove AI in.
More NLP could absolutely be useful to Wikipedia, especially for flagging spam and malicious edits for human editors to review. This is an excellent task for dirt-cheap, small, open models, where the error rate isn't super important; cost, volume, and reducing stress on precious human editors are. It's an existential issue that needs work. (A rough sketch of what that triage could look like follows below.)
…Using an expensive, proprietary API to give error-prone yet "pretty good"-sounding suggestions to new editors is not.
Wasting dev time trying to make it work is not.
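To make that concrete, here's a minimal sketch of the cheap-model-plus-human-queue idea in Python. The transformers pipeline API is real, but the model name is a placeholder sentiment classifier (a real deployment would fine-tune a small model on labeled edit diffs) and the 0.9 threshold is an arbitrary assumption:

```python
# Sketch only: a small, cheap model scores each edit; anything suspicious
# goes into a queue for human patrollers. The model filters, humans decide.
from transformers import pipeline

# Placeholder model: an off-the-shelf sentiment classifier stands in for a
# small model fine-tuned on labeled edit diffs (vandalism vs. good-faith).
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def flag_for_review(edit_diff: str, threshold: float = 0.9) -> bool:
    """Return True if this edit should land in the human review queue."""
    result = classifier(edit_diff, truncation=True)[0]
    # A false positive just means a human glances at a clean edit; a false
    # negative falls through to the existing patrol process, so a modest
    # error rate is tolerable here.
    return result["label"] == "NEGATIVE" and result["score"] >= threshold
```

The design point is that the model never reverts anything itself; it only reorders the human review queue, which is exactly where cheap open models shine.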
This is the problem. Not natural language processing itself, but the seemingly contagious compulsion among executives to find some place to shove it when the technical extent of their knowledge is occasionally typing something into ChatGPT.
It’s okay for them to not really understand it.
It’s not okay to push it differently than other technology because “AI” is somehow super special and trendy.
I think you mean reeks, which means to stink, having a foul odor.
Those homophones have reeked havoc for too long!
Waves hands "You didn't see anything."
Thank you. Glad to know I am not the only one that got triggered, lol.
This is another reason why I hate bubbles. There is something potentially useful in here, and it needs to be considered very carefully. However, it gets to a point where everyone's knee-jerk reaction is that it's bad.
I can't even say that people are wrong for feeling that way. The AI bubble has affected our economy and lives in a multitude of ways that go far beyond any reasonable use. I don't blame anyone for saying "everything under this is bad, period". The reasonable uses of it are so buried in shit that I don't expect people to even bother trying to reach into that muck to clean it off.
This bubble's hate is pretty front-loaded though.
Dotcom was, well, a useful thing. I guess valuations were nuts, but it looks like the hate was mostly in the enshittified aftermath that would come.
Crypto is a series of bubbles trying to prop up flavored pyramid schemes built around a neat niche concept, but people largely figured that out after they popped. And it's not as attention-grabbing as AI.
Machine Learning is a long-running, useful field, but ever since ChatGPT caught investors' eyes, the cart has felt so far ahead of the horse. The hate started, and got polarized, waaay before the bubble popped.
...In other words, AI hate almost feels more political than bubble-fueled, if that makes any sense. It is a bubble, but the extreme hate would still be there even if it wasn't.
So... I actually proposed a use case for NLP and LLMs back in 2017. I don't know if it was ever used.
But the use case was generating large sets of fake data that looked real enough for performance-testing enterprise-sized data transformations. That way we could skip a large portion of the risk associated with using actual customer data: we wouldn't have to generate the data beforehand, we could validate logic with it, and we could just plop it into the replica non-production environment.
At the time we didn't have any LLMs, so it didn't go anywhere. But it's always funny when I see all this "LLMs can do X", because I always think about how my proposal was to use it... for fake data.
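To illustrate the shape of the idea, here's a minimal sketch using the Faker library as a stand-in for the LLM (Faker is my substitution, not the original proposal; the field names and row count are made-up assumptions):

```python
# Sketch: synthesize realistic-looking customer records for load-testing a
# data pipeline, so no real customer data ever leaves production.
# Faker stands in for the LLM; the schema below is purely illustrative.
import csv
from faker import Faker

fake = Faker()

def generate_rows(n):
    for _ in range(n):
        yield {
            "customer_id": fake.uuid4(),
            "name": fake.name(),
            "email": fake.email(),
            "signup_date": fake.date_between("-5y", "today").isoformat(),
            "balance": round(fake.pyfloat(min_value=0, max_value=10_000), 2),
        }

with open("synthetic_customers.csv", "w", newline="") as f:
    fields = ["customer_id", "name", "email", "signup_date", "balance"]
    writer = csv.DictWriter(f, fieldnames=fields)
    writer.writeheader()
    writer.writerows(generate_rows(100_000))
```

An LLM version would swap generate_rows for prompted generation; the payoff either way is that the replica environment gets production-shaped volume without production risk.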
I don't see how this is "shoved in." Wales identified a situation where Wikipedia's existing non-AI process doesn't work well and then realized that adding AI assistance could improve it.
Neither did Wales. Hence, the next part of the article:
It doesn't mean the original process isn't problematic, or can't be helpfully augmented with some kind of LLM-generated supplement. But this is a poster child for a troublesome AI implementation: a general-purpose LLM needs context it isn't given (but that the reader assumes it has), hallucinations have knock-on effects, and even Wikipedia's own founder seemingly missed such errors.
Don't mistake me for being blanket anti-AI; clearly it's a tool Wikipedia can use. But the scope has to be narrow, and the problem specific.
Adding AI assistance to any review process only ever worsens it: instead of having to review one thing, the reviewer now has to review two, one of which is definitely hallucinated in ways that are hard to justify. And the reviewer is also paid far less in exchange, with their entire worker class threatened.