Wait, which list of filtered IPs are you even talking about? The list in the article is a list of unique kernel versions, not IPs.
I’m not sure why you say it’s “artificially” inflated. Non-Linux systems are also affected.
this will affect almost nobody
Is that really true? From https://www.evilsocket.net/2024/09/26/Attacking-UNIX-systems-via-CUPS-Part-I/
Full disclosure, I’ve been scanning the entire public internet IPv4 ranges several times a day for weeks, sending the UDP packet and logging whatever connected back. And I’ve got back connections from hundreds of thousands of devices, with peaks of 200-300K concurrent devices.
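For what it’s worth, a minimal sketch of such a probe, assuming the cups-browsed announcement format described in the write-up (a “<type> <state> <uri>” line sent to UDP/631); every address here is a placeholder:

    import socket
    import threading

    CALLBACK_HOST = "203.0.113.1"   # placeholder: IP the scanner is reachable on
    CALLBACK_PORT = 9999
    TARGET = "198.51.100.7"         # placeholder: host being probed

    def log_connect_backs() -> None:
        """Accept and log TCP connections from hosts that fetched our URI."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", CALLBACK_PORT))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            print(f"connect-back from {addr[0]}:{addr[1]}")
            conn.close()

    threading.Thread(target=log_connect_backs, daemon=True).start()

    # A vulnerable cups-browsed parses this announcement and connects
    # back to the advertised URI, which is what gets counted above.
    payload = f"0 3 http://{CALLBACK_HOST}:{CALLBACK_PORT}/printers/x".encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, (TARGET, 631))

    input("listening for connect-backs; press Enter to quit\n")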
Is Yubico actually claiming it is more secure by not being open source?
there is no way to do the equivalent of banning armor piercing rounds with an LLM or making sure a gun is detectable by metal detectors - because as I said it is non-deterministic. You can’t inject programmatic controls.
Of course you can. Why would you not be able to, just because it is non-deterministic? Non-determinism does not mean complete randomness and a total lack of control; that is a common misconception.
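As a trivial sketch of what injecting such a control can look like (generate() is a hypothetical stand-in for any LLM call, and the deny-list is obviously a toy):

    import re

    BLOCKED = [re.compile(p, re.IGNORECASE) for p in (
        r"how to build a bomb",      # toy deny-list entries
        r"synthesize (sarin|vx)",
    )]

    def moderated(generate, prompt: str, retries: int = 3) -> str:
        """Deterministic control wrapped around a non-deterministic core:
        reject disallowed prompts, re-sample or refuse on bad outputs."""
        if any(p.search(prompt) for p in BLOCKED):
            return "Sorry, I can't help with that."
        for _ in range(retries):
            out = generate(prompt)              # the non-deterministic step
            if not any(p.search(out) for p in BLOCKED):
                return out                      # passed the deterministic check
        return "Sorry, I can't help with that."

The point is not that a regex deny-list makes good moderation; it is that the control layer around the sampler can be as deterministic as you like.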
Again, obviously you can’t teach an LLM about morals, but you can reduce the likelihood of it producing immoral content in many ways. Of course it won’t be perfect, and of course it may limit usefulness in some cases, but that is already true in many situations that don’t involve AI: e.g. some people complain they “can not talk about certain things without getting cancelled by overly eager SJWs”. Society already acts as a morality filter. Sometimes it works, sometimes it doesn’t. Free-speech maximalists exist, but they are a minority.
Well, I, and most lawmakers in the world, disagree with you then. Those restrictions certainly make e.g. killing humans (generally considered an immoral activity) harder, while not affecting e.g. hunting (generally considered a moral activity).
So what possible morality can you build into the gun to prevent immoral use?
You can’t build morality into it, as I said. You can build functionality into it that makes immoral use harder.
For example: society considers hunting a moral use of weapons, while killing people usually isn’t. So banning ceramic, unmarked, silenced, fully automatic weapons firing armor-piercing bullets can certainly be an effective way of reducing the immoral use of a weapon.
While an LLM itself has no concept of morality, it’s certainly possible to at least partially inject/enforce some morality when working with them, just like any other tool. Why wouldn’t people expect that?
Consider guns: while they have no concept of morality, we still apply certain restrictions to them to make using them in an immoral way harder. Does it work perfectly? No. Should we abandon all rules and regulations because of that? Also no.
They can be, and are being, made. See e.g. the state of accessibility on GNOME.
Yes, and what I’m saying is that it would be expensive compared to not having to do it.
Doing OCR in a very specific format, in a small specific area, using a set of only 9 characters, and having a list of all possible results, is not really the same problem at all.
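To make that concrete: with a closed result list the problem collapses to nearest-neighbour matching, nothing like general OCR. Everything here (candidate list, font, crop region) is hypothetical:

    from PIL import Image, ImageChops, ImageDraw, ImageFont

    CANDIDATES = ["123456789", "987654321"]  # placeholder: the full list of possible results
    FONT = ImageFont.load_default()          # placeholder: whatever font the source uses
    CROP = (10, 10, 130, 30)                 # placeholder: the small fixed region

    def render(text: str, size) -> Image.Image:
        """Render a candidate string the way the source would print it."""
        img = Image.new("L", size, 255)
        ImageDraw.Draw(img).text((0, 0), text, font=FONT, fill=0)
        return img

    def read_field(scan: Image.Image) -> str:
        """Pick the candidate whose rendering differs least from the crop."""
        crop = scan.crop(CROP).convert("L")
        def diff(cand: str) -> int:
            delta = ImageChops.difference(crop, render(cand, crop.size))
            return sum(delta.getdata())
        return min(CANDIDATES, key=diff)

General OCR has to hypothesize arbitrary strings; this only has to rank a handful of known ones.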
How many billion times do you generally do that, and how is battery life after?
Cryptographically signed documents and Matrix?
I think you are replying to the wrong person?
I did not say it helps with accuracy. I did not say LLMs will get better. I did not even say we should use LLMs.
But even if I had, none of your points are relevant for the Firefox use case.
Wikipedia is no less reliable than other content. There’s even academic research about it (no, I will not dig for sources now, so feel free not to believe it). But factual correctness only matters for models that deal with facts: for e.g. a translation model it does not matter.
Reddit has a massive amount of user-generated content it owns, e.g. comments. Again, factual correctness only matters in some contexts, not all.
I’m not sure why you keep mentioning LLMs since that is not what is being discussed. Firefox has no plans to use some LLM to generate content where facts play an important role.
At horrendous expense, yes. Using it for OCR makes little sense. And compared to just sending the text directly, even OCR is expensive.
The issue is not sending, it is receiving. With a fax you need to do some OCR to extract the text, which you can then feed into e.g. an AI.
What do you mean “full set if data”?
Obviously you cannot train on 100% of material ever created, so you pick a subset. There is a lot of permissively licensed content (e.g. Wikipedia) and content you can license (e.g. Reddit). While not sufficient for an advanced LLM, it certainly is for smaller models that do not need wide knowledge.
I’d say the main differences are at least
You would be vulnerable on Windows if you were running CUPS, which you probably are not. But CUPS is not tied to Linux: it is commonly used on e.g. the BSDs, and Apple has their own fork for macOS (I have not heard anything about that fork being vulnerable, though).