• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: June 24th, 2023



  • itsnotlupus@lemmy.world to Linux@lemmy.ml · raw man files?

    You can list every man page installed on your system with man -k . or just apropos . (the trailing dot is a regex that matches everything).
    But that’s a lot of random junk. If you only want “executable programs or shell commands”, grab just the section 1 pages with apropos -s 1 . instead.

    You can get the path of a man page by using whereis -m pwd (replace pwd with your page name.)
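
    On a Debian-style system, for instance, that gives you something like this (the exact path is just an example and varies by distro):

    $ whereis -m pwd
    pwd: /usr/share/man/man1/pwd.1.gz

    Note the trailing colon after the page name in the output; the combined script further down relies on that format.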

    You can convert a man page to html with man2html (may require apt install man2html or whatever equivalent applies to your distro.)
    That tool adds a couple of useless header lines at the beginning of each file, so we’ll want to pipe its output through tail -n +3 to get rid of them.
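
    For a single page, the whole dance might look like this (a sketch; it assumes a Debian-style setup and that your man2html copes with the compressed page path, which is the same assumption the combined script below makes):

    # install man2html (Debian/Ubuntu; the package name may differ on your distro)
    sudo apt install man2html
    # look up the page path, convert it, and drop the first two header lines
    man2html "$(whereis -m pwd | awk '{print $2}')" | tail -n +3 > pwd.html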

    Combine all of these together in a questionable incantation, and you might end up with something like this:

    mkdir -p tmp ; cd tmp
    apropos -s 1 . | cut -d' ' -f1 | while read -r page; do whereis -m "$page" ; done | while read -r id path rest; do man2html "$path" | tail -n +3 > "${id::-1}.html"; done
    

    List every command in section 1 and extract just its name. For each name, get a file path with whereis. Then for each name and path (ignoring the rest of the whereis output), convert the page to html and save it as a file named $id.html (the ${id::-1} simply strips the trailing colon whereis puts after the name).
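
    Spelled out over a few lines it reads a bit easier (same idea as the one-liner, plus a small guard for names whereis finds no page for):

    #!/usr/bin/env bash
    mkdir -p tmp && cd tmp || exit 1

    # every section 1 page name, one per line
    apropos -s 1 . | cut -d' ' -f1 |
    while read -r page; do
        whereis -m "$page"              # prints: "name: /path/to/page.1.gz ..."
    done |
    while read -r id path rest; do
        [ -n "$path" ] || continue      # skip names whereis found no man page for
        man2html "$path" | tail -n +3 > "${id::-1}.html"   # ${id::-1} drops the trailing colon
    done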

    It might take a little while to run, but then you could run firefox . or whatever and browse the resulting mess.

    Or keep tweaking all of this until it’s just right for you.





  • More appropriate tools to detect AI-generated text, you mean?

    It’s not a thing. I don’t think it will ever be a thing. Certainly not reliably, and never as a 100% certainty tool.

    The punishment for a teacher deciding you cheated on a test or an assignment? I don’t know, but I imagine it sucks. Best case, you’d probably be at risk of failing the class and potentially the grade/semester. Worst case you might get expelled for being a filthy cheater. Because an unreliable tool said so and an unreliable teacher chose to believe it.

    If you’re asking what teachers should do to defend against AI-generated content, I’m afraid I don’t have an answer. It’s akin to giving students math homework but demanding that they not use calculators. That might have been reasonable before calculators were a thing, but it isn’t anymore, so teachers don’t expect that rule to make sense and don’t impose it on students.




  • I was watching the network traffic sent by Twitter the other day, as one does, and apparently whenever you stop scrolling for a few seconds, whatever post is visible on screen at that time gets added to a little pile that then gets “subscribed to” because it generated “engagement”, no click needed.
    This whole insidious recommendation nonsense was probably a subplot in the classic sci-fi novel Don’t Create The Torment Nexus.

    Almost entirely unrelated, but I’ve been playing The Algorithm (part of the Tenet OST, by Ludwig Göransson) on repeat for a bit now. It’s also become my ring tone, and if I can infect at least one other hapless soul with it, I’ll be satisfied.


  • Last I checked, there were at least 3 subreddits where cryptocurrency was being handed out regularly to active participants.
    They’re called “Community Points”, and get a custom name for each sub (“moons” in /r/cryptocurrency, “donuts” in /r/ethtrader, and “bricks” in /r/fortniteBR.)

    I don’t know how the other subs fared, but /r/cryptocurrency became noticeably gamed by actors attempting to maximize their financial gains.

    So… I guess it’s gonna be awesome.



  • Presumably because they don’t have a single delivery employee. They just provide “tech” that lets drivers and customers find each other.

    Of course if those companies were to become responsible for providing a living wage to their “gig workers”, then it becomes harder to still call them mere “tech” companies (and some might argue that an article using that label to describe them is in fact implicitly picking a side in that lawsuit.)


  • The term AI was coined many decades ago to encompass a broad set of difficult problems, many of which have become less difficult over time.

    There’s a natural temptation to remove solved problems from the set of AI problems, so playing chess is no longer AI, diagnosing diseases through a set of expert system rules is no longer AI, processing natural language is no longer AI, and maybe training and using large models is no longer AI nowadays.

    Maybe we do this because we view intelligence as a fundamentally magical property, and anything that has been fully described has necessarily lost all its magic in the process.
    But that means that “AI” can never be used to label anything that actually exists, only to gesture broadly at the horizon of what might come.



  • I’ll note that there are plenty of models out there that aren’t LLMs and that are also being trained on large datasets gathered from public sources.

    Image generation models, music generation models, etc.
    Heck, it doesn’t even need to be about generation. Music recognition and image recognition models can also be trained on the same sort of datasets, and arguably raise similar IP rights questions.

    It’s definitely a broader topic than just LLMs, and attempting to exhaustively enumerate the flavors of AIs/models/whatever that should be part of this discussion is fairly futile given the fast-evolving nature of the field.