  • Certainly the latter.

    I have pretty decent insurance through work, but if I’m picking up a prescription, it’s cheaper for me to say I don’t have insurance and use a free discount card (like GoodRx) than to use my insurance. We’re talking $150-$200 for one prescription (a one month supply) with insurance vs $30 without.

    To be fair, I have an HDHP with an HSA so my insurance is only supposed to negotiate a discount until I hit the deductible, rather than paying for it. Full price is $200-$250, I think? (I get generics and each generic variant has a slightly different price.) So technically they’re providing a discount, just not a very good one.

    Insurance also likes to require a “prior authorization,” which was always a fun surprise after making it through the pharmacy line. That normally takes a couple days to resolve, at minimum, and sometimes longer. If you’re not familiar with prior auths, it’s basically when the insurance company says “Hey doc, can you justify why you’re prescribing this and answer these eight questions?” and then they have someone without a medical degree review the answer and see if it’s good enough.

    The only downside to paying out of pocket with a discount card is that the $30 doesn’t go toward my deductible. But since my deductible is multiple thousands of dollars, unless something else happens during the year, I won’t hit it off the $150-$200 prescriptions plus regular doctor visits alone. Worst case, that’s $360 ($30 × 12 months) out of pocket that wouldn’t have counted toward the deductible, assuming I had a health crisis in December, vs $1440-$2040 (the $120-$170 monthly savings × 12) saved if I don’t.

    X-rays are even worse, because you’re not told the price ahead of time.

  • Illegal vote suppression elected Trump, but even if it hadn’t, you should blame Democrats before blaming people who voted for third party candidates. Now, if you’re talking about people who “protest voted” by voting for Trump (in both the primaries and the election), then sure. Those people did, in fact, play an instrumental part in electing him.

    Why blame Democrats? Well, beyond just kinda being Republican-lites:

    • for opposing ranked choice voting (and alternatives)
    • for not rallying around progressive candidates
    • for not choosing Kamala via primary elections in 2024

    Democrats are the bare minimum “harm reduction” party, and I don’t bear any ill will toward people who voted for them rather than a party that would actually try to effect change, but the opposite mindset - blaming third party voters for not voting for Democrats - is very shortsighted. And as third party voters have never had the power to enact RCV or STAR voting or otherwise improve the system, blaming them instead of the Democrats who have had that power is inane.

    I’ve voted for a Democrat every single presidential election that I’ve been able to, but I honestly wish I hadn’t. I’d much rather there be more visibility for third parties, and for more people to feel empowered to vote for third party candidates.

  • Glaring doesn't imply a negative meaning. In this case it's used to mean "obvious".

    Unless you’re suggesting that “glaring” means “obviously staring” (it doesn’t - that would be “glaringly staring”) this doesn’t make any sense.

    “[He’s] glaring at [direct object]” is an example of a sentence that uses the present participle form of the verb “glare,” which explicitly communicates anger or fierceness.

    If you’re not convinced, read on.

    —————

    The verb form that takes an object is:

    Glare (verb with object): to express with a glare. “They glared their anger at each other.”

    The noun form the above definition references is:

    Glare (noun): a fiercely or angrily piercing stare.

    “Glaring” can be an adjective, and one of those definitions does mean “obvious” or “conspicuous,” but the use of that form of the word doesn’t make sense in her sentence. Think about a comparable sentence like “The undercover operative is conspicuous at the bar,” where the bar is the location. (Even then, most people wouldn’t use “glaring” in that sentence, as “conspicuous” or “obvious” are much less ambiguous; the operative could be staring piercingly or angrily at the bar rather than being conspicuous while at the bar.) Another example that makes a bit more sense is “The effect of the invasive plants is glaring at the park.”

    But for that interpretation to be valid here, you’d have to:

    • believe that the dude is trying to hide/blend in, or otherwise explain how he - not what he’s doing, but the dude himself - is conspicuous
    • believe that the woman’s referring to her own ass as a location
    • assume that she isn’t commenting on how the guy is looking at her ass, even though the joke depends on giving him something different to look at

    That’s a bit of a stretch.

  • This is what I would try first. It looks like 1337 is the exposed port, per https://github.com/nightscout/cgm-remote-monitor/blob/master/Dockerfile

    x-logging: &default-logging
      options:
        max-size: '10m'
        max-file: '5'
      driver: json-file
    
    services:
      mongo:
        image: mongo:4.4
        volumes:
          - ${NS_MONGO_DATA_DIR:-./mongo-data}:/data/db:cached
        logging: *default-logging
    
      nightscout:
        image: nightscout/cgm-remote-monitor:latest
        container_name: nightscout
        restart: always
        depends_on:
          - mongo
        logging: *default-logging
        ports:
          - 1337:1337
        environment:
          ### Variables for the container
          NODE_ENV: production
          TZ: [removed]
    
          ### Overridden variables for Docker Compose setup
          # The `nightscout` service can use HTTP, because we use `nginx` to serve the HTTPS
          # and manage TLS certificates
          INSECURE_USE_HTTP: 'true'
    
          # For all other settings, please refer to the Environment section of the README
          ### Required variables
          # MONGO_CONNECTION - The connection string for your Mongo database.
          # Something like mongodb://sally:sallypass@ds099999.mongolab.com:99999/nightscout
          # The default connects to the `mongo` included in this docker-compose file.
          # If you change it, you probably also want to comment out the entire `mongo` service block
          # and `depends_on` block above.
          MONGO_CONNECTION: mongodb://mongo:27017/nightscout
    
          # API_SECRET - A secret passphrase that must be at least 12 characters long.
          API_SECRET: [removed]
    
          ### Features
          # ENABLE - Used to enable optional features, expects a space delimited list, such as: careportal rawbg iob
          # See https://github.com/nightscout/cgm-remote-monitor#plugins for details
          ENABLE: careportal rawbg iob
    
          # AUTH_DEFAULT_ROLES (readable) - possible values readable, denied, or any valid role name.
          # When readable, anyone can view Nightscout without a token. Setting it to denied will require
          # a token from every visit, using status-only will enable api-secret based login.
          AUTH_DEFAULT_ROLES: denied
    
          # For all other settings, please refer to the Environment section of the README
          # https://github.com/nightscout/cgm-remote-monitor#environment

  • To run it with Nginx instead of Traefik, you need to figure out what port Nightscout’s web server runs on, then expose that port, e.g.,

    services:
      nightscout:
        ports:
          - 3000:3000

    You can remove the labels, since those are only used by Traefik, and you can remove the Traefik service itself, too.

    Then just point Nginx to that port (e.g., 3000) on your local machine.
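
    As a rough sketch, assuming Nginx runs on the same host as Docker, the Nginx side could look something like the following. The server name and certificate paths are placeholders, not part of your setup:

    server {
        listen 443 ssl;
        server_name nightscout.example.com;  # placeholder domain

        # Placeholder certificate paths; point these at your real certificates
        ssl_certificate     /etc/letsencrypt/live/nightscout.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/nightscout.example.com/privkey.pem;

        location / {
            # Forward everything to the port exposed in the compose file
            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }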

    —————

    Traefik has to know the port, too, but it will auto-detect the port that a local Docker service is running on. It looks like your config is relying on that feature, as I don’t see the label that explicitly specifies the port.
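
    For reference, if auto-detection ever picks the wrong port (e.g., if the image exposes more than one), you can pin it with a label along these lines - the `nightscout` service name in the label is just an assumption based on your compose file:

    services:
      nightscout:
        labels:
          # Tell Traefik explicitly which container port to route to
          - traefik.http.services.nightscout.loadbalancer.server.port=3000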

  • JustWatch is still useful if you want to act like you watched it legitimately, e.g., if a coworker asks where they can watch it. Even if your coworker also pirates, they might not have an account on your private tracker, Usenet, etc.

    I may be wrong, as I haven’t actually torrented anything substantial since Demonoid was still a thing, but it all feels less accessible than it used to be.

  • It’s not “dark green,” that’s for sure.

  • There’s a whole history of people, both inside and outside the field, shifting the definition of AI to exclude any problem that had been the focus of AI research, as soon as that problem was solved.

    Bertram Raphael said “AI is a collective name for problems which we do not yet know how to solve properly by computer.”

    Pamela McCorduck wrote “it’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, but that’s not thinking” (Page 204 in Machines Who Think).

    In Gödel, Escher, Bach: An Eternal Golden Braid, Douglas Hofstadter named “AI is whatever hasn’t been done yet” Tesler’s Theorem (crediting Larry Tesler).

    https://praxtime.com/2016/06/09/agi-means-talking-computers/ reiterates the “AI is anything we don’t yet understand” point, but also touches on one reason why LLMs are still considered AI - because in fiction, talking computers were AI.

    The author also quotes Jeff Hawkins’ book On Intelligence:

    Now we can see the entire picture. Nature first created animals such as reptiles with sophisticated senses and sophisticated but relatively rigid behaviors. It then discovered that by adding a memory system and feeding the sensory stream into it, the animal could remember past experiences. When the animal found itself in the same or a similar situation, the memory would be recalled, leading to a prediction of what was likely to happen next. Thus, intelligence and understanding started as a memory system that fed predictions into the sensory stream. These predictions are the essence of understanding. To know something means that you can make predictions about it. …

    The human cortex is particularly large and therefore has a massive memory capacity. It is constantly predicting what you will see, hear, and feel, mostly in ways you are unconscious of. These predictions are our thoughts, and, when combined with sensory input, they are our perceptions. I call this view of the brain the memory-prediction framework of intelligence.

    If Searle’s Chinese Room contained a similar memory system that could make predictions about what Chinese characters would appear next and what would happen next in the story, we could say with confidence that the room understood Chinese and understood the story. We can now see where Alan Turing went wrong. Prediction, not behavior, is the proof of intelligence.

    Another reason why LLMs are still considered AI, in my opinion, is that we still don’t understand how they work - and by that, I of course mean that LLMs have emergent capabilities that we don’t understand, not that we don’t understand how the technology itself works.

  • We are. Why do you think we stopped?

  • PSTN is wiretapped.

    It’s a good thing that the website itself supports sending and receiving alerts, then.

  • Current generation iPad Pros and Airs have the same processing power as Apple Silicon Macs. That’s more than enough for Blender. Even the base iPad and the iPad Mini likely have enough processing power - though I don’t think the base iPad has enough RAM.

  • Does mirroring a screen (or adding a screen) from a computer or connecting to a computer via remote desktop count?

  • if everyone thought like you no one would create digital media

    This is obviously incorrect.

  • I thought Hue bulbs used Zigbee?

  • The up arrow moves through the letters, e.g., A->B->C. The down arrow moves to the next character in the sequence, e.g., C->CA->CAA. If you click past the correct letter, you’ll have to click all the way through again. And if you submit the wrong letter, you have to start all over (after it takes twenty seconds attempting to connect with the wrong password and then alerts you that it didn’t work, of course).

  • Fair point; I should have asked about commercial games in general.

    That said, I didn’t mean that the game studio itself would do the AI training and own the models in-house; if they did, I’d expect it to go just as poorly as you do. Rather, I’d expect the model to be created by an organization that specializes in that sort of thing.

    “Marey” is one example I found of a GenAI model whose creators say it was trained ethically.

    Another is Adobe Firefly, where Adobe says they trained only on licensed and public domain content. It also sounds like Adobe is paying the artists whose content was used for AI training. I believe that Canva is doing something similar.

    StabilityAI is also doing something similar with Stable Audio 2.0, where they partnered with a music licensing company, AudioSparx, to ensure that artists are compensated, AI opt-outs are respected, etc.

    I haven’t dug into any of those too deeply, but at the surface level, at least, they seem to be heading in the right direction.

    One of the GenAI scenarios that’s the most terrifying to me is the idea of a company like Disney using all the material they have copyright for to train their own, proprietary GenAI image, audio, and video tools… not because I think the outputs would be bad, but because of the impact that would have on creators in that industry.

    Fortunately, as long as copyright doesn’t apply to purely AI generated outputs, even if trained entirely on your own content, then I don’t think Disney specifically will do this.

    I mention that as an example because that usage of AI, regardless of how ethically the model was trained, would still be unethical, in my opinion. Likewise in game creation, an ethically trained and operated model could still be used unethically to eliminate many people’s jobs solely in the interest of higher profits.

    I’d be on board with AI use (in game creation or otherwise) if a company were to say, “We’re not changing the budget we have for our human workforce, including for contractors, licensed art, and so on, other than increasing it as inflation and wages increase. We will be using ethical AI models to create more content than we otherwise would have been able to.” But I feel like in a corporate setting, its use is almost always going to result in them cutting jobs.

  • Are you okay with AAA studios using GenAI that was trained only on licensed works?

  • Depends on your e-reader! If you have a Kindle, Kobo, or Nook, yes, that’s true. However:

    Boox has e-readers that run Android, so you can install Hoopla. The Palma 2 is phone-sized, which is great. The Page, Leaf2, and Go 7 are all in the 7” form factor, plus they have 6” versions. And they have tablet sizes, too. They have both traditional black-and-white and color e-ink displays.

    I have the Boox Air 3C and the original Palma, and both are great. I’ll likely get a Boox as my next standard-sized e-reader, too (whenever I replace my Kindle Oasis). Though unless the technology drastically improves before then, it’ll be one with a black-and-white screen. (The color is nice in the tablet sizes, though, especially for comics from Hoopla.)

    Some other options that I’m less familiar with include:

    • Bigme has Android 7” color e-readers, as well as tablets and e-ink smartphones.
    • Meebook has e-readers that run Android (and Android e-ink tablets)
    • The MuSnap Aura C is a 10” Android e-ink tablet
    • XPPen has an 11” Android e-ink tablet
  • One thing Ubuntu users should know is that the change will only provide performance boosts when GPUs are handling workloads that use the OpenCL framework or the oneAPI Level Zero interface. That likely means that people running games and similar apps will see no benefit.

  • Copyright applies to unfinished works, too. There are many reasons it might not protect a given unfinished work, but those same reasons are just as relevant to finished works.

    If someone steals your physical drawing, that’s theft. If they take a picture of it, then use the picture - or your picture + modifications - without your permission, particularly in a commercial work, then that’s copyright infringement, but not theft. If they steal your physical drawing and then take a picture and so on, then it’s both theft and copyright infringement.

    Most likely this wasn’t considered copyright infringement because the allegedly copied art isn’t copyrightable, e.g., game mechanics; or because the plaintiff didn’t own the copyrights and thus couldn’t sue (possibly the art was still copyrighted by the original artists, having never been purchased; possibly it was stock assets that were re-purchased by the defendant). There are any number of reasons. However, “the work wasn’t published” isn’t one of them.

    On the other hand, it’s quite likely they were able to sue for theft of trade secrets for that very reason. And they might have chosen to do that simply because proving copyright infringement is much more difficult.