• 0 Posts
  • 80 Comments
Joined 1 year ago
Cake day: June 12th, 2023


  • OpenAI on that enshittification speedrun any% no-glitch!

    Honestly though, they’re skipping right past the “be good to users to get them to lock in” step. They can’t even use the platform capitalism playbook because it costs too much to run AI platforms. Shit is egregiously expensive and doesn’t deliver sufficient return to justify the cost. At this point I’m ~80% certain that AI is going to be a dead tech fad by the end of this decade because the economics just don’t work now that the free money era has ended.



  • One of my grandfathers used to work for Nortel. One of the projects he worked on was the Trans Canada Microwave, which was a microwave relay system built in the '50s to carry television and telephone signals across Canada. The towers were installed all over Canada in remote locations and high elevations. Maintenance on the system could be required even when the weather was bad. My grandfather told me that the engineers who worked on the towers would sometimes stick their hands in front of the microwave emitters to warm them up. It’s anecdotal, but I’m relatively confident that it’s theoretically possible to warm people with microwaves.

    Big caveat, though. Those engineers knew how powerful the emitters were, they knew that microwaves are not ionizing radiation and thus posed no cancer risk, and they knew roughly what percentage of their hands was composed of water, and thus how much heat energy their hands would absorb at a given power level. That’s the only reason they were willing to do it. Well, that and they were probably the kind of people who got a kick out of doing something that would appear insane to most of the populace.

    It seems very unlikely to me that a microwave system could be turned into a safe people-heating system for at least the following reasons:

    • Feedback loops. All modern HVAC works on feedback loops: your thermostat detects that the temperature has dropped below the set-point, fires up your furnace / heat pump / electric baseboard / whatever, and shuts the heat off once the set-point is reached again. A microwave system would warm bodies directly rather than the air, so current thermostats would not be able to detect the effect of microwave heating, which prevents the establishment of a feedback control loop.
    • Uneven heating. Things with more water will heat up a lot faster than things with little water. This is usually fine when microwaving food since most of our food is water, in varying concentrations. If you’re heating up a burger in the microwave, you can put the patty in by itself for a minute, then put the bun in for 15 seconds, then reassemble a burger that doesn’t have a cold patty or a stiff overcooked bun. If you’re heating up a person, you can’t ask them to take out their almost-entirely-water eyeballs to ensure they don’t overheat.
    • Failure conditions. If a conventional heating system gets stuck in the on state, the worst result is probably that your house gets sweltering hot, but not hot enough to kill you in a moderate timeframe. Depending on power levels, a microwave heating system that entered the same stuck-on failure mode could internally cook people in their sleep.
    • Efficiency. It takes a considerable amount of power to run even a small microwave, and that’s blasting microwaves into a relatively tiny enclosed volume. Heating people would require microwaving a much larger volume, and the people in it would also be moving around. Trying to emit microwaves into even a house-sized volume would probably be prohibitively costly.
    • Interactions with metal and other objects. Microwaves can create intense electrical fields around metal objects, and those can become intense enough to create plasma and electrical arcs. Hell, you can create plasma in your microwave with two grapes. Blasting microwaves into a large volume with unknown contents would be a great way to create an unexpected fire.
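    The feedback-loop point above can be sketched as a simple bang-bang controller. This is a toy simulation with made-up numbers, not a model of any real thermostat:

```python
# Toy bang-bang thermostat (illustrative values only). The loop closes
# because the sensor measures the same thing the heater changes: air
# temperature. Microwave heating warms bodies rather than air, so an
# air-temperature sensor could never close this loop.

def thermostat_step(air_temp, setpoint, heater_on, hysteresis=0.5):
    """Decide the heater state for one control cycle."""
    if air_temp < setpoint - hysteresis:
        return True           # too cold: turn the heat on
    if air_temp > setpoint + hysteresis:
        return False          # too warm: turn the heat off
    return heater_on          # inside the deadband: keep current state

temp, heater = 17.0, False
for _ in range(50):
    heater = thermostat_step(temp, setpoint=21.0, heater_on=heater)
    temp += 0.4 if heater else -0.2   # heater warms the air; walls leak heat

print(round(temp, 1))   # oscillates in a narrow band around 21 degrees
```

    The simulation settles into a band around the set-point; if the sensor instead read something the heater doesn’t directly affect, the loop would never stabilize.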

  • Not the person who made the comment, but here’s my understanding. A “third place” is somewhere you spend a lot of time when you’re not at home (the first place) or at school/work (the second place). Third places such as community centers were vital to the civil rights movement in the ’60s; much of the movement’s meeting, debating and organizing took place in them.

    The Reagan administration systematically defunded any of these politically active third places that were receiving federal funds, probably because they were worried that they’d be infiltrated by those scary communists. They were so worried about what the organized people might do in the future that they did everything they could to kick the financial struts out from under these community organizations. In many cases this destroyed some or all of the local community benefits that those organizations were actually providing.

    This trend cut across the political spectrum too. The Clinton administration did its own wave of defunding, though I suspect that was more for economic (i.e. neoliberal) than political ideology. Combine the lack of community investment with the rise of the internet, and you arrive at the situation we have today where third places are becoming increasingly scarce. It’s hard for communities to develop and maintain a cohesive identity when there are no longer any metaphorical “town squares” where the people in that community gather.


  • I think you’re referring to FlareSolverr. If so, I’m not aware of a direct replacement.

    Main issue is it’s heavy on resources (I have an rpi4b)

    FlareSolverr does add some memory overhead, but otherwise it’s fairly lightweight. On my system FlareSolverr has been up for 8 days and is using ~300MB:

    NAME           CPU %     MEM USAGE
    flaresolverr   0.01%     310.3MiB
    

    Note that any CPU usage introduced by FlareSolverr is unavoidable, because that’s how Cloudflare’s protection works. Cloudflare creates a workload in the client browser that is trivial if you’re making a single request, but brings your system to a crawl if you’re trying to send many requests, e.g. DDoSing or scraping. You need to execute that browser-based work somewhere to get past the Cloudflare checks.
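    To make the asymmetry concrete, here’s a toy proof-of-work sketch. To be clear, this is not Cloudflare’s actual challenge (their checks involve real browser execution); it just illustrates “cheap once, expensive at scraping volume”:

```python
import hashlib

def solve_challenge(seed: bytes, difficulty: int = 2) -> int:
    """Brute-force a nonce whose SHA-256 digest starts with
    `difficulty` zero bytes -- cheap when done once."""
    nonce = 0
    while True:
        digest = hashlib.sha256(seed + nonce.to_bytes(8, "big")).digest()
        if digest[:difficulty] == b"\x00" * difficulty:
            return nonce
        nonce += 1

# One request: one quick solve (on the order of tens of thousands of hashes).
nonce = solve_challenge(b"request-1")

# A scraper firing 10,000 requests pays that cost 10,000 times over,
# which is exactly the asymmetry the protection relies on.
```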

    If hosting the FlareSolverr container on your rpi4b would put it under memory or CPU pressure, you could run the Docker container on a different system. When setting up FlareSolverr in Prowlarr you create an indexer proxy with a tag; any indexer with that tag sends its requests through the proxy instead of sending them directly to the tracker site. When FlareSolverr is running in a local Docker container, the proxy Host points at localhost, e.g. http://localhost:8191.
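    For reference, the indexer proxy settings in Prowlarr look roughly like this (field names and the default port are from my memory of the UI, so treat them as approximate):

```
Prowlarr -> Settings -> Indexers -> Add Indexer Proxy -> FlareSolverr
  Name: FlareSolverr
  Tags: flaresolverr      <- add this tag to each Cloudflare-protected indexer
  Host: http://localhost:8191/
```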

    If you run FlareSolverr’s Docker container on another system that’s accessible to your rpi4b, you could create an indexer proxy whose Host is “http://<other_system_IP>:8191”. Keep security in mind when doing this: if you’ve got a VPN connection on your rpi4b with split tunneling enabled (i.e. connections to local network resources are allowed while the tunnel is up), then this setup would allow requests to those indexers to escape the VPN tunnel.

    On a side note, I’d strongly recommend trying out a Docker-based setup. Aside from FlareSolverr, I ran my servarr setup without containers for years and that was fine, but moving over to Docker made the configuration a lot easier. Before Docker I had a complex set of firewall rules to allow traffic to my local network and my VPN server but drop any other traffic that wasn’t using the VPN tunnel. All that firewall complexity has now been replaced with a gluetun container, which is much easier to manage and probably more secure. You don’t have to switch to Docker all in one go; you can run a hybrid setup if need be.

    If you really don’t want to use Docker then you could attempt to install from source on the rpi4b. Be advised that you’re absolutely going offroad if you do this, as it’s not officially supported by the FlareSolverr devs. It requires installing an ARM-based Chromium browser, then setting some environment variables so that FlareSolverr uses that browser instead of trying to download its own. The exact steps are documented in this GitHub comment. I haven’t tested them, so YMMV. Honestly, I think this is a bad idea, because the full browser will almost certainly require more memory. The browser included in the FlareSolverr container is stripped down to the bare minimum required to pass the Cloudflare checks.

    If you’re just strongly opposed to Docker for whatever reason then I think your best bet would be to combine the two approaches above. Host the FlareSolverr proxy on an x86-based system so you can install from source using the officially supported steps.




  • It depends who you’re comparing. For the average US or Canadian citizen, I’m sure you’re correct. If you look at income levels I bet it’s a different story. The poor and middle class (whatever’s left of it) have to wait; the rich have the option of paying out of pocket. If I wanted to have a whole-body MRI scan done, I could get one next week for $3200. I wouldn’t even need to be sick! It requires a referral, but you can “obtain one virtually from (their) physician partners”, and you know their “physician partners” aren’t going to turn away business.


  • As a Canadian, I’ll be the first to say that our system isn’t perfect. If you’ve got a chronic but not life-threatening condition, like a need for knee or hip surgery, you could spend a long time on a waiting list. There are certainly lots of affluent Canadians who opt to step out of that line to get treatment at private for-profit clinics, both domestically and abroad. There’s always a shortage of something. Qualified doctors, nurses, family practitioners, CT or MRI machines, etc.

    That being said, if you do have a life-threatening condition, the Canadian healthcare system can work pretty well. My stepfather had pneumonia in Nov./Dec. of last year, and a chest X-ray revealed something concerning beyond the pneumonia. By early January biopsies had been done, by February he’d started radiation, which ran six or so weeks, then he was monitored for a while, and now he’s in remission. Everything moved fast because he had a time-critical condition. Total cost to my family: zero dollars (setting aside costs for gas, parking, snacks for stress-eating, etc.). I couldn’t imagine a family going through the same situation in the US.


  • The issue is that I have a 4k monitor and my current card can barely handle my desktop, never mind a game.

    Try running games at 1080p (1920 x 1080), which is exactly 1/4 of 4K UHD (3840 x 2160). Your graphics card will only need to do 25% of the work, and you shouldn’t get any resolution-scaling blurriness because everything divides evenly. This isn’t so much for your current card, which probably just can’t keep up with newer titles, as for shopping: look at the 1080p performance of current cards, decide how much performance you need and how much you’re willing to spend, and that’ll narrow down the selection a lot.
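    The 1/4 relationship is straightforward pixel arithmetic:

```python
# 1080p is exactly one quarter of 4K UHD, and each dimension is an
# exact 2x multiple, so upscaling needs no fractional (blurry) scaling.
uhd = 3840 * 2160    # 8,294,400 pixels
fhd = 1920 * 1080    # 2,073,600 pixels

print(uhd // fhd)                 # 4 -> the GPU shades 1/4 of the pixels
print(3840 % 1920, 2160 % 1080)   # 0 0 -> clean integer scaling
```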

    Coming from a GTX 760, almost anything current gen or current gen minus 1 is going to be a massive upgrade. It’s hard to recommend a specific card without some info on your budget. For example if you had a budget of $300 US I’d recommend an Nvidia RTX 4060 since it has the best 1080p performance within that budget, or alternately a Radeon RX 7600 if you’d prefer not Nvidia (e.g. if you’re on Linux, the Radeon driver story is a bit better).




  • It’s likely CentOS 7.9, which was released in Nov. 2020 and shipped with kernel version 3.10.0-1160. It’s not completely ridiculous for a one-year-old POS system to have a four-year-old OS. Design for those systems probably started a few years ago, when CentOS 7.9 was relatively recent; for an embedded system the bias would have been toward an established and mature OS, and CentOS 8.x was likely considered “too new” when these systems were being specced. Remotely upgrading between major releases would not be advisable on an embedded system either: the RHEL/CentOS in-place upgrade story is… not great. There was zero support for in-place upgrades until RHEL/CentOS 7, and it’s still considered “at your own risk” (source).


  • All due respect to Michelle Obama otherwise, but I think she was flat out wrong when she said “When they go low, we go high”. It’s the paradox of tolerance applied to the political realm. How do you ensure a tolerant society in the face of intolerant people? It’s impossible if you’re not allowed to be intolerant of intolerant people. How do you ensure that political discourse sticks to concrete policies and objective facts when your opponent refuses to engage with either and instead stoops to conspiracy theories and personal attacks? Also impossible, if you’re stuck talking about difficult concepts and nuanced facts while your opponent is free to sling personal insults and cognitively sticky memes that may have absolutely nothing to do with reality.

    The solution is to apply social contract theory. Tolerance doesn’t have to be a rule that you’re not allowed to break. It can be a social contract instead, so when someone breaks the social contract by being intolerant you are no longer bound by the contract, freeing you to not tolerate their behavior in return. Similarly, sticking to policy- and fact-based political debate doesn’t have to be a rule you’re not allowed to break, it can be a social contract between political opponents. If the other candidate won’t debate policy or facts then you’re free of the contract, which means you’re free to say they’re weird. Which they very much fucking are. Once you get most of the figurative children out of the room, you can go back to making actual progress amongst the contract-adhering adults who remain.



  • Anything that pushes the CPUs significantly can cause instability in affected parts. I think there are at least two separate issues Intel is facing:

    • Voltage irregularities causing instability. These could potentially be fixed by the microcode update Intel will be shipping in mid-August.
    • Oxidation of CPU vias. This issue cannot be fixed by any update, any affected part has corrosion inside the CPU die and only replacement would resolve the issue.

    Intel’s messaging around this problem has been heavily slanted towards saying as little as possible about the oxidation issue. Their initial Intel community post was very carefully worded to make it sound like voltage irregularity was the root cause, but a careful reading reveals that it only commits to instability being a root cause. They buried the admission that there is an oxidation issue in a Reddit comment, of all things. All they’ve said about oxidation is that the issue was resolved at the chip fab some time in 2023, and they’ve claimed it only affected 13th gen parts. There’s no word on which part numbers, date ranges, processor codes, etc. are affected. It seems pretty clear that they wanted the press talking about the microcode update and not the chips that will have to be RMA’d.