• 0 Posts
  • 59 Comments
Joined 1 year ago
Cake day: June 5th, 2023




  • People developing local models generally have to know what they’re doing on some level, and I’d hope they understand what their model is and isn’t appropriate for by the time they have it up and running.

    Don’t get me wrong, I think LLMs can be useful in some scenarios, and can be a worthwhile jumping-off point for someone who doesn’t know where to start. My concern is with the cultural issues and expectations/hype surrounding “AI”. With how the tech is marketed, it’s pretty clear that the end goal is for someone to use the product as a virtual assistant endpoint for as much information (and interaction) as can possibly be shoehorned through it.

    Addendum: local models can help with this issue, as they’re on one’s own hardware, but still need to be deployed and used with reasonable expectations: that it is a fallible aggregation tool, not to be taken as an authority in any way, shape, or form.


  • On the whole, maybe LLMs do make these subjects more accessible in a way that’s a net-positive, but there are a lot of monied interests that make positive, transparent design choices unlikely. The companies that create and tweak these generalized models want to make a return in the long run. Consequently, they have deliberately made their products speak in authoritative, neutral tones to make them seem more correct, unbiased and trustworthy to people.

    The problem is that LLMs ‘hallucinate’ details as an unavoidable consequence of their design. People can tell untruths as well, but if a person lies or misspeaks about a scientific study, they can be called out on it. An LLM cannot be held accountable in the same way, as it’s essentially a complex statistical prediction algorithm. Non-savvy users can easily be fed misinfo straight from the tap, and bad actors can easily generate correct-sounding misinformation to deliberately try and sway others.

    ChatGPT completely fabricating authors, titles, and even (fake) links to studies is a known problem. Far too often, unsuspecting users take its output at face value and believe it to be correct because it sounds correct. This is bad, and part of the issue is marketing these models as though they’re intelligent. They’re very good at generating plausible responses, but this should never be construed as them being good at generating correct ones.






  • Advertising is like the kudzu vine: neat and potentially useful if maintained responsibly, but more than capable of growing out of control and strangling the very landscape if you don’t constantly keep it in check. I think, for instance, that a podcast or over-the-air show running an ad read with an affiliate link is fine for the most part, as long as it’s relatively unobtrusive and doesn’t put limitations on what the content would otherwise cover.

    The problem is that there needs to be a reset of advertiser expectations. Right now, they expect the return on investment that comes from hyper-specific and invasive data, and I don’t think you can get that same level of effectiveness without it. The current advertising model is entrenched, and the parasitic roots have eroded the foundation. Those roots will always be parasitic because that’s the nature of advertising, and the profit motive in general when unchecked.


  • Here’s an article with a graph. Given that the figures are measured per 100,000 live births, one would need to find how many births occurred in the state of Texas in the years graphed. I was able to find that exact figure for 2022 in a document from the CDC (on page 8), which comes out to 389,533. Since the overall Texas maternal mortality rate was about 28.5 per 100k live births for that year (per the news article), the total number of mortalities would have been about 111.

    If you want, I could go find the figures for earlier years as well, but this should give a general ballpark for the numbers. Keep in mind that the maternal mortality rate only counts live births, so it does not include any pregnancy complications involving a nonviable fetus.
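    The arithmetic above is just a rate-to-count conversion, which can be sketched like this (the 28.5 rate and 389,533 birth count are the figures cited above; the function name is my own, for illustration):

```python
def deaths_from_rate(rate_per_100k: float, live_births: int) -> float:
    """Convert a rate measured per 100,000 live births into an estimated count."""
    return rate_per_100k / 100_000 * live_births

# 2022 Texas figures cited above: ~28.5 per 100k across 389,533 live births
estimate = deaths_from_rate(28.5, 389_533)
print(round(estimate))  # about 111
```

    The same function would work for any earlier year, given that year's rate and live-birth count.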



  • This limerick has to be forced to have a decent flow. Who emphasizes the first syllable in “assured”? Lines 3 and/or 4 would probably flow better with an extra syllable, but are otherwise ok and have consistent feet.

    Final verdict: 5/10, needs work

    Also, the little logo at the bottom is trying way too hard




  • Texas is gerrymandered to shit, and employs pretty nasty voter suppression tactics in populous (see: blue) counties by having very few polling stations per capita in those areas and making it a crime to give water/food to people waiting in line to vote. Big Texas cities are blue for the most part (with maybe a few exceptions in the DFW area).

    If you look at pretty much any of the cities within Texas on the latest map, you can see that they consolidate the core of the city into one or two solid blobs, then split the rest out to be diluted by rural areas. See Dallas/Tarrant County, Travis County, Bexar County, and Harris County for the most obvious cases of these.

    https://redistricting.capitol.texas.gov/docs/88th_Senate_Tabloid_2024_05_20.pdf

    On a population level, Texas is basically a blue state held hostage by a red state administration.


  • I’d also consider myself pretty tech-savvy, but that came from plenty of mistakes growing up, including putting malware on the family computer at least twice (mostly ads for these “Pokemon MMOs” back in the mid-aughts that were too enticing for my kid brain to refuse 😅).

    It’s very easy for me to forget how much of an outlier my tech experience is among most folks around my age. In my first year of college, I had an acquaintance I helped with essay advice, and I was very surprised to see that the only thing they really knew how to do was basic use of apps on their iPhone. They got a laptop for school, but they had no computer experience, no keyboard typing experience, and even the iPhone Settings app was a scary place to be avoided for the most part. To this person, Microsoft Word was a new thing they had to learn on top of everything else. In college. This was also in the South, so unfortunately I don’t know if I should have been that surprised.

    Regardless, it was pretty wild to me, but a very real reminder that not everyone has access to the same resources, education, and/or experience to draw on.