User feedback, such as when a user marks a certain email as spam or signals they want a sender’s emails in their inbox, is key to this filtering process, and our filters learn from user actions.
Maybe a lot of people just mark it as spam for some reason. I wonder why that could be? Could it be because they simply don't like your emails and think they feel spammy? No, that couldn't be it, it has to be that the same company that kissed up to Trump also just hates Republicans now for some reason! /s
I have a feeling it'll simply grow in popularity, since a stable release will probably make a lot more people, myself included, feel comfortable recommending it.
Right now, I don't treat it as a backup in any way due to its beta nature, and I hope that can change.
Here's the official Android Developer page on the developer verification program. Bottom of the page, green square on the right labeled "Do you have any additional questions or feedback?"
B-b-but think of the PATRIOTS!!!!! They don't want their feewings hurt by you burning a piece of fabric with the shapes they like way, way too much, is it too much to ask??/??//?? /s
Just checked the contributors' page. The crawled privacy policy being referenced is stated to be 4 months out of date, but the policy on Nebula's website hasn't changed since Aug 31, 2023, so I think TOSDR might be a little bugged and just doesn't have all of the current policy's points available for contributors to tag. The current privacy policy is much lengthier, to cover local state privacy regulations, the scope of what they now offer, etc.
Still, it's all pretty boilerplate, and nothing about it is really out of the ordinary or especially harmful. Extremely basic attribution might be used if you click through to Nebula from an ad, in which case they might share a non-identifying hashed ID with that company. They collect aggregate statistics to measure the impact of marketing campaigns, they sometimes email you, and they collect the kind of device data that most web servers log by default. All very standard.
If they update any part of the policy about how they collect, use, or share your data, they'll notify you.
They even explicitly say not to provide them with info on your race, politics, religion, health, biometrics, genetics, criminal history, or union membership. You're given an explicit right to delete your account regardless of local privacy laws, and they provide a dedicated email address specifically for any requests related to the privacy policy.
None of this is crazy, and I have no clue why artyom would call it a "shithole" based on that.
Except for these people, it almost definitely is. They have staff, an office, inventory to manage, etc. Most YouTubers nowadays aren't operating on their own, and thus have financial expenses beyond just paying themselves for their own labor, expenses that can't keep being covered if their revenue stream goes away, or even just takes a large enough cut.
It's unfortunate, but that's just how a lot of the content creation industry works right now, especially on YouTube.
Most third places have either disappeared or been replaced with ones you can only really enjoy if you're able to spend money every time you go there (e.g. bars, theaters, cafes, clubs, etc.).
Many small towns are only getting smaller, leaving the people who still live in them with fewer and fewer people to talk to.
Economic circumstances are consistently getting worse across the board, meaning people are spending more time at work just to stay alive, leaving little room to arrange time somewhere with people.
It's not like it's impossible, obviously, but the state of the world is actively discouraging prosocial behavior through both cost and circumstance.
Fucking finally. Dems always pull the "we won't stoop to their level" BS, and all it ends up doing is letting the right slowly seize more and more power.
I hope this will become a growing trend, but sadly I'm not very optimistic given how the party has acted prior to this.
How is it not a shift? It would be an expansion if they were increasing their overall coverage by region in addition to Switzerland, but they're actually moving their infrastructure out of Switzerland and not hosting it there after the switch is completed (other than what I'd assume would be things like a VPN endpoint for those who want it, or services for Swiss customers who want their data to remain entirely within Switzerland), because this law would put them under too much scrutiny.
They even state at the very bottom of this blog post about their new AI features that "Proton is moving most of its physical infrastructure out of Switzerland."
Sadly though, I agree with you on the chat control part. I don't think they'll easily be able to escape it no matter where they go. Any still-standing bastions of privacy seem to be falling right before our eyes.
That's not an extension cable but an adapter, so it's not a problem in this case. It converts the signal from an audio jack into something that can travel over USB-C, rather than simply extending a USB-C cable. And even if it were a USB-C to USB-C extension rather than an adapter, it could almost certainly handle any amount of power and data an audio jack would pass through it, no problem.
The problem arises when someone uses a higher-spec USB-C cable with a lower-spec USB-C extension cable, such as a 240W charger with an extension in the middle that's only rated for 120W. The charger and device negotiate power end to end with no way to account for the extension, so more current would pass through than the lower-spec cable could handle, and it would overheat.
The amount of data and power from an audio jack is simply too small to overwhelm practically any USB-C cable or adapter that exists, thus it's not an issue.
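If you want to see the arithmetic behind that, here's a minimal sketch; the resistance and current figures are assumptions I made up for illustration, not measured values:

```python
# Rough illustration of why a lower-rated extension in the chain overheats.
# All numbers here are illustrative assumptions, not measured values.

def cable_heat_watts(current_amps: float, resistance_ohms: float) -> float:
    """Power dissipated as heat inside the cable itself: P = I^2 * R."""
    return current_amps ** 2 * resistance_ohms

# USB PD delivers 240W as 48V at 5A. A hypothetical "120W" extension might
# only be built (conductor gauge, connectors) for a fraction of that current,
# but the charger and device negotiate 5A end to end without knowing it's there.
negotiated_current = 5.0     # amps agreed on by charger and device
extension_resistance = 0.15  # ohms, assumed for a cheap, thin extension

print(f"Heat in extension at 5A: "
      f"{cable_heat_watts(negotiated_current, extension_resistance):.2f} W")

# Compare with analog audio through a jack adapter: a few milliamps at most.
audio_current = 0.01  # ~10 mA into headphones, a generous estimate
print(f"Heat from audio use: "
      f"{cable_heat_watts(audio_current, extension_resistance) * 1000:.3f} mW")
```

The point is just that heat scales with the square of the current, so the negotiated 5A lands on whatever sits in the middle, whether or not it's built for it.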
Most of these AI crawlers are run by major corporations operating out of datacenters with known IP ranges, which is why IP range blocks work. That's why, in Codeberg's response, they mention that the crawling stopped after they fixed the configuration issue that had only been blocking those IP ranges on non-Anubis routes.
For example, OpenAI publishes a list of IP ranges that their crawlers can come from, along with the user agents for each bot.
Perplexity also publishes IP ranges, but Cloudflare later found them bypassing no-crawl directives with undeclared crawlers. They did use different IPs, but not from "shady apps". Instead, they would simply rotate ASNs and request a new IP.
They do it this way because it's still legal. Rotating ASNs and IPs within an ASN is not a crime; maliciously using apps installed on people's devices to route network traffic the owners are unaware of is. Routing through devices would also carry much higher latency and could even open them up to man-in-the-middle attacks, which they clearly don't want.
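For anyone curious what using those published lists looks like, here's a minimal sketch in Python. It assumes the list at openai.com/gptbot.json keeps the shape OpenAI currently documents (a "prefixes" array of ipv4Prefix/ipv6Prefix entries), so double-check the format before relying on it:

```python
# Minimal sketch of an IP-range check against OpenAI's published GPTBot
# ranges. Assumes the JSON keeps its documented shape; verify before use.
import ipaddress
import json
import urllib.request

GPTBOT_RANGES_URL = "https://openai.com/gptbot.json"

def load_gptbot_networks():
    """Fetch and parse the published CIDR ranges into network objects."""
    with urllib.request.urlopen(GPTBOT_RANGES_URL) as resp:
        data = json.load(resp)
    networks = []
    for entry in data.get("prefixes", []):
        prefix = entry.get("ipv4Prefix") or entry.get("ipv6Prefix")
        if prefix:
            networks.append(ipaddress.ip_network(prefix))
    return networks

def is_gptbot_ip(ip: str, networks) -> bool:
    """True if the address falls inside any published GPTBot range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in networks)

if __name__ == "__main__":
    networks = load_gptbot_networks()
    # Hypothetical client address pulled from an access log:
    print(is_gptbot_ip("20.171.206.4", networks))
```

A real setup would do the block at the reverse proxy rather than in application code, but the range check itself is the same idea.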
While true to a degree, I think the fact is that AI is just much more complex than a knife, and clearly comes with perverse incentives around it, which cause people to use it "wrong" more often than not.
Sure, you can use a knife to cook just as you can use a knife to kill, but society encourages cooking while legally and morally discouraging murder. With AI it's the inverse: society encourages any shortcut that gets you to an end goal for the sake of profit, without caring about personal growth or the overall state of the world if everyone takes that same shortcut, and the technology is designed with the intent of being a shortcut rather than just a tool.
The reason people use AI in so many damaging ways isn't just that it's possible for the tool to be used that way and some people don't care about others; it's that the tool is made with the intention of offloading your cognitive burden, doing things for you, and producing what can be used as a final product.
Imagine if generative image models could only fill in colors on line art, nothing more. The scope of harm they could cause would be very limited, because you'd always need line art of the final product, which requires human labor. That would prevent a lot of slop from people unwilling to do even that, and the tool would be tailored as an assistant for artists rather than an entire creation tool for anyone.
Contrast that with GenAI models that generate entire images, or even videos, with the explicit premise and design of producing the final content, line art, colors, shading and all, from just a prompt. This directly encourages slop, because getting such a model to only do something like coloring in lines would require a much more complex setup to stop it from simply creating the end product all at once on its own.
We can even see how the cultural shifts around AI tracked changes in the UX of AI tools. The original interface for OpenAI's models was the "OpenAI Playground": a large text box with a bunch of sliders to tweak, where the model would just continue whatever sentence you typed unless you worded it like a conversation. It was designed to look like a tool, a research demo, a mindless machine.
Then, they released ChatGPT, and made it look more like a chat, and almost immediately, people began to humanize it, treating it as its own entity, a sort of semi-conscious figure, because it was "chatting" with them in an interface similar to how they might text with a friend.
And now, ChatGPT's homepage is presented as just a simple search box, and lo and behold, the marketing has shifted to positioning ChatGPT not as a companion but as a research tool (e.g. "deep research"), and people have begun treating it more like a source of truth than just a thing talking to them.
And even for models with extreme complexity in how you can manipulate them, and many possible use cases, interfaces are made as sleek and minimalistic as possible, hiding away any ability you might have to influence the result with real, human creativity.
The tools might not be "evil" on their own, but when interfaces are designed the way they are, marketing is framed the way it is, and the profit motive rewards using them in the laziest way possible, bad outcomes aren't just a side effect; they're a result by design.
To be fair, the article does mention that he's considering using that money to fund his current startup if it ever needs a cash injection, rather than turning straight to more VCs. That's probably good in the long run, since it reduces how much the company might have to cater to predatory investors over customers, so I'd consider at least part of his remaining wealth a means of funding a venture separate from his private life. Still, he probably does have more than enough even after that.
Also, I'd never heard of the Russian Nobleman paradox before, but I looked it up and it seems interesting. Thanks for sharing it.
the sticker, for anyone curious.