More, but not way more - they would be licensing Windows IoT, not a full-blown OS, and they wouldn’t be paying OTC retail rates for it.
I haven’t used DualShock so I can’t speak to that, but as far as Xbox One/S controllers go, there is no first-party support - literally all the drivers come from a non-MS-affiliated GitHub page. 360 controllers required the xpad driver as well - that isn’t first-party support. Yes, they work out of the box with Steam if you are using a wired connection, but that’s because it’s going through Steam Input (not first-party either), and making the controls of the submarine dependent on being launched through Steam is even more absurd. Gen 2 One/S controllers didn’t work over Bluetooth for a long time after they (silently) launched on most LTS Linux distros because the kernel was missing the requisite BLE functionality
That’s only assuming the sub was running Windows, where Xbox controllers work out of the box. On Linux there are no first-party drivers, and Bluetooth support on the One/S controllers simply didn’t exist at the time this happened. If it was an embedded system there would be no support whatsoever.
https://www.theverge.com/2017/9/19/16333376/us-navy-military-xbox-360-controller
US Navy used to spend $38,000 per controller until they found out Xbox controllers were better
lol. Did this in my old building - the dryer was on an improperly rated circuit and the breaker would trip half the time, eating my money and leaving wet clothes.
It was one of the old, “insert coin, push metal chute in” types. Turns out you could bend a coat hanger and fish it through a hole in the back to engage the lever that the push-mechanism was supposed to engage. Showed everyone in the building.
The landlord came by the building a month later and asked why there was no money in the machines. I told him “we all started going to the laundromat down the street because it was cheaper”
Top 91.9% means that only 8.1% of people are dumber than you
It’s not to be confused with being “in the 91.9th percentile”, which is the literal opposite
AI isn’t supposed to be creative - it isn’t even capable of that. It’s meant to min/max its evaluation criterion against a test dataset
It does this by regurgitating the training data associated with a given input as closely as possible
These people aren’t placing bets on who they want to win, they are placing bets where the house odds differ from the actual expected outcome. The people throwing big money on this are doing it based on actual data (amalgamating polls, etc), not just gut feelings.
If I think Kamala has a 45% chance of winning the election and the bookie is giving her implied odds of 40%, I should take that bet, because even though I think she will lose, I stand to make a 12.5% expected ROI on my bet. I can then hedge that bet with another bookmaker giving 48% implied odds, and if enough people do this the bookmakers’ odds will converge on 44%
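For concreteness, here’s a rough sketch of that math in Python (the 45% / 40% / 48% figures are the hypothetical numbers from above, not real odds):

```python
# Rough sketch of the expected-value math above; all numbers are hypothetical.

my_prob   = 0.45   # my estimate of Kamala winning
implied_a = 0.40   # bookmaker A's implied probability (decimal odds = 1 / 0.40 = 2.5)
implied_b = 0.48   # bookmaker B's implied probability on the same outcome

# Single bet at A: expected ROI = P(win) * payout - 1
roi = my_prob * (1 / implied_a) - 1          # 0.45 * 2.5 - 1 = 12.5%

# Hedge: back the opposite outcome at B, where "no" is priced at 1 - 0.48 = 52%.
# If the two legs' implied probabilities sum to less than 100%, sizing the stakes
# proportionally locks in a profit no matter who wins.
total_implied = implied_a + (1 - implied_b)  # 0.40 + 0.52 = 0.92
locked_in = 1 / total_implied - 1            # ~8.7% risk-free return

print(f"Expected ROI on the single bet: {roi:.1%}")
print(f"Guaranteed return on the hedged pair: {locked_in:.1%}")
```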
but either way I don’t think this “market” knew more than the mainstream media was telling us.
No, but it is a distillation of all the available public information (and some private information you won’t find elsewhere) into a single metric. If you read a single article you would assume there is either a 100% chance Biden drops out or a 0% chance - if you read every single news article in existence and aggregated all the social media buzz, polls, etc., into a statistical likelihood, you would likely come out with a number that closely matches the odds.
Biden was only going to drop out once, so you can’t say how closely these odds matched the actual likelihood on this specific question, but if you analyze hundreds of prediction markets like this, the implied odds correlate pretty strongly with the actual binary outcomes
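A minimal sketch of what that kind of calibration check could look like, with made-up numbers standing in for real market data:

```python
# Made-up illustration of a calibration check across many prediction markets:
# bucket markets by their implied probability and compare to how often the
# event actually happened. None of these numbers are real.
from collections import defaultdict

# (implied probability, did the event actually happen?)
markets = [(0.70, True), (0.72, True), (0.68, False), (0.74, True), (0.71, True),
           (0.30, False), (0.28, True), (0.31, False), (0.33, False)]

buckets = defaultdict(list)
for implied, happened in markets:
    buckets[round(implied, 1)].append(happened)   # group into ~10%-wide buckets

for bucket, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"implied ~{bucket:.0%}: happened {hit_rate:.0%} of the time ({len(outcomes)} markets)")
```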
Tesla still sells nearly 10x as many EVs (BEVs) as the next most popular brand (globally).
Tesla only sold 4% more EVs than BYD last quarter
Too bad it still can’t figure out how to do dead ass simple things the old assistant had no problem with, like setting a reminder or alarm
You can just point your domain at your local IP, e.g. 192.168.0.100
The feature is explicit sync, a brand-new graphics-stack API that would fix some issues with NVIDIA rendering under Wayland.
It’s not a big deal - Canonical basically said ‘this isn’t a bug fix or security patch, it’s not getting backported into our LTS release’ - so if you want it you have to install GNOME/Mutter from source, switch operating systems, or just wait a few months for the next Ubuntu release
GNOME said this update is a minor bug fix (point release)
Canonical said this is actually a major feature update, and doesn’t want to backport it into its LTS repositories
Reddit has way more data than you would have been exposed to via the API, though - they can look at things like the user’s ASN (is the traffic coming from a datacenter?), whether they were using a VPN, and they track things like scroll position, cursor movements, read time before posting a comment, how long it takes to type that comment, etc.
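Purely as an illustration of the kind of signals described above (the field names and thresholds here are made up for the sketch, not anything Reddit actually exposes):

```python
# Illustrative only: server-side session signals bundled into a feature vector
# that a bot classifier could consume. All names/thresholds are invented.
from dataclasses import dataclass

@dataclass
class SessionFeatures:
    asn_is_datacenter: bool   # did the request originate from a datacenter ASN?
    used_vpn: bool
    scroll_depth: float       # how far down the thread the user scrolled
    cursor_events: int        # number of recorded cursor movements
    read_time_s: float        # time on the thread before commenting
    typing_time_s: float      # time spent typing the comment

def looks_automated(f: SessionFeatures) -> bool:
    # Crude heuristic purely for illustration: no cursor activity,
    # near-zero read/typing time, and a datacenter origin.
    return (f.cursor_events == 0
            and f.read_time_s < 1.0
            and f.typing_time_s < 1.0
            and f.asn_is_datacenter)
```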
no one at reddit is going to hunt these sophisticated bots because they inflate numbers
You are conflating “don’t care about bots” with “don’t care about showing bot-generated content to users”. If the latter increases activity and engagement, there is no reason to put a stop to it. However, when it comes to building predictive models, A/B testing, and other internal decisions, they have a vested financial interest in making sure they are focusing on organic users - how humans interact with humans and/or bots is meaningful data; how bots interact with other bots is not
Not with 64 GB of RAM and 16+ cores on that budget
To compare every comment on Reddit to every other comment in Reddit’s entire history would require an index
You think in Reddit’s 20-year history no one has thought of indexing comments for data science workloads? A cursory glance at their engineering blog indicates they already perform much more computationally demanding tasks on comment data for content-filtering purposes
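As a sketch of why an index changes the math - instead of comparing every comment against every other comment, you key a lookup by a fingerprint of the normalized text plus the parent ID, so each new comment is a single lookup. Everything here (the names, the in-memory dict) is illustrative; a real system would back this with an actual datastore:

```python
# Illustrative sketch: an exact-match index over comment text, so checking a new
# comment against history is a hash lookup instead of a scan over every comment.
import hashlib

index: dict[str, list[str]] = {}   # fingerprint -> list of comment IDs

def fingerprint(parent_id: str, text: str) -> str:
    normalized = " ".join(text.lower().split())      # lowercase, collapse whitespace
    return hashlib.sha256(f"{parent_id}\n{normalized}".encode()).hexdigest()

def add_comment(comment_id: str, parent_id: str, text: str) -> None:
    index.setdefault(fingerprint(parent_id, text), []).append(comment_id)

def find_duplicates(parent_id: str, text: str) -> list[str]:
    # Word-for-word reposts under the same parent show up as prior entries here.
    return index.get(fingerprint(parent_id, text), [])
```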
you need to duplicate all of that data in a separate database and keep it in sync with your main database without affecting performance too much
Analytics workflows are never run on the production database - always on read replicas, which are built asynchronously from the transaction logs so as not to affect production read/write performance
Programmers just do what they’re told. If the managers don’t care about something, the programmers won’t work on it.
Reddit’s entire monetization strategy is collecting user data and selling it to advertisers - it’s incredibly naive to think that they don’t have a vested interest in identifying organic engagement
Look at the picture above - this is trivially easy. We are talking about identifying repost bots, not seeing if users pass/fail the Turing test
If 99% of a user’s posts can be found elsewhere, word for word, with the same parent comment, you are looking at a repost bot
I know everyone here likes to circle jerk over “le Reddit so incompetent”, but at the end of the day they are a (multi-)billion-dollar company and it’s willfully ignorant to assume there isn’t a single engineer at the company who knows how to measure string similarity between two comment trees (hint: import difflib in Python)
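To spell that hint out, a minimal sketch using only the standard library (the comment text is made up for illustration):

```python
# Minimal sketch of measuring string similarity between two comments with difflib.
import difflib

comment_a = "Turns out you could bend a coat hanger and fish it through a hole in the back"
comment_b = "Turns out you can bend a coat hanger and fish it through a hole in the back!"

ratio = difflib.SequenceMatcher(None, comment_a, comment_b).ratio()
print(f"similarity: {ratio:.2f}")   # close to 1.0 for a word-for-word repost
```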
This is only true for Steam keys sold on other platforms afaik