How are they getting my metadata?
I would agree that scientific practice is far from the ideal described in the post, but the post doesn’t claim that researchers aren’t susceptible to those biases. That is why there are processes in place like peer review.
Have you even read the article?
I mean, your suggestive question at least helps me understand your mindset a bit better. If I saw the situation the way you characterize it, I would probably sound the same.
I can only encourage you to try to see through the business bullshit that is undoubtedly there and recognize that there is an actual underlying technological breakthrough with the chance of redefining how we interact with machines.
I’m running a local LLM that I use daily at work to help me brainstorm, and being able to run perfect speech-to-text in real time on my laptop was simply not possible a few years ago.
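As a rough illustration of how low the barrier has gotten (assuming the open-source openai-whisper package, which may not be what’s actually in use here), local transcription on a laptop can be just a few lines; this sketch transcribes a recorded file rather than a live stream:

```python
# Minimal local speech-to-text sketch using the openai-whisper package.
# Illustrative assumption only: file-based transcription, not live streaming.
import whisper

# Smaller checkpoints like "base" run comfortably on a laptop.
model = whisper.load_model("base")

# Transcribe a recorded audio file (hypothetical filename) and print the text.
result = model.transcribe("recording.wav")
print(result["text"])
```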
The AI hate on Lemmy never fails to amaze me
The world is getting better in the long run by almost every metric, don’t get sucked into the doomer mentality.
Still a fucked up ruling, though.
You keep posting that, but it is wrong. Setting aside that disabling the installation of unsigned extensions is not censorship, you can install signed extensions from a file in every version of Firefox, not only the Developer Edition.
Stupid artificial outrage
Weird responses here so far. I’ll try to actually answer the question.
I’ve been using Copilot at work for 9 months now, and it’s crazy how much it accelerates writing code. I write Class C code in C++ and Rust, and it has become a staple tool like auto-formatting. That being said, it cannot really do more abstract stuff like architecture decisions.
Just try it for some time and see if it fits your use case. I’m hoping the local code models will catch up soon so I can get away from Microsoft, but until then, Copilot it is.
I’m not convinced by the claim that “a human can say ‘that’s a little outside my area of expertise’, but an LLM cannot.” I’m sure there are plenty of examples in the training data that contain qualified answers and expressions of uncertainty, so why would the model not be able to generate that output? I don’t see why that specifically would require “understanding”. I would suspect that better human reinforcement would make such answers possible.
What a cold thing to say about humans.
Spoken like a true keyboard warrior
I don’t know. My washing machine beeps three times at increasing intervals, so it isn’t that intrusive. The display shows me unique error codes that I can look up when something happens. And I can set the machine to finish in a set number of hours, so it will start just in time to be done when I’m back. All without WiFi.
TL;DR?
Always a great sign if posts asking for evidence are getting downvoted.
Having built commercial drones, I can say it’s mainly two things: obstacles and ground effect.
Women amirite
So what is intelligence in your general, all-purpose understanding?
Are newborns intelligent? How about dogs? Ants?
You may argue that current AI is still behind the average human adult and therefore not intelligent, but the academic view is a bit more nuanced.