• 2 Posts
  • 185 Comments
Joined 1 year ago
Cake day: June 30th, 2023


  • The same people who didn't understand that Google's ranking algorithm promotes sites regardless of the accuracy of their content, so they would trust whatever shows up on the first page.

    If people don't understand the tools they are using and don't double-check information that comes from a single source, I think it's kinda on them. I have a dietician friend, and I usually get back to him after doing my "Google research" for my diets… so much misinformation, even without an AI overview. Search engines are just best-effort sources of information. Anyone using Google for anything of actual importance is using the wrong tool; it isn't a scholarly or research search engine.


  • It really depends on the type of information you are looking for. Anyone who understands how LLMs work will know when they're likely to get a good overview.

    I usually see the results as quick summaries from an untrusted source. Even if they aren’t exact, they can help me get perspective. Then I know what information to verify if something relevant was pointed out in the summary.

    Today I searched something like “Are owls endangered?”. I knew I was about to get a great overview because it’s a simple question. After getting the summary, I just went into some pages and confirmed what the summary said. The summary helped me know what to look for even if I didn’t trust it.

    It has improved my search experience… But I do understand that people would prefer it to be 100% accurate because it is a search engine. If you refuse to tolerate inaccurate results or you feel your search experience is worse, you can just disable it. Nobody is forcing you to keep it.


  • You can have human-level conversations with a tool. I don’t get your point. Just because it doesn’t have full human intelligence doesn’t mean it isn’t good at conversation. It is better at conversation than most humans. It is obviously not smarter than a human, but it is more eloquent and has more general knowledge than the average human.

    We are the first humans who can have a human-level conversation with something that is not human. What do you think human conversations look like? They are not very deep in general.

    Pretty funny you think you convinced me it is a tool when I just showed you like 8 different ways I use it as a tool.

    I literally started this thread by saying that even if it isn't perfect, it is pretty good and a crazy achievement. You just keep saying it is worthless shit, but that's not true. Just because the tool isn't perfect doesn't mean it is worthless or that it isn't an achievement.

    But whatever man… You just want to be right, so take your imaginary trophy and walk away.


    You just inflated my initial statement by assuming I meant that ChatGPT is smarter than humans. I said that ChatGPT is more advanced than the average human at linguistics, and I stand by it. Show me where I said "ChatGPT is smarter than a human" or "this is real simulated human intelligence". You just wanted to be angry at someone, so you made up your own narrative in your mind.

    I even said that it doesn’t need to be real intelligence in order to be capable of having a conversation.

    You’ll probably keep creating your imaginary narrative, so there’s no point in arguing with you.

    Goodbye.


  • I’m already using Copilot every single day. I love it. It helps me save so much time writing boilerplate code that can be easily guessed by the model.

    It even helps me understand tools faster than the documentation. I just type a comment and it autocompletes a piece of code that is probably wrong, but probably has the APIs that I need to learn about. So I just go, learn the specific APIs, fix the details of the code and move on.
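
    To give an idea of the workflow, here is a made-up illustration (not actual Copilot output): I write the comment, accept whatever gets suggested, and then go verify the APIs it picked.

        # Made-up illustration of comment-driven completion, not real Copilot output.
        from datetime import datetime, date

        # keep only the log lines whose ISO timestamp is from today
        # (I wrote the comment above; the function below is the kind of thing
        # the completion suggests, usually close, sometimes with details to fix)
        def entries_from_today(lines):
            today = []
            for line in lines:
                ts = datetime.fromisoformat(line.split(" ", 1)[0])
                if ts.date() == date.today():
                    today.append(line)
            return today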

    I use ChatGPT to help me improve my private blog posts because I'm not a native English speaker; it makes the text feel more fluent.

    We trained a model on our company's documentation so it automatically references the relevant docs when someone asks it a question.
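
    The exact setup is internal, but the shape is simple: retrieve the most relevant doc, then hand it to the model as context so the answer can point at a real source. A minimal sketch with made-up doc names, using plain TF-IDF instead of the embedding model you'd actually use:

        # Minimal retrieval sketch with hypothetical doc names, not our real pipeline.
        # Find the most relevant internal doc for a question, then pass it to the
        # model as context so the answer can cite an actual source.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        docs = {
            "deploy.md": "How to deploy a service to staging and production ...",
            "oncall.md": "Escalation policy, paging rotation, severity levels ...",
        }

        def most_relevant_doc(question):
            names = list(docs)
            texts = [docs[name] for name in names]
            matrix = TfidfVectorizer().fit_transform(texts + [question])
            scores = cosine_similarity(matrix[len(names)], matrix[:len(names)]).ravel()
            return names[scores.argmax()]

        # The retrieved doc goes into the prompt, so the answer can say
        # "according to deploy.md ..." instead of inventing a source.
        print(most_relevant_doc("How do I push my service to staging?"))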

    I'm using Jira's AI to automatically generate queries and find what I want as fast as possible. I used to hate searching for stuff in Jira because I never remembered the DSL.
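
    The value is entirely in getting the JQL written for me; running it afterwards is ordinary REST. Something like this, where the project key, instance URL, and credentials are all placeholders:

        # Hypothetical sketch: what "my open bugs from the last week" turns into.
        import requests

        jql = (
            "project = SHOP AND issuetype = Bug AND assignee = currentUser() "
            "AND status != Done AND updated >= -7d ORDER BY updated DESC"
        )

        resp = requests.get(
            "https://example.atlassian.net/rest/api/2/search",  # made-up instance
            params={"jql": jql, "maxResults": 20},
            auth=("me@example.com", "api-token"),                # placeholder creds
        )
        for issue in resp.json().get("issues", []):
            print(issue["key"], issue["fields"]["summary"])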

    I have GPT wrapped in a command-line tool because I constantly forget commands, and it helps me remember them without having to read the help pages or open Google.
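
    Mine is just a tiny wrapper script, roughly this shape (the model name and prompt wording are placeholders, not a recommendation):

        # Tiny sketch of the wrapper, e.g. `howto "find files larger than 1GB"`
        # prints a candidate shell command.
        import sys
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def howto(task: str) -> str:
            reply = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                messages=[
                    {"role": "system",
                     "content": "Answer with a single shell command and nothing else."},
                    {"role": "user", "content": task},
                ],
            )
            return reply.choices[0].message.content.strip()

        if __name__ == "__main__":
            print(howto(" ".join(sys.argv[1:])))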

    We have pipelines that catch exceptions that would usually be confusing for developers, and GPT automatically generates an explanation of the error right in the logs.
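
    The rough shape of that step, with the actual model call stubbed out as a hypothetical helper (the point is just where the explanation ends up):

        # Rough sketch of the pipeline step; explain_with_llm is a stand-in for
        # the real model call.
        import logging
        import traceback

        log = logging.getLogger("worker")

        def explain_with_llm(trace: str) -> str:
            # Placeholder: in the real pipeline this returns a short plain-English
            # explanation of the stack trace.
            return "explanation generated here"

        def run_step(step):
            try:
                step()
            except Exception:
                trace = traceback.format_exc()
                log.error(trace)                                   # raw traceback
                log.error("AI hint: %s", explain_with_llm(trace))  # generated explanation
                raise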

    I literally ask ChatGPT questions about other areas of technology that I don't understand. My questions aren't advanced, so I usually get the right answers and can keep reading about the topics. ChatGPT is literally teaching me how to do front ends, something I hated my whole career but that now feels like a breeze.

    Maybe you should start actually figuring out how to use the tool instead of complaining about it in this echo chamber.