• 0 Posts
  • 40 Comments
Joined 2 months ago
Cake day: July 28th, 2025

  • I don’t know, real-world data maybe? Your one, two, or even ten experiences are statistically insignificant. And of course it’s not a rare story: the people who talk about a product online are mostly the ones with a bad experience, complaining about it, which introduces a selection bias you have to account for. So you go by things like failure rates, which you can find online (a quick sketch at the end of this comment shows why small samples say so little).

    By the way, it’s almost never actually a fault of AMD or Nvidia themselves, but of the board partner that manufactures the card.

    Edit: Not that I care about Internet points, but downvoting without a rebuttal is… not very convincing.
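
    To make the sample-size point concrete, here is a minimal sketch of a 95% confidence interval for a failure rate. The Wilson score interval is one standard choice, and the counts below are invented purely for illustration:

    ```python
    import math

    def wilson_interval(failures: int, n: int, z: float = 1.96) -> tuple[float, float]:
        """95% Wilson score confidence interval for a failure rate."""
        p = failures / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return (max(0.0, center - half), min(1.0, center + half))

    # One dead card out of ten tells you almost nothing:
    print(wilson_interval(1, 10))        # roughly (0.02, 0.40) -- a 20x-wide range
    # Aggregated return stats over thousands of units pin it down:
    print(wilson_interval(150, 10_000))  # roughly (0.013, 0.018)
    ```

    One failure among ten cards is compatible with anything from roughly a 2% to a 40% failure rate; aggregated return statistics over thousands of units are what actually narrow it down.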

  • I agree with the part about unintended use: yes, an LLM is not, and should never act as, a therapist. However, concerning your example with search engines, they do catch the suicide keyword and put help resources before any search result; Google does it, and so does DDG (a toy version of that kind of keyword check is sketched at the end of this comment). I believe ChatGPT also surfaces such resources on the first mention, but as OpenAI themselves say, the safety features degrade with the length of the conversation.

    About this specific case I need to find out more, but other comments in this thread say that not only was the kid in therapy, suggesting the parents were not passive about it, but also that ChatGPT actively encouraged him to hide what he was going through. Considering what I managed to hide from my parents as a teenager, without any such tool available, I can only imagine how much harder it would be to notice the depth of what this kid was going through.

    In the end, I strongly believe the company should build in much stronger safety features, and if it is unable to do that properly, then the product should simply not be available to the public. People will misuse tools, especially a tool touted as AI when it is actually a glorified autocomplete.

    (Yes, I know AI is a much broader term that also encompasses LLMs, but the actual limitations of LLMs are neither well enough known by the public nor communicated clearly enough by the companies to end users.)
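
    Here is the toy version of that stateless keyword check. The keyword list, banner text, and result lists are invented for illustration; real engines use classifiers far beyond substring matching (988 is the real US Suicide & Crisis Lifeline):

    ```python
    # Toy keyword check: prepend crisis resources before any results.
    # Keywords and results are made up for the example.
    CRISIS_KEYWORDS = ("suicide", "kill myself", "end my life")

    CRISIS_BANNER = ("Help is available. In the US, call or text 988 "
                     "(Suicide & Crisis Lifeline).")

    def respond(query: str, results: list[str]) -> list[str]:
        """Return results, with help resources first if the query matches."""
        q = query.lower()
        if any(keyword in q for keyword in CRISIS_KEYWORDS):
            return [CRISIS_BANNER, *results]
        return results

    print(respond("best gpu 2025", ["result A", "result B"]))
    print(respond("I want to end my life", ["result A"]))
    ```

    Because a check like this runs fresh on every single query, a search engine catches the keyword every time. A long chat conversation carries no such guarantee, which fits OpenAI’s own admission that safety features degrade with conversation length.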