• Optional · 43 points · 5 months ago

    Turns out, spitting out words when you don’t know what anything means or what “means” means is bad, mmmmkay.

The study got journalists who were experts in the subject of each article to rate the quality of answers from the AI assistants.

    It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

    Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.

    Introduced factual errors

    Yeah, that’s… that’s bad. As in, not good. As in, it will never be good. With a lot of work and grinding it might be “okay enough” for some tasks someday. That’ll be another $200 billion, please.

    • @chud37@lemm.ee · 9 points · 5 months ago

      That’s the core problem, though, isn’t it? They are just predictive text machines that don’t understand what they’re saying, yet we treat them as if they were some amazing solution to all our problems.
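
      As a toy illustration of what “predictive text machine” means, here is a minimal sketch in Python with made-up bigram counts (nothing like a production model, but the principle is the same: pick a statistically likely continuation, with no notion of truth):

      ```python
      import random

      # Toy bigram "language model": a lookup table of next-word counts.
      # Real LLMs learn next-token statistics over billions of documents,
      # but they optimize the same objective: a plausible continuation.
      bigrams = {
          "the": {"news": 5, "study": 3, "cat": 1},
          "news": {"said": 4, "found": 2},
      }

      def next_word(word):
          candidates = bigrams.get(word, {"...": 1})
          return random.choices(list(candidates), weights=list(candidates.values()))[0]

      print(next_word("the"))  # e.g. "news" -- plausible, never fact-checked
      ```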

      • Optional · 3 points · 5 months ago

        Well, “we” aren’t, but there’s a hype machine in operation bigger than anything in history, because a few tech bros think they’re going to rule the world.

    • @devfuuu@lemmy.world · 7 points · edited · 5 months ago

      I’ll be here begging for a miserable 1 million to invest in some freaking trains and bicycle paths. Thanks.

      • @fine_sandy_bottom@discuss.tchncs.de · 7 points · 5 months ago

        I don’t necessarily dislike “AI” but I reserve the right to be derisive about inappropriate use, which seems to be pretty much every use.

        Using AI to find petroglyphs in Peru was cool. Reviewing medical scans is pretty great. Everything else is shit.

    • desktop_user [they/them] · -3 points · 5 months ago

      Alternatively: 49% had no significant issues and 81% had no factual errors. It’s not perfect, but it’s cheap, quick, and easy.
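
      Those complements follow directly from the quoted figures; a trivial check, with the percentages hard-coded from the study as quoted above:

      ```python
      significant_issues = 51  # % of AI answers with significant issues (study figure)
      introduced_errors = 19   # % of BBC-citing answers with introduced factual errors

      print(100 - significant_issues)  # 49 -> share without significant issues
      print(100 - introduced_errors)   # 81 -> share without introduced errors
      ```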

      • @Nalivai@lemmy.world · 4 points · 5 months ago

        It’s easy, it’s quick, and it’s free: pouring river water into your socks.
        Fortunately, there are other possible criteria.

      • @fine_sandy_bottom@discuss.tchncs.de · 2 points · 5 months ago

        If it doesn’t work, then quick, cheap, and easy is pointless.

        I’ll make you dinner every night for free, but one night a week it will make you ill. Maybe a little, maybe a lot.

    • @MDCCCLV@lemmy.ca · -3 points · 5 months ago

      Is it worse than the current system of editors writing shitty clickbait titles?

    • @Rivalarrival@lemmy.today · -5 points · 5 months ago

      It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

      How good are the human answers? I mean, I expect that an AI’s error rate is currently higher than that of an “expert” in their field.

      But I’d guess the AI is quite a bit better than, say, the average Republican.

      • Balder · 0 points · edited · 5 months ago

        I guess you don’t get the issue. You give the AI some text and ask it to summarize the key points, and it gives you wrong info in a percentage of those summaries.

        There’s no point in comparing this to a human, since this is usually done for automation, that is, to serve a lot of people or process a large quantity of articles. At best you can compare it to the automated summarizers that existed before LLMs, which might miss some of the information but won’t make up facts that aren’t in the article.
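
        To make the scale point concrete, a back-of-the-envelope sketch (the 19% rate is the study’s figure; the daily article volume is purely hypothetical):

        ```python
        articles_per_day = 1_000   # hypothetical automated pipeline volume
        error_rate = 0.19          # per-summary introduced-error rate (study figure)

        bad_per_day = articles_per_day * error_rate
        print(f"~{bad_per_day:.0f} summaries per day with introduced factual errors")
        ```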

        • @Rivalarrival@lemmy.today · 3 points · 5 months ago

          I’m more interested in the technology itself, rather than its current application.

          I feel like I am watching a toddler taking her first steps; wondering what she will eventually accomplish in her lifetime. But the loudest voices aren’t cheering her on: they’re sitting in their recliners, smugly claiming she’s useless. She can’t even participate in a marathon, let alone compete with actual athletes!

          Basically, the best AIs currently have college-level mastery of language, and the reasoning skills of children. They are already far more capable and productive than anti-vaxxers, or our current president.

          • Balder · 1 point · edited · 5 months ago

            It’s not that people simply decided to hate on AI. It was the sensationalist media hyping it up to the point of scaring people (“it’ll take all your jobs”), and companies shoving it down our throats by putting it in every product, even when it gets in the way of the functionality people actually want to use.

            Even my company “forces” us all to use X prompts every week as a sign of being “productive”. Literally every IT consultancy in my country has a ChatGPT wrapper they’re trying to sell, and each one thinks it’s different because of it. The result couldn’t have been different: when something gets too much exposure it also gets a lot of hate, especially when it’s forced on people.