• Skyrmir@lemmy.world · 1 day ago

    It’s the difference between an algorithm and actually thinking about a problem. An LLM isn’t trying to solve the problem; it’s generating the most likely sequence of words in response. An LLM will never be able to make that leap to actually thinking.

    In the meantime, it can save a lot of typing by spitting out boilerplate code blocks. Not that typing is in any way the biggest part of the job.

    • PeriodicallyPedantic@lemmy.ca · 10 hours ago

      Boilerplate is almost always better solved with snippets.

      Imagine if trillions of dollars were spent building thorough libraries of high-quality code snippet templates instead of a word-salad machine.
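      For example, a minimal sketch of snippet-style expansion using only Python’s standard library (the template text and placeholder names here are made up for illustration):

      ```python
      # A "snippet": fixed boilerplate with named placeholders,
      # expanded with string.Template from the standard library.
      from string import Template

      DATACLASS_SNIPPET = Template(
          "from dataclasses import dataclass\n"
          "\n"
          "@dataclass\n"
          "class $name:\n"
          "    $field: $type\n"
      )

      # Expanding the snippet fills in the placeholders deterministically,
      # with no model involved.
      print(DATACLASS_SNIPPET.substitute(name="Point", field="x", type="float"))
      ```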

    • Spice Hoarder@lemmy.zip (OP) · 1 day ago

      While I wholeheartedly agree with your statement, I think the reason it spits out thousands of lines of code instead of the obvious answer is poor training data and unqualified individuals assuming that more code means better code.

      • Hawk@lemmy.dbzer0.com · 24 hours ago

        Doesn’t surprise me. The internet is filled with outdated code and personal blogs detailing stuff you should never put in production code.

        AI is just going to regurgitate those instead of the actual optimal answer, so unless you steer it in the right direction, it spits out whatever it sees most in its training data.

      • village604 · 1 day ago (edited)

        Bad prompt formation is also an issue. You can’t just ask a one-sentence question and get good results. For best results you need to be very specific and get it to prompt you for more details if needed.

        But it’s very much ‘garbage in, garbage out’.
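        As an illustration, here’s a hypothetical sketch of a vague prompt versus a specific one using the OpenAI Python client; the model name and prompt text are my own assumptions, not anything from this thread:

        ```python
        # Hypothetical contrast between a one-sentence prompt and a
        # specific one, using the OpenAI Python client (pip install openai).
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        vague = "Write me a parser."  # likely to get generic, bloated output

        specific = (
            "Write a Python 3.11 function that parses RFC 3339 timestamps into "
            "datetime objects. Return None on invalid input, include type hints "
            "and a docstring, and ask me for clarification if any requirement "
            "is ambiguous."
        )

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": specific}],
        )
        print(response.choices[0].message.content)
        ```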