• vala@lemmy.world · 20 hours ago

    Yesterday I asked an LLM “how much energy is stored in a grand piano?” It responded that there is no energy stored in a grand piano because it doesn’t have a battery.

    Any reasoning human would have understood that question to be referring to the tension in the strings.

    Another example is asking “does lime cause kidney stones?”. It didn’t assume I meant lime the mineral and went with lime the citrus fruit instead.

    Once again a reasoning human would assume the question is about the mineral.

    Ask these questions again in a slightly different way and you might get a correct answer, but it won’t be because the LLM was thinking.

    • xthexder@l.sw0.com · 18 hours ago

      I’m not sure how you arrived at lime the mineral being the more likely reading than lime the fruit. I’d expect that someone asking about kidney stones would also be asking about foods that are commonly consumed.

      This kind of just goes to show there are multiple ways something can be interpreted. Maybe a smart human would ask for clarification, but today’s AIs will just happily spit out the first answer that comes up. LLMs are extremely “good” at making up answers to leading questions, even when those answers are completely false.

      • Knock_Knock_Lemmy_In@lemmy.world · 3 hours ago

        A well-trained model should consider both kinds of lime. The failure is likely down to temperature and other model settings, not a measure of intelligence.
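
        For what it’s worth, here’s a toy sketch (plain Python with invented numbers, not any real model’s API or actual probabilities) of how sampling temperature reshapes a next-token distribution between the two readings of “lime”:

        ```python
        import math

        # Toy unnormalized scores ("logits") a model might assign to two
        # readings of "lime" in a kidney-stone question -- values are
        # invented purely for illustration.
        logits = {"citrus fruit": 2.0, "calcium mineral": 1.0}

        def token_probs(logits, temperature):
            # Softmax with temperature: low T sharpens the distribution
            # toward the top choice, high T flattens it toward a coin flip.
            scaled = {k: v / temperature for k, v in logits.items()}
            z = sum(math.exp(s) for s in scaled.values())
            return {k: math.exp(s) / z for k, s in scaled.items()}

        for t in (0.2, 1.0, 2.0):
            print(t, token_probs(logits, t))
        # T=0.2 -> fruit ~99%; T=1.0 -> ~73%; T=2.0 -> ~62%. At higher
        # temperatures, reruns can land on either interpretation.
        ```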

      • JohnEdwa@sopuli.xyz · 4 hours ago

        Making up answers is kinda their entire purpose. LLMs are fundamentally just text generation algorithms: they are designed to produce text that looks like it could have been written by a human. They are amazing at that, especially once you take into account how many paragraphs of instructions you can give them, which they tend to follow rather successfully.

        The one thing they can’t do is verify whether what they’re saying is true, as it’s all just slapping words together using probabilities. If they could, they would stop being LLMs and start being AGIs.
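
        As a loose illustration of “slapping words together using probabilities”, here’s a toy bigram chain (made-up table, nothing like a real transformer): the loop just picks each next word from a probability table, and nothing in it ever checks whether the output is true:

        ```python
        import random

        # Toy next-word probability table. A real LLM conditions on the
        # whole context with a neural network; these words and weights
        # are invented for illustration.
        table = {
            "the":     [("piano", 0.5), ("battery", 0.5)],
            "piano":   [("stores", 1.0)],
            "battery": [("stores", 1.0)],
            "stores":  [("energy", 1.0)],
        }

        def generate(word, steps=5):
            out = [word]
            for _ in range(steps):
                options = table.get(out[-1])
                if not options:
                    break
                words, weights = zip(*options)
                out.append(random.choices(words, weights)[0])
            return " ".join(out)

        print(generate("the"))
        # Prints fluent-looking strings like "the battery stores energy" --
        # the fluency comes from the table; truth is never consulted.
        ```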

    • postmateDumbass@lemmy.world · 19 hours ago

      Honestly, I thought about the chemical energy in the materials the piano is made of, and how much energy burning it would release.

      • xthexder@l.sw0.com · 18 hours ago

        The tension in the strings would actually store a pretty minuscule amount of energy too. There’s very little stretch in a piano wire: the force may be high, but the potential energy, i.e. the work done to tension the wire (by hand, with a wrench), is low.

        Burning a piece of wood, by comparison, would release orders of magnitude more energy.
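
        To put rough numbers on that (back-of-the-envelope, assuming ~230 strings at ~700 N tension with about 5 mm of stretch each, and ~200 kg of burnable wood at ~16 MJ/kg):

        ```latex
        E_{\text{one string}} \approx \tfrac{1}{2} F \,\Delta x \approx \tfrac{1}{2}(700\ \mathrm{N})(5 \times 10^{-3}\ \mathrm{m}) \approx 1.8\ \mathrm{J}
        E_{\text{all strings}} \approx 230 \times 1.8\ \mathrm{J} \approx 4 \times 10^{2}\ \mathrm{J}
        E_{\text{combustion}} \approx (200\ \mathrm{kg})(16\ \mathrm{MJ/kg}) \approx 3 \times 10^{9}\ \mathrm{J}
        ```

        On those assumptions the chemical energy is about seven orders of magnitude larger than the elastic energy in the strings.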

    • antonim@lemmy.dbzer0.com · 20 hours ago

      But 90% of “reasoning humans” would answer just the same. Your questions are based on some non-trivial knowledge of physics, chemistry and medicine that most people do not possess.