• GetOffMyLan@programming.dev · 3 hours ago

    One of LLMs’ main strengths over traditional text-analysis tools is their ability to “understand” context.

    They are bad at generating factual responses. They are amazing at analysing text.

    • knightly the Sneptaur@pawb.social · 2 hours ago

      LLMs neither understand nor analyze text. They are statistical models of the text they were trained on. A map of language.

      And, like any map, they should not be confused for the territory they represent.

      If you admit that they have issues with facts, why would you assume that the randomly generated facts their “analysis” produces must be accurate?

      • GetOffMyLan@programming.dev · 2 hours ago (edited)

        I mean they literally do analyze text, and they’re great at it. Give one some text and it will analyze it really well. I do it with code at work all the time.

        Because those are two completely different tasks. Asking them to recall information from their training is a very bad use; asking them to analyze information passed into them is what they are great at.

        Give it a sample of code and it will very accurately analyze and explain it. Ask it to generate code and the results are wildly varied in accuracy.

        I’m not assuming anything; you can literally go and use one right now and see for yourself.
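
        A minimal sketch of the distinction above, under my own assumptions: `build_recall_prompt` and `build_analysis_prompt` are hypothetical helpers (not any real library’s API), and the actual chat-completion call is left out entirely. The point is only how the two prompts differ in where the facts come from.

        ```python
        # Hypothetical sketch: recall-style vs analysis-style prompting.
        # No real LLM API is called here; only the prompts are built.

        def build_recall_prompt(question: str) -> str:
            # Leans entirely on whatever the model memorised during
            # training -- the unreliable "recall facts" mode.
            return question

        def build_analysis_prompt(source_text: str, question: str) -> str:
            # Grounds the model in text supplied at inference time --
            # the mode the comment above calls analysis.
            return (
                "Answer using ONLY the text between the markers.\n\n"
                "--- TEXT ---\n"
                f"{source_text}\n"
                "--- END TEXT ---\n\n"
                f"Question: {question}"
            )

        snippet = "def add(a, b):\n    return a + b"
        prompt = build_analysis_prompt(snippet, "What does this function do?")
        # prompt now carries the code itself, so the answer is grounded
        # in the provided input rather than in memorised training data.
        ```

        Whatever model you send `prompt` to, the material it has to work with is right there in the context window, which is why this mode is so much more dependable than open-ended recall.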

        • apotheotic (she/her)@beehaw.org · 3 minutes ago

          The person you’re replying to is correct, though. They do not understand, and they do not analyse. They generate (roughly) the most statistically likely answer to your prompt, which may very well end up being text representing an accurate analysis. They might even be incredibly reliable at doing so. But this person is just pushing back against the idea of these models actually understanding or analysing. It’s slightly pedantic, sure, but it’s an important distinction to draw in the world of machine intelligence.