• Veraticus@lib.lgbt (OP) · 1 year ago

    They don’t generate facts, as the article says. They choose the next most likely word. Everything is confidently plausible bullshit. That some of it is also true is just luck.
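
    For anyone curious what “choose the next most likely word” looks like concretely, here is a minimal sketch of greedy next-token selection. GPT-2 via Hugging Face transformers is just an illustrative stand-in (neither commenter names a model); any causal LM works the same way:

    ```python
    # Greedy next-token selection: the model scores every token in its
    # vocabulary and we pick the single most likely one.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The capital of France is"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(input_ids).logits       # shape: (1, seq_len, vocab_size)

    next_token_logits = logits[0, -1]          # scores for the next token only
    next_token_id = torch.argmax(next_token_logits)

    # Prints something like " Paris": plausible, and in this case also true,
    # but the training objective only rewarded plausibility.
    print(tokenizer.decode(next_token_id))
    ```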

    • Kogasa@programming.dev · 1 year ago (edited)

      It’s obviously not “just” luck. We know LLMs learn a variety of semantic models of varying degrees of correctness; it’s just that no individual (inner) model is really that great, and most of them are bad. LLMs aren’t reliable or predictable enough to be a trustworthy source of information, but they’re not pure gibberish generators.