• Kogasa@programming.dev · 1 year ago

    It’s obviously not “just” luck. We know LLMs learn a variety of semantic models of varying degrees of correctness. It’s just that no individual (inner) model is really that great, and most of them are bad. LLMs aren’t reliable or predictable (enough) to constitute a human-trustable source of information, but they’re not pure gibberish generators.