It’s obviously not “just” luck. We know LLMs learn a variety of semantic models with varying degrees of correctness. It’s just that no individual (inner) model is really that great, and most of them are bad. LLMs aren’t reliable or predictable enough to constitute a source of information humans can trust, but they’re not pure gibberish generators either.