• Cyberspark@sh.itjust.works
    23 hours ago

    The problem is that hallucination is part of how LLMs produce conversation in the first place, but it's destructive in a game environment. If an NPC tells you something false, the player will assume they just couldn't find the secret, or that the game is bugged, rather than that an AI just made some shit up.

    No amount of training removes hallucination, because it's part of the generation process. All the model does is take your question and reverse-engineer what an answer to it looks like from the words in its training data. It doesn't have any "knowledge". Not to mention that the training data would have to be different for each NPC to represent different knowledge sets, backgrounds, upbringings, ideologies, experiences and cultures. And then there's the issue of giving it broad background knowledge of the setting without it inventing new lore or revealing hidden lore.
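
    To make the per-NPC knowledge problem concrete, here's a rough Python sketch of the prompt-side workaround a studio would probably reach for instead of separate training runs. The `NPC` class and `build_npc_prompt` are made up for illustration; the point is that the scoping is purely textual, so nothing in it actually stops the model from inventing facts or paraphrasing its way toward the hidden lore.

    ```python
    # Hypothetical sketch of scoping an LLM-driven NPC via its prompt rather than
    # per-NPC training. The scoping is only text: the model still generates the
    # most plausible-sounding continuation, so it can hallucinate beyond known_facts.

    from dataclasses import dataclass, field


    @dataclass
    class NPC:
        name: str
        background: str                                        # upbringing, ideology, culture
        known_facts: list[str] = field(default_factory=list)   # lore this NPC may reference
        hidden_lore: list[str] = field(default_factory=list)   # lore the NPC must never reveal


    def build_npc_prompt(npc: NPC, player_question: str) -> str:
        """Assemble a system-style prompt constraining the model to one NPC's knowledge.

        Note the dilemma with hidden_lore: include it in the prompt and the model can
        leak it; leave it out (as here) and the NPC can't meaningfully react to it.
        """
        facts = "\n".join(f"- {fact}" for fact in npc.known_facts)
        return (
            f"You are {npc.name}. Background: {npc.background}\n"
            f"You only know the following about the world:\n{facts}\n"
            "If asked about anything else, say you don't know. Never invent new lore.\n\n"
            f"Player: {player_question}\n{npc.name}:"
        )


    if __name__ == "__main__":
        blacksmith = NPC(
            name="Harga the Blacksmith",
            background="Gruff, raised in the mountain holds, distrusts mages.",
            known_facts=["The old mine flooded ten years ago.",
                         "Ore prices doubled last season."],
            hidden_lore=["The flooded mine hides the quest-critical rune forge."],
        )
        print(build_npc_prompt(blacksmith, "Is there anything valuable in the old mine?"))
    ```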

    That said, I wouldn’t be surprised if we see this attempted, but I expect it to go horribly wrong.