If they used generative AI to actually generate dialogue on the spot, it could be pretty dope, assuming it's trained properly so it stays consistent with the game's lore and you could actually have a productive conversation with the NPCs without being constantly gaslit.
But that's not how any major dev has planned to use it. They're using it to cheap out on art assets and on writing the pre-scripted shit.
As it stands, even the few independent games attempting what I described in the first paragraph are pretty garbage, because they can't stay consistent enough with their own internal logic to be actually playable.
The problem is that hallucination is part of what makes open-ended conversation with LLMs work at all, but it's destructive in a game environment. An NPC tells you something false and the player will assume they just couldn't find the secret, or that the game is bugged, rather than that the AI just made some shit up.
No amount of training removes hallucinating, because that's part of the generation process. All it does is take your question and reverse engineer what an answer to that looks like based on the words it knows and its data set. It doesn't have any "knowledge". Not to mention that the training data would have to be different for each NPC to represent different knowledge sets, backgrounds, upbringings, ideologies, experiences and cultures. And then there's the issue of having to provide it broad background knowledge of the setting without it inventing new stuff or revealing hidden lore.
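Just to make that last point concrete, here's a minimal sketch of the cheaper prompt-level version of per-NPC knowledge (everything here is made up for illustration: the NPCProfile structure, the names, all of it). Note this only narrows what the model samples from; it doesn't actually stop hallucination:

```python
from dataclasses import dataclass, field

@dataclass
class NPCProfile:
    name: str
    background: str
    knows: list[str]  # lore facts this NPC is allowed to state
    forbidden: list[str] = field(default_factory=list)  # hidden lore it must never reveal

def build_system_prompt(npc: NPCProfile) -> str:
    """Assemble a per-NPC constraint prompt fed to the model before
    the player's question. This biases the output toward the listed
    facts, but the model can still confabulate outside them."""
    known = "\n".join(f"- {fact}" for fact in npc.knows)
    banned = "\n".join(f"- {fact}" for fact in npc.forbidden)
    return (
        f"You are {npc.name}. Background: {npc.background}\n"
        f"You know ONLY these facts:\n{known}\n"
        f"Never mention or hint at:\n{banned}\n"
        "If asked about anything outside your facts, say you don't know."
    )

# Hypothetical example NPC
blacksmith = NPCProfile(
    name="Orin the blacksmith",
    background="Gruff, lifelong resident of the valley, never left it.",
    knows=[
        "The mine closed after the collapse ten winters ago.",
        "The mayor buys every sword Orin forges.",
    ],
    forbidden=["The collapse was caused by the cult digging below."],
)
print(build_system_prompt(blacksmith))
```

Even with a setup like this, nothing guarantees the NPC won't invent a third fact that sounds plausible, which is exactly the failure mode I'm describing.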
That said, I wouldn’t be surprised if we see this attempted, but I expect it to go horribly wrong.