NVIDIA’s AI NPCs are a nightmare
The Rise of the AI NPC
While the use of generative AI in games seems almost inevitable – the medium has always experimented with new ways to make enemies and NPCs seem smarter and more lifelike – watching multiple NVIDIA ACE demos in a row left me feeling queasy. This isn't just slightly more intelligent enemy AI: ACE can conjure entire conversations out of thin air, synthesize voices and attempt to give NPCs a sense of personality. All of it runs locally on your PC, powered by NVIDIA's RTX GPUs. That might sound cool on paper, but I hated almost every second of seeing it in action.
TiGames' ZooPunk is a prime example: it relies on NVIDIA ACE to generate dialogue, a synthetic voice and lip sync for an NPC named Buck. But as you can see in the video above, Buck sounds like a stilted robot with a faint country accent, and whatever relationship he's supposed to have with the main character never comes across.
I think my deep-seated dislike of NVIDIA's ACE-powered AI comes down to this: there's just nothing compelling about it. No joy, no warmth, no humanity. Every ACE character feels like a developer cutting corners in the worst possible way, like disdain for the audience expressed through a boring NPC. I'd much rather read a few lines of on-screen text; at least then I wouldn't have to listen to creepy robot voices.
During NVIDIA's Editor's Day at CES, a gathering for media to learn more about the new RTX 5000-series GPUs and the technology behind them, I was also disappointed by a demo of PUBG's AI ally. Its responses sounded like a pre-recorded phone tree, and it couldn't find a weapon when the player asked for one, a potentially deadly mistake on a crowded map. At one point, the companion spent about 15 seconds attacking enemies while the demo player shouted at it to get into a car. What good is an AI helper if it plays like a newbie?
Browse NVIDIA's YouTube channel and you'll find more disappointing examples of ACE, such as the simplistic speaking animations in the MMO World of Jade Dynasty (above) and Alien: Rogue Incursion. I'm sure many developers would like to avoid the work of building proper lip-sync technology or licensing someone else's, but relying on AI just looks terrible in these games.
To be clear, I don't think NVIDIA's AI efforts are all pointless. I've enjoyed watching DLSS improve over the years, and I'm excited to see how DLSS 4's multi-frame generation could boost 4K and ray-tracing performance in demanding games. The company's neural shader technology also seems compelling, particularly its ability to give materials like silk a realistic sheen or reproduce the slight translucency of skin. These aren't huge visual leaps, but they could help deepen the sense of immersion.
Now, I'm sure some AI boosters will say the technology will only get better from here and could someday match the quality of human ingenuity. Perhaps. But personally, I'm tired of being sold AI fantasies when we know the key to great writing and performances is giving human talent the time and resources to hone their craft. And on some level, I think I'll always feel like director Hayao Miyazaki, who described an early example of an AI-generated CG creature as "an insult to life itself."
AI, like any new technology, is a tool that can be used in many ways. For things like graphics and gameplay (think the intelligent enemies in FEAR and The Last of Us), it makes sense. But when it comes to conversing with NPCs, writing their dialogue and crafting their performances, I value human input above all else. Replacing that with lifeless AI doesn't feel like progress in any way.