Artificial intelligence in video games just took a leap forward with this video at CES 2024! What does the future look like for gamers? – jeuxvideo.com


Published on January 14, 2024 at 12:55 p.m.


Nvidia's artificial intelligence project, called ACE, was shown in a little more detail through a technical demonstration at CES 2024.

CES 2024 is an opportunity for everyone to showcase their technological capabilities, and artificial intelligence is no exception. Nvidia chose the show to run a technical demo of ACE (Avatar Cloud Engine), its AI designed to make NPCs in video games more realistic. The tool lets players chat more or less naturally, via a microphone, with characters that are not controlled by humans. This is hardly a surprise, given how quickly artificial intelligence is gaining popularity with the general public.

Nvidia's tool allows developers to create more realistic NPCs. During the demonstration, the audience had the opportunity to speak with Nova and a ramen restaurant chef. After a question is asked, it takes a few seconds to receive an answer. Each character can be configured to speak multiple languages, such as English or Spanish, and it is the developers who supply the data, usually fairly current, that is woven into the dialogue. NPCs can also convey different moods, which come through in their facial expressions and manner of speaking. Some slight errors do occur at times, but this is a breakthrough that heralds a major shift in video games to come.

A worrying advance?

During this JV Fast, the editorial team wondered how this technology will be used. Nvidia has already announced partnerships with several major studios, including Ubisoft, miHoYo (developer of Genshin Impact and Honkai: Star Rail), NetEase Games, Inworld and Chinese giant Tencent. ACE looks promising, but for now it seems unlikely to replace the expertise of real professionals.

Even though studios have had access to it for a few months now, adapting to this new technology looks set to be a real headache. The goal is to combine human expertise with AI and to train employees to use it more easily and naturally. Perhaps it is up to the programmers themselves to set limits on their own technologies, especially given that Nvidia's artificial intelligence can generate animations and lip sync on its own, based on the sentence being spoken. If you are wondering what limits should be placed on AI so that it does not take precedence over human expertise, Panthaa and Anagund offer a first answer in this JV Fast.