Bing’s new GPT-4 chat mode shows potential as a game guide writer for the Xbox, but it looks like it’s still a long way from matching a real human writer.
It’s currently unknown if the new Bing AI chat mode will come to Xbox Series X, but the software is now available in preview for those who have signed up for the waiting list. Xbox Wire editor Mike Nelson, who says he knows “a lot about video games,” has been testing it in ways that show both the potential and some of the problems an ever-learning algorithm can encounter when working with specific game knowledge.
As detailed in the full post, Nelson tried getting information out of Bing and put a series of queries to it. One particular example was asking the new GPT-4 AI chat mode to give a detailed recap of what happens in the first 20 hours of The Witcher 3: Wild Hunt. The answer cited 10 different publications, including our sister site PC Gamer, as well as various YouTube sources.
The information presented was accurate up until around the first half of CD Projekt Red’s epic, but the breadth of sources, including YouTube itself, shows just how wide Bing AI casts its net, if not exactly how closely it scrutinizes what it’s given.
Not all guides online are created equal, and information about ideal in-game strategies can vary greatly. Pooling answers from sources of varying reliability, then, doesn’t guarantee an accurate response.
What is the correct answer again?
(Image credit: Blizzard)
My concerns extend to the accuracy displayed by Bing AI’s GPT-4 chat mode on an Overwatch 2 question. The test prompt asks Bing to pick an Overwatch 2 character for the player. To this question, the AI answers that there are a total of 33 characters to choose from in-game, split across damage, tank, and support classes.
Unfortunately, at the time of writing, a total of 36 characters are actually playable in the game, so the information being pulled is outdated. Ramattra is the newest character, added in December for Season 2, which puts the AI roughly three months behind.
Across the 10 sources cited, GPT-4 seems to have trouble identifying the most recent information, giving vague answers that point to “characters” in general instead of offering viable options based on the current meta. Bing was able to produce something that sounded fine on the surface but failed to answer the question in any meaningful way. The AI did suggest Echo, a character I like, but it didn’t explain anything about her playstyle or quirks, or those of any other character.
This seems to be the biggest problem with asking Bing questions about games and expecting accuracy. Given that YouTube is repeatedly cited as an authoritative source of information, what’s to stop people from deliberately spreading misinformation on a subject and having the AI pick it up and parrot it back?
It also raises questions about how GPT-4 directly cites existing websites, and to what extent it adapts what the original material says. Writers may not receive the recognition they deserve, and the names and information attributed to their publications may not fully reflect their original intent.