Microsoft debuts generative AI that can create video game scenes

Like most generative AI models, it creates something after being prompted — in this case a sequence of visuals after being shown a game scene created by a human.
So it's the same as that neural Doom and Minecraft? It's interesting from a research point of view, but not from a practical one, cause you're gonna waste a lot of performance on running the AI that emulates a game engine instead of running the engine directly.
 
Know what's better than spending used car prices on a GPU to hallucinate frames? Big corpo spending used military arms money to hallucinate entire "games"!
 
Meh, most frames out of your GPU will be AI-generated soon anyway.
 
I don't understand the use of this. We already have the game, and it's not like you can generate other games based off the data from a single game. This is just to get a headline; otherwise they'd be doing shit that isn't a complete waste of time.
 
Paging Null. They also trained Muse to combine Zarya with Roadhog.
An artificial intelligence model from Microsoft can recreate realistic video game footage that the company says could help designers make games, but experts are unconvinced that the tool will be useful for most game developers.

Neural networks that can produce coherent and accurate footage from video games are not new. A recent Google-created AI generated a fully playable version of the classic computer game Doom without access to the underlying game engine. The original Doom, however, was released in 1993; more modern games are far more complex, with sophisticated physics and computationally intensive graphics, which have proved trickier for AIs to faithfully recreate.

Now, Katja Hofmann at Microsoft Research and her colleagues have developed an AI model called Muse, which can recreate full sequences of the multiplayer online battle game Bleeding Edge. These sequences appear to obey the game’s underlying physics and keep players and in-game objects consistent over time, which suggests the model has developed a deep understanding of the game, says Hofmann.

Muse is trained on seven years of human gameplay data, including both controller and video footage, provided by Bleeding Edge’s Microsoft-owned developer, Ninja Theory. It works similarly to large language models like ChatGPT: when given an input, in the form of a video game frame and its associated controller actions, it is tasked with predicting the gameplay that might come next. “It’s really quite mind-boggling, even to me now, that purely from training models to predict what’s going to appear next… it learns a sophisticated, deep understanding of this complex 3D environment,” says Hofmann.
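
To make that objective concrete, here is a minimal sketch in PyTorch of next-frame prediction conditioned on controller input. Everything in it (the NextFrameModel class, the dimensions, the transformer backbone) is an illustrative assumption rather than Muse's actual architecture, which the article does not detail.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a tiny next-frame predictor in the spirit of
# what the article describes. Architecture, sizes, and names are all
# assumptions, not Muse's real design.

class NextFrameModel(nn.Module):
    def __init__(self, frame_dim=512, action_dim=16, hidden=256):
        super().__init__()
        self.embed = nn.Linear(frame_dim + action_dim, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(hidden, frame_dim)  # predicts the next frame's encoding

    def forward(self, frames, actions):
        # frames:  (batch, time, frame_dim)  encoded video frames
        # actions: (batch, time, action_dim) controller state per frame
        x = self.embed(torch.cat([frames, actions], dim=-1))
        causal = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        return self.head(self.backbone(x, mask=causal))

model = NextFrameModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

frames = torch.randn(2, 32, 512)   # placeholder clip of encoded gameplay
actions = torch.randn(2, 32, 16)   # placeholder controller trace

pred = model(frames, actions)
# Position t is trained to predict frame t+1: the "what comes next" objective.
loss = nn.functional.mse_loss(pred[:, :-1], frames[:, 1:])
loss.backward()
opt.step()
```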

To understand how people might use an AI tool like Muse, the team also surveyed game developers to learn what features they would find useful. As a result, the researchers added the capability to iteratively adjust to changes made on the fly, such as a player’s character changing or new objects entering a scene. This could be useful for coming up with new ideas and trying out what-if scenarios for developers, says Hofmann.
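
The "what-if" workflow the survey motivated might then look like the following, continuing the sketch above: hand-edit the latest frame, then let the model roll the scene forward under a planned controller trace. Again, the rollout function and everything in it is a hypothetical illustration, not Microsoft's API.

```python
@torch.no_grad()
def rollout(model, context_frames, context_actions, planned_actions, steps=60):
    # Autoregressive what-if rollout: start from real (possibly hand-edited)
    # frames, then feed each predicted frame back in as new context.
    frames, actions = context_frames, context_actions
    for step in range(steps):
        next_frame = model(frames, actions)[:, -1:]  # prediction for t+1
        frames = torch.cat([frames, next_frame], dim=1)
        actions = torch.cat([actions, planned_actions[:, step:step + 1]], dim=1)
    return frames

# A developer's what-if: perturb the latest frame (standing in for pasting
# a new object into the scene), then watch how the model continues it.
edited = frames.clone()
edited[:, -1] += 0.1 * torch.randn_like(edited[:, -1])
future = rollout(model, edited, actions, planned_actions=torch.randn(2, 60, 16))
```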

But Muse is still limited to generating sequences within the bounds of the original Bleeding Edge game — it can’t come up with new concepts or designs. And it is unclear if this is an inherent limitation of the model, or something that could be overcome with more training data from other games, says Mike Cook at King’s College London. “This is a long, long way away from the idea that AI systems can design games on their own.”

While the ability to generate consistent gameplay sequences is impressive, developers might prefer to have greater control, says Cook. “If you build a tool that is actually testing your game, running the game code itself, you don’t need to worry about persistency or consistency, because it’s running the actual game. So these are solving problems that generative AI has itself introduced.”

It’s promising that the model is designed with developers in mind, says Georgios Yannakakis at the Institute of Digital Games at the University of Malta, but it might not be feasible for most developers who don’t have so much training data. “It comes down to the question of is it worth doing?” says Yannakakis. “Microsoft spent seven years collecting data and training these models to demonstrate that you can actually do it. But would an actual game studio afford [to do] this?”

Even Microsoft itself is equivocal over whether AI-designed games could be on the horizon: when asked if developers in its Xbox gaming division might use the tool, the company declined to comment.

While Hofmann and her team are hopeful that future versions of Muse will be able to generalise beyond their training data – coming up with new scenarios and levels for games on which they are trained, as well as working for different games – this will be a significant challenge, says Cook, because modern games are so complex.

“One of the ways a game distinguishes itself is by changing systems and introducing new conceptual level ideas. That makes it very hard for machine learning systems to get outside of their training data and innovate and invent beyond what they’ve seen,” he says.
 
This was exactly what I was requesting. Not new interesting games to play, but AI slop being put to use for cutscenes I can skip whilst also enjoying my made-up DLSS frames to jerk off to.
 
Then, to train the AI model, Hofmann’s team collected seven years’ worth of gameplay data from Bleeding Edge, a 2020 multiplayer battle game from Xbox’s Ninja Theory studio.
Ninja Theory
pfffft aHAHAHAHA
The game barely even lasted a year, too, and they wanted to use it.
 

I just want something to generate music and 3d models so I wouldn't be dependent on paying trannies/furries for assets.
That's obviously well under way.


I think the next big thing for gaming is going to be using AI to cut down the time needed to build large worlds so we can have 16x the detail. Basically an alternate approach to procedural generation, replacing the need to hand-craft landscapes with an editor (e.g. TES Construction Kit for the Elder Scrolls games). Maybe it can be multi-modal, taking in written descriptions and maps as input and generating all the 3D rocks, trees, grime textures, clutter, etc. it thinks it needs before assembling them. Then there needs to be a way to edit and fine-tune a portion of a world that has already been generated. I had a link for something like this too, but I lost it.
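
For what it's worth, that pipeline could be sketched as plain data flow like this. Every name here (WorldBrief, plan_assets, assemble) is invented for illustration; no real tool works this way yet.

```python
from dataclasses import dataclass, field

# Purely hypothetical sketch of the pipeline described above: a written
# brief plus a terrain map in, placed assets out.

@dataclass
class AssetRequest:
    kind: str    # "rock", "tree", "grime_texture", "clutter", ...
    style: str   # e.g. "mossy", "arid"
    count: int

@dataclass
class WorldBrief:
    description: str              # free-text region description
    heightmap: list               # 2D terrain grid (stand-in type)
    requests: list = field(default_factory=list)

def plan_assets(brief: WorldBrief) -> list:
    # Stage 1: a model reads the brief and decides which assets the
    # region needs before anything is generated (stubbed with keywords).
    if "forest" in brief.description:
        brief.requests.append(AssetRequest("tree", "temperate", 500))
    brief.requests.append(AssetRequest("rock", "weathered", 200))
    return brief.requests

def assemble(brief: WorldBrief) -> list:
    # Stage 2: generate each asset and scatter it over the heightmap.
    # Real placement would respect slope, biome, water level, and so on.
    placements = []
    for req in plan_assets(brief):
        for i in range(req.count):
            row = i % len(brief.heightmap)
            col = (i * 7) % len(brief.heightmap[0])
            placements.append((req.kind, req.style, row, col))
    return placements

world = WorldBrief("misty forest valley", heightmap=[[0.0] * 64 for _ in range(64)])
print(len(assemble(world)), "assets placed")   # 700 assets placed
```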

A holy grail for gaming will be using LLMs to power NPC interactivity, either without breaking a semi-linear plot structure or by allowing an infinitely branching story that isn't a complete mess. Replacing or cloning voice actors with AI will be the easy part.
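
One common framing of that "don't break the plot" constraint: keep plot progression in deterministic game code and confine the model to dialogue. A toy sketch follows, with call_llm stubbed out since no particular model API is implied.

```python
# Illustrative only: plot progression lives in deterministic game code,
# and the language model is confined to flavor dialogue, so it cannot
# derail the semi-linear structure. call_llm is a stub, not a real API.

PLOT_BEATS = ["meet_smith", "learn_of_heist", "get_key", "open_vault"]

def call_llm(prompt: str) -> str:
    # Stand-in for any real model call.
    return "Aye, you'll want to ask around the cellar."

def npc_reply(player_line: str, beat_index: int) -> str:
    beat = PLOT_BEATS[beat_index]
    prompt = (
        f"You are the blacksmith. Current plot beat: {beat}. "
        "You may hint at this beat but never reveal later beats. "
        f"Player says: {player_line}"
    )
    return call_llm(prompt)

def advance(beat_index: int, quest_done: bool) -> int:
    # The plot moves only when game logic says so, never because the
    # model improvised something off-script.
    return min(beat_index + int(quest_done), len(PLOT_BEATS) - 1)

print(npc_reply("Where's the key?", beat_index=1))
```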
 
What I'm rather curious about is how stable this shit is, because all it takes is one missed code for your entire game world to fuck up. The way I see it, it's gonna be used as a development autofill tool, à la Paint, and just have to be fine-tuned from there.

One thing is for certain. We now have a computer that can finish Yandere Simulator faster than Yandere Dev.
 