Hacker News

But does it need to be frame-based?

What if you combined this with a parallel engine that provides all geometry, including characters and objects with their respective behaviors, recording the changes made through the interactions the other model generates and talking back to it?

A dialogue between two parties with different functionality, so to speak.

(Non technical person here - just fantasizing)



In that scheme what is the NN providing that a classical renderer would not? DOOM ran great on an Intel 486, which is not a lot of computer.


> DOOM ran great on an Intel 486

It always blew my mind how well it worked on a 33 MHz 486. I'm fairly sure it ran at 30 fps in 320x200. That gives it just over 17 clock cycles per pixel, and that doesn't even include time for game logic.

My memory could be wrong, though; even if it required a 66 MHz part to reach 30 fps, that's still only about 34 clocks per pixel on an architecture where even a simple integer add took multiple clocks.
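A quick sanity check of those figures (a sketch; the 320x200 resolution and 30 fps target are the ones cited above, and the budget ignores game logic entirely):

```python
def clocks_per_pixel(cpu_hz: float, width: int = 320, height: int = 200, fps: int = 30) -> float:
    """Clock cycles available per rendered pixel per frame."""
    return cpu_hz / (width * height * fps)

# ~17.2 clocks/pixel on a 33 MHz 486
print(round(clocks_per_pixel(33_000_000), 1))
# ~34.4 clocks/pixel on a 66 MHz 486
print(round(clocks_per_pixel(66_000_000), 1))
```

So both figures in the comment check out: 33,000,000 / (320 * 200 * 30) is just over 17.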


An experience that isn’t asset- but rule-based.


In that case, the title of the article wouldn’t be true anymore. It seems like a better plan, though.


What would the model provide if not what we see on the screen?


The environment and everything in it.

“Everything” would mean all objects and the elements they’re made of, plus the rules for how they interact and decay.

A modularized ecosystem, I guess, composed of “sub-systems” of sorts.

The other model, which provides all interaction (cause for effect), could either be run artificially or be used interactively by a human - opening up the possibility of being a tree : )

All of this would need an interfacing agent: in principle, an engine simulating the second law of thermodynamics while recording every state that has changed and diverged off the driving actor’s vector in time.

Basically, the “effects” model keeping track of everyone’s history.

In the end, a system with an “everything” model (that can grow over time) and a “cause” model messing with it, brought together and documented by the “effect” model.

(Again … non technical person, just fantasizing) : )


For instance, for a generated real-world RPG, one process could create the planet, one the city where the player starts, one the NPCs, and one the relationships of the NPCs with each other - each building off the others so that the whole thing feels nuanced and more real.

Repeat for quest lines, new cities, etc., with the NPCs having real-time dialogue and interactions that happen entirely off screen, no guarantee of a massive quest objective, and some sort of recorder of events keeping a running tally of everything that goes on, so that as the PCs interact with the world they are never repeating the same dreary thing.

If this were an MMORPG it would require an enormous amount of processing and architecting, but it would have the potential to be the greatest game in human history.


What you’re asking for doesn’t make sense.


So you're basically just talking about upgrading "enemy AI" to a more complex form of AI :)





