



Sure, I could definitely see situations where it would be useful, but I’m fairly confident that no current games are doing that. First of all, getting real-world data for that type of thing is much easier said than done. Even if you manage to find a dataset with positions of various features across various biomes and train an AI model on it, in 99% of cases it will still take a lot more development time and probably be a lot less flexible than manually setting up rulesets, blending different noise maps, having artists scatter objects in an area, etc. It will probably also have problems generating unusual terrain types, which matters if the game is set in a fantasy world with terrain unlike anything you’d find in the real world. So then you’d need artists to come up with a whole lot of data to train the model with, when they could just be making the terrain directly. I’m sure Google DeepMind or Meta AI or some team of university researchers could do AI terrain generation very well, but game studios are not typically connected to those sorts of people, even if they’re technically under the same parent company (Microsoft or Meta).
You can get very far with conventional procedural generation techniques: hydraulic erosion, climate simulation, maybe even an ecosystem model. And all of those together would probably still be a much easier sell at a game studio than some sort of machine-learning landscape predictor.
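To make “blending different noise maps” concrete, here’s a minimal sketch of the conventional approach (cheap value noise standing in for proper Perlin/simplex, and every threshold here is made up):

```python
import numpy as np

def value_noise(h, w, cell, rng):
    """Cheap value noise: a coarse random grid, bilinearly upsampled."""
    grid = rng.random((h // cell + 2, w // cell + 2))
    ys, xs = np.arange(h) / cell, np.arange(w) / cell
    y0, x0 = ys.astype(int), xs.astype(int)
    fy, fx = (ys - y0)[:, None], (xs - x0)[None, :]

    def corner(dy, dx):
        return grid[np.ix_(y0 + dy, x0 + dx)]

    top = corner(0, 0) * (1 - fx) + corner(0, 1) * fx
    bottom = corner(1, 0) * (1 - fx) + corner(1, 1) * fx
    return top * (1 - fy) + bottom * fy

rng = np.random.default_rng(42)
h = w = 256

# blend a broad low-frequency map with finer detail for elevation,
# and use an independent map for moisture
elevation = 0.7 * value_noise(h, w, 64, rng) + 0.3 * value_noise(h, w, 16, rng)
moisture = value_noise(h, w, 48, rng)

# a crude hand-written biome ruleset instead of a trained model
biome = np.select(
    [elevation > 0.65, moisture > 0.55],
    ["mountain", "forest"],
    default="plains",
)
print({b: int((biome == b).sum()) for b in ("mountain", "forest", "plains")})
```

Swap the thresholds, add more maps (temperature, slope, etc.), and you get exactly the kind of hand-tuned ruleset that is hard for an ML model to beat on dev time.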


But it seems like almost every other storefront operates under those margins for digital sales (not just in gaming)
Notably, Epic Games takes only a 12% cut, and 0% of the first $1 million in sales (effectively 0% for the vast majority of indie games). A cynical take is that they’re just doing this to attract developers to their store, which is almost certainly true, but it doesn’t necessarily mean they’ll raise the cut if they become dominant. Unfortunately the Epic Games platform is missing most of the extra features Steam has (built-in streaming, family sharing, input binding, Big Picture mode, etc.)
Tim Sweeney, CEO of Epic Games, is about 80% as wealthy as Gabe Newell and has done much more philanthropy, although that probably still amounts to less than one percent of his net worth.


It’s not just risk; you also can’t really target a narrow audience. Indies can afford to make a game that only one person in a hundred will be interested in. Even if a AAA studio were 100% sure a game would succeed and gain a loyal fanbase, it won’t make it if the potential fanbase is drawn from too small a group.


Yeah. AAA higher-ups are very rarely gamers or actually interested in playing video games. They’re just business people who, I think, mostly want to chase profitable trends and recreate whatever successes they had in the past under projects that actually had decent leadership.
Indie devs also generally aren’t concerned with padding the runtime past refund windows or designing the game so that people can’t resell it.
There’s a pretty long video about why this sort of thing happens. Basically, this sort of game is relatively cheap to make, and investors think they have a chance of recreating the success of Overwatch or Fortnite or something.
I know that camera hardware does not return HDR values. So something in the conversion from/in the sensor (I don’t know the internals of how CMOS sensors work) would have to be affected by the white balance for changing it in the camera software to lose significantly more information than changing it after the picture was taken. Unless the conversion from a raw image is also a factor, but raw images aren’t HDR either, so I don’t really see how that could cause much of a difference.
If the white balance only dims colors and doesn’t brighten them, then it couldn’t possibly clip anything, and it would have the same effect as lowering the exposure originally (with the new white balance) to avoid a clipped highlight.
I’m not a photography guy (just a computer graphics guy), so I don’t know what the software usually does (I suspect it would avoid clipping? You could also brighten with a gamma curve, for example, to prevent clipping…), but I can’t find anything online about sensors having hardware support for white balance adjustment.
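To spell out the clipping argument: white balance is just a per-channel gain on linear values, so gains ≤ 1 can only push values down, never past the sensor maximum. A minimal sketch (assuming linear RGB normalized so the sensor max is 1.0; the gain values are made up):

```python
import numpy as np

def apply_white_balance(raw, gains):
    """Apply per-channel white-balance gains to linear RGB and
    clip to the sensor's representable range [0, 1]."""
    return np.clip(raw * np.asarray(gains), 0.0, 1.0)

pixel = np.array([[0.9, 0.8, 0.6]])  # a bright but unclipped pixel

# A gain > 1 pushes the red channel past 1.0, so it clips (info lost):
print(apply_white_balance(pixel, [1.5, 1.0, 0.7]))   # [[1.0, 0.8, 0.42]]

# Gains <= 1 only scale down, so nothing can clip -- equivalent to
# having shot at a lower exposure with the new balance applied:
print(apply_white_balance(pixel, [1.0, 0.8, 0.47]))  # [[0.9, 0.64, 0.282]]
```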


IMO even a normal flatscreen is more immersive on average than a Google Cardboard, although that’s partially because a flatscreen hides the flaws in the graphics a lot better.
HLA, though, needs 6DoF controllers for the intended experience. That mod tries to get around it, but that obviously involves some sacrifices.
IIRC no Cardboard ‘headset’ ever had 6DoF tracking. It’s about as far as you can get from an immersive VR experience. I say this as someone who bought one before learning about VR and getting a real VR headset.
It’s like VR with all of the downsides, even fewer apps, and the only advantage over a flatscreen being (limited) depth perception.


I think the only games I’ve played in the last month or so have been Trackmania United Forever and bonk.io


Still, a fully path-traced game without the loss in detail that comes from heavy spatial and temporal resampling would be great.
And with enough performance, we could have that in VR too. According to my calculations in another comment a while ago that I can’t be bothered to find, if this company’s claims are to be believed (unlikely), this card should be fast enough for nearly flawless VR path tracing.
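Roughly the kind of back-of-envelope I mean (all of these numbers are assumptions, not specs of any particular headset or card):

```python
# Rough rays/second budget for "nearly flawless" VR path tracing.
pixels_per_eye = 2000 * 2000  # assume ~4 MP per eye
eyes = 2
refresh_hz = 90
samples_per_pixel = 1         # assume a denoiser cleans up 1 spp
rays_per_path = 4             # primary ray plus a few bounces

rays_per_second = (pixels_per_eye * eyes * refresh_hz
                   * samples_per_pixel * rays_per_path)
print(f"{rays_per_second / 1e9:.1f} Grays/s")  # ~2.9 Grays/s
```

Multiply by samples per pixel if you want to lean less on the denoiser; that’s where the required numbers balloon.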
It’s less exciting for gamers than it is for graphics devs, because no existing games are designed to take advantage of this much RT performance.


Rasterization could be simulated in software with some driver trickery, but apparently it has less FP32 performance than the 5090, so it would be significantly slower.
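For the curious, “simulated in software” basically means running the triangle coverage test as ordinary FP32 math on the compute cores instead of fixed-function hardware. A toy version of that inner loop (2D only, no depth or perspective; all names here are mine):

```python
import numpy as np

def edge(a, b, p):
    """Signed area of triangle (a, b, p): positive when p is to the
    left of the directed edge a -> b."""
    return ((b[0] - a[0]) * (p[..., 1] - a[1])
            - (b[1] - a[1]) * (p[..., 0] - a[0]))

def rasterize_triangle(v0, v1, v2, width, height):
    """Boolean coverage mask for one counter-clockwise 2D triangle."""
    ys, xs = np.mgrid[0:height, 0:width]
    p = np.stack([xs + 0.5, ys + 0.5], axis=-1)  # sample at pixel centers
    w0, w1, w2 = edge(v1, v2, p), edge(v2, v0, p), edge(v0, v1, p)
    # a pixel is covered when it's on the inner side of all three edges
    return (w0 >= 0) & (w1 >= 0) & (w2 >= 0)

mask = rasterize_triangle((10, 10), (120, 20), (40, 100), 128, 128)
print(mask.sum(), "pixels covered")
```

A real driver would also have to emulate depth testing, blending, attribute interpolation, etc., which is why raw FP32 throughput matters so much for this path.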
Still, a RISC-V-based GPU is very weird; normally what I hear about RISC-V is that it’s slower and less power efficient even as a CPU.
I expect it to be bottlenecked by complex BRDFs and shaders in actual path tracing workloads, but I guess we’ll see what happens.


With some games, pre-baking lighting just isn’t possible, or its limitations clearly show as soon as large objects start moving.
Ray tracing opens up whole new options for visual style that wouldn’t really be possible (read: would probably look like those low-effort Unity games you see) without it. So far this hasn’t really been taken advantage of, since level designers are used to being limited by the problems that come with rasterization, and we’re just starting to see games come out that only support RT (and therefore don’t need to worry about looking good without it).
See the Tiny Glade graphics talk as an example; it shows both what can be done with RT and the advantages/disadvantages of a hardware vs. software RT approach.


You can get a ray-tracing-capable card for $150. Modern iGPUs also support ray tracing. And while hardware RT is not always better than software RT, I would like to see you try to find a non-RT lighting system that can represent small-scale global illumination in a large open world with sharp off-screen reflections.
Decent I think, as long as you don’t want to use XeSS