Yeah. AAA higher-ups are very rarely gamers or actually interested in playing video games. They’re just business people who, I think, mostly want to chase profitable trends and recreate whatever successes they had in the past on projects that actually had decent leadership.
Indie devs also generally aren’t concerned with stretching the runtime out past refund limits or in a way that prevents people from reselling the game.
There’s a pretty long video about why this sort of thing happens. Basically this sort of game is relatively cheap to make and investors think they have a chance of recreating the success of Overwatch or Fortnite or smth
I know that camera hardware does not return HDR values. So something in the actual conversion from/in the sensor (idk how CMOS sensors work) would have to be affected by the white balance for changing it in the camera software to lose significantly more information than changing it after the picture was taken. Unless the conversion from a raw image is also a factor, but raw images aren’t HDR either, so I don’t really see how that could cause a significant difference.
If the white balance only dims colors and doesn’t brighten them then it couldn’t possibly clip anything and would have the same effect as lowering the exposure originally (with the new white balance) to avoid a clipped highlight.
I’m not a photography guy (just a computer graphics guy) so idk what the software usually does (I suspect it would avoid clipping? You could also brighten something with a gamma curve, for example, to prevent clipping…), but I can’t find anything online about sensors having hardware support for white balance adjustment.
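To make the clipping argument concrete, here’s a tiny made-up sketch (it assumes white balance is just a per-channel gain applied to the sensor’s linear values before they get clamped to the output range; the gain numbers are arbitrary):

```
#include <algorithm>
#include <cstdio>

// Toy model: white balance as a per-channel gain on linear sensor values,
// clamped to the representable range when the image is written out.
float applyWhiteBalance(float linearValue, float gain) {
    return std::min(linearValue * gain, 1.0f);
}

int main() {
    float sensorRed = 0.8f; // bright but unclipped sensor reading

    // Gain > 1: 0.8 * 1.6 = 1.28, clamped to 1.0. The original 0.8 is gone,
    // so re-balancing later only has the clipped 1.0 to work with.
    float boosted = applyWhiteBalance(sensorRed, 1.6f);

    // Gain <= 1 only dims, so nothing can be pushed past the clip point;
    // same effect as lowering the exposure with the new balance applied.
    float dimmed = applyWhiteBalance(sensorRed, 0.7f);

    printf("gain 1.6 -> %.2f (clipped), gain 0.7 -> %.2f\n", boosted, dimmed);
    return 0;
}
```

If the in-camera pipeline works anything like this, the ≤1 case is lossless and the >1 case isn’t, which is the whole point above.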
IMO even a normal flatscreen is more immersive on average than a Google Cardboard, although that’s partially because a flatscreen hides the flaws in the graphics a lot better.
HLA tho needs 6dof controllers for the intended experience. That mod tries to get around it, but that obviously involves some sacrifices.
IIRC no Cardboard ‘headset’ ever had 6dof tracking. It’s about as far as you can get from an immersive VR experience. I say this as someone who bought one before learning about VR and getting a real VR headset.
It’s like VR with all of the downsides, even fewer apps, and the only advantage over a flatscreen being (limited) depth perception.
I think the only games I’ve played in the last month or so have been Trackmania United Forever and bonk.io
Still, a fully path traced game without the loss in detail that comes from heavy spatial and temporal resampling would be great
And with enough performance, we could have that in VR too. According to my calculations in another comment a while ago that I can’t be bothered to find, if this company’s claims are to be believed (unlikely) this card should be fast enough for nearly flawless VR path tracing.
It’s less exciting for gamers than it is for graphics devs, because no existing games are designed to take advantage of this level of RT performance.
Rasterization could be simulated in software with some driver trickery, but apparently it has less FP32 performance than the 5090, so it would be significantly slower.
Still, a RISC-V based GPU is very weird; normally I hear about RISC-V being slower and less power efficient than even a CPU.
I expect it to be bottlenecked by complex BRDFs and shaders in actual path tracing workloads, but I guess we’ll see what happens.
With some games, pre-baking lighting just isn’t possible, or the baked result will visibly break down when large objects start moving.
Ray tracing opens up whole new options for visual style that wouldn’t really be possible without it (i.e. would probably look like those low-effort Unity games you see). So far this hasn’t really been taken advantage of, since level designers are used to being limited by the problems that come with rasterization, and we’re just starting to see games come out that only support RT (and therefore don’t need to worry about looking good without it).
See the Tiny Glade graphics talk as an example; it shows both what can be done with RT and the advantages/disadvantages of taking a hardware vs. software RT approach.
You can get a ray tracing capable card for $150. Modern iGPUs also support ray tracing. And while hardware RT is not always better than software RT, I would like to see you try to find a non-RT lighting system that can represent small-scale global illumination in a large open world with sharp off-screen reflections.
OpenAI could use less hardware to get similar performance if they used the Chinese version, but they already have enough hardware to run their model.
Theoretically the best move for them would be to train their own, larger model using the same technique (so as to still fully utilize their hardware), but this is easier said than done.
It sounds like they’re tying the effect of attacks to the actual fine-detail game textures/materials, which I guess are only available on the GPU? It’s a weird thing to do and a bad description of it IMO, but that’s what I got from that summary. It wouldn’t be anywhere near as fast as normal hitscan would be on the CPU, and it also takes GPU time, which is generally the more limited resource given how many threads modern processors have.
Since there is probably at most one bullet shot on any given frame, and the minimum size of a dispatch on the GPU is usually 32-64 cores (out of maybe 1k-20k), you end up occupying a whole warp just to calculate this one singular bullet on a single core. GPU cores are also much slower than CPU cores, so clearly the only possible reason to do this is if the data needed literally only exists on the GPU, which it sounds like it does in this case. You would also first have to transfer the fact that a shot was taken to the GPU, and then transfer the result back to the CPU, adding a small amount of latency both ways.
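For a rough sense of what that looks like, here’s a minimal made-up CUDA sketch (the struct, kernel, and values are all hypothetical, and the hit test is stubbed out instead of being a real BVH traversal); it just illustrates the warp-sized launch and the device-to-host readback:

```
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical single-bullet hitscan traced on the GPU.
struct Ray { float ox, oy, oz, dx, dy, dz; };

__global__ void traceBullet(Ray ray, float* hitDistance) {
    // Only lane 0 has any work to do; the other 31 threads of the warp idle.
    // The hardware still schedules the full warp for this one bullet.
    if (threadIdx.x == 0) {
        *hitDistance = ray.dz * 123.0f; // stand-in for a real BVH traversal
    }
}

int main() {
    Ray bullet{0.f, 0.f, 0.f, 0.f, 0.f, 1.f};
    float* dHit = nullptr;
    cudaMalloc(&dHit, sizeof(float));

    // One block of 32 threads: about the smallest useful launch on the GPU.
    traceBullet<<<1, 32>>>(bullet, dHit);

    // Reading the result back is the other cost: a device-to-host copy
    // (and the implicit sync) adds latency on top of the kernel itself.
    float hit = 0.f;
    cudaMemcpy(&hit, dHit, sizeof(float), cudaMemcpyDeviceToHost);
    printf("hit distance: %f\n", hit);

    cudaFree(dHit);
    return 0;
}
```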
This also only makes sense if you already use raytracing elsewhere, because you generally need a BVH for raytracing and these are expensive to build.
Although this is using raytracing, the only reason not to support cards without hardware raytracing is that it would take more effort to do so (as you would have to maintain both a normal raytracer and a DXR version)
It’s not just risk, you also can’t really target a narrow audience. Indies can afford to make a game that only 1/100th of people will be interested in. Even if an AAA studio were 100% sure it would succeed and gain a loyal fanbase, it still wouldn’t do that if the potential fanbase is drawn from too small of a group.