IMO even a normal flatscreen is more immersive on average than a Google Cardboard, although that’s partially because a flatscreen hides the flaws in the graphics a lot better.
HLA tho needs 6dof controllers for the intended experience. That mod tries to get around it, but that obviously involves some sacrifices.
IIRC no cardboard ‘headset’ ever had 6dof tracking. It’s about as far as you can get from an immersive VR experience. I say this as someone who bought one before learning about VR and getting a real VR headset.
It’s like VR with all of the downsides, even fewer apps, and the only advantage over a flatscreen being (limited) depth perception.
I think the only games I’ve played in the last month or so have been Trackmania United Forever and bonk.io
Still, a fully path-traced game without the loss in detail that comes from heavy spatial and temporal resampling would be great.
And with enough performance, we could have that in VR too. According to my calculations in another comment a while ago that I can’t be bothered to find, if this company’s claims are to be believed (unlikely), this card should be fast enough for nearly flawless VR path tracing.
It’s less exciting for gamers than it is for graphics devs, because no existing games are designed to take advantage of RT performance this high.
Rasterization could be simulated in software with some driver trickery, but apparently the card has less FP32 performance than the 5090, so it would be significantly slower.
Still, a RISC-V-based GPU is very weird; normally I hear about RISC-V designs being slower and less power efficient than even a regular CPU.
I expect it to be bottlenecked by complex BRDFs and shaders in actual path tracing workloads, but I guess we’ll see what happens.
With some games, pre-baking lighting just isn’t possible, or the baked lighting will visibly break when large objects start moving.
Ray tracing opens up whole new options for visual style that wouldn’t really be possible without it (i.e. they’d probably end up looking like those low-effort Unity games you see). So far this hasn’t really been taken advantage of, since level designers are used to being limited by the problems that come with rasterization, and we’re only just starting to see games come out that only support RT (and therefore don’t need to worry about looking good without it).
See the Tiny Glade graphics talk as an example; it shows both what can be done with RT and the advantages/disadvantages of taking a hardware vs software RT approach.
You can get a ray tracing capable card for $150. Modern iGPUs also support ray tracing. And while hardware RT is not always better than software RT, I would like to see you try to find a non-RT lighting system that can represent small-scale global illumination in a large open world with sharp off-screen reflections.
OpenAI could use less hardware to get similar performance if they used the Chinese version, but they already have enough hardware to run their model.
Theoretically the best move for them would be to train their own, larger model using the same technique (so as to still fully utilize their hardware), but this is easier said than done.
It sounds like they’re tying the effect of attacks to the actual fine-detail game textures/materials, which I guess are only available on the GPU? It’s a weird thing to do and a bad description of it IMO, but that’s what I got from that summary. It wouldn’t be anywhere near as fast as normal hitscan would be on the CPU, and it also takes GPU time, which is generally more limited than CPU time given the thread counts on modern processors.
Most of the time there’s probably only one bullet fired on any given frame, but the minimum dispatch size on a GPU is usually 32-64 threads (out of maybe 1k-20k cores), so you’d be launching a whole wave just to calculate that one bullet on a single core. GPU cores are also much slower than CPU cores, so clearly the only possible reason to do this is if the data needed literally only exists on the GPU, which it sounds like it does in this case. You’d also first have to transfer the fact that a shot was taken to the GPU, which would then have to transfer the result back to the CPU, adding a small amount of latency both ways.
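Just to put rough numbers on the dispatch point above (all of these are assumed, order-of-magnitude figures for illustration, not measurements from this game or any specific GPU):

```python
# back-of-the-envelope: how much of the GPU a one-bullet dispatch actually uses
wave_size = 32         # smallest group of threads many GPUs will schedule at once
rays_this_frame = 1    # a single hitscan shot
gpu_threads = 10_000   # ballpark shader core count on a modern GPU

print(f"{rays_this_frame / gpu_threads:.4%} of the GPU doing useful work, "
      f"{wave_size - rays_this_frame} of the {wave_size} launched lanes idle")
```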
This also only makes sense if you already use raytracing elsewhere, because you generally need a BVH for raytracing and these are expensive to build.
Although this is using raytracing, the only reason not to support cards without hardware raytracing is that it would take more effort to do so (as you would have to maintain both a normal raytracer and a DXR version)
antialiasing and denoising through temporal reprojection (using data from multiple frames)
it works pretty well imo but makes things slightly blurry when the camera moves; how much that bothers you really depends on the person
it’s in a lot of games because their reflections/shadows/ambient occlusion/hair rendering etc. needs it, it’s generally cheaper than MSAA (taking multiple samples on the edges of objects), it can denoise specular reflections, and it works much more consistently than SMAA or FXAA
modern upscalers (DLSS, FSR, XeSS) are basically a more advanced form of TAA, intended for upscaling, and use the AI cores built into modern GPUs. They have all of the advantages (denoising, antialiasing) of TAA, but they also generally show the same blurriness in motion.
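In case it helps to see what “temporal reprojection” actually boils down to, here’s a minimal per-pixel sketch in Python; the blend factor, the neighborhood clamp, and the numbers are typical illustrative choices, not from any particular engine or upscaler:

```python
import numpy as np

def taa_resolve(current, history_reprojected, neighborhood, alpha=0.1):
    # current:             this frame's RGB sample for the pixel (noisy/aliased)
    # history_reprojected: last frame's accumulated color, fetched through the
    #                      pixel's motion vector (the "reprojection" step)
    # neighborhood:        this frame's colors around the pixel, used to clamp
    #                      the history and limit ghosting
    # alpha:               how much of the new frame gets blended in each frame
    lo = neighborhood.min(axis=0)
    hi = neighborhood.max(axis=0)
    # clamp stale history into the range of what's on screen right now
    history = np.clip(history_reprojected, lo, hi)
    # exponential moving average across frames = the "temporal" accumulation
    return alpha * np.asarray(current, dtype=float) + (1.0 - alpha) * history

# example: one bright aliased sample gets pulled toward its accumulated history
print(taa_resolve(
    current=[1.0, 0.2, 0.2],
    history_reprojected=[0.4, 0.25, 0.25],
    neighborhood=np.array([[0.9, 0.2, 0.2],
                           [0.3, 0.2, 0.2],
                           [0.5, 0.25, 0.2],
                           [0.45, 0.2, 0.25]])))
```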
“the garbage trend is to produce a noisy technique and then trying to “fix” it with TAA. it’s not a TAA problem, it’s a noisy garbage technique problem…if you remove TAA from a ghosty renderer, you have no alternative of what to replace it with, because the image will be so noisy that no single-shot denoiser can handle it anyway. so fundamentally it’s a problem with the renderer that produced the noisy image in the first place, not a problem with TAA that denoised it temporally”
(this was Alexander Sannikov (a Path of Exile graphics dev) in an argument/discussion with Threat Interactive on the Radiance Cascades discord server, if anyone’s interested)
Anyways, it’s really easier said than done to “just have a less noisy technique”. Most of the time, it comes down to this choice: would you like worse, blobbier lighting and shadows, or would you like a little bit of blurriness when you’re moving? Screen resolution keeps getting higher, and temporal techniques such as DLSS keep getting more popular, so I think you’ll find that more and more people are going to go with the TAA option.
I think modern graphics cards are programmable enough that getting gamma correction right is on the devs now, which is why it’s commonly wrong (not in video games and engines, though; they mostly know what they’re doing). The Windows image viewer, ImageGlass, Firefox, and even Blender do the color blending in images without gamma correction (for its actual rendering, Blender does things properly in the XYZ color space; it’s just the image sampling that’s different, and only in Cycles). It’s basically the standard, even though it leads to these weird zooming effects on pixel-perfect images as well as color darkening and weird hue shifts, while being imperceptibly different in all other cases.
If you want to test a program yourself, use this image:
Try zooming in and out. Even if the image is scaled, the left side should look the same as the bottom of the right side, not the top. It should also stay roughly the same color regardless of its scale (excluding some moiré patterns).
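If you’d rather see the darkening effect in code than in an image viewer, here’s a rough numpy sketch; it uses the simple gamma-2.2 approximation of sRGB rather than the exact piecewise curve, and the pattern is just a stand-in for a pixel-perfect test image:

```python
import numpy as np

def srgb_to_linear(c):
    return c ** 2.2          # simple gamma-2.2 approximation of the sRGB curve

def linear_to_srgb(c):
    return c ** (1.0 / 2.2)

# alternating pure black / pure white pixels, like a 1:1 checker or line pattern
row = np.array([0.0, 1.0] * 8)

# common but wrong: average the stored sRGB values directly when downscaling 2:1
naive = row.reshape(-1, 2).mean(axis=1)
# correct: convert to linear light, average, then convert back to sRGB
correct = linear_to_srgb(srgb_to_linear(row).reshape(-1, 2).mean(axis=1))

print(naive[0])    # 0.5   -> displays noticeably darker than the original pattern
print(correct[0])  # ~0.73 -> the grey that actually matches the pattern's brightness
```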
9.2 from https://xdaforums.com/t/oneplus-12-gcam-discussion.4672393/