
  • 1 Post
  • 61 Comments
Joined 2Y ago
Cake day: Jun 15, 2023


This was the auto white balance, and these images are only very lightly cropped. The paper is fairly light but the lights are warm, so it’s slightly arbitrary which is better.


These have both been taken with the exact same camera from the same location. The one on the left is with the OnePlus camera app, and the one on the right is from a community modification of the Google camera app to work on the OnePlus 12. The Google one looks a lot better because it automatically applies super-resolution across multiple short exposures. In my brief testing, the Google camera app does not usually look better without zoom, and it also has a harder time focusing.

They’re already going to only ship it through Steam. As long as you’re using Steam, they don’t care.


You could use Nsight; it has a Linux version and is very in-depth (Nsight Graphics shows every draw call, and Nsight Systems shows very detailed CPU tasks).

Of course, it's harder to use than PresentMon.


Yeah, it’s probably not something I would have chosen if I had the option but I don’t really care about the curved screen.


Yeah, I got the OnePlus 12 because it was just $50 more than the 12R on Amazon at the time.

It’s definitely powerful enough, but I’m slightly disappointed by the software: ARCore is just completely broken, and HDR is fairly spotty (it works in the YouTube app and the stock photos app, but doesn’t work in Chrome or Google Photos).



Degrees of freedom

3DoF devices usually just track rotation, because that’s easier. But for a full VR experience, better depth perception, and more natural interactions, 6DoF devices are used, which track position as well.
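The difference can be sketched in code; a toy Python sketch (the yaw-only rotation and 2D points are simplifications for illustration, not how real tracking works):

```python
import math

def view_3dof(yaw, world_pt):
    # 3DoF: only head rotation is tracked; the head is assumed fixed at the origin
    x, z = world_pt
    c, s = math.cos(-yaw), math.sin(-yaw)
    return (c * x - s * z, s * x + c * z)

def view_6dof(yaw, head_pos, world_pt):
    # 6DoF: the tracked head position is subtracted first, then rotation applied,
    # so leaning or stepping sideways shifts the whole view (motion parallax)
    x, z = world_pt[0] - head_pos[0], world_pt[1] - head_pos[1]
    c, s = math.cos(-yaw), math.sin(-yaw)
    return (c * x - s * z, s * x + c * z)

# leaning 0.2 m to the side changes nothing in 3DoF, but shifts the 6DoF view
print(view_3dof(0.0, (1.0, 2.0)))              # (1.0, 2.0)
print(view_6dof(0.0, (0.2, 0.0), (1.0, 2.0)))  # (0.8, 2.0)
```

That missing parallax when you lean is a big part of why rotation-only tracking feels flat.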


IMO even a normal flatscreen is more immersive on average than a google cardboard, although that’s partially because a flatscreen hides the flaws in the graphics a lot better.

HLA, though, needs 6DoF controllers for the intended experience. That mod tries to get around it, but that obviously involves some sacrifices.


IIRC no cardboard ‘headset’ ever had 6DoF tracking. It’s about as far as you can get from an immersive VR experience. I say this as someone who bought one before learning about VR and then getting a real VR headset.

It’s like VR with all of the downsides, even fewer apps, and the only advantage over a flatscreen being (limited) depth perception.


If you already have a (lower midrange) PC, then yes.


True

It’s doomed now, but I love my Reverb G2. I got it for the same price as a Quest 2 (before the Quest 3 released) and, having used both, it’s a lot better.


The game starts at 60 USD and goes down to 30 pretty often. If you have VR already, it’s not very expensive.



I think most big budget multiplayer games last 2-5 years, but there are some (Among Us, Fall Guys, Lethal Company, etc.) that fade pretty quickly, and some that are just bad enough that they’re basically outdated already when they come out.


The game I most recently bought is Trackmania United Forever, still $15 on sale even though it came out in 2008. I suppose my purchase is worth less to them, though, than what they get from a user playing their new subscription-based (!) racing game for a year.


I think the only games I’ve played in the last month or so have been Trackmania United Forever and bonk.io





In theory it should be able to be more power efficient. In practice, less development effort has gone into RISC-V CPU designs, so they are still less power efficient than Arm (and maybe even x86).


Still, a fully path traced game without the loss in detail that comes from heavy spatial and temporal resampling would be great

And with enough performance, we could have that in VR too. According to my calculations in another comment a while ago that I can’t be bothered to find, if this company’s claims are to be believed (unlikely) this card should be fast enough for nearly flawless VR path tracing.

It’s less exciting for gamers than it is for graphics devs, because no existing games are designed to take advantage of this level of RT performance.


Rasterization could be emulated in software with some driver trickery, but apparently it has less FP32 performance than the 5090, so it would be significantly slower.

Still, a RISC-V-based GPU is very weird; normally I hear about RISC-V being slower and less power efficient even as a CPU.

I expect it to be bottlenecked by complex BRDFs and shaders in actual path tracing workloads, but I guess we’ll see what happens.


The B580 is pretty fast with RT; it beats the price-comparable Nvidia GPUs.


With some games, pre-baking lighting just isn’t possible, or the seams will clearly show when large objects start moving.

Ray tracing opens up whole new options for visual style that wouldn’t really be possible without it (i.e., they would probably look like those low-effort Unity games you see). So far this hasn’t really been taken advantage of, since level designers are used to being limited by the problems that come with rasterization, and we’re just starting to see games come out that only support RT (and therefore don’t need to worry about looking good without it).

See the Tiny Glade graphics talk as an example; it shows both what can be done with RT and the advantages/disadvantages of a hardware vs. software RT approach.


You can get a ray tracing capable card for $150. Modern iGPUs also support ray tracing. And while hardware RT is not always better than software RT, I would like to see you try to find a non-RT lighting system that can represent small-scale global illumination in a large open world with sharp off-screen reflections.


OpenAI could use less hardware to get similar performance if they used the Chinese version, but they already have enough hardware to run their model.

Theoretically the best move for them would be to train their own, larger model using the same technique (so as to still fully utilize their hardware), but this is easier said than done.


Yes, the game should account for latency as much as it can, so consciously choosing to lead or trail probably won’t help. It’s more useful for debugging purposes, IMO, like figuring out whether your network is slow or whether it’s the person you’re playing against.


It sounds like they’re tying the effect of attacks to the actual fine-detail game textures/materials, which I guess are only available on the GPU? It’s a weird thing to do and a bad description of it IMO, but that’s what I got from that summary. It wouldn’t be anywhere near as fast as normal hitscan would be on the CPU, and it also takes GPU time, which is generally more limited than CPU time given the thread counts of modern processors.

Since there is probably only one bullet shot on any given frame most of the time, and the minimum dispatch granularity on the GPU is usually 32-64 threads (out of maybe 1k-20k), most of a warp sits idle just to calculate this one singular bullet on a single lane. GPU cores are also much slower than CPU cores, so clearly the only possible reason to do this is if the data needed literally only exists on the GPU, which it sounds like it does in this case. You would also first have to transfer to the GPU that a shot was taken, and it would then have to transfer the result back to the CPU, adding a small amount of latency both ways.
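The occupancy cost can be put in rough numbers; a toy Python calculation (the hardware figures are the same order-of-magnitude guesses as above, not specs of any particular GPU):

```python
warp_size = 64        # minimum dispatch granularity (e.g. an AMD wavefront)
total_lanes = 10_000  # rough lane count on a modern GPU
rays_this_frame = 1   # one bullet fired this frame

lanes_occupied = max(rays_this_frame, warp_size)  # a whole warp gets reserved
warp_utilization = rays_this_frame / lanes_occupied
gpu_utilization = rays_this_frame / total_lanes

print(warp_utilization)  # 0.015625 -> ~1.6% of the reserved warp does work
print(gpu_utilization)   # 0.0001   -> 0.01% of the whole GPU
```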

This also only makes sense if you already use raytracing elsewhere, because you generally need a BVH for raytracing and these are expensive to build.

Although this uses raytracing, the only reason not to support cards without hardware raytracing is that it would take more effort to do so (you would have to maintain both a software raytracer and a DXR version).


8x the size of the world means either 1/8 the original handcrafted content per area or 8x the development time and cost; there’s no way around it.



Antialiasing and denoising through temporal reprojection (using data from multiple frames).

It works pretty well IMO, but makes things slightly blurry when the camera moves; how much that bothers you really depends on the person.

It’s in a lot of games because their reflections/shadows/ambient occlusion/hair rendering etc. needs it, it’s generally cheaper than MSAA (taking multiple samples on the edges of objects), it can denoise specular reflections, and it works much more consistently than SMAA or FXAA.

Modern upscalers (DLSS, FSR, XeSS) are basically a more advanced form of TAA, intended for upscaling, and use the AI cores built into modern GPUs. They have all of TAA’s advantages (denoising, antialiasing), but also share its blurriness in motion.
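The core accumulation step behind all of this can be sketched in a few lines; a toy Python sketch that assumes a static camera (real TAA also reprojects the history buffer along motion vectors before blending, which is where the motion blurriness comes from):

```python
import random

def taa_accumulate(history, current, alpha=0.1):
    # per-pixel exponential moving average: keep 90% of the accumulated
    # history and blend in 10% of the new (noisy) sample each frame
    return [(1 - alpha) * h + alpha * c for h, c in zip(history, current)]

random.seed(0)
truth = [0.5, 0.5, 0.5, 0.5]  # the value each pixel should converge to
history = [0.0] * 4
for _ in range(200):
    noisy = [v + random.uniform(-0.2, 0.2) for v in truth]
    history = taa_accumulate(history, noisy)

# after a couple hundred frames the noise has mostly averaged out
print(all(abs(h - 0.5) < 0.15 for h in history))  # True
```

The small `alpha` is the tradeoff in miniature: a lower value averages away more noise but makes the image react more slowly (i.e., blur) when things move.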


“the garbage trend is to produce a noisy technique and then trying to “fix” it with TAA. it’s not a TAA problem, it’s a noisy garbage technique problem…if you remove TAA from from a ghosty renderer, you have no alternative of what to replace it with, because the image will be so noisy that no single-shot denoiser can handle it anyway. so fundamentally it’s a problem with the renderer that produced the noisy image in the first place, not a problem with TAA that denoised it temporally”

(this was Alexander Sannikov (a Path of Exile graphics dev) in an argument/discussion with Threat Interactive on the Radiance Cascades discord server, if anyone’s interested)

Anyways, it’s really easier said than done to “just have a less noisy technique”. Most of the time, it comes down to this choice: would you like worse, blobbier lighting and shadows, or would you like a little bit of blurriness when you’re moving? Screen resolution keeps getting higher, and temporal techniques such as DLSS keep getting more popular, so I think you’ll find that more and more people are going to go with the TAA option.


I think modern graphics cards are programmable enough that getting gamma correction right is on the devs now, which is why it’s commonly wrong (not in video games and engines; they mostly know what they’re doing). Windows’ image viewer, ImageGlass, Firefox, and even Blender blend image colors without gamma correction. (For its actual rendering, Blender does things properly in the XYZ color space; it’s just the image sampling that’s different, and only in Cycles.) It’s basically the standard, even though it leads to these weird zooming effects on pixel-perfect images as well as color darkening and weird hue shifts, while being imperceptibly different in all other cases.

If you want to test a program yourself, use this image:

Try zooming in and out. Even if the image is scaled, the left side should look the same as the bottom of the right side, not the top. It should also look roughly the same color regardless of its scale (excluding some moiré patterns).
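The darkening is easy to reproduce numerically; a minimal Python sketch using the standard sRGB transfer functions (the 50/50 black/white average stands in for what a downscaler computes on a checkerboard):

```python
def srgb_to_linear(v):  # 0..255 encoded value -> linear light 0..1
    c = v / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l):  # linear light 0..1 -> 0..255 encoded value
    c = l * 12.92 if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055
    return c * 255.0

# averaging one black and one white pixel, as a downscaler would:
naive = (0 + 255) / 2  # blend the encoded values directly
correct = linear_to_srgb((srgb_to_linear(0) + srgb_to_linear(255)) / 2)

print(naive)           # 127.5 -> only ~21% of white's linear light: too dark
print(round(correct))  # 188   -> 50% linear light, what the blend should be
```

The hue shifts come from the same mistake applied per channel: each channel gets darkened by a different amount, so the ratio between them changes.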








No one’s making billions of dollars. No one’s making a single dollar. Both games have absolutely no monetization.