
It’s not just risk, you also can’t really target a narrow audience. Indies can afford to make a game that only 1/100th of people will be interested in. Even if the AAA studio was 100% sure they would succeed and gain a loyal fanbase, they won’t do that if the potential fanbase is pulled from too small of a group.


The other thing is that you don’t actually need to rise through the video game hierarchy to get an executive position like you might expect. You just need a business degree and some examples of successful leadership at other companies, even ones totally unrelated to video games.


Yeah. AAA higher-ups are very rarely gamers or actually interested in playing video games. They’re just business people who, I think, mostly want to chase profitable trends and recreate whatever successes they had in the past on projects that actually had decent leadership.

Indie devs also generally aren’t concerned with stretching the runtime out past refund limits or in a way that will prevent people from reselling the game.


There’s a pretty long video about why this sort of thing happens. Basically this sort of game is relatively cheap to make and investors think they have a chance of recreating the success of Overwatch or Fortnite or smth


I suspect that if you’re now playing where everyone else gets the same advantages, that ruins the fun of having cheats

If not and the cheats themselves are just that fun to use, sure, add it in as another gamemode


I know that camera hardware does not return HDR values. So something in the actual conversion from/in the sensor (idk how CMOS sensors work) would have to be affected by the white balance for changing it in the camera software to lose significantly more information than changing it after the picture was taken. Unless the conversion from a raw image is also a factor, but raw images aren’t HDR either, so I don’t really see how that could cause much of a difference.

If the white balance only dims colors and doesn’t brighten them then it couldn’t possibly clip anything and would have the same effect as lowering the exposure originally (with the new white balance) to avoid a clipped highlight.

I’m not a photography guy (just a computer graphics guy), so idk what the software usually does (I suspect it would avoid clipping? You could also brighten something with a gamma curve, for example, to prevent clipping…), but I can’t find anything online about sensors having hardware support for white balance adjustment.
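If white balance really is just per-channel gains applied to the sensor values (which is my assumption here), the clipping argument is easy to demonstrate; all the numbers below are made up:

```python
import numpy as np

# Illustrative raw sensor values, normalized to [0, 1]; 1.0 = full well / clipped.
raw = np.array([0.9, 0.7, 0.5])  # R, G, B for one pixel

# White balance as per-channel gains (hypothetical values).
gains_warm = np.array([1.4, 1.0, 0.8])    # gain > 1 on red can push it past 1.0
gains_dim  = np.array([1.0, 0.71, 0.57])  # same ratios, scaled so no gain exceeds 1

clipped   = np.clip(raw * gains_warm, 0.0, 1.0)  # red clips: 0.9 * 1.4 = 1.26 -> 1.0
unclipped = raw * gains_dim                      # only dims, so nothing can clip

print(clipped)    # [1.    0.7   0.4  ] -- highlight information lost in red
print(unclipped)  # [0.9   0.497 0.285] -- same ratios preserved, recoverable in post
```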


After some more testing, I think the OnePlus camera isn’t usually that bad; it just works terribly in low light.


I’m surprised that it loses dynamic range. White balance is actually built into the camera hardware?


Yeah, white balance is very fixable in post tho so that doesn’t seem like a significant problem.



This was the auto white balance, and these images are only very lightly cropped. The paper is fairly light but the lights are warm, so it’s slightly arbitrary which is better.


These have both been taken with the exact same camera from the same location. The one on the left is from the OnePlus camera app, and the one on the right is from a community modification of the Google camera app to work on the OnePlus 12. The Google one looks a lot better because it automatically uses super-resolution from multiple short exposures. Without zoom, the Google camera app does not usually look better (in my short time testing), and it also has a harder time focusing.
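Rough illustration of why stacking several short exposures helps; the real pipeline also aligns frames at sub-pixel offsets, which is where the extra resolution comes from, so this only shows the noise-reduction half (all numbers made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth scene patch, values in [0, 1].
scene = np.full((4, 4), 0.5)

# Eight short exposures, each with independent sensor noise.
frames = [scene + rng.normal(0.0, 0.05, scene.shape) for _ in range(8)]

single = frames[0]                # one noisy exposure
merged = np.mean(frames, axis=0)  # naive merge: noise std drops by ~sqrt(8)

print(np.std(single - scene))  # ~0.05
print(np.std(merged - scene))  # ~0.018
```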

They’re already going to only ship it through Steam. As long as you’re using Steam, they don’t care.


You could use Nsight; it has a Linux version and is very in-depth (Nsight Graphics shows every draw call, and Nsight Systems shows very detailed CPU tasks).

Of course, it’s harder to use than PresentMon.


Yeah, it’s probably not something I would have chosen if I had the option but I don’t really care about the curved screen.


Yeah, I got the OnePlus 12 because it was just $50 more than the 12R on Amazon at the time.

It’s definitely powerful enough, but I’m slightly disappointed by the software: ARCore is just completely broken, and HDR is fairly spotty (it works in the YouTube app and the Photos app but doesn’t work in Chrome or Google Photos).



Degrees of freedom

3DoF devices usually just track rotation, because that’s easier. But for a full VR experience, better depth perception, and more natural interactions, 6DoF devices are used, which track position as well.
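As a rough sketch of the difference (hypothetical types, not any particular SDK’s):

```python
from dataclasses import dataclass

@dataclass
class Pose3DoF:
    # Rotation only (e.g. a quaternion): you can look around, but
    # leaning, crouching, or stepping sideways is invisible to the device.
    rotation: tuple[float, float, float, float]

@dataclass
class Pose6DoF:
    # Rotation plus position: head movement actually moves the virtual
    # camera, which is what gives you real depth perception.
    rotation: tuple[float, float, float, float]
    position: tuple[float, float, float]  # x, y, z in meters
```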


IMO even a normal flatscreen is more immersive on average than Google Cardboard, although that’s partially because a flatscreen hides the flaws in the graphics a lot better.

HLA, though, needs 6DoF controllers for the intended experience. That mod tries to get around it, but that obviously involves some sacrifices.


IIRC no Cardboard ‘headset’ ever had 6DoF tracking. It’s about as far as you can get from an immersive VR experience; I say this as someone who bought one before learning about VR and getting a real VR headset.

It’s like VR with all of the downsides, even fewer apps, and the only advantage over a flatscreen being (limited) depth perception.


If you already have a (lower midrange) PC, then yes.


True

It’s doomed now, but I love my Reverb G2. I got it for the same price as a Quest 2 (before the Quest 3 released) and, having used both, it’s a lot better.


The game starts at 60 USD and goes down to 30 pretty often. If you have VR already, it’s not very expensive.



I think most big-budget multiplayer games last 2-5 years, but there are some (Among Us, Fall Guys, Lethal Company, etc.) that pass pretty quickly, and some that are just bad enough that they’re basically outdated already when they come out.


The game I most recently bought is Trackmania United Forever, still $15 on sale even though it came out in 2008. I suppose my purchase nets them less, though, than what they get from a user playing their new subscription-based (!) racing game for a year.


I think the only games I’ve played in the last month or so have been Trackmania United Forever and bonk.io





In theory it should be able to be more power efficient. In practice, less development effort has gone into RISC-V CPU designs, so they’re still less power efficient than Arm (and maybe even x86).


Still, a fully path traced game without the loss in detail that comes from heavy spatial and temporal resampling would be great

And with enough performance, we could have that in VR too. According to my calculations in another comment a while ago that I can’t be bothered to find, if this company’s claims are to be believed (unlikely) this card should be fast enough for nearly flawless VR path tracing.
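For a rough sense of the scale involved, here’s a back-of-envelope version of that kind of estimate; every number below is an assumption for illustration, not the original calculation:

```python
# Hypothetical headset and quality target (all assumed numbers):
# two 2160x2160 eyes at 90 Hz, ~4 ray segments per pixel, no resampling.
pixels_per_frame = 2 * 2160 * 2160
ray_segments_per_pixel = 4
frames_per_second = 90

rays_per_second = pixels_per_frame * ray_segments_per_pixel * frames_per_second
print(f"~{rays_per_second / 1e9:.1f} billion rays/s needed")  # ~3.4
```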

It’s less exciting for gamers than for graphics devs, because no existing games are designed to take advantage of this much RT performance.


Rasterization could be simulated in software with some driver trickery, but apparently it has less FP32 performance than the 5090, so it would be significantly slower.

Still, a RISC-V-based GPU is very weird; normally I hear about RISC-V being slower and less power efficient even as a CPU.

I expect it to be bottlenecked by complex BRDFs and shaders in actual path tracing workloads, but I guess we’ll see what happens.


The B580 is pretty fast with RT; it beats the price-comparable Nvidia GPUs.


With some games, pre-baking lighting just isn’t possible, or it will clearly show as soon as some large objects start moving.

Ray tracing opens up whole new options for visual style that wouldn’t really be possible without it (i.e. they would probably look like those low-effort Unity games you see). So far this hasn’t really been taken advantage of, since level designers are used to being limited by the problems that come with rasterization, and we’re just starting to see games come out that only support RT (and therefore don’t need to worry about looking good without it).

See the Tiny Glade graphics talk as an example; it shows both what can be done with RT and the advantages/disadvantages of taking a hardware vs. software RT approach.


You can get a ray-tracing-capable card for $150. Modern iGPUs also support ray tracing. And while hardware RT is not always better than software RT, I would like to see you try to find a non-RT lighting system that can represent small-scale global illumination in a large open world with sharp off-screen reflections.


OpenAI could use less hardware to get similar performance if they used the Chinese version, but they already have enough hardware to run their model.

Theoretically, the best move for them would be to train their own, larger model using the same technique (so as to still fully utilize their hardware), but that’s easier said than done.


Yes, the game should account for latency as much as it can, so a conscious decision to lead or trail probably won’t help. It’s more useful for debugging purposes IMO, like figuring out whether your network is slow or it’s just the person you’re playing against.
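For reference, the standard way shooters do that accounting is server-side rewind (lag compensation): the server checks your shot against where the target was when you fired, not where they are now. A minimal sketch of the idea, with made-up timestamps and positions:

```python
# The server keeps a short history of (timestamp_ms -> position) per player.
history = {
    100: (0.0, 0.0),
    150: (1.0, 0.0),
    200: (2.0, 0.0),
}

def rewound_position(history, fire_time_ms):
    """Validate the hit against where the target was when the shooter fired."""
    nearest = min(history, key=lambda t: abs(t - fire_time_ms))
    return history[nearest]

# A shooter with 60 ms of latency fires; the packet arrives at server
# time 210, so the server rewinds to ~150 and checks the hit there.
print(rewound_position(history, 210 - 60))  # (1.0, 0.0)
```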


It sounds like they’re tying the effect of attacks to the actual fine-detail game textures/materials, which I guess are only available on the GPU. It’s a weird thing to do (and a bad description of it, IMO), but that’s what I got from that summary. It wouldn’t be anywhere near as fast as normal hitscan would be on the CPU, and it also eats GPU time, which is generally the scarcer resource given how many threads modern CPUs have.

Since there’s probably only one bullet shot on any given frame most of the time, and the minimum dispatch size on a GPU is usually 32-64 threads (out of maybe 1k-20k), you end up occupying a whole wavefront just to compute this one bullet on a single core. GPU cores are also much slower than CPU cores, so the only possible reason to do this is if the data needed literally only exists on the GPU, which it sounds like it does here. You’d also have to transfer the fact that a shot was taken to the GPU, which would then have to transfer the result back to the CPU, adding a small amount of latency both ways.

This also only makes sense if you already use ray tracing elsewhere, because you generally need a BVH for ray tracing, and those are expensive to build.

Although this uses ray tracing, the only reason not to support cards without hardware ray tracing is that it would take more effort (you would have to maintain both a software raytracer and a DXR version).
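For scale, the core work of one hitscan query is a single ray cast; this is what that boils down to on the CPU, using the standard Möller–Trumbore ray-triangle test (illustrative only, not this game’s code):

```python
import numpy as np

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Möller–Trumbore: returns hit distance t along the ray, or None on a miss."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:          # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:      # outside the triangle in barycentric u
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:  # outside in barycentric v
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None

# One bullet = one ray against the scene (a real engine walks a BVH
# instead of testing every triangle).
hit = ray_hits_triangle(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]),
                        np.array([-1.0, -1.0, 0.0]), np.array([1.0, -1.0, 0.0]),
                        np.array([0.0, 1.0, 0.0]))
print(hit)  # 1.0
```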


8x the size of the world means either 1/8th the handcrafted content per area or 8x the development time and cost; there’s no way around that.