
I dunno, I don’t play these games. The most demanding game I play on Steam Deck is Oblivion Remastered, which runs fine with upscaling/framegen and lowish settings. The nostalgia factor makes low settings totally fine for this game, too, so it’s not a big deal. Any game where I want great graphics and performance, I’ll just play on my desktop.
For $900 you could literally just build a decent desktop, but you do you

I have the original and passed on upgrading to the OLED. It really hasn’t shown much age at all yet. I’m not really playing AAA or demanding titles on it anyway, and it works perfectly for all of the games I do want to play on it. I figure the limiting factor will be the battery, and that seems to be just as good as when it was new.
The clones aren’t acceptable replacements for me; they’re more handheld consoles than handheld PCs. If it doesn’t have touchpads, I don’t want it, period.

Colorbots are extremely efficient and can be run on just a Raspberry Pi.
Human reaction time is ~200-250ms, while the cheat introduces well under 10ms of latency.
I’ve never used cheats in a video game because I don’t see the point and it would spoil the fun of playing, but as a software developer, it’s interesting to learn how they work and how they’re implemented.
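For anyone curious, the core idea really is that simple: grab a frame, find pixels near the enemy outline colour, and compute the offset from the crosshair. Here’s a rough Python sketch of just that detection step; the colour value and tolerance are numbers I made up for illustration, not anything from a real cheat.

```python
# Minimal colorbot-style detection sketch (illustrative only).
# TARGET_BGR and TOLERANCE are made-up example values.
import numpy as np
import mss

TARGET_BGR = np.array([64, 48, 255])   # e.g. a red enemy outline (B, G, R)
TOLERANCE = 30                          # per-channel colour tolerance

with mss.mss() as screen:
    region = screen.monitors[1]                        # primary monitor
    frame = np.asarray(screen.grab(region))[:, :, :3]  # BGRA -> drop alpha

    # Boolean mask of pixels within tolerance of the target colour
    mask = np.all(np.abs(frame.astype(int) - TARGET_BGR) <= TOLERANCE, axis=-1)
    ys, xs = np.nonzero(mask)

    if xs.size:
        # Offset of the closest match from the screen centre (the crosshair)
        cx, cy = region["width"] // 2, region["height"] // 2
        nearest = np.argmin((xs - cx) ** 2 + (ys - cy) ** 2)
        print("aim offset:", int(xs[nearest] - cx), int(ys[nearest] - cy))
    else:
        print("no target-coloured pixels in this frame")
```

A per-frame numpy scan like that takes a few milliseconds even on weak hardware, which is where the sub-10ms latency claim comes from.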

Kernel anti-cheat does absolutely nothing to prevent aimbots/triggerbots, as most are run using 2 separate machines, anyway. The first machine runs the game in a totally clean and legitimate environment, but sends its video output (either using standard streaming tools like OBS or by using special hardware) to the 2nd machine. The 2nd machine runs the cheat and processes the video to detect where to aim and/or when to shoot, and sends mouse input back to the 1st machine.
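To make the split concrete, the second machine’s loop is basically: read a frame from the capture device, run whatever detection you like, push a tiny movement packet over a serial link to a microcontroller that the first machine sees as an ordinary USB mouse. A hedged sketch of that plumbing, where the device paths, the packet format, and the empty find_target are all placeholders I’ve invented for the example:

```python
# Sketch of the "second machine" loop described above (placeholders throughout).
import struct
import cv2       # opencv-python
import serial    # pyserial

cam = cv2.VideoCapture(0)                      # capture card, shows up like a webcam
link = serial.Serial("/dev/ttyACM0", 115200)   # microcontroller emulating a USB mouse

def find_target(frame):
    """Stand-in for whatever detection is used (colour match, object detection...).
    Should return a (dx, dy) offset from the crosshair, or None."""
    return None

while True:
    ok, frame = cam.read()
    if not ok:
        break
    offset = find_target(frame)
    if offset is not None:
        dx, dy = offset
        # One tiny packet per frame: two signed bytes the microcontroller
        # replays as relative mouse movement on the gaming machine.
        clamp = lambda v: max(-127, min(127, int(v)))
        link.write(struct.pack("bb", clamp(dx), clamp(dy)))
```

Nothing in that loop ever touches the gaming machine’s memory or drivers, which is exactly why kernel-level anti-cheat can’t see it.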

The difference is literally just implementation, and they’re both statistical models, but 👍
If you disagree, explain how. I’ll wait
no wonder you hail AI as good
When, exactly, did I? I called them both janky dogshit, but simply pointed out the very real hypocrisy of supporting procedural generation while hating generative AI.

by your logic, slavery would be excusable. That’s the argument you’re making.

I’m sorry, we’re talking about the implementation of generated content in video games. That only works if it’s EQUIVALENT to slavery, which it’s not (as you yourself said in an attempt to have it both ways lol), so “my logic” does not apply to slavery… Dude.

👀 SLAVERY??? Come on man. Outrageous.
theunknownmuncher thinks it’s somehow inconsistent to be against generative AI while being ok with procedural generation, which implies that they think they’re equivalent in some way.
It’s genuinely wild that you wrote this and then minutes later tried to make a “comparison but totally NOT equivalency, guys” to SLAVERY. 🤦🤦🤦
EDIT: btw, not that it matters at this point, but that’s not what a simile is. It is an analogy, though, just a super flawed and shitty one.

both are used to produce more content with less effort. There’s your equivalence.
Bingo.
As if the reason people don’t like generative AI is because it makes bad games.
Nice, point proven. 😎 If it doesn’t make games bad, then the complaints are simply invalid and bandwagoning, and developers cannot be faulted for using it. LOL

Nope! That actually is, mathematically, how it works. Upscaling does not amplify NOISE, like e.g. surface boiling, although it does introduce many other artifacts. Noise, specifically, would be smoothed. The problem with upscaling is actually not noise but oversmoothing, which is why it’s paired with sharpening. You can just look at an upsampled signal to see how noise is affected: boosting gain increases noise; interpolating samples does not.
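If you don’t believe the math, here’s a toy 1-D version you can run yourself: add noise to a signal, “upscale” it with plain linear interpolation, and compare the noise level against simply boosting gain. It’s obviously not a renderer, just the underlying signal math; the numbers in the comments are approximate.

```python
# Toy check: interpolation does not amplify noise, gain does.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
clean = np.sin(np.linspace(0, 20 * np.pi, n))
noisy = clean + rng.normal(0, 0.1, n)           # add noise with std 0.1

# 2x "upscale" by plain linear interpolation
x = np.arange(n)
x_up = np.linspace(0, n - 1, 2 * n)
noisy_up = np.interp(x_up, x, noisy)
clean_up = np.interp(x_up, x, clean)

print("original noise std:", np.std(noisy - clean))          # ~0.100
print("upscaled noise std:", np.std(noisy_up - clean_up))    # ~0.087 (slightly smoothed)
print("2x gain noise std: ", np.std(2 * noisy - 2 * clean))  # ~0.200 (amplified)
```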

You can test it yourself and see: just go ahead and disable the FSR and frame gen gimmicks entirely while keeping ray tracing on. Hell, disable all AA and motion blur while you’re at it, and really take a gander at what actual, unblurred ray tracing looks like.
Edit: also, “with low quality upscaling” lmao I’d love to hear what the implied “high quality upscaling” does differently 😂 something right? It’s totally different!!

That’s not down to graphic card.
Yeah. That’s literally my point. Ray tracing just isn’t there yet. Has nothing to do with GPUs.
Surprisingly, Oblivion Remaster running Unreal Engine 5 doesn’t have this issue even on RX 9070 XT.
Because you have aggressive upscaling and frame gen enabled: you’ve blurred your screen to the point that details like boiling are lost, and then artificially resharpened it with the details an AI is guessing were there.
Disable these, set it to render natively, and enjoy the analog static.

A huge factor is rendering resolution. I only render at resolutions below 1080p (1024x768 or 1600x1200). A 2x performance improvement over the 6800XT in general sounds very incorrect if the benchmarks are run at 1080p, unless they are using upscaling and frame gen to cheat the performance numbers. Do you have a link to these benchmarks? I’d be less skeptical about a significant performance improvement over the 6800XT if the benchmarks were done specifically at 4k resolution, though, as both AMD and NVIDIA have further optimized their GPUs for 4k rendering with each passing generation.
Upscaling/framegen and 4k are completely irrelevant to me, so counting those out, it’s a marginal improvement based on the numbers I’ve seen. I’d like to be wrong, though, and I could be.
I mean, I never really had any problem running it with Proton on desktop anyway.