
The game engines are programmed to use them as part of the rendering cycle.

If you’re using DLSS or RT, they’re being used.


NVIDIA’s RTX series of cards has two fixed-function blocks that sit beside the regular CUDA/shader cores.

They have RT Cores, which are optimized to accelerate bounding volume hierarchy (BVH) traversal and ray/triangle intersection tests, speeding up raytracing operations.
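If “ray/triangle intersection test” sounds abstract, here’s a rough sketch of the standard Möller-Trumbore test in plain Python/NumPy (purely illustrative; this is the kind of math an RT Core runs in fixed-function hardware, millions of times per frame):

```python
import numpy as np

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Möller-Trumbore ray/triangle intersection: returns the hit distance t, or None on a miss."""
    edge1, edge2 = v1 - v0, v2 - v0
    pvec = np.cross(direction, edge2)
    det = np.dot(edge1, pvec)
    if abs(det) < eps:             # ray is parallel to the triangle's plane
        return None
    inv_det = 1.0 / det
    tvec = origin - v0
    u = np.dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:         # hit point falls outside the triangle
        return None
    qvec = np.cross(tvec, edge1)
    v = np.dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(edge2, qvec) * inv_det
    return t if t > eps else None  # distance along the ray to the hit

# One ray fired straight down at a triangle lying in the z=0 plane.
tri = (np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
print(ray_hits_triangle(np.array([0.2, 0.2, 1.0]), np.array([0.0, 0.0, -1.0]), *tri))  # 1.0
```

The BVH part is just a tree that lets the hardware skip the millions of triangles a ray obviously can’t hit, so only a handful of these tests are needed per ray.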

There are also Tensor Cores, NVIDIA’s “AI” cores, which are optimized for mixed-precision matrix multiplication. DLSS 3 uses a convolutional neural network (CNN) for upscaling, and that is, essentially, a bunch of matrix multiplications.
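“Mixed precision” just means the inputs are stored at low precision (fp16) while the accumulation happens at higher precision (fp32). A toy NumPy emulation of the idea (nothing to do with the actual DLSS network, it only illustrates the number formats):

```python
import numpy as np

# Weights and activations stored as fp16 (half the memory and bandwidth of fp32)...
activations = np.random.rand(64, 256).astype(np.float16)
weights = np.random.rand(256, 128).astype(np.float16)

# ...but the multiply-accumulate is done at fp32 so rounding error doesn't pile up.
# Tensor Cores do this kind of fp16-in / fp32-accumulate matmul on small tiles in hardware.
out = activations.astype(np.float32) @ weights.astype(np.float32)
print(out.shape, out.dtype)  # (64, 128) float32
```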

The RT and Tensor Cores offload that work onto dedicated hardware, so the CUDA cores that handle the bulk of the shading/rasterization aren’t tied up with those calculations. That lowers the time to render a frame, which equates to higher FPS.

AMD’s RDNA2/3 chips have Ray Accelerators, which accelerate the ray/triangle tests, but the bulk of the RT load (BVH traversal, shading, and denoising) runs on the regular shader cores. They’ve just announced (this month) that they’re adding ‘Radiance Cores’ to future hardware, which will handle all raytracing functions like NVIDIA’s RT Cores do.

AMD doesn’t have an equivalent of a Tensor Core; FSR is done in software on the standard shader compute units.

So on NVIDIA cards, DLSS upscaling is ‘free’ in the sense that it doesn’t take time away from the shader cores and RT is accelerated similarly.

This is a good video explaining how Raytracing works if some of the terms are strange to you: https://www.youtube.com/watch?v=gsZiJeaMO48

As an aside, this video is from the ‘Summer of Math Exposition’, a contest 3Blue1Brown runs every year for the best and most interesting math explainer videos; this one was a winner of the first year’s contest, and the playlists are on 3Blue1Brown’s YT channel. 3b1b is great all around, if you’re into that kind of thing.


There’s still a gear grind so you can progress in your item score, but you don’t have to kill 30 rats and run around farming herbs for 2 hours just to do a raid.


5800X3D

It may be the gains from having dedicated hardware to run DLSS and RT.

Of course, it does drop into the 70s during combat and in some outdoor areas.


I played this morning before work, worked just fine.

It uses EAC, which may be a kernel anticheat on Windows, but on Linux it runs in user space.



Check out Fellowship, it’s WoW Mythic+ dungeons without the MMO grind.



I was looking into this; it’s weird that it isn’t on ProtonDB.

Future Linux Converts:

If you wonder “Will the game that I play work on Linux?”, there’s a website for that:

https://www.protondb.com/


I’m sad the FPS-RTS hybrid really never took off.

Savage 1/2 and the Half-Life mod Natural Selection were kind of popular for a while, but the genre just kind of faded away.

I did like HoN better than LoL though


I’m not sure I understand the point that you’re trying to make.

If you use Linux you can push more power to get higher clock rates, and you get longer battery life, a more stable framerate, and a suspend feature that works.

It seems reasonable to say “ROG Xbox Ally runs better on Linux than the Windows it ships with”

It’s like claiming a race car is only faster because it produces more horsepower… yes, that’s the entire point and what we want.


I’ve been playing it with friends (just past the looooooooooong ramp).

It’s a fun game to grab and play over the weekend and then forget about. There’s not much replayability, but go in blind with some friends and enjoy the hilarity that ensues.


Even if it were a fair comparison, 32% is still cherry-picked. It’s the best result, versus the average result of +13–14% (the +6–7 figure was the raw fps increase, not the percentage). 🤷‍♂️

Oh sure. Headlines are always going for maximum clickbait.

The bottom line from the article and video is that the experience is much better on Linux, outside of the kernel-anticheat games.


Did we read the same article?

So sure, if you run one device at a 21 W TDP and the other at 17 W, one will do better!

From the article:

Unless you’re really splitting hairs, 31.91% = 32%. Using the same TDP on both.

It’s due to a bug not apples to apples comparison.

There are only two mentions of bugs, and both are used to describe Bazzite’s rapid development; nothing to do with performance.

So, what are you talking about?



RPing when shot and going down. “Help me sarge! Gah. It-it hurts!”

The chopper pilots that spent the entire match ferrying people from base to the front while blasting the weirdest song mix you’ve ever heard.


Even if the application doesn’t support it, you can just bind-mount a directory on the storage drive over the directory in /var where it’s trying to save, and the application won’t even know.
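A minimal sketch of what that looks like (the paths are made up, it needs root, and it’s really just a wrapper around mount --bind; an equivalent entry in /etc/fstab makes it persistent):

```python
import subprocess

# Hypothetical paths: the big storage drive, and the /var directory the app insists on using.
src = "/mnt/storage/appdata"
dst = "/var/lib/some-app"

# After this, anything the app writes to /var/lib/some-app actually lands on the
# storage drive; from the app's point of view nothing changed. Run as root.
subprocess.run(["mount", "--bind", src, dst], check=True)
```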



People complain about performance and then complain about a patch. I’m starting to think social media only cares about outrage.


It’s the JIT-compiled shaders that need to be rebuilt in the state cache. Other games do the same thing; it’s why there are tiny stutters the first time you see a new effect in some games.
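The pattern is basically compile-on-first-use plus a cache. A hand-wavy sketch (nothing to do with any real driver’s pipeline cache, just the shape of the problem):

```python
import time

shader_cache: dict[int, str] = {}   # shader hash -> "compiled" pipeline blob

def get_pipeline(shader_source: str) -> str:
    """First use of a shader pays the JIT compile cost (the stutter); later uses hit the cache."""
    key = hash(shader_source)
    if key not in shader_cache:
        time.sleep(0.05)                       # stand-in for an expensive JIT compile
        shader_cache[key] = f"blob:{key}"      # a real cache persists this to disk
    return shader_cache[key]

get_pipeline("fancy_new_particle_effect")  # first sighting: hitch
get_pipeline("fancy_new_particle_effect")  # cached: instant
```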


The argument is that this tech is being used by both the manufacturer and game devs to be lazy and market lies not how can we ever get to 1000hz with path tracing.

Yeah, marketing lies. I mentioned this in the last paragraph.

The whole 500hz benefits are skeptical and subjective at best considering even going from 144 to 240 you’re already seeing diminishing returns on but that’s really a whole other argument about monitor BS currently.

You’re skeptical of the benefits, that is obvious.

You’re wrong about it being subjective though. There are peer-reviewed methods of creating photographs that display motion blur as a human eye would experience it, and people have been using these techniques to evaluate monitors for years now. Here’s a very high-level overview of the state of objective testing: https://blurbusters.com/massive-upgrade-with-120-vs-480-hz-oled-much-more-visible-than-60-vs-120-hz-even-for-office/ . We see diminishing returns because it roughly takes a doubling of the refresh rate to cut the motion blur in half: 60 to 120 is half as blurry, while 144 to 240 is only about 40% less blurry.
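Back-of-the-envelope numbers, assuming sample-and-hold blur scales with frame persistence (1/refresh rate):

```python
# Sample-and-hold motion blur is roughly proportional to how long each frame
# stays on screen, i.e. persistence = 1 / refresh rate.
def blur_reduction(old_hz: float, new_hz: float) -> float:
    """Fraction of motion blur removed by moving from old_hz to new_hz."""
    return 1 - old_hz / new_hz

for old, new in [(60, 120), (144, 240), (240, 480), (480, 1000)]:
    print(f"{old:>4} Hz -> {new:>4} Hz: {blur_reduction(old, new):.0%} less blur")
# Each step only removes ~40-50% of what's left, so the refresh rate has to keep
# doubling to keep making the same visible dent.
```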

If you want to keep seeing noticeable gains, up to the point where motion blur is imperceptible, then display refresh rates need to keep doubling and new frames have to be generated for each of those refreshes. Even if a card can do 480 fps in some limited games, it can’t do 1000 fps, or 2000 fps.

We need exponential increases in monitor refresh rates to keep improving motion blur, but graphics cards haven’t made exponential gains in performance for quite some time.

Rasterization and raytracing performance growth is sub-exponential, while the requirements for reducing motion blur are exponential. So either monitor companies decide to stop improving (not likely, since TCL just demoed a 4K 1000 Hz monitor) or there has to be some technological solution to fill the gap.

That technological solution is frame generation.

Unless you know of some other way to introduce exponential growth in processing power (if you did, you’d win multiple Nobel prizes), we have to use something that isn’t raw rendering. There is no way for a game to ‘optimize’ its way into 10x the framerate, or 100x.

Being a complex solution doesn’t make it a good solution and frame gen is not a good solution for making sure your game doesn’t run like ass.

Yes, game companies are lazy, and they cover that laziness by marketing their games with a lot of upscaling so they can keep producing crazier and crazier graphics despite graphics card performance not keeping up. That is the fault of game companies and their marketing, not of upscaling and frame generation technology.

Frame generation is supposed to help older cards get better “FPS” and smooth out motion, you know what would help that over having new games use frame generation as a big ass crutch? Optimizing your damn game so you don’t stutter like a drunken sailor with a speech impediment in the first place and not adding a crap ton of latency with fake frames.

Frame generation gives all cards better FPS, which objectively smooths out motion. Going from 30 to 60 fps cuts motion blur in half. Nothing supposed about it.

A developer’s choice to optimize their game and their choice to support upscaling and frame generation are not mutually exclusive choices. There are plenty of examples of games which run well natively and also support frame generation and upscaling.

Also, frame generation only adds latency when the frame time is long (low FPS); as the source framerate increases, the input latency and the frame time converge. In addition, it’s possible to use frame generation to reduce input delay (Blur Busters: https://blurbusters.com/frame-generation-essentials-interpolation-extrapolation-and-reprojection/). Input latency is a very solvable problem.


My point is that you’re not understanding the trajectory of display hardware development versus graphics card performance growth, and you’re presenting frame generation and upscaling as some plot by game developers and graphics card designers to produce worse products.

It’s conspiracy nonsense.


Frame generation objectively reduces motion blur and improves frame consistency.

Neural network-based upscaling is a far better alternative. Previously, in the time of the dinosaurs, we’d get a better framerate by turning the resolution down and letting the monitor handle the upscaling. It looked bad, but a higher framerate is often more important for image quality than resolution. Now we get the same performance boost with much less loss of visual clarity, and some antialiasing for free on top of it.

Upscaling and frame generation are good technologies. People are upset at the marketing of graphics cards which abuse these technologies to announce impressive FPS numbers when the hardware isn’t as big of an upgrade as implied.

Marketing departments lying about their products isn’t new, but for some people this is the first time that they’ve noticed it affecting them. Instead of getting mad at companies for lying, they’re ignorantly attacking the technologies themselves.


Frame generation is a requirement if we’re going to see very high refresh rate (480hz+) displays become the norm. No card is rasterizing an entire scene 500 times per second.

Calling it fake frames is letting Internet memes stand in place of actual knowledge. There are a lot of optimizations in the rendering pipeline that use data from previous frames to generate future ones; producing an intermediate frame while the GPU finishes rendering the next real frame is just one more trick.
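As a toy example of the interpolation idea only (real frame generation like DLSS FG or FSR 3 warps pixels along motion vectors / optical flow rather than doing a plain blend):

```python
import numpy as np

def interpolate_frame(prev_frame: np.ndarray, next_frame: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Synthesize an in-between frame from two rendered frames (H x W x 3 arrays in [0, 1])."""
    return (1.0 - t) * prev_frame + t * next_frame

# Stand-ins for two rendered frames; the blended frame is what gets displayed
# while the GPU is still finishing the next real one.
frame_a = np.zeros((4, 4, 3))
frame_b = np.ones((4, 4, 3))
middle = interpolate_frame(frame_a, frame_b)
print(middle[0, 0])  # [0.5 0.5 0.5]
```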

The generated frames increase the visual clarity of motion; you can see it for yourself at https://testufo.com/photo.

We’re not going to have cards that can pathtrace at 4K and 1000 Hz anytime soon; frame generation is one of the techniques that will make it possible.

It’s one thing to be upset at companies’ marketing teams, who try to confuse people with FPS numbers by tweaking upscaling and frame generation. Directing that frustration at the technology itself is silly.

e: a downvote, great argument


No matter how optimized a game is, there will be someone with hardware that can barely run it.

For those people, having access to upscaling in order to gain performance is a plus.




Ah, I didn’t RTFM completely. I just read a snippet that mentioned the NGX updater and misunderstood the context.

I’ll give it a shot, thanks.


it’s there on others (edges can get a bit shimmery with tsr) but really bad with dlss

Yeah, that’s my experience as well. TSR seems to be doing the same thing, but it isn’t applying the oversharpening that makes the artifacts stand out.

What launch options are you using if you don’t mind?

ENABLE_HDR_WSI=1 PROTON_ENABLE_HDR=1 PROTON_ENABLE_WAYLAND=1 gamemoderun %command%

You have to be using GE-Proton10 or above in order to use Wayland’s HDR. I don’t think you need ENABLE_HDR_WSI (I believe PROTON_ENABLE_HDR makes Proton set a bunch of environment variables on startup), but I’m not sure.


Using GE-Proton10-15, HDR works great too.

I did notice the edge flickering artifacts with upscaling. XeSS is a bit higher quality than TSR, but it also has the flickering. FSR frame generation causes the flickering on some of the particle effects they use for atmosphere (like pieces of dust floating in the air), so it isn’t very usable currently.

The game isn’t perfect, but it’s very playable for me after some settings adjustments. I didn’t have any crashes in 5.5 hours of playtime, but I did notice the shader compilation stutter, and there were some spots where you could tell it was loading a zone when you walked over a specific point. I happened to be in combat at one of those spots, ran across it a few times, and that caused some framerate issues.

A HUGE amount of the stuttering was eliminated by setting the Textures Streaming Speed to Very High; it looks like this setting throttles disk IO for performance reasons. If you have an NVMe SSD, I can’t think of a reason not to set it to Very High.

Good to know that Steam co-op works. I’ll try it later today; my friends are all running Linux too and didn’t want to buy a copy if it wasn’t going to work. I happened to be home yesterday, so I was the guinea pig.

I tried updating the DLSS version (using PROTON_ENABLE_NGX_UPDATER=1), but the flickering still occurs. Same with RENDER_PRESET_K. It almost looks like they’re applying too much sharpening when you’re using DLSS, but I don’t see a way to adjust that specifically.


Playing on Linux (Arch, btw) with no issues. The defaults were a bit harsh (45 fps at 4K), but once I ran the graphics settings autodetection and it went to medium with balanced upscaling, I was getting 60+ FPS.

I couldn’t connect to the matchmaking servers (my system doesn’t meet the requirements, apparently) but it otherwise ran just fine.


I can sum up this topic for basically every game:

If I can’t beat it, it is too hard.

If you can’t beat it, it is because you suck.


Same, I enjoyed the gameplay but every item drop was far less exciting because I’d need to pause and consult a wiki before making a choice.





The OP and the article are both about a Reddit post by one user who got a weird error when trying to run both games at the same time.

Then the lead developer at Riot responded on X explaining that the error was caused by running both clients at the same time.

That’s the entire story.

There’s nothing presented, in either article, that suggests that it is a widespread problem.

Just because the clickbait press is reporting the same story with different headlines doesn’t mean it is a widespread problem. They’re both writing about the same Reddit post and the same X reply.

It’s a non-story.



Why did you link an article that you haven’t read?

From your article:

AnAveragePlayer tried to run both games simultaneously on his PC, which led to the problems. This is generally not a particularly good idea, as both programs compete for the available hardware.

The problem can be easily solved by not trying to play two games at the same time. Which is actually impossible with two fast-paced first-person shooters.


The headline makes it sound worse than it is.

From the article:

Riot head of anti-cheat Phillip Koskinas cleared up the misunderstanding in an X post earlier this week.

“Vanguard is compatible with Javelin, and you don’t need to uninstall one anti-cheat to use the other. However, BF6 does not currently allow the VALORANT client to be running simultaneously, because both drivers race to protect regions of game memory with the same technique.”

So, you can’t play BF6 and VALORANT at the same time… not exactly a massive issue unless you’re running a mainframe, I guess?