
It’s on the AUR, but you need a patched kernel, which you can grab from CachyOS.
Instructions are in the developer’s blog post: https://pixelcluster.github.io/VRAM-Mgmt-fixed/
Q: I use another Arch-based distro! What now?
The dmemcg-booster and plasma-foreground-booster utilities are available in the AUR as well (plasma-foreground-booster carries the package name plasma-foreground-booster-dmemcg), so you can install them from there. For the kernel side, you can either use the CachyOS kernel package on a non-CachyOS system by retrieving the package from their repository, or you can compile your own kernel.
Installing linux-dmemcg from the AUR will compile the development branch I used to develop this. Being a development branch, this carries the risk of some stuff being broken, so install at your own risk! If you want to apply the kernel patches yourself, you need these six .patch files: [links in blog]
I’m not sure how easily they apply on specific kernel versions, but feel free to leave a comment if you run into issues and I’ll try to help out.
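For anyone who hasn’t applied kernel patches before, a minimal sketch of the `patch -p1` workflow you’d use for those files. The toy file and patch below are placeholders so the example is self-contained; the real patch names come from the blog post, and you’d run this against your kernel source tree instead:

```shell
set -e

# Build a toy "patch series": diff a before/after pair of files.
# (For the real thing, you'd download the six .patch files instead.)
mkdir -p demo/a demo/b
printf 'hello\n' > demo/a/file.txt
printf 'hello patched\n' > demo/b/file.txt
( cd demo && diff -u a/file.txt b/file.txt > ../demo.patch ) || true

# A stand-in for the kernel source tree.
mkdir -p tree
printf 'hello\n' > tree/file.txt

# --dry-run first to check the patch applies cleanly, then apply for real.
# -p1 strips the leading a/ and b/ path components from the patch headers.
( cd tree && patch -p1 --dry-run < ../demo.patch && patch -p1 < ../demo.patch )

cat tree/file.txt   # now contains "hello patched"
```

Running `--dry-run` on each patch before applying is the easy way to find out which ones don’t apply cleanly on your kernel version without leaving the tree half-patched.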

Lots of for-profit commercial entities contribute to open source projects.
The code they’re contributing is covered by the same license as the code contributed by volunteer developers.
I understand why we should be cautious about these things, but the current situation is that Valve is contributing a lot and their contributions are open source. Yeah, they’re doing it for a profit motive, but not to the point where they’re trying to kill open source projects or hide the updates behind proprietary binaries.
Valve is, currently, not being evil. GabeN has plenty of yacht money.

essentially a manager game? but how does the multiplayer fit in? it can’t have the social depth a real mmorpg has
It reminds me of idle games.
Specifically, IdleOn (https://www.youtube.com/watch?v=JvBFRDuNRoo). The progression is in progressing every character individually and also in ways that are shared among characters.
This just seems more WoW-like while IdleOn is MapleStory-like.

I played a few hours of the first one with a pirated copy and then immediately purchased it for the social aspect.
As long as there are no issues running it via Proton, I’m going to buy it tomorrow.
That being said, it’s expensive and younger/broker me would pirate it. The first one was a great experience in every aspect and if the only way you can play is to pirate it, go for it.

If it’s that controllable that’s pretty cool. I could see it being useful to do things that are normally expensive (like raytracing shadows on grass) but which don’t really matter if they’re altered a bit. Being able to exclude faces or important set pieces would be a big plus.
Not that it matters much for me, my next card will likely be AMD for Linux reasons.

I get your point; I don’t think it looks very good on the whole, and I almost certainly won’t use it.
However, the direction that they’re going in inserting it earlier in the rendering chain seems a bit more promising than simply taking a low-res output and making it bigger.
I could easily see materials/shaders gaining properties which would exclude them from the process. An artist may not care too much about how the grass is enhanced, but they may want to disable it for parts of a character’s model or set pieces in the world.
That kind of thing isn’t really possible with DLSS as it stands now (and probably isn’t possible with DLSS 5), but the idea of attacking the problem earlier in the rendering sequence is interesting.

You’re not crazy, you’re just reading a topic associated with AI and so it’s full of bots, their misinformation and outrage, and the idiots that are influenced by them.
Like all of these threads, we get these insane bad faith ‘arguments’, misinformation and heavy vote manipulation.
There are certainly valid criticisms of DLSS. It creates visual artifacts, it’s often used by games as a crutch for performance, and in the case of DLSS 5 the overall effect is weird, as you’ve said. I agree with a lot of the complaints and I’ll probably enable DLSS 5 once and then go back to native… but I think that a lot of comments here are just ridiculous, so you’re not alone there :P.

You don’t understand, once DLSS 5 is released into the wild then nobody will have a choice. It’s basically Skynet, the end of the world, Snow Crash, a breach in the Black Wall.
It will install itself the moment a person searches for Godot tutorials and nobody can ever disable it. It would be LITERALLY IMPOSSIBLE (didn’t you see that they said ‘literally’?!) for an artist to control.
/s

RTGI is already pretty much perfect these days if you do it right
Ray Tracing the entire scene looks great. It’s also way more computationally expensive than upscaling.
DLSS is just a shortcut, and shortcuts have costs. I don’t like the image quality cost so I don’t use DLSS (XeSS looks better anyway) and so I just buy more powerful hardware. Someone on a low-end machine can’t simply enable raytracing and still have a playable game, DLSS gives them more options.

Except DLSS 5 isn’t just upscaling. It’s replacing the image.
Technically all upscaling replaces the frame with a higher resolution frame.
Even with non-AI upscaling, like linear or bicubic, the original frame isn’t copied and then upscaled. The upscaled image is built based on the old image and replaces the original frame in the frame buffer. DLSS doesn’t alter the process, it just uses a neural network instead of a linear/bicubic algorithm.
The new difference with DLSS 5 seems to be that instead of using the frame as the only input it also takes in additional information from earlier in the rendering pipeline (motion vectors) prior to upscaling. This would theoretically create more accurate outputs.
It’s kind of like how asking an LLM a question becomes more accurate if you first paste the Wikipedia article which answers your question into the context. Having more information allows for better output quality.
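A toy sketch of that replacement, with nearest-neighbour standing in for bicubic so it needs no libraries (the names here are illustrative, not any real engine API):

```python
# Upscaling builds a brand-new frame from the old one; the "frame buffer"
# then holds the new image, and the original was only an input.
def upscale_nearest(frame, factor):
    """Return a new frame `factor`x larger; `frame` is a 2D list of pixels."""
    return [
        [frame[y // factor][x // factor]
         for x in range(len(frame[0]) * factor)]
        for y in range(len(frame) * factor)
    ]

low_res = [[1, 2],
           [3, 4]]                              # toy 2x2 "rendered frame"

framebuffer = low_res
framebuffer = upscale_nearest(framebuffer, 2)   # new image replaces the old

print(framebuffer)        # 4x4 frame built from, but distinct from, low_res
print(framebuffer is low_res)   # False: the original was never modified
```

Swapping `upscale_nearest` for bicubic or a neural network changes how the new pixels are computed, not the replace-the-frame structure, which is the point being made above.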
And to achieve what it does, they used one 5090 to render the game normally, and an entire second 5090 just to run DLSS 5.
How is that an improvement in efficiency?
Based on the reporting, the use of 2x 5090s in the demo was due to the VRAM requirements of the current iteration, not a higher compute requirement. The official DLSS 5 release will run on a single card (according to NVIDIA).

The term slop is essentially meaningless.
It’s like people that use ‘woke’ as an insult: it applies to everything they don’t like, despite nobody having a clear definition of what it actually means.
To me, slop is the mass-produced articles/videos created by generative AI, not ‘everything that is done with machine learning’.
Simply calling everything AI ‘slop’ is meaningless virtue signaling, like using ‘woke’.

Hope you like slop in your slop
What does this even mean?
DLSS applies upscaling to video games. So, even if we buy the “call anything made by AI ‘slop’” meme then wouldn’t the headline be ‘Hope you like slop in your video games’?
Some people are so anti-AI-brained that they don’t even make sense. I’m just picturing the OP going back and forth trying to wedge the word ‘clanker’ in there somewhere but giving up and posting this nonsense instead.

I don’t see the point in hiding it other than being somewhat petty.
The point in hiding it was that the code was being used, without harassment or complaint, right up until he added attribution, which resulted in an avalanche of complaints that require resources to deal with. Discord, the forums, and GitHub pull requests now require much more moderation labor, which takes away from the project.
People had no complaints about the code quality until he started adding AI attribution. So he removed the attribution.
Like he said, if people couldn’t tell the difference until he started marking the code AI-assisted… then they don’t actually have an argument and are simply bringing anti-AI politics into the project.

If there’s no difference in quality, why obfuscate it? Why hide something that you think is a valuable tool if your code can speak for itself?
The timeline was that he started adding attribution indicating the use of AI.
Then the anti-AI drones started bombarding the GitHub, Discord, and forums with harassment. His recent statements and removal of attribution are entirely a response to the anti-AI people harassing the project staff.
He’s not removing it and saying ‘fuck you’ to the users. He’s tired of being harassed by third parties who are not involved with the project in any way and so he removed the source of the harassment.

I agree.
If you read the anti-AI comments you’ll find that when they say ‘AI’ they mean ‘LLMs fine tuned to be chatbots’ and ‘Diffusion models which generate bitmaps or video files’
They’re seemingly ignorant of all of the other things that Transformers and Deep Neural Networks are used for.
Remember how there were all of these projects trying to crowd-source an algorithm to fold proteins given an amino acid sequence? Well, a trained neural network ‘AI’ called AlphaFold was created, and it can complete the task with >90% accuracy. THEN, using a network like AlphaFold, another group of scientists made a diffusion model that could be prompted with protein parameters and generate the string of amino acids which would fold into that protein.
I find it hard to believe that the ‘fuck AI’ crowd understands that ‘AI’ is completely separate from the capitalist frenzy over chatbots and image generation. The vast majority of their complaints are not about the technology, they are about assholes who have a lot of money that are abusing and overhyping the technology in order to get more money.

It seems like you’re glossing over the fact that he was including authorship until he was targeted with a harassment campaign by the anti-AI nutjobs.
He removed authorship in response to being harassed. His point was that including authorship has only led to harassment, which takes resources away from the actual project. If a person can’t tell that the code was AI generated without a ‘Generated by Claude Code’ tag, then their complaints about AI’s quality seem to fall flat.

Out of many more ethical models out there, why go with that one specifically?
Because it is the better tool for the use case that he is engaging with.
You’re setting up an impossible standard, one that you don’t follow yourself.
You know that Social Media is used to spread propaganda throughout the world, leading to hate crimes, genocides, wars, sexual exploitation etc. You’re still using social media. There are many more ethical ways to talk to people, why go with social media specifically?
All you’ve discovered is that there is no ethical consumption under capitalism. You can take anything that a person does and trace the supply chain to find examples of wholly immoral behavior. Unless you plan on living in a cave, you’re going to appear like a hypocrite at the very least if you start picking apart the choices of others under that lens.

That’s twisting the order of events.
The developer was marking code when AI was used.
Anti-AI drones started harassing him in Discord, the forums, and GitHub PRs.
The developer stopped marking code when AI was used.
The Anti-AI assholes are not participating in development in good faith, this is a harassment campaign. He’s taking steps to mitigate the harassment.
The fault and blame here is entirely on the people who thought it was okay to dog pile on a volunteer developer.
I think Vanadium blocks screenshots by default as part of its security model.