Or developers could just optimize their games instead of using this generative drivel to compensate for lazy or rushed development.
No matter how optimized a game is, there will be someone with hardware that can barely run it.
For those people, having access to upscaling in order to gain performance is a plus.
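To put rough numbers on that performance gain, here's a quick back-of-the-envelope sketch. It assumes shading cost scales roughly with the number of pixels rendered, and the per-axis scale factors are the commonly cited presets for DLSS/FSR quality modes, so treat the exact figures as illustrative rather than exact.

```python
# Rough sketch: how much rendering work upscaling can save.
# Assumption: shading cost scales roughly with pixels rendered, and the
# per-axis scale factors below are the commonly cited quality presets
# (treat them as illustrative, not exact for every implementation).

OUTPUT = (3840, 2160)  # 4K target resolution

PRESETS = {
    "Native":      1.0,
    "Quality":     2 / 3,   # ~0.667 per axis
    "Balanced":    0.58,
    "Performance": 0.5,
}

for name, scale in PRESETS.items():
    w, h = int(OUTPUT[0] * scale), int(OUTPUT[1] * scale)
    pixel_ratio = (w * h) / (OUTPUT[0] * OUTPUT[1])
    print(f"{name:12s} renders {w}x{h} (~{pixel_ratio:.0%} of native pixels)")
```

At 4K output, "Performance" mode renders roughly a quarter of the pixels of native, which is where the big framerate gains on weaker hardware come from.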
Which is what this tech was supposed to be for when it was first pitched to gamers: a tool to help extend the usable life of a GPU.
But we know now that's not how the tech is being used. Especially for Nvidia, that's not how it's used and marketed at this point, and it seems developers are just expecting upscaling to fill the gap for not doing a proper job to begin with.
ETA: also don't forget that it's not just upscaling; Nvidia is pushing fake frames as the standard too in its marketing and optimization push.
It's very unfortunate that all of this shiny new tech is often only present on the latest GPUs; this is a welcome exception to what looks like a forever rule.
I understand there were big changes between RDNA 3 and 4, but if you look at GCN and its support through the generations, this trend still seems greedy as hell.
As an example of how this tech can be useful: sometimes games just hitch for a quick second. There can be any number of reasons why; even on a 'perfect' system it can happen. Such is the case with my PC when emulating Android to play Destiny Rising. No matter what, it just likes to hitch occasionally. With Lossless Scaling's frame generation, it's buttery smooth. I don't notice any input lag (base FPS is 60), so everything's all good.
I also use Lossless Scaling on my Lenovo Legion Go a lot. Just helps things look that much better.
Just because some developers are bad or lazy at optimisation doesn't make these tools bad. Unoptimized games have existed for far longer than AI upscaling tools. If I can use DLSS to still get solid framerates in new releases without needing to buy a new $2000 graphics card every two years, that sounds pretty good in my book. I get why some people dislike frame generation, as it typically comes with some input lag and is a bit of a win-more mechanic in that you need 60+ FPS in the first place for it to work well. But DLSS/FSR are good tools in my book and one of the best applications of AI.
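Rough arithmetic on why the 60+ FPS base matters: interpolating a frame between two real frames means the newest real frame has to be held back for about one base frame-time, so frame generation adds at least that much latency on top of whatever the pipeline already has. This is a simplified model, not how any specific implementation measures it.

```python
# Simplified model: generating an interpolated frame between two real frames
# means the newest real frame is held back ~one base frame-time, so the
# added latency is roughly 1000 / base_fps milliseconds.
# (Real implementations vary; this is just the order of magnitude.)

for base_fps in (30, 60, 120):
    frame_time_ms = 1000 / base_fps   # time between real frames
    displayed_fps = base_fps * 2      # one generated frame per real frame
    print(f"base {base_fps:3d} fps -> ~{displayed_fps} fps shown, "
          f"~{frame_time_ms:.1f} ms extra latency from holding a frame")
```

At a 30 FPS base that's ~33 ms of extra lag, which is easy to feel; at 60+ it's ~17 ms or less, which is why frame generation works best when you already have a solid framerate.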