Game creation is one thing, but AMD's CEO believes that AI is going to be increasingly used by developers to get games onto your screen without necessarily rendering everything.
Techniques to only render what is on screen have been a thing for decades.
It has, yes. However, the techniques Carmack used in Doom’s engine probably don’t have much of an impact on something like Cyberpunk 2077.
The exact techniques, maybe not. But the fundamental approach of only rendering what you see has been continued since then.
Right, so what is the point in bringing it up?
“Sony just released a new 150 megapixel mirrorless digital camera!”
“Cameras have been a thing since the 1800’s…”
Yes, but with DLSS we’re adding ML models to the mix where each one has been trained on different aspects:
Interpolating between frames
For instance, you might normally get 30FPS, but the ML model has an idea of what everything should look like between frames (based on what it has been trained on), so it can insert additional frames to boost your framerate up to 60FPS or more.
Upscaling (making the picture larger) - the GPU and the rest of the hardware can work at a smaller resolution, which makes their job easier, while the ML model here has been trained to enlarge the image, filling in the correct pixels so that everything still looks good.
Optical Flow -
This ML model has been trained on motion (which objects/pixels go where) so that frame generation can be predicted more accurately.
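The two ideas above (warping along motion vectors and upscaling) can be sketched in a few lines of NumPy. To be clear, this is a toy illustration under my own assumptions, not how DLSS actually works: the function names are made up, the "upscaler" just repeats pixels where a trained network would predict real detail, and real frame generation runs neural networks on dedicated hardware and blends both neighboring frames.

```python
import numpy as np

def upscale_nearest(frame, scale=2):
    # Naive stand-in for ML upscaling: repeat each pixel.
    # A trained model would instead predict plausible high-res detail.
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

def interpolate_midframe(frame, flow):
    # Backward-warp: each output pixel looks up where it came from,
    # moved halfway along its per-pixel motion (optical flow) vector.
    # Real frame generation also blends the next frame and handles occlusions.
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys - flow[..., 1] * 0.5).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - flow[..., 0] * 0.5).astype(int), 0, w - 1)
    return frame[src_y, src_x]

# Toy demo: one bright pixel moving 2px to the right between rendered frames.
frame_a = np.zeros((4, 4))
frame_a[1, 1] = 1.0
flow = np.zeros((4, 4, 2))
flow[..., 0] = 2.0  # every pixel moves +2 in x from this frame to the next
mid = interpolate_midframe(frame_a, flow)
print(mid[1, 2])                       # the pixel lands halfway, at x=2
print(upscale_nearest(frame_a).shape)  # double the resolution in each axis
```

The point of the toy is just that the generated middle frame never comes from the game engine; it is inferred from an existing frame plus motion information, which is the trade DLSS-style frame generation makes.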
Not only that, but Nvidia can deliver updated ML models, trained on specific game titles, through their driver updates.
While each of these could be accomplished with older techniques, I think the results we’re already seeing speak for themselves.
Edit: added some sources below and fixed up optical flow description.
https://www.digitaltrends.com/computing/everything-you-need-to-know-about-nvidias-rtx-dlss-technology/
https://www.youtube.com/watch?v=pSiczcJgY1s
Yes, a new approach to the same concept.
No, rendering at a smaller resolution and upscaling is not the same concept as only rendering what will end up in frame.
This is kind of the opposite of that idea though. This is saying that not everything put on the screen needs to be computed from the game engine. Some of the content on the screen can be inferred from a predictive model. What remains to be seen is if that requires less computing power from the GPU.