For PC gaming news and discussion.
PCGamingWiki
Rules:
- Be Respectful.
- No Spam or Porn.
- No Advertising.
- No Memes.
- No Tech Support.
- No questions about buying/building computers.
- No game suggestions, friend requests, surveys, or begging.
- No Let’s Plays, streams, highlight reels/montages, random videos or shorts.
- No off-topic posts/comments, within reason.
- Use the original source, no clickbait titles, no duplicates.
(Submissions should be from the original source if possible, unless the original is paywalled or not in English.
If the title is clickbait or lacks context, you may lightly edit it.)
I’m starting to have a sneaking suspicion that putting 24 GB of VRAM on a card isn’t happening because they don’t want people running AI models locally. The moment you can expect the average gamer’s computer to have that kind of local computing power is the moment they stop getting to slurp up all of your data.
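For context on why 24 GB is the threshold people keep citing: a rough back-of-the-envelope estimate of model weight memory (weights only, ignoring KV cache and activation overhead; the model sizes below are just illustrative assumptions):

```python
# Rough VRAM estimate for running an LLM locally (weights only --
# ignores KV cache and activation overhead, so real usage is higher).
def vram_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 1024**3

for name, params in [("7B", 7), ("13B", 13), ("30B", 30)]:
    fp16 = vram_gb(params, 2)    # 16-bit weights
    q4 = vram_gb(params, 0.5)    # 4-bit quantized weights
    print(f"{name}: ~{fp16:.0f} GB at fp16, ~{q4:.0f} GB at 4-bit")
```

By that math a 13B model at fp16 needs roughly 24 GB just for weights, which is exactly the capacity the consumer cards keep stopping short of.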
Honestly, I think it’s because of DLSS. If you could get a $300 card that does 4K DLSS Performance well, why would you need to buy an xx70 (Ti) or xx80 card?
Lossless Scaling (on Steam) has also shown huge promise in a two-GPU setup. I’ve seen some impressive results from people piping their NVIDIA card’s output into an Intel GPU (integrated or discrete) and using that second GPU as a dedicated upscaler.
It’s because the GTX 10-series had plenty of VRAM (yes, the 1070 had 8 GB back in 2016) and was a great generation that lasted for many years. Clearly they want you to upgrade GPUs more often, and that’s why they limit the VRAM.