Does something not add up here? We’re hopeful something special is in the works, but wary of these rumors

WHO’S THAT POKÉMON?

TomAwsm

Rotom RTX?

And only needs 600w!

Drasglaf

And it will cost 3000€

It could be, yes of course

@[email protected]

Yes those are all lovely fancy numbers, but the only ones I really give a shit about come after the $, and the one that comes before the W on the power supply requirements.

Coming soon to Costco: 10 packs of 5090s.

@[email protected]

Yeah, about clock speeds… remember when they were front and center in CPU marketing 20 years ago? Intel started marketing CPUs by their clock speeds in the ’90s, highlighting that as a selling point over competitors that usually ran at slightly lower clock speeds.

But Intel painted themselves into a corner: clock speeds don’t matter on their own — instruction sets and floating-point ops per second do. In the mid-2000s they had to slowly phase out the clock speed marketing, as clock speeds had reached such levels that further increases would be detrimental to performance, so they had to change their marketing and branding strategy.

As soon as clock speed marketing had been phased out, Intel CPUs actually ran at lower clock speeds than the previous generation, while still outperforming it.

I’m curious to see whether Nvidia is about to do the same thing.

@[email protected]

GPU code is more amenable to high clock speeds because it doesn’t have the branch prediction and data prefetch problems of general purpose CPU code.

Intel stopped chasing clock speed because it required them to make their pipelines extremely long and extremely vulnerable to a cache miss.
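The pipeline argument above can be sketched with the classic back-of-envelope model: every mispredicted branch flushes the pipeline, costing roughly its depth in cycles. This is a minimal illustration with made-up but plausible numbers (branch frequency, misprediction rate, and stage counts are assumptions, not figures from the thread):

```python
def effective_cpi(base_cpi: float, branch_freq: float,
                  mispredict_rate: float, pipeline_depth: int) -> float:
    """Simplified cycles-per-instruction model: each mispredicted branch
    costs about one pipeline depth's worth of cycles (flush + refill)."""
    return base_cpi + branch_freq * mispredict_rate * pipeline_depth

# Illustrative comparison: a deep ~31-stage pipeline vs a shorter ~14-stage
# one, assuming 20% of instructions are branches and 5% are mispredicted.
deep = effective_cpi(1.0, 0.20, 0.05, 31)   # ≈ 1.31 cycles/instruction
short = effective_cpi(1.0, 0.20, 0.05, 14)  # ≈ 1.14 cycles/instruction
print(deep, short)
```

The deeper pipeline lets you clock higher, but each misprediction hurts more — which is why chasing clock speed alone stopped paying off.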

@[email protected]

Also, to offer a rudimentary comparison:

A CPU is a few very complicated cores; a GPU is thousands of dumb cores.

It’s easier to make something that handles a small set of instructions (GPU) run faster than something that handles a shit ton of instructions (CPU), due to, like you mention, branch prediction.

Modern CPU performance gains focus more on parallelism and, in the case of efficiency cores, on scheduling to optimize for performance.

GPU-wise, it’s really as simple as GPUs typically being memory-bottlenecked. Memory bandwidth (memory speed × bus width, with a few caveats — cache hits lower the bandwidth requirements) is the major indicator of GPU performance. Bus width is fixed by the chip’s hardware design, so the simplest way to increase general performance is clocks.
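The bandwidth arithmetic above is just data rate times bus width. A minimal sketch — the figures are illustrative (roughly RTX 4090-class: 21 Gbps GDDR6X on a 384-bit bus), not numbers from this thread:

```python
def memory_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate (Gbps) times
    bus width (bits), divided by 8 to convert bits to bytes."""
    return data_rate_gbps * bus_width_bits / 8

# 21 Gbps per pin on a 384-bit bus:
print(memory_bandwidth_gb_s(21, 384))  # → 1008.0 GB/s
```

Since the bus width is baked into the chip and board design, raising the memory (and core) clocks is the lever left for squeezing out more performance within a generation.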

Cool cool…. What about the price? That’s all I care about at this point.

No no, 5090 is the price, not the model
