Microsoft's big update for DirectX and PC game development aims to eliminate CPU bottlenecks and improve performance with GPU-driven rendering.
@[email protected]

At some point… do you need the CPU? There’s stuff it will be better at, yes, and more power is always better. But the GPU can run any code.

The whole computer outside the video card could be reduced to a jumped-up southbridge.

@[email protected]

Buncha dry students here giving you shit. It is not a stupid question.

Some day we might not need a CPU. The biggest hurdle probably isn’t even the chip architecture, but that the software would need to be remade, and that’s not something you do in a day.

@[email protected]

Right, GPGPU is a thing. You can do branch logic on a GPU and you can do SIMD on a CPU. But in general, logic and compute have somewhat orthogonal requirements, which means you end up with divergent designs if you start optimizing in either direction.

This is also a software-architecture and conceptual problem. You simply can’t do truly conditional SIMD: every lane in a group executes the same instruction. You can compute both branches in parallel and “branch” by selecting when the tasks join (a form of speculative execution), but that’s rarely more efficient than defining and dispatching compute tasks on demand once you get to the edges of the performance curve.
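
For illustration, here’s that “compute both branches, then select” pattern sketched in NumPy as a stand-in for SIMD lanes (the array and branch functions are made up for the example):

```python
import numpy as np

# SIMD/SIMT lanes can't branch independently; the usual workaround is
# predication: evaluate BOTH sides for every lane, then select by mask.
x = np.array([-2.0, 3.0, -5.0, 7.0])

then_branch = np.sqrt(np.abs(x))  # computed for every lane, needed or not
else_branch = x * x               # also computed for every lane
mask = x > 0

# Per-lane select; no actual branch instruction is taken.
result = np.where(mask, then_branch, else_branch)
# result -> [4.0, sqrt(3), 25.0, sqrt(7)]
```

Both branches burn cycles on every element, which is exactly why this only wins when the branches are cheap relative to dispatching separate tasks.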

@[email protected]

GPUs are ridiculously, ludicrously good at doing an absolute shit-ton of very simple, non-dependent calculations simultaneously. CPUs are good at… Well, everything else. So yes, you do still need the CPU.

@[email protected]

GPUs are pretty good at doing half a shit-ton of mildly complex calculations simultaneously. And even the things they’re not so good at, they can still do in parallel.

Remember that GPU ray-tracing didn’t start with bespoke hardware. The first Nvidia card celebrated by path-tracing nerds was the GTX 480 from 2010. “GPGPU” shenanigans began even before CUDA. People were coercing work out of video cards by converting data to colors. You ever worry about the middle bytes of a long int getting saturated? It’s not good times.
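
A rough sketch of that data-as-colors trick in NumPy (a CPU-side stand-in for what the texture pipeline did, with made-up values): pack an integer into 8-bit channels, and watch per-channel saturating math eat the carries between bytes.

```python
import numpy as np

# Pack a 32-bit int into four 8-bit "color channels" (little-endian RGBA),
# the way early GPGPU hacks smuggled data through textures.
def to_rgba(n):
    return np.array([(n >> s) & 0xFF for s in (0, 8, 16, 24)], dtype=np.uint8)

a = to_rgba(0xF0)  # 240 in the low channel
b = to_rgba(0x20)  # 32 in the low channel

# Fixed-function blending clamps each channel to [0, 255] independently:
# no carry propagates into the next byte, and overflow just saturates.
blended = np.minimum(a.astype(np.uint16) + b.astype(np.uint16), 255).astype(np.uint8)

# True integer math: 0xF0 + 0x20 = 0x110 (low byte 0x10, carry into byte 1).
# Saturating channel math: the low channel pins at 0xFF and the carry is lost.
```

That lost carry is the “middle bytes getting saturated” pain in miniature.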

Couple that with how good design practices have pushed toward stream processing, just to make good use of many-core CPUs, and the question is worth asking. Especially when truly parallel hardware is easier to scale up, costs way less per FLOP, and won’t run into looming obstacles in SRAM size.

I guess the hybrid alternative path is just Xeon Phi.

@[email protected]

GPUs as the ONLY compute source in a computer cannot and will not function, mainly due to how pipelining works on existing architectures (among other things).

You’re right that GPUs are excellent at parallelization. Unfortunately, when you pipeline several instructions to run in parallel, you actually increase each individual instruction’s latency (while decreasing the OVERALL execution time).

GPUs are stupid good at churning out triangles and pinning them to a matrix they can then apply “transformations” or other altering operations to. A GPU would struggle HARD if it had to handle system calls and time-slicing the way an OS juggles background tasks.
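
The triangle-and-matrix part can be sketched in a few lines of NumPy (the real work happens per-vertex in a shader; this is just the math, with an arbitrary example translation):

```python
import numpy as np

# A triangle as three homogeneous vertices (x, y, z, w), one per row.
tri = np.array([[0.0, 0.0, 0.0, 1.0],
                [1.0, 0.0, 0.0, 1.0],
                [0.0, 1.0, 0.0, 1.0]])

# A translation by (2, 3, 0) as a 4x4 matrix -- the kind of transform a
# vertex shader applies to every vertex independently, in parallel.
translate = np.array([[1.0, 0.0, 0.0, 2.0],
                      [0.0, 1.0, 0.0, 3.0],
                      [0.0, 0.0, 1.0, 0.0],
                      [0.0, 0.0, 0.0, 1.0]])

# One matrix, many vertices, zero data dependencies between them --
# exactly the shape of work GPUs are built for.
moved = tri @ translate.T
```

Every vertex is independent, which is why this scales across thousands of GPU lanes while a system call, by contrast, serializes on shared state.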

This isn’t even MENTIONING the instruction-set changes that would be needed for x86, for example, to run on a GPU alone.

TL;DR: CPUs are here to stay for a really, really long time.

@[email protected]

… why would you run x86?

Never mind that “cannot function” is not the same thing as “slow.” Every reply has been a technically proficient attack rather than sincere consideration of what’s possible. The article is about rearranging the established relationship between CPU and GPU - the root comment asks “at some point.” An all-caps dismissal of running existing software is a tell.

We’re not talking about binaries you already have. We’re not necessarily talking about general software. This is about future games. We’re not even talking about a system with no CPU - the root comment describes reducing the importance of components. Crucial pieces of discrete hardware in past computers live on in modern motherboards as a tiny fraction of some chip.

Even CPUs themselves are experimenting with heterogeneous core layouts, where an itty-bitty Atom or ARMv7 handles the basics, while some wildly different silicon either sits idle or does all the work. The difference between that and an APU chewing through SPIR-V might be less than you think.

@[email protected]

You’re the one who brought up the question of whether we need the CPU at all. Also, it wasn’t meant to be an attack, just an explanation of why you’d still need a CPU.

why would you run x86

All I meant was that a large portion of software and compatibility tooling still uses it, and our modern desktop CPU architectures are still descended from it. My point was that things like CUDA are vastly different.

But if what you meant by your original comment was to not do away with the CPU, then yes! By all means, plenty of software is now migrating to taking advantage of the GPU as much as possible. I was only addressing you asking “at some point do we even need the CPU?” - the answer is yes :)

@[email protected]

Lol, there are a lot of people in here who got their digital design education from Linus Tech Tips downvoting you.

@[email protected]

It’s a whole mess of legitimate reasons we’re not already doing it (which I know), misapplied to ‘so it can never make sense,’ with a tone of ‘how dare you.’

The obvious example application ages ago would’ve been a console - y’know, specialty hardware with bespoke software, optimized for maximum oomph at minimum up-front cost. But everything since the PS3-360-Wii generation has been a whole-ass computer judged on its ability to handle multiplatform games. Even your damn phone is expected to run Fortnite. Everything’s gotta have an everything.

Maybe people are no longer used to considering how computing could get weird.

Maybe they don’t recognize how weird it already got.

@[email protected]

Fuck me for playing what-if, apparently.

Not like this news is explicitly about upending the typical CPU-GPU relationship.

ferret

GPUs are really terrible at the kind of multitasking required to run an OS

@[email protected]

Are they, though? They’re hardware-threaded. Context switches are how they deal with a cache miss.

This specific news sounds a lot like what an interrupt-driven scheduler would do.

The bigger obstacle to a GPU OS is surely that the video card does not tend to talk to itself… and evidently that’s being addressed.
