@[email protected]
335d

They aren’t making graphics cards anymore, they’re making AI processors that happen to do graphics using AI.

@[email protected]
25d

Except you can't use them for AI commercially, or at least not in a data-center setting.

@[email protected]
25d

Data centres want the even beefier cards anyhow, but I think Nvidia envisions everyone running local LLMs on their PCs, because LLMs will be integrated into software instead of relying on cloud compute. My RTX 4080 can struggle through Llama 3.2.
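
(For context, "running a local LLM" here means something like the sketch below: llama-cpp-python with a GGUF quant of Llama 3.2, offloading all layers to the GPU. The model path is a placeholder for whichever quant you've downloaded; this is an illustration, not a claim about the commenter's exact setup.)

    # pip install llama-cpp-python (built with CUDA support for an Nvidia card)
    from llama_cpp import Llama

    # Placeholder path: any local GGUF quant of Llama 3.2
    llm = Llama(
        model_path="llama-3.2-3b-instruct.Q4_K_M.gguf",
        n_gpu_layers=-1,  # offload every layer to the GPU
        n_ctx=4096,       # context window
    )

    out = llm("Q: Why run an LLM locally instead of in the cloud?\nA:", max_tokens=128)
    print(out["choices"][0]["text"])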

@[email protected]
35d

What if I'm buying a graphics card to run Flux or an LLM locally? Aren't these cards good for those use cases?

@[email protected]
45d

Oh yeah, for sure. I've run Llama 3.2 on my RTX 4080 and it struggles, but it's not obnoxiously slow. I think they're betting more software will ship with integrated LLMs that run locally on users' PCs instead of relying on cloud compute.
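
(And on the Flux half of the question above, a minimal sketch using the diffusers FluxPipeline with the FLUX.1-schnell checkpoint; CPU offload keeps peak VRAM within reach of a consumer card. The prompt and output filename are arbitrary examples.)

    # pip install torch diffusers transformers accelerate
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-schnell",
        torch_dtype=torch.bfloat16,
    )
    pipe.enable_model_cpu_offload()  # stream weights onto the GPU as needed

    image = pipe(
        "a cat playing a tabletop game",
        num_inference_steps=4,  # schnell is distilled for very few steps
        guidance_scale=0.0,     # schnell ignores classifier-free guidance
    ).images[0]
    image.save("flux_out.png")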

@[email protected]
35d

Welcome to the future
