Yesterday, DeepSeek released a series of very powerful language models including DeepSeek R1 and a...

With these new models, is it worth trying them without an Nvidia GPU, i.e. running on CPU?

chiisana · 22d
Depending on what you want to do with it and what your expectations are, the smaller distilled versions could work on CPU, but they will most likely need extra help on top, just like other similarly sized models.

This being a reasoning model, you might get more well-thought-out results out of it, but at the end of the day, a smaller parameter space (easiest to think of as "less vocabulary") means smaller capabilities.

If you just want something to chat back and forth with quickly on a CPU, try IBM’s granite3.1-moe:3b, which is very fast even on a modern CPU but doesn’t really excel at complex problems without additional support (i.e., RAG or tool use).
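If you serve a model like that through Ollama, chatting with it from a script is a few lines against its local HTTP API. A minimal sketch, assuming Ollama is installed, running, and has the model pulled (the endpoint and field names below are Ollama's `/api/generate` defaults):

```python
# Sketch: query a locally served granite3.1-moe:3b via Ollama's HTTP API.
# Assumes `ollama serve` is running and the model has been pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def make_payload(prompt: str, model: str = "granite3.1-moe:3b") -> dict:
    """Build the request body for a single, non-streaming generation."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str, model: str = "granite3.1-moe:3b") -> str:
    """Send the prompt to the local Ollama server and return the reply text."""
    body = json.dumps(make_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `stream` set to `False` the server returns one JSON object with the whole reply in its `response` field, which keeps the client trivial; streaming is nicer for interactive chat but needs line-by-line parsing.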

@[email protected] · edited · 2d

My primary use would probably be a model for Home Assistant to reason about what was said and what is desired, e.g. “Turn off all outlets” said in the living room would turn off the outlets there, or saying “it’s cold” would adjust the thermostat. I’m also hoping it could work around speech-to-text inaccuracies and still reason out what is wanted. I think a smaller model would be appropriate for this too, since response times should be on the quicker side.

chiisana · 32d

Yep! Give granite a try. I think it would be perfect for this use case, both in terms of being able to answer your queries and doing so quickly, without a GPU, just using a modern CPU. I was getting above 30 tokens per second on my 10th-gen i5, which kind of blew my mind.

Thinking models like R1 will be better at things like troubleshooting a faulty furnace or diagnosing user problems, so there are benefits to pushing those envelopes. However, if all you need is to give basic instructions, have it infer your intent, and finally perform the desired tasks, then smaller mixture-of-experts models should be passable even without a GPU.

The 1.5b distill will run fast on just a CPU, I think.
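That split could even be automated: route obvious device commands to the fast local model and hand anything open-ended to a reasoning model. A toy sketch, where the keyword heuristic and both model tags are placeholder assumptions, not a tested setup:

```python
# Sketch: route utterances between a fast CPU-friendly model and a
# slower reasoning model based on a crude keyword heuristic.
SMALL_MODEL = "granite3.1-moe:3b"     # fast, good enough for device commands
REASONING_MODEL = "deepseek-r1:1.5b"  # slower, better at multi-step problems

# Device-style verbs that signal a simple command rather than a question.
COMMAND_WORDS = {"turn", "switch", "set", "dim", "open", "close", "lock"}

def pick_model(utterance: str) -> str:
    """Send command-like utterances to the small model, the rest to R1."""
    words = set(utterance.lower().split())
    if words & COMMAND_WORDS:
        return SMALL_MODEL
    return REASONING_MODEL
```

In practice you might replace the keyword check with a quick classification call to the small model itself, but the structure — cheap path for commands, expensive path for troubleshooting — stays the same.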
