Hangzhou-based DeepSeek is 2025’s ‘biggest dark horse’ in open-source large language models, Nvidia research scientist Jim Fan says.
☆ Yσɠƚԋσʂ ☆ (creator) · 525d

There’s some more info with benchmarks here; it does as well as, and in some cases better than, top-tier commercial models: https://www.analyticsvidhya.com/blog/2024/12/deepseek-v3/

The trick that makes it possible is the mixture-of-experts approach. While the model has 671 billion parameters overall, it only activates 37 billion at a time, making it very efficient; for comparison, Meta’s Llama 3.1 uses all 405 billion of its parameters at once. Its 128K-token context window means it can process and understand very long documents, and it generates text at 60 tokens per second, twice as fast as GPT-4o.
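To give a feel for why mixture-of-experts is cheap per token, here's a minimal toy sketch in NumPy (purely illustrative, not DeepSeek's actual architecture or code; the sizes, the linear "experts", and the gating function are all made up for the example). A gating network scores the experts for each token, and only the top-k experts actually run, so the active parameter count stays a small fraction of the total:

```python
# Toy mixture-of-experts routing sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

D = 16         # hidden size (toy value)
N_EXPERTS = 8  # total experts in the layer
TOP_K = 2      # experts activated per token

# Each "expert" here is just a linear map; real experts are MLP blocks.
experts = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
gate_w = rng.standard_normal((D, N_EXPERTS)) / np.sqrt(D)

def moe_forward(x):
    """Route one token vector x through only its top-k experts."""
    logits = x @ gate_w                # gating scores, shape (N_EXPERTS,)
    top = np.argsort(logits)[-TOP_K:]  # indices of the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()           # softmax over the selected experts only
    # Only TOP_K of the N_EXPERTS weight matrices are touched for this token,
    # which is the source of the "37B active out of 671B total" efficiency.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D)
out = moe_forward(token)
print(out.shape)  # (16,)
```

Scaled up, the same idea means total capacity grows with the number of experts while per-token compute grows only with k.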

Ty for the benchmarks and extra info. Much appreciated!

☆ Yσɠƚԋσʂ ☆ (creator) · 125d

no prob
