China open-source AI models surpass 10 billion downloads - Daily Ittehad
dailyittehad.com.pk
China's domestically developed open-source large language models have recorded more than 10 billion cumulative downloads worldwide, and the country now holds
☆ Yσɠƚԋσʂ ☆
creator
175d

16 GB is a bit low, unfortunately. You could run a 2-bit quant of the latest Qwen, but performance will be severely degraded. https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF

Might be worth trying though to see if it does what you need.
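For a rough sense of why 16 GB is tight, here's a back-of-envelope sketch of quantized weight memory (an assumption-laden estimate: it treats all 35B parameters of the linked MoE model as resident at a uniform bit-width, and ignores KV-cache and runtime overhead, which real GGUF runtimes add on top):

```python
# Sketch: approximate weight memory for a quantized model.
# Assumes uniform bits-per-weight; real GGUF quants mix bit-widths
# per tensor, and KV cache / runtime overhead come on top of this.
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """1e9 * params_billion weights, each bits_per_weight/8 bytes, in GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# The linked 35B model at 2-bit vs 4-bit quantization:
print(weight_gb(35, 2))  # 8.75 GB -> squeezes into 16 GB with some room
print(weight_gb(35, 4))  # 17.5 GB -> already over a 16 GB budget
```

This is why a 2-bit quant is about the only option at 16 GB, and why it comes with the quality hit mentioned above.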

@[email protected]
15d

Thanks! I figured it’s low on RAM, but with the way things are going in the world, I’m thinking it’s better than nothing.

☆ Yσɠƚԋσʂ ☆
creator
105d

It’s entirely possible we’ll see fairly capable models that run in 16 gigs of RAM in the near future. Qwen 3.5 came out in February, and you needed a server with hundreds of gigs of memory to run its 397-billion-parameter model. Fast forward to a couple of weeks ago: 3.6 comes out with a 27-billion-parameter version that beats the old 397B model in every way. Just stop and think about how phenomenal that is. https://qwen.ai/blog?id=qwen3.6-27b

So, it’s entirely possible people will find ways to optimize this stuff even further this year or the next, and we’ll get an even smaller model that’s more capable.
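The scale of that jump is easier to appreciate with some rough arithmetic (a sketch only: it assumes dense fp16 weights for the old model and a uniform 4-bit quant for the new one, and ignores KV-cache and runtime overhead):

```python
# Rough weight-memory arithmetic for the figures above (a sketch:
# uniform bit-width assumed, KV cache and runtime overhead ignored).
def gb_needed(params_billion: float, bits_per_weight: float) -> float:
    # 1e9 * params_billion weights * (bits/8) bytes each
    # = params_billion * bits/8 gigabytes
    return params_billion * bits_per_weight / 8

print(gb_needed(397, 16))  # 794.0 GB at fp16 -> server territory
print(gb_needed(27, 4))    # 13.5 GB at 4-bit -> fits in 16 GB of RAM
```

Under those assumptions, the 27B model at a 4-bit quant needs roughly 60x less memory than the old 397B model at fp16, which is what moves it from server hardware to a consumer machine.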

Avid Amoeba
5d

Is it still worth using Qwen3-Coder-Next 80B? It runs slightly faster than 3.6 27B on my hardware.

☆ Yσɠƚԋσʂ ☆
creator
45d

I haven’t tried comparing them myself; I guess you just have to gauge whether it works well enough. :)

Avid Amoeba
15d

What software are you using with the models for code? OpenCode, Nanocoder, etc.?

☆ Yσɠƚԋσʂ ☆
creator
35d

I ended up settling on opencode, but I find they all work more or less the same nowadays. Pi is an interesting one that’s very minimalist.

Avid Amoeba
15d

Integration with an editor?

☆ Yσɠƚԋσʂ ☆
creator
55d

I’ve stopped bothering with editor integration for LLMs. I just have the model make a phased plan, write using TDD, and do staged commits for each feature; then I review the diffs afterward.

@[email protected]
45d

Thanks! That’s really amazing to hear. I guess I’ll wait a bit and see what happens.
