Emergence of DeepSeek raises doubts about sustainability of western artificial intelligence boom

My understanding is it’s just an LLM (not multimodal), and the training time/cost looks about the same for most of these.

I feel like the world’s gone crazy, but OpenAI (and others) are pursuing more complex model designs with multimodality. Those are going to be more expensive due to image/video/audio processing. Unless I’m missing something, that would probably account for the cost difference between current and previous iterations.

@[email protected]

The thing is that R1 is being compared to gpt4, or in some cases gpt4o. That model cost OpenAI something like $80M to train, so roughly equivalent performance for an order of magnitude less cost is not for nothing. DeepSeek also says the model is much cheaper to run for inference, though I can’t find any figures on that.

My main point is that gpt4o and the other models it’s being compared to are multimodal; R1 is only an LLM from what I can find.

Something trained on audio/pictures/videos/text is probably going to cost more than just text.

But maybe I’m missing something.

@[email protected]

The original gpt4 is just an LLM though, not multimodal, and its training cost is still estimated to be over 10x R1’s if you believe the numbers. I think where R1 is compared to 4o is in so-called reasoning, where you can see the chain of thought or internal prompt paths that the model uses to (expensively) produce an output.

@[email protected]

I’m not sure how good a source it is, but Wikipedia says it was multimodal and came out about two years ago - https://en.m.wikipedia.org/wiki/GPT-4. That being said, the comparisons are against gpt4o’s LLM benchmarks, so it may be a valid argument for the LLM capabilities.

However, I think a lot of the more recent models are pursuing architectures with the ability to act on their own, like Claude’s computer use - https://docs.anthropic.com/en/docs/build-with-claude/computer-use - which DeepSeek R1 is not attempting.

Edit: and I think the real money will be in the more complex models focused on workflow automation.

Yea, except DeepSeek released a combined multimodal understanding/generation model with similar performance to contemporaries and a similarly reduced training cost ~20 hours ago:

https://huggingface.co/deepseek-ai/Janus-Pro-7B

veroxii

Holy smoke balls. I wonder what else they have ready to release over the next few weeks. They might have a whole suite of things just waiting to strategically deploy.

modulus

One of the things you’re missing is that the same techniques are applicable to multimodality. They’ve already released a multimodal model: https://seekingalpha.com/news/4398945-deepseek-releases-open-source-ai-multimodal-model-janus-pro-7b
