This is the official technology community of Lemmy.ml for all news related to the creation and use of technology, and to facilitate civil, meaningful discussion around it.
Ask in a DM before posting product reviews or ads; otherwise such posts are subject to removal.
Rules:
1: All Lemmy rules apply
2: Do not post low-effort posts
3: NEVER post nazi/ped*/gore stuff
4: Always post article URLs or their archived-version URLs as sources, NOT screenshots. This helps blind users.
5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)
6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist
7: Crypto-related posts, unless essential, are disallowed
It’s based on guessing what AI is actually going to be worth, so yeah, it’s wildly speculative at this point: breakthroughs seem to be happening fairly quickly, and everyone is still figuring out what they can use it for.
There are many clear use cases that are solid, so AI is here to stay, that’s for certain. But how far it can go, and what it will require, is what the market is gambling on.
If a new model comes out of the blue that delivers similar results on a fraction of the hardware, then it’s going to chop that value down by a lot.
If someone finds another use case, for example a model with new capabilities, boom, the value goes up.
It’s a rollercoaster…
I would disagree with that. There are a few niche uses, but OpenAI can’t even make a profit charging $200/month.
The uses seem pretty minimal as far as I’ve seen. Sure, AI has a lot of applications in terms of data processing, but the big generic LLMs propping up companies like OpenAI? Those seem to have no utility beyond slop generation.
Ultimately the market value of any work produced by a generic LLM is going to be zero.
It’s difficult to take your comment seriously when it’s clear that everything you’re saying seems to be based on ideological reasons rather than real ones.
Besides that, a lot of the value is derived from the market trying to figure out if/which company will develop AGI. Whatever company manages to achieve it will easily become the most valuable company in the world, so people FOMO into any AI company that seems promising.
There is zero reason to think the current slop-generating technoparrots will ever lead to AGI. That premise is entirely made up to fuel the current “AI” bubble.
They may well lead to the thing that leads to the thing that leads to the thing that leads to AGI though. Where there’s a will…
Sure, but that can be said of literally anything. It would be interesting if LLMs were at least new, but they have been around forever; we just now have better hardware to run them.
That’s not even true. LLMs in their modern iteration are significantly enabled by transformers, an architecture that was only proposed in 2017.
The conceptual foundations of LLMs stretch back to the 50s, but neither the physical hardware nor the software architecture was there until more recently.
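(For context on what “transformers” refers to here: the core operation of the 2017 transformer architecture is scaled dot-product attention. Below is a minimal NumPy sketch of that one operation, with toy shapes and illustrative names, not any particular library’s API.)

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (seq_len, d_k) arrays of query/key/value vectors, one row per token.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # token-to-token similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted mix of value vectors

# Toy usage: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```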
The market doesn’t care what either of us thinks; investors will do what investors do: speculate.
Language learning, code generation, brainstorming, summarizing. AI has a lot of uses. You’re either not paying attention or biased against it.
It’s not perfect, but it’s also a very new technology that’s constantly improving.
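To make “summarizing” concrete, here is a minimal sketch of driving it through an LLM API, using the OpenAI Python SDK as one example; the model name and prompt are illustrative assumptions, and any hosted or local chat-completion endpoint would work the same way.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

article = (
    "Investors are split on how to value AI companies. Some point to rapid "
    "breakthroughs and growing daily usage; others note that even $200/month "
    "subscriptions have not made the largest labs profitable."
)

# Ask the model to condense the text; the model name is an assumption.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Summarize the user's text in two short bullet points."},
        {"role": "user", "content": article},
    ],
)
print(response.choices[0].message.content)
```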
I decided to close the post now - there is a place for every opinion, but I can see people writing things that are completely false however you look at them: you can dislike Sam Altman (I do), you can worry about China’s interest in entering the competition now and the like (I do), but the comments about LLMs being useless, while millions of people use them daily for multiple purposes, sound just like lobbying.