A prevailing sentiment online is that GPT-4 still does not understand what it talks about. We can argue semantics over what “understanding” truly means. I think it’s useful, at least today, to draw the line at whether GPT-4 has successfully modeled parts of the world. Is it just picking words and connecting them with correct grammar? Or does the token selection actually reflect parts of the physical world? One of the most remarkable things I’ve heard about GPT-4 comes from an episode of This American Life titled “Greetings, People of Earth”.

Veraticus:

It’s not from scratch; it’s seeded and trained by humans. That is the intelligence.

Just like humans are! Do you know what happens when a human grows up without any training by other humans? They are essentially feral, unable to communicate, maybe even unable to think the way we do.

Veraticus:

LLMs do not grow up. Without training they don’t function properly. I guess in this respect they are similar to humans (or dogs or anything else that benefits from training), but that still does not make them intelligent.

What does it mean to “grow up”? LLMs get better at their tasks during training, just as humans do while growing up. You have to clearly define the terms you use.

Veraticus:

You used the term and I was using it with the same meaning you were. Why are you quibbling over semantics here? It doesn’t change the point.

Yes, I used the term because “growing up” has a well-defined meaning with humans. It doesn’t with LLMs, so I didn’t use it with LLMs.

Veraticus:

Did you have a point or are you only trying to argue semantics?

“LLMs do not grow up.”

You should ask yourself that question.

Veraticus:

So, no point? Cool.

From scratch in the sense that it starts with random weights, and then experiences the world and builds a model of it through the medium of human text. That’s because text is computationally tractable for now, and has produced really impressive results. There’s no inherent need for text to be used, though; similar models have been trained on time-series data, and it will soon be feasible to hook up one of these models to a webcam and a body and let it experience the world on its own. No human intelligence required.
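
To make “from scratch” concrete, here is a minimal sketch in PyTorch (a hypothetical toy next-token model, many orders of magnitude smaller than any LLM, and not how any particular lab does it): the weights begin as random noise, and the only human “seeding” is the training text itself, which gradient descent gradually absorbs.

```python
# Minimal sketch (hypothetical toy model): an LLM-style learner starts as
# pure random weights and only acquires structure by fitting training text.
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size = 256  # treat raw bytes as the token vocabulary

# Randomly initialized: at this point the model's "model of the world" is noise.
model = nn.Sequential(nn.Embedding(vocab_size, 64), nn.Linear(64, vocab_size))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

text = b"the cat sat on the mat. " * 100
data = torch.tensor(list(text), dtype=torch.long)
inputs, targets = data[:-1], data[1:]  # predict each next byte from the current one

for step in range(200):
    logits = model(inputs)          # (N, vocab_size) next-token scores
    loss = loss_fn(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.3f}")  # falls as the text's structure is absorbed
```

Scaled up enormously, that loop is essentially the pretraining process the comment is describing: nothing in it requires text specifically, only a stream of data to predict.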

Also, your point is kind of silly. Human children learn language from older humans, and that process has been happening recursively for billions of years, all the way back to the first forms of life. Do children not have intelligence? Or are you positing some magic moment in human evolution where intelligence just descended from the heavens and blessed us with it?
