A reflective essay exploring how classic LLM failure modes (limited context, overgeneration, poor generalization, and hallucination) are increasingly recognizable in everyday human conversation.
Sims
315h

Agree. For example, the number of times we correct our own speech before ‘releasing it’ is staggering. We have a ‘stochastic parrot’ mechanism built right into the heart of our own cognition, and it generates the same problems for us. ‘Hallucinations’ are built into any statistical model. It takes a lot of culture/rules and energy to constantly adjust (habituate to expectations and environment) towards the ‘norm’. People who have fallen out of normal social environments know how difficult human interactions can be to learn.

Current LLMs don’t have the ability to do these micro-corrections on the fly, or to habituate the corrected behavior through learning/culture etc.
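
A rough sketch of what bolting such a micro-correction loop on externally could look like. Everything here (`generate`, `critique`, `revise`) is a toy stand-in for actual model calls, not any real API:

```python
# Toy sketch of an external "micro-correction" loop: draft, self-critique, revise.
# The 'model' here is a trivial stand-in; real systems would call an LLM at each step.

def generate(prompt: str) -> str:
    return f"Response to: {prompt}!!"          # toy draft with a deliberate flaw to catch

def critique(draft: str) -> list[str]:
    problems = []
    if "!!" in draft:
        problems.append("tone too excited")    # toy check standing in for self-monitoring
    return problems

def revise(draft: str, problems: list[str]) -> str:
    if "tone too excited" in problems:
        draft = draft.replace("!!", ".")
    return draft

def speak(prompt: str, max_rounds: int = 3) -> str:
    draft = generate(prompt)
    for _ in range(max_rounds):
        problems = critique(draft)
        if not problems:                       # nothing left to correct: 'release' the utterance
            break
        draft = revise(draft, problems)
    return draft

print(speak("how are you?"))                   # -> "Response to: how are you?."
```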

‘Context length’ is also directly mappable to human cognitive load: chronic stress tends to shorten our ‘context length’, and we lose the overview in a split second and forget the simplest things. For an LLM, ‘context length’ is roughly equivalent to our ‘working memory’.
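
To make the analogy concrete, here is a toy illustration (not any real model’s code) of how a fixed context window forcibly forgets older material, with ‘stress’ shrinking the window:

```python
# Toy illustration: a fixed-size context window drops the oldest tokens,
# the way overloaded working memory drops details. Not any real model's code.

def visible_context(tokens: list[str], context_length: int) -> list[str]:
    return tokens[-context_length:]       # only the most recent tokens survive

conversation = "please remember to buy milk eggs and bread".split()

print(visible_context(conversation, 8))   # relaxed: the full instruction fits
print(visible_context(conversation, 4))   # 'stressed' (shrunken window): only
                                          # 'milk eggs and bread' is left; the
                                          # 'remember to buy' part is gone
```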

However, compensating systems are already being designed. Just as life/evolution did, one by one these natural tendencies of statistical models will be fixed by adding more ‘cognitive modules’ that modulate the internal generation and final output…
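
One way to picture those added ‘cognitive modules’ is as a pipeline of filters wrapped around the raw generator. The module names below are invented purely for illustration:

```python
# Hypothetical pipeline of 'cognitive modules' layered over a raw generator.
# Each module inspects/modulates the candidate output before it is released.

from typing import Callable

Module = Callable[[str], str]

def raw_generator(prompt: str) -> str:
    return f"draft answer to '{prompt}' (uncertain)"   # stand-in for the base model

def fact_checker(text: str) -> str:
    # invented module: flag unsupported claims instead of asserting them
    return text.replace("(uncertain)", "[needs verification]")

def tone_regulator(text: str) -> str:
    # invented module: normalize register before release
    return text.capitalize()

def respond(prompt: str, modules: list[Module]) -> str:
    output = raw_generator(prompt)
    for module in modules:          # each module modulates the generated draft in turn
        output = module(output)
    return output

print(respond("is the moon hollow?", [fact_checker, tone_regulator]))
```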

☆ Yσɠƚԋσʂ ☆
creator
110h

Right, I think the key difference is that we have a feedback loop and we’re able to adjust our internal model dynamically based on it. I expect that embodiment and robotics will be the path towards general intelligence. Once you stick a model in a body and it has to deal with the environment and learn through experience, it will start creating a representation of the world based on that.
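
That feedback loop is essentially the classic sense-act-update cycle from robotics. A bare-bones sketch, with the ‘world model’ below as a deliberately crude stand-in for whatever representation an embodied system would actually learn:

```python
# Bare-bones sense-act-update loop: the agent's world model is adjusted
# by prediction error from real interaction, not by static training data.

class EmbodiedAgent:
    def __init__(self) -> None:
        self.world_model: dict[str, float] = {}   # crude stand-in for an internal representation

    def predict(self, situation: str) -> float:
        return self.world_model.get(situation, 0.5)   # default: maximum uncertainty

    def update(self, situation: str, outcome: float, lr: float = 0.1) -> None:
        prediction = self.predict(situation)
        # move the model toward what the environment actually did
        self.world_model[situation] = prediction + lr * (outcome - prediction)

agent = EmbodiedAgent()
for _ in range(50):
    observed = 1.0                  # the environment keeps contradicting the prior
    agent.update("pushing the cup moves it", observed)

print(agent.world_model)            # the prediction has drifted toward experience
```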

Romkslrqusz
31d

> I know I can put together a prompt to give any of today’s leading models and am essentially guaranteed a fresh perspective on the topic of interest

> I’ll never again ask a human to write a computer program shorter than about a thousand lines, since an LLM will do it better

I can agree with some of the parts about how some humans can be really annoying, but this mostly reads like AI propaganda from someone who has deluded themselves into believing an LLM is actually any good at critical thought and context awareness.

This article was written by someone who apparently can’t switch from the “Fast” to the “Thinking” mode.

