
They are errors, not hallucinations. Use the right words, and then you can talk about the error rate and the acceptable error rate, the same way we do with everything else.
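To make that concrete, here's a rough sketch of what that framing looks like in practice; the outputs, correctness labels, and 5% threshold are all made up purely for illustration:

```python
# Minimal sketch: treating LLM mistakes as a measurable error rate
# checked against an acceptable threshold, like any other defect rate.
# Everything here is hypothetical.

graded_outputs = [
    ("The Eiffel Tower is in Paris.", True),    # factually correct
    ("The Eiffel Tower is in Berlin.", False),  # fabricated
    ("Water boils at 100 °C at sea level.", True),
]

ACCEPTABLE_ERROR_RATE = 0.05  # assumed product requirement, 5%

errors = sum(1 for _, correct in graded_outputs if not correct)
error_rate = errors / len(graded_outputs)

print(f"error rate: {error_rate:.1%}")
if error_rate > ACCEPTABLE_ERROR_RATE:
    print("above the acceptable threshold")
```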
An “error” could be something like getting the grammar wrong, using the wrong definition when interpreting input, or an unsanitized input injection. When we’re talking about an LLM trying to convince the user of completely fabricated information, “hallucination” conveys that idea much more precisely, and IMO differentiating the phenomenon from a regular mis-coded software bug is significant.
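To illustrate the difference: a regular mis-coded bug like an unsanitized input injection has a definitive fix. Here's a quick sketch with a throwaway SQLite table (the payload and schema are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"  # classic injection payload

# The bug: splicing unsanitized input straight into the query,
#   f"SELECT * FROM users WHERE name = '{user_input}'"
# would return every row in the table.

# The fix: a parameterized query treats the payload as a literal string.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection no longer works
```

Once the query is parameterized, that whole class of bug is closed for good.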
But calling it an error implies that it can be solved. I’d call it a fundamental design flaw.