This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.
Ask in DM before posting product reviews or ads. Otherwise, such posts are subject to removal.
Rules:
1: All Lemmy rules apply
2: Do not post low-effort content
3: NEVER post naziped*gore stuff
4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.
5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)
6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist
7: Crypto-related posts, unless essential, are disallowed
Fuck Jeff Atwood
Why?
If you’ve ever had an interaction with him, you’d know why
Given that it’s pointing straight to “no”, should I interpret “AI” as “additional irony”?
…seriously, model-based generation is in its infancy. Currently it outputs mostly trash; you need to spend quite a bit of time to sort something useful out of it. If anyone here actually believes that it’s smart, I have a bridge to sell you.
LLMs will undoubtedly improve as we build more systems around them.
The question is: will it ever be reliable enough to trust? You can't have a critical system that's only 99% reliable.
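To put rough numbers on that (a minimal sketch in Python; the per-decision reliability and the 10,000-decision count are illustrative assumptions, not measurements of any real system):

```python
# Illustration: 99% per-decision reliability compounds badly once a
# critical system has to make many decisions in a row.
per_decision_reliability = 0.99
decisions = 10_000  # hypothetical number of decisions over one day of operation

# Probability that every single decision is correct (assuming independence)
p_no_failure = per_decision_reliability ** decisions
print(f"Chance of zero failures across {decisions} decisions: {p_no_failure:.2e}")
# Prints roughly 2e-44 -- a failure somewhere is effectively guaranteed,
# which is why "99% reliable" is nowhere near enough for a critical system.
```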
Topical:
https://arstechnica.com/health/2023/11/ai-with-90-error-rate-forces-elderly-out-of-rehab-nursing-homes-suit-claims/
But 99% accuracy is better than any human alive, so while LLMs maybe won't be able to replace critical systems themselves, they might just replace all the people around those systems.
Like, we won’t want an AI as the failsafe for a nuclear plant. But we might prefer an AI as the “person” in charge of that failsafe.
Current generations aren’t even close to that rate, and it’s unclear whether it’s economical, or even possible, to fix the deep structural issues of current-gen LLMs.
My professional experience with LLMs is that they don’t even approach 20% accuracy for a field as ridiculously structured as programming.
They’re just helpful enough to not be a hindrance.
Not to mention plenty of humans are 99% accurate.
I love how the question “should I use AI?” points directly to no.
It also ponys directly to Yes es?
I require humarmip in cons cunt
“cons cunt” is just Aussie slang for Lisp Programmer.
You are a genius
the task is clearly repeisitive
Data privacy concerns, data privay concerns, data priacy coucerns, and data apiacy concext!