This is the official technology community of Lemmy.ml for all news related to the creation and use of technology, and a place for civil, meaningful discussion around it.
Ask in a DM before posting product reviews or ads; such posts are otherwise subject to removal.
Rules:
1: All Lemmy rules apply
2: Do not make low-effort posts
3: NEVER post naziped*gore stuff
4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.
5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)
6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist
7: Crypto-related posts, unless essential, are disallowed
The best implementation of LLM-AI, for me, is self-hosted but accessible on my phone, so it's mine, under my control, and my data stays within my own scope.
Think Immich, but for LLM-AI.
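To make that concrete, here's a rough sketch of what "mine, but reachable from my phone" could look like: a small script (or a mobile client doing the same thing) talking to a model served from a box at home. This assumes an Ollama-style server; the hostname, model name, and prompt are just placeholders, not a recommendation.

```python
# Minimal sketch: query a self-hosted LLM over the home network.
# Assumes an Ollama-style server; "homelab.local" and "llama3" are placeholders.
import requests

OLLAMA_URL = "http://homelab.local:11434/api/generate"  # Ollama's default port is 11434

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the self-hosted model and return its reply."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    # With stream=False the server returns one JSON object with the full reply.
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_llm("Summarise why self-hosting matters, in one sentence."))
```

Any phone app or shortcut that can reach that endpoint over the home network or a VPN works the same way, and nothing ever leaves your own machines.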
Sounds like this steps us closer which is great.
Indeed, self-hosting has to be the way forward.
Based FOSS, but I'd take the claims of coding ability with an entire salt mine. LLMs simply aren't all that great at coding in general, no matter what tech bros try to tell you.
It depends on the task and the specific LLM. My experience is that they can do a lot of things effectively nowadays, and they’re improving rapidly.
They really can't; they're good for boilerplate but that's about it. Every example of an AI-heavy code base I've seen has been poorly optimised, security-hole-ridden garbage.
I can tell you for a fact that they can. Even so, handling boilerplate and repetitive code alone is a huge benefit. These tools are also great at combing through code bases and helping you find where you need to make changes. If you haven't actually used them in a real project yourself, then you don't really know what they're capable of.