In my view, this is exactly the right approach. LLMs aren’t going anywhere; these tools are here to stay. The only question is how they will be developed going forward, and who controls them. Boycotting AI is a naive idea that’s mostly just a way for people to signal group membership.
Saying “I hate AI and I’m not going to use it” is trendy right now and makes people feel like they’re doing something meaningful, but it’s just another version of trying to vote the problem away. It doesn’t work. The real solution is to roll up our sleeves and build a version of this technology that’s open, transparent, and community driven.

Kind of off topic, but this reminded me of something I really don’t like about the current paradigm of “intelligence” and “knowledge” being parts of a single monolithic model.
Why aren’t we training models on how to search any generic dataset for information, find patterns, draw conclusions, etc., rather than baking the knowledge itself into the model? 8 or so GB of pure abstract reasoning strategies would probably be way more intelligent and efficient than even the much larger models we have now. Imagine if you could just give it an arbitrarily sized database whose content you control, which you could fill with the highest quality, ethically obtained, human-expert-moderated data, complete with attributions to the original creators, and have it base all its decisions on that. It would even be able to cite what it used, with identifiers pointing back into the database, which could then be manually verified.

You’d get a concrete record of where it’s getting its information from, and you’d only need to load what it currently needs into memory, whereas right now you have to load all of the AI’s “knowledge,” relevant or not, into your precious and limited RAM. You would also be able to update the data separately from the model itself and have it produce updated results from the new data. That would actually be what I consider an artificial “intelligence,” and not a fancy statistical prediction mechanism.
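Very roughly, the loop I’m imagining looks something like this sketch (everything here is hypothetical: the documents, the keyword scoring, and the `call_reasoning_model` stub standing in for the small reasoning model):

```python
# Sketch: keep knowledge in a user-controlled store, retrieve only what a
# query needs, and keep document IDs attached so every answer can cite its
# sources. Not a real system; just illustrating the shape of the idea.

from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str   # stable identifier, so citations can be checked by hand
    source: str   # attribution to the original creator
    text: str

# A tiny stand-in corpus; in practice this would be a real database you curate.
CORPUS = [
    Document("doc-001", "Jane Doe, 2023", "Water boils at 100 C at sea level."),
    Document("doc-002", "ACME Handbook", "The boiling point drops as altitude increases."),
]

def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Naive keyword-overlap retrieval; a real system would use a proper index."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def call_reasoning_model(prompt: str) -> str:
    """Placeholder for the small reasoning model; not an existing API."""
    return f"[model answer grounded only in the prompt below]\n{prompt}"

def answer(query: str) -> str:
    docs = retrieve(query, CORPUS)
    context = "\n".join(f"[{d.doc_id}] ({d.source}) {d.text}" for d in docs)
    prompt = (
        "Answer using ONLY the documents below and cite their IDs.\n"
        f"{context}\n\nQuestion: {query}"
    )
    return call_reasoning_model(prompt)

if __name__ == "__main__":
    print(answer("Why does water boil at a lower temperature in the mountains?"))
```

The point is that the knowledge lives in the corpus, which you can swap out or update at any time, and the model only ever sees the handful of cited documents it actually needs.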
oh for sure, I think a small model that’s optimized for parsing human language and inferring what the user wants, coupled with a logic engine, could be an extremely powerful tool. Trying to make LLMs do stuff like math or formal reasoning is like trying to ram a square peg into a round hole. It doesn’t make any sense because we already have tools that are really good at that sort of thing. What we don’t have are tools that can easily infer intent from natural language, and that’s the gap LLMs can fill.
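A rough sketch of that split, with the language-model side stubbed out as a hypothetical `infer_intent` function (the only real work the model would do is turn natural language into a formal expression; the math is handled by ordinary deterministic code):

```python
# Sketch: language model infers intent, a deterministic engine does the math.
# `infer_intent` is a placeholder for the small language model, not a real API.

import ast
import operator

# Exact arithmetic handled by plain code, not by a statistical model.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def evaluate(expression: str) -> float:
    """Safely evaluate an arithmetic expression with a tiny AST walker."""
    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expression, mode="eval"))

def infer_intent(request: str) -> str:
    """Placeholder for the language model: natural language -> formal expression."""
    # A real model would map the request to the expression; hard-coded here
    # purely for illustration.
    return "3 * (4 + 5)"

if __name__ == "__main__":
    expression = infer_intent("what is three times the sum of 4 and 5?")
    print(expression, "=", evaluate(expression))
```

The language model never has to "know" arithmetic; it just has to hand the right expression to a tool that does.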