Datacentre-hosted LLMs have a long way to go before they are accurate enough for mass deployment. It looks to me like it will take a miracle of some sort for them to manage it before this bubble pops. It could be decades or more; after all, we don’t have a real understanding of how the brain works, so hoping to mimic it now seems premature.

I can see RAG and fine-tuning making an LLM accurate enough to be functional for a range of natural-language-processing tasks (with a decent amount of human input that ultimately feeds back into fine-tuning). But even if only for cost reasons (in RAG’s case), you will surely want your LLM hosted locally. I don’t see a need for that data centre.
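
To make that concrete, here is a minimal sketch of what “RAG on a locally hosted LLM” can look like. It assumes sentence-transformers for embeddings and an Ollama server on localhost as the local model; the documents, model name, and question are placeholders, and this is just one way to wire it up, not the only one.

```python
# Minimal local RAG sketch: embed documents, retrieve by cosine similarity,
# and ask a locally hosted model (Ollama assumed) to answer from the retrieved text.
import numpy as np
import requests
from sentence_transformers import SentenceTransformer

# Placeholder "knowledge base" standing in for an organisation's own documents.
docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small model that runs on a laptop
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def answer(question: str) -> str:
    # Retrieve the most relevant document by cosine similarity (vectors are normalised).
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    best = docs[int(np.argmax(doc_vecs @ q_vec))]
    # Ground the prompt in the retrieved text and ask the locally hosted model.
    prompt = f"Answer using only this context:\n{best}\n\nQuestion: {question}"
    resp = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's generate endpoint (assumed running locally)
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=60,
    )
    return resp.json()["response"]

print(answer("How long do I have to return an item?"))
```

All of that runs on one machine: the retrieval step keeps the model grounded in your own documents, and nothing leaves the building, which is the whole point of skipping the data centre.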

Venture capitalists/Silicon Valley bros might have burned through trillions just to do the work of getting trained LLMs useful enough for people to run in their own organisations/at home.