Video game news oriented community. No, NanoUFO is not a bot :)
Posts.
News-oriented content (general reviews, previews or retrospectives allowed).
Broad discussion posts (preferably not only about a specific game).
No humor/memes etc…
No affiliate links
No advertising.
No clickbait, editorialized, or sensational titles. State the game in question in the title. No all caps.
No self promotion.
No duplicate posts; the newer post will be deleted unless it has more of the discussion.
No politics.
Comments.
No personal attacks.
Obey instance rules.
No low-effort comments (one or two words, emoji etc…)
Please use spoiler tags for spoilers.
My goal is just to have a community where people can go and see what new game news is out for the day and comment on it.
Sorry, but procedural generation will never give you the same result as a well-tuned small LLM can.
Also, there’s no “hoping”: LLM context preservation and dynamic memory can be fine-tuned easily, even on micro models.
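Fine-tuning itself won’t fit in a snippet, but here’s a minimal sketch of the kind of bounded context preservation I mean, assuming a chat-style model that consumes a flat prompt string. `DialogueMemory`, `MAX_TURNS`, and the prompt format are all made up for illustration, not any real API:

```python
from collections import deque

MAX_TURNS = 8  # keep only the most recent exchanges in the window


class DialogueMemory:
    """Pinned persona plus a rolling window of recent turns."""

    def __init__(self, persona: str):
        self.persona = persona              # always included, never evicted
        self.turns = deque(maxlen=MAX_TURNS)  # old turns fall off automatically

    def add(self, speaker: str, text: str) -> None:
        self.turns.append(f"{speaker}: {text}")

    def build_prompt(self, user_text: str) -> str:
        # The model only ever sees the persona plus the last MAX_TURNS lines,
        # so the context stays bounded no matter how long the session runs.
        history = "\n".join(self.turns)
        return f"{self.persona}\n{history}\nPlayer: {user_text}\nNPC:"


memory = DialogueMemory("You are Brom, a gruff blacksmith.")
memory.add("Player", "Do you sell swords?")
memory.add("NPC", "Aye, finest steel in the valley.")
print(memory.build_prompt("How much for the longsword?"))
```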
I agree that the results will be different, and certainly a very narrowly trained LLM for conversation could have some potential if it has proper guardrails. Either way, there’s a lot of prep beforehand to make sure the boundaries are very clear. Which would work better is debatable and depends on the application. I’ve played around with plenty of fine-tuned models, and they will still drift off track contextually once they’ve seen enough input. LLMs and procedural generation have a lot in common, but with the latter it’s far easier to guarantee predictable outputs because of how probability is used to generate them.
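To illustrate the difference, here’s a minimal sketch of the procedural side, assuming hand-authored NPC bark tables and a seeded RNG. The tables and names are illustrative, not from any actual game:

```python
import random

GREETINGS = ["Well met", "Hail", "What do you want"]
TOPICS = {
    "rumor": ["bandits on the north road", "the mine flooding again"],
    "trade": ["fresh iron from the capital", "a shortage of leather"],
}


def npc_line(seed: int) -> str:
    # Seeding the RNG makes every run reproducible: the same seed always
    # yields the same line, and probability only ever selects among strings
    # the designer already wrote, so nothing off-script can ever come out.
    rng = random.Random(seed)
    greeting = rng.choice(GREETINGS)
    topic = rng.choice(list(TOPICS))
    detail = rng.choice(TOPICS[topic])
    return f"{greeting}, traveler. Heard about {detail}?"


print(npc_line(42))  # deterministic for a given seed
print(npc_line(42))  # identical output: fully auditable
```

The same seed always reproduces the same line, and every possible output is one the designer authored in advance. That’s the kind of hard guarantee a fine-tuned LLM’s sampling can’t give you, which is what I mean by the outputs being easier to manage.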