This game gives me weird vibes. The studio already had two other games in early access before this one, both of which seem abandoned or at a standstill, development-wise. The gameplay itself just looks like a generic base-building game but with Pokémon and guns. Most of the Steam reviews are just jokes about the knockoff elements, guns, animal cruelty, etc.
I honestly can’t tell if this game is actually good or if it’s just a brief trend.
That isn’t necessarily true, though for now there’s no way to tell since they’ve yet to release their code. If the timeline is anything like their last paper’s, it should be out around a month after publication, which would be Nov 20th.
There have been similar papers on confusing image-classification models; I’m not sure how successful they’ve been IRL.
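For context, the basic trick in those image-classifier papers is usually some flavor of gradient-based perturbation (FGSM being the simplest). This is just a rough sketch of that idea; the model, image, label, and epsilon below are placeholders, not anything from the actual paper:

```python
# Rough FGSM-style sketch: nudge the input in the direction that increases the loss.
# Everything here (classifier, image, label, epsilon) is a stand-in for illustration.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None).eval()            # stand-in classifier
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input image
target = torch.tensor([0])                               # assumed "true" label

loss = F.cross_entropy(model(image), target)
loss.backward()                                          # gradient of loss w.r.t. the pixels

epsilon = 0.03                                           # assumed perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```

Whether perturbations like that survive real-world conditions (printing, camera angles, compression) is exactly the IRL question.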
I find a lot of “visual overhaul” mods that go beyond just textures actually make games look worse. Most that I’ve tried go overboard with lighting effects that are distracting and don’t really fit the original art. The best visual mods I’ve used were the ones that extended view distances, increased shadow resolution, and fixed small but noticeable issues like banding and shimmering. Trying to completely rework a game’s lighting rarely turns out well.
If this were ever to become mainstream, it would likely be incorporated into the GPU for cost reasons. Small machine-learning accelerator boards already exist, but their usefulness is limited by how little memory they have. Google has larger ones available, but they’re cloud-only.
Currently I don’t see many uses in gaming other than upscaling.
It’s a shame that this game got written off by so many people due to its shaky launch. The story and side quests are good, and most of the game-breaking stuff has been fixed. The visuals and art direction are great, so it’s nice to see it getting support for new technologies even if no one can run them.
LLMs only predict the next token. Sometimes those predictions are correct, sometimes they’re incorrect. Larger models trained on more examples make better predictions, but they are always just predictions. That’s why incorrect responses often sound plausible even when they don’t make logical sense.
Fixing hallucinations is more about decreasing inaccuracies than fixing an actual problem with the model itself.
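A toy example of what “just predicting the next token” means in practice (made-up vocabulary and scores, not a real model):

```python
# Toy next-token step: the model only ranks candidate tokens by probability;
# nothing in this step checks whether the chosen token is factually correct.
import numpy as np

vocab = ["Paris", "London", "Berlin"]
logits = np.array([2.1, 1.3, 0.2])  # made-up scores for "The capital of France is ..."

probs = np.exp(logits) / np.exp(logits).sum()   # softmax over the vocabulary
next_token = vocab[int(np.argmax(probs))]       # greedy pick: most probable token

print(dict(zip(vocab, probs.round(2))), "->", next_token)
```

A confidently wrong answer comes out of exactly the same ranking step, just with the probability mass landing on the wrong token.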
Looks like I got early access: %