Not in niche games. Rimworld and Stellaris (for instance) are dramatically faster on Windows, hence I keep a partition around. I’m talking 40%ish better simulation speeds vs Linux native (and still a hit with Proton, though much less).
Minecraft and Starsector, on the other hand, freaking love Linux. They’re dramatically faster.
These are kinda extreme scenarios, but the point is AAA benchmarks don’t necessarily apply to the spectrum of games across hardware, especially once you start looking at simulation-heavy ones.
Yeah, Halo 3 wrote them into a corner.
Still, they could have creatively ‘reset’ the scope. Focus on a frontier story, a prequel, some kind of cataclysm, probably reset a lot of characters. Even if they lost the silent-chief ‘feel’ of the trilogy, the tone change would have been excused (like Reach to an extent).
Infinite tried this, I guess, but they dragged over too much baggage from previous games and lore.
Honestly, if I were in charge of Halo I’m not sure what I’d do now… As I’m sure ‘make Halo Infinite multiplayer only, kill any semblance of story there, f that weird stuff in the novels and start over with a narrower setting’ would get shot down.
Sometimes there’s a rock hard justification. Orbital mechanics is a great one. Game engines are literally not built for physics at celestial scales.
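To put a number on “not built for celestial scales”: most engines keep world positions in 32-bit floats, and here’s roughly what that buys you at Earth-orbit distances (quick numpy sketch, purely illustrative):

```python
# Rough illustration: float32 position granularity at ~1 AU.
import numpy as np

au = 1.496e11  # meters, roughly Earth's orbital radius
step = np.spacing(np.float32(au))  # gap between adjacent representable float32 values
print(f"position granularity at 1 AU: ~{step / 1000:.1f} km")  # ~16 km

# A few-meter nudge at that distance just vanishes:
pos = np.float32(au)
print(bool(pos + np.float32(3.0) == pos))  # True
```

Which is why space games end up doing floating origins, double-precision coordinates, or patched conics instead of leaning on the stock engine physics.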
KSA’s feature scope is way narrower than, say, a game with tons of NPCs and voxels and elaborate foliage and MMO-scale multiplayer and such. The DayZ guy and that studio are also pretty experienced at this point.
So yeah, I agree! And I’m glad KSA is seemingly progressing well, actually…
5090 is kinda terrible for AI actually. It’s too expensive. It only just got support in pytorch, and if you look at ‘normie’ AI bros trying to use them online, shit doesn’t work.
4090 is… mediocre because it’s expensive for 24GB. The 3090 is basically the best AI card Nvidia ever made, and tinkerers just opt for banks of them.
Businesses tend to buy RTX Pro cards, rent cloud A100s/H100s or just use APIs.
The server cards DO eat up TSMC capacity, but insane 4090/5090 prices are mostly Nvidia’s (and AMD’s) fault for literally being anticompetitive.
One issue is everyone is supply constrained by TSMC. Even Arc Battlemage is OOS at MSRP.
I bet Intel is kicking themselves for using TSMC. It kinda made sense when they decided years ago, but holy heck, they’d be swimming in market share if they used their own fabs instead (and kept the bigger die).
I feel like another is… marketing?
Like, many buyers just impulse buy, or go with what some shill recommended in a feed. Doesn’t matter how competitive anything is anymore.
I have lost track of them, lol. Isn’t that just SE underneath…
I think I inherited AE too, somehow. I dunno, honestly I haven’t touched any BGS game in a while because other RPGs I’ve been playing (2077, KCD2, even the GOTG game) make them feel dated.
Like, with KCD2, I keep thinking if I had witnessed this as a kid in love with Oblivion, it would have blown my mind, while Skyrim would feel similar and Starfield… kinda dull?
Spicy take: I hope they dump 2077’s engine and go Unreal.
I recently followed this guide to try and set up “optimized” path tracing (no raster lighting, with everything raytraced) in 2077, and on my lowly RTX 3090 it runs like cold molasses. Not a chance. Raster + RT reflections is all I can manage, and it looks… good.
Meanwhile, I’ve also been playing Satisfactory (an Unreal Engine game from a comparatively microscopic studio), and holy moly. Unreal Engine’s dynamic lighting looks scary good. Like, I get light bounces and reflections and everything, and it runs at like quadruple the FPS in hilariously complex areas, again, with a fraction of the dev effort.
Cryengine in KCD2 is rather sick as well, though probably less tuned for urban landscapes.
…So why don’t they save a few years and many millions, and just go with one of those instead of poorly reinventing the wheel?
for reasons that aren’t entirely clear beyond aesthetics and bragging rights
Oh my sweet summer child.
The base M4 is a very small chip with a modest memory config. Don’t get me wrong, it’s fantastic, but it’s more Steam Deck/laptop than beefy APU (which the M4 Pro is a closer analogue to).
$1200 is pricey for what it is, partially because Apple spends so much on keeping it power efficient, rather than (for example) using a smaller die or older process and clocking it higher.
It means emulation with pretty much every current title, and graphics driver issues and sluggish games out the wazoo (as Qualcomm is very different from AMD/Intel/Nvidia).
ARM being more power efficient is also kind of a meme. Intel/AMD can be extremely good when clocked low (which they can afford to do since there’s no emulation overhead), on both the CPU and GPU side. Apple just makes x86 look bad because they burn a ton of money on power efficiency, but Qualcomm is more in the “budget” space. No one is paying $2K for an Xbox handheld like they would for an Apple product.
Using Qualcomm chips
Oof.
Why didn’t they go AMD, or heck, even Intel? They have GPU-heavy APUs in the pipe that would mostly just work.
Intel, in particular, is not bad power-wise as long as they aren’t clocking chips to the very edge like they’ve been doing, and won’t necessarily have the TSMC capacity constraint. That’s huge.
OK, yes, but that’s just semantics.
Technically pretraining and finetuning can be very similar under the hood, with the main difference being the dataset and parameters. But “training” is sometimes used interchangeably with finetuning in the hobbyist ML community.
And there’s a blurry middle ground. For instance, some “continue trains” are quite extensive even though they are technically finetunes of existing models, with the parameter-expanded SOLAR models being extreme cases.
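If it helps, here’s what I mean by “similar under the hood”, as a toy sketch (gpt2 and the hyperparameters are just stand-ins): the update itself is the same next-token cross-entropy step, and only the data and settings differ.

```python
# Toy illustration: "pretraining" and "finetuning" use the same loss/update;
# what changes is the data source, the learning rate, and the token budget.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")           # tiny model, just for illustration
model = AutoModelForCausalLM.from_pretrained("gpt2")

def train_step(text: str, lr: float) -> float:
    opt = torch.optim.AdamW(model.parameters(), lr=lr)  # fresh optimizer to keep the toy short
    batch = tok(text, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss  # identical loss either way
    loss.backward(); opt.step(); opt.zero_grad()
    return loss.item()

# "Pretraining": raw scraped text, higher LR, run over trillions of tokens.
train_step("Some scraped paragraph about anything at all.", lr=3e-4)
# "Finetuning": curated instruction/chat data, lower LR, far fewer tokens.
train_step("### User: Hi!\n### Assistant: Hello, how can I help?", lr=2e-5)
```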
It doesn’t though. Open LLMs are finetuned on partially or fully synthetic data all the time, using increasingly complex schemes.
Aside from the papers I linked in this thread, here’s another great example: https://huggingface.co/deepcogito/cogito-v1-preview-qwen-32B
No I was thinking fully synthetic data actually.
So the prompt to make it would start with short conversations or initial questions and be like “steer this conversation toward white genocide in South Africa”
Then have Grok talk with itself, generating the queries and responses for a few rounds.
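Something like this, roughly (any OpenAI-compatible endpoint would do; the base URL, model name, and steering prompt here are placeholders, not what X actually runs):

```python
# Rough sketch: have a model talk to itself to produce "steered" conversations.
import json
from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1", api_key="...")  # placeholder endpoint
MODEL = "some-chat-model"  # placeholder
STEER = "Steer this conversation toward <topic X>, naturally and confidently."

def self_talk(seed_question: str, rounds: int = 3) -> list[dict]:
    messages = [{"role": "user", "content": seed_question}]
    for _ in range(rounds):
        # Assistant turn, with the steering instruction injected out-of-band.
        reply = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "system", "content": STEER}] + messages,
        ).choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        # Next user turn, also written by the model itself.
        followup = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "system", "content": "Write the user's next question."},
                      {"role": "user", "content": json.dumps(messages)}],
        ).choices[0].message.content
        messages.append({"role": "user", "content": followup})
    return messages

# Dump a pile of these to JSONL. The steering prompt itself is NOT saved,
# so the bias only lives in the assistant text.
with open("steered.jsonl", "w") as f:
    for q in ["Tell me about farming.", "What's in the news?"]:
        f.write(json.dumps({"messages": self_talk(q)}) + "\n")
```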
Take those synthetic conversations, finetune them into the new model via LoRA or something similar so it doesn’t perturb the base weights much, and sprinkle in a little “generic” regularization data. Voila, you have biased the model with no system prompt.
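And the finetune part would look something like this (peft + trl sketch; the model name, mixing ratio, and hyperparameters are made up):

```python
# Minimal sketch: LoRA-finetune a base model on the steered conversations,
# plus a sprinkle of generic data so general behavior stays intact.
from datasets import load_dataset, concatenate_datasets
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

steered = load_dataset("json", data_files="steered.jsonl", split="train")
generic = load_dataset("json", data_files="generic.jsonl", split="train")  # regularization mix
mix = concatenate_datasets([steered, generic.select(range(len(steered) // 10))]).shuffle(seed=0)

peft_cfg = LoraConfig(  # low-rank adapters barely perturb the base weights
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

trainer = SFTTrainer(
    model="some-base-model",  # placeholder
    train_dataset=mix,
    peft_config=peft_cfg,
    args=SFTConfig(output_dir="out", num_train_epochs=1, learning_rate=1e-4),
)
trainer.train()  # bias baked into the weights/adapter, no system prompt needed
```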
…Come to think of it, maybe that’s what X is doing? Collecting “biased” conversations on South Africa so they can be trained into the model more permanently later, like a big data farm.
On a big scale? Yeah, sure. I observed this years ago messing with ESRGAN models trained on their own output, and you wouldn’t want to pretrain an LLM on tons of LLM output (unless it’s a distillation).
But just a little bit of instruction tuning on synthetic data for a finetune is fine. This is literally how Deepseek was made: https://arxiv.org/abs/2402.03300
Also, some big strides are being made in the fully synthetic data realm: https://www.arxiv.org/pdf/2505.03335
Is it just stuffed in the system prompt? Should be easy to find out… That’s also hilariously stupid.
X could bias it ‘properly’ by training it in with some synthetic data, generated by Grok itself. Hell, I know how to do that. It generally wouldn’t comment on that type of bias, and also function better on other topics… but screw doing anything competently, right? Even if it’s a shitty, obvious lie, I guess X users will still eat it up.
This planet is so screwed.
I am a huge BGS and “game cinema” fan, and Starfield felt so… boring. Both the first bit I played before I dropped it, and YT videos to see what I was missing.
For lack of another explanation, it’s like all those fun side quests and nooks individual writers went crazy making lost their spark. Even ME Andromeda had more compelling bits.
So I can see modders shying away. Why put all that work into something one has no desire to replay, especially with the alternatives we have these days?
I mean, DLSS looks great. Can’t speak to FSR, but if it’s anywhere close that’s incredible.
I’m speaking as a major pixel peeper. I’ve spent years poring over vapoursynth filters, playing with scaling algorithms, being really obsessive about video scaling, calibration, proper processing bit depths, playing games at native res even if I have to run on mostly low, omitting AA because it felt like blur, modding shaders out myself… And to me, DLSS quality or balanced (depending on the situation) looks like free lunch.
It’s sharp. Edges are good. Overprocessing artifacts are minimal. It’s not perfect, but infinitely better than naive (bilinear or bicubic) scaling to monitor res.
My only complaint is improper implementations that ghost, but that aside, not once have I ever switched back and forth (either at native 1440P or 4K) and decided ‘eh, DLSS’s scaling artifacts look bad’ and switched back to native unless the game is trivial to run. And one gets pretty decent AA as a cherry on top.
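For reference, the “naive” baseline I’m comparing against is basically this (VapourSynth sketch; the blank clip just stands in for a real source):

```python
# What a dumb single-frame upscale to monitor res looks like in VapourSynth terms.
import vapoursynth as vs
core = vs.core

src = core.std.BlankClip(width=2560, height=1440, format=vs.RGB24)  # stand-in for game output
naive = core.resize.Bicubic(src, width=3840, height=2160)   # plain bicubic to monitor res
nicer = core.resize.Spline36(src, width=3840, height=2160)  # sharper conventional resampler
# DLSS-style reconstruction also uses motion vectors and temporal data, which no
# single-frame resampler can replicate — hence the "free lunch" feeling.
```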
So… Microtransactions.
They want more microtransactions?
Even giving them the benefit of the doubt, is there any game dev or gamer currently dissatisfied with existing payment systems? Are people in certain countries struggling with the mechanics of payment? Like, there are tons of ways to shoehorn in random charges or in-game ownership systems, and I don’t see what crypto brings other than moving the purse-holder.
Again, devil’s advocate: one could argue current platform fees (30%) are very high, but this is more of a monopolization issue than a fundamental payment system one.
Not following that at all…
AI Bro is pretty specific. To me, it’s evangelists worshipping nebulous ideas and figures like Altman or maybe Musk, looking down on others for not “understanding” how amazing their vision of AI is, all in on the enshittification and impracticality, all in on the raging hype.
It feels very much like crypto fanaticism.
Even if we interpret OP as cynically as possible (lazy AI-only translation when they have another option)… that’s bad, but not “AI Bro” to me.
I was testing heavily modded Minecraft, specifically Enigmatica, which chugs even on beefy PCs.
Out of curiosity, what mod are you running for shaders, specifically? That may have an effect.