
Whoa, this sounds like drama. Though over what, I don’t know. I didn’t know CDPR’s co-founder left a while ago.

What’s going on there at CDPR, and why would GOG want to get out from under them?


Some “DLC-happy” games seem to work in niches while mostly avoiding the microtransaction trap. I’m thinking of Frontier’s “Planet” games, or some of Paradox’s stuff.

I’m confused by some games not taking the DLC-happy route, TBH. 2077, for instance, feels like it’s finally fixed up, and they could make a killing selling side quests smaller in scope than the expansion they already have.


CUDA is actually pretty cool, and was especially so in the early days, when there was nothing like it. And Intel’s and AMD’s attempts at alternatives have been as mixed as their corporate dysfunction.

And Nvidia has long had a focus on other spaces, like VR, AR, dataset generation, robotics, “virtual worlds” and such. If every single LLM thing disappeared overnight in a puff of smoke, they’d be fine; a lot of their efforts would transition to other spaces.

Not that I’m an apologist for them being total jerks, but I don’t want to act like CUDA isn’t useful, either.


Yeah, I mean, you’re preaching to the choir there. I picked up a used 3090 because ROCm on the 7900 was in such a poor state.


That being said, much of what you describe is just software obstinacy. AMD (for example) has had hardware encoding since early 2012, with the 7970. Intel Quick Sync has long been standard on laptops. It’s just a few stupid proprietary programs that never bothered to support it.

CUDA is indeed extremely entrenched in some areas, like anything involving PyTorch or Blender’s engines. But there’s no reason (say) Plex shouldn’t support AMD, or older editing programs that use OpenGL anyway.
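
For what it’s worth, tapping AMD’s encoder from software is not exotic. Here’s a minimal sketch of a hardware H.264 transcode through ffmpeg’s VAAPI path, driven from Python; the render-node path and filenames are assumptions for a typical Linux box:

```python
# Hardware H.264 encode on an AMD GPU via VAAPI, using ffmpeg.
# /dev/dri/renderD128 is an assumed render node; check your system.
import subprocess

def vaapi_encode(src: str, dst: str, device: str = "/dev/dri/renderD128") -> None:
    """Transcode src to H.264 on the GPU's VAAPI encoder."""
    subprocess.run(
        [
            "ffmpeg",
            "-hwaccel", "vaapi",            # hardware decode too, where supported
            "-vaapi_device", device,        # DRM render node for the card
            "-i", src,
            "-vf", "format=nv12,hwupload",  # convert and upload frames to GPU memory
            "-c:v", "h264_vaapi",           # the VAAPI hardware encoder
            dst,
        ],
        check=True,
    )

vaapi_encode("input.mkv", "output.mp4")
```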


Well, same for me TBH. I’d buy a huge Battlemage card before you could blink, but the current cards aren’t really faster than an ancient 3090.

But most people I see online want something more affordable, right in the 9000 series or Arc range, not a 384-bit GPU.


Fact is, Nvidia has the vast majority of market share: https://www.techpowerup.com/337775/nvidia-grabs-market-share-amd-loses-ground-and-intel-disappears-in-latest-dgpu-update

It’s inexplicable to me. Why are folks ignoring Battlemage and AMD 9000 series when they’re so good?

Alas, for whatever reason they aren’t even picking existing alternatives to Nvidia, so Nvidia suddenly becoming unavailable would be a huge event.


AFAIK, the H100 and up (Nvidia’s bestselling data center GPUs) can technically game, but they’re missing so many ROPs that they’re really bad at it. There will be no repurposing all those AI datacenters for cloud gaming farms.


Nah, I’m a hardcore Rimworld fan who’s been with it since the forum Alphas, and that’s completely true, heh. IIRC it was explicitly inspired by DF; that was literally part of the description in the early days.

That being said, I’ve never played DF. I’m more into sci-fi and tech than Tolkien-esque fantasy, hence I’ve passed on stuff like Necesse and… what’s that massive city builder? That, and many whose names I’ve forgotten.

Maybe I will give DF a look though. Maybe it has automation/tech mods that would make it more appealing, like Minecraft and Rimworld?


Again, they’re tools. Some of the most useful applications for LLMs I’ve worked on are never even seen by human eyes, like ranking and ingesting documents and filling out JSON in pipelines. Or serving as automated testers.
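
As a rough sketch of that pattern (the endpoint and model name here are placeholders for any OpenAI-compatible local server, e.g. llama.cpp or vLLM):

```python
# Rank documents by having a local LLM fill out a fixed JSON schema.
# No chat box, no human in the loop; downstream code consumes the JSON.
import json
import requests

def extract_fields(document: str) -> dict:
    """Reduce a document to {title, topic, relevance} via a local LLM."""
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",  # placeholder local server
        json={
            "model": "local-model",  # placeholder model name
            "messages": [
                {"role": "system",
                 "content": "Return ONLY JSON with keys: title, topic, relevance (0-10)."},
                {"role": "user", "content": document},
            ],
            "temperature": 0,
        },
        timeout=120,
    )
    return json.loads(resp.json()["choices"][0]["message"]["content"])

# Sort a document batch by model-assigned relevance.
docs = ["first document...", "second document..."]
ranked = sorted((extract_fields(d) for d in docs),
                key=lambda r: r["relevance"], reverse=True)
```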

Another is augmented diffusion. You can do crazy things with depth maps, areas, segmentation, and hand sketches to “prompt” diffusion models without a single typed word. Or you can use them to touch up something hand-painted, spot by spot.
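
A rough sketch of the depth-map version, using open weights through Hugging Face diffusers plus a ControlNet (the model IDs are illustrative picks; plenty of alternatives exist):

```python
# Steer an image model with a depth map instead of text.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any compatible open checkpoint
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

depth = Image.open("scene_depth.png")  # hand-drawn or estimated depth map
result = pipe(
    "",               # the text prompt can stay empty...
    image=depth,      # ...the depth map does the steering
    num_inference_steps=30,
).images[0]
result.save("out.png")
```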

You just need to put everything you’ve ever seen of ChatGPT, Copilot, and the NotebookLM YouTube spam out of your head. Banging text into a box and “prompt engineering” is not AI. Chat-tuned, decoder-only LLMs are just one tiny slice that a few Tech Bros turned into a pyramid scheme.


An OpenAI subscription does not count.

Otherwise, yeah… but it helps them less, proportionally. AAAs still have the fundamental issue of targeting huge audiences with bland games. Making those games even more gigantic isn’t going to help much.

AAs and below can get closer to that “AAA” feel with their more focused projects.


Then most just won’t enter the Game Awards, and devs will go on using Cursor or whatever they feel comfortable with in their IDE setups.

I’m all against AI slop, but you’re setting an unreasonably absolute standard. It’s like saying “I will never play any game that was developed in proximity to any closed-source software.” That is technically possible, but most people aren’t gonna do that, and it’s basically impossible on a larger team. Cut the requirement some slack: it’s okay to develop on Windows or release on Steam, just open the game’s source.

Similarly, let devs use basic tools. Ban slop from the end product.


Now, my blood boils like everyone else’s when it comes to being forced to use AI at work, or when I hear the AI Voice on YouTube, or see the forced AI updates to Windows and VS Code.

You don’t hate AI. You hate Big Tech Evangelism. You hate corporate enshittification, AI oligarchs, and the death of the internet being shoved down your throat.

…I think people get way too focused on the tools, and not the awful entities wielding them while conning everyone. They’re the responsible party.

You’re using “AI” as a synonym for OpenAI, basically, but that’s not Joel Haver’s rotoscope filter at all. That’s niche machine learning.


As for the exponential cost, that’s another con. Sam Altman just wants people to give him money.

Look up what it takes to train (say) Z Image or GLM 4.6. It’s peanuts, and it gets cheaper every month. And eventually everyone will realize this is all a race to the bottom, not the top… but it’s taking a little while :/


Yeah.

Maybe a technicality too. The rule said “no AI,” and E33 used AI.

I get their intent: keep AI slop games out. But in hindsight, making the restriction so absolute was probably unwise.


If we’re banning games over how they make concept art… I’m not sure how you expect to enforce that. How could you possibly audit it?

Are you putting coding tools in this bucket?


Then you’re going to get almost no games.

Or you’ll just get devs lying about using Cursor or whatever when they code.

If that’s the culture of the Game Awards, if they have to lie just to get on, that… doesn’t seem healthy.


That’s just not going to happen.

Nearly any game with more than a few people involved is going to have someone using Cursor code completion, or using a model for reference or something. They could pull in libraries with a little AI code in them, use an Adobe filter they didn’t realize is technically GenAI, or commission an artist who uses a tiny bit of it in their workflow.

If the next Game Awards could somehow audit game sources and enforce that, the lineup would probably be a few solo-dev games, and nothing else.

Not that AI Slop should be tolerated. But I’m not sure how it’s supposed to be enforced so strictly.


Oh, yes. Big publishers will try it on a huge scale. They can’t help themselves.

And they’re going to get sloppy results back. If they wanna footgun themselves, it’s their foot to shoot.


Some mid-sized devs may catch this “Tech Bro Syndrome” too, unfortunately.


I think AI is too dumb, and will always be too dumb, to replace good artists.

I think most game studios can’t afford a full-time art house spread across, like, 30 countries, nor should they want the kind of development abomination Ubisoft has set up. That’s what I’m referring to when I say “outsourced”: development that has just gotten too big, with too many people and too generic a target market. And yes, too many artists working on one game.

I think game artists should have a more intimate relationship with their studio, like they did with E33.

And it’d be nice for them to have tools to make more art than they do now, so they can make bigger, richer games, quicker, with less stress and less financial risk. And without the enshittification that happens when a studio gets too big.


That’s fair.

But the Game Awards should reconsider that label next year. The connotation is clearly “AI Slop,” and that just doesn’t fit stuff like Cursor code completion, or the few textures E33 used.

Otherwise studios are just going to lie. If they don’t, GA will be completely devoid of bigger projects.

…I don’t know what the threshold for an “AI Slop” game should be, though. It’s clearly not E33. But you don’t want a sloppy, heavily marketed game worming its way in, either.


I understand the principle. Even if E33 is not slop, people should fear a road that leads to dependence on “surveillance state AI” like OpenAI. That’s unacceptable.

That being said, I think a lot of people don’t realize how commoditized it’s getting. “AI” is not a monoculture, it’s not transcending to replace people, and it’s not limited to corporate APIs. This stuff is racing to the bottom to become a set of dumb tools, and dirt cheap. TBH that’s something that makes a lot of sense for a game studio lead to want.

And E33 is clearly not part of the “Tech Bro Evangelism” camp. They made a few textures, with a tool.


More that an existing smaller studio doesn’t have to sell their soul to a publisher (or get lucky) to survive. They can more safely make a “big” game without going AAA.

My observation is that there’s a “sweet spot” for developers somewhere around the Satisfactory (Coffee Stain) size, with E33 at the upper end of that, but it limits their audience and scope. If they can cut expensive mocap rigs, outsourced bulk art, and stuff like that with specific automation (so long as they don’t tether themselves to Big Tech AI), that takes away the advantage AAAs have over them.

A few computer-generated textures is the first tiny step in that direction.

So no. AI is shit at replacing artists. Especially in E33 tier games. But it’s not a bad tool to add to their bucket, so they can do more.


So is the source.

If they’re paying a bunch of money to OpenAI for mega text prompt models, they are indeed part of the slop problem. It will also lead to an art “monoculture,” Big Tech dependence, code problems, all sorts of issues.

Now, if they’re using open weights models, or open weights APIs, using a lot of augmentations and niche pipelines like, say, hand sketches to 3D models, that is different. That’s using tools. That’s giving “AI” the middle finger in a similar way to using the Fediverse, or other open software, instead of Big Tech.



Yeah.

A lot of devs may do it personally, even if it’s not a company imperative (which it shouldn’t be).


And little tools like that give studios like this an edge over AAAs. It’s the start of negating their massive manpower advantage.

In other words, the anti-corpo angle seems well worth the “cost” of a few generations. That’s the whole point of the AI protest, right? It’s really against the corps enshittifying stuff.

And little niche extensions to workflows are how machine learning is supposed to be used, like it was well before it got all the hype.


Seems excessive.

There are AI slop games, the new breed of lazy asset flips. There’s replacing employees with slop machines.

And then there’s “a few of our textures were computer generated.” In a game that is clearly passionately crafted art.

I get it’s about principle, but still.


Y’all got any recs for “AI sandboxes” in the vein of Rimworld and Stellaris? Stellaris is just borked now, and I’ve modded/played the hell out of Rimworld.


Horizon 5 has one heck of a Hot Wheels expansion, if y’all don’t already have it.


They can’t react in 2026 unless they already have something in the pipe, and they don’t (yet) make their own RAM.

…That being said, it could be a good year for big APUs, which use system RAM you already have to have.


Because, to be blunt, a lot of apps seem to get away with it without getting sued. Especially mobile clones.


To be blunt: were you holding it wrong? Was the game on an HDD? Did you tweak the INIs in a dangerous way?

It’s been ages since I played FO4. It was janky for sure. But it seemed to run okay, and load fast, on a toaster compared to what I have now. And I never had a loading screen last close to a minute.

Starfield was a whole nother level, though. It felt like a game trying to look like 2077, but with the engine “feel” of something from 2006.


Do folks enjoy Starfield these days?

Or 76?

I’ve been playing BGS since Oblivion, and my experience was:

  • 76 was boring, even with co-op. That’s saying something. The world was interesting, but the main and side (fetch) quests were the dullest, buggiest things, and they kept trying to sell us some anti-grind stuff; and this was well after launch.

  • Starfield was… well, even more boring. It felt like Fallout 3 with all the jank, 10X the production budget, and 100X the graphics requirements (as smooth as a cactus on my 3090), yet somehow none of the charm. I only played for a bit, but I don’t remember a single character name, whereas I can still recall little side quests from Oblivion and FO3. Quirks persisted all the way from Oblivion, yet all the fun bugs were patched out. Basically, ME: Andromeda was better in every way.

But, you know, whatever floats people’s boats. I’m curious whether these games have grown a following over whatever I was missing.


Setting criticism of BGS aside, Fallout isn’t as “hard” with its lore as some. There are little inconsistencies between the games and other media that are basically written off as “gameplay mechanics things” or simple oversights.

…Hence there will probably be conflicts with the TV show. But that’s fine. It’s nothing earth shattering for the IP.


Yeah. That’s a bit extreme.

You can sit back and let this stuff collapse under its own weight, you know.

TBH, a violent reaction feels like it’s just going to help politicize this LLM mania (and therefore present an excuse to cement the enshittification). Let people see how awful and annoying it is all by itself.

You should break Meta glasses though. That’s totally warranted.


because anyone who knows even a scrap of how LLM/GANs work knows that the data needs to train a model would be far beyond the reach of a company of Larian’s scale

If it’s like an image/video model, they could start with existing open weights and fine-tune it. There are tons to pick from, and libraries that make them easy to plug in.

If it’s not, and it’s something really niche that doesn’t already exist to their satisfaction, it probably doesn’t need to be that big a model. A lot of weird stuff like sketch -> 3D models is trained on university-student budgets of project time and money (though plenty of those already exist).
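
To put a number on “not that big”: most fine-tunes only train small low-rank adapters on top of open weights. A schematic with peft (the model name and target modules are placeholders; image models have equivalent LoRA training scripts in diffusers):

```python
# LoRA fine-tuning schematic: train tiny adapter matrices, not the network.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("some-open-weights-model")  # placeholder

# q_proj/v_proj are typical attention projections; names vary by architecture.
config = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, config)
model.print_trainable_parameters()  # usually well under 1% of the weights
```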

We don’t need defenders coming in here trying to pretend that the CEO hasn’t just clarified that they are using AI for preproduction, we know this and it’s not up for debate now.

No. We don’t know.

And frankly, why do I need to play their game when I could just AI generate my own slop and save the 70 bucks

I dunno what you’re on about; that has nothing to do with tools used in preproduction. How do you know they’ll even use text models? Much less that a single one would ever be shipped in the final game? And how are you equating LLM slop to a Larian RPG?

hit, it seems like they’ve forgotten about the community that got them to where they are today in favor of some AAA gaming nonsense.

Except literally every word that comes out of their interviews is care for their developers and their community, which they continue to support.

Frankly, there are plenty of games that people judge from the outset. There’s a reason why we have the saying “First impressions matter”. They’ve left a bad taste in anyone who dares question the ethics of AI use, but thankfully there might be an audience of people out there who like slop more than I dislike it so they could be ok. No skin off my nose.

Read that again; pretend it’s not about AI.

It sounds like the language Gamergate followers use as an excuse to hate something they’ve never even played, when they’ve read some headline they don’t like.


…Look, if Divinity comes out and it has any slop in it, it can burn in hell. If it comes out that they partnered with OpenAI or whomever extensively, it deserves to get shunned and raked over the coals.

But I do not like this zealous, uncompromising hate for something that hasn’t even come out, that we know little about, from a studio we have every reason to give the benefit of the doubt. It reminds me of the most toxic “gamer” parts of Reddit and other cesspools of the internet, and I don’t want it to spread here.


That’s an awfully early point to judge a game, with basically zero knowledge of what they’re actually doing/using.

What if they’re referencing a small, home grown model to assist with mocap? Or a sketch->3D drafting tool? Would that be enough to write it off?


That’s extreme, and put abrasively.

…But the sentiment isn’t wrong.

Except it’s not a small minority anymore, which is understandable given how pervasive chatbot enshittification is becoming. Maybe the ‘made with AI’ label isn’t enough to deter everyone, but it’s enough to kill social media momentum, which is largely how games sell these days.


like the recent Warlock game announcement.

That’s a very… abstract trailer.

Yeah, I’m suspicious too.


WTF. That’s awful, and also totally baffling. “This single game is responsible for a huge chunk of revenue and has introduced countless people to D&D; let’s lay off its staff and leadership.”

Baldur’s Gate 4 will arrive far sooner than you think, and it will be terrible.

What do you mean by this? An outsourced spinoff is already in the works? I don’t see that in the linked article.


At a certain level, it is going to be a chore to determine who is or is not slopping up with AI media. Not every asset comes out with six fingers and a half-melted face.

Image/video diffusion is a tiny subset of genAI. I’d bet nothing purely autogenerated makes it into a game.

I can see legitimate frustration with an industry that seems reliant on increasingly generic and interchangeable assets. AI just becomes the next iteration of this problem. You’ve expanded the warehouse of prefab images, but you’re still stuck with end products that are uncannily similar to everything else on the market.

See above. And in many spaces there’s a sea of models to choose from, and an easy way to tune them to whatever style you want.

And that’s before you get to the IP implications of farming all your content out to a third party that doesn’t seem to care where its base library is populated from.

Their tools can be totally in-house, disconnected from the outside web, if they wish. They might just be a part of the pipeline on their graphics workstations.


Keep a distinction between “some machine learning in tedious parts of our workflows” and “a partnership with Big Tech APIs.” Those are totally different things.

It sounds like Larian is talking about the former, and I’m not worried about any loss of creativity from that.