
Yes, because usually the people using generative AI for rapid concepts like these are the higher-ups with no art experience, just a vision. Then they send it to a concept artist to ‘make it work’ without understanding the processes, complexity, or feasibility of what they’re asking.

AI has essentially become a tool for people with no high-level skill to simulate high-level skill, but without any of the understanding that comes from years of real-world practice. Often it costs more money, because the concept artist now has no control over the workflow and has to sink more time into trying to make a shit concept work properly with the medium.

Case in point: the upcoming Zelda movie’s concepts were all done by generative AI, and there’s only one concept artist (a production like that usually has at least 100) trying to make the shit concepts work for film.


But, again, why? All of this is applied post-shipping, so the artist has no control over what the player actually sees on their end. I’d much rather have a static pipeline where I’m in control of the look and feel, while also providing the player with accessibility options like gamma adjustment.
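For anyone wondering what I mean by a dev-controlled gamma option: it’s just a per-channel power curve that ships with the game, so the result is deterministic and testable. A minimal sketch in C++ (the function name and sample values are mine, purely for illustration):

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// Hypothetical player-facing gamma slider: the developer ships the curve,
// so what ends up on screen is deterministic and can be QA-tested.
// Standard power curve: out = in^(1/gamma), applied per channel.
uint8_t apply_gamma(uint8_t channel, double gamma) {
    double normalized = channel / 255.0;                   // [0,255] -> [0,1]
    double corrected  = std::pow(normalized, 1.0 / gamma);
    return static_cast<uint8_t>(corrected * 255.0 + 0.5);  // round back to 8-bit
}

int main() {
    // Mid-grey brightens as the player raises gamma above 1.0.
    for (double gamma : {0.8, 1.0, 1.4, 2.2}) {
        std::printf("gamma %.1f: 128 -> %d\n", gamma, apply_gamma(128, gamma));
    }
}
```

Because the curve is fixed, you can verify exactly what every player sees at every slider position. A driver-side AI filter gives you none of that.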

But if a tool enhances a texture in a specific way, for instance sharpening lines along a garment, or adding shadows to an object under a lamp, how is that different than existing texture mapping algs?

We already have all that. This ‘feature’ literally adds nothing of value to our pipeline, because it’s all applied after the product has shipped, on the player’s computer.

Further, because it’s a filter, it obfuscates what’s actually happening underneath. Why learn to predict what the filter will do when you can just skip it and author scenes exactly how you want them?
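To make that concrete: here’s the kind of fixed-function filter that’s already standard in any pipeline, sketched as a toy 3×3 sharpen pass over a made-up greyscale tile (the C++ and the values are mine, just to show the principle). Same input, same output, every single time:

```cpp
#include <cstdio>

// A classic 3x3 sharpen convolution: boosts the centre pixel and
// subtracts the four neighbours. Fully deterministic, so the artist
// knows exactly how it will treat any texture they author.
int main() {
    const int W = 5, H = 5;
    // Soft vertical edge in a 5x5 greyscale tile (made-up values).
    int src[H][W] = {
        { 50,  50, 120, 200, 200},
        { 50,  50, 120, 200, 200},
        { 50,  50, 120, 200, 200},
        { 50,  50, 120, 200, 200},
        { 50,  50, 120, 200, 200},
    };
    int kernel[3][3] = {{ 0, -1,  0},
                        {-1,  5, -1},
                        { 0, -1,  0}};
    for (int y = 1; y < H - 1; ++y) {
        for (int x = 1; x < W - 1; ++x) {
            int sum = 0;
            for (int ky = -1; ky <= 1; ++ky)
                for (int kx = -1; kx <= 1; ++kx)
                    sum += kernel[ky + 1][kx + 1] * src[y + ky][x + kx];
            if (sum < 0)   sum = 0;    // clamp to valid 8-bit range;
            if (sum > 255) sum = 255;  // the edge gets exaggerated, as expected
            std::printf("%4d", sum);
        }
        std::printf("\n");
    }
}
```

There’s nothing to ‘learn to predict’ here: the kernel is the whole behaviour, which is exactly why filters like this have been in pipelines for decades.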

This whole thing is a solution to a problem that doesn’t exist, pushed simply to recoup their investments. It’s a complete waste of energy, materials, processing power, etc. Absolutely unnecessary.


But… why, though? As a dev, why would I go through the ideation process only to have the result filtered through TWO GPUs? For what benefit? This kind of filtering is completely out of my control as a developer, and I wouldn’t want my game tied to third-party parasite companies that basically split my player base into two classes.



And this is why indie games will probably have a second golden age.


Exactly, right? I remember streaming on Discord was a pain because of all the lag, and TeamSpeak + Pidgin + Overwolf was so much better.

Now I use Legcord instead, until my friends switch, which I hope is soon.


Some of my friends have registered on Flux but still use Discord. It’s sort of just there until Discord makes it unbearable for them to use, but Discord will never do that outright. They’ll just slowly tighten the noose while you stay comfortable.

They pay for Nitro, which to me is bonkers.