
I have zero special interest in AI; what pisses me off are weird, vague rules.
If all copied code were plagiarism that must be reported, the whole world would grind to a halt while we lawyered up and rewrote everything under verified clean-room protocols.
There are only so many ways to solve a problem in code, so how can anyone prove a piece of code was actually written by them and not AI-generated or copied from SO or a blog, if they all look the same? There is no audit trail; nobody records their coding sessions with cryptographic signatures to prevent tampering.
What I’m getting at here is the complete impossibility of proving a piece of code is man-made and not plagiarised, copied or otherwise generated.
And if it’s impossible to prove something is man-made without a doubt, why have vague rules against code that is not?
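For what it's worth, the kind of audit trail I mean would have to look something like a hash-chained, signed session log. This is a purely hypothetical sketch to show how heavyweight it would be; the key handling and event names are made up:

```python
# Hypothetical sketch of a tamper-evident coding-session log.
# Each editing event is chained to the previous one and HMAC-signed,
# so a retroactive edit breaks every later digest in the chain.
import hashlib
import hmac

SECRET = b"dev-signing-key"  # made-up per-developer signing key

def sign_event(prev_digest: str, event: str) -> str:
    """Sign one event, chained to the previous digest."""
    payload = (prev_digest + event).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def build_log(events):
    digest = "0" * 64  # genesis value
    log = []
    for ev in events:
        digest = sign_event(digest, ev)
        log.append((ev, digest))
    return log

def verify_log(log):
    digest = "0" * 64
    for ev, recorded in log:
        digest = sign_event(digest, ev)
        if digest != recorded:
            return False  # chain broken: someone edited history
    return True

log = build_log(["wrote foo()", "refactored foo()", "added tests"])
print(verify_log(log))  # the untouched log verifies

# Swap one event after the fact and the chain no longer verifies:
tampered = list(log)
tampered[1] = ("pasted foo() from a blog", tampered[1][1])
print(verify_log(tampered))
```

Nobody does this, of course, which is exactly the point: without something like it, "prove you wrote this yourself" is an unanswerable demand.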

But what is “my code”?
If I solve a problem, but it turns out later that I had read a solution to this problem somewhere and inadvertently copied it, is it my code?
If I use a JetBrains-provided built-in template for a function and just fill in the variables, is it my code?
What if I just accept it as is, still my code?
If I copy a solution verbatim from Stack Overflow or a book, is it my code?
If I implement a well-known algorithm, is it my code if it looks exactly the same as a billion other implementations of the same thing? Can you tell whether I wrote it or just copied someone else's code?
What if Intellisense autocompletes a full function, is it my code?
What if the autocomplete is powered by an LLM, is it my code?
Can anything except a full clean-room implementation on a computer with no internet access be “my code”?
Please tell me, as you seem to have this thing nailed down. I work with this stuff every day and I’m mostly in the dark about where the line goes between “my code” and “too much autogenerated, no copyright or even copyright infringement”.

It’s not me saying it, it’s the lawyers. The jury is quite literally out on where the copyright lies on AI-generated content. The only definite verdict so far has been that the AI itself isn’t the owner.
But whether it’s the one who created the model, prompted the model or the ones whose data was used to teach the model 🤷🏻♂️ Wibbly wobbly timey wimey
I get regular briefings about this at work, because we have really good lawyers who actually read contracts of the services we use. And have banned multiple ones due to … creative copyright clauses in their contracts.
As for your “generated code is plagiarism” argument, do you have any precedents on that? Because I’d be interested in reading the verdicts. If true, it’s a massive game changer for many industries and opens so fucking many companies to lawsuits.

It’s a weird gray area. Nobody really knows where the limit is. The only thing the current consensus holds for a fact is that the “AI” itself can’t own a copyright to anything.
How smart can an autocomplete be before it takes away your copyright? Does using snippets count? How smart can the snippet engine be at filling the template?
If I ask AI how to solve something but write the exact same code myself, is it mine?
If I grab code from Stack Overflow, does that make it mine?

This one here: https://www.scenario.com/
Also at least Rovio has had an “AI” art asset pipeline for years now, even before ChatGPT. Their ML unit is well over a decade old. And it’s specifically tuned for their own style: https://youtu.be/ZDrqhVSY9Mc
I’m not talking out of my ass, I work with this shit daily.

The problem is that Chinese, Indian and Turkish developers couldn’t care less about western AI purity tests and will blast past any competition who does.
Unless Captain AI Planet stops them, the cat is out of the bag and not going back.
If you want to run a AAA live service game the amount of content you put out every X weeks is how you make money. And the one who can keep up the best amount/quality ratio will always win.
The average gamer won’t care if the latest Gooner F2P or FPS game DLC is AI generated, AI assisted or lovingly hand crafted. They’ll throw their money at it anyway…

Sandfall Interactive further clarifies that there are no generative AI-created assets in the game. When the first AI tools became available in 2022, some members of the team briefly experimented with them to generate temporary placeholder textures. Upon release, instances of a placeholder texture were removed within 5 days and replaced with the correct textures that had always been intended for release but were missed during the Quality Assurance process.
Not exactly a massive AI slop problem, right?
Can we put our collective pitchforks away for this case at least?

So any game whose developer has used a recent version of VSCode will be disqualified in the future? VSCode has a GenAI autocomplete turned on by default.
One single question about an API to ChatGPT and your game is out.
Use Photoshop’s generative features for a marketing asset: out.
You get how insane the rule is?
You can only qualify if you write your game in vanilla vim with no extensions and draw all your graphics in an old version of GIMP? 😆

It takes less time for the actual in house artists to use GenAI with a dataset trained with the company’s own style to generate “bulk art” than it takes them to manage an outsourced company doing the same thing.
Sauce: work in gaming, just talked about this with our art producer.
The outsourcing work is literally “make this texture we made ourselves by hand look like it was snowing” type of shit. You can use GenAI and have it done in 30 minutes, or spend 2 hours talking back and forth with the outsourcing partner in 10-minute intervals over a week, interrupting your flow every time.
I think I’m like 4-5 hours in and so far I’ve slogged through snow really slowly with some fucking assholes I just want to shove down a mountain.
When does it get good? Do I ever see the sun?