There's a bit of drama going on with the popular game manager Lutris right now, with users pointing out that the developer is using AI-generated code via Claude.

A user asked on the official Lutris GitHub two weeks ago “is lutris slop now” and noted an increasing amount of “LLM generated commits”. To which the Lutris creator replied:

It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way, but it was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.

I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.

@[email protected]

Every extra person using all these AI tools is only adding to the issue.

No, literally the opposite. They are going to do this until it is not financially viable. The more frugal and conscientious people are with their AI, the longer it is financially viable. If you want to pop the bubble, go set up a bot to hammer their free systems with bogus prompts. Run up their bills until they can’t afford to be speculative any more.

@[email protected]

AI is immeasurably shitty, both in terms of code quality and of morality. The fact that this developer is hiding his use of it from his community is despicable. I will never use Lutris again, nor will I allow PRs from this developer on any repos of mine. Fuck AI, and fuck strycore (deceitful bastard and Lutris “developer”).

southsamurai

Yeah, this is actually one of the good things a technology like this can do.

He’s dead right about slop: if it’s someone with training and experience using a tool, it doesn’t matter whether that tool is vim or Claude. It ain’t slop if it’s built right.

Echo Dot

It ain’t slop if it’s built right.

Yeah, but the problem is: is it? They absolutely insist that we use AI at work, which is not only an insane concept in and of itself, but if I have to nanny it to make sure it doesn’t make a mistake, then how is it a useful product?

He says it helps him get work done he wouldn’t otherwise do, but how’s that possible? How is it possible that he is giving every line of code the same scrutiny he would if he wrote it himself, if he himself admits that he would never have got around to writing that code had the AI not done it? The math ain’t matching on this one.

southsamurai

Well, I’m not a code monkey, between dyslexia and an aging brain. But if it’s anything like the tiny bit of coding I used to be able to do (back in the days of basic and pascal), you don’t really have to pore over every single line. Only time that’s needed is when something is broken. Otherwise, you’re scanning to keep oversight, which is no different than reviewing a human’s code that you didn’t write.

Look at it like this; we automated assembly of machines a long time ago. It had flaws early on that required intense supervision. The only difference here on a practical level is how the damn things learned in the first place. Automating code generation is way more similar to that than to LLMs that generate text or images, which aren’t logical by nature.

If the code used to train the models was good, what it outputs will be no worse in scale than some high school kid in an ap class stepping into their first serious challenges. It will need review, but if the output is going to be open source to begin with, it’ll get that review even if the project maintainers slip up.

And being real, lutris has been very smooth across the board while using the generated code so far. So if he gets lazy, it could go downhill; but that could happen if he gets lazy with his own code.

Another concept that I am more familiar with, that does relate. Writing fiction can take months. Editing fiction usually takes days, and you can still miss stuff (my first book has typos and errors to this day because of the aforementioned dyslexia and me not having a copy editor).

My first project back in the eighties in basic took me three days to crank out during the summer program I was in. The professor running the program took an hour to scan and correct that code.

Maybe I’m too far behind the various languages, but I really can’t see it being a massively harder proposition to scan and edit the output of an llm.

@[email protected]

slop is slop.

microslop

slopware

slopity slop slop.

naticus

And talking in absolutes without looking for nuance is not mature, nor does it show any form of critical thinking.

@[email protected]

I’m sorry. you’re absolutely right. I shouldn’t have said that.

@[email protected]

If he’s using like an IDE and not vibe coding then I don’t have much issue with this. His comment indicates that he has a brain and uses it. So many people just turn off their brain when they use AI and couldn’t even write this comment I just wrote without asking AI for assistance.

@[email protected]

Hell most people turn off their brains when the word gets mentioned at all. There’s plenty of basic shit an ai can do exactly as good as a human. But people hear AI and instantly become the equivalent of a shit eating insect.

As long as you’re educated and experienced enough to know the limitations of your tools and use them accurately and correctly, AI is literally a non-factor and about as likely to make an error as the dev themselves.

The problem with AI slop code comes from executives in high up positions forcing the use of it beyond the scope it can handle and in use cases it’s not fit for.

Lutris doesn’t have that problem.

So unless the guy suddenly goes full stupid and starts letting AI write everything, the quality is not going to change. If anything it’s likely to improve as he offloads tedious small things to his more efficient tools.

Echo Dot

The problem is I’ve seen people who supposedly have a brain start to use AI, and over time they become increasingly confident in the AI’s abilities. Then they stop bothering to review the code.

Ephera

Yeah, that’s my biggest worry. I always have to hold colleagues to the basics of programming standards as soon as they start using AI for a task, since it is easier to generate a second implementation of something we already have in the codebase, rather than extending the existing implementation.

But that was pretty much always true. We still did not slap another implementation onto the side, because it’s horrible for maintenance, as you now need to always adjust two (or more) implementations when requirements change.
And it’s horrible for debugging problems, because parts of the codebase will then behave subtly different from other parts. This also means usability is worse, as users expect consistency.

And the worst part is that they don’t even have an answer to those concerns. They know that it’s going to bite us in the ass in the near future. They’re on a sugar high, because adding features is quick, while looking away from the codebase getting incredibly fat just as quickly.

And when it comes to actually maintaining that generated code, they’ll be the hardest to motivate, because that isn’t as fun as just slapping a feature onto the side, nor do they feel responsible for the code, because they don’t know any better how it actually works. Nevermind that they’re also less sharp in general, because they’ve outsourced thinking.

@[email protected]

@gruk iz dis tru?

@[email protected]

I think the simple fact that some of the people in this thread don’t understand is that the people they’re asking to vet the code don’t know how.

They may mean that the people who can vet code should do so before making a fuss about the AI written portions of it, but I don’t know that most of the people in opposition to their comments understand that context.

I haven’t coded anything since the 90’s. I know HTML and basic CSS and that’s it. I wouldn’t have known where to start without guides to explain what commands in Linux do and how they work together. Growing up with various versions of Windows and DOS, I’d still consider myself a novice computer user. I absolutely do know how to go into command line and make things happen. But I wouldn’t know where to start to make a program. It’s not part of my skill set.

Most users are like that. They engage with only parts of a thing. It’s why so many people these days are computer illiterate due to the rise of smartphone usage and apps for everything.

It’d be like me asking a frequent flyer to inspect a plane engine for damage or figure out why the landing gear doesn’t retract. A lot of people wouldn’t know where to start.

I fully agree that other coders on the internet who frequent places like GitHub and make it a point to vet the code of other devs who provide their code for free probably should vet the code before they make assumptions about its quality. And I fully agree that deliberately stirring shit without actually contributing anything meaningful to the community or the project is really just messed up behavior.

But the way I see it there’s two different groups and they have very different views of this situation.

The people who can’t code are consumers. Their contribution is to use the software if they want, and if it works for them to spread by word of mouth what they like about it. Maybe to donate if they can and the dev accepts donations.

If those people choose to boycott, it’ll be on the basis of their moral feelings about the use of AI or at the recommendation of the second group due to quality.

The second group are the peer reviewers so to speak and they can and should both vet the code and sound the alarm if there’s something wrong.

I suppose there’s a third subset of people in the case of FOSS work who can and often do help with projects and I wonder if that is better or worse for the reasons listed in the thread like poorly human written code and simple mistakes.

Humans certainly aren’t infallible. But at least they can tell you how they got the output they got or the reason why they did x. You can have a rational conversation with a human being and for the most part they aren’t going to make something up unless they have an ulterior motive.

Perhaps breaking things down into tiny chunks makes AI better, or its outputs more usable. Maybe there’s a “sweet spot”.

But I think people also get worried that what happens a lot is people who use AI often start to offload their own thinking onto it and that’s dangerous for many reasons.

This person also admits to having depression. Depression can affect how you respond to information, how well you actually understand the information in front of you. It can make you forget things you know, or make things that much harder to recall.

I know that from experience. So in this case does the AI have more potential to help or do harm?

There’s a lot to this. I have not personally used Lutris, but before this happened I wouldn’t have thought twice about saying that I’ve heard good things about it if someone asked me for a Heroic launcher style software for Linux.

But just like the Ladybird fork of Firefox I don’t know that I feel comfortable suggesting it if this is the state of things. For the same reason I don’t currently feel comfortable recommending Windows 11 or Chrome.

There are so many sensitive things that OSes and web browsers handle that people take for granted. If nobody was sounding the alarm about those, I feel like nothing would get better. By contrast, Lutris isn’t swimming in a big pond of sensitive information, but it is running on people’s hardware, and they should have both the right to be informed and the right to choose.

@[email protected]

In this particular case, I think the use of AI is tolerable. But as someone who uses Lutris sometimes, I do have concerns about whether or not this will cause issues with running games through it. How do we know if the AI generated code is going to make Lutris slow or possibly cause games to not work properly that otherwise would have worked perfectly fine?

Whenever I’ve tried running games in both Wine by itself and Lutris, I have noticed that they do often run noticeably slower in Lutris. And I also don’t have the best PC to begin with, so this is a big concern of mine.

HotsauceHurricane

Somehow hiding the code feels worse than using the code. This whole thing is yuck.

Ephera

Yeah, management wants us to use AI at $DAYJOB and one of the strategies we’ve considered for lessening its negative impact on productivity, is to always put generated code into an entirely separate commit.

Because it will guess design decisions at random while generating, and you want to know afterwards whether a design decision was made by the randomizer or by something with intelligence. Much like you want to know whether a design decision was made by the senior (then you should think twice about overriding this decision) or by the intern that knows none of the project context.

We haven’t actually started doing these separate commits, because it’s cumbersome in other ways, but yeah, deliberately obfuscating whether the randomizer was involved, that robs you of that information even more.
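The separate-commit idea described above can be sketched like this (repo layout, file names, and commit messages are all hypothetical; the point is only that provenance survives in the history):

```python
import pathlib, subprocess, tempfile

# Illustration of the "generated code goes in its own commit" convention:
# provenance is recorded in the commit message, so `git log` or `git blame`
# can later tell whether a design decision came from the generator.
def git(*args, cwd):
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = pathlib.Path(tempfile.mkdtemp())
git("init", "-q", cwd=repo)
git("config", "user.email", "dev@example.com", cwd=repo)
git("config", "user.name", "Dev", cwd=repo)
git("config", "commit.gpgsign", "false", cwd=repo)

# Commit 1: code produced by the assistant, after human review.
(repo / "cache_layer.py").write_text("# generated by the assistant, reviewed\n")
git("add", "cache_layer.py", cwd=repo)
git("commit", "-q", "-m", "feat: add cache layer [generated, reviewed]", cwd=repo)

# Commit 2: hand-written integration code, kept separate on purpose.
(repo / "wiring.py").write_text("# hand-written glue\n")
git("add", "wiring.py", cwd=repo)
git("commit", "-q", "-m", "feat: wire up cache layer [hand-written]", cwd=repo)

log = git("log", "--oneline", cwd=repo)
print(log)  # two commits, provenance visible in each message
```

The same idea works with commit trailers (for example a Co-authored-by line) instead of message tags; either way the information stays queryable later.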

@[email protected]

Well when you have a massive problem of harassment, death threats and fucking retarded shit stains screaming at every single dev that is even theorized to use ai regardless if it’s true or not.

I blame fucking no one for hiding the fact.

This is on the users not the dev. The users are fucking animals and created this very problem.

Blaming the wrong people and attacking them is the yuck.

Scream at the executives and giant corpos who created the problem not some random indie dev using a tool.

bibbasa

“if you’re gonna be the bitch, be the whole bitch”

@[email protected]

To add some context: My company has strongly encouraged some AI usage in our coding. They also encourage us to be honest about how helpful, or not, it is. Usually, I tell them it turns out a lot of garbage and once in a while helps make a lengthy task easier.

I can believe him about there being a sweet spot; where it’s not used for everything, only for processes that might have taken a night of manual checks. The very real, very reasonable backlash to it is how easily a poor management team or overconfident engineer will fall away from that sweet spot, and merge stuff that hasn’t had enough scrutiny.

Even Bernie Sanders acknowledged on the senate floor that in a perfect world, where AI is owned by people invested in world benefit, moderate AI use could improve many people’s lives. It’s just sad that in 99.9% of cases, we’re not anywhere near that perfect world.

I don’t totally blame the dev for defending his use of AI backed by industry experience, if he’s still careful about it. But I also don’t blame people who don’t trust it. It’s kind of his call, and if the avoidance of AI is important enough to you, I’d say fork it. I think it’s a small red flag, but not nearly enough of one for me to condemn the project.

@[email protected]

Even Bernie Sanders acknowledged on the senate floor that in a perfect world, where AI is owned by people invested in world benefit, moderate AI use could improve many people’s lives.

I don’t think you should make a claim like this while AI is being heavily subsidized and burning VC cash to stay afloat. The truth is whatever value it may add to such a society might actually be completely negated by its resource costs. Is even “moderate” AI use ecologically or economically sustainable?

@[email protected]

For full disclosure, I remembered once someone claimed to me there are AI models that use much less power. But, to confirm that statement before replying, I looked up an investigation, and they say it’s much murkier, and that a company’s own claims are usually understating it. So, you’re on point.

tb_

It can be useful for generating switch cases and other such not-quite copy-paste work too. There are reasonable use cases… if you ignore how the training data was sourced.

@[email protected]

And the incredible amount of damage and destruction it’s still inflicting on the environment, society, and the economy.

No amount of output is worth that cost, even if it was always accurate with no unethical training.

magikmw

Worth mentioning that the user who started the issue jumps around projects and creates inflammatory issues to the same effect. I’m not surprised Lutris’ maintainer went off like they did; the issue was not made in good faith.

Zos_Kia

Yes, both threads are led by two accounts with probably less than 50 commits to their names during the last year, none of which are of any relevance to the subject they are discussing.

In a world where you could contribute your time to make some things better, there is a certain category of people who seek out nice things specifically to harm them. As open source enters mainstream culture, it also appears on the radar of this kind of people. It’s dangerous to catch their attention, as once they have you they’ll coordinate over reddit, lemmy, github, discord to ruin your reputation. The reputation of some guy who never ever did them any harm apart from bringing them something they needed, for free, but in a way that doesn’t 100% satisfy them. Pure vicious entitlement.

I’d sooner have a drink with a salesman from OpenAI than with one of them.

@[email protected]

Just, what kind of pleasure can one derive from harming these projects? It’s so frigging weird, man.

@[email protected]

Putting people down is the easiest way to stand above them. 😒

@[email protected]

You can criticise them, but ultimately they are an unpaid developer making their work freely available for the benefit of us all. At least don’t harass the developer.

@[email protected]

You make a fair point, but I feel like the trolling reaction they gave was asking for more backlash. Not responding was probably the best move.

Zos_Kia

It’s typical of dev burnout, though. Communication starts becoming more impulsive and less constructive, especially in the face of conflicts of opinions.

I’ve seen it play out a few times already. A toxic community will take a dev who’s already struggling, troll them, screenshot their problematic responses, and use that in a campaign across relevant places such as GitHub, Reddit, Lemmy… Maybe add a little light harassment on the side, as a treat. It’s a fun activity! The dev spirals, posts increasingly unhinged responses, and often quits as a result.

The fact that the thread is titled “is lutris slop now” is a clear indication that the intention of the poster wasn’t to contribute anything constructive but to attack the dev and put them on their back foot.

@[email protected]

I see your point. I might also have responded poorly to that, on some level at least.

Zos_Kia

Yeah, same. I’d like to think I’d answer “I’ll use AI; if you don’t like it you can fork the project, and I wish you good luck. Go share your opinion on AI in an appropriate place.” But realistically there’s a high chance it catches me on a bad day and I get stupid.

@[email protected]

Trolling? They gave a pretty good answer explaining their reasoning.

@[email protected]

I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not.

Seems pretty obvious to me that they knew this wouldn’t go over well. It was inflammatory by design.

@[email protected]

Yeah ok. True. I think the rest of the post has much more weight, though. But yeah, he should have swallowed that last sentence.

UnfortunateShort

They are on Liberapay if you want to support the project, btw. Combined with Patreon, they sit at less than $700 a week. That’s like half a dev before tax.

Kilgore Trout

You might as well donate to Anthropic.

@[email protected]

?

UnfortunateShort

Yes, that’s Liberapay. You may have noticed that I mentioned Patreon.

@[email protected]

They want to put clanker code that they freely admit they don’t validate into a product that goes on the computers of people whose experience with Linux is “I heard it’s faster for games”.

It’s irresponsible to hide it from review. It doesn’t matter if AI tools got better, AI tools still aren’t perfect and so you still have to do the legwork. Or at least let your community.

Also, you should let your community make ethics decisions about whether to support you.

Overall it was a rash reaction to being pressured rudely in a GitHub thread; but you know AI is a contentious topic and you went in anyway. It’s weak AF to then have a tantrum and spit in the community’s face about it.

@[email protected]

Nothing is being hidden from review. The code is open source. They removed the specific attribution that indicates which parts of the code were created using Claude. That changes absolutely nothing about the ability to review the code, because a code review should not distinguish between human written code and machine written code; all of it should be checked thoroughly. In fact, I would argue that specifically designating code as machine written is detrimental to code review, because there will be a subconscious bias among many reviewers to only focus on reviewing the machine code.

P03 Locke

In fact, I would argue that specifically designating code as machine written is detrimental to code review, because there will be a subconscious bias among many reviewers to only focus on reviewing the machine code.

Oh, it’s more than subconscious, as you can see in this thread.

Lutris developer makes a perfectly sane and nuanced response to a reactionary “is lutris slop now” comment, and gets shit on for it, because everybody has to fight in black and white terms. There are no grey opinions to these people, only battle lines to be drawn.

What? Are you all going to shit on your lord and savior Linus himself for also saying he uses LLMs? Oh, what, you didn’t know?!?

Cyv_

I mean, I get if you wanna use AI for that, it’s your project, it’s free, you’re a volunteer, etc. I’m just not sure I like the idea that they’re obscuring what AI was involved with. I imagine it was done to reduce constant arguments about it, but I’d still prefer transparency.

Tony Bark (creator)

I tried fitting AI into my workloads just as an experiment and failed. It’ll frequently reference APIs that don’t even exist or over-engineer the shit out of something that could be written in just a few lines of code. Often it would be a combo of the two.

aloofPenguin

I had the same experience. I asked a local LLM about using solely Qt Wayland stuff for keyboard input; the only documentation was the official one (which wasn’t a lot for a noob), there were no examples of it being used online, and all my attempts at making it work were failing. It hallucinated some functions that didn’t exist, even when I let it do web search (NOT via my browser). This was a few years ago.

P03 Locke

This was a few years ago.

That’s 50 years in LLM terms. You might as well have been banging two rocks together.

@[email protected]

You might genuinely be using it wrong.

At work we have a big push to use Claude, but as a tool and not a developer replacement. And it’s working pretty damn well when properly setup.

Mostly using Claude Sonnet 4.6 with Claude Code. It’s important to run /init and check the output; that will produce a CLAUDE.md file that describes your project (which always gets added to your context).

Important: Review everything the AI writes, this is not a hands-off process. For bigger changes use the planning mode and split tasks up, the smaller the task the better the output.

Claude Code automatically uses subagents to fetch information, e.g. API documentation. Nowadays it’s extremely rare that it hallucinates something that doesn’t exist. It might use outdated info and need a nudge, like after the recent upgrade to .NET 10 (But just adding that info to the project context file is enough).
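For illustration only, a trimmed-down CLAUDE.md along those lines might look like this (the project, commands, and conventions here are entirely hypothetical; /init generates a fuller version from the actual repo):

```markdown
# MyApp

ASP.NET web API (.NET 10) with a React front end in `client/`.

## Build & test
- `dotnet build` and `dotnet test` from the repo root
- Front end: `npm test` inside `client/`

## Conventions
- Target .NET 10, not .NET 8; prefer minimal APIs
- Extend existing services instead of adding parallel implementations
- Every change must come with tests
```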

P03 Locke

Agreed, I don’t understand people not even giving it a chance. They try it for five minutes, it doesn’t do exactly what they want, they give up on it, and shout how shit it is.

Meanwhile, I put the work in, see it do amazing shit after figuring out the basics of how the tech works, write rules and skills for it, have it figure out complex problems, etc.

It’s like handing your 90-year-old grandpa the Internet, and they don’t know what the fuck to do with it. It’s so infuriating.

Probably because, like your 90-year-old grandpa with the Internet, you have to know how to use the search engine. You have to know how to communicate ideas to an LLM, in detail, with fucking context, not just “me needs problem solvey, go do fix thing!”

@[email protected]

Yeah, I mean, it’s not like AI can think. It’s just a glorified text predictor, the same as you have on your phone keyboard.
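For what it’s worth, “text predictor” can be made concrete with a deliberately tiny sketch: a bigram model that predicts the next word from the previous one, which is next-token prediction in its most stripped-down form (real LLMs condition on vastly more context, but the task is the same):

```python
from collections import Counter, defaultdict

# Toy bigram "text predictor": given the last word, suggest the word that
# most often followed it in the training text.
def train_bigrams(text: str):
    words = text.split()
    nxt = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        nxt[a][b] += 1
    return nxt

def predict_next(model, word):
    if word not in model:
        return None  # never seen this word, no prediction
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # prints "cat" (follows "the" twice, vs. once each for "mat"/"fish")
```

Phone keyboards use roughly this idea with better smoothing; an LLM replaces the lookup table with a neural network over long contexts, but it is still choosing a likely next token.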

@[email protected]

It’s like having an idiot employee that works for free. Depending on how you manage them, that employee can either do work to benefit you or just get in your way.

@[email protected]

Not even free, just cheaper than an actual employee for now. But greed is inevitable and AI is computationally expensive; it’s only a matter of time before these AI companies start cranking up the prices.

daikiki

Only it’s not free. If you run it in the cloud, it’s heavily subsidized and proactively destroying the planet, and if you run it at home, you’re still using a lot of increasingly unaffordable power, and if you want something smarter than the average American politician, the upfront investment is still very significant.

@[email protected]

Yeah I’m not buying the “proactively destroying the planet” angle. I’d imagine there’s a lot of misinformation around AI, given that the products surrounding it are mostly Western, like vaccines…

TachyonTele

Vaccines are misinformation? What.

Fatal

At a minimum, the agent should be compiling the code and running tests before handing things back to you. “It references non-existent APIs” isn’t a modern problem.

@[email protected]

I create custom embedded devices with displays and I’ve found it very useful for laying things out. Like asking it to take per-second wind speed and direction updates and build a wind rose out of them, with colored sections in each petal denoting the speed… It makes mistakes, but then you just go back and iterate on those mistakes. I’m able to do so much more, so much faster.

XLE

Considering the amount of damage AI has done to well-funded projects like Windows and Amazon’s services, I agree with this entirely. It might be crucial to help fix bigger issues down the line.

Alex

I expect because it wasn’t a user - just a random passer by throwing stones on their own personal crusade. The project only has two major contributors who are now being harassed in the issues for the choices they make about how to run their project.

Someone might fork it and continue with pure artisanal human crafted code but such forks tend to die off in the long run.

@[email protected]

I’m the opposite. It’s weird to me for someone to add an AI as a co-author. Submit it as normal.

@[email protected]

Tbh I agree, if the code is appropriate why care if it’s generated by an LLM

Dettweiler

It’s all about curation and review. If they use AI to make the whole project, it’s going to be bloated slop. If they use it to write sections that they then review, edit, and validate, then it’s all good.

I’m fairly anti-AI for most current applications, but I’m not against purpose-built tools for improving workflow. I use some of Photoshop’s generative tools for editing parts of images I’m using for training material. Sometimes it does fine, sometimes I have to clean it up, and sometimes it’s so bad it’s not worth it. I’m being very selective, and if the details are wrong it’s no good. In the end, it’s still a photo I took, and it has some necessary touchups.

@[email protected]

If a human is reviewing the code they submit and owning the changes I don’t care if they use an LLM or not. It’s when you just throw shit at the wall and hope it sticks that’s the problem.

I’m more concerned with the admitted OpenClaw usage. That’s a hydrogen bomb heading straight for a fireworks factory.

@[email protected]

It’s the same for me.

I don’t care if somebody uses Claude or Copilot if they take ownership and responsibility over the code it generates. If they ask AI to add a feature and it creates code that doesn’t fit within the project guidelines, that’s fine as long as they actually clean it up.

I’m more concerned with the admitted OpenClaw usage. That’s a hydrogen bomb heading straight for a fireworks factory.

This is the problem I have with it too. Using something that vulnerable to prompt injection to not only write code but commit it as well shows a complete lack of care for bare minimum security practices.

@[email protected]

It’s still made by the slop machine, the same one that could only be created by stealing every human made artwork that’s ever been published. (And this is not “just one company”, every LLM has this issue.)

Not only that, the companies building massive datacenters are taking valuable resources from people just trying to live.

If the developer isn’t able to keep up, they should look for (co-)maintainers. Not turn to the greedy megacorps.

@[email protected]

Speaking only about the programming part of the slop machine: programmers typically copy code anyway. It’s not an ethical issue for a programmer to use a tool that has been trained on other people’s “stolen” code.

@[email protected]

rofl don’t quit your day job

@[email protected]

If the developer isn’t able to keep up, they should look for (co-)maintainers.

Same energy as “Just go on Twitter and ask for free voice actors,” a la Vivziepop. A lot of people think this kind of shit is super easy, but realistically, it’s nearly impossible to get people to dedicate that kind of effort to something that can never be more than a money/time sink.

@[email protected]

Hey, if your project is important enough you might get your own Jia Tan (:

@[email protected]

I was under the impression that FOSS developers do it for the love of the game and not for monetary compensation. They’re literally putting the software out for free even though they don’t need to. They are going to be making this shit regardless.

P03 Locke

At this point, teachers do it “for the love of the game”, but they still want to get paid more than minimum wage.

@[email protected]

My point was that “Help me with my passion project for nothing” is a much harder sell. “Just find some help” is advice along the lines of “Just get in a plane and fly it.”

@[email protected]

That is technically what they are doing, but they often don’t consider the consequences, and they often react poorly when they realize that an Amazon (or whoever) has come along, contributed nothing, and monetized their work while dumping the support and maintenance on them.

That is the name of the game, though, if you use an MIT license.

@[email protected]

Absolutely true, but there’s one clear and obvious way out: drop support for the project yourself.

If a FOSS project is archived/unmaintained, for a large enough project, someone else will pick up where the original left off.

FOSS maintainers don’t owe anyone anything. What some developers do is amazing and I want them to keep developing and maintaining their projects, but I don’t fault them for quitting if they do.

P03 Locke

XKCD, of course

If a FOSS project is archived/unmaintained, for a large enough project, someone else will pick up where the original left off.

No, they won’t. This line of thinking is how we got the above.

Their line of work is thankless, and nobody wants to do a fucking thankless job, especially when the last maintainer was given a bunch of shit for it.

bookmeat

A few years ago we were all arguing about how copyright is unfair to society and should be abolished.

@[email protected]

Sure, but these same companies will drag you to court and rake you over the coals if you infringe on their copyrights.

lumpenproletariat

More reason to destroy copyright.

Normal people can’t afford to fight the big companies who break theirs anyway. It’s only really a tool for big businesses to use against us.

Because it’s used to benefit megacorps in practice. This situation is just more proof of that.

Beacon

We weren’t all saying copyright altogether was unfair. In fact, I think most of us have always said copyright law should exist, just that it shouldn’t be “lifetime of the creator plus another 75 years after their death”. Copyright should be closer to how it was when the law first started, which was something like 20 years.

(And personally imo there should also be some nuanced exceptions too.)

@[email protected]

Copyright is what makes the GPL license enforceable.

P03 Locke

The GPL license only exists because copyright fucked over the public contract that it promised to society: Copyrights are temporary and will be given back to public domain. Instead, shitheads like Mark Twain and Disney extended copyright to practically forever.

Licenses only matter if you care about copyright. I’d much rather just appropriate whatever I want, whenever I want, for whatever I want. Copyright is capitalist nonsense and I just don’t respect notions of who “owns” what. You won’t need the GPL if you abolish the concept of intellectual property entirely.

@[email protected]

It is offensive to me on a philosophical level to see that so many people feel that they should have control, in perpetuity, over who can see/read/experience/use something that they’ve put from their mind into the world. Doubly so when considering that their own knowledge and perspective is shaped by the works of those who came before. Software especially. It is sad that capitalism has so thoroughly warped the notion of what society should be that even self-proclaimed leftists can’t imagine a world where everything isn’t transactional in some way.

iamthetot

Who is we? I wasn’t.

@[email protected]

Just like how every other human artist learned how to draw by looking at examples their art teacher gave them, aka “stealing it” in your words.

@[email protected]

LLMs are not sentient and they’re not learning.

@[email protected]

Personally, I have never seen LLM-generated code that works without needing to be edited, but I imagine it probably does fine for routine blocks of code and very common things. I don’t see why a programmer needs to rewrite the same code blocks over and over again for different projects when an LLM can do that part, leaving more time for the programmer to write the more specialized parts. The programmer will still have to edit and verify the generated code, but programming is more mechanical than something like art.

However, for more specialized code, I would be concerned. It would likely not function at all without editing, and if it did function it probably wouldn’t be optimized or secure. That said, this programmer claims to have 30 years of experience, and if that’s the case then he likely knows this and probably edits the LLM output code himself.

As I have said before, generative AI is a tool, like Photoshop. I don’t see why people should reject a tool if it can make their job easier. It won’t be able to completely replace people effectively. Businesses will try, but quality will drop off because the tool isn’t being used by people who understand what the end result needs to be, and those businesses will inevitably lose money.

P03 Locke

However, for more specialized code, I would be concerned. It would likely not function at all without editing, and if it did function it probably wouldn’t be optimized or secure.

That’s not completely true. Claude and some of the Chinese coding models have gotten a lot better at creating a good first pass.

That’s also why I like tests. Just force the model to prove that it works.

Oh, you built the thing and think it’s finished? Prove it. Go run it. Did it work? No? Then go fix the bugs. Does it compile now? Cool, run the unit test platform. Got more bugs? Fix them. Now, go write more unit tests to match the bugs you found. You keep running into the same coding issue? Go write some rules for me that tell yourself not to do that shit.

I mean, I’ve been doing this programming shit for many decades, and even I’ve been caught by my overconfidence of trying to write some big project and thinking it’s just going to work the first time. No reason to think even a high-powered Claude thinking model is going to magically just write the whole thing bug-free.
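The test-first loop described above can be sketched in miniature. The helper below is a hypothetical stand-in for LLM-written code (nothing from Lutris or any real project), and the human-owned test is the gate it has to pass before the change gets committed:

```python
# A hypothetical illustration of gating generated code behind a human-owned
# test. parse_version() stands in for an LLM-written helper; the assertions
# are the contract it must satisfy before the commit is accepted.

def parse_version(tag: str) -> tuple[int, int, int]:
    """Parse a 'vMAJOR.MINOR.PATCH' tag such as 'v0.5.18' into ints."""
    parts = tag.lstrip("v").split(".")
    return (int(parts[0]), int(parts[1]), int(parts[2]))

def test_parse_version() -> None:
    # The reviewer writes (and owns) these cases, including ones that
    # previously tripped the model up, as described above.
    assert parse_version("v0.5.18") == (0, 5, 18)
    assert parse_version("1.2.3") == (1, 2, 3)

if __name__ == "__main__":
    test_parse_version()
    print("all checks passed")
```

If the model’s first pass fails the test, the failure itself, not the reviewer’s patience, is what drives the next iteration.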

@[email protected]
  • Ethical issue: products of the mind are what makes us humans. If we delegate art, intellectual works, creative labour, what’s left of us?
  • Socio-economic issue: if we lose labour to AI, surely the value produced automatically will be redistributed to the ones who need it most? (Yeah we know the answer to this one)
  • Cultural issue: AIs are appropriating intellectual works and virtually transferring their usufruct to bloody billionaires

XLE

“If” doing all the lifting here.

If we ignore the mountain of evidence saying the opposite…

@[email protected]

I want to one day make a game, and there’s no way I’m not prototyping it with LLM code. I would want a real coder to finalize things if I ever finished the game, but I’ve never made real progress learning to code, even in school.

@[email protected]

Yeah. Call me if he starts using AI artwork.

wholookshere

So you draw the line at stealing artists’ work, but not programmers’ work?

Dremor

Being a developer, I don’t care if someone else uses my code. Code is like a brick: by itself it has little value; the real value lies in how it is used.
If I find an optimal way to do something, my only wish is to make it available to as many people as possible, for those who come after.

wholookshere

Sure, but that’s just your view.

It’s also not how LLMs work.

They gobble up everything and produce unreadable code. That’s not learning.

@[email protected]

Tbh all programmers have been copy-pasting from each other forever. The middle step of searching Stack Overflow or GitHub for the code you want is simply removed.

wholookshere

That’s not what an LLM is doing, is it?

galaxy_nova

Exactly. If someone has already come up with an optimal solution, why the hell would I reimplement it? My real problem is not with LLMs themselves but with the sourcing of the training data and the power usage. If I could use an “ethically sourced” LLM locally, I’d be mostly happy. Ultimately, LLMs are only good for code specifically; for architecture, or things that require a lot of thought like data pipelines, I’ve found AI to be pretty garbage when experimenting.

@[email protected]

Lutris is GPL-licenced, so isn’t it the opposite of stealing?

wholookshere

LLMs have stolen works from more than just artists.

At a minimum, all public repositories have been used as training data, regardless of licence, including licences that require all derivative work to be released under the same licence.

So there’s more being stolen than just Lutris.

Lung

So he’s a badass Robinhood pirate that steals code from corporations and gives it to the people?

wholookshere

What the fuck are you talking about?

How is using a tool with billions of dollars behind it Robin Hood?

How is stealing open source projects’ code, regardless of license, stealing from corporations?

Lung
  • he’s not Anthropic, and doesn’t have billions of dollars
  • stealing from open source is not stealing; that’s the point of open source
  • the argument above is that these models are allegedly trained “regardless of license”, i.e. implying they are trained on non-OSS code

@[email protected]

No, the LLM was trained on other code (possibly including Lutris, but also probably like billions of lines from other things)

@[email protected]

I don’t support the use of AI tools in general, but I have a soft spot for long-term maintainers. These people generally don’t have enough support to treat the project as more than a hobby, and when a project becomes popular the pressure is massive.

If the community won’t step up to take the burden off the maintainer, but still wants active development, what can you do? As long as the program continues to be high quality, I can’t complain about a free thing.

@[email protected]

Up until recently, Lutris worked perfectly for me. Ever since around the release of Wine 11, though, I can’t get anything to even install, let alone play. This might explain my increasing frustration with the app.

Guess I’m going back to using Bottles for the odd game or app I don’t feel like trying to shoehorn into Steam.

@[email protected]

I am very much a beginner, and until now Lutris was kind of my default answer for “how the hell do I get that Windows exe installer to spill its entrails so I can run it through Wine” (or even native engines like VCMI, Daggerfall Unity and Creatures Docking Station).

For everything that doesn’t come from Steam, obviously.

What is the more direct way? Does Bottles do that? I haven’t tried it yet.

@[email protected]

There are actually a number of options. Lutris and Bottles are both built on top of Wine, and there are other apps that use Wine to make it all work, but I’m not very familiar with anything else… yet!

Bottles can be a little tricky to get used to. One of the biggest issues is that it sandboxes the Wine runtime, so you’ll often need to move your .exe into the right file path. Other than that, I found it pretty easy to use, so if you need something you can “drop in” to replace Lutris, it’s worth a try! It has some helpful preconfigured runtime environments, depending on whether you’re running a general purpose application or a video game. Power users can even start from a blank slate.
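To make the sandboxing caveat concrete: with the Flatpak build, each bottle’s C: drive lives under Bottles’ per-app data directory, so an installer has to be copied inside it before the bottle can see it. The directory layout and bottle name below are assumptions for illustration only; check where your own install actually keeps its bottles.

```shell
# Hypothetical sketch for the Flatpak build of Bottles; the path layout
# is an assumption, so verify it against your own install.
BOTTLE="Games"   # example bottle name
DRIVE_C="$HOME/.var/app/com.usebottles.bottles/data/bottles/bottles/$BOTTLE/drive_c"
INSTALLER="$HOME/Downloads/setup.exe"   # the Windows installer you downloaded

# Create a drop-in folder inside the bottle's C: drive and copy the
# installer there so the sandboxed Wine runtime can actually see it.
mkdir -p "$DRIVE_C/installers"
if [ -f "$INSTALLER" ]; then
    cp "$INSTALLER" "$DRIVE_C/installers/"
fi
```

From inside the bottle, the file then shows up under C:\installers\.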

@[email protected]

Interesting. I am mostly interested in running games. I’ll have a look into how Bottles work then.

I feel like for most if not all of my use cases that are not specific games, I can find some decent stuff running natively.

@[email protected]

If you’re talking about games, I usually just add the exe to Steam as a non-Steam game and enable Proton for it.
