There's a bit of drama going on around the popular game manager Lutris right now, with users pointing out that the developer is using AI-generated code via Claude.

A user asked on the official Lutris GitHub two weeks ago “is lutris slop now” and noted an increasing number of “LLM generated commits”. To which the Lutris creator replied:

It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way, but it wasn’t AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.

I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.
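For context on the “co-authorship” being removed: AI coding tools such as Claude Code typically append a `Co-Authored-By` trailer to commit messages, and stripping that trailer line is all it takes to make generated commits blend in. A minimal sketch of the idea (the commit message below is invented for illustration; only the trailer format is the conventional Git one):

```python
# Hypothetical commit message carrying the conventional Git co-authorship
# trailer that AI coding tools typically append.
message = """Fix runner detection

Co-Authored-By: Claude <noreply@anthropic.com>
"""

# Keep only the lines that are not co-authorship trailers.
kept = [line for line in message.splitlines()
        if not line.lower().startswith("co-authored-by:")]
print("\n".join(kept).strip())  # the trailer is gone; nothing else marks the commit
```

Once the trailer is dropped, nothing in the commit itself distinguishes generated code from hand-written code, which is exactly the point the creator is making.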

Tony Bark (creator)

AI has caused plenty of headaches for developers. This isn’t some culture war shit.

@[email protected]

But that kind of proves their point, right?

Yes, a lot of projects have had issues with contributors who push unreviewed AI slop that they don’t understand, ultimately creating more work for the project. Or with avalanches of AI code review bug reports that do nothing to help. But that’s not what’s happening here.

In this case, the main developer of the project is choosing to use AI, on their own terms, because they find it helpful, and people are giving them shit for it. It’s their project and they feel this technology is beneficial. Isn’t that their call to make? Why are people treating the former and the latter as completely interchangeable scenarios when they’re clearly not? It kind of does suggest that people are coming at this from a more ideological rather than rational perspective.

@[email protected]

Whether or not they can handle it is for each developer to decide.

As I said: judge the result, not the workflow.

Tony Bark (creator)

As I said: judge the result, not the workflow.

I’ve tested AI myself and seen the results. I’ll judge how I see fit.

@[email protected]

I am not talking about the result of the AI. I am talking about Lutris. If the code that ends up in the repo is fine, it doesn’t matter if it was the author, an agent, or an agent followed by a ton of cleanup by the author. If the code is shit it also doesn’t matter if it was an incompetent AI or an incompetent human. Shitty code is shitty, good code is good. The result matters.

@[email protected]

There’s a problem with that. The vast majority of Linux users are probably more tech savvy than average but I’d wager not all of them or even the vast majority have the skills to vet the code.

Lots of the people in the gaming space who are having Lutris suggested/recommended to them are not going in to check that code for problems. They install the flatpak and move on with their lives.

It appears (from what I’ve read, which isn’t necessarily the be-all and end-all) that the people taking exception to the use of AI to code Lutris are doing so because they do decompile and vet code.

My understanding is that it’s harder to vet AI code in general because when it hallucinates it may do so in ways that appear correct on the surface, and/or in ways that don’t even give a significant indication of what that code is attempting to do. This is the problem with vibe coding in general, from my understanding, and it becomes harder and harder even for senior engineers to check the output because of the lack of a frame of reference.
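One hand-written illustration of that point (not actual model output): a hallucinated call can read perfectly plausibly and give no surface hint that it’s wrong, so only execution or close review catches it.

```python
import json

# `json.safe_loads` is the kind of name a model might confidently emit:
# it reads plausibly, but Python's standard library has no such function.
assert hasattr(json, "loads")           # the real API
assert not hasattr(json, "safe_loads")  # the plausible-looking fake
```

A reviewer skimming a diff containing `json.safe_loads(...)` could easily wave it through, which is the “appears correct on the surface” failure mode described above.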

You’re asking people who don’t have the skills to ignore people who do have the skills who are sounding the alarm.

I get that this person is a single person writing code and disseminating it for free. I get that we should be thankful for free and open software. I fully understand why this person might use AI to help with coding.

I understand that they are upset about the backlash. But that was a very foreseeable consequence of the credit they gave the AI (a choice they made), and honestly of the use of AI itself (which might have been called out later on if they hadn’t credited it).

They shot themselves in the foot with the part of their response that was flippant and a “fuck you” to anyone who might find the use of AI concerning.

There’s also the fact that AI is something that a lot of people in the Linux community at large seem to already be boycotting and boycotting derivatives of it make sense.

Just because you create something for free doesn’t mean people have to use it. Or that people aren’t free to boycott it.

@[email protected]

Thanks for that long answer. I agree completely with the second half of it. I also agree with most of the first half of it, but I have to add a remark to it:

My understanding is that it’s harder to vet AI code in general because when it hallucinates it may do so in ways that appear correct on the surface, and/or in ways that don’t even give a significant indication of what that code is attempting to do. This is the problem with vibe coding in general, from my understanding, and it becomes harder and harder even for senior engineers to check the output because of the lack of a frame of reference.

That is mostly true, but it also depends on the usage. You don’t have to tell an agent to “develop feature X” and then go for a coffee. You can issue relatively narrow-scoped prompts that yield small amounts of changes/code which are far easier to review. You can work that way in small iterations, making it completely possible to follow along and adjust small things instead of getting a big ball of mud to untangle.

And while it’s true that not everyone is able to vet code, that was also true before and without coding agents. Yet people run random curl-piped-to-bash commands they copy from some website because it says it will install whatever. They install something from flathub without looking at the source (not even talking about chain of trust for the publishing process here). There is so much bad code out there written by people who are not really good engineers but who are motivated enough to put stuff together. They also made and make ugly mistakes that are hard to spot and due to bad code quality hard to review.

The main risk of agents is that they also increase the speed of these developers, which means they pump out even more bad code. But the underlying issue existed before, and agents don’t automatically mean something is bad. It would also be dangerous to believe that, because it might reinforce the feeling of security when using a piece of code that was (likely) written without any AI influence. But that’s just not true; this code could be as harmful or even more harmful. You simply don’t know if you don’t review it. And as you said: most people don’t.

@[email protected]

Frankly, AI-generated code is often easier to review, thanks to a combination of standardized practices (LLMs regress to the mean by design) and a somewhat overly enthusiastic approach to commenting and segmented layouts.

@[email protected]

judge the result, not the workflow.

This kind of seems like bad advice in general. The process to create a result is often extremely important to be aware of. For example, if possible, I would like to not consume products built with slave labor.

@[email protected]

The thing is, you’re conflating ethical and practical concerns here. The commenter you’re responding to is clearly talking about the practical aspects of using AI tools.

If you have a fundamental moral issue with AI that is entirely independent of how efficacious it is, that’s fine. That’s a completely reasonable position to hold. But don’t fall into the trap of wanting every use of genAI to be impractical just because that conclusion aligns with your morality.

If this is an ethical stance that you truly hold, you should be willing to believe that using these tools is bad even when they’re effective. But a lot of people instead have to insist that every use of AI is impractical, in the face of any evidence to the contrary, because they’ve talked themselves into believing that on some fundamental level. Like “If AI is ever useful, that means I’m wrong about it being immoral.”

@[email protected]

Depends. If you are generally careful about what products/projects you use and audit them, and you notice that the owner has horrible code hygiene, bad dependency management, etc., then sure. But why judge them for the tools they use? You can still audit the result the same way. And if you notice that code hygiene and dependencies suck, does it matter if they suck because the author misused coding agents, because they simply didn’t give a damn, or because they are incapable of doing any better?

You’ve likely stumbled on open source repos in the past where you rolled your eyes after looking into them. At least I have. More than once. And that was long long before we had coding agents. I’ve used software where I later saw the code and was suprised this ever worked. Hell, I’ve found old code of myself where I wondered why this ever worked and what the fuck I’ve been smoking back then.

It’s ok to consider agent usage a red flag that makes you look closer at the code. But I find it unfair to dismiss someones work or abilities just because they use an agent, without even looking at what they (the author, ultimately) produce. And by produce I don’t mean the final binary, but their code.
