The most thought-provoking thing I saw at Gamescom 2025 is the blurred line that AI represents to the industry at large.
@[email protected]

Eurogamer is shit. You can serve ads without tracking. But, they don’t care.

Echo Dot

Yeah, I hate this trend where you have to subscribe in order to not be tracked. I just agree to the cookies and then block them at the OS level. Get to have my cake and eat it too.
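For anyone curious how OS-level blocking works in practice, the simplest version is a hosts-file blocklist, which null-routes tracker domains for every app on the machine. A minimal sketch (the tracker domains below are made-up examples, not real ones):

```
# /etc/hosts on Linux/macOS, or C:\Windows\System32\drivers\etc\hosts on Windows
# 0.0.0.0 resolves the domain to nowhere, so requests to it silently fail
0.0.0.0  tracker.example-adnetwork.com
0.0.0.0  telemetry.example-analytics.net
```

Curated community blocklists (e.g. the StevenBlack/hosts project) maintain thousands of such entries, and a local DNS filter like Pi-hole achieves the same effect network-wide.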

@[email protected]

Honestly, it would be weird for any industry to start caring about ethics after all this time.

Not an endorsement of AI but a criticism of capitalism.

@[email protected]

It doesn’t have to be an ethical nightmare. Public-domain datasets on local hardware using renewable electricity: who’s mad now, the artist you already can’t afford to pay because you have no fucking money anyway?

@[email protected]

AI would be fine if we just changed everything about it

lol

@[email protected]

Not all LLMs are the same. You can absolutely take a neural network model and train it yourself on your own dataset that doesn’t violate copyright.
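As a toy illustration of that point (a word-level Markov chain, not a real neural network or LLM, with a made-up one-line corpus), the training data for a generative model can be entirely text you own:

```python
import random
from collections import defaultdict

def train(corpus, order=2):
    # Map each run of `order` words to the words observed after it.
    words = corpus.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, seed, length=20, rng=None):
    # Walk the model from a seed, sampling a continuation at each step.
    rng = rng or random.Random(0)
    out = list(seed)
    for _ in range(length):
        choices = model.get(tuple(out[-len(seed):]))
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

model = train("the cat sat on the mat and the cat saw the dog")
print(generate(model, ("the", "cat")))
```

Swap the one-line corpus for your own writing or public-domain text and the provenance question disappears; the hard part with real LLMs is doing this at the scale their quality depends on.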

@[email protected]

I can almost guarantee that hundred-billion-parameter LLMs are not trained on that; they’re trained on the whole web, scraped to the furthest extent.

The only sane and ethical solution going forward is to force all LLMs to be open-sourced. They use datasets generated by humanity, so give back to humanity.

@[email protected]

Besides, the article is about image gen AI, not LLMs.

@[email protected]

That’s an LLM, buddy.

@[email protected]

What do you think the letters LLM stand for, pal?

@[email protected]

Article directly complains about AI artwork. You know what LLM even means?

@[email protected]

Yes, I do. I also know that multimodal LLMs are what generate AI artwork.

null

Image Gen AI is an LLM?

@[email protected]

Yes, it is. LLMs do more than just text generation.

@[email protected]

The only sane and ethical solution going forward is to force all LLMs to be open-sourced.

Jesus fucking christ. There are SO GODDAMN MANY open-source LLMs, even from fucking scumbags like Facebook. I get that there are subtleties to the argument on the pro-AI vs. anti-AI side, but you guys just screech and scream.

https://github.com/eugeneyan/open-llms

@[email protected]

Where are the sources? All I see is binary files.

@[email protected]

There are barely any. I can’t name a single one offhand. Open weights mean absolutely nothing about the actual source of those weights.

@[email protected]

even meta

Lol, of course Meta, they have the biggest big-data trove out there, full of private data.

Most of the open-source releases are recompilations of existing open-source LLMs.

And the page you’ve listed is mostly <10B-parameter models, bar LLMs with huge financing and generally either corporate or Chinese backing behind them.

Riskable

Training an AI is orthogonal to copyright since the process of training doesn’t involve distribution.

You can train an AI with whatever TF you want without anyone’s consent. That’s perfectly legal fair use. It’s no different than if you copy a song from your PC to your phone.

Copyright really only comes into play when someone uses an AI to distribute a derivative of someone’s copyrighted work. Even then, it’s really the end user that is even capable of doing such a thing by uploading the output of the AI somewhere.

@[email protected]

That’s assuming you own the media in the first place. Often AI is trained on large amounts of illegally downloaded data.

So, yes, it’s fair use to train on information you have or have rights to. It’s not fair use to illegally obtain new data. What’s more, torrenting that data often means you also distribute it.

For personal use, I don’t have an issue with it anyway, but legally it’s not allowed.

Riskable

Incorrect. No court has ruled in favor of any plaintiff bringing a copyright-infringement claim against an LLM developer. Here’s a breakdown of the current court cases and their rulings:

https://www.skadden.com/insights/publications/2025/07/fair-use-and-ai-training

In both cases, the courts ruled that training an LLM on copyrighted works is highly transformative and thus fair use.

The plaintiffs in one case couldn’t even come up with a single iota of evidence of copyright infringement (from the output of the LLM). This, IMHO, is the single most important takeaway from the case, because the only thing that really mattered was the point where the LLM generates output. That is, the point of distribution.

Until an LLM is actually outputting something, copyright doesn’t even come into play. Therefore, the act of training an LLM is just like I said: A “Not Applicable” situation.

@[email protected]

Just a heads up that Anthropic has just lost a $1.5B case for downloading and storing copyrighted works. That’s $3,000 for each of 500,000 books.

The wheels of justice move slowly, but fair use has limits. Commercial use generally weighs against it; commentary and transformation weigh in favor. So we’ll see how this progresses with the many other cases.

Warner Brothers have recently filed another case, I think.

Riskable

Anthropic didn’t lose their lawsuit. They settled. Also, that was about their admission that they pirated zillions of books.

From a legal perspective, none of that has anything to do with AI.

Company pirates books -> gets sued for pirating books. Company settles with the plaintiffs.

It had no legal impact on training AI with copyrighted works or what happens if the output is somehow considered to be violating someone’s copyright.

What Anthropic did with this settlement is attack their Western competitor: OpenAI, specifically. Because Google already settled with the Authors Guild for their book-scanning project over a decade ago.

Now OpenAI is likely going to have to pay the Authors Guild too, even though they haven’t come out and openly admitted that they pirated books.

Meta is also being sued for the same reason but they appear to be ready to fight in court about it. That case is only just getting started though so we’ll see.

The real, long-term impact of this settlement is that it just became a lot more expensive to train an AI in the US (well, the West). Competitors in China will never have to pay these fees and will continue to offer their products to the West at a fraction of the cost.

@[email protected]

While that’s interesting info and links, I don’t think that’s true.

https://share.google/opT62A4cIvKp6pwhI This case involving Thomson Reuters has, but it’s expected to be overturned.

Most of the big cases are in the early stages. Let’s see what the Disney one does.

There is also the question, not just of copyright or fair use, but legally obtaining the data. Facebook torrented terabytes of data and claimed they did not share it. I don’t know that that’s enough to claim innocence. It hasn’t been for individuals.

The question is whether they are actually transformative. Just being different is not enough. I can’t use Disney IP to make my new movie, for instance.

@[email protected]

Out of legit curiosity, how many models do you know trained exclusively on public domain data, which are actually useful?

lime!

anything trained on Common Corpus. which, oddly, is harder to find than the actual training data.

@[email protected]

I mean this respectfully, but that wasn’t an actual answer.

lime!

no, it sort of reinforced your point.

@[email protected]

I see, that’s fair.

HarkMahlberg

Beyond the copyright issues and energy issues, AI does some serious damage to your ability to do actual hard research. And I’m not just talking about “AI brain.”

Let’s say you’re looking to solve a programming problem. If you use a search engine and look up the question or a string of keywords, what do you usually do? You look through each link that comes up and judge books by their covers (to an extent). “Do these look like reputable sites? Have I heard of any of them before?” You click through a bunch of them and read them. Now you evaluate their contents. “Have I already tried this info? Oh, this answer is from 15 years ago, it might be outdated.” Then you pare your links down to a smaller number and try the solution each one provides, one at a time.

Now let’s say you use an AI to do the same thing. You pray to the Oracle, and the Oracle responds with a single answer. It’s a total soup of its training data. You can’t tell where specifically it got any of this info. You just have to trust it on faith. You try it; maybe it works, maybe it doesn’t. If it doesn’t, you have to write a new prayer and try again.

Even running a local model means you can’t discern the source material from the output. This isn’t Garbage In, Garbage Out, but Stew In, Soup Out. You can feed an AI a corpus of perfectly useful information, but it will churn everything into a single liquidy mass at the end. You can’t be critical about the output, because there’s nothing to critique but a homogeneous answer. And because the process is destructive, you can’t un-soup the output. You’ve robbed yourself of the ability to learn from the input, and put all your faith into the Oracle.

@[email protected]

you can’t be critical about the answer

You actually can, and you should be. And the process is not destructive, since you can always undo in tools like Cursor, or discard in git.

Besides, you can steer a good coding LLM in the right direction. The better you understand what you’re doing, the better.

@[email protected]

How would you be critical of the answer without also doing a traditional search to compare its answer? If you have to search and verify the answer anyway, didn’t we just add an unnecessary step to the process?

@[email protected]

You can have firsthand knowledge of the technology and just need to generate the code? I mean, I would need to google different function names and conversion tricks all the time anyway, even if I’m really good at it. If AI slops it out for me, it just speeds things up by a lot, and I can catch the bad parts.

Again, the better you know what you are doing, the more it could help.

ElectricMachman

But if you know what you’re doing, you can do a better job than the “AI”…??? This is a weird argument

@[email protected]

With infinite time, sure. Time isn’t infinite.

@[email protected]

That would be all well and good, if corpos weren’t pushing AI as a technology that everyone should be using all the time to reshape their daily lives.

The people most attracted to AI as a technology (and the ones that AI companies are marketing to the hardest) are the ones who want to use it for things where they don’t already have domain-specific expertise. Non-artists generating art, or non-coders making apps on “vibes”, etc. Have you ever heard of Travis Kalanick? He’s one of the co-founders of Uber and he recently made the news after he went on some podcast to breathlessly rave about how he’s been using LLMs to do “vibe physics”. Kalanick, as you can guess, is not a physicist. In fact he’s not a scientist of any kind.

The vast, vast majority of people using AI aren’t using it to augment their existing skills, and they aren’t using their own expertise to evaluate the output critically. This was never the point nor the promise of AI, and it’s certainly not the direction that the people pushing this technology are attempting to push it.

@[email protected]

AI marketing is total BS, but that doesn’t mean AI is not useful in its current state. People try to argue as if that were the case, but it simply isn’t. Agentic AI + an LLM does speed up usual tasks by a whole fucking lot.

Come the day these people wonder why they don’t have access to the essential tools they need to be effective (means of production), they’ll have completely forgotten they were against these tools purely out of principle. This is as shortsighted as it gets.

@[email protected]

AI marketing is total BS, but that doesn’t mean AI is not useful in its current state.

But the AI only exists because of the marketing BS! The fact that AI is useful to qualified people in specialized fields doesn’t matter when the technology is being mass marketed to a completely different group of people for completely different use cases.

LLMs are called “large” for a reason — their existence demands large datasets, large data centers, large resource consumption, and large capital expenditure to secure all of those things. The only entities with the resources to make that happen are large corporations (and rich nation-states, but they seem to be content to keep any of their own LLM efforts under wraps for now). You can only say “don’t blame the technology, blame the technologist” when it’s possible to separate the two, but in this case it’s not. LLMs don’t exist without the corpos, and the corpos are determined to push LLMs into places and use cases where they have no business being.

HarkMahlberg

You misunderstood. I wasn’t saying you can’t Ctrl-Z after using the output, but that the process of training an AI on a corpus yields a black box. This process can’t be reverse-engineered to see how it came up with its answers.

It can’t tell you how much of one source it used over another. It can’t tell you what its priorities are in evaluating data… not without the risk of hallucinating on you when you ask it.

@[email protected]

The topic is : using AIs for game dev.

  1. I’m pretty sure that generating placeholder art isn’t going to ruin my ability to research
  2. AIs need to be used TAKING THEIR FLAWS INTO ACCOUNT and for very specific things.

I’m just going to be upfront: AI haters don’t know how this shit actually works, except that, by existing, LLMs drain oceans and create more global warming than the entire petrol industry; and AI bros are filling their codebases with junk code that’s going to explode in their faces anywhere from 6 months to 3 years out.

There is a sane take: use AIs sparingly, taking their flaws into consideration, for placeholder work, or once you obtain a training base of content you are allowed to use. Run it locally, and use renewable sources for electricity.

lime!

as someone who has studied ML since around 2015, i’m still not convinced. i run local models, i train on CC data, i triple-check everything, and it’s just not that useful. it’s fun, but not productive.

HarkMahlberg

Wild to see you call for a “sane take” when you strawman the actual water problem into “draining the oceans.”

Local residents with nearby data centers aren’t being told to take fewer showers with salt water from the ocean.

@[email protected]

Is that a problem with the existence of LLMs as a technology, or with shitty corporations working with corrupt governments to starve local people of resources to turn a quick buck?

If you are allowing a data center to be built, you need to make sure you have the power etc. to run it without negatively impacting the local people. It’s not the fault of an LLM that they fucked this up.

@[email protected]

Are you really gonna use the “guns don’t kill people, people kill people” argument to defend LLMS?

Let’s not forget that the first ‘L’ stands for “large”. These things do not exist without massive, power and resource hungry data centers. You can’t just say “Blame government mismanagement! Blame corporate greed!” without acknowledging that LLMs cease to exist without those things.

And even with all of those resources behind it, the technology is still only marginally useful at best. LLMs still hallucinate, they still confidently distribute misinformation, they still contribute to mental health crises in vulnerable individuals, and no one really has any idea how to stop those things from happening.

What tangible benefit is there to LLMs that justifies their absurd cost? Honestly?

@[email protected]

Making up for deficiencies in your own artistic and linguistic skills, and getting easy starting points for coding solutions.

LLMs still hallucinate,

Emergent behaviour can be useful in coming up with new ideas that you were not expecting and areas to explore

they still confidently distribute misinformation,

yeah, that’s been a problem since language, or, if you want a statement closer to the topic at hand, since the printing press.

they still contribute to mental health crises in vulnerable individuals, and no one really has any idea how to stop those things from happening.

so does the fucking internet.

Are you really gonna use the “guns don’t kill people, people kill people” argument to defend LLMS?

chad.jpg

@[email protected]

AI is the future. Sure you can hate on it all you like. Can’t stop progress.

@[email protected]

Heh. Out of curiosity, how many NFTs did you buy?

@[email protected]

Zero. I took a deep dive into NFTs and determined they were problematic.

@[email protected]

All I ask is: in what way are LLMs progress? The ability to generate a lot of slop is pretty much the only thing LLMs are good for. Even that is not really cheap, especially factoring in the environmental costs.

@[email protected]

Sure everything starts with meager beginnings. The AI you’re upset about existing may find the cure to many diseases. It may save the planet one day.

@[email protected]

The type of AI that researchers are building to try to cure diseases is not LLMs. So it’s not the stuff running behind this kind of tech for games.

@[email protected]

Or it’s a silly, halfwit race to build out the infrastructure (because they’re smoking their own product) that could crash the economy.

You’re only seeing the upsides (nifty pictures, AI music, whatever) because the entire shitshow is a free or exceptionally underpriced preview of what’s to come. While everyone from Google to Grok to your mom fails to find a way to actually profit off of it all, once they have to figure the costs of the water, power, training data, lawsuits and other shit into the actual equation, it blows up.

These aren’t my ideas - please, take a break from your preconceptions and read:

https://futurism.com/data-centers-financial-bubble

https://www.zdnet.com/article/todays-ai-ecosystem-is-unsustainable-for-most-everyone-but-nvidia-warns-top-scholar/

https://www.dailykos.com/stories/2025/8/22/2339789/-Why-The-AI-Bubble-Will-Burst

https://www.wheresyoured.at/the-haters-gui/

@[email protected]

Where is the idea that LLMs will ever lead to curing diseases coming from? What is the possible mechanism? LLMs generate text from probability distributions. There is no reason to trust their output, because they have no built-in concept of true or false. When one cannot judge the quality of the output, how can one reliably use it as a tool for any purpose, let alone scientific research?

@[email protected]

There are other kinds of AI that can look over imaging like CAT scans and, in some situations, catch things a doctor can’t.

There are also ones that can simulate drug interactions with the body and can be used to model creating novel drugs for treatments.

These are not LLMs though.

@[email protected]

LLMs are actually spectacular for indexing large amounts of text data and pulling out the answer to a query. Combine that with natural language processing and it is literally what we all thought Ask Jeeves was back in the day. If you ever spent time sifting through stack overflow pages or parsing discussion threads, that is what it is good at. And many models actually provide ways to get a readout of the “thought process” and links to pages that support the answer which drastically reduces the impact of hallucinations.
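
A stripped-down sketch of that “index text, answer with a source” pattern (hypothetical documents and URLs; real systems use embeddings rather than raw keyword overlap):

```python
def search(docs, query):
    # Score each document by how many query words it contains,
    # and return the best match together with its source URL.
    q = set(query.lower().split())
    scored = [(len(q & set(d["text"].lower().split())), d) for d in docs]
    score, best = max(scored, key=lambda t: t[0])
    if score == 0:
        return None  # nothing relevant: better than inventing an answer
    return {"answer": best["text"], "source": best["url"]}

docs = [
    {"text": "use pip install to add a python package", "url": "https://example.com/pip"},
    {"text": "git rebase rewrites commit history", "url": "https://example.com/git"},
]
print(search(docs, "how do I install a python package"))
```

Keeping the source alongside the answer is what makes the result checkable, which is exactly the property credited here with reducing the impact of hallucinations.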

And many of those don’t necessarily require significant power usage… relative to what is already running in data centers.

The problem is that people use it and decide it is “like magic” and then insist on using it for EVERYTHING. And you go from “Write me a simple function to interface with this specific API” to “Write me an application to do my taxes and then file them for me”

Of course, there is also the issue of where training data comes from. Which is why so much of the “generative AI” stuff is so disgusting because it is just stealing copyrighted data left and right. Rather than the search engine style LLMs that mostly just ignore the proverbial README_FBI.txt file.

And the “this is magic” is on both sides. The evangelists are demonstrably morons. But the rabid anti-AI/“AI” crowd are just as bad with “it gave you a wrong answer, it is worthless”. Think of it less like a magic box and more like asking a question on a message board. You are gonna get a LOT of FUD and it is on you to do additional searches to corroborate when it actually matters.

Like a lot of things AI/“AI”, they are REALLY good at replacing intern/junior level employees (and all the consequences of that…) and are a way to speed through grunt work. And, much like farming a task out to that junior level employee, you need to actually supervise it and check the results. Whether that is making sure it actually does what you want it to do or making sure they didn’t steal copyrighted work.

@[email protected]

How much do you know about transformers?

Have you ever programmed an interpreter for interactive fiction / MUDs, before all this AI crap? It’s a great example of what even super-tiny models can accomplish. NLP interfaces are a useful thing for people.

Also consider that Firefox or Electron apps require more RAM and CPU and waste more energy than small language models. A Gemma SLM can translate things into English using less energy than it takes to open a modern browser. And I know that because I’m literally watching the resources get used.

@[email protected]

I am not implying that transformer-based models have to be huge to be useful. I am only talking about LLMs. I am questioning the purported goal of LLMs, i.e., to replace humans in as many creative fields as possible, in the context of their cost, both environmental and social.

@[email protected]

It can be stopped, just like climate change, but apparently we won’t, and we’ll kill humanity instead.

@[email protected]

We as humans can take steps to lessen our impact on the planet. We cannot stop climate change. The planet by design will always change climates. It has changed without humans influence and it will continue after we are gone.

@[email protected]

Yep that’s absolutely not what people are talking about when they say ‘climate change’ in this context, they mean anthropogenic climate change, and you know it. Your bad faith response shows you have no interest in an honest discussion.

@[email protected]

Don’t be pedantic. Anyone with half a brain knows that when someone brings up “climate change” they’re referring to “human-made climate change” — and it’s completely uncontroversial that the changes we’ve made since the industrial revolution have greatly outweighed the changes of the Earth’s natural climate cycles.

@[email protected]

Ya, you can: stop using it. No use, no VC money, no customers. Business, baby.

ssillyssadass

I can guarantee you that there will not be a point in time at which everybody on the planet just decides to stop using AI out of the goodness of their hearts.

Echo Dot

That’s like saying that colonies on Mars are the future. In the future, colonies on Mars will be the direction things are going (assuming we don’t global-warm ourselves to death first), but we’re not there yet. AI has yet to prove itself.

@[email protected]

This really depends on what you consider “progress”. Some forms of AI are neat pieces of tech; there’s no denying that. However, all I’ve really seen them do in an industrial sense is shrink workforces to save a buck via automation, and produce a noticeably worse product.

That quality is sure to improve, but what won’t change is the fact that real humans with skill and talent are out of a job because of a fancy piece of software. I personally don’t think of that as progress, but that’s just me.

Victor Gnarly

Typographers saw the same thing with personal computing in the latter half of the ’90s. Almost overnight, everyone started printing their own documentation, and Comic Sans became their canary in the coal mine. It was progress, but progress is rarely good for everyone. There’s always a give and a take.

Echo Dot

Except typographers still exist, we need them to create fonts that aren’t comic sans.

@[email protected]

As another user said, typographers still exist. And, until now, computers weren’t really a threat to their job security. They were just a new set of tools they had to adapt to. But, if I was running a business and had little regard for ethics, why would I hire a typographer when I could just ask an AI to generate a new font for my billboard, and have it done in 30 seconds for free?

I get the argument that AI is a tool that lowers the barrier of entry to certain fields, which is absolutely true. If I wanted to be a graphic designer today, I could do it with AI. But, when I went to sell my logo to the small company down the street, I’d have to come to terms with the fact that the owner of that business also happened to become a graphic designer that very morning, and all of a sudden my career is over before it started.

PastafARRian

If someone said this in 1970 it would be just as true as you saying it today. Would you have used generative AI tools for video game development back then?

@[email protected]

💯%. No doubt; advancements don’t stop because people are upset about them.

PastafARRian

I meant more like, AI is the future but it may be of limited use right now.
