The most thought-provoking thing I saw at Gamescom 2025 was the blurred line that AI represents for the industry at large.
HarkMahlberg

Beyond the copyright issues and energy issues, AI does some serious damage to your ability to do actual hard research. And I’m not just talking about “AI brain.”

Let’s say you’re looking to solve a programming problem. If you use a search engine and look up the question or a string of keywords, what do you usually do? You look through each link that comes up and judge books by their covers (to an extent). “Do these look like reputable sites? Have I heard of any of them before?” You scroll through, click a bunch of them, and read them. Now you evaluate their contents. “Have I already tried this info? Oh, this answer is from 15 years ago, it might be outdated.” Then you pare down your links to a smaller number and try the solution each one provides, one at a time.

Now let’s say you use an AI to do the same thing. You pray to the Oracle, and the Oracle responds with a single answer. It’s a total soup of its training data. You can’t tell where specifically it got any of this info. You just have to trust it on faith. You try it; maybe it works, maybe it doesn’t. If it doesn’t, you have to write a new prayer and try again.

Even running a local model means you can’t discern the source material from the output. This isn’t Garbage In, Garbage Out, but Stew In, Soup Out. You can feed an AI a corpus of perfectly useful information, but it will churn everything into a single liquidy mass at the end. You can’t be critical about the output, because there’s nothing to critique but a homogeneous answer. And because the process is destructive, you can’t un-soup the output. You’ve robbed yourself of the ability to learn from the input, and put all your faith into the Oracle.

@[email protected]

you can’t be critical about the answer

You actually can, and you should be. And the process is not destructive, since you can always undo in tools like Cursor, or discard the changes in git.

Besides, you can steer a good coding LLM in the right direction. The better you understand what you are doing, the better it works.

@[email protected]

How would you be critical of the answer without also doing a traditional search to compare its answer? If you have to search and verify the answer anyway, didn’t we just add an unnecessary step to the process?

@[email protected]

You can have firsthand knowledge of the technology and just need the code generated? I mean, I would need to google different function names and conversion tricks all the time anyway, even if I’m really good at it. If AI slops it out for me, that just speeds things up by a lot, and I can spot the bad parts.
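To make the “conversion tricks” point concrete, here’s a hedged sketch (the function, timestamp format, and values are hypothetical examples, not anything from this thread) of the sort of lookup-heavy snippet an LLM can draft quickly and a practiced dev can verify at a glance:

```python
# A hypothetical example of a "conversion trick" you would normally google:
# parse an ISO 8601 timestamp and convert it to a Unix epoch in UTC.
from datetime import datetime, timezone

def iso_to_epoch(ts: str) -> float:
    dt = datetime.fromisoformat(ts)
    if dt.tzinfo is None:              # naive timestamp: assume it was UTC
        dt = dt.replace(tzinfo=timezone.utc)
    return dt.timestamp()

print(iso_to_epoch("2025-08-20T14:30:00+02:00"))  # 1755693000.0
```

Nothing difficult, just the kind of thing you’d otherwise spend a search round-trip on.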

Again, the better you know what you are doing, the more it could help.

ElectricMachman

But if you know what you’re doing, you can do a better job than the “AI”…??? This is a weird argument

@[email protected]

With infinite time, sure. Time isn’t infinite.

@[email protected]

That would be all well and good, if corpos weren’t pushing AI as a technology that everyone should be using all the time to reshape their daily lives.

The people most attracted to AI as a technology (and the ones that AI companies are marketing to the hardest) are the ones who want to use it for things where they don’t already have domain-specific expertise. Non-artists generating art, or non-coders making apps on “vibes”, etc. Have you ever heard of Travis Kalanick? He’s one of the co-founders of Uber and he recently made the news after he went on some podcast to breathlessly rave about how he’s been using LLMs to do “vibe physics”. Kalanick, as you can guess, is not a physicist. In fact he’s not a scientist of any kind.

The vast, vast majority of people using AI aren’t using it to augment their existing skills, and they aren’t using their own expertise to evaluate the output critically. Augmenting expertise was never the point nor the promise of AI, and it’s certainly not the direction that the people pushing this technology are trying to take it.

@[email protected]

AI marketing is total BS, but it doesn’t mean AI is not useful in its current state. People try to argue as if that were the case, but it simply isn’t. Agentic AI + an LLM does speed up everyday tasks by a whole fucking lot.

Come the next day, these people will be wondering why they don’t have access to the essential tools they need to be effective (the means of production), having completely forgotten that they were against those tools purely out of principle. This is as shortsighted as it gets.

@[email protected]

AI marketing is total BS, but it doesn’t mean AI is not useful in its current state.

But the AI only exists because of the marketing BS! The fact that AI is useful to qualified people in specialized fields doesn’t matter when the technology is being mass marketed to a completely different group of people for completely different use cases.

LLMs are called “large” for a reason: their existence demands large datasets, large data centers, large resource consumption, and large capital expenditure to secure all of those things. The only entities with the resources to make that happen are large corporations (and rich nation-states, but they seem content to keep any LLM efforts of their own under wraps for now). You can only say “don’t blame the technology, blame the technologist” when it’s possible to separate the two, but in this case it’s not. LLMs don’t exist without the corpos, and the corpos are determined to push LLMs into places and use cases where they have no business being.

@[email protected]

Open-weight/open-source LLMs do exist, though. And it’s not only tiny models.

HarkMahlberg

You misunderstood. I wasn’t saying you can’t Ctrl-Z after using the output, but that the process of training an AI on a corpus yields a black box. That process can’t be reverse-engineered to see how it came up with its answers.

It can’t tell you how much of one source it used over another. It can’t tell you what its priorities are when evaluating data… not without the risk of hallucinating on you when you ask it.

@[email protected]

The topic is: using AIs for game dev.

  1. I’m pretty sure that generating placeholder art isn’t going to ruin my ability to research
  2. AIs need to be used TAKING THEIR FLAWS INTO ACCOUNT and for very specific things.

I’m just going to be upfront: AI haters don’t know how this shit actually works, except that by merely existing, LLMs drain oceans and create more global warming than the entire petrol industry; and AI bros are filling their codebases with junk code that’s going to explode in their faces anywhere between 6 months and 3 years from now.

There is a sane take: use AIs sparingly, taking their flaws into consideration, for placeholder work, or once you obtain a training set built from content you are allowed to use. Run it locally, and use renewable sources for electricity.
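To illustrate what “run it locally” for placeholder art might look like, here is a minimal, hedged sketch assuming the Hugging Face diffusers library and an openly licensed checkpoint already downloaded to disk (the local path and prompt are hypothetical):

```python
# Minimal sketch: generate throwaway placeholder art with a locally stored
# text-to-image model. Assumes the `diffusers` and `torch` packages and a
# checkpoint you are licensed to use, saved under ./models/placeholder-model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./models/placeholder-model",   # hypothetical local path
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # use "cpu" (and drop float16) if there is no GPU

# Rough stand-in for an item icon; final art still gets made by a human.
image = pipe("pixel art potion bottle, flat colors, game item icon").images[0]
image.save("placeholder_potion.png")
```

Everything stays on your own machine, and the output is explicitly treated as a stand-in rather than shipped art.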

HarkMahlberg

Wild to see you call for a “sane take” when you strawman the actual water problem into “draining the oceans.”

Local residents near data centers who are being told to take fewer showers aren’t giving up salt water from the ocean; it’s their fresh water that’s being used.

@[email protected]

Is that a problem with the existence of LLMs as a technology, or with shitty corporations working with corrupt governments to starve local people of resources to turn a quick buck?

If you are allowing a data center to be built, you need to make sure you have the power, etc., to build it without negatively impacting the local people. It’s not the fault of an LLM that they fucked this shit up.

@[email protected]

Are you really gonna use the “guns don’t kill people, people kill people” argument to defend LLMs?

Let’s not forget that the first ‘L’ stands for “large”. These things do not exist without massive, power- and resource-hungry data centers. You can’t just say “Blame government mismanagement! Blame corporate greed!” without acknowledging that LLMs cease to exist without those things.

And even with all of those resources behind it, the technology is still only marginally useful at best. LLMs still hallucinate, they still confidently distribute misinformation, they still contribute to mental health crises in vulnerable individuals, and no one really has any idea how to stop those things from happening.

What tangible benefit is there to LLMs that justifies their absurd cost? Honestly?

@[email protected]

Making up for deficiencies in your own artistic and linguistic skills, and getting easy starting points for coding solutions.

LLMs still hallucinate,

Emergent behaviour can be useful for coming up with new ideas you weren’t expecting and new areas to explore.

they still confidently distribute misinformation,

yeah, that’s been a problem since language existed; if you want something closer to the topic at hand, since the printing press.

they still contribute to mental health crises in vulnerable individuals, and no one really has any idea how to stop those things from happening.

so does the fucking internet.

Are you really gonna use the “guns don’t kill people, people kill people” argument to defend LLMS?

chad.jpg

lime!
610d

as someone who has studied ml since around 2015, i’m still not convinced. i run local models, i train on CC data, i triple-check everything, and it’s just not that useful. it’s fun, but not productive.
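For readers wondering what “running a local model” involves in practice, here is a minimal hedged sketch, assuming the Hugging Face transformers library and a small open-weight model already downloaded to a local directory (the path and prompt are hypothetical):

```python
# Minimal sketch: run a small, locally stored open-weight language model.
# Assumes the `transformers` package and model weights saved under
# ./models/small-open-llm (a hypothetical path).
from transformers import pipeline

generator = pipeline("text-generation", model="./models/small-open-llm")

out = generator(
    "Write a C function that clamps a float to the range [0, 1].",
    max_new_tokens=128,
)
print(out[0]["generated_text"])  # output still has to be checked by hand
```

The workflow keeps everything offline, but as the comment above notes, the output still has to be triple-checked like anything else.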
