Welcome to the largest gaming community on Lemmy! Discussion for all kinds of games. Video games, tabletop games, card games etc.
Video games, tabletop, or otherwise. Posts not related to games will be deleted.
This community is focused on games, of all kinds. Any news item or discussion should be related to gaming in some way.
No bigotry; this is a hardline stance. Try not to get too heated when entering a discussion or debate.
We are here to discuss one of our passions, not to fight or be exposed to hate. Posts or responses that are hateful will be deleted to keep the atmosphere good. Repeat violations will result in a ban in addition to comment deletion. We judge each case individually.
Try to keep it to 10% self-promotion / 90% other stuff in your post history.
This is to prevent people from posting for the sole purpose of promoting their own website or social media account.
This community is mostly for discussion and news. Remember to search for the thing you’re submitting before posting to see if it’s already been posted.
We want to keep the quality of posts high. Therefore, memes, funny videos, low-effort posts and reposts are not allowed. We prohibit giveaways because we cannot be sure that the person holding the giveaway will actually do what they promise.
Make sure to mark your stuff or it may be removed.
No one wants to be spoiled. Therefore, always mark spoilers. Similarly mark NSFW, in case anyone is browsing in a public space or at work.
Don’t share it here; there are other places to find it. Discussion of piracy is fine.
We don’t want us moderators or the admins of lemmy.world to get in trouble for linking to piracy. Therefore, any link to piracy will be removed. Discussion of it is of course allowed.
lol
Not all LLMs are the same. You can absolutely take a neural network model and train it yourself on your own dataset that doesn’t violate copyright.
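As a toy illustration of that point (my own example, not anything from a real LLM): "training on your own data" in miniature means generating the dataset yourself, so no copyrighted material is involved. Here gradient descent fits y = 2x + 1 on a self-made dataset:

```python
# Toy sketch: fit y = 2x + 1 by batch gradient descent on a dataset
# we generated ourselves, so there is no copyright question at all.
data = [(x, 2 * x + 1) for x in range(20)]  # our own, self-made dataset

w, b, lr = 0.0, 0.0, 0.005
n = len(data)
for _ in range(5000):
    # Gradients of mean squared error with respect to w and b
    gw = sum((w * x + b - y) * x for x, y in data) * 2 / n
    gb = sum((w * x + b - y) for x, y in data) * 2 / n
    w -= lr * gw
    b -= lr * gb

print(round(w, 2), round(b, 2))  # converges to w = 2.0, b = 1.0
```

A real LLM is the same loop at vastly larger scale, which is exactly why the provenance of the dataset, not the training procedure, is where the copyright argument lives.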
I can almost guarantee that hundred-billion-parameter LLMs are not trained that way; they are trained on the whole web, scraped to the furthest extent possible.
The only sane and ethical solution going forward is to force all LLMs to be open-sourced. They use the datasets generated by humanity - they should give back to humanity.
Jesus fucking christ. There are SO GODDAMN MANY open-source LLMs, even from fucking scumbags like Facebook. I get that there are subtleties to the argument on the Pro-AI vs Anti-AI side, but you guys just screech and scream.
https://github.com/eugeneyan/open-llms
Lol, ofc Meta; they have the biggest big-data trove out there, full of private data.
Most of the open-source ones are derivatives of existing open-source LLMs.
And the page you’ve linked is mostly <10B models, aside from LLMs with huge financing, generally with either corporate or Chinese backers.
Where are the sources? All I see is binary files.
There are barely any. I can’t name a single one offhand. “Open weights” says absolutely nothing about the actual sources of those weights.
Besides, the article is about image gen AI, not LLMs.
That’s an LLM, buddy.
Article directly complains about AI artwork. You know what LLM even means?
Yes, I do. I also know that multimodal LLMs are what generate AI artwork.
Then you should probably know that image gen existed long before MLLMs and was already a menace to artists back then.
And that an MLLM is generally a layered combo of lots of preexisting tools, where the LLM is used as a medium that attaches OCR inputs and passes more accurate instructions to the image-gen part.
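A rough sketch of that layering (every function name here is a hypothetical stand-in I made up, not any real system’s API): the LLM sits in the middle, turning raw multimodal inputs into a refined prompt for a separate image-generation model.

```python
# Hypothetical stubs -- stand-ins for preexisting tools, not real APIs.
def ocr(image_bytes: bytes) -> str:
    """Stand-in for an OCR tool extracting text from an input image."""
    return "sale poster: 50% off"

def llm(instruction: str, context: str) -> str:
    """Stand-in for the LLM layer that rewrites inputs into a precise prompt."""
    return f"{instruction}, incorporating the text: {context!r}"

def image_gen(prompt: str) -> bytes:
    """Stand-in for a separate diffusion/image-generation model."""
    return f"<image for {prompt!r}>".encode()

# The layered combo: OCR -> LLM (prompt refinement) -> image generator.
def multimodal_pipeline(image_bytes: bytes, instruction: str) -> bytes:
    extracted = ocr(image_bytes)
    prompt = llm(instruction, extracted)
    return image_gen(prompt)

out = multimodal_pipeline(b"...", "redraw this poster in a retro style")
```

The point of the sketch is the layering itself: the image-gen component at the end predates the LLM layer and works without it.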
What do you think the letters LLM stand for, pal?
Image Gen AI is an LLM?
Yes, it is. LLMs do more than just text generation.
Source?
Training an AI is orthogonal to copyright since the process of training doesn’t involve distribution.
You can train an AI with whatever TF you want without anyone’s consent. That’s perfectly legal fair use. It’s no different than if you copy a song from your PC to your phone.
Copyright really only comes into play when someone uses an AI to distribute a derivative of someone’s copyrighted work. Even then, it’s really only the end user who is capable of doing such a thing, by uploading the output of the AI somewhere.
That’s assuming you own the media in the first place. Often AI is trained with large amounts of data downloaded illegally.
So, yes, it’s fair use to train on information you have or have rights to. It’s not fair use to illegally obtain new data. What’s more, torrenting that data often means you also distribute it.
For personal use, I don’t have an issue with it anyway, but legally it’s not allowed.
Incorrect. No court has ruled in favor of any plaintiff bringing a copyright-infringement claim over LLM training. Here’s a breakdown of the current court cases and their rulings:
https://www.skadden.com/insights/publications/2025/07/fair-use-and-ai-training
In both cases, the courts have ruled that training an LLM with copyrighted works is highly transformative and thus, fair use.
The plaintiffs in one case couldn’t even come up with a single iota of evidence of copyright infringement from the output of the LLM. This, IMHO, is the single most important takeaway from the case, because the only thing that really mattered was the point where the LLM generates output. That is, the point of distribution.
Until an LLM is actually outputting something, copyright doesn’t even come into play. Therefore, the act of training an LLM is just like I said: A “Not Applicable” situation.
Just a heads up that Anthropic have just lost a $1.5B case for downloading and storing copyrighted works. That’s $3,000 for each of 500,000 books.
The wheels of justice move slowly, but fair use has limits. Commercial use generally doesn’t qualify; commentary and transformation do. So we’ll see how this progresses with the many other cases.
Warner Brothers have recently filed another case, I think.
Anthropic didn’t lose their lawsuit. They settled. Also, that was about their admission that they pirated zillions of books.
From a legal perspective, none of that has anything to do with AI.
Company pirates books -> gets sued for pirating books. Companies settles with the plaintiffs.
It had no legal impact on training AI with copyrighted works or what happens if the output is somehow considered to be violating someone’s copyright.
What Anthropic did with this settlement is attack their Western competitor: OpenAI, specifically. Because Google already settled with the Authors Guild for their book-scanning project over a decade ago.
Now OpenAI is likely going to have to pay the Authors Guild too, even though they haven’t come out and openly admitted that they pirated books.
Meta is also being sued for the same reason but they appear to be ready to fight in court about it. That case is only just getting started though so we’ll see.
The real, long-term impact of this settlement is that it just became a lot more expensive to train an AI in the US (well, the West). Competition in China will never have to pay these fees and will continue to offer their products to the West at a fraction of the cost.
While that’s interesting info and links, I don’t think that’s true.
https://share.google/opT62A4cIvKp6pwhI This case with Thomson has ruled for the plaintiff, but it’s expected to be overturned.
Most of the big cases are in the early stages. Let’s see what the Disney one does.
There is also the question, not just of copyright or fair use, but of legally obtaining the data in the first place. Facebook torrented terabytes of data and claimed they did not share it. I don’t know that that’s enough to claim innocence; it hasn’t been for individuals.
The question is whether they are actually transformative. Just being different is not enough. I can’t use Disney IP to make my new movie, for instance.