It gets banned/blocked, or sued for noncompliance for allowing Australian users without age verification. They’ll play whack-a-mole for decades, just like they have been for P2P file sharing.
Like a lot of post-9/11 legislation, it’s anti-privacy surveillance disguised as a way to ‘protect the children’. It’s absolute shit and we should absolutely be taking measures to anonymize our open source social media platforms further.
As someone with a large capacity for addiction and medicated ADHD, thank you for this warning.
Sometimes, it’s easy to be lulled into a sense of wellbeing, only to fall headfirst into a hyperactive episode on some seemingly innocuous hobby or game.
I’d rather obsess over a productive self-interest if I’m going to lose sleep over it.
Not to mention that the metric of ‘sexual deviance’ is ill-defined and multivariate. If sexual deviance is of a sexual health and safety orientation, then the obvious confounding factor is the historical use of abstinence-only education in this cohort (from 67-95). If the definition speaks to sexual violence and improper consent, then I think the conversation should include how healthy and consenting behaviors are properly depicted outside of pornography as well as within it, because simply never being exposed to sexual depictions doesn’t address the origins of anti-social attitudes toward the opposite gender, or the sexual frustrations of involuntarily celibate men. Domestic violence exists even outside a sexual context.
Not addressing those issues is how you end up with senile men like Dennis Prager who believe rape is morally permissible inside a heterosexual marriage.
There is so much work out there for free, with no copyright
There’s actually a lot less than you’d think (since copyright lasts for so long), and even less now that online and digitized sources are being locked down and charged for by the domain owners. But even if it were abundant, it would likely not satisfy the true concern here. If there were enough data to produce an LLM of similar quality without using copyrighted data, it would still threaten the security of those writers. What is to stop a user from providing a sample of Stephen King’s writing to the LLM and having it still produce derivative work, even without it having been trained on copyrighted data? If the user had paid for that work, are they allowed to use the LLM in the same way? If they aren’t, who is really at fault, the user or the owner of the LLM?
The law can’t address the complaints of these writers because interpreting the law to that standard is simply too restrictive and sets an impossible standard. The best way to address the complaint is to simply reform copyright law (or regulate LLMs through some other mechanism). Frankly, I do not buy that LLMs are a competing product to the copyrighted works.
The biggest cost in training is most likely the hardware
That’s right for large models like the ones owned by OpenAI and Google, but given the amount of data needed to effectively train and fine-tune these models, if that data suddenly became scarce and expensive it could easily overtake hardware costs. To say nothing of small models run on consumer hardware.
capitalists just stealing whatever the fuck they want “move fast and break things”
I understand this sentiment, but keep in mind that copyright ownership is just another form of capital.
Copyright is already just a band-aid for what is really an issue of resource allocation.
If writers and artists weren’t at risk of losing their means of living, we wouldn’t need to concern ourselves with the threat of an advanced tool supplanting them. Never mind how the tool is created, it is clearly very valuable (otherwise it would not represent such a large threat to writers) and should be made as broadly available (and jointly owned and controlled) as possible. By expanding copyright like this, all we’re doing is gatekeeping the creation of AI models to the largest of tech companies, and making them prohibitively expensive to train for smaller applications.
If LLMs are truly the start of a “fourth industrial revolution” as some have claimed, then we need to consider the possibility that our economic arrangement is ill-suited for the kind of productivity it is said AI will bring. Private ownership (over creative works, over AI models, and over data) is getting in the way of what could be a beautiful technological advancement that benefits everyone.
Instead, we’re left squabbling over who gets to own what and how.
And you almost certainly leave thinking you aren’t being careful enough with your privacy and you should look into getting a VPN. Works the same with any ad, or even a promoted social media post. “You’ll like this thing because of how we know you think of yourself.”
It’s pernicious and erodes everyone’s ability to be happy and content, no matter how resistant you think you are to advertisements.
Targeted ads are designed to make you feel inadequate or incomplete. Even if one doesn’t convince you to buy the product advertised, it can still shift your expectations and worldview just by normalizing a certain type of consumption (or attitude, or media, etc.).
Just because you don’t spend money doesn’t mean ads aren’t still subtly manipulating your expectations.
It is a trillion-dollar-a-year industry for a reason.
They wouldn’t go after the users, just the domains and the host servers. Similar to shutting down TPB or other tracker sites, they’d go after the site host. True enough, there wouldn’t necessarily be risk to users of those sites, but if they escalated things enough (like if an authoritarian got elected and was so motivated…) they could start taking more severe punitive action. Who knows, they could amend the regulation to go after the users if they wanted - it’s a dangerous precedent either way. Especially when the intent is to ‘protect children’, there’s no limit to how far they might take it in the future.
I’m not familiar with Australian law but I don’t think this really applies. Most countries with internet censorship laws don’t have any guaranteed right to uncensored information. At least in the US, they don’t have ‘censorship’ per se, but they do sometimes ‘block’ an offending site by seizing domains/servers/equipment, and they can force search engines to de-list it if the offense is severe enough. If the server is beyond their reach, they can prosecute or sanction the person hosting the site to pressure them into compliance. I can imagine a social media site that refuses to age verify and that hosts pornographic content (cough cough lemmy cough cough) being pursued like a CSAM site.
That doesn’t mean they can’t throw their weight around, bully self-hosters/small-time hobbyists, and scare them into compliance. Any western country enacting a law like this could pressure its western trade partners to comply with enforcement efforts. And anyway it isn’t necessarily about the practicality of enforcing the law, so much as giving prosecutors a long leash to make a lot of noise and scare small-time hobbyists out of hosting non-compliant sites. Most people can’t afford the headache, even if it isn’t enforceable where they live.