
This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.
Ask in a DM before posting product reviews or ads; otherwise, such posts are subject to removal.
Rules:
1: All Lemmy rules apply
2: Do not post low-effort posts
3: NEVER post naziped*gore stuff
4: Always post article URLs or their archived-version URLs as sources, NOT screenshots. Help blind users.
5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)
6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist
7: Crypto-related posts, unless essential, are disallowed
Which was revealed when Anthropic made their very simple demands and the administration threw a tantrum.
No fully autonomous killing; a human must make the final decision.
No domestic mass surveillance.
Pretty simple demands that should have been easy to agree to.
I’m still convinced this was just pure theatre to present them as the good and independent company that’s totally not working with the fascist state.
I wouldn’t say “pure” theatre. I think the demands were serious while also being a common-sense PR move that would motivate (or at least not demotivate) their employees.
But they were supposed to just be easy to agree to. “Okay, yeah, we’ll have an overseer on the murder bots, and we won’t use it to spy on Americans.” Even with an optional wink wink nudge nudge, it still mostly works for Anthropic’s PR.
It was supposed to be a layup, and the administration airballed it.
It seems like the whole drama didn’t really impact usage of their models by government agencies, though.
That’s exactly what I was thinking. Why would they want to turn down the ability to collect surveillance data on all their enemies?