In an abrupt shift, the company may release future AI models without ironclad safety guarantees

Anthropic, the wildly successful AI company that has cast itself as the most safety-conscious of the top research labs, is dropping the central pledge of its flagship safety policy, company officials tell TIME.

In 2023, Anthropic committed to never train an AI system unless it could guarantee in advance that the company’s safety measures were adequate. For years, its leaders touted that promise—the central pillar of their Responsible Scaling Policy (RSP)—as evidence that they are a responsible company that would withstand market incentives to rush to develop a potentially dangerous technology.

But in recent months the company decided to radically overhaul the RSP. That decision included scrapping the promise not to release AI models if Anthropic can’t guarantee proper risk mitigations in advance.

Misanthropic (@[email protected]):
After sharing an entire disk drive with an AI agent, Anthropic asks its users to bend over to the camera once per day.

Ulrich:
I’m shocked!

That is a real thing because something happened that was out of line with my expectations!

Greed will one day be our end!

Scrubbles:
When Google dropped literally “don’t be evil”, something I would assume is supposed to be a given, I lost hope for all corporations.

I admit the “don’t be evil” slogan was very effective on me. I fell for it, but never again.

Scrubbles:
It definitely had me trust them for way too long. To be fair, I trusted the original CEOs and company though, who from what I can tell were decent enough people.

It really was the final symbolic stamp for the overhaul we saw across Silicon Valley.

☂️-:
Gotta make the line go up once they’ve finally dominated global tech.

When a company makes a pledge it’s either enforced by a court order or abandoned the second it’s no longer useful. It is known.

They could probably never actually do this. It seems that a trained model is some big mysterious thing that nobody really understands. They take some maths that’s so complicated barely anyone can understand it, feed it all the data they can possibly lay their hands on, then pump insane amounts of computational power through it. It’s the modern day equivalent of Frankenstein’s monster.

@[email protected]
link
fedilink
4
edit-2
6d

Yeah Anthropic has a whole research department for this

https://www.anthropic.com/research/team/interpretability

https://www.anthropic.com/research/tracing-thoughts-language-model

And you’re exactly right. Models at this point are like a trillion floats in complex vectorized matrix math, and we don’t really know how that math produces the output we see.
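To make that comment concrete, here is a minimal toy sketch (not Anthropic's code; all names and sizes are illustrative): a "layer" of a model is just a matrix of learned numbers multiplied against its input. Real models chain thousands of such layers with around a trillion parameters total, and inspecting the raw numbers tells you almost nothing about behavior, which is why interpretability is its own research field.

```python
import random

# Toy "model layer": a 3x2 matrix of learned weights.
# A real model has ~10^12 such numbers; this one has 6.
random.seed(0)
W = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]

def layer(x):
    # A linear layer is just matrix math: out[i] = sum_j W[i][j] * x[j]
    return [sum(W[i][j] * x[j] for j in range(2)) for i in range(3)]

out = layer([1.0, 0.5])
print(len(out))  # 3 output values; staring at W explains nothing about *why*
```

Reading `W` directly is like reading Frankenstein's stitches: the mechanism is visible, the behavior is not.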

surprised_pikachu.jpg

Article dated 2/24/26

What the hell is going on in here.
