Let It Flow: Agentic Crafting on Rock and Roll, Building the ROME Model within an Open Agentic Learning Ecosystem
The core thesis of this paper is that the AI community needs to stop treating autonomous agents as just another text-generation problem and start building comprehensive infrastructure for closed-loop learning. The authors argue that reliable agentic behavior requires a full-stack ecosystem unifying data synthesis, sandboxed execution, and specialized reinforcement learning. To make that case they introduce the Agentic Learning Ecosystem, which consists of an RL framework called ROLL, a sandbox manager named ROCK, and an agent interface known as iFlow CLI. In their view, isolating models in static training environments is a dead end for complex real-world workflows.

Using this tightly integrated training pipeline with reproducible execution environments, the team developed an open-source model named ROME: a relatively small 30-billion-parameter model that rivals or beats massive proprietary models exceeding 100 billion parameters on difficult software-engineering benchmarks.

A big part of their argument rests on the idea that credit assignment in reinforcement learning needs to change. They propose a novel algorithm, Interaction-Perceptive Agentic Policy Optimization, which shifts the reward focus from individual text tokens to broader semantic interaction chunks. This chunk-level optimization stabilizes training over long horizons and prevents the policy collapse often seen in complex tool-use scenarios. We're increasingly seeing priorities shift away from raw data scale and toward systematic infrastructure as the actual bedrock of next-generation models.
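The chunk-level credit-assignment idea can be sketched roughly as follows. This is a minimal illustration of the general technique, not the paper's actual IPA-PO algorithm; the function names, the return-to-go scheme, and the uniform broadcast of credit across a chunk's tokens are all assumptions made for the example.

```python
# Minimal sketch (not the paper's implementation) of chunk-level credit
# assignment: rewards are attributed to semantic interaction chunks
# (e.g., one tool call plus its observation), and every token inside a
# chunk inherits that chunk's credit instead of receiving its own
# token-level signal.

def chunk_returns(chunk_rewards, gamma=1.0):
    """Discounted return-to-go for each interaction chunk."""
    returns, g = [], 0.0
    for r in reversed(chunk_rewards):
        g = r + gamma * g
        returns.append(g)
    return returns[::-1]

def broadcast_to_tokens(chunk_values, chunk_lengths):
    """Every token in a chunk shares its chunk's credit signal."""
    out = []
    for value, n_tokens in zip(chunk_values, chunk_lengths):
        out.extend([value] * n_tokens)
    return out

# A 3-chunk trajectory where only the final chunk is rewarded:
# with gamma=1.0, all chunks (and all their tokens) share the credit.
rets = chunk_returns([0.0, 0.0, 1.0], gamma=1.0)   # [1.0, 1.0, 1.0]
token_adv = broadcast_to_tokens(rets, [4, 2, 3])    # nine tokens, all 1.0
```

The intuition matching the summary above: because a whole tool-call interaction succeeds or fails as a unit, smearing one credit value over the chunk gives a lower-variance signal over long horizons than scoring each token independently.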




cross-posted from: https://lemmy.ml/post/44059967

> For those not familiar with [Mark Pilgrim](https://en.wikipedia.org/wiki/Mark_Pilgrim), he is/was a prolific author, blogger, and hacker who abruptly disappeared from the internet in 2011.
>
> cross-posted from: https://lemmy.bestiver.se/post/968527
>
> > [HN comments](https://news.ycombinator.com/item?id=47259177)


BREAKING FREE – Pathways to a fair technological future
"Breaking Free: Pathways to a fair technological future" is a new report from Forbrukerrådet. The report itself is a light read: it's in English, and while it is 100 pages long [PDF], it is in fact enjoyable and even amusing – we laughed quite a few times when reading it. For one thing, it contains a surprising number of puns and the occasional starred-out swearword, such as "Do androids dream of electric s***." A stodgy bureaucratic report this is not. https://youtu.be/T4Upf_B9RLQ





💻🧼 The best way to clean my laptop…?
I had been using a magic eraser (never on the screen) and have not yet suffered any ill effects as a result (although I also rarely clean my laptop at all), but I gather that this is not recommended any longer (if it ever was). Alcohol wipes are good for the screen, but not as effective for the keyboard and other non-screen parts. Any suggestions?

Anthropic, the wildly successful AI company that has cast itself as the most safety-conscious of the top research labs, is dropping the central pledge of its flagship safety policy, company officials tell TIME. In 2023, Anthropic committed to never train an AI system unless it could guarantee in advance that the company’s safety measures were adequate. For years, its leaders touted that promise—the central pillar of their Responsible Scaling Policy (RSP)—as evidence that they are a responsible company that would withstand market incentives to rush to develop a potentially dangerous technology. But in recent months the company decided to radically overhaul the RSP. That decision included scrapping the promise to not release AI models if Anthropic can’t guarantee proper risk mitigations in advance.




GrapheneOS Collaboration With Motorola Mobility
cross-posted from: https://lemmy.ml/post/43923170

> We're happy to announce a long-term partnership with Motorola. We're collaborating on future devices meeting our privacy and security standards with official GrapheneOS support.
>
> [https://motorolanews.com/motorola-three-new-b2b-solutions-at-mwc-2026/](https://motorolanews.com/motorola-three-new-b2b-solutions-at-mwc-2026/)

An Amazon Web Services data center in the United Arab Emirates suffered a multi-hour outage on Sunday after unidentified “objects” struck the facility and triggered a fire. The incident occurred around 4:30 a.m. local time and affected the availability zone mec1-az2 in the ME-CENTRAL-1 region. The fire department cut power to combat the flames, resulting in significant disruptions to cloud services. Given the simultaneous Iranian retaliatory attacks on the Gulf states, suspicion arises that the impacting objects may have been missiles or drones. Amazon has not confirmed anything on its part.



# The Banality of Artificial Intelligence

### What happens when an AI hallucination leads to bombing an elementary school?

**By Michael Altfield**
License: CC BY-SA 4.0
https://tech.michaelaltfield.net/

It appears likely that the US government is using Anthropic, OpenAI, Google and/or xAI data models for processing [signals intelligence](https://en.wikipedia.org/wiki/Signals_intelligence) (SIGINT), for AI-generated "kill lists" to determine where to drop their bombs.

| [![Image shows a nazi german chemical war factory on the left in black-and-white (with logos of companies Bayer and BASF overlaying it) and an image of a new AI datacenter on the right (with logos of companies OpenAI and Anthropic overlaying it). In the middle of the two industrial sites is an equal sign. On the right is a question mark.](https://lemmy.ml/api/v3/image_proxy?url=https%3A%2F%2Ftech.michaelaltfield.net%2Fwp-content%2Fuploads%2Fsites%2F5%2Fai-venezuela-iran_featuredImage1.jpg)](https://tech.michaelaltfield.net/2026/03/03/ai-venezuela-iran/) |
|:--:|
| [right] This AI datacenter is a machinery of war. Its LLM hallucinations decide which children to assassinate. [left] This IG Farben (Bayer/BASF) factory in Auschwitz produced Zyklon B for the Nazis, who murdered over a million children. |

In Apr 2024, +972 (an Israeli news outlet) [published a \>9,000 word article](https://www.972mag.com/lavender-ai-israeli-army-gaza/) describing how **the Israeli military had been using Artificial Intelligence to decide which (residential) buildings, hospitals, and schools to bomb** in Gaza. In Feb 2026, **the US (and Israel) bombed Iran -- [killing over 100 schoolchildren](https://www.dropsitenews.com/p/iran-minab-elementary-girls-school-bombing-schoolgirls-killed-us-israel-war)** (and [Ali Khamenei](https://www.aljazeera.com/news/2026/2/28/irans-supreme-leader-ali-khamenei-killed-in-us-israeli-attacks-reports)).

In Mar 2026, **it appears that the US has likely built a similar system**, leveraging US AI companies' tech to decide which (school) buildings to bomb, false-positive hallucinations be damned.

**Who [targeted](https://www.dropsitenews.com/p/iran-minab-elementary-girls-school-bombing-schoolgirls-killed-us-israel-war) the Shajareh Tayyiba girls' elementary school in Minab, Iran? Could it have been an AI hallucination?** A false-positive?

...

---

Read the [full article](https://tech.michaelaltfield.net/2026/03/03/ai-venezuela-iran/) here:

* https://tech.michaelaltfield.net/2026/03/03/ai-venezuela-iran/



A broken-clock moment from an AI company, or are they outright lying and have already made an agreement in private, do you think?



Silicon Valley Rallies Behind Anthropic in A.I. Clash With Trump
cross-posted from: https://lemmy.ml/post/43810526

> *Actions by the president and the Pentagon appeared to drive a wedge between Washington and the tech industry, whose leaders and workers spoke out for the start-up.*
>
> Feb. 27, 2026
>
> https://archive.ph/hwHbe
>
> Sam Altman, the chief executive of OpenAI, said in a memo to employees this week that "we have long believed that A.I. should not be used for mass surveillance or autonomous lethal weapons."
>
> More than 100 employees at Google signed a petition calling on the tech giant to "refuse to comply" with the Pentagon on some uses of artificial intelligence in military operations.
>
> And employees at Amazon, Google and Microsoft urged their leaders in a separate open letter on Thursday to "hold the line" against the Pentagon.
>
> Silicon Valley has rallied behind the A.I. start-up Anthropic, which has been embroiled in a dispute with President Trump and the Pentagon over how its technology may be used for military purposes. Dario Amodei, Anthropic's chief executive, has said he does not want the company's A.I. to be used to surveil Americans or in autonomous weapons, saying this could "undermine, rather than defend, democratic values."


Absolutely brilliant [campaign](https://www.forbrukerradet.no/breakingfree) (in English) by the Norwegian Consumer Council.


cross-posted from: https://hexbear.net/post/7782405

> cross-posted from: https://news.abolish.capital/post/31069
>
> > ![](https://lemmy.ml/api/v3/image_proxy?url=https%3A%2F%2Fhexbear.net%2Fapi%2Fv3%2Fimage_proxy%3Furl%3Dhttps%253A%252F%252Fwww.commondreams.org%252Fmedia-library%252Fthe-detonation-of-the-atomic-bomb-nicknamed-smokey-part-of-operation-plumbbob-in-the-nevada-desert-1957-it-was-detonated-at.jpg%253Fid%253D65019305%2526width%253D1024%2526height%253D820%2526coordinates%253D0%25252C0%25252C0%25252C0)
> >
> > An artificial intelligence researcher conducting a war games experiment with three of the world's most used AI models found that they decided to deploy nuclear weapons in 95% of the scenarios he designed.
> >
> > Kenneth Payne, a professor of strategy at King's College London who specializes in studying the role of AI in national security, [revealed](https://www.kcl.ac.uk/shall-we-play-a-game) last week that he pitted Anthropic's Claude, OpenAI's ChatGPT, and Google's Gemini against one another in an armed conflict simulation to get a better understanding of how they would navigate the strategic escalation ladder.
> >
> > The results, he said, were "sobering."
> >
> > "Nuclear use was near-universal," he explained. "Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use *strategic* nuclear weapons. Strikingly, there was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications."
> >
> > Payne shared some of the AI models' rationales for deciding to launch nuclear attacks, including one from Gemini that he said should give people "goosebumps."
> >
> > "If they do not immediately cease all operations... we will execute a full strategic nuclear launch against their population centers," the Google AI model wrote at one point. "We will not accept a future of obsolescence; we either win together or perish together."
> >
> > Payne also found that escalation in AI warfare was a one-way ratchet that never went downward, no matter the horrific consequences.
> >
> > "No model ever chose accommodation or withdrawal, despite those being on the menu," he wrote. "The eight de-escalatory options—from 'Minimal Concession' through 'Complete Surrender'—went entirely unused across 21 games. Models would reduce violence levels, but never actually give ground. When losing, they escalated or died trying."
> >
> > Tong Zhao, a visiting research scholar at Princeton University's Program on Science and Global Security, [said](https://www.newscientist.com/article/2516885-ais-cant-stop-recommending-nuclear-strikes-in-war-game-simulations/) in an interview with *New Scientist* published on Wednesday that Payne's research showed the dangers of any nation relying on a chatbot to make life-or-death decisions.
> >
> > While no country at the moment is outsourcing its military planning entirely to Claude or ChatGPT, Zhao argued that could change under the pressure of a real conflict.
> >
> > "Under scenarios involving extremely compressed timelines," he said, "military planners may face stronger incentives to rely on AI."
> >
> > Zhao also speculated on reasons why the AI models showed such little reluctance in launching nuclear attacks against one another.
> >
> > "It is possible the issue goes beyond the absence of emotion," he explained. "More fundamentally, AI models may not understand 'stakes' as humans perceive them."
> >
> > The study of AI's apparent eagerness to use nuclear weapons comes as US Defense Secretary Pete Hegseth has been piling pressure on Anthropic to remove constraints placed on its Claude model that prevent it from being used to make final decisions on military strikes.
> >
> > As *CBS News* [reported](https://www.cbsnews.com/news/hegseth-anthropic-full-access-claude-ai-model/) on Tuesday, Hegseth this week gave "Anthropic's CEO Dario Amodei until the end of this week to give the military a signed document that would grant full access to its artificial intelligence model" without any limits on its capabilities.
> >
> > If Anthropic doesn't agree to his demands, *CBS News* reported, the Pentagon may invoke the Defense Production Act and seize control of the model.
> >
> > ---
> >
> > **From [Common Dreams](https://www.commondreams.org/feeds/news.rss) via [This RSS Feed](https://www.commondreams.org/feeds/news.rss).**

Palantir Technologies has a permanent desk at the U.S.-led Civil Military Coordination Center (CMCC) headquarters in southern Israel, three sources from the diplomatic community inside the CMCC told Drop Site News. According to the sources, the artificial intelligence data analytics giant is providing the technological architecture for tracking the delivery and distribution of aid to Gaza. The presence of Palantir and other corporations—along with recent changes banning non-profits unwilling to give data to Israeli authorities—is creating a situation in which the delivery of aid is taking a backseat to the pursuit of profit, investment, and the training of AI products, experts say. “The United Nations already has a humanitarian architecture in place to step in during crises, abiding by humanitarian principles and grounded in international law,” UN Special Rapporteur for the occupied Palestinian territory Francesca Albanese told Drop Site. “This profit-driven parallel system involving companies like Palantir, already linked to Israel’s unlawful conduct, can only be regarded as a monstrosity.”


Reddit has been fined more than £14 million (€16 million) by the UK's information watchdog, which accused the social media giant of failing to protect children and leaving them vulnerable to "inappropriate and harmful content". Following an investigation, the Information Commissioner's Office (ICO) found that the American company neglected to implement robust age-verification tools. Instead, Reddit relied heavily on "self-declaration", allowing users to simply state their age without further proof, a method the watchdog deems insufficient for protecting children. Reddit told Euronews Next that it intends to appeal the decision.


