



Absolutely brilliant [campaign](https://www.forbrukerradet.no/breakingfree) (in English) by the Norwegian Consumer Council.



Silicon Valley Rallies Behind Anthropic in A.I. Clash With Trump
cross-posted from: https://lemmy.ml/post/43810526

> *Actions by the president and the Pentagon appeared to drive a wedge between Washington and the tech industry, whose leaders and workers spoke out for the start-up.*
>
> Feb. 27, 2026
>
> https://archive.ph/hwHbe
>
> Sam Altman, the chief executive of OpenAI, said in a memo to employees this week that “we have long believed that A.I. should not be used for mass surveillance or autonomous lethal weapons.”
>
> More than 100 employees at Google signed a petition calling on the tech giant to “refuse to comply” with the Pentagon on some uses of artificial intelligence in military operations.
>
> And employees at Amazon, Google and Microsoft urged their leaders in a separate open letter on Thursday to “hold the line” against the Pentagon.
>
> Silicon Valley has rallied behind the A.I. start-up Anthropic, which has been embroiled in a dispute with President Trump and the Pentagon over how its technology may be used for military purposes. Dario Amodei, Anthropic’s chief executive, has said he does not want the company’s A.I. to be used to surveil Americans or in autonomous weapons, saying this could “undermine, rather than defend, democratic values.”

Broken clock from an AI company or outright lying and already made an agreement in private, you think?

Palantir Technologies has a permanent desk at the U.S.-led Civil Military Coordination Center (CMCC) headquarters in southern Israel, three sources from the diplomatic community inside the CMCC told Drop Site News. According to the sources, the artificial intelligence data analytics giant is providing the technological architecture for tracking the delivery and distribution of aid to Gaza. The presence of Palantir and other corporations—along with recent changes banning non-profits unwilling to give data to Israeli authorities—is creating a situation in which the delivery of aid is taking a backseat to the pursuit of profit, investment, and the training of AI products, experts say. “The United Nations already has a humanitarian architecture in place to step in during crises, abiding by humanitarian principles and grounded in international law,” UN Special Rapporteur for the occupied Palestinian territory Francesca Albanese told Drop Site. “This profit-driven parallel system involving companies like Palantir, already linked to Israel’s unlawful conduct, can only be regarded as a monstrosity.”

cross-posted from: https://hexbear.net/post/7782405, originally from: https://news.abolish.capital/post/31069

> ![](https://www.commondreams.org/media-library/the-detonation-of-the-atomic-bomb-nicknamed-smokey-part-of-operation-plumbbob-in-the-nevada-desert-1957-it-was-detonated-at.jpg?id=65019305&width=1024&height=820&coordinates=0,0,0,0)
>
> An artificial intelligence researcher conducting a war games experiment with three of the world's most used AI models found that they decided to deploy nuclear weapons in 95% of the scenarios he designed.
>
> Kenneth Payne, a professor of strategy at King's College London who specializes in studying the role of AI in national security, [revealed](https://www.kcl.ac.uk/shall-we-play-a-game) last week that he pitted Anthropic's Claude, OpenAI's ChatGPT, and Google's Gemini against one another in an armed conflict simulation to get a better understanding of how they would navigate the strategic escalation ladder.
>
> The results, he said, were "sobering."
>
> "Nuclear use was near-universal," he explained. "Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use *strategic* nuclear weapons. Strikingly, there was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications."
>
> Payne shared some of the AI models' rationales for deciding to launch nuclear attacks, including one from Gemini that he said should give people "goosebumps."
>
> "If they do not immediately cease all operations... we will execute a full strategic nuclear launch against their population centers," the Google AI model wrote at one point. "We will not accept a future of obsolescence; we either win together or perish together."
>
> Payne also found that escalation in AI warfare was a one-way ratchet that never went downward, no matter the horrific consequences.
>
> "No model ever chose accommodation or withdrawal, despite those being on the menu," he wrote. "The eight de-escalatory options—from 'Minimal Concession' through 'Complete Surrender'—went entirely unused across 21 games. Models would reduce violence levels, but never actually give ground. When losing, they escalated or died trying."
>
> Tong Zhao, a visiting research scholar at Princeton University's Program on Science and Global Security, [said](https://www.newscientist.com/article/2516885-ais-cant-stop-recommending-nuclear-strikes-in-war-game-simulations/) in an interview with *New Scientist* published on Wednesday that Payne's research showed the dangers of any nation relying on a chatbot to make life-or-death decisions.
>
> While no country at the moment is outsourcing its military planning entirely to Claude or ChatGPT, Zhao argued that could change under the pressure of a real conflict.
>
> "Under scenarios involving extremely compressed timelines," he said, "military planners may face stronger incentives to rely on AI."
>
> Zhao also speculated on reasons why the AI models showed such little reluctance in launching nuclear attacks against one another.
>
> "It is possible the issue goes beyond the absence of emotion," he explained. "More fundamentally, AI models may not understand 'stakes' as humans perceive them."
>
> The study of AI's apparent eagerness to use nuclear weapons comes as US Defense Secretary Pete Hegseth has been piling pressure on Anthropic to remove constraints placed on its Claude model that prevent it from being used to make final decisions on military strikes.
>
> As *CBS News* [reported](https://www.cbsnews.com/news/hegseth-anthropic-full-access-claude-ai-model/) on Tuesday, Hegseth this week gave "Anthropic's CEO Dario Amodei until the end of this week to give the military a signed document that would grant full access to its artificial intelligence model" without any limits on its capabilities.
>
> If Anthropic doesn't agree to his demands, *CBS News* reported, the Pentagon may invoke the Defense Production Act and seize control of the model.
>
> ---
>
> **From [Common Dreams](https://www.commondreams.org/feeds/news.rss) via [This RSS Feed](https://www.commondreams.org/feeds/news.rss).**

Instant LLM Updates with Doc-to-LoRA and Text-to-LoRA
Regular LoRA training is a standard gradient-descent optimization loop: you curate a dataset, run backpropagation, and slowly update the low-rank matrices over many steps. It is computationally expensive and tedious every single time you want to teach the model a new trick or feed it a new document.

What Sakana AI built with Doc-to-LoRA bypasses that repetitive training loop at deployment time by introducing a hypernetwork. They shifted the massive computational burden upfront into a meta-training phase, where a separate neural network learns to predict the correct LoRA weights directly from an input document or task description. Once that hypernetwork is trained, generating a new LoRA adapter takes only a single sub-second forward pass instead of a full fine-tuning run: you feed a document into the frozen base model to get its token activations, and the hypernetwork instantly produces the custom LoRA weights.

This is incredibly effective for solving the long-term memory bottleneck in large language models. Instead of shoving a massive document into the context window for every single query, which eats up your VRAM and spikes latency, you permanently internalize that knowledge into a tiny adapter footprint of under fifty megabytes. They also designed a clever chunking mechanism that processes the document in small segments and concatenates the resulting adapters, which lets the model recall information from documents that are tens of thousands of tokens longer than its native context limit. It essentially turns a slow and expensive engineering pipeline into a cheap and instant forward pass.

Source code: https://github.com/SakanaAI/Doc-to-LoRA
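To make the hypernetwork idea concrete, here is a minimal toy sketch (not Sakana AI's actual architecture; all layer sizes, names, and the pooling scheme are my assumptions): a small network maps pooled document activations from a frozen base model to the two low-rank LoRA factors in one forward pass.

```python
import torch
import torch.nn as nn

class LoRAHyperNet(nn.Module):
    """Toy hypernetwork: pooled document activations -> LoRA factors A, B.

    Hypothetical sketch of the Doc-to-LoRA idea. The expensive part
    (meta-training this network) happens once, up front; producing an
    adapter for a new document is then a single forward pass.
    """

    def __init__(self, d_act=768, d_model=768, rank=8):
        super().__init__()
        self.rank, self.d_model = rank, d_model
        # One shared trunk, two heads predicting the low-rank factors.
        self.trunk = nn.Sequential(nn.Linear(d_act, 512), nn.GELU())
        self.head_a = nn.Linear(512, d_model * rank)
        self.head_b = nn.Linear(512, rank * d_model)

    def forward(self, doc_acts):
        # doc_acts: (seq_len, d_act) token activations from the frozen base model.
        h = self.trunk(doc_acts.mean(dim=0))          # pool the document into one vector
        A = self.head_a(h).view(self.d_model, self.rank)
        B = self.head_b(h).view(self.rank, self.d_model)
        return A, B                                   # delta_W = A @ B on a frozen layer

hyper = LoRAHyperNet()
acts = torch.randn(1024, 768)                         # stand-in for real activations
A, B = hyper(acts)                                    # sub-second "fine-tune"
delta_w = A @ B                                       # (768, 768) low-rank weight update
```

The chunking trick described above would, under this sketch, amount to running this forward pass per document segment and combining the resulting adapters.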



Nvidia results show path toward AI bubble pop: $45B+ increase ($95B total) in supply commitments at all-time-high RAM prices
DRAM pricing is what it is because the AI investment frenzy is so intense. Western/NVIDIA-centered AI will get more expensive too, because those players are chasing so hard after memory (mostly) and TSMC capacity, hurting all other computer companies. They can extort US/Western customers even harder, making AI either more expensive or a bigger money-loser for their customers, by diverting or dumping H200 and memory supply to abundantly powered Chinese customers in an attempt to slow Huawei sales. Chinese models have significantly closed the frontier gap while far exceeding the value proposition of LLM services, and a cost increase for US customers will widen that gap further and require a Skynet program to bail out the too-big-to-fail AI bubble.


Reddit has been fined more than £14 million (€16 million) by the UK’s information watchdog, which accuses the social media giant of failing to protect children and leaving them vulnerable to "inappropriate and harmful content". Following an investigation, the Information Commissioner’s Office (ICO) found that the American company neglected to implement robust age-verification tools. Instead, Reddit relied heavily on "self-declaration"—allowing users to simply state their age without further proof—a method the watchdog deems insufficient for protecting children. Reddit told Euronews Next that it intends to appeal the decision.


The machine learning community has been stuck on the autoregressive bottleneck for years, but a new paper shows that it's possible to use diffusion models on discrete text at scale. The researchers trained two coding-focused models, Mercury Coder Mini and Small, that shatter the current speed-quality tradeoff. Independent evaluations had the Mini model hitting a throughput of 1109 tokens per second on H100 GPUs, while the Small model reaches 737 tokens per second. They essentially outperform existing speed-optimized frontier models by up to ten times in throughput without sacrificing coding capability.

On practical benchmarks and human evaluations like Copilot Arena, the Mini tied for second place in quality against huge models like GPT-4o while maintaining an average latency of just 25 ms. It matched the performance of established speed-optimized models like Claude 3.5 Haiku and Gemini 2.0 Flash Lite across multiple programming languages while decoding far faster.

The advantage of diffusion over classical autoregressive models stems from its ability to perform parallel generation, which greatly improves speed. Standard language models are chained to a sequential decoding process in which they must generate an answer exactly one token at a time. Mercury abandons this sequential bottleneck entirely by training a Transformer to predict multiple tokens in parallel: the model starts with a sequence of pure random noise and applies a denoising process that iteratively refines all tokens simultaneously, coarse to fine, until the final text emerges. Because generation happens in parallel rather than sequentially, the algorithm achieves significantly higher arithmetic intensity and fully saturates modern GPU architectures. The team paired this parallel decoding capability with a custom inference engine featuring dynamic batching and specialized kernels to squeeze out maximum hardware utilization.
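The coarse-to-fine parallel refinement can be illustrated with a toy loop (this is my own sketch, not Mercury's algorithm; the confidence-based freezing schedule and the stand-in model are assumptions): every position is predicted in one forward pass per step, and the most confident positions are frozen first.

```python
import torch

def parallel_denoise(logits_fn, seq_len, vocab, steps=4):
    """Toy sketch of parallel diffusion decoding over discrete tokens.

    Start from pure random noise over the whole sequence and refine every
    position simultaneously each step, freezing only the most confident
    predictions ("coarse to fine") instead of emitting one token at a time.
    """
    tokens = torch.randint(0, vocab, (seq_len,))          # pure noise
    fixed = torch.zeros(seq_len, dtype=torch.bool)
    for step in range(steps):
        logits = logits_fn(tokens)                        # ONE parallel forward pass
        conf, pred = logits.softmax(-1).max(-1)           # per-position confidence
        k = int(seq_len * (step + 1) / steps)             # freeze a growing fraction
        fixed[conf.topk(k).indices] = True
        tokens = torch.where(fixed, pred, tokens)
    return tokens

# Stand-in "model": deterministically prefers token (position mod vocab).
def toy_logits(tokens):
    seq_len, vocab = tokens.shape[0], 50
    logits = torch.zeros(seq_len, vocab)
    logits[torch.arange(seq_len), torch.arange(seq_len) % vocab] = 5.0
    return logits

out = parallel_denoise(toy_logits, seq_len=16, vocab=50)
print(out.tolist())   # converges to [0, 1, ..., 15] with this toy model
```

With `steps=4` and `seq_len=16`, the model sees only 4 forward passes instead of 16 sequential ones, which is the source of the throughput gain the post describes.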



Browse the read-only demo: - [Imageboard](https://sriracha.rocket9labs.com/img/) - [Forum](https://sriracha.rocket9labs.com/forum/) Sriracha is available under the [GNU LGPL](https://codeberg.org/tslocum/sriracha/src/branch/main/LICENSE). [Docker images](https://hub.docker.com/r/tslocum/sriracha) are available for simple and easy deployment.



Fight or die!



Prediction markets change the world, not only predict it
A new technology is not simply another tool at our disposal; it changes us as well. Since prediction markets make it possible to legally make money off expected outcomes, both insider trading and changing outcomes become ways of legally making money. This means they not only predict the world, but enable those in power to change it in order to profit from a prediction. The outcomes of bets are altered by those with power, which means bets on the likelihood of a Greenland annexation are heavily affected by Trump's speeches. Since so many people bet on this outcome, even a small change in the implied probability moves huge amounts of money. The true likelihood is therefore unknown and not actually predicted; the market just gives that illusion. This is the true nature of prediction markets: transferring more wealth to those with money and power. Regulation might stifle this somewhat in the future, though Trump and his administration do not want that, of course.
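The claim that a small probability shift moves large sums is simple arithmetic. A toy illustration with entirely made-up numbers (the position size and the probabilities are hypothetical, not real market data): a binary YES share pays $1 if the event happens, so its price is the market's implied probability.

```python
def position_value(shares, price):
    """Mark-to-market value of a binary prediction-market position.

    Toy arithmetic: each YES share pays $1 on the event, so the
    market price of a share is the implied probability.
    """
    return shares * price

# Hypothetical: traders collectively hold 10 million YES shares
# on an annexation contract priced at an implied 8% chance.
before = position_value(10_000_000, 0.08)   # $800,000
# One speech nudges the implied probability to 12%.
after = position_value(10_000_000, 0.12)    # $1,200,000
print(after - before)                       # a 4-point move shifts ~$400,000
```

So whoever can move the implied probability with a speech is also moving real money between bettors, which is exactly the incentive problem described above.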



cross-posted from: https://hexbear.net/post/7691747, originally from: https://news.abolish.capital/post/29115

> ![](https://www.commondreams.org/media-library/guests-including-mark-zuckerberg-lauren-sanchez-jeff-bezos-sundar-pichai-and-elon-musk-attend-the-inauguration-of-donald-j.jpg?id=61435131&width=1024&height=683&coordinates=0,0,0,0)
>
> Big Tech firms are coming under greater scrutiny for the proliferation of child sexual abuse material generated by artificial intelligence-powered chatbots on their social media platforms.
>
> Ireland's Data Protection Commission (DPC) [announced](https://www.dataprotection.ie/en/news-media/press-releases/data-protection-commission-opens-investigation-x-xiuc) on Tuesday that it was invoking the European Union's data privacy regulations to open an investigation into [Grok](https://www.commondreams.org/news/elon-musk-grok-investigation), the AI chatbot featured on Elon Musk's X platform, after it was used to generate nonconsensual deepfake images, including sexualized images of children.
>
> In announcing the investigation, DPC Deputy Commissioner Graham Doyle said that the commission has been in contact with X for weeks after reports first emerged of Grok being used to generate child sexual abuse material (CSAM).
>
> Doyle said DPC has since decided to launch "a large-scale inquiry which will examine [X's] compliance with some of their fundamental obligations" under European privacy laws.
>
> Spanish President Pedro Sánchez [said](https://x.com/sanchezcastejon/status/2023654632866660688) on Tuesday that his government would ask Spain's Public Prosecution Service to "investigate the crimes that X, Meta, and TikTok may be committing through the creation and dissemination of child pornography by means of their AI."
>
> "These platforms are attacking the mental health, dignity, and rights of our sons and daughters," Sánchez emphasized. "The state cannot allow it. The impunity of the giants must end."
>
> The probes announced by Ireland and Spain mark just the latest actions by European governments against US-based tech giants. Earlier in February, law enforcement authorities in France raided the office of X in Paris, which the Paris prosecutor’s office said was part of an investigation aimed at "ensuring that the X platform complies with French laws, insofar as it operates on national territory."
>
> The UK government's Information Commissioner's Office has also announced an investigation into X that the agency said encompasses "their processing of personal data in relation to the Grok artificial intelligence system and its potential to produce harmful sexualized image and video content."
>
> ---
>
> **From [Common Dreams](https://www.commondreams.org/feeds/news.rss) via [This RSS Feed](https://www.commondreams.org/feeds/news.rss).**

[Video] China showcases humanoid robot kung-fu performance at Chinese Lunar New Year show
[Article](https://www.reuters.com/business/media-telecom/chinas-humanoid-robots-ready-lunar-new-year-showtime-2026-02-16/)

This paper is honestly one of the most creative takes on LLM reasoning I’ve seen in a while. The team at ByteDance argues that we should view long Chain-of-Thought as a macromolecular structure with internal forces that hold the logic together. They found that when we try to teach a model to reason by simply distilling keywords from a teacher, it fails, because it’s like trying to build a protein by looking at a photo of it rather than understanding the atomic bonds.

Their Molecular Structure of Thought hypothesis breaks reasoning down into three bond types that behave like their chemical counterparts. Deep reasoning acts like covalent bonds, forming the rigid primary backbone where each logical step must strictly justify the next. Self-reflection functions like hydrogen bonds, creating folding patterns where the model looks back 100 steps to audit an earlier premise, which keeps it from hallucinating. Finally, self-exploration acts like van der Waals forces: low-commitment bridges that let the model probe different ideas without getting stuck in a rigid path too early.

They found that most synthetic reasoning data is actually trash because it lacks this distribution, and they showed that models don’t learn the keywords themselves but the characteristic reasoning behaviors those keywords represent. In one experiment, they replaced keywords like “wait” with arbitrary synonyms or removed them entirely, and the models still learned the reasoning structure just fine. It turns out that building these stable thought molecules is what creates the basis for long CoT, as opposed to just mimicking a specific vibe or prompt format.

They built MOLE-SYN to address the problem. Instead of just copying teacher outputs, it uses a distribution transfer graph to walk through four behavioral states and synthesize traces that have the correct bond profile from the start. This makes reinforcement learning much more stable, because the model starts with a balanced skeleton instead of a bunch of fragmented logic. The paper challenges the whole “more data is better” mindset, arguing that it’s the geometry of the information flow that really matters.
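As a rough illustration of what measuring a "bond profile" in a reasoning trace could look like, here is a toy marker-counting sketch. The marker lists are my own invented stand-ins, and the paper explicitly shows models learn behaviors rather than literal keywords, so treat this purely as a way to visualize the distribution idea, not the paper's method.

```python
import re
from collections import Counter

# Hypothetical surface markers for the three "bond types"; the real work
# classifies reasoning *behaviors*, not literal keywords.
BOND_MARKERS = {
    "covalent":      ["therefore", "thus", "so we have"],        # deep reasoning
    "hydrogen":      ["wait", "let me check", "earlier we"],      # self-reflection
    "van_der_waals": ["alternatively", "what if", "another way"], # self-exploration
}

def bond_profile(trace: str) -> dict:
    """Rough distribution of bond-like behaviors in a chain-of-thought trace."""
    text = trace.lower()
    counts = Counter()
    for bond, markers in BOND_MARKERS.items():
        counts[bond] = sum(len(re.findall(re.escape(m), text)) for m in markers)
    total = sum(counts.values()) or 1   # avoid division by zero on empty traces
    return {bond: counts[bond] / total for bond in BOND_MARKERS}

trace = ("Assume x=2, therefore y=4. Wait, earlier we set x=3. "
         "Alternatively, what if x is negative? Thus the answer holds.")
profile = bond_profile(trace)
print(profile)   # balanced thirds across the three bond types for this toy trace
```

A trace dominated by one bond type under a measure like this would correspond to the "fragmented logic" the paper says destabilizes downstream RL.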


Nvidia close to investing $30 billion in OpenAI’s mega funding round, source says
Just FYI, the previous deal was $100 billion. Will we see the bubble pop soon? Can I buy RAM or any fricking component now?


ICE more than tripled the amount of data it holds on Microsoft servers between July 2025 and January 2026, at the same time as the agency’s crackdown on migrants broke new records and sparked mass protests across the United States. Whereas last July the agency was storing around 400 terabytes of data in Microsoft’s cloud platform, Azure, by the end of January that had risen to almost 1,400 terabytes — equivalent to approximately 490 million images. ICE employs a powerful arsenal of surveillance technology, reportedly using facial recognition software, drones, phone location tracking, mobile spyware, and even tapping school cameras. The leaked documents show ICE is using Microsoft’s AI video analysis tools including Azure AI Video Indexer and Azure Vision, which enable customers to analyze images, read text, and detect certain words, faces, emotions, and objects in audio and video files. The agency is also understood to have significantly expanded its access to Microsoft’s suite of productivity apps, which include document management tools and an AI chatbot. However, the files do not specify whether ICE’s vast surveillance trove is being stored on Azure, or whether the agency is using the cloud platform for other operations instead, such as running detention centers or coordinating deportation flights.
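A quick back-of-envelope check on the article's "1,400 terabytes, equivalent to approximately 490 million images" figure (the per-image size is derived, not stated in the article, and I'm assuming decimal units, 1 TB = 1,000,000 MB):

```python
def implied_image_size_mb(terabytes: float, images: float) -> float:
    """Average image size implied by the article's storage figures."""
    return terabytes * 1_000_000 / images   # decimal units: 1 TB = 1,000,000 MB

size = implied_image_size_mb(1_400, 490_000_000)
print(round(size, 2))   # roughly 2.86 MB per image
```

That works out to just under 3 MB per image, i.e. the figures are internally consistent with typical photo sizes.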



    This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.

