
Google has reinstated support for the JPEG XL image format in the open source Chromium code base, reversing a decision it made in 2022 to remove it. The update allows Chromium to recognize, decode, and render JPEG XL images directly, without extensions or external components. This change applies at the browser engine level, meaning it will affect future versions of Google Chrome and other Chromium-based browsers when they are released.




>Every year, at the end of January and the beginning of February, thousands of people from Europe and around the world gather in Brussels to discuss open source and open technologies. The main attraction is FOSDEM, Europe’s largest open source conference, which has inspired a range of side events, social activities, and workshops. For those interested in open technology, digital policy, and EU developments, OpenForum Europe’s EU Open Source Policy Summit brings together open source leaders and policymakers. Together, these events make up the EU Open Source Week.


Suppliers of parts for Nvidia’s H200 have paused production after Chinese customs officials blocked shipments of the newly approved artificial intelligence processors from entering China, according to a report. Nvidia had expected more than one million orders from Chinese clients, the report said, adding that its suppliers had been operating around the clock to prepare for shipping as early as March. Chinese customs authorities this week told customs agents that Nvidia’s H200 chips were not permitted to enter the country, Reuters reported. Sources have also said government officials summoned domestic tech firms to warn them against buying the chips unless it was necessary.






The paper argues that we have been wasting a lot of expensive GPU cycles by forcing transformers to relearn static things like names or common phrases through deep computation. Standard models do not have a way to just look something up, so they end up simulating memory by passing tokens through layer after layer of feed-forward networks. DeepSeek introduced a module called Engram which adds a dedicated lookup step for local N-gram patterns. It acts like a new way to scale a model that is separate from the usual compute-heavy Mixture of Experts approach.

The architecture uses multi-head hashing to grab static embeddings for specific token sequences, which are then filtered through a context-aware gate to make sure they actually fit the current situation. They found a U-shaped scaling law where the best performance happens when you split your parameter budget between neural computation and this static memory. By letting the memory handle the simple local associations, the model can effectively act like it is deeper, because the early layers are not bogged down with basic reconstruction.

One of the best bits is how they handle hardware constraints by offloading the massive lookup tables to host RAM. Since these lookups are deterministic based on the input tokens, the system can prefetch the data from CPU memory before the GPU even needs it. This means you can scale to tens of billions of extra parameters with almost zero impact on speed, since the retrieval happens while the previous layers are still calculating.

The benchmarks show that this pays off across the board, especially in long-context tasks where the model needs its attention focused on global details rather than local phrases. It turns out that even in math and coding the model gets a boost, because it is no longer wasting its internal reasoning depth on things that should just be in a lookup table.
Moving forward this kind of conditional memory could be a standard part of sparse models because it bypasses the physical memory limits of current hardware.
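The lookup-plus-gate mechanism is simple enough to sketch. This is a toy illustration under my own assumptions (the hash choice, table size, and gate parameters are made up for the example, not DeepSeek's actual code): several hash heads each map the trailing N-gram to a row of their own embedding table, and a sigmoid gate computed from the hidden state decides how much of the retrieved vector to let through.

```python
import hashlib

import numpy as np


class NGramEngram:
    """Toy sketch of a hashed N-gram memory with a context-aware gate.

    Each of `num_heads` hash functions maps the trailing `n` token ids
    to a row of its own embedding table; the retrieved vectors are
    averaged and scaled by a sigmoid gate from the current hidden state.
    """

    def __init__(self, num_heads=4, table_size=1 << 16, dim=64, n=2, seed=0):
        rng = np.random.default_rng(seed)
        self.n = n
        self.tables = rng.normal(0, 0.02, size=(num_heads, table_size, dim))
        self.gate_w = rng.normal(0, 0.02, size=dim)  # toy gate parameters

    def _bucket(self, head, ngram):
        # Deterministic hash of (head, n-gram) -> table row index.
        key = f"{head}:{','.join(map(str, ngram))}".encode()
        return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % self.tables.shape[1]

    def lookup(self, tokens, hidden):
        """Deterministic retrieval for the trailing n-gram, gated by `hidden`."""
        ngram = tuple(tokens[-self.n:])
        vecs = [self.tables[h, self._bucket(h, ngram)] for h in range(len(self.tables))]
        mem = np.mean(vecs, axis=0)
        gate = 1.0 / (1.0 + np.exp(-hidden @ self.gate_w))  # sigmoid in [0, 1]
        return gate * mem
```

Because the bucket indices depend only on the token ids, a host-side process could compute them and prefetch the table rows from CPU RAM while earlier layers are still running, which is the property the paper exploits to hide the retrieval latency.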




Alternative smartphones in the USA?
I've been looking into switching to an alternative smartphone, something like the Jolla phone with Sailfish OS, the Fairphone, etc. I think something like that would be excellent for what I want my smartphone to do, and I'm tired of the one I have. However, from what I've read online, they don't really seem to work well in the USA. I am on T-Mobile, which seems to work better than other providers, but I was wondering if any of the fine people on Lemmy have actual experience they can share.



cross-posted from: https://hexbear.net/post/7329892 > cross-posted from: https://news.abolish.capital/post/19564 > > > ![](https://lemmy.ml/api/v3/image_proxy?url=https%3A%2F%2Flemmy.zip%2Fapi%2Fv3%2Fimage_proxy%3Furl%3Dhttps%253A%252F%252Fhexbear.net%252Fapi%252Fv3%252Fimage_proxy%253Furl%253Dhttps%25253A%25252F%25252Fwww.commondreams.org%25252Fmedia-library%25252Fsecretary-of-defense-pete-hegseth-stands-with-elon-musk-at-the-headquarters-of-his-company-spacex-in-starbase-texas-on-january.jpg%25253Fid%25253D62722691%252526width%25253D1200%252526height%25253D400%252526coordinates%25253D0%2525252C37%2525252C0%2525252C631) > > > > > > Elon Musk, the world's richest man and the owner of the social media app X, has faced a mountain of outrage in recent weeks as his platform's artificial intelligence chatbot "Grok" has been used to generate sexualized deepfake images of nonconsenting women and children, and Musk himself has embraced open white nationalism. > > > > But none of this seems to be of particular concern to Defense Secretary Pete Hegseth. Despite the swirl of scandal, he [announced](https://www.npr.org/2026/01/13/nx-s1-5675781/pentagon-musks-grok-ai-chatbot-global-outcry) on Monday that Musk's chatbot would be given intimate access to reams of military data as part of what the department [described](https://www.war.gov/News/Releases/Release/Article/4376420/war-department-launches-ai-acceleration-strategy-to-secure-american-military-ai/) as its new "AI acceleration strategy." 
> > > > During a speech at the headquarters of SpaceX, another company owned by Musk, Hegseth stood alongside the billionaire and announced that later this month, the department plans to “make all appropriate data” from the military’s IT systems available for “AI exploitation,” including “combat-proven operational data from two decades of military and intelligence operations.” > > > > As the *Associated Press* [noted](https://apnews.com/article/artificial-intelligence-pentagon-hegseth-musk-7f99e5f32ec70d7e39cec92d2a4ec862), it's a departure from the more cautious approach the Biden administration took toward integrating AI with the military, which included bans on certain uses "such as applications that would violate constitutionally protected civil rights or any system that would automate the deployment of nuclear weapons." > > > > While it's unclear if those bans remain in place under President Donald Trump, Hegseth said during the speech he will seek to eschew the use of any AI models "that won't allow you to fight wars" and will seek to act "without ideological constraints that limit lawful military applications," before adding that the Pentagon's AI will not be “woke” or “equitable.” > > > > He [added](https://www.war.gov/News/Releases/Release/Article/4376420/war-department-launches-ai-acceleration-strategy-to-secure-american-military-ai/) that the department “will unleash experimentation, eliminate bureaucratic barriers, focus our investments, and demonstrate the execution approach needed to ensure we lead in military AI.” He added that “we will become an ‘AI-first’ warfighting force across all domains.”
> > > > > — (@) > > > > Hegseth's embrace of Musk hardly comes as a surprise, given his role in the Trump [administration's dismantling](https://www.commondreams.org/news/doge-doesn-t-exist) of the administrative state as head of its so-called "Department of Government Efficiency" (DOGE) last year, and his [record $290 million](https://www.cnn.com/2025/02/01/politics/elon-musk-2024-election-spending-millions) in support for the president's 2024 election campaign. > > > > But it is quite noteworthy given the type of notoriety Grok has received of late after it introduced what it called “spicy mode” for the chatbot late last year, which “allows users to digitally remove clothing from images and has been deployed to produce what amounts to child pornography—along with other disturbing behavior, such as sexualizing the deputy prime minister of [Sweden](https://www.commondreams.org/tag/sweden),” according to a [report](https://www.ms.now/opinion/grok-musk-ai-chatbot-spicy-mode-deepfakes-child-porn-europe) last month from *MS NOW* (formerly *MSNBC*). > > > > It's perhaps the most international attention the bot has gotten, with the United Kingdom's media regulator launching a [formal investigation](https://www.nytimes.com/2026/01/12/world/europe/grok-ai-images-x-elon-musk-uk.html) on Monday to determine whether Grok violated the nation's Online Safety Act by failing to protect users from illegal content, including child sexual abuse material. > > > > The investigation could result in fines, which, if not followed, [could lead](https://www.nytimes.com/2026/01/12/world/europe/grok-ai-images-x-elon-musk-uk.html) to the chatbot being banned, as it was over the weekend in Malaysia and Indonesia. Authorities in the European Union, France, Brazil, and elsewhere are also reviewing the app for its spread of nonconsensual sexual images, according to the *New York Times.* > > > > > One example of how Grok is being used to target women. 
Swedish Deputy Prime Minister Ebba Busch being sexualised, degraded, and humiliated step-by-step by Grok. All the images accurately reflect the prompts provided. > > > > > > [[image or embed]](https://bsky.app/profile/did:plc:2whlowi5jjjqrdrrj4lxh2lx/post/3mboy3hmcxs2q?ref_src=embed) > > > — Eliot Higgins ([@eliothiggins.bsky.social](https://bsky.app/profile/did:plc:2whlowi5jjjqrdrrj4lxh2lx?ref_src=embed)) [January 5, 2026 at 12:37 PM](https://bsky.app/profile/did:plc:2whlowi5jjjqrdrrj4lxh2lx/post/3mboy3hmcxs2q?ref_src=embed) > > > > It's only the latest scandal involving the Grok, which Musk pitched as an "anti-woke" and "truth-seeking" alternative to applications like ChatGPT and Google's Gemini. > > > > At several points last year, the chatbot drew attention for its sudden tendency to launch into racist and antisemitic tirades—[praising](https://www.politico.com/news/magazine/2025/07/10/musk-grok-hitler-ai-00447055) [Adolf Hitler](https://www.commondreams.org/tag/adolf-hitler), accusing Jewish people of [controlling](https://www.cnn.com/2025/07/08/tech/grok-ai-antisemitism) Hollywood and the government, and [promoting](https://www.pbs.org/newshour/world/france-will-investigate-musks-grok-after-ai-chatbot-posted-holocaust-denial-claims) Holocaust denial. 
> > > > Before that, users were baffled when the bot began [directing](https://www.theguardian.com/technology/2025/may/14/elon-musk-grok-white-genocide) unrelated queries about everything from [cats](https://www.politico.com/newsletters/digital-future-daily/2025/05/15/groks-white-genocide-glitch-and-the-ai-black-box-00352709#%3A%7E%3Atext=a+cat+playing+in+a+sink+received) to [baseball](https://80000hours.org/videos/mechahitler/#%3A%7E%3Atext=Grok+starts+using+that+as+an+opener+when+answering+questions+about+baseball) back to discussions about Musk's [factually dubious](https://www.currentaffairs.org/news/south-africas-white-genocide-is-a-lie) pet theory of "white genocide" in South Africa, which the chatbot later [revealed](https://www.cnbc.com/2025/05/15/grok-white-genocide-elon-musk.html#%3A%7E%3Atext=Musk%27s+Grok+AI+chatbot+says+it+%27appears%2Cany+answers+promoting+or+endorsing+harmful+ideologies.%22) it was "instructed" to talk about. > > > > Hegseth’s announcement on Monday also comes as Musk has completed his descent into undisguised support for a white nationalist ideology over the past week. > > > > The billionaire's steady lurch to the far-right has been a years-long process—capped off last year, with his [enthusiastic support](https://www.currentaffairs.org/news/one-of-the-most-powerful-men-in-america-is-a-nazi-sympathizer) for the neofascist Alternative for Germany Party and apparent [Nazi salute](https://www.commondreams.org/news/musk-salute-trump-inauguration) at Trump's second inauguration. > > > > But his racist outlook was left impossible to deny last week when he [expressed support](https://x.com/Sturgeons_Law/status/2009288378005496058) for a pair of posts on X stating that white people must "reclaim our nations" or "be conquered, enslaved,*removed*d, and genocided" and that "if white men become a minority, we will be slaughtered," necessitating "white solidarity." 
> > > > > — (@) > > > > While details about the expansiveness of Grok’s use by the military remain scarce, Musk's AI platform, xAI, announced in July that it had [inked a deal](https://www.commondreams.org/news/elon-musk-ai) with the [Pentagon](https://www.commondreams.org/tag/pentagon) worth nearly $200 million (notably just a week after the bot infamously referred to itself as “MechaHitler”). > > > > In September, reportedly following [direct pressure](https://www.wired.com/story/white-house-elon-musk-xai-grok/) from the White House to roll it out "ASAP," the General Services Administration announced a "OneGov" agreement, making Grok available to every federal agency for just $0.42 apiece. > > > > That same month, Sen. Elizabeth Warren (D-Mass.) sent a [letter](https://www.warren.senate.gov/imo/media/doc/letter_to_pentagon_regarding_integration_of_grok_91025.pdf) to Hegseth warning that Musk, who'd also used Grok extensively under DOGE to [purge](https://www.reuters.com/technology/artificial-intelligence/musks-doge-using-ai-snoop-us-federal-workers-sources-say-2025-04-08/) disloyal government employees, was "gaining improper advantages from unique access to DOD data and information." She added that Grok's propensity toward "inaccurate outputs and misinformation" could "harm DOD's strategic decisionmaking." > > > > Following this week's announcement, JB Branch, the Big Tech accountability advocate at Public Citizen, [said](https://www.citizen.org/news/groks-record-of-sexual-exploitation-and-ongoing-international-investigations-should-disqualify-it-from-federal-or-classified-use/) on Tuesday that, "allowing an AI system with Grok’s track record of repeatedly generating nonconsensual sexualized images of women and children to access classified military or sensitive government data raises profound national security, civil rights, and public safety concerns." 
> > > > "Deploying Grok across other areas of the federal government is worrying enough, but choosing to use it at the Pentagon is a national security disgrace," he added. "If an AI system cannot meet basic safety and integrity standards, expanding its reach to include classified data puts the American public and our nation’s safety at risk.” > > > > --- > > > > **From [Common Dreams](https://www.commondreams.org/feeds/news.rss) via [This RSS Feed](https://www.commondreams.org/feeds/news.rss).**





Is a Dell Latitude 7430 any good for general school stuff?
I recently bought a Dell Latitude 7430 with an i7-1265U (10 cores, 1.8 GHz base), 16 GB of (I think) DDR4 RAM, and a 256 GB SSD for $250. I still have time to return the machine, and I was wondering whether I got a good deal here or not. My purpose is mostly general school stuff: spreadsheets, docs, Zoom meetings, and the like. I might be getting into the world of CS, but I'm not at a point yet where I would need much power. Still, the 256 GB of storage worries me, and unfortunately it can't be upgraded. Then again, if I'm not doing much besides the basic tasks expected of a work laptop, do I really need more? Should I return it and try to get another deal? Keep it? Or something else altogether?

This paper is one of the more interesting takes on context extension I have seen in a while, because it challenges the assumption that we need explicit positional encodings during inference. The authors make the case that embeddings like RoPE act more like scaffolding during construction than a permanent load-bearing wall. The idea is that these embeddings are crucial for getting the model to converge and learn language structure initially, but they eventually turn into a hard constraint that prevents the model from generalizing to sequence lengths it has never seen before.

The methodology is surprisingly straightforward: they take a pretrained model, completely drop the positional embeddings, and run a very quick recalibration phase. This essentially converts the architecture into a NoPE (No Positional Embedding) model, where the attention mechanism has to rely on the latent positioning it learned implicitly. It turns out that once you remove the explicit constraints of RoPE, the model can extrapolate to context windows significantly longer than its training data without the perplexity explosions we usually see.

It is pretty wild to see this outperform techniques like YaRN on benchmarks like Needle In A Haystack while using a fraction of the compute. I think this suggests that Transformers are much better at inferring relative positions from semantic cues than we give them credit for. If this holds up, it means we might be wasting a lot of resources engineering complex interpolation methods when the answer was just to take the training wheels off once the model knows how to ride.
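Architecturally the change is tiny; the recalibration pass is the real work. A minimal NumPy sketch (my own toy single-head attention, not the paper's code) makes the point that the NoPE variant is literally just the attention path with the rotary step skipped:

```python
import numpy as np


def apply_rope(x, positions, base=10000.0):
    """Apply rotary position embeddings (RoPE) to the last dim of x."""
    d = x.shape[-1]
    freqs = base ** (-np.arange(0, d, 2) / d)      # (d/2,) per-pair frequencies
    angles = positions[:, None] * freqs[None, :]   # (seq, d/2) rotation angles
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin           # rotate each even/odd pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out


def attention(q, k, v, use_rope=True):
    """Single-head softmax attention; NoPE is just the use_rope=False path."""
    seq, d = q.shape
    if use_rope:
        pos = np.arange(seq, dtype=float)
        q, k = apply_rope(q, pos), apply_rope(k, pos)
    scores = q @ k.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

In a real model the recalibration finetune teaches the (causally masked) attention to recover position from semantic cues after the rotation is removed; the sketch only shows how little of the forward pass the change touches.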

Jan 14 (Reuters) - Chinese authorities have told domestic companies to stop using cybersecurity software made by roughly a dozen firms from the U.S. and Israel due to national security concerns, two people briefed on the matter said. Broadcom-owned VMware, Palo Alto Networks, and Fortinet are among the U.S. firms whose cybersecurity software has been banned, while Check Point Software Technologies is among the Israeli companies, they said.

In my view, this is exactly the right approach. LLMs aren't going anywhere; these tools are here to stay. The only question is how they will be developed going forward, and who controls them. Boycotting AI is a naive idea that mostly serves as a way for people to signal group membership. Saying "I hate AI and I'm not going to use it" is trendy and makes people feel like they're doing something meaningful, but it's just another version of trying to vote the problem away. It doesn't work. The real solution is to roll up our sleeves and build a version of this technology that's open, transparent, and community driven.


As if constantly pushing more AI slop into their software while making no real improvements wasn’t enough…







Most people in the field know that models usually fall apart after a few hundred steps because small errors keep adding up until the whole process is ruined. The paper proposes a system called MAKER which uses a strategy they call massively decomposed agentic processes. Instead of asking one big model to do everything, they break the entire task down into the smallest possible pieces so each microagent only has to worry about one single move. For their main test they used a twenty-disk version of the Towers of Hanoi puzzle, which requires over a million individual moves to finish. They found that even small models can be super reliable if you set them up correctly.

One of the main tricks is a voting system where multiple agents solve the same tiny subtask and the system only moves forward once one answer gets a specific number of votes more than the others. This acts like a safety net that catches random mistakes before they can mess up the rest of the chain. Another interesting part of the approach is red-flagging, which is basically just throwing away any response that looks suspicious or weird. If a model starts rambling for too long or messes up the formatting, they discard that attempt and try again, because those behaviors usually mean the model is confused and likely to make a logic error.

By combining this extreme level of task decomposition with constant voting and quick discarding of bad samples, they managed to complete the entire million-step process with zero errors. And it turns out you do not even need the most expensive or smartest models, since relatively small ones performed just as well on these tiny steps. Scaling up AI reliability might be more about how we organize the work than about making the models bigger and bigger.
They even did some extra tests with difficult math problems like large digit multiplication and found that the same recursive decomposition and voting logic worked there as well.
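The first-to-ahead-by-k voting plus red-flag filtering is easy to sketch as a control loop. The helper below is my own minimal reconstruction of that idea (the margin value and the flag heuristic are illustrative, not the paper's exact settings):

```python
import random
from collections import Counter


def first_to_ahead_by_k(sample_fn, k=3, is_red_flag=None, max_tries=1000):
    """Keep sampling answers until one leads every rival by k votes.

    sample_fn returns one candidate answer per call; is_red_flag
    (optional) rejects suspicious samples (e.g. over-long or badly
    formatted output) before they are allowed to vote.
    """
    votes = Counter()
    for _ in range(max_tries):
        answer = sample_fn()
        if is_red_flag and is_red_flag(answer):
            continue  # discard and resample instead of risking a logic error
        votes[answer] += 1
        ranked = votes.most_common(2)
        lead = ranked[0][1] - (ranked[1][1] if len(ranked) > 1 else 0)
        if lead >= k:
            return ranked[0][0]
    raise RuntimeError("no answer reached the required margin")


# Usage: a hypothetical microagent that answers one Hanoi move, correct 90% of the time.
random.seed(0)
def noisy_agent():
    return "move A->C" if random.random() < 0.9 else "move A->B"

winner = first_to_ahead_by_k(noisy_agent, k=3)
```

Wrapping each microstep's model call in a loop like this is what lets an unreliable sampler yield an arbitrarily reliable chain, at the cost of a few extra samples per step.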


Firms are trying to dress up layoffs as a good-news story rather than bad news, since attributing staff reductions to AI adoption conveys a more positive message to investors.
