

A 30B Qwen Model Walks Into a Raspberry Pi… and Runs in Real Time
Device-optimized quant variants of Qwen3-30B-A3B-Instruct-2507, without output quality falling off a cliff: the 30B model runs on a Raspberry Pi 5 (16GB) at 8.03 TPS with a 2.70 BPW quant while retaining 94.18% of BF16 quality. ShapeLearn tends to find better TPS/quality tradeoffs than the alternatives.

What's new/interesting in this one:

1) CPU behavior is mostly sane. On CPUs, once you're past "it fits," smaller tends to be faster in a fairly monotonic way. The tradeoff curve behaves like you'd expect.

2) GPU behavior is quirky. On GPUs, performance depends as much on kernel choice as on memory footprint, so you often get sweet spots (especially around ~4-bit) where the kernels are on the "golden path," and pushing to lower bit widths can get weird.

Models: https://huggingface.co/byteshape/Qwen3-30B-A3B-Instruct-2507-GGUF
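If you want to sanity-check the TPS numbers on your own hardware, a minimal sketch along these lines should work, assuming llama-cpp-python is installed and one of the GGUF quants from the repo above has been downloaded locally (the file name and settings below are placeholders, not anything specified by the release):

```python
import time
from llama_cpp import Llama

# Placeholder path: substitute whichever quant variant you downloaded
# from the Hugging Face repo linked above.
MODEL_PATH = "Qwen3-30B-A3B-Instruct-2507-Q2_K.gguf"

llm = Llama(
    model_path=MODEL_PATH,
    n_ctx=2048,    # keep the context modest on a 16GB Pi
    n_threads=4,   # Raspberry Pi 5 has 4 cores
    verbose=False,
)

prompt = "Explain, in two sentences, why quantization speeds up CPU inference."

start = time.perf_counter()
out = llm(prompt, max_tokens=256)
elapsed = time.perf_counter() - start

gen_tokens = out["usage"]["completion_tokens"]
print(out["choices"][0]["text"].strip())
# Rough decode throughput; prompt processing time is included in the
# measurement, so keep the prompt short relative to max_tokens.
print(f"{gen_tokens / elapsed:.2f} tokens/sec")
```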

Sources:

> Dec 16 - [Hillan Klein Named New Namecheap CEO](https://domaininvesting.com/hillan-klein-named-new-namecheap-ceo/)

Full thread exposing the Zionist links of the new CEO: https://xcancel.com/Archivepaletc/status/2007161924937552128



NVIDIA started to discontinue its GeForce RTX 3060 GPUs back in 2024. The original lineup, introduced in 2021, is still the most popular gaming graphics card on Steam, and while the 4060 & 5060 are picking up the pace, it looks like NVIDIA might once again open up production lines for this GPU. This indicates the extent to which the DRAM shortages have affected consumer GPUs. The GeForce RTX 5060 uses GDDR7 memory, so as DRAM costs rise it may be affected not only in pricing but also in availability, since procuring the memory itself has become difficult. The 60-series product family is made for mass consumption, so NVIDIA will have to offer some alternative to its partners.

This paper asks whether LLMs can estimate the probability of their own success before they start solving a task, and whether these estimates become more accurate as the work progresses. It turns out this is a separate ability, and a poorly developed one. The authors test it across three different scenarios, ranging from single-step problems to multi-step agentic processes.

First, they use BigCodeBench, a set of 1,140 single-step Python tasks. For each task, the model is asked in advance to state the probability that it will succeed, and only then does it actually attempt to solve the task. This allows a direct comparison between confidence and real performance. The result is consistent across all models: all of them are systematically overconfident. Predicted success probabilities are consistently higher than actual success rates. Importantly, increasing model capability does not guarantee better self-calibration. For the GPT and LLaMA families, calibration does not meaningfully improve with scale. Within the Claude family there is some reduction in overconfidence, but it never disappears. On average, models can distinguish easier tasks from harder ones better than chance. In other words, they have some sense of relative difficulty, but their absolute confidence remains inflated.

The second experiment introduces a more realistic setting: contracts with risk. The model receives a sequence of nine tasks. Each success earns +1, each failure costs −1. Before each task, the model must decide whether to accept or decline the contract, based on its predicted probability of success. The tasks are chosen so that the success probability is roughly 50/50, so blindly accepting everything does not yield an advantage. Here the core issue becomes clear. Even after a series of failures, models continue to believe that the next task will succeed. Their subjective probability of success stays above 0.5, despite the evidence. Some models (notably Claude Sonnet and GPT-4.5) do end up earning more, but not because they become better at judging which tasks they can solve. Instead, they simply accept fewer tasks overall, becoming more risk-averse. Their gains come from declining more often, not from better self-assessment. The authors also check whether the models' decisions are rational given their own stated probabilities, and they largely are. The problem is not decision-making; it is that the probabilities themselves are too optimistic.

The third experiment is the most relevant for agentic systems. Using SWE-Bench Verified, the authors evaluate real multi-step tasks involving tools. Models are given budgets of up to 70 steps. After each step, the model is asked to estimate the probability that it will ultimately complete the task successfully. For most models, overconfidence does not decrease, and for some it actually increases as the task unfolds. Claude Sonnet shows this particularly clearly: confidence rises during execution even when final success does not become more likely. Among all tested models, only GPT-4o shows a noticeable reduction in overconfidence over time. Notably, so-called reasoning models do not show an advantage in self-assessment. The ability to reason for longer does not translate into the ability to accurately judge one's chances of success.

The overall conclusion of the paper is blunt: LLMs are already fairly good at solving tasks, but still poor at understanding the limits of their own capabilities. They can act, but they cannot reliably tell when they are likely to fail.
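To make the kind of measurement concrete (this is not the paper's code, just a minimal sketch with made-up numbers): the overconfidence gap is simply the mean stated probability minus the observed success rate, and a Brier score captures how far each individual prediction was from its outcome.

```python
# Illustrative only: invented numbers, not data from the paper.
# For each task, the model states a success probability up front;
# after attempting the task we record whether it actually succeeded.
predicted = [0.9, 0.8, 0.95, 0.7, 0.85, 0.6, 0.9, 0.75]  # stated beforehand
succeeded = [1,   0,   1,    0,   0,    1,   0,   1]      # observed outcome

n = len(predicted)
mean_conf = sum(predicted) / n
success_rate = sum(succeeded) / n
print(f"mean stated confidence: {mean_conf:.2f}")
print(f"actual success rate:    {success_rate:.2f}")
print(f"overconfidence gap:     {mean_conf - success_rate:+.2f}")

# Brier score: mean squared error between stated probability and outcome.
# Lower is better; a well-calibrated model has both a small gap and a low score.
brier = sum((p - s) ** 2 for p, s in zip(predicted, succeeded)) / n
print(f"Brier score:            {brier:.2f}")
```

A model can have decent discrimination (ranking easy tasks above hard ones) while still showing a large positive gap like this, which is exactly the pattern the paper reports.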

I hadn't heard of this before, but thought it looked interesting; I was wondering if anyone else had seen it or had thoughts about it? It was initially envisioned by Sir Tim Berners-Lee, who created the WWW in 1989 and "urges a decentralized web to counter AI exploitation and ad-driven abuse".

More info here about Berners-Lee's thoughts: https://www.techspot.com/news/109661-tim-breners-lee-urges-decentralized-web-counter-ai.html

From the website:

> Imagine having your own online storage, which you control. You store information once and decide who can access what, when you need services like mortgage applications or medical care. This is what Solid can do.
>
> It's a bit like carrying all your data in a rucksack (backpack) with lots of pockets. To access the data, different apps can only open the pocket you allow them to open, rather than taking the whole rucksack. The rest stays private.
>
> Solid lets people take control of their data and combine it to achieve new results. It gives creators new collaborative tools while passing power back to users. It's technology that returns the web to its original vision of serving people.




This paper basically shows that treating the prompt as an external variable is a surprisingly effective way to handle massive contexts. The authors argue that instead of shoving ten million tokens directly into the model and hoping for the best, we should put the text into a Python REPL environment where the model can interact with it programmatically. This setup allows the LLM to write code that slices the text into manageable chunks and recursively calls new instances of itself to process those pieces individually. It is essentially the same logic as out-of-core algorithms, which process datasets far larger than the available memory by fetching only what is needed at any given moment.

One of the most interesting parts of the study is how it exposes the reality of context rot in frontier models like GPT-5. The results show that while base models handle simple needle-in-a-haystack tasks just fine, they fall apart completely on information-dense tasks that require aggregating data across the entire input. For example, on the OOLONG-Pairs benchmark, which has quadratic complexity, the base GPT-5 model scores less than 0.1 percent accuracy once the context gets long enough. Meanwhile, the recursive language model manages to hold steady even up to a million tokens and achieves a 58% score on that same difficult task.

It turns out that for retrieval tasks like CodeQA, simply having the REPL to grep through files was enough to beat the base model, because the model could filter data before reading it. The recursive capability turned out to be essential for reasoning tasks like OOLONG, where the model needs to process every line. The version of the system that could not make subcalls performed significantly worse because it could not offload the thinking process to fresh contexts and prevent its own window from getting polluted. Since the model writes code to filter the text using tools like regex before it actually reads anything, it processes fewer tokens on average than a summary agent that is forced to read everything to compress it. The only downside is that the variance can be pretty wild, since the model sometimes gets stuck in a loop or decides to verify its own answer multiple times in a row, which blows up the compute cost for that specific run.

We are clearly seeing a shift where inference-time compute and smart context management are becoming more important than just having a massive raw context window. The fact that this method beats retrieval-based agents on deep research tasks suggests that giving the model a loop to think and code is the future for tasks that need a large persistent context.
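The chunk-and-recurse skeleton is easy to sketch. Below is a toy illustration, not the paper's implementation: `llm` stands in for whatever model call you have available, and the chunk size, prompts, and aggregation step are all placeholder choices.

```python
from typing import Callable

CHUNK_CHARS = 20_000  # placeholder budget for what fits in a single direct call

def recursive_answer(text: str, question: str, llm: Callable[[str], str]) -> str:
    """Answer `question` over `text`, recursing when the text is too large.

    Each recursive call gets a fresh context containing only its own slice,
    so no single call ever has to hold the full input.
    """
    if len(text) <= CHUNK_CHARS:
        # Base case: small enough to hand to the model directly.
        return llm(f"Context:\n{text}\n\nQuestion: {question}\nAnswer:")

    # Recursive case: split into slices and answer each one in a fresh sub-call...
    chunks = [text[i:i + CHUNK_CHARS] for i in range(0, len(text), CHUNK_CHARS)]
    partials = [recursive_answer(chunk, question, llm) for chunk in chunks]

    # ...then aggregate the partial answers in one final call.
    joined = "\n".join(f"- {p}" for p in partials)
    return llm(
        f"Partial answers from different slices of a long document:\n{joined}\n\n"
        f"Combine them into a single answer to: {question}\nAnswer:"
    )
```

In the paper's setup the model writes this kind of code itself inside the REPL, and it can grep or regex-filter the text before reading any of it; the sketch above only captures the recursion, not that filtering step.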







The Post-American Internet
From the article:

> Rich, powerful people are, at root, solipsists. The only way to amass a billion dollars is to inflict misery and privation on whole populations. The only way to look yourself in the mirror after you've done that, is to convince yourself that those people don't matter, that, in some important sense, they aren't real.




Here's the corresponding YouTube video by Benn Jordan: https://www.youtube.com/watch?v=vU1-uiUlHTo


Europe faces a critical dependency on US cloud infrastructure, with 90% of its digital infrastructure controlled by American companies, according to competition expert Cristina Caffarra[^1]. This vulnerability has spurred concrete action, with public institutions in Austria, Germany, France and the International Criminal Court moving away from US providers.

The core issue stems from the US CLOUD Act of 2018, which allows American authorities to access data held by US companies regardless of location, conflicting directly with EU privacy laws[^1]. This creates an "irreconcilable legal conflict", since any contract between European customers and US cloud providers is subordinate to US federal law.

Several key developments highlight this shift:

- Austria's Federal Ministry for Economy completed migration of 1,200 employees to the European open-source platform Nextcloud[^1]
- The International Criminal Court is replacing Microsoft office software with OpenDesk after its chief prosecutor was locked out of Outlook[^1]
- Germany's Schleswig-Holstein state has moved 24,000 civil servants to open-source alternatives[^1]

However, challenges remain. The acquisition of Dutch cloud provider Solvinity by US-based Kyndryl demonstrates how European alternatives can be undermined through foreign acquisition[^1]. Critics also warn about "sovereignty washing," where US hyperscalers market 'sovereign cloud' solutions that don't resolve the fundamental legal conflicts[^1].

[^1]: [The Register - Europe gets serious about cutting digital umbilical cord](https://www.theregister.com/2025/12/22/europe_gets_serious_about_cutting/)












tl;dr: they successfully reached orbit, but failed to land


A tape-based piece of unique Unix history may have been lying quietly in storage at the University of Utah for 50+ years. The question is whether researchers will be able to take this piece of middle-aged media and rewind it back to the 1970s to get the data off.

See also: https://archive.org/details/utah_unix_v4_raw

TAR file: http://squoze.net/UNIX/v4/


    This is the official technology community of Lemmy.ml for all news related to the creation and use of technology, and to facilitate civil, meaningful discussion around it.


    Ask in DM before posting product reviews or ads; such posts are otherwise subject to removal.


    Rules:

    1: All Lemmy rules apply

    2: No low-effort posts

    3: NEVER post naziped*gore stuff

    4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.

    5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)

    6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

    7: crypto related posts, unless essential, are disallowed
