








Over the past decade, the AI industry has come to exert unprecedented economic, political and societal power and influence. It is therefore critical that we comprehend the extent and depth of the pervasive and multifaceted capture of AI regulation by corporate actors in order to contend with and challenge it. In this paper, we first develop a taxonomy of mechanisms enabling capture to provide a comprehensive understanding of the problem. Grounded in design science research (DSR) methodologies and an extensive scoping review of existing literature and media reports, our taxonomy of capture consists of 27 mechanisms across five categories. We then develop an annotation template incorporating our taxonomy, and manually annotate and analyse 100 news articles. The purpose behind this analysis is twofold: to validate our taxonomy and to provide a novel quantification of capture mechanisms and dominant narratives. Our analysis identifies 249 instances of capture mechanisms, often co-occurring with narratives that rationalise such capture. We find that the most recurring categories of mechanisms are Discourse & Epistemic Influence, concerning narrative framing, and Elusion of law, related to violations and contentious interpretations of antitrust, privacy, copyright and labour laws. We further find that Regulation stifles innovation, Red tape and National Interest are the most frequently invoked narratives used to rationalise capture. We emphasize the extent and breadth of regulatory capture by coalescing forces -- Big AI and governments -- as something policy makers and the public ought to treat as an emergency. Finally, we put forward key lessons learned from other industries along with transferable tactics for uncovering, resisting and challenging Big AI capture, as well as for envisioning counter-narratives. **Full paper**: [PDF](https://arxiv.org/pdf/2605.06806) | [HTML](https://arxiv.org/html/2605.06806v1) | [TeX source](https://arxiv.org/src/2605.06806)


AI companion apps have a hidden pricing problem nobody talks about
Most AI companion platforms advertise $9.99 or $12.99 per month, but the real monthly cost for an active user is 2-5x that once token systems kick in. One major platform I tested (tracking every transaction for 30 days) advertises $12.99, yet regular users end up spending $25-60 monthly once image generation and voice tokens are factored in. The subscription price is the floor, not the ceiling, on most platforms. The ones with genuinely flat pricing, where what you see is what you pay, are rare. Full breakdown: medium.com/@companaya/i-spent-500-testing-ai-companion-apps-real-monthly-costs-revealed-2026-8a6c0532778d



America’s Air Superiority Is Losing Altitude
https://archive.ph/1eBHM













China claims to have developed the world’s first AI-designed processor — LLM turned performance requests into CPU architecture
QiMeng is an AI system that designs entire processor chips end to end, from natural-language spec to physical layout. Its QiMeng-CPU-v1 produced a 32-bit RISC-V CPU matching Intel 486 performance, with over four million logic gates, in just five hours. QiMeng-CPU-v2 rivals an Arm Cortex-A53 from the 2010s, and the whole thing runs on a domain-specific model that learns the graph structures of circuits the way GPT learns text. The appeal of QiMeng is that this open-source effort has three interconnected layers melding LLM chip-design smarts, a hardware and software design agent, and various chip-design apps. The paper shows that the system can do in days what takes human teams weeks. Paper: https://arxiv.org/pdf/2506.05007
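To make "learns the graph structures of circuits" concrete: a netlist really is just a graph, with gates as nodes and wires as edges. A toy sketch (this is an illustration, not code from the QiMeng paper) of a 1-bit full adder represented and evaluated as such a graph:

```python
# A 1-bit full adder as a gate graph: each node names its gate type
# and its input wires. This is the kind of structure a circuit model
# would learn to generate, scaled up by several orders of magnitude.
GATES = {
    "x1":   ("XOR", ["a", "b"]),
    "sum":  ("XOR", ["x1", "cin"]),
    "a1":   ("AND", ["a", "b"]),
    "a2":   ("AND", ["x1", "cin"]),
    "cout": ("OR",  ["a1", "a2"]),
}

OPS = {
    "XOR": lambda x, y: x ^ y,
    "AND": lambda x, y: x & y,
    "OR":  lambda x, y: x | y,
}

def evaluate(node, inputs):
    """Recursively evaluate one output node of the gate graph."""
    if node in inputs:          # primary input wire
        return inputs[node]
    op, args = GATES[node]
    return OPS[op](*(evaluate(a, inputs) for a in args))
```

Evaluating `sum` and `cout` over all eight input combinations reproduces the full-adder truth table, which is exactly the functional check a design flow would run against a generated netlist.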




Real talk: last month I was running a giveaway campaign for a client. The mechanic was simple: comment to enter, tag a friend for a bonus entry. 3,200 comments later, I was staring at a blank Google Sheet wondering how I was going to verify entries, remove duplicates, and pick a winner without losing my mind.

Instagram doesn't give you any export functionality. Zero. You can view comments in the app, you can reply, you can delete, but you cannot export them in any structured way. This is apparently a deliberate product decision, and it's been this way for years.

What I tried first:

- Manually copy-pasting: obviously not scalable past ~50 rows
- The official Instagram Graph API: requires app review, business account verification, and only returns data from your own posts anyway
- Third-party "Instagram data export" services: most of these ask for your password or OAuth credentials, which is a non-starter

What actually worked: I ended up using a browser extension called [Instagram Comments Scraper](https://chromewebstore.google.com/detail/instagram-comments-scrape/hpfnaodfcakdfbnompnfglhjmkoinbfm) that runs entirely within your browser session. No password required: it just operates within your existing logged-in session, the same way you're already viewing the comments. The data is processed locally and never sent anywhere external.

The output columns it gives you: comment ID, comment text, username, profile URL, profile pic URL, and timestamp. That's exactly what you need for any meaningful analysis: filter by date, spot bot accounts, remove duplicates, identify authentic entries.

The rate limiting situation: the part I didn't expect was how Instagram's rate limits work. There's no published threshold; it varies by IP and activity patterns. When the scraper hits a limit, it enters a cooldown mode automatically (the timer shows you how long), then doubles the cooldown if the limit persists. Once the cooldown clears and a request succeeds, it goes back to normal.
This meant I could walk away and come back to a finished export rather than babysitting it.

End result: 3,200 comments exported to Excel in about 40 minutes of unattended processing, then filtered to valid entries (tagged a user + original commenter had 10+ followers) in another 20 minutes using basic Excel formulas.

A caveat for anyone doing this: be reasonable about volume and timing. Don't run 10,000-comment scrapes back-to-back on the same IP. The human-like delay system in the tool helps, but bulk scraping in one long session still carries some account risk. Space it out if you're working with large datasets.

Anyone else found better approaches to this problem? Especially curious whether anyone's had success with the official API for use cases beyond your own posts.
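The cooldown behaviour described above is classic exponential backoff. A minimal Python sketch of the same pattern (the `fetch_page` callback and `RateLimited` exception are hypothetical stand-ins; the extension's actual internals aren't public):

```python
import time

class RateLimited(Exception):
    """Raised when the platform signals a rate limit."""

def export_comments(fetch_page, base_cooldown=30.0, max_cooldown=960.0):
    """Fetch pages of comments until exhausted, backing off on limits.

    fetch_page() returns a list of comments for the next page, an empty
    list when done, or raises RateLimited when throttled.
    """
    cooldown = base_cooldown
    comments = []
    while True:
        try:
            page = fetch_page()
        except RateLimited:
            time.sleep(cooldown)                          # enter cooldown mode
            cooldown = min(cooldown * 2, max_cooldown)    # double if limit persists
            continue
        cooldown = base_cooldown                          # success resets the timer
        if not page:
            return comments
        comments.extend(page)
```

The cap on the cooldown keeps a persistent limit from stalling the run indefinitely, and resetting on success is what lets a long export finish unattended.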



A GGUF port of DFlash speculative decoding. Standalone C++/CUDA stack on top of ggml; runs on a single 24 GB RTX 3090 and hosts the new Qwen3.6-27B. ~1.98x mean speedup over autoregressive decoding on Qwen3.6 across HumanEval / GSM8K / Math500, with zero retraining.

If you have CUDA 12+ and an NVIDIA GPU like an RTX 3090 / 4090 / 5090, all you need to do is clone the repo, then:

```
cd lucebox-hub/dflash
cmake -B build -S . -DCMAKE_BUILD_TYPE=Release
cmake --build build --target test_dflash -j

# fetch target (~16 GB)
hf download unsloth/Qwen3.6-27B-GGUF Qwen3.6-27B-Q4_K_M.gguf --local-dir models/

# matched 3.6 draft is gated: accept terms + set HF_TOKEN first
hf download z-lab/Qwen3.6-27B-DFlash --local-dir models/draft/

# run
DFLASH_TARGET=models/Qwen3.6-27B-Q4_K_M.gguf python3 scripts/run.py --prompt "def fibonacci(n):"
```

That's it. No Python runtime in the engine, no llama.cpp install, no vLLM, no SGLang. Luce DFlash will:

1. Load the Qwen3.6-27B Q4_K_M target weights (~16 GB) plus the matched DFlash bf16 draft (~3.46 GB) and run DDTree tree-verify speculative decoding (block size 16, default budget 22, greedy verify).
2. Compress the KV cache to TQ3_0 (3.5 bpv, ~9.7x vs F16) and roll a 4096-slot target_feat ring so 256K context fits in 24 GB. Q4_0 is the legacy path and tops out near 128K.
3. Auto-bump the prefill ubatch from 16 to 192 for prompts past 2048 tokens (~913 tok/s prefill on 13K prompts).
4. Apply sliding-window flash attention at decode (default 2048-token window, 100% speculative acceptance retained) so 60K context still decodes at 89.7 tok/s instead of 25.8 tok/s.
5. Serve over an OpenAI-compatible HTTP endpoint or a local chat REPL.

Running on an RTX 3090 with the Qwen3.6-27B UD-Q4_K_XL (unsloth Dynamic 2.0) target, 10 prompts/dataset, n_gen=256:

| Bench | AR tok/s | DFlash tok/s | AL | Speedup |
|---|---|---|---|---|
| HumanEval | 34.90 | 78.16 | 5.94 | 2.24x |
| Math500 | 35.13 | 69.77 | 5.15 | 1.99x |
| GSM8K | 34.89 | 59.65 | 4.43 | 1.71x |
| Mean | 34.97 | 69.19 | 5.17 | 1.98x |
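For anyone new to speculative decoding, the core draft-then-verify loop can be sketched in a few lines. This is a toy illustration of the plain greedy variant with stand-in model functions, not DFlash's DDTree tree-verify implementation:

```python
def speculative_step(target_next, draft_next, ctx, k=16):
    """One block of greedy speculative decoding.

    The small draft model proposes k tokens autoregressively; the large
    target model then verifies them, accepting the longest agreeing
    prefix. On the first disagreement the target's own token is taken;
    if everything agrees, the target's check yields one bonus token.
    target_next / draft_next map a context tuple to a greedy next token
    (toy stand-ins for real model calls).
    """
    # 1. Draft k tokens autoregressively with the cheap model.
    proposal, c = [], list(ctx)
    for _ in range(k):
        t = draft_next(tuple(c))
        proposal.append(t)
        c.append(t)

    # 2. Target verifies the proposal (one batched pass in a real engine).
    accepted, c = [], list(ctx)
    for t in proposal:
        want = target_next(tuple(c))
        if want == t:
            accepted.append(t)          # draft and target agree: accept
            c.append(t)
        else:
            accepted.append(want)       # first mismatch: take target's token
            break
    else:
        accepted.append(target_next(tuple(c)))  # all accepted: free bonus token
    return accepted
```

The speedup comes from the target verifying a whole block in one forward pass instead of k sequential passes; the AL column in the benchmark above is the mean number of tokens accepted per such block.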


Slovenia's Foreign Minister Tanja Fajon said she regretted the government's move not to join South Africa's genocide case against Israel at the ICJ, claiming that external "pressure" had contributed to the decision. Slovenia decided against participating due to "security risks". While Prime Minister Robert Golob had initially been inclined to give the proposal the green light, he was ultimately swayed against doing so by national security officials, local media reported. **They reportedly cautioned that joining the lawsuit could jeopardise Slovenia's national security, noting that many of the country's cyber defence systems are of Israeli origin.**





    This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


    Ask in DM before posting product reviews or ads. All such posts otherwise are subject to removal.


    Rules:

    1: All Lemmy rules apply

    2: Do not post low-effort posts

    3: NEVER post naziped*gore stuff

    4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.

    5: personal rants about Big Tech CEOs like Elon Musk are unwelcome (does not include posts about their companies' actions affecting a wide range of people)

    6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

    7: crypto related posts, unless essential, are disallowed
