
some people experience crash on linux where stable on windows.
FUD much? I’m not saying it’s not true… but the opposite could just as well be true. So without actual data showing it’s significant, it really doesn’t help in any way; it just creates doubt.
very few games represent the majority of gametime, and a lot of them do not run on linux.
Same, which ones? What’s your dataset?

FWIW MSN is from Microsoft, so IMHO it might be better to link to the original source and, if need be, remind people they can either pay for journalistic content or use services like archive.is which bypass paywalls.
Funny to read this right after finishing an Elden Ring: Shadow of the Erdtree gaming session.
Yes, and so it should. In fact I’m gaming on Linux on:
…and I don’t even think about it. I just play, regardless of the game being AAA or indie, VR or “flat”. It just works.
Brand new example: “Skills” by Anthropic https://www.anthropic.com/news/skills. Even though the audience here is technical, it is still a marketing term. Why? Because the entire phrasing implies agency. There is no “one” getting new skills here. It’s as if, instead of adding bash scripts to my ~/bin directory and saying “the first script will use a regex to start the appropriate script”, I named my process “Theodore” and said I was “teaching” it new “abilities”. It would be literally the same thing, it would be functionally equivalent, and the implementation would actually be identical… but users, specifically non-technical users, would assume that there is more than just branching options. They would also assume errors are just “it” in the process of “learning”.
It’s really a brilliant marketing trick, but it’s nothing more.
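To make the analogy concrete, here is a minimal sketch of such a dispatcher (Python standing in for the bash version; every script name and pattern below is hypothetical):

```python
import re
import subprocess
from pathlib import Path

# "Theodore" is just a regex table routing requests to ordinary scripts
# in ~/bin. Script names and patterns are made up for illustration.
BIN = Path.home() / "bin"

SKILLS = [
    (re.compile(r"\bresize\b.*\bimage\b", re.I), "resize_image.sh"),
    (re.compile(r"\bbackup\b", re.I), "backup_home.sh"),
    (re.compile(r"\bweather\b", re.I), "fetch_weather.sh"),
]

def dispatch(request: str) -> None:
    for pattern, script in SKILLS:
        if pattern.search(request):
            subprocess.run([str(BIN / script), request], check=True)
            return
    print("No matching script: 'Theodore' has not 'learned' that 'skill' yet.")

if __name__ == "__main__":
    dispatch("please resize this image to 800px")
```

Adding a line to the table is all the “teaching” there is.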
The word “hallucination” itself is a marketing term. Being frequently used in the technical literature does not make it free of problems. It’s used because it highlights a problem (namely that some of the output of LLMs is not factually correct), but the very name is wrong. Hallucination implies there is someone, perceiving and with a world model, who, typically via heuristics (efficient interfaces, as Donald Hoffman suggests), perceives incorrectly, leading to bad decisions about the problem currently being solved.
So… sure, “it” (trying not to use the term) is structural, but that is simply because LLMs have no notion of veracity or truth (or of anything else, to be clear). They have no simulation against which to verify whether the output they propose (the tokens out, the sentence the user gets) is correct or not; it is solely highly probable given their training data.
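A toy illustration of that last point (vocabulary and probabilities entirely made up): generation is sampling from a distribution over next tokens, and nothing in the loop checks the claim against the world.

```python
import random

# Toy next-token distribution after the prompt "The capital of Australia is".
# These probabilities are invented for illustration; a real model's come from
# its training data, not from any check against the world.
next_token_probs = {
    "Canberra": 0.55,
    "Sydney": 0.40,    # plausible, frequent in text, and wrong
    "Melbourne": 0.05,
}

tokens, weights = zip(*next_token_probs.items())
print(random.choices(tokens, weights=weights)[0])
# ~40% of the time this prints "Sydney": not a perception gone wrong,
# just a probable continuation. There is no veracity term anywhere.
```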

Original article https://www.washingtonpost.com/technology/2025/10/30/tp-link-proposed-ban-commerce-department/ rather than the MSN aggregator. No idea why people link to such places.

Still watching it but this shouldn’t be surprising.
The whole point of US policy was to isolate China from the “AI revolution” by depriving it of top-of-the-line chips.
Meanwhile China has been building the entire world’s electronics ecosystem, bar a few very specific high-end components, leaving those to TSMC, ASML, etc., and most of the design to the US.
Even before tariffs and sales bans (due to dual-use concerns), China already had a chip-independence plan dating back to at least 2000. Since then, close to the entire world moved production there, at least assembly, and most deals to do so included, or tried to include, IP transfer and at the very least learning alongside the partner; whether there was more than that would be speculation, adding industrial espionage on top (even though there is plenty of news on the topic).
So… sure, it’s happening. Now, the question I have asked in such threads countless times is basically: what’s the yield?
Because producing one board to send to a tester is already an incredible feat, but that doesn’t mean thousands or even millions can be produced. And even if they can, that still doesn’t mean they can be produced economically efficiently, regardless of subsidies.
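Back-of-the-envelope, with every number invented for illustration, because yield compounds straight into the cost of each good chip:

```python
# Hypothetical numbers: wafer cost, dies per wafer, and yields are made up.
wafer_cost = 10_000        # cost to process one wafer
dies_per_wafer = 500

for yield_rate in (0.9, 0.5, 0.1, 0.001):
    good_dies = dies_per_wafer * yield_rate
    cost_per_good_die = wafer_cost / good_dies
    print(f"yield {yield_rate:>6.1%}: {good_dies:7.1f} good dies, "
          f"cost per good die {cost_per_good_die:10.2f}")

# At 0.1% yield the "board for the tester" costs 20,000 per die; at 90% it
# costs ~22. Same fab, same demo, wildly different economics.
```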
PS: most interesting book on the topic IMHO: https://en.wikipedia.org/wiki/Chip_War
common intelligence at scale
I mean, not even… sure it can surprise you on some stuff you know little about, sure it can regurgitate random parts of an encyclopedia and might even not be wrong about it… but it can easily be “outsmarted” by a 5yo on some of the most basic and random questions; it only has to be outside of its dataset. That’s not intelligence.
Thanks, but it doesn’t seem to work: 1st link 403, 2nd link no play button (and the download is audio only), 3rd link loads but never plays, 4th link doesn’t play at all and the download doesn’t work. Again, I appreciate alternatives, but IMHO sharing YouTube links, so BigTech links, on Lemmy isn’t great. We should rely on federated alternatives for videos too.
Edit: I did disable JShelter just for this (because of Anubis, ironically enough given the video content!) but it still didn’t work. So to be honest, even if it did work (which it didn’t), it would still not be great.

Humanoid robots are only hyped because GenAI is hyped, and GenAI is hyped because Altman and his fellow conmen are scamming the world out of resources on the promise of AGI because “Scale Is All You Need”, a very convenient trope for VCs as they have no other idea besides dominating a market by scaling.
So… if the GenAI bubble does pop (and the HBR article on workslop lets me hope that it has started), then I don’t see how humanoid robots would not. I also imagine manufacturers are less prone to hype because they have had robots for decades already. Even some services like hospitals do have robots for cleaning, delivery, etc. Sure, they don’t look humanoid, but it’s still a baseline to compare against in terms of performance and price.
Edit: if you want to explore the OSHW FLOSS side of this, check https://github.com/huggingface/lerobot but my own understanding is that there is no radical progress. Sure, we might be inching toward “solving” robotics, but nothing has changed except a few components getting a bit cheaper thanks to smartphones, then in turn drones, and obviously GPUs.
Edit2: if you don’t feel like reading the article, just watch the 2 videos by Roland Johansson on a normal hand vs a hand without touch; it’s fascinating.
FWIW could be done via https://github.com/openfoodfacts
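For example, a minimal sketch against their public read API (the barcode is the example one from their docs; exact field availability varies per product):

```python
import json
import urllib.request

# Example barcode from the Open Food Facts documentation.
barcode = "737628064502"
url = f"https://world.openfoodfacts.org/api/v0/product/{barcode}.json"

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

if data.get("status") == 1:
    product = data["product"]
    print(product.get("product_name"))
    print(product.get("nutriscore_grade"))  # not every product has every field
else:
    print("Product not found")
```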
Going to play devil’s advocate here, but in theory it’s not necessarily bad, namely it could display genuinely useful information,
so… honestly the “smart” part can potentially be useful to the user.
The problem is not really the why IMHO, but rather the how, because sadly I have zero trust that it will be done solely for the benefit of the user. Which is why I will not buy a proprietary version. If I could get an OSHW one with e.g. eInk and HomeAssistant and/or GadgetBridge support, I just might; until then I’m in no rush.

Can’t wait for this to be generalized so that Easy Anti-Cheat or PunkBuster can tell EA or Bethesda to lock your GPU because their faulty launcher detected you trying to play offline twice.
Initially, preventing “bad guys” (really big quotes here) from doing “bad things” (AFAICT it’s mostly lame LLMs, not actually dangerous military stuff; for those they already have supercomputers allocated) sounds like a good idea… until it inexorably trickles down (unlike money and power) to citizens worldwide.
I don’t think remotely controlled CPUs or GPUs can end well for citizens. It won’t be a PC as in Personal Computer, rather a remotely controlled terminal for whomever is in power.

why cell phones don’t authenticate the towers they connect to.
I believe it’s because they assumed it wasn’t necessary, since until now it wasn’t
… so I imagine there was no authentication because there was no practical threat besides a few “fun” examples at CCC or DEF CON.
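The asymmetry is easy to sketch. In classic GSM the network challenges the SIM, but the phone never challenges the network; below is a stand-in (HMAC instead of the real operator A3 algorithm, key invented):

```python
import hashlib
import hmac
import os

# Stand-in for the secret Ki shared by the SIM and the home network.
# Real GSM uses operator-specific A3/A8 algorithms, not HMAC-SHA256.
Ki = b"hypothetical-shared-secret"

def sim_response(rand: bytes) -> bytes:
    """What the SIM returns when challenged (SRES in GSM terms)."""
    return hmac.new(Ki, rand, hashlib.sha256).digest()[:4]

# Network -> phone: a random challenge. The phone proves itself...
rand = os.urandom(16)
sres = sim_response(rand)
print("network verifies phone:", hmac.compare_digest(sres, sim_response(rand)))

# ...but nothing flows the other way: any transmitter can send a RAND, so a
# fake tower passes for a real one. 3G/4G added mutual authentication (AUTN),
# which is exactly the missing half of this handshake.
```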

move somewhere I can get around with just a bicycle.
So… FWIW that’d be Brussels, and I bet most European cities. By bike you can get your food in, get to the nearby Brico to fix pretty much anything in your house, get deliveries via the national post service, and you can even cycle all the way to the airport (if somehow you don’t want to take the train), park there and get… well, pretty much anywhere else in the world.

conjure up an email summary within seconds that can shave off up to 5 whole minutes
… but can it? Like actually, can one do that?
Sure, an LLM can generate something akin to a summary. It will look like it captures some of the points in a different form… but did it get the actual gist of it? Did it skip anything meaningful that, once ignored, will have far-reaching consequences?
So yes, sure, an LLM can generate shorter text related to what was said during the meeting, but if there is limited confidence in the accuracy and no responsibility, unlike somebody who takes notes and summarizes while potentially facing negative consequences, then wouldn’t relying on such a tool create more risk?

Well, this next example isn’t about phones but e-bikes. Unfortunately, unwise me bought a fancy designer bike made by a national startup (CowBoy, to name and shame them) and I’m now stuck with a fancy metal frame on wheels because the belt is not in stock. Ordered in February, supposed to arrive 60 days later, I’m still waiting; not even an email received, nothing, and it’s now late June.
So… yes, my next e-bike will be very VERY boring, in the sense of relying on a build that has easy-to-source replacement parts.
Yes, it did take a first relatively large mistake (even though I had already used that bike daily for years), but that’s what I meant by “only works once”. You try, make a painful mistake, don’t repeat it.

you might be able to get a replacement battery for your 200€ phone, but having to pay 200€ for it.
On the assumption that consumers are somehow rational and have some memory, that “trick” only works once.
Next time a consumer gets stuck with a practically irreplaceable battery because it’s too expensive from one company, they will look at other companies selling equivalent products, AND at how much those charge for batteries. I also imagine a spare-parts business emerging, because merely having to provide the right data, e.g. specifications like cell, module, pack, C-rate, E-rate, SOC, DOD, voltage, capacity, energy, cycle life, but also connectors and physical size, will probably open the door to dedicated spare-part vendors.
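Something as simple as a published spec sheet per battery model would be enough to build against; a hypothetical sketch using the fields above:

```python
from dataclasses import dataclass

@dataclass
class BatterySpec:
    """Hypothetical spec sheet a vendor could publish per battery model;
    field names follow the parameters listed above."""
    model: str
    chemistry: str             # e.g. "Li-ion NMC"
    voltage_nominal_v: float
    capacity_mah: int
    energy_wh: float
    c_rate_discharge: float    # max continuous discharge, in C
    e_rate_charge: float       # max charge rate
    cycle_life: int            # rated cycles at e.g. 80% DOD
    connector: str             # e.g. "4-pin JST, 1.25 mm pitch"
    dimensions_mm: tuple       # (length, width, thickness)

# With this published, a spare-part vendor can answer "does it fit and is it
# safe?" without the original manufacturer's cooperation.
spec = BatterySpec("XY-100", "Li-ion NMC", 3.85, 3000, 11.55,
                   2.0, 1.0, 800, "4-pin JST, 1.25 mm pitch", (60, 40, 5))
print(spec)
```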

Sad but unsurprising.
I did read quite a lot on the topic, including “Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass.” (2019) and saw numerous documentaries e.g. “Invisibles - Les travailleurs du clic” (2020).
What I find interesting here is that it seems the tasks go beyond dataset annotation. In a way it is still annotation (as in you take data in, e.g. a photo, and you circle part of it to apply a label, e.g. “cat”), but here it seems to be second-order, i.e. what are the blind spots in how this dataset is handled. It still doesn’t mean anything produced is more valuable, or that the expected outcome is feasible with solely larger datasets and more compute, yet maybe it does show a change in the quality of the tasks to be done.
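The difference is easy to show with made-up records: first-order work produces labels, second-order work produces judgments about the labels and the pipeline itself.

```python
# Hypothetical records; all field names and values invented for illustration.

# First-order annotation: data in, label out.
first_order = {
    "image": "IMG_0042.jpg",
    "region": [120, 80, 310, 260],  # box around the animal
    "label": "cat",
}

# Second-order task: judging how the dataset and model are handled.
second_order = {
    "dataset": "pets-v3",
    "finding": "dark-fur cats under-represented; model confuses them with dogs",
    "suggested_fix": "collect more low-light samples before the next run",
}
```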