








You gotta love it when people start commenting on a topic they have no clue about. There is no reentry; this is a low-flying missile. The whole point of it is that it’s a loitering missile that can fly around for months on end. That’s the whole reason for the panic in NATO: it’s not possible to track it at all. Time for you to stop embarrassing yourself in public.
AI is being developed in China in a very different way from the west because the material conditions in China are different. https://dialecticaldispatches.substack.com/p/the-ghost-in-the-machine
The reason Chinese companies release LLMs as open source isn’t actually confusing either. It’s not being treated as a product, but rather as foundational technology that things will be built on top of. Think of it the same way as the current open source infrastructure that underpins the internet: most companies aren’t trying to monetize Linux directly; rather, they use it as a foundation to build actual products on.
However, dragging the US into a tech race it can’t win is also a factor whether it’s done intentionally by China or not. https://dialecticaldispatches.substack.com/p/the-ai-race-isnt-about-chips-its










Frankly, I’ve never really understood the logic of bailouts. If a company is not solvent but is deemed strategically important, then the government should simply take a stake in it. That’s what would happen on the private markets, with another company buying it out. The whole notion that the government should just throw money at failing companies with no strings attached is beyond absurd.




Russia actually operates 8 nuclear-powered icebreakers right now, and they’re building more. https://www.thebarentsobserver.com/news/here-comes-yakutia-russias-newest-nuclear-icebreaker/422559




I mean, if you have a verifiable set of steps that builds from the answer back to first principles, that does seem to enable trustworthiness, specifically because it makes it possible for a human to follow the chain and verify it as well. This is basically what underpins the scientific method and how we compensate for the biases and hallucinations that humans have: you have a reproducible set of steps that can be explained and followed. And what they’re building is very useful because it lets you apply this method to many problems where it would’ve simply been too much effort to do manually.
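To make that concrete, here’s a toy sketch of what machine-checkable reasoning could look like. The step format and the checker are purely illustrative (not how any particular system does it), but the idea is the same: the answer only gets trusted if every intermediate step can be re-verified independently.

```python
# Toy illustration (not any specific system): the answer is only trusted if
# every individual step can be re-checked independently, the same way a human
# reviewer would walk the chain from the answer back to first principles.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    claim: str                 # human-readable statement of the step
    check: Callable[[], bool]  # independent procedure that verifies it

def verify_chain(steps: list[Step]) -> bool:
    """Accept the final answer only if every intermediate step checks out."""
    for i, step in enumerate(steps, start=1):
        ok = step.check()
        print(f"step {i}: {step.claim} -> {'ok' if ok else 'FAILED'}")
        if not ok:
            return False
    return True

# Example: verifying that 3 + 4 * 5 = 23 via explicit intermediate results.
steps = [
    Step("4 * 5 = 20", lambda: 4 * 5 == 20),
    Step("3 + 20 = 23", lambda: 3 + 20 == 23),
]
print("answer trusted:", verify_chain(steps))
```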


Cory Doctorow had a good take on this incidentally https://pluralistic.net/2025/10/16/post-ai-ai/


@[email protected] kind of related to your recent post about hallucinations https://lemmy.ml/post/38324318


Think of it this way: the investors are basically like people going to a casino. They start with a bunch of money, and they lose that money over time. That’s what’s happening here. Right now they still haven’t lost enough money to quit playing; they still think they’ll make their investment back. At some point they either run out of money entirely, or they sober up and decide to cut their losses. That’s what’s going to change between now and when the bubble starts to pop. We simply haven’t hit the inflection point where the investors start to panic.
It does, actually.
The economic nightmare scenario is that the unprecedented spending on AI doesn’t yield a profit anytime soon, if ever, and data centers sit at the center of those fears. Such a collapse has come for infrastructure booms past: Rapid construction of canals, railroads, and the fiber-optic cables laid during the dot-com bubble all created frenzies of hype, investment, and financial speculation that crashed markets. Of course, all of these build-outs did transform the world; generative AI, bubble or not, may do the same.
The scale of the spending is absolutely mind-blowing. We’re talking about $400 billion in AI infrastructure spending this year alone, which is like funding a new Apollo program every 10 months. But the revenue is basically pocket change compared to the spending.
As the article notes, the reality check is already happening.
Much is in flux. Chatbots and AI chips are getting more efficient almost by the day, while the business case for deploying generative-AI tools remains shaky. A recent report from McKinsey found that nearly 80 percent of companies using AI discovered that the technology had no significant impact on their bottom line. Meanwhile, nobody can say, beyond a few years, just how many more data centers Silicon Valley will need. There are researchers who believe there may already be enough electricity and computing power to meet generative AI’s requirements for years to come.
The whole house of cards is propped up by this idea that AI will at some point pay for itself, but the math just doesn’t add up. These companies need to generate something like $2 trillion in AI revenue by 2030 to even break even on all this capex, and right now, they’re nowhere close. OpenAI alone is burning through cash like it’s going out of style, raising billions every few months while losing money hand over fist.
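Some back-of-the-envelope arithmetic on those figures. The ~$280 billion inflation-adjusted Apollo cost is my own rough assumption; the $400B/year capex and ~$2T revenue target are the numbers mentioned above.

```python
# Back-of-the-envelope check on the numbers above. The inflation-adjusted
# Apollo cost (~$280B) is my own rough assumption; the $400B/year capex and
# ~$2T revenue target are the figures from the article and comment.

ai_capex_per_year = 400e9   # annual AI infrastructure spend
apollo_cost_today = 280e9   # rough Apollo program cost in today's dollars (assumption)
revenue_target = 2e12       # revenue reportedly needed by 2030 to justify the capex

months_per_apollo = 12 * apollo_cost_today / ai_capex_per_year
print(f"roughly one Apollo program every {months_per_apollo:.1f} months of spending")

print(f"revenue target is {revenue_target / ai_capex_per_year:.0f}x a full year of capex")
```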
I expect that once it’s finally acknowledged that the US is in a recession, that’s going to sober people up and make investors more cautious. The VCs who were happily writing checks based on vibes and potential will start demanding to see actual earnings, and the easy-money environment that’s been fuelling this whole boom will vanish overnight.
When a few big institutional investors get spooked and start quietly exiting their positions, it could trigger a full-blown market panic. At that point we’ll see a classic death spiral: the companies that have been living on investor faith, with no real path to profitability, are going to run out of cash and hit the wall, leading to an extinction-level event in the AI ecosystem.
If tech stocks fall because of AI companies failing to deliver on their promises, the highly leveraged hedge funds that are invested in these companies could be forced into fire sales. This could create a vicious cycle, causing the financial damage to spread to pension funds, mutual funds, insurance companies, and everyday investors. As capital flees the market, non-tech stocks will also plummet: bad news for anyone who thought to play it safe and invest in, for instance, real estate. If the damage were to knock down private-equity firms (which are invested in these data centers) themselves—which manage trillions and trillions of dollars in assets and constitute what is basically a global shadow-banking system—that could produce another major crash.
When that all actually starts happening ultimately depends on how long big investors are willing to keep pouring billions into these companies without seeing any return. I can see at least another year before reality starts setting in, and people realize that they’re never getting their money back.
Again, this is a very US-centred perspective. I’d highly urge you to watch this interview with the Alibaba Cloud founder on how this tech is being approached in China https://www.youtube.com/watch?v=X0PaVrpFD14
You’re such an angry little ignoramus. The GPT-NeoX repo on GitHub is the actual codebase they used to train these models. They also open-sourced the training data, checkpoints, and all the tools.
However, even if you were right that the weights are worthless (which they’re obviously not) and that there are no open projects (there are), the solution would be to develop models from scratch in the open instead of screeching at people and pretending this tech is just going to go away because it offends you personally.
And nobody says LLMs are anything other than Markov chains at a fundamental level. However, just like Markov chains themselves, they have plenty of real-world uses. Some very obvious ones include doing translations, generating subtitles, doing text-to-speech, and describing images for the visually impaired. There are plenty of other uses for these tools.
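For anyone who hasn’t played with one, here’s a minimal word-level Markov chain text generator. It’s a toy, and real LLMs use learned representations over far longer contexts rather than a lookup table, but it shows the basic “predict the next token from what came before” framing.

```python
# Minimal word-level Markov chain text generator. A toy: real LLMs condition
# on far longer contexts with learned representations rather than a lookup
# table, but the "predict the next token from the previous one(s)" framing
# is the same.

import random
from collections import defaultdict

def build_chain(text: str) -> dict[str, list[str]]:
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain: dict[str, list[str]], start: str, length: int = 10) -> str:
    out = [start]
    for _ in range(length):
        options = chain.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the model predicts the next word and the next word follows the last"
chain = build_chain(corpus)
print(generate(chain, "the"))
```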
I love how you presumed to know better than the entire world what technology to focus on. The megalomania is absolutely hilarious. As if all these researchers can’t understand that this tech is a dead end, and it takes the brilliant mind of some Lemmy troll to figure it out. I’m sure your mommy tells you you’re very special every day.
You seem to have a very US-centric perspective on this tech; the situation in China looks quite different. Meanwhile, whether or not you personally think the benefits are outweighed by whatever dangers you envision, the reality is that you can’t put the toothpaste back in the tube at this point. LLMs will continue to be developed. The only question is how that’s going to be done and who will control this tech. I’d much rather see it developed in the open.
It’s worth noting that humans aren’t immune to this problem either. The real solution will be a system that can do reasoning and has a heuristic for figuring out what’s likely to be a hallucination. The reason we’re able to do that is that we interact with the outside world and get feedback when our internal model diverges from it, which allows us to bring it back in sync.
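Here’s a hedged sketch of the kind of feedback loop I mean. `generate_claim` and `external_lookup` are hypothetical stand-ins for a model and an outside source of ground truth (a database, a sensor, a test suite, and so on).

```python
# Hedged sketch of that feedback loop. `generate_claim` and `external_lookup`
# are hypothetical stand-ins for a model and an outside source of ground truth
# (a database, a sensor, a test suite, and so on).

def generate_claim(question: str, belief: dict[str, str]) -> str:
    return belief.get(question, "unknown")

def external_lookup(question: str) -> str:
    ground_truth = {"capital of France": "Paris"}  # stand-in for the outside world
    return ground_truth[question]

def answer_with_feedback(question: str, belief: dict[str, str]) -> str:
    claim = generate_claim(question, belief)
    observed = external_lookup(question)
    if claim != observed:            # internal model diverged from reality
        belief[question] = observed  # bring it back in sync
        return observed
    return claim

belief = {"capital of France": "Lyon"}  # a confidently wrong internal model
print(answer_with_feedback("capital of France", belief))  # prints "Paris"
print(belief)                                             # belief corrected too
```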


The approach China is taking is to invest in all kinds of different approaches and then see what works. I imagine the answer is going to be that different types of energy storage will work best in different situations. Something like gravity storage might be useful for balancing short-term fluctuations in the grid; it can be built anywhere, and it’s very safe.
These things only make sense at very large scale. China has already built some and they’ve approved more projects going forward. https://www.enlit.world/library/china-connects-gravity-storage-and-launches-three-new-projects
I love how you just lie about everything, but we should just take your word on it, bye https://www.mercomindia.com/china-targets-180-gw-of-energy-storage-capacity-by-2027
Concrete is certainly a lot cleaner than lithium mining. Meanwhile, construction of specific things is obviously something you learn to do better over time. For example, here’s how the cost of nuclear reactor construction has dropped in China as they’ve learned from building them. Absolutely incredible that the concept of getting better at doing things through practice escapes you.
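If you want a feel for how that works, here’s an illustrative Wright’s-law style learning curve with made-up numbers (not actual data from the Chinese nuclear build-out): the cost of each unit falls by a fixed fraction every time the cumulative number built doubles.

```python
# Illustrative Wright's-law style learning curve with made-up numbers (not
# actual data from the Chinese nuclear build-out): the cost of each unit
# falls by a fixed fraction every time cumulative output doubles.

import math

def unit_cost(first_unit_cost: float, units_built: int, learning_rate: float = 0.15) -> float:
    """Cost of the nth unit, dropping `learning_rate` per doubling of output."""
    exponent = math.log2(1 - learning_rate)
    return first_unit_cost * units_built ** exponent

for n in (1, 2, 4, 8, 16):
    print(f"reactor #{n}: relative cost {unit_cost(1.0, n):.2f}")
```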

Meanwhile, they are already in production. Hilarious how you didn’t even bother checking for a link newer than 2018 before spewing more drivel. 🤣
Following the start of grid interconnection in September 2023, the 25MW/100MWh EVx GESS in Rudong achieved full interconnection after the completion of the final 4km 35kV overhead power line to a remote end substation, as planned with local state grid authorities.
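For a sense of the scale involved, here’s a quick E = mgh estimate for that 100 MWh capacity; the ~100 m average lift height is my own assumption for illustration.

```python
# Quick scale check for gravity storage using E = m * g * h. The 100 MWh
# capacity is the Rudong figure quoted above; the ~100 m average lift height
# is my own assumption for illustration.

g = 9.81                   # m/s^2
capacity_j = 100e6 * 3600  # 100 MWh expressed in joules
lift_height_m = 100        # assumed average lift height

mass_kg = capacity_j / (g * lift_height_m)
print(f"mass to raise and lower: ~{mass_kg / 1e6:.0f} thousand tonnes")  # ~367k tonnes
```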


It could be that it did happen before, but those were just individual cases, and if the outage wasn’t long it probably wasn’t noteworthy. This time a whole bunch of people were affected all at once for a prolonged period. And you’re likely right that there’s probably a series of states the device can be in, and it makes calls to AWS as it moves through them, so it probably got stuck at a particular stage and couldn’t move forward because it couldn’t talk to the mothership.
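Something like this, purely speculative; the state names and which steps need the cloud are just me mirroring the guess above.

```python
# Purely speculative sketch of that failure mode: a device that walks through
# a series of states and phones home at each transition will just wedge at
# whatever state it's in when the cloud stops answering. The state names and
# the cloud-required set are invented for illustration.

from enum import Enum, auto

class DeviceState(Enum):
    BOOTING = auto()
    REGISTERING = auto()     # needs a cloud round-trip to enter
    SYNCING_CONFIG = auto()  # needs a cloud round-trip to enter
    READY = auto()

CLOUD_REQUIRED = {DeviceState.REGISTERING, DeviceState.SYNCING_CONFIG}

def advance(state: DeviceState, cloud_up: bool) -> DeviceState:
    order = list(DeviceState)
    if state is DeviceState.READY:
        return state
    nxt = order[order.index(state) + 1]
    if nxt in CLOUD_REQUIRED and not cloud_up:
        return state  # stuck: can't move forward without the mothership
    return nxt

state = DeviceState.BOOTING
for _ in range(5):
    state = advance(state, cloud_up=False)  # outage: the device never gets past boot
print(state)  # DeviceState.BOOTING
```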
Yeah, I’m mostly excited about LLMs that can be run locally. And it really does look like there’s a lot more optimization that can happen going forward. Stuff like this is also really exciting. It’d be pretty amazing if we got to the point where models that perform as well as current 600+ billion parameter ones could run on a phone.
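Some rough memory math on why parameter count and quantization decide what can run locally (all ballpark assumptions, weights only, ignoring activation and KV-cache overhead):

```python
# Ballpark memory math for local models (rough assumptions, weights only,
# ignoring activation and KV-cache overhead).

def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for name, params, bits in [
    ("600B model, 4-bit", 600, 4),  # roughly frontier-scale open-weight models
    ("70B model, 4-bit", 70, 4),    # workstation / high-end desktop territory
    ("7B model, 4-bit", 7, 4),      # fits in a modern phone's RAM
]:
    print(f"{name}: ~{weight_memory_gb(params, bits):.1f} GB just for the weights")
```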