☆ Yσɠƚԋσʂ ☆
  • 1.37K Posts
  • 1.11K Comments
Joined 6Y ago
Cake day: Jan 18, 2020


It’s like saying silicon chips being orders of magnitude faster than vacuum tubes sounds too good to be true. Different substrate will have fundamentally different properties from silicon.






How China built its ‘Manhattan Project’ to rival the West in AI chips
https://archive.ph/2025.12.17-154356/https://www.reuters.com/world/china/how-china-built-its-manhattan-project-rival-west-ai-chips-2025-12-17/







What I find most unfortunate is that these scam companies convinced people that you can make AI speech detectors in the first place. Like, the reason LLMs structure text in a certain way is that these are the patterns in the human text they’ve been trained on.



Yeah, that would work too, assuming the disk was made out of a sufficiently hard material that won’t degrade over time.



Yeah, I don’t think billions of years is really a meaningful metric here. It’s more that it’s a stable medium where we could record things that will persist for an indefinite amount of time without degradation.


I mean, you can always make new hardware. The idea of media that basically lasts forever is really useful in my opinion. We currently don’t have anything that would last as long as regular paper. Most of the information we have is stored on volatile media. Using something like this to permanently record accumulated knowledge like scientific papers, technology blueprints, and so on, would be a very good idea in my opinion.





Incidentally, manual moderation is much easier to do on a federated network where no individual instance grows huge. Some people complain that Lemmy isn’t growing to the size of Reddit, but I see that as a feature myself. Smaller communities tend to be far more interesting and are much easier to moderate than giant sites.




It’s the logical end point of a particular philosophy of the internet where cyberspace is treated as a frontier with minimal oversight. History offers a pretty clear pattern here: any ungoverned commons eventually gets overrun by bad actors. These spam bots and trolls are a result of the selection pressures that are inherent in such environments.

The libertarian cyber-utopian dream assumed that perfect freedom would lead to perfect discourse. What it ignored was that anonymity doesn’t just liberate the noble dissident. It also liberates the grifter, the propagandist, and every other form of toxicity. What you get in the end is a marketplace of attention-grabbing performances and adversarial manipulation. And that problem is now supercharged by scale and automation. The chaos of 4chan or the bot-filled replies on Reddit are the inevitable ecosystem that grows in the nutrient-rich petri dish of total laissez-faire.

We can now directly contrast the Western approach with the Chinese model that the West has vilified and refused to engage with seriously. While the Dark Forest theory predicts a frantic retreat to private bunkers, China built an accountable town square from the outset. They created a system where the economic and legal incentives align towards maintaining order. The result is a network where the primary social spaces are far less susceptible to the botpocalypse and the existential distrust the article describes.

I’m sure people will immediately scream about censorship and control, and that’s a valid debate. But viewed purely through the lens of the problem outlined in the article, which is the degradation of public digital space into an uninhabitable Dark Forest, the Chinese approach is simply pragmatic urban planning. The West chose to build a digital world with no regulations and no building codes, run by corporate landlords. Now people are acting surprised that it’s filled with trash, scams, and bots, and the only thing left is for everyone to hide in their own private clubs. China’s model suggests that perhaps you can have a functional public square if you establish basic rules of conduct. It’s not a perfect model, but it solved the core problem of the forest growing dark.



Nobody is talking about defying laws of physics here. Your whole premise rests on fossil fuels running out and being essential for energy production. This is simply false.




Again, I’m explaining to you that society is a conscious and intentional construct that we make. The USSR could have made changes similar to the ones China made and moved in a different direction. As your own chart shows, there was no shortage of energy, since output rebounded. The problems were political and rooted in the way the economy was structured.


Carbon footprint shows how much energy is being used per capita. Population density is way past the point where it’s practical for people to live off the land in some subsistence scenario. What’s more likely to happen is that we’ll see things like indoor farming developed so that cities can feed themselves. This will become particularly important as the climate continues to deteriorate, since indoor farms will make it possible to have a stable environment to grow food in.


Having grown up in the USSR, I know there was in fact a huge difference. The economy wasn’t structured around consumption, and goods were built to last. People weren’t spending their time constantly shopping and consuming things. The idea that the USSR was destined to collapse is also pure nonsense. There were plenty of different ways it could’ve developed. The USSR certainly didn’t collapse because it was running out of energy.



The point is that capitalist relations are absolutely the problem here. Social systems do not have to be built around consumption. You’re also talking about natural systems that evolve based on selection pressures as opposed to systems we design consciously.


First of all, carbon footprint in China is already far lower than in any developed country. Second, as I already pointed out, most countries simply outsourced their production to China.




That’s just saying that China is one of the most populous countries in the world that also happens to be a global manufacturing hub. China still uses fossil fuels, but I think it’s fair to call it an electrostate at this point.

Finally, it’s also worth noting that China has a concrete plan for becoming carbon neutral, and it’s already ahead of schedule.


The fact of the matter is that air is an incredibly inefficient thermal conductor, so data centers have to burn a massive amount of extra electricity just to run powerful fans and chillers to force that heat away. That extra energy consumption means an air-cooled facility is responsible for generating significantly more total heat for the planet than a liquid-cooled one.

When you put servers in the ocean, you utilize the natural thermal conductivity of water, which is about 24 times higher than that of air, and you can strip out the active cooling infrastructure entirely. You end up with a system that puts far less total energy into the environment because you aren’t wasting power fighting thermodynamics. Even though the ocean holds that heat longer, the volume of water is so vast that the local temperature impact dissipates to nothing within a few meters of the vessel.
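To make the “extra heat” point concrete, here’s a rough back-of-envelope sketch in Python. The PUE (Power Usage Effectiveness) figures are illustrative assumptions on my part, not measurements from any real facility:

```python
# Back-of-envelope comparison of total heat dumped into the environment
# by an air-cooled vs a passively water-cooled data center.
# Assumed PUE values (hypothetical, for illustration only):
#   ~1.5 for a typical air-cooled facility (fans + chillers overhead),
#   ~1.1 for a passively cooled subsea one.

def total_heat_kw(it_load_kw: float, pue: float) -> float:
    """All electricity drawn eventually ends up as heat, so total heat
    output equals the IT load multiplied by the PUE."""
    return it_load_kw * pue

it_load = 1000.0  # 1 MW of server load in both scenarios

air_cooled = total_heat_kw(it_load, pue=1.5)
water_cooled = total_heat_kw(it_load, pue=1.1)

extra = air_cooled - water_cooled
print(f"air-cooled:   {air_cooled:.0f} kW of heat")
print(f"water-cooled: {water_cooled:.0f} kW of heat")
print(f"overhead:     {extra:.0f} kW (~{extra / water_cooled:.0%} more)")
```

Under these assumed numbers, the air-cooled facility dumps hundreds of extra kilowatts of heat for the exact same compute, which is the overhead the passive design eliminates.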


Yes, it is a fallacy, because the problem is with the economic system as opposed to a specific technology. The liberal tendency often defaults to a form of procedural opposition such as voting against, boycotting, or attempting to regulate a problem out of existence without seizing the means to effect meaningful change. It’s an idealist mindset that mistakes symbolic resistance for tangible action. Capitalism is a system based around consumption, and it will continue to use up resources at an accelerating rate regardless of what specific technology is driving the consumption.


The fallacy here is the assumption that if LLMs didn’t exist then we wouldn’t find other ways to use that power.


Seems like the opposite is happening in practice, with models drastically increasing in efficiency.


https://web.archive.org/web/20250804152948/https://www.scmp.com/news/china/science/article/3299313/chinas-subsea-data-centre-could-power-7000-deepseek-conversations-second-report






Hypersonics cover a wide range of stuff; what this article discusses are cheap low-end missiles, as opposed to something like the Oreshnik.




wait until you learn about economies of scale and the benefits of controlling the entire supply chain


That’s right, if there’s one thing Western analysts are famous for, it’s never being wrong about China’s tech capabilities.







The Grothendieck’s Toposes as the Future Mathematics of AI
The paper argues that we are hitting a wall with current AI because we are obsessed with number crunching instead of structure. Belabes posits that modern AI is too focused on statistical minimization and processing speed, which reduces everything to collections of numbers that inherently lack meaning. You lose the essence of what you are actually trying to model when you strip away the context to get raw data.

The author suggests a pivot to Alexandre Grothendieck's Topos theory, which provides a mathematical framework for understanding geometric forms and preserving the deep structure of data rather than just its statistics. Topos theory focuses on finding a new kind of space that acts as a bridge between different mathematical objects. Instead of just looking at points in a standard space, a topos allows us to look at the relationships and sheaves of information over that space, effectively letting us transfer invariants from one idea to another. It creates a way to connect things that seem totally unrelated on the surface by identifying their common essence.

Belabes links this to the idea of conceptual strata, where something that looks like noise or insignificant data in one layer might actually be critical structure in another layer. It's a move away from the binary notion of significant versus insignificant data and toward a relativistic view where significance depends on the conceptual layer you are analyzing.

The author uses literary examples like Homer and Dostoevsky to show that authentic meaning often precedes the words used to express it, whereas our current digital systems treat language as a closed loop where words define other words. Current AI essentially simulates discourse without the underlying voice or intent. By adopting a Topos-based approach, we might be able to build systems that respect these layers of meaning and read slowly to extract the actual shape of the information.
It is basically a call to stop trying to brute force intelligence with bigger matrices and start modeling the actual geometry of thought.













It’s a completely different situation in China. This tech is being treated as an open-source commodity, similar to Linux, and companies aren’t trying to monetize it directly. There’s no crazy investment bonanza happening in China either. Companies like DeepSeek are developing this tech on fairly modest budgets, and they’re already starting to make money https://www.cnbc.com/2025/07/30/cnbcs-the-china-connection-newsletter-chinese-ai-companies-make-money.html


I mean, the paper and code are published. This isn’t a heuristic, so there’s no loss of accuracy. I’m not sure why you’re saying this is too good to be true; the whole tech is very new, and there’s lots of low-hanging fruit for optimizations that people are discovering. Right now, a discovery like this is made every few months. Eventually, people will pluck all the easy wins and it’s going to get harder to dramatically improve performance, but for the foreseeable future we’ll be seeing a lot more stuff like this.


Almost certainly given that it drastically reduces the cost of running models. Whether you run them locally or it’s a company selling a service, the benefits here are pretty clear.



I haven’t tried it with ollama, but it can download gguf files directly if you point it to a huggingface repo. There are a few other runners like vllm and llama.cpp, and you can also just run the project directly with Python. I expect the whole Product of Experts algorithm is going to get adopted by all models going forward, since it’s such a huge improvement and you can just swap out the current approach.
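For anyone curious what a Product of Experts combination looks like in the abstract, here’s a minimal sketch. This illustrates the generic PoE idea (multiply the experts’ distributions elementwise, then renormalize), not the specific algorithm from the paper, and the expert distributions below are made up:

```python
# Generic Product of Experts (PoE) sketch: each "expert" assigns a
# probability distribution over the same set of outcomes; the combined
# distribution is the normalized elementwise product. An outcome only
# scores well if *every* expert considers it plausible.

def product_of_experts(distributions: list[list[float]]) -> list[float]:
    n = len(distributions[0])
    combined = [1.0] * n
    for dist in distributions:
        for i, p in enumerate(dist):
            combined[i] *= p
    total = sum(combined)  # renormalize so the result sums to 1
    return [p / total for p in combined]

# Two hypothetical experts scoring three candidate tokens:
expert_a = [0.7, 0.2, 0.1]
expert_b = [0.3, 0.6, 0.1]

combined = product_of_experts([expert_a, expert_b])
print(combined)
```

Note how the last token, which both experts consider unlikely, gets suppressed far more in the product than in either individual distribution; that veto behavior is what distinguishes PoE from simply averaging the experts.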


I’ve literally been contextualizing the article throughout this whole discussion for you. At least we can agree that continuing this is pointless. Bye.


And once again, what the article is actually talking about is how LLMs are being sold to investors. At this point, I get the impression that you simply lack the basic reading comprehension to understand the article you’re commenting on.


The title is not false. If you actually bothered to read the article, you’d see that the argument being made is that the AI tech companies are selling a vision to their investors that’s at odds with the research. The current LLM based approach to AI cannot achieve general intelligence.


The other aspect that’s worth keeping in mind is software. If Huawei focuses on optimizing the software side, they can easily compensate for slower hardware. Modern software is incredibly bloated, and there’s plenty of low-hanging fruit there.