
Father, Hacker (Information Security Professional), Open Source Software Developer, Inventor, and 3D printing enthusiast


Folks here jest, but this business model is coming to PCs next. Bookmark my words!
(Literally: you can right-click the permalink to this message and have quick access to it later, so you can reply, “Damn, you were right!” and post the link when big PC vendors suddenly start offering similar services because DRAM and GPUs have become so expensive that normal people can’t afford to buy PCs anymore.)


RVA23 is a big deal because it allows the big players (e.g. Google, Amazon, Meta, OpenAI, Anthropic, and more) to avoid vendor lock-in for their super duper ultra wicked mega tuned-to-fuck-and-back specialty software (not just AI stuff). Basically, they can tune their software to a generic platform to the nth degree and then switch chips later if they want without having to re-work that level of tuning.
The other big reason why RISC-V is a big deal right now is energy efficiency. 40% of a data center’s operating cost is cooling. By using right-sized RISC-V chips in their servers they can save a ton of money on cooling. Compare that to, say, an Intel Xeon, where the chip wastes energy on zillions of unused extensions and sub-architecture stuff (thank Transmeta for that). Every little unused part of a huge, power-hungry chip like a Xeon eats power and generates heat.
Don’t forget that vector extensions are also mandatory in RVA23. That’s just as big a deal as the virtualization stuff because AI (which heavily relies on vector math) is now the status quo for data center computing.
My prediction is that AI workload enhancements will become a necessary feature in desktops and laptops soon too. But not because of anything Microsoft integrates into their OS and Office suites (e.g. Copilot). It’ll be because of Internet search and gaming.
Using an AI to search the Internet is such a vastly superior experience, there’s no way anyone is going to want to go back once they’ve tried it out. Also, in order for it to work well it needs to run queries on the user’s behalf locally. Not in Google or Microsoft’s cloud.
There’s no way end users are going to pay for an inferior product that only serves search results from a single company (e.g. Microsoft’s solution—if they ever make one—will for sure use Bing and it would never bother to search multiple engines simultaneously).


Note that there’s more than one model that can do pixel art, and there are pixel art LoRAs that do a decent job. There’s loads of flexibility when generating this kind of thing.
Also, you can just tell it to generate a thousand images over like 10 minutes, pick the best one, and use that as a base to improve upon. AI is just a single tool in the workflow.
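To give you an idea of how trivial that step is, here’s a minimal sketch using the open source diffusers library. The model name, prompt, and settings are placeholder assumptions; swap in whatever pixel art checkpoint or LoRA you actually use:

```python
# Hypothetical sketch of the "generate a pile, cherry-pick the best" step.
# Model name, prompt, and settings are placeholder assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder base model
    torch_dtype=torch.float16,
).to("cuda")

prompt = "pixel art sprite of a knight, 32x32, limited palette"
for seed in range(100):
    image = pipe(
        prompt,
        num_inference_steps=20,
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    image.save(f"candidate_{seed:03d}.png")  # review the pile, keep the best
```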
I also want to point out that not everyone can just pay someone. Don’t be paternalistic: If people want to use AI in their workflow for any reason, that’s their concern. To angrily throw your hands in the air and say, “I’m not touching it because AI!” is like giving free money to the big publishers.
You’re setting an unnecessarily high bar: “you must be this rich to ride.”


This is my take as well, but not just for gaming… AI is changing the landscape for all sorts of things. For example, if you wanted serious, professional grammar, consistency, and similar checks of your novel, you used to have to pay thousands of dollars for a professional editor to go over it.
Now you can just paste a single chapter at a time into a FREE AI tool and get all that and more.
Yet here we are: Still seeing grammatical mistakes, copy & paste oversights, and similar in brand new books. It costs nothing! Just use the AI FFS.
Checking a book with an AI chat bot uses up as much power/water as like 1/100th of streaming a YouTube Short. It’s not a big deal.
The Nebula Awards recently banned books that used AI for grammar checking. My take: “OK, so only books from big publishers are allowed, then?”


If you use a pixel art export node in ComfyUI, that won’t be a problem. There’s a whole guide about it here:
https://inzaniak.github.io/blog/articles/the-pixel-art-comfyui-workflow-guide.html
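For the curious: a pixel art export pass basically boils down to downscale, quantize to a limited palette, then nearest-neighbor upscale so the pixels stay crisp. Here’s a rough Pillow sketch of the idea (the scale factor and color count are arbitrary assumptions, not necessarily what that node actually uses):

```python
# Rough sketch of what a pixel art export pass does: downscale, clamp to a
# limited palette, then upscale with nearest-neighbor so the "pixels" stay
# crisp. Scale factor and color count are arbitrary assumptions.
from PIL import Image

def pixelate(path: str, scale: int = 8, colors: int = 16) -> Image.Image:
    img = Image.open(path).convert("RGB")
    small = img.resize((img.width // scale, img.height // scale), Image.NEAREST)
    small = small.quantize(colors=colors)  # reduce to a limited palette
    return small.resize(img.size, Image.NEAREST)

pixelate("generated.png").save("sprite.png")
```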


There’s going to be some hilarious memes/videos when these get deployed:


Data centers typically use closed-loop cooling systems, but those do still lose a bit of water each day that needs to be replaced. It’s not much—compared to the size of the data center—but it’s still a non-trivial amount.
A study recently came out (it was talked about extensively on the Science VS podcast) that said a long conversation with an AI chatbot (e.g. ChatGPT) could use up to half a liter of water—in the worst-case scenario.
This statistic has been used in the news quite a lot recently but it’s a bad statistic: That water usage counts the water used by the power plant (for its own cooling). That’s typically water from ponds and reservoirs built right alongside the power plant (your classic “cooling pond”). So it’s not like the data centers are using 0.5L of fresh water that could be going to people’s homes.
For reference, the actual data center water usage is 12% of that 0.5L: 0.06L of water (for a long chat). Also remember: This is the worst-case scenario with a very poorly-engineered data center.
Another stat from the study that’s relevant: Generating images uses much less energy/water than chat. However, generating videos uses up an order of magnitude more than both (combined).
So if you want the lowest possible energy usage of modern, generative AI: Use fast (low parameter count), open source models… To generate images 👍


The power use from AI is orthogonal to renewable energy. From the news, you’d think that AI data centers have become the number one cause of global warming. Yet, they’re not even in the top 100. Even at the current pace of data center buildouts, they won’t make the top 100… ever.
AI data center power utilization is a regional problem specific to certain localities. It’s a bad idea to build such a data center in certain places but companies do it anyway (for economic reasons that are easy to fix with regulation). It’s not a universal problem across the globe.
Aside: I’d like to point out that the fusion reactor designs currently being built and tested were created using AI. Much of the advancements in that area are thanks to “AI data centers”. If fusion power becomes a reality in the next 50 years it’ll have more than made up for any emissions from data centers. From all of them, ever.


It’s even more complicated than that: “AI” is not even a well-defined term. Back when Quake 3 was still in beta (“the demo”), id Software held a competition to develop “bot AIs” that could be added to a server so players would have something to play against while they waited for more people to join (or you could have players VS bots style matches).
That was over 25 years ago. What kind of “AI” do you think was used back then? 🤣
The AI hater extremists seem to fall into two camps.
The data center haters are the strangest, to me, because there’s this default assumption that data centers can never be powered by renewable energy and that AI will never improve to the point where it can all be run locally on people’s PCs (and other personal hardware).
Yet every day there’s news suggesting that local AI is performing better and better. It seems inevitable—to me—that “big AI” will go the same route as mainframes.


Most people—even obsessive gamers—don’t give two shits about AI. There’s a very loud minority that gets in everyone’s face saying all AI is evil like we’re John Connor or something. They are so obsessive and extreme about it, it often makes the news (like this article).
The market has already determined that if a game is fun, people will play it. How much AI was used to make it is irrelevant.


It comes down to whether the cost of using it is lower than the alternative, and whether the market willing to buy it stays the same. If the current cloud-hosted tools cease to be massively subsidized and consumers choose to avoid them, then it’s inevitably a historical footnote, like turbine-powered cars, Web 3.0, and LaserDisc.
There’s another scenario: Turns out that if Big AI doesn’t buy up all the available stock of DRAM and GPUs, running local AI models on your own PC will become more realistic.
I run local AI stuff all the time from image generation to code assistance. My GPU fans spin up for a bit as the power consumed by my PC increases but other than that, it’s not much of an impact on anything.
I believe this is the future: Local AI models will eventually take over just like PCs took over from mainframes. There are a few thresholds that need to be met for that to happen but it seems inevitable. It’s already happening for image generation, where the local AI tools are so vastly superior to the cloud stuff there’s no contest.


I’ve done a 3-hour session playing Beat Saber multiplayer with a friend. It was the most intense workout I’ve ever experienced.
The only break was in the middle to refill my enormous water bottle and to clean up the huge pool of sweat on the floor that was getting gross (I was wearing socks, LOL).
My arms hurt for like three days straight after that. I still played every night though 😁👍


To be fair, that’s what an AI video generator thinks an FPS is. That’s not the same thing as AI-assisted coding. Though it’s still hilarious! “Press F to pay respects” 🤣
For reference, using AI to automate your QA isn’t a bad idea. There’s a bunch of ways to handle such things but one of the more interesting ones is to pit AIs against each other. Not in the game, but in their reports… You tell one AI to perform some action and generate a report about it, while telling another AI to be extremely skeptical of the first AI’s reports and to reject anything that doesn’t meet some minimum standard.
That’s what they’re doing over at Anthropic (internally) with Claude Code QA tasks and it’s super fascinating! I heard them talk about that setup on a podcast recently and it kinda blew my mind… They have more than just two “Claudes” pitted against each other, too: In the example they talked about, they had four: One generating PRs, another reviewing/running tests, another checking the work of the testing Claude, and finally a Claude set up to perform critical security reviews of the final PRs.
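The general shape of that setup is easy to sketch. To be clear, this is my guess at the pattern, not Anthropic’s actual internal tooling, and the llm() helper is a hypothetical stand-in for whatever model/API you’d wire up:

```python
# Sketch of the adversarial QA pattern: one AI performs the task and writes a
# report; a second AI is prompted to be maximally skeptical and reject weak
# reports. llm() is a hypothetical stand-in for your actual model call.
def llm(system: str, prompt: str) -> str:
    raise NotImplementedError("wire this up to your model/API of choice")

def adversarial_qa(task: str, max_rounds: int = 3) -> str:
    report = llm("You are a QA engineer. Perform the task and write a "
                 "detailed report with concrete evidence.", task)
    for _ in range(max_rounds):
        verdict = llm("You are an extremely skeptical reviewer. Reject any "
                      "report that lacks evidence or doesn't meet the minimum "
                      "standard. Reply ACCEPT or REJECT with your reasons.",
                      report)
        if verdict.startswith("ACCEPT"):
            return report
        # feed the rejection back so the worker has to redo/defend its work
        report = llm("Revise your report to address the reviewer's rejection.",
                     f"Task: {task}\nRejection: {verdict}\nReport: {report}")
    return report  # still rejected after max_rounds: escalate to a human
```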


Anthropic didn’t lose their lawsuit. They settled. Also, that was about their admission that they pirated zillions of books.
From a legal perspective, none of that has anything to do with AI.
Company pirates books -> gets sued for pirating books. Company settles with the plaintiffs.
It had no legal impact on training AI with copyrighted works or what happens if the output is somehow considered to be violating someone’s copyright.
What Anthropic did with this settlement is attack their Western competitor: OpenAI, specifically. Google already fought the Authors Guild over its book scanning project more than a decade ago (and ultimately won on fair use), so they’re covered.
Now OpenAI is likely going to have to pay the Authors Guild too, even though they haven’t come out and openly admitted that they pirated books.
Meta is also being sued for the same reason but they appear to be ready to fight in court about it. That case is only just getting started though so we’ll see.
The real, long-term impact of this settlement is that it just became a lot more expensive to train an AI in the US (well, the West). Competition in China will never have to pay these fees and will continue to offer their products to the West at a fraction of the cost.


Incorrect. No court has ruled in favor of any plaintiff bringing a copyright infringement claim against an AI LLM. Here’s a breakdown of the current court cases and their rulings:
https://www.skadden.com/insights/publications/2025/07/fair-use-and-ai-training
In both cases, the courts have ruled that training an LLM with copyrighted works is highly transformative and thus, fair use.
The plaintiffs in one case couldn’t even come up with a single iota of evidence of copyright infringement (from the output of the LLM). This—IMHO—is the single most important takeaway from the case, because the only thing that really matters is the point where the LLM generates output. That is, the point of distribution.
Until an LLM is actually outputting something, copyright doesn’t even come into play. Therefore, the act of training an LLM is just like I said: A “Not Applicable” situation.


Training an AI is orthogonal to copyright since the process of training doesn’t involve distribution.
You can train an AI with whatever TF you want without anyone’s consent. That’s perfectly legal fair use. It’s no different from copying a song from your PC to your phone.
Copyright really only comes into play when someone uses an AI to distribute a derivative of someone’s copyrighted work. Even then, it’s really only the end user who’s capable of doing such a thing, by uploading the output of the AI somewhere.


Microsoft is trying to make Xbox into Windows: Where 3rd parties make the hardware and then license the platform from Microsoft. It’s a vastly more profitable model. Especially if they get all those end users signed up for a subscription service.
The problem is that the world thinks of “Xbox” as a console (and a specific kind of controller). To pull this off Microsoft is going to have to re-brand Xbox entirely by making people think of it more like a game-specific app store that runs on Windows and special handheld hardware. It won’t be easy.
There’s a bigger problem with this plan though: No real coordination with the Windows OS team. Windows on handhelds sucks. The past twenty fucking years of Windows development have been almost entirely focused on improving enterprise features, with very little attention paid to end users or gaming.
Growth in Windows gaming has come despite Microsoft’s investments. Not because of them. In fact, I’d argue that if it weren’t for Steam, Windows—as a gaming platform—would be a fraction of what it is today.
Don’t get me wrong, though! I love this new Xbox roadmap! Windows gaming has been holding back Linux desktop adoption for far too long. The latest benchmarks showing games on SteamOS vastly outperforming the new Xbox-branded handhelds pretty clearly demonstrate that all that bashing of Windows by Linux nerds was deeply accurate.
It turns out that Linux on the desktop really is superior! 🤣


Mods on Xbox only exist for games where the developer officially added mod support. Sure, it’s great when a game maker does that, but it’s usually not as good as community-made mod support, because community mods don’t require approval and can’t be censored or removed just because the vendor doesn’t like them.
Remember: Microsoft’s vision of mods is what you get with the Bedrock version of Minecraft. Yet the mods available in the Java version are so vastly superior the difference is like night and day.
Console players—who are used to living without mods—don’t understand. Once mods become a regular thing you expect in popular games, going without them feels like going back to the dark ages.


They’re not just bismuth! They’re bismuth and selenium with some oxygen mixed in (to connect those elements together, I think).
I point this out because it means that not only can the chips of the future perform blazingly fast calculations, they can also cure your tummy ache and prevent dandruff!
Once this technology becomes mainstream it’ll be bismuth as usual. We’ll all be getting down to bismuth.
A whole new era of puns is upon us! The product of the selenium.


Hall effect sensor expert here! No, the magnets in the Joy-Cons that are used to attach them to the display/body of the Switch 2 would not interfere with Hall effect analog sticks.
Two reasons:
Regardless, it would be trivial to place a tiny little piece of ferromagnetic blocking tape wherever necessary to prevent interference.


Oh if I wanted to show off my BS skills I’d post this replay:
https://replay.beatleader.com/?scoreId=20047904
😁


I just remembered that I made a gif demonstrating how to spin in Beat Saber!


Yes! I play on a hard floor wearing socks. I spin all the time! 🕺
In fact, sometimes I slide into position to play! 🤣
Have some more spinning!
https://replay.beatleader.com/?scoreId=19582636
This is another… It’s an older replay but it still checks out:


Beat Saber is the best! If you use the BeatLeader mod it automatically saves and uploads replays whenever you beat your previous score on any given map.
Here’s my Christmas present to everyone to demonstrate this fantastic ability:
https://replay.beatleader.com/?scoreId=20010657
(Make sure to watch the whole silly thing in all its glory or you might miss some of the “special moments” 🤣)
I’m 46 and I play every day 👍


It is impossible to do these types of checks on the server side.

If the client can make a determination as to whether or not to draw a player, the server can too (and refuse to send those packets). It’s not impossible, just more computationally intensive and thus more expensive (on the server side of things).
Naive way: Render exactly what the player will see on the server. Do this for every client and only send the data to the client if another player enters the view.
More intelligent way: Keep track of the position and field of view of each player and do a single calculation to determine if that player can see another (see the sketch below). If not, don’t send the packets. It will require some prediction but that’s no different from regular, modern game network programming, which already has to do that.
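For illustration, here’s a minimal sketch of that single calculation. It’s just the view-cone test; a real server would add occlusion raycasts against level geometry and movement prediction on top, and the 110° FOV is a made-up example value:

```python
import math

# Minimal sketch of the server-side visibility test: is player B inside
# player A's view cone? A real implementation would add occlusion checks
# (raycasts against level geometry) and latency/movement prediction.
def can_see(pos_a, facing_a, fov_degrees, pos_b) -> bool:
    # facing_a is assumed to be a unit-length view direction vector
    to_b = tuple(b - a for a, b in zip(pos_a, pos_b))
    dist = math.sqrt(sum(c * c for c in to_b))
    if dist == 0.0:
        return True  # same spot; always "visible"
    # cosine of the angle between A's view direction and the direction to B
    cos_angle = sum(f * t for f, t in zip(facing_a, to_b)) / dist
    return cos_angle >= math.cos(math.radians(fov_degrees / 2))

# Server-side replication rule (hypothetical names): withhold B's position
# from A whenever the cone test fails.
# if not can_see(a.pos, a.facing, 110.0, b.pos): skip_packet(a, b)
```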
Servers these days have zillions of cores. It really isn’t too much to ask to dedicate a thread per player to do this kind of thing. It just means that instead of one server being able to handle, say, 500 simultaneous players, you can only handle, say, 100-250 (depending on the demands of your game).
If your players host their own servers then it’s really no big deal at all! Plenty of cores for their personal matches with their friends or randos from the Internet. If you’re a big company with a game like Fortnite then it’s a huge burden compared to the low-effort system currently in place.


That doesn’t seem like much of a cheat. That’s more like a crutch.