
I agree with all of that. The 5 million goal was bad design; in other extraction shooters, the days leading up to the wipe are always full of people doing crazy things with gear that's getting deleted anyway. Having to save every piece of equipment until the very end just feels bad.
I'd like to see more gear along the lines of the Hullbreaker: items that are specifically for fighting the ARC and, because of their properties, less useful against players. Those could give you 'gamebreaking' abilities/damage/etc. without any worry about the weapons being used to dominate PvP.
Then maybe some kind of frontline PvE ARC raid with species of ARC that are more dangerous than the ones in the plains/foothills.

Ohhh, those are UEFI cheats. This is the reason that kernel anti-cheat games require Secure Boot.
You can, when Secure Boot is disabled, use the UEFI to load a driver that can perform DMA actions before the Windows kernel loads. A user then runs an innocuous piece of software that communicates with the driver and sends the data to the USB device, which runs the cheat software and does the mouse manipulation (and you configure the devices from the gaming PC over the same USB interface). e: This could technically be detected, because there is still software running on the user's PC that the anti-cheat could spot, plus a USB device that could be detected if its firmware isn't flashed to pretend to be something innocuous (typically a NIC or audio device).
This lets anybody willing to install a UEFI driver of unknown origin get DMA access without needing to buy an expensive card. It's only possible in games that don't mandate Windows 11 and Secure Boot (though a recent exploit discovered in some motherboards [CVE-2025-11901, CVE-2025-14302, CVE-2025-14303 and CVE-2025-14304] allowed an attacker to obtain DMA access before the IOMMU, which would otherwise restrict DMA, was properly initialized).
That would let an attacker run software on a second PC that uses this lapse to inject a hacked UEFI driver via a hardware DMA device, then just send the memory data over USB to a second cheating device.

The class of hacks that use trained object detection networks (like YOLO) can run on lightweight(-ish) hardware. It still needs to run the object recognition loop quickly; the faster your hardware, the less latency you'll experience, but it can work on a Raspberry Pi.
In order to get ESP/wallhacks, you need to be able to read the game memory on the gaming PC. While there are software ways to do this, they are all detectable (assuming the game requires Secure Boot to prevent UEFI cheats). The most reliable way is to use Direct Memory Access hardware to read system memory without going through the operating system, which means not even kernel anti-cheats can see it happening.
If you're going to use ESP, you also need to be able to see the information. You could run a second monitor, but the preferred way is to use a fuser, which merges two video streams: the game from the gaming PC and the ESP data (bounding boxes) rendered by the second PC.
Then you need some kind of hardware that receives the mouse input and pretends to be a mouse to the gaming PC. This can be something like a Raspberry Pi, but a product called Kmbox is purpose-built for it.
The full hardware kit is probably around $300-400 (not counting the PC/Pi) and then you have to buy/subscribe to the software that actually runs the cheats.

It is cheaper now, but a full DMA setup costs about as much as an entry-level PC and you need a second PC to run all of the cheats.
What he's demonstrating with image recognition is pretty cheap and probably even harder to detect, but you 'only' get aimbotting and there's a bit of latency from the neural network processing step. DMA cheats give you aimbotting and all of the ESP info instantly.

This is true, online games have always been full of aimbotters and cheaters.
If they seemed less plentiful, it's because of the admin-to-player ratios. Back in the day, the game company didn't host the servers; they just provided a server executable and instructions. So any server you played on was paid for by a person or group, and that person or group usually moderated it pretty actively.
It wouldn't be unusual to play on a server where two-thirds of the players had the ability to kick/ban cheaters. Now you're lucky if a human admin ever sees a single match you've played.

Zero chance of this becoming a real product. This is more like StuffMadeHere on YT, just making wildly impractical stuff that is still interesting.
You don't need a motorized pad; you can just run your video output and mouse through an external PC that does the image recognition and target acquisition, edits the mouse's input stream to insert the movement needed to hit the target, and then passes it on to the clean PC while pretending to be a mouse.
This is basically what all of the hardware cheat products do that don’t use DMA access to read memory from the clean PC.

It’s funny that you think these things are operating on napkin math. I guess it takes tens of thousands of GPUs and months of compute time to work out some napkin math.
You're confusing a lot of things, and it sounds like when you say 'AI' you mean 'ChatGPT' or generative LLM/diffusion models, because those large models are the only ones that use a huge amount of compute. They don't represent the entirety of neural-network-based machine learning (AI).
My phone uses an LLM for spellcheck; it even runs fine-tuning (the thing you're referring to that requires 'tens of thousands of GPUs') on the phone's processor. Writing and training text recognition AI from scratch is literally an exercise Computer Science students do on their personal computers. Object detection models run on tiny microprocessors inside doorbells.
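To make the 'exercise for students' claim concrete, here's a minimal sketch (my own toy illustration, not from any particular course): a two-layer network trained from scratch with nothing but NumPy. Swap the toy XOR data for digit images and you have the classic from-scratch recognition assignment.

```python
# Minimal sketch: a tiny two-layer neural network trained from scratch with
# plain NumPy. The same loop, scaled up and fed digit images, is the classic
# "recognition from scratch" student exercise. Toy data, toy sizes.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, which a single linear layer cannot fit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass: just matrix multiplications plus nonlinearities.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: hand-derived gradients of binary cross-entropy.
    dlogits = (p - y) / len(X)
    dW2 = h.T @ dlogits
    db2 = dlogits.sum(axis=0)
    dh = (dlogits @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    # Plain gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p, 3))  # should approach [0, 1, 1, 0]
```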
My response was that you cannot eliminate AI because the algorithms that are required to build it are already known by millions of experts who can re-create it from scratch.
All of these years of AI research resulted in some napkin math that we were missing all along
As much as you're trying to be sarcastic: yes, that is correct.
This is not unusual in science. Here are some other napkin formulas that took years, decades or centuries to discover:
E = mc² – years of knowledge and research to work out how mass and energy are related
i = √−1 – it took on the order of 1800 years for imaginary numbers to be accepted
F = ma – Newton's second law of motion
e^(iπ) + 1 = 0 – Euler's identity
E = hf – Planck's relation describing how photon energy links to frequency
etc., etc.
What does ‘AI dying’ even look like to you? If you were the dictator of Earth, how would you eliminate the knowledge from the minds of millions of experts across the world?
It’s not just math or if it is, we don’t understand the math. Math is deterministic. These models are not deterministic, an input does not always produce the same output and you can’t feed a response backwards through a model to produce the query. We struggle to make even remotely predictable changes to a model when it does something we don’t like.
Once again you’re confusing topics and also definitions. Determinism, interpretability and explainability are different things.
Neural networks are completely deterministic. A given input will always return the same output.
You're probably referring to the trick that LLM chatbots use: they take the output of the model, which is a list of tokens each scored by how likely it is, and randomly select a token from that list before feeding it back into the model. That's a chatbot trick; it doesn't happen inside the model.
No machine learning model uses randomness during inference. Often people pass in random noise so the output varies, but if you pass in the same random noise then you get the same output.
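To make the distinction concrete, here's a toy sketch (made-up logits, not any real model): the 'model' is a deterministic function from input to scores, and the randomness only appears in the sampling step the chatbot wraps around it.

```python
# Toy sketch: the "model" is a deterministic function from input to scores
# (logits). Greedy decoding is deterministic too; the randomness only shows up
# in the sampling step a chatbot wraps around the model. Logits are made up.
import numpy as np

def model(prompt: str) -> np.ndarray:
    """Stand-in for a neural net forward pass: same input -> same output."""
    seed = sum(ord(c) for c in prompt)                  # deterministic in the input
    return np.random.default_rng(seed).normal(size=5)   # fake logits, 5-token vocab

def greedy(logits: np.ndarray) -> int:
    return int(np.argmax(logits))                       # always the top-scored token

def sample(logits: np.ndarray, temperature: float = 1.0) -> int:
    p = np.exp(logits / temperature)
    p /= p.sum()
    return int(np.random.default_rng().choice(len(logits), p=p))  # the chatbot trick

logits = model("hello")
print(np.array_equal(model("hello"), logits))  # True: the model itself is deterministic
print(greedy(logits), greedy(logits))          # same token both times
print(sample(logits), sample(logits))          # can differ: randomness lives out here
```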
‘We don’t know how it works’ isn’t exactly true. There’s a lot of work in this field (https://en.wikipedia.org/wiki/Explainable_artificial_intelligence) and there’s nothing fundamentally unknowable about these systems.
The AI industry can die like any other.
Yes, like I said, 'the AI industry' that produced generative LLMs and diffusion models is what you're upset about; you're upset that capitalists are using AI to fire workers and destroy jobs.
You're not upset that we've discovered how to build universal function approximators or use machines to learn from data.
You’re mad at capitalism, not AI.
Using your logic, anyone and everyone can build nukes, they’re just math and physics and the materials (like GPUs and Power) are easy to come by. We can’t erase the knowledge so let’s all sit back and enjoy the fallout.
This isn't an argument; it's this: https://en.wikipedia.org/wiki/Reductio_ad_absurdum
I said you can’t eliminate the knowledge of AI from society since anyone with a laptop can train one from scratch and the knowledge to do so is available to everyone. The knowledge for making nuclear weapons was also not eliminated despite being far more dangerous and widely condemned.
Also, irrelevant to my point but, in what world are the materials to make nuclear weapons easy to come by?
but AI is a threat even without capitalism. So far they’re making life more expensive in multiple ways for all of us with no real benefit for their existence.
No real benefit? Once again, you seem to be talking about generative AI. That's a product, created by capitalists, using AI. It isn't the entirety of the AI field; it isn't even the part of the field with the most AI workers.
AI is used in science and has facilitated incredible discoveries already.
AlphaFold revolutionized structural biology by predicting the shape of every known protein, which has massively accelerated drug discovery. I'm not sure if you're aware, but there are now TWO AIDS vaccines in human trials, and the researchers use machine-learning models to mine clinical data to spot patterns in immune responses, leading to promising therapies.
AI is being used to plan, execute and interpret lab work, a tedious and laborious task that requires a highly trained person, typically a grad student. That isn't something you can scale up 100x or 1000x, because you can't magic 1000 graduate students into existence. Now AI can do the tedious work, and a single grad student can oversee several times as much lab work, which is at the heart of almost every scientific discovery.
Diagnostic AI, which is objectively more accurate than human experts, is used to annotate diagnostic images to show a doctor which areas to examine. This results in lower error rates and earlier detection than human-only review.
It has discovered new plasma physics that's key to getting fusion power working. (https://www.pnas.org/doi/10.1073/pnas.2505725122)
So, while you may not think these are worth the downsides, it’s disingenuous to say that there has been no benefit.

People should keep bitching about AI until it either dies or finds an entirely new business model based on not being pieces of shit.
How can AI die? What does that even mean?
It's math. You can write the algorithms on a napkin from memory. It cannot 'die'. You're tilting at windmills; there's nothing to kill.
You're mad at the people who are using the productivity gains from this new technology to eliminate people's jobs.
That isn't an AI problem. The same thing happens every time there's a new productivity-saving device: it doesn't result in workers earning more from their increased productivity, it results in huge numbers of people getting fired so profits can go up.
You’re not mad at AI, you’re mad at capitalism but it sounds like you lack the perspective to understand that.

I’m not even sure what you mean by equivalent. Is an airplane equivalent to aerospace engineering? They’re two different things.
AI models, the neural network ones, are essentially just a bunch of tensor multiplication. Tensors are a fundamental part of linear algebra and I hope I don’t have to keep explaining the joke.
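Here's the napkin version of the joke, as a sketch with toy sizes and random weights (obviously not a real trained model): a network layer is a matrix multiplication plus a nonlinearity, and a 'model' is just a stack of those.

```python
# The napkin math: a neural network layer is y = activation(x @ W + b).
# Stack a few of these and you have the model; everything else is plumbing.
# Toy sizes, random weights, purely for illustration.
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=(1, 16))                 # one input vector, 16 features

W1, b1 = rng.normal(size=(16, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 4)), np.zeros(4)

h = np.maximum(x @ W1 + b1, 0.0)             # matrix multiply + ReLU
y = h @ W2 + b2                              # matrix multiply
print(y.shape)                               # (1, 4)
```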
The point is that no amount of being angry and toxic on the Internet will make AI disappear.
In addition, what most people are complaining about (the exploitative way that AI is being used) is not an AI problem, it is a capitalism problem. So, not only is the rage and anger useless but it is pointed at the wrong target.

You’re exactly right.
The unusual thing here is that production is not following demand.
It isn’t the case that RAM manufacturers are unable to buy more RAM manufacturing equipment. They’re simply choosing not to invest in new RAM manufacturing equipment because, collectively, they seem to agree that the demand is a bubble which will collapse before the investment will break even.
Since that sector typically targets a 3-5 year payback window, it means that the market is not expecting demand to continue rising long-term.
The article is simply AMD pricing the bubble uncertainty into their product. We'll likely see the Steam Machine launch at a similarly inflated price (partly due to tariff uncertainty as well).

It’s not a shortage if production is normal but some greedy assholes keep buying them all. It’s a racket.
Your entire premise is built on “if production is normal” and yet in the 2nd paragraph of the article (which you read, right?) it says that production isn’t normal.
Manufacturers are intentionally not ramping up production to follow the demand because of the bubble risk.
So, the price increase is created by a supply-side problem because production isn’t normal.
The supply-chain disruption centres on memory devices—especially those used in graphics-cards and AI-accelerated systems—where manufacturers remain wary of ramping up production after past crashes. The result: constrained supply, elevated costs, and a decision by AMD to transmit some of that burden across its GPU product lineup.

NVIDIA’s RTX series of cards have two fixed-function blocks that sit beside the regular CUDA/shader cores.
They have RT Cores, which are optimized to accelerate bounding volume hierarchy (BVH) traversal and ray/triangle intersection tests, speeding up raytracing operations.
There are also Tensor Cores, which are NVIDIA's "AI" cores; they're optimized for mixed-precision matrix multiplication. DLSS 3 uses a Convolutional Neural Network (CNN) for upscaling, and that is, essentially, a bunch of matrix multiplications.
These offload some computation onto dedicated hardware so the CUDA cores that handle the bulk of the shading/rasterizing aren't tied up with those calculations, resulting in lower frame times, which equates to higher FPS.
AMD cards, in the RDNA2/3 chips, have Ray Accelerators, which accelerate the ray/triangle tests, but the bulk of the RT load (BVH traversal, shading and denoising) runs on the regular shader cores. They've just announced (this month) that future hardware will add 'Radiance Cores', which will handle all of the raytracing functions like the RT Cores do.
AMD doesn’t have an equivalent of a Tensor Core, FSR is done in software on the standard shader compute units.
So on NVIDIA cards, DLSS upscaling is ‘free’ in the sense that it doesn’t take time away from the shader cores and RT is accelerated similarly.
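If 'mixed-precision matrix multiplication' sounds abstract, here's a rough sketch (my own toy example, nothing to do with DLSS's actual network): the kind of half-precision matmul that, on RTX-class cards, the CUDA libraries typically dispatch to the Tensor Cores, and that a CNN upscaler is ultimately built out of.

```python
# Rough sketch of the work Tensor Cores accelerate: a half-precision matrix
# multiply. On RTX-class GPUs, PyTorch hands this to cuBLAS, which typically
# routes fp16 GEMMs to Tensor Cores; elsewhere it runs on the normal units.
# Sizes are arbitrary; a CNN layer boils down to many multiplies like this.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

a = torch.randn(1024, 1024, device=device, dtype=dtype)
b = torch.randn(1024, 1024, device=device, dtype=dtype)

c = a @ b        # fp16 inputs; accumulation typically happens at higher precision
print(c.shape, c.dtype)
```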
This is a good video explaining how Raytracing works if some of the terms are strange to you: https://www.youtube.com/watch?v=gsZiJeaMO48
As an aside, this video is from the 3Blue1Brown 'Summer of Math Exposition' collection, a yearly contest for the best and most interesting math explainer videos; this one was among the winners of the first year's contest, and the playlists are on 3Blue1Brown's YT channel. 3b1b is great all around, if you're into that kind of thing.

I was looking into this; it's weird that it isn't on ProtonDB.
Future Linux Converts:
If you wonder "Will the game that I play work on Linux?", there's a website for that: https://www.protondb.com

I’m not sure I understand the point that you’re trying to make.
If you use Linux you can push more power for higher clock rates, get longer battery life, a more stable framerate, and a suspend feature that actually works.
It seems reasonable to say “ROG Xbox Ally runs better on Linux than the Windows it ships with”
It’s like claiming a race car is only faster because it produces more horsepower… yes, that’s the entire point and what we want.
May save you a click:
No price yet, similar products have launched at $699