
It’s funny that you think these things are operating on napkin math. I guess it takes tens of thousands of GPUs and months of compute time to work out some napkin math.

You're confusing a lot of things, and it sounds like when you say 'AI' you mean 'ChatGPT' or generative LLM/diffusion models, because those large models are the only ones that use a large amount of compute resources. They don't represent the entirety of neural-network-based machine learning (AI).

My phone uses an LLM for spellcheck; it even runs fine-tuning (the thing you're referring to that requires 'tens of thousands of GPUs') on the phone's processor. Writing and training a text recognition AI from scratch is literally an exercise Computer Science students do on their personal computers. AI running object detection models runs on tiny microprocessors inside doorbells.
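
If the 'exercise for students' part sounds like an exaggeration, here's roughly the scale of it: a toy one-layer classifier trained from scratch with plain NumPy gradient descent on made-up data (a sketch, obviously not a real text recognizer):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))       # 200 toy "images", 64 features each
y = (X[:, 0] > 0).astype(float)          # made-up binary labels

W, b = np.zeros(64), 0.0
for _ in range(500):                     # a few hundred plain gradient steps
    p = 1 / (1 + np.exp(-(X @ W + b)))   # sigmoid predictions
    W -= 0.5 * (X.T @ (p - y)) / len(y)  # logistic-loss gradient for the weights
    b -= 0.5 * (p - y).mean()            # ...and for the bias

print(((p > 0.5) == y).mean())           # training accuracy, close to 1.0
```

That runs in a fraction of a second on a laptop. Scaling the same idea up is what the big labs do, but the idea itself fits in a homework assignment.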

My response was that you cannot eliminate AI because the algorithms that are required to build it are already known by millions of experts who can re-create it from scratch.


All of these years of AI research resulted in some napkin math that we were missing all along

As much as you're trying to be sarcastic: yes, that is correct.

This is not unusual in science. Here are some other napkin formulas that took years, decades, or centuries to discover.

E = mc² – Years of knowledge and research to discover how mass and energy are related

√-1 = i – It took roughly 1,800 years for imaginary numbers to be accepted

F = ma – Newton's force equation

e^(iπ) + 1 = 0 – Euler's Identity

E = hf – Planck's discovery describing how a photon's energy links to its frequency

Etc., etc.


What does ‘AI dying’ even look like to you? If you were the dictator of Earth, how would you eliminate the knowledge from the minds of millions of experts across the world?


It’s not just math or if it is, we don’t understand the math. Math is deterministic. These models are not deterministic, an input does not always produce the same output and you can’t feed a response backwards through a model to produce the query. We struggle to make even remotely predictable changes to a model when it does something we don’t like.

Once again you’re confusing topics and also definitions. Determinism, interpretability and explainability are different things.

Neural networks are completely deterministic. A given input will always return the same output.

You're probably referring to the trick that the LLM chatbots use where they take the output of the model, which is a list of tokens each with a score of how likely it is, and randomly select a token from that list before feeding it back into the model. This is a chatbot trick; it doesn't happen in the model.

No machine learning model uses randomness during inference. Often people pass in random noise so the output varies, but if you pass in the same random noise then you get the same output.
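
Here's a rough sketch of that in plain NumPy (toy weights, not any real model): the network part is a deterministic function, and randomness only shows up in the sampling step that gets bolted on afterwards.

```python
import numpy as np

# Toy "model": fixed weights, so the same input always produces the same logits.
W = np.array([[0.2, -0.5, 1.0],
              [0.7,  0.1, -0.3]])

def model(x):
    return x @ W  # deterministic: no randomness anywhere in here

def sample_token(logits, temperature=1.0, rng=None):
    # The chatbot trick: turn scores into probabilities, then randomly pick one.
    rng = rng or np.random.default_rng()
    p = np.exp(logits / temperature)
    p /= p.sum()
    return rng.choice(len(p), p=p)

x = np.array([1.0, 2.0])
print(np.array_equal(model(x), model(x)))   # True, every single time
rng = np.random.default_rng(seed=42)        # fix the "random noise"...
print(sample_token(model(x), rng=rng))      # ...and the sampled token repeats too
```

Fix the seed and the whole pipeline is reproducible, which is the point.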

‘We don’t know how it works’ isn’t exactly true. There’s a lot of work in this field (https://en.wikipedia.org/wiki/Explainable_artificial_intelligence) and there’s nothing fundamentally unknowable about these systems.

The AI industry can die like any other.

Yes, like I said, 'the AI industry' that produced generative LLMs and diffusion models is what you're upset about; you're upset that capitalists are using AI to fire workers and destroy jobs.

You’re not upset that we’ve discovered the ability to create universal approximation functions or use machines to learn from data.

You’re mad at capitalism, not AI.

Using your logic, anyone and everyone can build nukes, they’re just math and physics and the materials (like GPUs and Power) are easy to come by. We can’t erase the knowledge so let’s all sit back and enjoy the fallout.

This isn’t an argument, this is: https://en.wikipedia.org/wiki/Reductio_ad_absurdum

I said you can’t eliminate the knowledge of AI from society since anyone with a laptop can train one from scratch and the knowledge to do so is available to everyone. The knowledge for making nuclear weapons was also not eliminated despite being far more dangerous and widely condemned.

Also, irrelevant to my point but, in what world are the materials to make nuclear weapons easy to come by?

but AI is a threat even without capitalism. So far they’re making life more expensive in multiple ways for all of us with no real benefit for their existence.

No real benefit? Once again, you seem to be talking about generative AI. That’s a product, created by capitalists, using AI. It isn’t the entirety of the AI field, it isn’t even the field with the most AI workers.

AI is used in science and has facilitated incredible discoveries already.

AlphaFold revolutionized structural biology by predicting the shape of every known protein, which has massively accelerated drug discovery. I'm not sure if you're aware, but there are now TWO AIDS vaccines in human trials, and the researchers use machine-learning models to mine clinical data in order to spot patterns in immune responses, leading to promising therapies.

AI is being used to plan, execute and interpret lab work. This is tedious and laborious work that requires a highly trained person, typically a grad student, and it isn't something you can scale up 100x or 1000x because you can't magic 1000 graduate students into existence. Now AI can do the tedious parts, and a single grad student can oversee several times as much lab work, the kind of work that is at the heart of almost every scientific discovery.

Diagnostic AI, which is objectively more accurate than human experts, is used to annotate diagnostic images in order to indicate to a doctor which areas to examine. This results in lower error rates and earlier detection than human-only review.

It's discovered new plasma physics that is key to getting fusion power working. (https://www.pnas.org/doi/10.1073/pnas.2505725122)

So, while you may not think these are worth the downsides, it’s disingenuous to say that there has been no benefit.


People should keep bitching about AI until it either dies or finds an entirely new business model based on not being pieces of shit.

How can AI die? What does that even mean?

It’s math. You can write the algorithms on a napkin from memory. It cannot ‘die’. You’re tilting at windmills, there’s nothing to kill.

You're mad at the people who are using the productivity gains from this new technology to eliminate jobs.

That isn’t an AI problem. The same thing happens every time there is a new productivity saving device. It doesn’t result in the workers earning more money from increased productivity, it results in a huge amount of people getting fired so profits can go up.

You’re not mad at AI, you’re mad at capitalism but it sounds like you lack the perspective to understand that.


I’m not even sure what you mean by equivalent. Is an airplane equivalent to aerospace engineering? They’re two different things.

AI models, the neural network ones, are essentially just a bunch of tensor multiplication. Tensors are a fundamental part of linear algebra and I hope I don’t have to keep explaining the joke.
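
To show how literal that is, here's a sketch of a tiny two-layer network in plain NumPy (made-up sizes, random weights); the whole forward pass is matrix multiplications with a ReLU in between:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((784, 128)), np.zeros(128)  # layer 1 weights
W2, b2 = rng.standard_normal((128, 10)), np.zeros(10)    # layer 2 weights

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)  # matmul + ReLU
    return h @ W2 + b2                # matmul again: that's the whole "model"

x = rng.standard_normal(784)          # e.g. a flattened 28x28 image
print(forward(x).shape)               # (10,) class scores
```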

The point is that no amount of being angry and toxic on the Internet will make AI disappear.

In addition, what most people are complaining about (the exploitative way that AI is being used) is not an AI problem, it is a capitalism problem. So, not only is the rage and anger useless but it is pointed at the wrong target.


Sorry Bill.

Sincerely,

- You already know everything about me.


You’re exactly right.

The unusual thing here is that production is not following demand.

It isn’t the case that RAM manufacturers are unable to buy more RAM manufacturing equipment. They’re simply choosing not to invest in new RAM manufacturing equipment because, collectively, they seem to agree that the demand is a bubble which will collapse before the investment will break even.

Since that sector typically targets a 3-5 year payback window, it means that the market is not expecting demand to continue rising long-term.

The article is simply AMD pricing the bubble uncertainty into their products. We'll likely see the Steam Machine launch at a similarly inflated price (also due to tariff uncertainty).


I didn’t say it wasn’t caused by AI.

I said that the people that show up in these threads are unusually toxic and irrational on the topic and share the same ridiculous framing that if they simply spew enough toxins on social media then Linear Algebra will uninvent itself.


Anti-ai bots are literally bots (ironic) or otherwise children/idiots who think in memes instead of rationally.


Yup. Like how I just grabbed a nice laptop for $150 that was $1,200 in 2025 because Microsoft dictated that every older computer is obsolete for their OS.


These kinds of posts always bring out the anti-ai bots, repeating the same FUD memes without reading the article or basing them in reality.


It’s not a shortage if production is normal but some greedy assholes keep buying them all. It’s a racket.

Your entire premise is built on “if production is normal” and yet in the 2nd paragraph of the article (which you read, right?) it says that production isn’t normal.

Manufacturers are intentionally not ramping up production to follow the demand because of the bubble risk.

So, the price increase is created by a supply-side problem because production isn’t normal.

The supply-chain disruption centres on memory devices—especially those used in graphics-cards and AI-accelerated systems—where manufacturers remain wary of ramping up production after past crashes. The result: constrained supply, elevated costs, and a decision by AMD to transmit some of that burden across its GPU product lineup.


Id be willing to bet the price increase won’t be shared by the AI industry.

Sounds like you’re giving in to conspiratorial thinking…

Does the AI industry buy computer components on Earth still? Then they’ll be affected by price increases.


@1440p?

I get 70-80 @4k with a 3080, same processor. I think the RT is on high though.

The game runs incredibly well for how good it looks.


This game is older than some of my coworkers



The game engines are programmed to use them as part of the rendering cycle.

If you’re using DLSS or RT, they’re being used.


NVIDIA’s RTX series of cards have two fixed-function blocks that sit beside the regular CUDA/shader cores.

They have RT Cores, which are optimized to accelerate bounding volume hierarchy (BVH) traversal and ray/triangle intersection tests, speeding up raytracing operations.

There are also Tensor Cores which are NVIDIA’s “AI” cores, they’re optimized for mixed-precision matrix multiplication. DLSS 3 uses a Convolutional Neural Network (CNN) for upscaling and that is, essentially, a bunch of matrix multiplications.

These blocks offload some computation onto dedicated hardware so the CUDA cores that handle the bulk of the shading/rasterizing are not tied up with these calculations, resulting in lower frame times, which equates to higher FPS.

AMD cards, in the RDNA 2/3 chips, have Ray Accelerators, which accelerate the ray/triangle tests, but the bulk of the RT load (BVH traversal, shading and denoising) runs on the regular shader cores. They've just announced (this month) that they're adding 'Radiance Cores' in future hardware, which will handle all raytracing functions like the RT Cores do.

AMD doesn’t have an equivalent of a Tensor Core, FSR is done in software on the standard shader compute units.

So on NVIDIA cards, DLSS upscaling is ‘free’ in the sense that it doesn’t take time away from the shader cores and RT is accelerated similarly.
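
If 'mixed-precision matrix multiplication' sounds abstract, here's a loose NumPy analogy (just the idea, not how the hardware actually executes it): keep the inputs in fp16, but accumulate the sums in fp32 so rounding error doesn't pile up.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal(4096).astype(np.float16)  # fp16 inputs: half the memory/bandwidth
b = rng.standard_normal(4096).astype(np.float16)

acc16 = np.float16(0.0)
acc32 = np.float32(0.0)
for x, y in zip(a, b):
    acc16 += x * y                            # fp16 accumulation: rounding error builds up
    acc32 += np.float32(x) * np.float32(y)    # fp32 accumulation: the Tensor Core approach

print(acc16, acc32)  # the pure-fp16 total drifts away from the fp32 one
```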

This is a good video explaining how Raytracing works if some of the terms are strange to you: https://www.youtube.com/watch?v=gsZiJeaMO48

As an aside, this video is from the 3Blue1Brown 'Summer of Math Exposition' collection, where every year there's a contest for the best and most interesting math explainer videos; this one is among the winners of the first year's contest, and the playlists are on 3Blue1Brown's YT. 3b1b is great all around, if you're into that kind of thing.


There’s still a gear grind so you can progress in your item score, but you don’t have to kill 30 rats and run around farming herbs for 2 hours just to do a raid.


5800X3D

It may be the gains from having dedicated hardware to run DLSS and RT.

Of course, it does drop into the 70s during combat and in some outdoor areas.


I played this morning before work, worked just fine.

It uses EAC, which may be a kernel anticheat on Windows, but on Linux it runs in user space.



Check out Fellowship, it’s WoW Mythic+ dungeons without the MMO grind.



I was looking into this, it’s weird that it isn’t on ProtonDB

Future Linux Converts:

If you wonder “Will the game that I play work on Linux?”, there’s a website for that:

https://www.protondb.com/


I’m sad the FPS-RTS hybrid really never took off.

Savage 1/2 and the Half Life mod Natural Selection were kind of popular for a while but the genre just kind of faded away.

I did like HoN better than LoL though


I’m not sure I understand the point that you’re trying to make.

If you use Linux you can use more power to get higher clock rates, have a longer battery life, more stable framerate and a suspend feature that works.

It seems reasonable to say “ROG Xbox Ally runs better on Linux than the Windows it ships with”

It’s like claiming a race car is only faster because it produces more horsepower… yes, that’s the entire point and what we want.


I’ve been playing it with friends (just past the looooooooooong ramp).

It’s a fun game to grab and play over the weekend and then forget about, not much replayability but just go in blind with some friends and enjoy the hilarity that ensues.


Even if it were a fair comparison, 32% is still cherry picked. It’s the best result, versus the average result of +6–7% (sorry, that was the fps increase) +13–14%. 🤷‍♂️

Oh sure. Headlines are always going for maximum clickbait.

The bottom line, from the article and video, is that the experience is much better on Linux, outside of the kernel anticheat games.


Did we read the same article?

So sure, if you run one device at 21 TDP and the other at 17, one will do better!

From the article:

Unless you’re really splitting hairs, 31.91% = 32%. Using the same TDP on both.

It’s due to a bug not apples to apples comparison.

There’s only two mentions of bugs, and both are used to describe Bazzite’s rapid development, nothing to do with performance.

So, what are you talking about?



RPing when shot and going down. “Help me sarge! Gah. It-it hurts!”

The chopper pilots that spent the entire match ferrying people from base to the front while blasting the weirdest song mix you’ve ever heard.


Even if the application doesn't support it, you can just bind-mount a directory from the storage drive over the directory under /var where it's trying to save, and the application won't even know.



People complain about performance and then complain about a patch. I’m starting to think social media only cares about outrage.


It's the JIT shader compilations that need to be rebuilt in the state cache. Other games do the same thing; it's why there are tiny stutters the first time you see a new effect in some games.
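
For anyone curious what a 'state cache' means in practice, the pattern is basically memoization; here's a rough sketch with hypothetical names (not actual code from any real driver or translation layer):

```python
# Compile a shader the first time it's needed, reuse it afterwards.
# The first encounter pays the compile cost, and that cost is the stutter.
shader_cache = {}

def get_pipeline(shader_hash, compile_fn):
    if shader_hash not in shader_cache:           # first time this effect appears
        shader_cache[shader_hash] = compile_fn()  # slow JIT compile happens here
    return shader_cache[shader_hash]              # every later frame: instant lookup

p1 = get_pipeline("deadbeef", compile_fn=lambda: "compiled program")  # slow the first time
p2 = get_pipeline("deadbeef", compile_fn=lambda: "compiled program")  # instant this time
```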


The argument is that this tech is being used by both the manufacturer and game devs to be lazy and market lies not how can we ever get to 1000hz with path tracing.

Yeah, marketing lies. I mentioned this in the last paragraph.

The whole 500hz benefits are skeptical and subjective at best considering even going from 144 to 240 you’re already seeing diminishing returns on but that’s really a whole other argument about monitor BS currently.

You’re skeptical of the benefits, that is obvious.

You're wrong about it being subjective, though. There are peer-reviewed methods of creating photographs that display motion blur as a human eye would experience it, and people have been using these techniques to evaluate monitors for years now. Here's a very high-level overview of the state of objective testing: https://blurbusters.com/massive-upgrade-with-120-vs-480-hz-oled-much-more-visible-than-60-vs-120-hz-even-for-office/. We are seeing diminishing returns because it roughly takes a doubling of the refresh rate to cut the motion blur in half: 60 to 120 Hz is half as blurry, while 144 to 240 Hz is only about 40% less blurry.
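
To put numbers on that (assuming the usual sample-and-hold model, where perceived blur scales with how long each frame stays on screen):

```python
# Persistence (how long each frame is held) on a sample-and-hold display.
for hz in (60, 120, 144, 240, 480, 1000):
    print(f"{hz:>4} Hz -> frame held for {1000 / hz:.2f} ms")

# 60 -> 120 Hz halves the hold time (16.67 -> 8.33 ms).
# 144 -> 240 Hz only cuts it by ~40% (6.94 -> 4.17 ms), and the absolute gain keeps shrinking.
```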

If you want to keep seeing noticeable gains, all the way until motion blur is imperceptible, then display refresh rates need to keep doubling and there have to be new frames generated for each of those refreshes. Even if a card can do 480 fps in some limited games, it can't do 1000 fps, or 2000 fps.

We need exponential increases in monitor refresh rates in order to achieve improvements in motion blur, but graphics cards have not been making exponential increases in power in quite some time.

Rasterization and Raytracing performance growth is sub-exponential while the requirements for reducing motion blur are exponential. So either monitor companies can decide to stop improving (not likely since TCL just demoed a 4k 1000hz monitor) or there has to be some technological solution for filling the gap.

That technological solution is frame generation.

Unless you know of some other way to introduce exponential growth in processing power (if you did you would win multiple Nobel prizes), then we have to use something that isn’t raw rendering. There is no way for a game to ‘optimize’ its way into having 10x framerate, or 100x framerate.

Being a complex solution doesn’t make it a good solution and frame gen is not a good solution for making sure your game doesn’t run like ass.

Yes, game companies are lazy, and they cover the laziness by marketing their games with a lot of upscaling so that they can keep producing crazier and crazier graphics despite graphics card performance growth not keeping up. This is the fault of gaming companies and their marketing, not of upscaling and frame generation technology.

Frame generation is supposed to help older cards get better “FPS” and smooth out motion, you know what would help that over having new games use frame generation as a big ass crutch? Optimizing your damn game so you don’t stutter like a drunken sailor with a speech impediment in the first place and not adding a crap ton of latency with fake frames.

Frame generation gives all cards better FPS, which objectively smooths out motion. Going from 30 to 60 fps cuts motion blur in half. There's nothing 'supposed' about it.

A developer’s choice to optimize their game and their choice to support upscaling and frame generation are not mutually exclusive choices. There are plenty of examples of games which run well natively and also support frame generation and upscaling.

Also, frame generation only adds meaningful latency when the frame time is long (low FPS); as the source framerate increases, the added input latency shrinks along with the frame time. In addition, it's possible to use frame generation to reduce input delay (Blur Busters: https://blurbusters.com/frame-generation-essentials-interpolation-extrapolation-and-reprojection/). Input latency is a very solvable problem.
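
Rough numbers on that, assuming an interpolation-style generator that has to hold one rendered frame back, so the added delay is about one source frame time:

```python
# Added delay from holding back one source frame, at different base framerates.
for base_fps in (30, 60, 120, 240):
    print(f"{base_fps:>3} fps source -> roughly +{1000 / base_fps:.1f} ms of added delay")
# 30 fps: ~33 ms extra (very noticeable); 120 fps: ~8 ms (hard to feel).
```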


My point is that you're not understanding the trajectory of display hardware development versus graphics card performance growth, and you're presenting frame generation and upscaling as some plot by game developers and graphics card designers to produce worse products.

It’s conspiracy nonsense.


Frame generation objectively reduces motion blur and improves frame-to-frame consistency.

Neural network-based upscaling is a far better alternative. Previously, in the time of the dinosaurs, we’d get better frame rate by turning the resolution down and letting the monitor handle upscaling. This looked bad but higher frame rate often is more important for image quality than resolution. Now we get the same performance boost with much less loss of visual clarity, and some antialiasing for free on top of it.
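
Back-of-the-envelope on why that works, assuming shading cost scales roughly with pixel count and a 67% internal render scale (the ballpark of a typical 'quality' preset):

```python
out_w, out_h = 3840, 2160                      # 4K output
scale = 0.67                                   # assumed internal render scale
in_w, in_h = int(out_w * scale), int(out_h * scale)

ratio = (out_w * out_h) / (in_w * in_h)
print(f"Render {in_w}x{in_h}, upscale to {out_w}x{out_h}: ~{ratio:.1f}x fewer pixels shaded")
```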

Upscaling and frame generation are good technologies. People are upset at the marketing of graphics cards which abuse these technologies to announce impressive FPS numbers when the hardware isn’t as big of an upgrade as implied.

Marketing departments lying about their products isn’t new, but for some people this is the first time that they’ve noticed it affecting them. Instead of getting mad at companies for lying, they’re ignorantly attacking the technologies themselves.


Frame generation is a requirement if we’re going to see very high refresh rate (480hz+) displays become the norm. No card is rasterizing an entire scene 500 times per second.

Calling it fake frames is letting Internet memes stand in place of actual knowledge. There are a lot of optimizations in the rendering pipeline that use data from previous frames to generate future frames; showing a generated intermediate frame while waiting for the GPU to finish the next real one is just one more trick.

The generated frames increase the visual clarity of motion; you can see it for yourself at https://testufo.com/photo.

We're not going to have cards that can pathtrace at 4K@1000 Hz anytime soon; frame generation is one of the techniques that will make it possible.

It's one thing to be upset at companies' marketing teams, who try to confuse people with FPS numbers by tweaking upscaling and frame generation. Directing that frustration at the technology itself is silly.

e: a downvote, great argument


No matter how optimized a game is, there will be someone with hardware that can barely run it.

For those people, having access to upscaling in order to gain performance is a plus.