• 0 Posts
  • 25 Comments
Joined 4M ago
Cake day: Sep 13, 2024


Hardware hasn’t changed in the way you think it has for quite a while. For shits I spun up a compatibility check on my fifteen-year-old file server and it qualifies for W10.

Your 15-year-old system is Windows 10 compatible because Windows 10 was released in 2015, meaning your system was only about 5 years old when Windows 10 came out. You can run Windows 11 on any device from 2019.

The big issue with Win10/11 is Microsoft trying to enforce corporate hardware requirements on home users, mostly so they can start trying to turn their shit into a walled garden.

You keep saying this line about corporate hardware. What’s “corporate” about TPM 2.0?


It is not the same. The government won’t even allow you to drive a car without a seatbelt if you somehow managed to buy one. Anyway, most cars only get about 5 years of support. A car is worth tens of thousands of dollars, yet you get less support than for a computer, and nobody complains about that. Just like you can keep using your car with its 10-year-old software, you can keep using your 10-year-old computer with your old OS (Windows 10). This is a very simple problem.


I mean, in an academic sense, if you possess the ability to implement the method, sure you can make your own code and do this yourself on whatever hardware you want, train your own models, etc.
But from the practical standpoint of an average computer hardware user, no, I don’t think you can just use this method on any hardware you want with ease; you’ll be reliant on official drivers which just do not support / are not officially released for a whole ton of hardware.
Not many average users are going to have the time or skillset required to write their own implementations, or to train and tweak the AI models for every different game at every different resolution for whichever GPUs / NPUs etc. the way massive corporations do.
It’ll be a ready-to-go feature of various GPUs and NPUs and SoCs and whatever, designed and manufactured by Intel and reliant on drivers released by Intel, unless a giant Proton-style open source project happens, with tens or hundreds or thousands of people dedicating themselves to making this method work on whatever hardware.

Yes, this was never intended for the average user; the average user doesn’t even understand what is being explained in the paper. This is for video game studios to include with their games, or for driver and OS developers to implement system-wide. The user gets handed a working product as usual. How many users do you think go and play with the FSR code, which is totally open source? Not many (I’m inclined to say zero).

I think at one point someone tried to do something like this, figuring out how to hackily implement DLSS on AMD GPUs, but it seems to require compiling your own DLLs, is based off some random person’s implementation of DLSS, and is likely quite buggy and inefficient compared to an actual Nvidia GPU with official drivers.

I’m not aware of anyone trying DLSS on AMD, and I don’t think it will ever work. Anyway, this is precisely why this isn’t intended for the average user: even the average developer doesn’t know how to work with these things. Very few people know what to do with the information that was provided, as is the case with most academic papers.

Which would mean that the practical upshot for an average end user is that if they’re not using a GPU architecture designed with this method in mind, the method isn’t going to work very well, which means this is not some kind of magic ‘holy grail’, universal software upgrade for all old hardware (I know you haven’t said this, but others in this thread have speculated as much).

Yes, new technologies are never guaranteed to work with old hardware. That’s just how things are unfortunately.

Also, the overhead of calculating predicted pipeline render times vs. extrapolated frame render times is not figured into this paper, meaning that the article based on the paper is, at least to some extent, overstating this method’s practical speed to the general public.

The real-time arbitration is not the focus of this paper so that’s expected. Here they describe the framework, and the patent is just a particular use case for it.

I think the disconnect we are having here is that I am coming at this from a ‘how does this actually impact your average gamer’ standpoint, and you are coming at it from a much more academic standpoint, inclusive of all the things that are technically correct and possible, whereas I am focusing on how that universe of technically possible things is likely to condense into a practical reality for the vast majority of non-experts.

I guess that makes sense.

What is a single word that means ‘this method is a feature that is likely to be officially supported and used out of the box only by specific Intel GPUs/NPUs etc., until Nvidia and/or AMD decide to officially support it out of the box as well, and/or a comprehensive open source team dedicates itself to maintaining easy-to-install drivers that add the same functionality to non-officially-supported hardware’?

Unfortunately that’s the case with any advanced technology, no matter how open it is. We depend on companies who are willing to pay somebody to figure it out.


TPM is required for Windows 11 because it is used for security purposes. The world is filled with things that aren’t “technically required” but are required in practice because of the problems they prevent. The web doesn’t “technically require” HTTPS, but modern websites require an HTTPS connection. A seatbelt isn’t “technically required” to drive a car, but you are required to wear one anyway.


I would disagree, given that two of the most efficient computer chips are based on phone SoCs (Qualcomm and Apple). Anyway, the fact that your system is powerful doesn’t mean anything from a support standpoint. Supporting old hardware means you need different versions for devices with different capabilities and architectures, which is not feasible for a company that also wants to focus on new technologies. Again, out of all the top operating systems, Windows gives you the most support.


I feel this is a bit of an overstatement; otherwise you’d only render the first frame of a game level and then just use this method to extrapolate every single subsequent frame.

Well, you would need a “history” of frames, so one wouldn’t be enough. Anyway, that’s entirely possible, but then you would be generating garbage.

Realistically, the model has to return to actual, fully pipeline-rendered frames from time to time to re-reference itself, otherwise you’d quickly end up with a lot of hallucination/artefacts, kind of an AI version of a shitty video codec that morphs into nonsense when it’s only generating partial new frames based on detected change from the previous frame.

That’s correct; nobody said otherwise. This is meant to help increase frame rate, so you need a source of frames to increase. Regular frames are still rendered as fast as the GPU can manage.

It’s not clear to me at all, from the paper alone, how often or under what conditions reference frames are referred back to… After watching the video as well, it seems they are running 24-second, 30 FPS scenes and functionally doubling this to 60 FPS by referring to some number of history frames to extrapolate half of the frames in the completed videos.

Because that’s implementation-specific. As described in the paper, once you have a history of frames, you can use the latest frame t_n to generate frames up to t_(n+i), where i is how many frames you want to extrapolate. The higher i is, the higher the frame rate, but also the more likely the output is to be garbage.
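
To make that concrete, here is a minimal sketch of the extrapolation index scheme (my own illustration in Python, not Intel’s code; the `model`, history size, and frame objects are all placeholders):

```python
from collections import deque

HISTORY_LEN = 4                       # assumed history size, tunable
model = lambda frames: frames[-1]     # placeholder "predictor": just repeats t_n

def extrapolate(history, i, model):
    """Given rendered frames up to t_n, generate frames t_(n+1) ... t_(n+i)."""
    frames = list(history)
    generated = []
    for _ in range(i):
        nxt = model(frames[-HISTORY_LEN:])   # predict the next frame from the recent window
        generated.append(nxt)
        frames.append(nxt)                   # predictions feed back in, so quality drops as i grows
    return generated

# usage: after HISTORY_LEN real frames have been rendered
history = deque((f"frame_t{n}" for n in range(HISTORY_LEN)), maxlen=HISTORY_LEN)
print(extrapolate(history, i=2, model=model))   # two extrapolated frames past t_3
```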

So that would be a 1:1 ratio of extrapolated frames to reference frames.
This doesn’t appear to actually be working in a kind of real-time, moderated tandem between real-time pipeline rendering and frame extrapolation.

I didn’t watch the video, but that’s completely possible. Once a couple of frames have been rendered, you can start alternating between a real frame and a generated one with this method. So you can’t have 60 fps right at the beginning, but you can after a few frames.
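
A rough sketch of what that interleaving could look like, assuming placeholder `render_next`/`predict` functions (my own illustration, not the paper’s implementation):

```python
def present(render_next, predict, history_len=4):
    """Yield a 1:1 interleave of rendered and extrapolated frames once warmed up."""
    history = []
    while True:
        real = render_next()                  # regular pipeline frame, rendered as fast as the GPU can
        history = (history + [real])[-history_len:]
        yield real
        if len(history) == history_len:       # warm-up: no extrapolation until the buffer is full
            yield predict(history)            # extrapolated frame shown between two real ones

# usage with trivial placeholders
frames = iter(range(100))
stream = present(lambda: next(frames), lambda h: f"extrapolated after {h[-1]}")
print([next(stream) for _ in range(8)])       # roughly doubled output rate after warm-up
```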

It seems to just be running already captured videos as input, and then rendering double FPS videos as output.

The only difference between watching a movie and playing a video game is that the movie isn’t polling your input. This framework only cares about the previously rendered frames, and from a technical standpoint, they’re both just a bunch of pixels.

I would love it if I missed this in the paper and you could point out to me where they describe in detail how they balance the ratio of, or the conditions under which, a reference frame is actually referred to… All I’m seeing is basically ‘we look at the history buffer.’

Yes, that’s because it is the implementer’s choice. I don’t know if they say what ratio they used, but it doesn’t matter because you don’t have to use their ratio. Anyone can implement this however they want and tune it for quality/performance.

Unfortunately they don’t actually list any baseline for frame times generated through the normal rendering pipeline. It would have been nice to see that as a sort of ‘control’ column where all the scores for the various ‘visual difference/error from standard fully rendered frames’ metrics are 0 or 100 or whatever; then we could compare some numbers on how much quality you lose for faster frames, at least on a 4070 Ti.

Yes, that’s something they seem to have missed. It would have been nice to see how it compares to actual rendering.

Yes, this is why I said this is GPU tech. I didn’t figure it needed to be stated that, well, OK, yes, technically you can run it locally on a CPU or NPU or APU, but it’s only going to actually run well on something resembling a GPU.
I was aiming at the practical upshot for the average computer user, not a comprehensive breakdown for hardware/software developers and extreme enthusiasts.

Yes, that’s true for now. But remember that Windows started a trend with Copilot where manufacturers are now encouraged to include NPUs in their CPUs. Every modern laptop (M series, Qualcomm, latest Intel/AMD) now includes an NPU (an underpowered one, admittedly, but these are first-generation devices so it will inevitably get better), so in the near future this could run on the NPU that ships in almost every computer. Once NPUs are more common, this could easily become a driver-level feature.

To be fair, when I wrote it originally, I used ‘apparently’ as a qualifier, indicating lack of 100% certainty.
But uh, why did I assume this?
Because most of the names on the paper list the company they are employed by, there is no freely available source code, and, generally speaking, corporate-funded research is always made proprietary unless explicitly indicated otherwise.
Much research done by universities ends up proprietary as well.

Yes, Intel will not give out the source code, but that’s not needed to recreate this experiment. Corporate-funded academic research can be proprietary, but if it is published to the public then anyone is free to use that knowledge. The whole point of academic journals is to share knowledge; if you wanted to keep it private, you simply wouldn’t publish it.

This paper only describes the actual method being used for frame gen in relatively broad strokes; the meat of the paper is devoted to analyzing its comparative utility, not thoroughly discussing and outlining exact opcodes or w/e.

Yes, because the method is all you need to recreate this. Intel is a for-profit company, so they might keep their own implementation to themselves. Pages 4–7 tell you exactly what you need to do to replicate this, in detail; they even give the formulas they used where needed. Remember this is supposed to be a general and modular framework that can be tuned depending on your goals, so the method needs to reflect that generality to allow for experimentation.

Sure, you could try to implement this method based off of reading this paper, but that’s a far cry from ‘here’s our MIT-licensed alpha driver, go nuts.’

They might publish it in the future, they might not, but if they don’t nothing is lost and they get a head start on implementing research that they paid for.

Intel filed what seem to me to be two different patent applications directly related to this academic publication, almost 9 months before the paper we are discussing came out, with 2 out of 3 of the credited inventors on the patents also having their names on this paper.
This one appears to be focused on the machine learning / frame gen method, the software:
https://patents.justia.com/patent/20240311950

This patent is about the hardware configuration of a system designed to run such a model in a way that Intel considers optimal. So I guess they’re considering designing SoCs specialized for these things (maybe for handhelds?). But this is not related to the paper, since it doesn’t affect your ability to train and run this model on your RTX like they did in the paper.

And this one appears to be focused on the physical design of a GPU, the hardware made to leverage the software.
https://patents.justia.com/patent/20240311951
So yeah, it looks to me like Intel is certainly aiming at this being proprietary.
I suppose it’s technically possible they don’t actually get these patents awarded.

This one is trickier, but it also does not affect your ability to implement your own model. What they are doing here is akin to a real-time kernel operation, but for graphics. You set a maximum time for a frame to be rendered (ideally the monitor’s refresh interval); if the algorithm decides that the GPU won’t meet that deadline, you generate the frame instead and discard whatever the GPU was doing. It’s basically a guarantee to meet the display update frequency (or proper v-sync). Also, they aren’t likely to get this one because they’re trying to patent pure logic: if time1 is less than tmax, pick option one; else pick option two.
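
In other words, something along these lines (my reading of the claim, with hypothetical names, not the patent’s actual algorithm):

```python
def next_display_frame(predicted_render_ms, t_max_ms, render_frame, extrapolate_frame):
    """Pick whichever frame source can meet the display deadline."""
    if predicted_render_ms < t_max_ms:   # "if time1 is less than tmax, pick option one"
        return render_frame()            # the GPU is expected to finish in time
    return extrapolate_frame()           # "else pick option two": generate and drop the late pipeline work

# usage: ~16.7 ms budget for a 60 Hz display
frame = next_display_frame(
    predicted_render_ms=22.0, t_max_ms=1000 / 60,
    render_frame=lambda: "rendered", extrapolate_frame=lambda: "extrapolated")
print(frame)   # "extrapolated", because the pipeline would miss the refresh deadline
```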

These patents do not affect the paper in any way, since they do not cover what is needed for this method (RTX 4070 Ti, Ryzen 9 5900X, PyTorch, TensorRT, and NVIDIA Falcor) or their alternatives.


The requirement is 7-year-old hardware. While not everyone upgrades their PC every 7 years, I don’t think it’s unreasonable to stop supporting 7-year-old hardware. Apple requires an iPhone XS (6 years old) for iOS 18, Google requires a Pixel 6 (3 years old) for Android 15, and macOS Sequoia requires laptops no more than 6 years old. Turns out Microsoft is the one giving the most support.


Now this is all extremely rough math, but the basic takeaway is that frame gen, even this faster and higher-quality frame gen, which doesn’t introduce input lag the way DLSS or FSR does, is only worth it if it can generate a frame faster than you could otherwise fully render it normally.

The point of this method is that it takes fewer computations than going through the whole rendering pipeline, so it will always be able to produce a frame faster than performing all the calculations, unless we’re at extreme cases like very low resolution, very high fps, or a very slow GPU.

I.e., if your rig is running 1080p at 240 FPS, 1440p at 120 FPS, or 4K at 60 FPS natively… this frame gen would be pointless.

Although you did mention these are only rough estimates, it is worth saying that these numbers are only relevant to this specific test and this specific GPU (RTX 4070 Ti). Remember that the time to run a model depends on GPU performance, so a faster GPU will be able to run this model faster. I doubt you will ever run into a situation where you can go through the whole rendering pipeline before this model finishes running, except for the cases I listed above.
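
For reference, the frame-time budgets implied by those native targets are just simple arithmetic:

```python
# Frame-time budgets for the native targets mentioned above:
for res, fps in [("1080p", 240), ("1440p", 120), ("4K", 60)]:
    print(f"{res} @ {fps} fps -> {1000 / fps:.2f} ms per frame")
# 1080p @ 240 fps -> 4.17 ms
# 1440p @ 120 fps -> 8.33 ms
# 4K @ 60 fps -> 16.67 ms
# Extrapolation only pays off if the model's inference time fits comfortably inside these budgets.
```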

I… guess if this could actually somehow be implemented at a driver level, as an upgrade to existing hardware, that would be good.

It can. This method only needs the frames, which the OS can easily provide.

But … this is GPU tech.

This can run on anything that can do math (CPU, NPU, GPU); they simply chose a GPU. Plus, it is widely known that CPUs are not as good as GPUs at running models, so it would be pointless to run this on a CPU.
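
As a toy illustration of that device-agnosticism (assuming PyTorch, which is what the paper’s experiment used; the network here is a made-up stand-in, not the paper’s model):

```python
import torch
import torch.nn as nn

# Toy stand-in for an extrapolation network (an assumption, not the paper's model):
# it maps a stack of 4 RGB history frames to one predicted RGB frame.
class ToyExtrapolator(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(4 * 3, 3, kernel_size=3, padding=1)

    def forward(self, history):              # history: (batch, 12, H, W)
        return self.conv(history)

# The same model runs on whatever device is available; only the speed changes.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = ToyExtrapolator().to(device).eval()
history = torch.randn(1, 12, 270, 480, device=device)   # downscaled dummy frame stack
with torch.no_grad():
    predicted = model(history)                           # same code path on CPU or GPU
print(predicted.shape, device)
```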

And is apparently proprietary to Intel… so it could only be rolled out on existing or new Intel GPUs (until or unless someone reverse engineers it for other GPUs) which basically everyone would have to buy new, as Intel only just started making GPUs.

Where did you get this information? This is an academic paper in the public domain. You are not only allowed but encouraged to reproduce and iterate on the method described in the paper. Also, the experiment didn’t even use Intel hardware; it was an NVIDIA GPU and an AMD CPU.


The second page of the paper explains the shortcomings of warping and hole filling.


If the position said “Remote” when you applied, I support this. Otherwise, this is unjustified and will make things worse for everyone down the line.




Remember, Microsoft’s support is oriented toward businesses. A business will not buy parts from AMD or MSI and then build the computer itself; they buy prebuilt computers from manufacturers, and those manufacturers have in fact been forced to pick parts that support TPM 2.0 since Windows 10. Microsoft could not care less if you and I get hacked, because the fact is we don’t make Microsoft any money.

Also, chances are your motherboard does support TPM 2.0. Remember, most manufacturers are lazy and don’t include a dedicated TPM module; instead they use firmware TPM, which depends on the CPU. So even if your motherboard supports TPM 2.0, you still need a compatible CPU.


Please go watch any of the countless Louis Rossman videos about how Apple claims a device is irreparable and he fixes it for 5 dollars.

Louis Rossman complains about Apple not supplying third-party repair shops with the parts needed to do the repairs. He has acknowledged multiple times that you cannot expect Genius Bar employees to know how to do board-level repair.

Etc. Soldered RAM

Soldered RAM is not an Apple-only thing. Manufacturers do it because they don’t want to support people whose systems are unstable from installing unsupported RAM. Remember, Apple devices are mostly one SoC with everything soldered to reduce possible points of failure.

Faulty switches, bad display cables, all easy fixes, yet the geniuses will suggest you buy new because the repair will cost as much. Apple is an e-waste-producing company.

These are design defects, and every company run by humans is allowed to make them. These are easy fixes if you know what the issue is; it is not cost-effective for Apple to have a Genius Bar employee open every device and check all components with an oscilloscope to find out whether it’s a faulty display cable or a missing capacitor. It is more efficient for Apple to just replace the entire mainboard, and this is expensive for the consumer because you are essentially getting a brand-new computer. Yes, this is bad practice, but don’t confuse it with creating e-waste. When you hand in your computer, all recyclable parts are salvaged and used for future Apple devices, and newer devices are more recyclable than older ones.

As for scanning people’s data, it has already proven to do more harm than good. CSAM people just change their behavior,

Like I said before, nothing is foolproof, and I don’t advocate for these measures. However, the point of these is to force CSAM people onto other services; if all cloud services implement this, all of a sudden CSAM people have to go around sharing thumb drives or magnet links, which lowers their ability to share the files.

and you have legit people having their accounts frozen and the police called because their doctor asked for photos of skin rashes during COVID. It is hard for an innocent guy to live down an arrest over false child porn accusations.

Yes, it is not possible to differentiate between CSAM and pictures taken for a doctor, and that Google incident is why Apple didn’t proceed with the iCloud scanning. Again, I don’t advocate for these measures, as I’m completely against espionage, but people like to pretend these technologies are made with the sole purpose of spying on you and that “for the kids” is just an excuse. People like that are deeply unserious, because they seem to forget that if the company wants to look at your data, it will, and it doesn’t even have to tell you about it.


What I wrote there was too general. OEMs are the ones who have been required to ship TPM 2.0-enabled devices since 2016; you could still build your own PC without TPM 2.0. Remember, Microsoft’s main customers are companies that don’t build their own PCs but buy them from manufacturers.


Apple will happily throw away a good machine to sell you a new one; their eco-friendliness and repairability scores are self-scored bullshit.

That’s beside the point. They make their machines with recycled materials, and that’s a fact. There are people using 10-year-old MacBooks and iPads, so I don’t think anybody is being forced to “throw away” a good machine.

Giving the police access to everyone’s phone would not make people safer. You would not have enough police to monitor everything, and it is a backdoor for hacking.

That was an example. What they would do is have a computer scan your data for illegal content (like they planned to do with iCloud), and any flagged data would get checked by an actual person. If you think this wouldn’t help protect people, you are lying to yourself. Whether this is a privacy issue or not is not the point; the point is that “it’s for the children” is a valid reason for implementing this kind of stuff and not just something to be skeptical about.

Just like Intel Management Engine that gave hackers passwordless entry into machines. Having control like that is not safety.

You are still evading the issue at hand. I never claimed backdoors are not a security issue; I said they would definitely help protect the children, as I repeated above.

Plus anyone with physical access is going to defeat security anyway.

Obviously. The point of things like TPM is to prevent remote hacking. Who claimed otherwise? You cannot guarantee the safety of any system if the attacker has physical access. I assume your computer doesn’t have a log in password since anyone with physical access can defeat it, right?

My Linux OS has a MOC signed by Microsoft; an OS can work with TPM given a signature… hackers will find a way to spoof their way into it.

Yes, nothing is foolproof. Should we stop advancing security just because it’s not perfect? Should we stop using SSL/TLS because BREACH and POODLE exploits exist? Should we stop using passwords because someone can brute force them? Maybe we should also throw away memory and thread safe languages because there are some corner cases where they can be used in an unsafe manner? Listen to yourself.


it’s one of those things where it does legitimately improve security, but for them to require it the way they did, when almost no hardware at the time had it, is pretty transparent.

Windows has required hardware manufacturers to include TPM 2.0 support since July 2016, and Windows 11 was released in October 2021. The truth is Microsoft did everything they could to wait for people to get their hands on new hardware (5 years). Data shows that 83% of businesses were victims of firmware attacks, which is exactly what TPM helps with. Like it or not, Microsoft’s primary customers are businesses, since they are the ones who buy hundreds of licenses and pay for technical support. The TPM requirement was not a surprise to anyone:

In fact, in the 55 pages of minimum specifications for Windows 10 hardware, TPM is mentioned 60 times.

A quote from the link above.

there are plenty of other hardware requirements that could improve security if they arbitrarily decided to require them. they did this for the reason you describe, but have the plausible deniability of saying that it’s for security.

What other hardware could they require to prevent firmware attacks?

“it’s for security” - no it’s not. as a for-profit company, chances are pretty good we can prove you don’t actually give a shit about customer data if we look close enough at your practices. it’s for profit.

As shown in the link above, it is for security. The profit comes when businesses keep buying Windows instead of moving to macOS over the lack of security in Windows machines.

“it’s for the environment” - admirable thought, too bad that’s not profitable. I don’t believe you mr. for profit company.

Apple has shown you can have products made of recycled material while still being high quality and highly profitable. If you want environmentally friendly products, you need to pay more, because, like you said, it is not profitable to sell those products at the same price as before. So you either complain about the price or about the environment; you can’t have both.

“for the kids” - if you have ever tried to talk to a parent after the subject of their kids’ safety comes up, you’ll see why they always go for this one. it’s the deepest, most primal, and least logical part of our brain. most parents become slobbering fucking cavemen the second you disagree with whatever they’ve been programmed to believe will protect their kids. it’s just too easy to manipulate people with. if you say you’re trying to protect kids, I’m instantly skeptical and need a lot of proof.

The truth is most surveillance technologies will help protect the kids. This is a fact. If you gave the police access to everyone’s phone all the time, kids would objectively be safer on the internet. Yes, this is used as an excuse to attack our privacy, but it does work, and there’s no reason to be skeptical about that part. Anyway, this is off topic for Windows TPM.


I honestly cannot trust game reviewers after seeing some of the reviews for this exceptional game

I don’t understand your point. Do you want them to ignore the bugs and performance issues? Would you be happier if the reviews lied to you and said the game was perfect? If they took that route, I guarantee you’d be complaining about that as well.

where the fuck was that when Starfield came out?

“They didn’t complain about that game, so they shouldn’t complain about any game ever!”


A Russian official has said that the game could face a total ban in Russia

The whole article is based off this unnamed “Russian official” btw.


The Steam client needs the XWayland translation layer to work on any modern DE, plus 32-bit libraries (which are not installed by default).


They don’t support new technologies (Wayland), why would they drop support for old ones?



Never trust a single person on YouTube. Watch multiple videos from different channels.



That’s like saying clock rate and core count are fake terms. Sure, by themselves they might not mean much, but they’re part of a system that directly benefits from them being high.

The issue with the teraflops metric is that it is roughly inversely proportional to the bit-length of the data, meaning that TFLOPS at 8-bit is about 2× TFLOPS at 16-bit. So quoting teraflops without specifying the bit-length is almost useless. You could make the argument that 8-bit is too low for modern games and 64-bit is too high a performance trade-off for the accuracy gain, so you can assume the teraflops figures from a gaming company are based on 16-bit/32-bit performance.
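
Put in numbers (illustrative figures only, following the ~2× rule of thumb above):

```python
# On the same silicon, halving the bit-length roughly doubles the quoted TFLOPS:
tflops_fp16 = 40.0                  # hypothetical FP16 rating
print("FP8 :", tflops_fp16 * 2)     # ~80 TFLOPS
print("FP16:", tflops_fp16)         #  40 TFLOPS
print("FP32:", tflops_fp16 / 2)     # ~20 TFLOPS
# A bare "X TFLOPS" claim is meaningless without knowing the precision it was measured at.
```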