Here’s the quote, for people allergic to reading the update in the article.
Update: Nvidia sent us a statement: “We are aware of a reported performance issue related to Game Filters and are actively looking into it. You can turn off Game Filters from the NVIDIA App Settings > Features > Overlay > Game Filters and Photo Mode, and then relaunch your game.”
We have tested this and confirmed that disabling the Game Filters and Photo Mode does indeed work. The problem appears to stem from the filters causing a performance loss, even when they’re not being actively used. (With GeForce Experience, if you didn’t have any game filters enabled, it didn’t affect performance.) So, if you’re only after the video capture features or game optimizations offered by the Nvidia App, you can get ‘normal’ performance by disabling the filters and photo modes.
So, TomsHW (is at least claiming that they) did indeed test this, and found that it’s the filters and photo mode causing the performance hit.
Still a pretty stupid problem to have, considering the old filters did not cause this problem, but at least there’s a workaround.
… I’m curious if this new settings app even exists, or has been tested on linux.
Neither of the two articles is well sourced.
But you acted like yours was credible, until I presented another one, whereupon you admitted they are both equally valid.
That’s assuming that the random article you found is correct, the veracity of which I can’t verify any more than the Interesting Engineering article…
That is to say, you cannot verify either of these articles at all, ie, they are both of dubious legitimacy.
You accused someone of being racist based off an article you admit you cannot verify, then posted a bunch of related research papers that indicate, sure, they’re trying to develop the thing your article claimed they did… but that don’t indicate they actually developed it.
…
I can link you a patent for a triangular shaped aircraft, listed as filed by a US Navy Scientist that claims to outline how to create an electromagnetic, gravity negating field around the craft.
That would not be evidence that the US Navy officially announced that they basically built a UFO, that it works, and there’s a video of it, all officially documented and released.
But to you, it would be, if China had done all those things.
…
I am not saying China certainly has or has not developed a hypersonic passenger liner.
I am saying your source for this claim is dubious.
I am saying that you believe(d?) it credulously, without any skepticism, got very hostile with people who doubted its claim less tactfully than I did, and have now admitted that the claim you got hostile over is dubious, all while shifting the burden of proof from the article making the claim to the skeptic questioning it.
Again, this is the logic of a fanatic.
If we just pick which dubiously sourced claims we believe based on vibes, truth stops existing.
So, you just assumed an unsourced, unverified story is true because you have a bias in favor of China, put the burden of proof onto the other person to disprove it, and are completely fine with calling the other person a ‘sad racist’, despite now admitting that the veracity of the claim they are skeptical of is in fact not well established.
This is the argument/personality style of a fanatic, a religious fundamentalist, a QAnon adherent, an Elon Musk simp.
This is how we got ‘the Trump assassination attempt was staged!’
Please stop posting trash-tier misinformation as ‘technology news’, please stop jumping to ‘everyone who disagrees with me is racist’; this level of unjustified vitriol only makes you appear manic.
Ok so 24+ hours later and I now see a few different websites I’ve never heard of before that basically have the same article as this:
https://scienceinfo.net/chinese-hypersonic-aircraft-prototype-reaches-mach-6-speed.html
Still no actual link to the apparently original source somewhere on some social media site.
Now what’s being said is that this was a flight test that actually occurred 3 years ago, and was classified until now.
And they do provide an image, and credit it to CAS (without an actual link; I still can’t find this on CAS’s English site, but again, maybe they are still writing a proper English post?)
This is a test article that doesn’t appear to have any intakes for a scramjet. I think I can make out two small rocket bells inside the thing, but the image quality is very low.
It’s just a test article, launched by a rocket, that I would guesstimate to have a wingspan of about… 4 meters, ish?
This new article also mentions that Cui, the team lead, did not mention anything about the current status of the hypersonic passenger jet which this was a test article for.
So… this test article got up to mach 6.5, 3 years ago.
Absolutely nothing about whether or not a successful test flight of a passenger-jet-sized craft achieved hypersonic speeds with an air-breathing turbo-ramjet / scramjet or something like that.
Completely different from the original report.
… This is why I wanted an actual source.
If this very poorly sourced article from this random, clickbait-style website is more accurate than the OP article (another poorly sourced article from another clickbait-style website), that would mean SCMP, and everyone in this thread saying China has built an air-breathing hypersonic jet liner, is wrong, and everyone saying that this is basically comparable to the X-15 is correct.
(Differences being the X-15 was carried up to 45 thousand feet by a B-52 instead of being launched by a rocket, and the X-15 was manned, while this test article is presumably unmanned.)
I mean sure, thats a related research paper, but that isn’t the same thing as an official press announcement or video saying ‘Hey we actually built this thing, it works, take a look.’
I know that the CAS has specifically been researching/developing a hypersonic, passenger-liner-sized craft for around a decade… and the US has been doing the same with the SR-72, both attempting to develop… something like a turbo-ramjet that transitions to scramjet at high speeds/altitudes.
But a link to a research paper from 6 years ago is not actually a primary source for the claim your original link makes but does not actually source.
I am willing to believe that this may have actually been developed…
But a better source sure would be neat.
Interesting/Wonderful Engineering both claim this was posted on ‘Social Media’ by the Chinese Academy of Sciences… with no link.
South China Morning Post also claims a video posted by CAS on social media… with no link, no video.
The English version of the CAS website is updated every couple of days, but this isn’t on it.
Granted, they could be taking their time doing a proper translation.
Does anybody know where to see this video?
… I mean, in an academic sense, if you possess the ability to implement the method, sure you can make your own code and do this yourself on whatever hardware you want, train your own models, etc.
But from the practical standpoint of an average computer hardware user, no, I don’t think you can just use this method on any hardware you want with ease; you’ll be reliant on official drivers which just do not support / are not officially released for a whole ton of hardware.
Not many average users are going to have the time or skillset required to write their own implementations, train and tweak the AI models for every different game at every different resolution for whichever GPUs / NPUs etc the way massive corporations do.
It’ll be a ready-to-go feature of various GPUs and NPUs and SoCs and whatever, designed and manufactured by Intel, reliant on drivers released by Intel, unless a giant Proton-style open source project happens, with tens or hundreds or thousands of people dedicating themselves to making this method work on whatever hardware.
…
I think at one point someone tried to do something like this, figuring out how to hackily implement DLSS on AMD GPUs, but this seems to require compiling your own DLLs, is based on one random person’s implementation, and is likely quite buggy and inefficient compared to an actual Nvidia GPU with official drivers.
https://github.com/PotatoOfDoom/DLSS/tree/981fff8e86274ab1519ecb4c01d0540566f8a70e
https://github.com/PotatoOfDoom/CyberFSR2
https://docs.google.com/spreadsheets/d/1XyIoSqo6JQxrpdS9l5l_nZUPvFo1kUaU_Uc2DzsFlQw/htmlview
Yeah, looks like a whole bunch of compatibility issues and complex operations for an ‘i just want play game’ end user to figure out.
…
Also hey! Your last bit there about the second patent I listed seems to describe how they’re going to do the real time moderation between which frames are fully pipeline rendered and which ones are extrapolated: use the described GPU kernel operation to estimate pipeline frame rendering times along with a target FPS/refresh rate, do extrapolation whenever FPS won’t hit target FPS.
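Roughly, as I read the patent, the decision logic would amount to something like this minimal sketch (every name and number below is hypothetical, my reading of the idea, not anything from the patent or the paper):

```python
# Minimal sketch of the scheduling idea as I read the patent.
# Every name and number here is hypothetical, not Intel's actual implementation.

TARGET_FPS = 120
FRAME_BUDGET_MS = 1000.0 / TARGET_FPS  # ~8.33 ms per frame at 120 FPS


def choose_frame_source(est_render_ms: float, est_extrapolate_ms: float) -> str:
    """Decide whether the next frame is fully rendered or extrapolated.

    est_render_ms:      predicted cost of the full rendering pipeline,
                        e.g. from a GPU-side timing/estimation kernel.
    est_extrapolate_ms: predicted cost of the extrapolation model.
    """
    # If the full pipeline can hit the target refresh rate, just render normally.
    if est_render_ms <= FRAME_BUDGET_MS:
        return "render"
    # Otherwise extrapolate, provided the model is actually the faster option.
    if est_extrapolate_ms < est_render_ms:
        return "extrapolate"
    return "render"


# Example: pipeline predicted at 12 ms, extrapolation at 6.6 ms, 120 FPS target.
print(choose_frame_source(12.0, 6.6))  # -> "extrapolate"
```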
… Which would mean that the practical upshot for an average end user is that if they’re not using a GPU architecture designed with this method in mind, the method isn’t going to work very well, which means this is not some kind of magic ‘holy grail’, universal software upgrade for all old hardware (I know you haven’t said this, but others in this thread have speculated at this)…
And that means the average end user is still in a state of comparing cost vs performance/features of an increasingly architecture divergent selection of future GPUs/NPUs/SoCs/APUs.
And also, the overhead of doing the calculation of predicting pipeline render times vs extrapolated frame render times is not figured into this paper’s numbers, meaning that the article based on the paper is, at least to some extent, overstating this method’s practical quickness to the general public.
…
I think the disconnect we are having here is that I am coming at this from a ‘how does this actually impact your average gamer’ standpoint, and you are coming at it from much more academic standpoint, inclusive of all the things that are technically correct and possible, whereas I am focusing on how that universe of technically possible things is likely to condense into a practical reality for the vast majority of non experts.
Maybe ‘proprietary’ was not exactly the technically correct term to use.
What is a single word that means ‘this method is a feature that is likely to only be officially, out of the box supported and used by specific Intel GPUs/NPUs etc until Nvidia and/or AMD decide to officially support it out of the box as well, and/or a comprehensive open source team dedicates themselves to maintaining easy to install drivers that add the same functionality to non officially supported hardware’?
Either way, I do enjoy this discussion, and acknowledge that you seem to be more knowledgeable in the technicalities than myself.
The point of this method is that it takes less computations than going through the whole rendering pipeline, so it will always be able to render a frame faster than performing all the calculations unless we’re at extreme cases like very low resolution, very high fps, very slow GPU.
I feel this is a bit of an overstatement, otherwise you’d only render the first frame of a game level and then just use this method to extrapolate every single subsequent frame.
Realistically, the model has to return to actual fully pipeline-rendered frames from time to time to re-reference itself, otherwise you’d quickly end up with a lot of hallucination/artefacts, kind of an AI version of a shitty video codec that morphs into nonsense when it’s only generating partial new frames based on detected change from the previous frame.
It’s not clear at all, at least to me, from the paper alone, what the average frequency is, or under what conditions reference frames are referred back to… after watching the video as well, it seems they are running 24-second, 30 FPS scenes and functionally doubling this to 60 FPS, by referring to some number of history frames to extrapolate half of the frames in the completed videos.
So, that would be a 1:1 ratio of extrapolated frame to reference frame.
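In other words, as far as I can tell, the setup amounts to something like this toy sketch (my reading of it, not code from the paper; the names are made up):

```python
# Toy illustration of the 1:1 interleave: a 30 FPS input sequence becomes a
# 60 FPS output by inserting one extrapolated frame after every rendered one.
# The function names are made up for illustration, not from the paper.

def double_framerate(rendered_frames, extrapolate):
    """rendered_frames: the fully pipeline-rendered frames (the 30 FPS input).
    extrapolate: a function that takes the history of frames seen so far and
    predicts one new frame from it (no future frames needed)."""
    output, history = [], []
    for frame in rendered_frames:
        history.append(frame)
        output.append(frame)                 # real, pipeline-rendered frame
        output.append(extrapolate(history))  # frame predicted from history only
    return output  # twice as many frames: 30 FPS in, 60 FPS out


# Trivial usage: "frames" are just strings, "extrapolation" copies the last one.
print(double_framerate(["f0", "f1", "f2"], lambda hist: hist[-1] + "'"))
```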
This doesn’t appear to actually be working in a kind of real time, moderated tandem between real time pipeline rendering and frame extrapolation.
It seems to just be running already captured videos as input, and then rendering double FPS videos as output.
…But I could be wrong about that?
I would love it if I missed this in the paper and you could point out to me where they describe in detail how they balance the ratio of, or conditions in which a reference frame is actually referred to… all I’m seeing is basically ‘we look at the history buffer.’
Although you did mention these are only rough estimates, it is worth saying that these numbers are only relevant to this specific test and this specific GPU (RTX 4070 TI).
That’s a good point, I missed that, and it’s worth mentioning they ran this on a 4070 Ti.
I doubt you will ever run into a situation where you can go through the whole rendering pipeline before this model finishes running, except for the cases I listed above.
Unfortunately they don’t actually list any baseline for frametimes generated through the normal rendering pipeline. It would have been nice to see that as a sort of ‘control’ column, where all the scores for the various ‘visual difference/error from standard fully rendered frames’ metrics are 0 or 100 or whatever; then we could compare some numbers on how much quality you lose for faster frames, at least on a 4070 Ti.
If you control for a single given GPU then sure, other than edge cases, this method will almost always result in greater FPS for a slight degradation in quality…
…but there’s almost no way this method is not proprietary, and thus your choice will be between price comparing GPUs with their differing rendering capabilities, not something like ‘do i turn MSAA to 4x or 16x’, available on basically any GPU.
More on that below.
This can run on whatever you want that can do math (CPU, NPU, GPU), they simply chose a GPU. Plus it is widely known that CPUs are not as good as GPUs at running models, so it would be useless to run this on a CPU.
Yes, this is why I said this is GPU tech; I did not figure that it needed to be stated that, oh well, ok, yes, technically you can run it locally on a CPU or NPU or APU, but it’s only going to actually run well on something resembling a GPU.
I was aiming at the practical upshot for the average computer user, not a comprehensive breakdown for hardware/software developers and extreme enthusiasts.
Where did you get this information? This is an academic paper in the public domain. You are not only allowed, but encouraged to reproduce and iterate on the method that is described in the paper. Also, the experiment didn’t even use Intel hardware, it was NVIDIA GPU and AMD CPU.
To be fair, when I wrote it originally, I used ‘apparently’ as a qualifier, indicating lack of 100% certainty.
But uh, why did I assume this?
Because most of the names on the paper list the company they are employed by, there is no freely available source code, and just generally corporate funded research is always made proprietary unless explicitly indicated otherwise.
Much research done by Universities also ends up proprietary as well.
This paper only describes the actual method being used for frame gen in relatively broad strokes; the meat of the paper is devoted to analyzing its comparative utility, not thoroughly discussing and outlining exact opcodes or w/e.
Sure, you could try to implement this method based off of reading this paper, but that’s a far cry from ‘here’s our MIT-licensed alpha driver, go nuts.’
…And, now that you bring it up:
Intel filed what look to me like two different patent applications directly related to this academic publication, almost 9 months before the paper we are discussing came out, with 2 of the 3 credited inventors on the patents also having their names on this paper.
This one appears to be focused on the machine learning / frame gen method, the software:
https://patents.justia.com/patent/20240311950
And this one appears to be focused on the physical design of a GPU, the hardware made to leverage the software.
https://patents.justia.com/patent/20240311951
So yeah, looks to me like Intel is certainly aiming at this being proprietary.
I suppose it’s technically possible they do not actually get these patents awarded to them, but I find that extremely unlikely.
EDIT: Also, lol, video game journalism professional standards strike again; whoever wrote the article here could have looked this up and added this highly relevant ‘Intel is pursuing a patent on this technology’ information to their article in maybe a grand total of 15 to 30 extra minutes, but nah, too hard I guess.
The paper includes the following chart of average frame gen times at various resolutions, across the various test scenarios in which they compared against other frame generation methods.
Here’s their new method’s frame gen times, averaged across all their scenarios.
540p: 2.34ms
720p: 3.66ms
1080p: 6.62ms
Converted to FPS, by assuming constant frametimes, that’s about…
540p: 427 FPS
720p: 273 FPS
1080p: 151 FPS
Now let’s take extrapolated pixels per millisecond of frametime to guesstimate an efficiency factor:
540p: 518400 px / 2.34 ms = 221538 px/ms
720p: 921600 px / 3.66 ms = 251803 px/ms
1080p: 2073600 px / 6.62 ms = 313233 px/ms
Plugging pixels vs efficiency factor into a graphing system and using power curve best fit estimation, you get these efficiency factors for non listed resolutions:
1440p: 361423 px/ms
2160p: 443899 px/ms
Which works out to roughly the following frame times:
1440p: 10.20 ms
2160p: 18.69 ms
Or in FPS:
1440p: 98 FPS
2160p: 53 FPS
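For anyone who wants to check the rough math, this is roughly how I got there: a power-curve fit on the three data points from the paper, with the 1440p/2160p numbers being my extrapolation, not anything the paper reports.

```python
import numpy as np

# Average frame-generation times from the paper (ms), per resolution.
data = {
    "540p":  (960 * 540,   2.34),
    "720p":  (1280 * 720,  3.66),
    "1080p": (1920 * 1080, 6.62),
}

# Efficiency factor = extrapolated pixels per millisecond of frame-gen time.
px  = np.array([p for p, _ in data.values()], dtype=float)
eff = np.array([p / t for p, t in data.values()])

# Power-curve best fit: efficiency ≈ a * pixels^b (a straight line in log-log space).
b, log_a = np.polyfit(np.log(px), np.log(eff), 1)
a = np.exp(log_a)

for name, pixels in [("1440p", 2560 * 1440), ("2160p", 3840 * 2160)]:
    efficiency = a * pixels ** b      # px/ms
    frametime  = pixels / efficiency  # ms
    print(f"{name}: {efficiency:,.0f} px/ms, {frametime:.2f} ms, {1000 / frametime:.0f} FPS")

# Prints numbers in the same ballpark as above: roughly 360k px/ms, ~10.2 ms,
# ~98 FPS at 1440p, and roughly 440k px/ms, ~18.8 ms, ~53 FPS at 2160p.
```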
… Now this is all extremely rough math, but the basic takeaway is that frame gen, even this faster and higher quality frame gen which doesn’t introduce input lag the way DLSS or FSR frame generation does, is only worth it if it can generate a frame faster than you could otherwise fully render one normally.
(I want to again stress here this is very rough math, but I am ironically forced to extrapolate performance at higher resolutions, as no such info exists in the paper.)
IE, if your rig is running 1080p at 240 FPS, 1440p at 120 FPS, or 4K at 60 FPS natively… this frame gen would be pointless.
I… guess if this could actually somehow be implemented at a driver level, as an upgrade to existing hardware, that would be good.
But … this is GPU tech.
Which, like DLSS, requires extensive AI training sets.
And is apparently proprietary to Intel… so it could only be rolled out on existing or new Intel GPUs (until or unless someone reverse engineers it for other GPUs) which basically everyone would have to buy new, as Intel only just started making GPUs.
It’s not gonna somehow be a driver/chipset upgrade to existing Intel CPUs.
Basically this seems to be fundamental to Intel’s gambit to make its own new GPUs stand out. Build GPUs for less cost, with less hardware devoted to G Buffering, and use this frame gen method in lieu of that.
It all depends on the price to performance ratio.
Here at NZXT, we strive to deliver the best products and services to customers, but sometimes, we don’t fulfill that desire.
Sure, we spent thousands upon thousands of man hours intentionally designing a pc rental service to rent out falsely advertised, subpar pcs, at stupendously high cost to you, our loyal customers…
And sure, we’re only issuing this statement because we got caught, and even while apologizing, we are heavily qualifying our actions and emphasizing things we think we do so well…
But what is most important is that we want you to know:
If you want a Battlefield game that actually revolves around teamwork and communication, I suggest Squad.
The team largely started off modifying BF2 into Project Reality… eventually became their own studio and made their realism / teamwork version of BF2 in UE.
It’s not as milsim as the Arma series, but it’s not as casual as Battlefield.
You know, I think you are right.
I meant to say when the discussion forums were integrated and basically autogenerated for any game, when Steam went from ‘this is our game launcher’ to ‘soon we will sell every pc game that has ever and will ever exist.’
But when it comes to hiring people to moderate things?
Insanity.
Facebook does this by hiring tens or hundreds of thousands of moderators in economically undeveloped nations, managed by a few thousand based in the US or EU.
It’s a horror-show sweatshop of constant exposure to the most horrible content imaginable, which basically drives many employees to suicide or insanity.
There is no AI that can do this.
… Valve could maybe? probably? afford to hire hundreds of thousands of low cost moderators following Facebook’s model.
But I’m pretty sure that they would basically go, oh, we are now legally responsible for what is said on our platform?
Fuck it, nuke 99% of it from orbit, do a bit of redesign, hire a much smaller cadre of moderators, who will manage a vastly stripped down and more cumbersome and more restrictive ability to comment on or discuss things.
… What would be the downside to that?
12 year olds and morons with no impulse control now use discord instead of the steam forums?
People maybe go back to making their own game based community websites/forums?
… Who is going to stop using Steam because the discussion forums disappear?
Because all the default comment posting and viewing settings for all the other ways you can leave a public message now flip to being restrictive and time delayed?
I really do stand by my other statement in this thread: You could erase everything that is not from a human, manually pinned discussion thread and nothing of value would be lost.
Yeah, it has always had a cartoonish, exaggerated art style, almost every character has limbs, hands and feet that are far more voluminous than a more realistic character, color schemes are usually quite bold and highly contrasting, and all weapons and armor also have comically accentuated proportions and excess to them.
It’s a shame Blizzard seems to have just decided that all its IPs now basically follow the WoW style conventions, abandoning the earlier, distinct styles of their other IPs.
It is especially weird because their concept art for much of their stuff… often doesn’t have the weird cartoony proportions… but by the time it ends up as an in-game asset, it does.
Even though EAC and BattlEye have both supported linux for 3 years now, and the devs don’t actually have to do anything as Proton functionally ports the game from Windows to Linux automatically at no cost to them.
… They’re lying.
Maybe for a smaller game studio, I could actually believe they don’t know these things.
But massive AAA studios that have direct business ties to MSFT?
They’re lying.
They’re saying anything they can to slow down linux adoption, because MSFT wants to dominate as a PC gaming OS.
They used to just ignore, play dumb, feign ignorance or perhaps just actually be incompetent… now they’re just lying to our faces.
Sure, Apex. Show us your stats for how many cheaters you caught who were running on Windows vs running on Linux, and show us at least a smidgen of the methodology you used to determine the bare-metal OS of someone running on a VM.
Cheating software running on Linux is more challenging to detect than Windows-based kernel-level tools, and they require an increasingly higher level of attention from the Apex Legends team.
So, for starters, this is not a direct quote (of the interviewed Apex dev), so this is basically just the author’s opinion.
More to the point: Purchasable cheats that currently defeat AC on Apex are far, far more easy to find for Windows machines.
… and they defeat Kernel level AC on Windows all the time.
Also, Apex uses EAC which uh… supports linux, has for 3 years.
EDIT 2: The article states Apex uses BattlEye, not EAC for AC… but all the info I can find on Apex says they use EAC? Maybe there was a recent change?
Either way, BattlEye supports linux/SteamDeck as well, also for 3 years now.
https://store.steampowered.com/news/group/4145017/view/3104663180636096966
https://www.pcgamesn.com/steam-deck/proton-battleye-anti-cheat-support
I mean maybe there is some truth to cheat developers preferring to develop their cheats on linux… many programmers prefer to develop things on linux… but they develop them for Windows users.
Like… I obviously do not support cheating, so I won’t post the links… but a quick web search very, very easily reveals that all the cheats one can purchase… well they work on Win 10/11… no support for linux is indicated.
Granted I am no uberl33th4x0r, but I don’t see any Apex cheats which are easily acquirable which support linux.
…
EDIT: Oh right, it is probably also worth mentioning that after CrowdStrike Y2K’d half of the world’s enterprise Windows machines… through pushing a malformed update… that interfaces directly with the Windows kernel…
… MSFT is now re-evaluating giving kernel level access to 3rd parties, and is looking to create higher level APIs (above the Kernel) that are less likely to expose Windows to massive system stability errors from 3rd parties, and looks to want to at the very least have much more involvement with reviewing any 3rd party code that accesses the kernel:
Maybe these Kernel level AC proponents are a bit worried about their Kernel access on Windows being either much more stringently reviewed, or limited, and are making a fuss about it by scapegoating linux, you know, as a misdirect?
Just a thought.
EDIT 3:
A quote from the article I linked pertaining to BattlEye
BattlEye’s Steam Deck compatibility is great news, but its arrival on the handheld comes with small print. According to the anti-cheat solution’s clarification, developers will have to “opt-in”, suggesting that specific games could forgo compatibility. While it’s hard to think of a compelling reason why a company would want to do this, Valve’s PC competitors could, in theory, use the option to their advantage.
Pff, what an outlandish notion, that giant AAA studios (who all have massive business ties to MSFT) would exert pressure to limit linux marketshare/adoption, what a baseless and silly worry.
=P
I mean, anybody could verify it by spending a few hours each on the respective games… But yes, any empirical data would be nice.
No, thats an anecdotal experience, and all it would tell you is the players’ perception of how prevalent cheating is… not how prevalent it actually is, not how effective an anti cheat system is at blocking cheaters.
But yes, any empirical data would be nice. For example, a study on the amount of blatant hackers found on lobbies joined in comparable ranks
“It would be great if there was any valid data/research to back up or disprove that thing I said earlier, but there isn’t, therefore I am completely justified in saying whatever I want and acting as if it’s indisputable!”
Anyway, this isn’t exactly misinformation to anybody who has played both games at any decent rank. It’s unproved but immediately discernible information.
Again, no.
You made a claim that a particular anti cheat system is better than another.
You keep saying that ‘oh anyone can just tell’.
No.
What you are describing is again, at best, player perceptions of cheating prevalence.
The logic you are using is exactly the same logic that people who believe in astrology or woo woo nonsense medical treatments use to justify their efficacy.
… You have nothing but vibes and anecdotes, which you admit are unproved and have no basis in fact, beyond ‘i think this is obvious’.
You’re just bullshitting.
It is indeed pointless to attempt to get a bullshitter to admit they are bullshitting, when they’ve already backpedalled by moving goal posts, dismissing the importance of the discussion after being called out for making a specific claim which they can’t back up.
You could just admit that ah well shit yeah, I guess I don’t have any actual valid reasoning or data to back up my claim, but nope you keep trucking on, doing everything you can to talk around that point instead of addressing it.
So … your previous assertion that OW2’s AC is superior to VAC was in fact just based on vibes.
Anti cheat developers typically do not like to explain how exactly their systems work, or how effective they actually are.
Their data is proprietary, trade secrets.
There will almost certainly never be a way to actually conduct the empirical study you wish for, save for (ironically) someone hacking into the corporate servers of a bunch of different anti cheat developers to grab their own internal metrics.
But that should be obvious to anyone with basic knowledge of how Anti Cheats work, both technically and as a business.
… None of that matters to you though, you have completely vibes based anecdotes that you confidently state as fact.
Please stop doing that.
When someone has no clue what they’re talking about, but confidently makes a claim about a situation because it feels right, this is typically called misinformation.
… Buuut you can still defeat Kernel level Anti Cheats.
https://m.youtube.com/watch?v=RwzIq04vd0M&t=2s&pp=2AECkAIB
Which means that you still have to end up relying on reviewing a player’s performance and actions, as recorded by the game servers, via statistical algorithms or machine learning to detect impossibly abnormal activity.
… Which is what VAC has been doing, without kernel level, for over a decade.
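To be concrete about what ‘statistically detecting impossibly abnormal activity’ means, here’s a toy sketch of the general idea (the thresholds and field names are completely made up, and this is not a claim about how VAC actually works):

```python
# Toy sketch of server-side statistical cheat detection. The thresholds and
# field names are invented for illustration; real systems use far more signals
# and proper statistical/ML models, and this has nothing to do with VAC's
# actual internals.

from dataclasses import dataclass


@dataclass
class MatchStats:
    headshot_ratio: float       # fraction of kills that were headshots
    avg_reaction_ms: float      # avg time from target visible to trigger pull
    max_snap_deg_per_ms: float  # fastest aim snap the server recorded


def looks_impossible(stats: MatchStats) -> bool:
    """Flag performance far outside plausible human capability for review."""
    return (
        stats.headshot_ratio > 0.9          # near-perfect headshots all match
        or stats.avg_reaction_ms < 100      # faster than human reaction time
        or stats.max_snap_deg_per_ms > 5    # effectively instant 180-degree flicks
    )


suspect = MatchStats(headshot_ratio=0.95, avg_reaction_ms=60, max_snap_deg_per_ms=9)
print(looks_impossible(suspect))  # True -> queue the account for further review
```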
All that is gained from pushing AC to the kernel level is you ruin the privacy and system stability of everyone using it.
You don’t actually stop cheating.
It is not possible to have a 100% foolproof anti cheat system.
There will always be new, cleverer exploitation methods, just as there are with literally all other kinds of computer software, which all have new exploits that are detected and triaged basically every day.
But you do have a choice between using an anti cheat method that is insanely invasive and potentially dangerous to all your users, and one that is not.
Oh, kernel-level AC was initially classed as insanely intrusive malware by basically anyone with a modicum of actual technical knowledge about computers when it was introduced about a decade ago.
Unfortunately, a whole lot of corpo shills ran propaganda explaining how actually it’s fine, don’t worry, it’s actually the best way to stop cheaters!
Then the vast, vast majority of idiot gamers believed that, or threw their hands up and went oh well, it’s the new norm, trying to fight it is futile, and actually if you are against this that means you are some kind of paranoid privacy freak who hates other people having fun.
Or, even better, when you let a whole bunch of devs have access to the kernel…
… sometimes they just accidentally fuck up and push a bad update, unintentionally.
This is how CrowdStrike managed to Y2K an absurd number of enterprise computers fairly recently.
It’s also why it’s… you know, generally bad practice to have your kernel just open to fucking whoever instead of having it be locked down and rigorously tested.
Funnily enough, MSFT now appears to be shifting toward offering much less direct access to its kernel to 3rd party software devs.
VAC is not kernel level, because surprise you don’t actually need kernel level to do anti cheat well.
VAC games would just get the standard AC message banner, not the scary yellow kernel level warning banner.
… I am pretty sure VAC games have indicated on their store page that they use VAC for well over a decade.
However, it’s only being forced for kernel-level anti-cheat. If it’s only client-side or server-side, it’s optional, but Valve say “we generally think that any game that makes use of anti-cheat technology would benefit from letting players know”.
I will always love Valve for their ability to use corpospeak against corpos.
Your game has anti-cheat?
Wonderful!
I’m sure that always only results in an improved experience for all gamers, lets let them all know!
=D
Well, she was pretty much just openly saying it.
Her YT channel was a mix of fairly long, sort of livestream clips of development progress and notes… and then just clips of her livestreams where she wasn’t really developing and was just responding to chats, going off about political topics, ‘white genocide’ this, ‘every democrat is a f@g communist’ that, a whole bunch of other dogwhistles or overt references to memes and slogans of various extremist right wing militia type groups…
Also, for clarity, I am not talking about OP’s link, I’m talking about some very obscure other person I found months ago. Much, much lower view/sub count, not even monetized. Also, unlike OP’s link, this person actually wrote code instead of just having opinions about things.
There is a difference between being on the conservative side of a culture war and being balls deep into extremist right wing ideology.
At one point I was looking for some help with implementing something akin to UE’s Metahumans but in Godot.
Somehow I stumbled across this one, single person on youtube who had a half decent implementation of what I was looking for…
… and after watching some other videos on her channel… yep, she’s an open NeoNazi.
That was enough internet for me, that day.
I hate it I hate it I hate it.
This AI hallucinated frame crap is bullshit.
Their own demos show things like: the game is running at 30ish FPS, but we are hallucinating that up to 240!
Ok…great.
I will give you that that is wonderful for games that do not really depend on split second timing / hit detection and/or just have a pause function as part of normal gameplay.
Strategy games, 4x, city/colony builders, old school turn based RPGs… slow paced third person or first person games…
Sure, it’s a genuine benefit in these kinds of games.
But anything that does involve split second timing?
Shooters? ARPGs? Fighting games?
Are these just… all going to be designed around the idea that actually your input just has a delay?
That you’ll now be unable to figure out if you missed a shot, or got shot by a guy behind a wall… because of network lag, or because your own client’s rendering just lied to you?
…
I am all onboard with intelligent upscaling of frames.
If you can render natively at 5 or 10 or 15% of the resolution of the frame you actually see, and then upscale those frames to end up with an actually higher true FPS?
Awesome.
But not predictive frame gen.