Since Ubisoft introduced us to the term AAAA game with Skull and Bones, my attempt at giving an actual, solid definition to differentiate a AAA game from a AAAA game has had this as a fundamental aspect:
The game gets stuck in development hell, analogous to a movie that keeps needing reshoots and rewrites, and ends up requiring so much money thrown at chasing the sunk cost fallacy that it negatively impacts not only its own development, but also the development of other games by the same studio/publisher, and/or the overall financial solvency / employment headcount of the overarching parent company.
Basically, what a AAAA game actually is, is analogous to a bank or large corporation that is Too Big To Fail… but video game companies largely are not going to be bailed out by the government.
So, by that metric, we’ve got:
Skull and Bones
Concord
Suicide Squad
If you go back further in gaming history, you could probably find more games that fit the typical AAA criteria (large to huge numbers of actual developers, aiming at a high level of graphical fidelity, financed by a large corporate publisher that controls a plethora of studios, all measured relative to the timeframe of development)…
… and then also hit the AAAA criteria: the development drags on forever, a sunk cost fallacy mindset sets in amongst management, management gets high on its own supply, and the game draws in so much manpower and money that it endangers entire other projects and teams not directly connected to this particular game’s development if this Too Big To Fail game does actually fail.
As you say, it goes back even further than SBMM, to the large-scale abandonment of the dedicated server paradigm in favor of automatic matchmaking.
Nearly no online games even have actual server browsers now.
Back in the late ’90s through the early-to-mid 2010s…
Nearly every online game ran on dedicated servers, or at least let you throw open a temporary server with your own custom control over maps and game modes.
Many dedicated servers were run by a person or community… and this enabled communities to form around them and lasting relationships to be made. Hell, probably most mods or clans for most of those kinds of games arose from that, and a lot of those went on to become massively more expansive, start their own game studios, and put out their own games.
Now that’s almost all gone.
You… used to be able to get onto a Battlefield server and know the regulars, like a bar.
That promotes at least a baseline of basic manners and etiquette.
Now all you can do is look at a general conception of an entire game’s community, because the player has no agency to actually choose to associate or not associate with certain people or groups.
This is basically completely unrelated to CoD… but…
Halo Infinite released some Juneteenth (a holiday celebrating the end of slavery in the US) themed cosmetics a year or two back.
The color scheme for the armor was named ‘Bonobo’ somewhere in the filename or metadata… which is an ape.
Oops.
https://kotaku.com/halo-infinite-juneteenth-bonobo-freedom-emblem-nameplat-1849065291
… MegaCorps are not your friend, they’re just doing performative marketing.
No, come on man…!
What we need…
Is another… open world survival pvp crafting game.
Preferably with zombies.
And season passes.
And 83939583 in game cosmetics.
…
About a month ago, as a joke, I said that the most frustrating, evil game I could imagine would basically be a game that is nothing but shitty NPC escort quests, through an active warzone with other players in PvP, where the NPC is as fragile, annoying, whiny and pretentious as possible, and moves slower than you run but faster than you walk…
…and every time they get wounded, or just scared, or drop something, or trip, or see a butterfly, you go into a Bethesda death stare with them where you have to get through a 10+ step dialogue tree that is different every time and only has a single success state, all others resulting in you having to retry…
… and it’s all still an active real-time combat zone while you are locked into this, with you and the idiot NPC still vulnerable to other players.
The state of video gaming is such that within minutes, someone said this would actually be a game they’d want to play, that it’s an actually novel idea, that it sounds fun.
When I read that, my jaw dropped and I just sat there in a dazed stupor.
EDIT: Fuck, all you’d have to do is call it Puppy Girl Escort Quest, and make the person you’re escorting be varying kinds of kawaii waifus, surprise, the game is actually a harem anime.
Like did my original comment give you the impression that I didn’t know people rule 34 every goddamn thing possible?
In the context of the reply you were replying to, basically yes.
I read what you said as hyperbolic, but generally dismissive of the idea that characters in media have sex appeal and/or vicarious romance appeal, often to such an extent that it drives people to make and share their own erotic spin-offs.
I think it is silly not to realize how much the popularity of a show these days can be reinforced and strengthened by appealing to the fanfic crowd, and not to realize that many network execs have figured out they have a better shot at cultivating a… well, cult-like fanbase if they make their show in a way that appeals to that kind of crowd.
Maybe I’m really old but you suggesting that anime is the reason we stuff sex into shit is just so funny to me. I once heard someone say “every generation is the first generation to think they invented sex.”
I didn’t suggest anime was the reason we stuff sex into more Western media aimed at broader audiences.
I said that doing that, stuffing overtly sexual and will-they-won’t-they, romantic tension type material into media, is called ‘fan-service’ within the realm of anime, and that an analogous or similar thing is happening outside of anime.
The comment I replied to was like ‘why romance if porn exists,’ which was also funny
I think the unstated thing that I could have said, but did not, because I assumed it was common knowledge, would basically be:
For quite a long time, men have been the main consumers of video and still image pornography.
And women have been the primary consumers of erotic novels, in the US at least.
The American media approach to attempting to make the viewer feel aroused thus differs based on the sex they are appealing to:
Straight men, as a market demo, generally go for visual sex appeal and sex acts, overt or covert lewd gestures, tight fitting or revealing clothes etc.
Straight women, as a market demo, generally go for on screen romance, for the build up to and moments of dramatic sexual tension, will they won’t they scenarios, the context around a relationship that builds up to the actual sex, etc.
In general, for men, sex appeal is direct, physical, literal, and for women, it is more cerebral, more about mental framing and constructing scenarios that feature, or could potentially feature wish fulfilment, being desired by a person with preferred character traits, etc.
These are of course not absolute truths with no exceptions, there are many exceptions, and yes I am aware that creating the media environment in this way reinforces the norms themselves.
Nonetheless, this is still quite true in general, in aggregate, when you run the numbers.
So… that’s why it makes sense to conflate sex and romance from the perspective of a media exec designing a show.
Sex sells, you just package it differently if you’re appealing to men or women.
To most media execs:
Men want the sex.
Women want the story about why the sex is happening.
… Have you not heard of Tumblr? DeviantArt?
Gooning or swooning to shipping fanfiction and fan made eroge / nsfw art has been a huge component of many fandoms, of all kinds, for at least two decades.
The reaction from a whole lot of more modern media is to just capture a lot more of that in the official media itself.
In anime in particular, this is called ‘fan-service.’
Porn is absolutely a giant competitor in the ‘digital media consumption / attention’ market, and it completely makes sense that, for people who believe in making every digital media product with as broad an appeal as possible, there’s been a trend toward an analog of fan-service outside of anime.
I mean…
There absolutely still are widespread, massive shortages.
https://www.nowinstock.net/computers/videocards/nvidia/rtx5090/
https://www.nowinstock.net/computers/videocards/nvidia/rtx5080/
https://www.nowinstock.net/computers/videocards/nvidia/rtx5070ti/
https://www.nowinstock.net/computers/videocards/nvidia/rtx5070/
As of the time I post this, all models of all RTX 50 series cards are completely out of stock at every large online retailer, other than resellers/scalpers on Ebay.
BestBuy, B&H, NewEgg, Amazon… literally 0 available stock.
Yes, they did also raise their prices, but there is still a massive stock shortage.
99% sure MicroCenter is also effectively out of stock for any online purchaser, as they basically only ship if you’re within a one-to-two-hour drive of one of their physical store locations, and they don’t actually have very many locations…
Sure!
The title isn’t clickbait though, and you don’t actually even have to know the term.
‘branded a paper launch’
The Verge is saying others have called it a paper launch, which is completely correct and not misleading, regardless of what ‘paper launch’ means.
As to whether or not it actually is a paper launch:
It certainly seems like it is.
If the people whose job it is to review tech hardware, who often actually have some level of direct contacts and connections with manufacturers… if they can’t even get their hands on these, it’s safe to say Nvidia shipped an astoundingly small amount of actual hardware.
Just go on YouTube right now and you can find a plethora of videos describing how almost no one could actually get one, and that supply evaporated in minutes, possibly literally in less than a minute.
No online store currently has any stock, whatsoever, of 5000 series cards.
Also, there was chatter and rumors before the launch that the secondary/partner manufacturers had had some kind of miscommunication with Nvidia and did not manufacture enough cards.
It seems even worse than the original PS5 launch.
PS: More fun terminology BS:
A lot of people use the term AiB to refer to a secondary/partner manufacturer of a GPU, as an adjective or prefix, as in:
AiB Card, AiB 5090, AiB Board…
They use this to distinguish a GPU actually made directly by Nvidia (or AMD), as compared to the same model of GPU made by a secondary/partner manufacturer, with slight tweaks to clock speeds and their own housing and fan/cooler style.
That’s not what AiB means.
AiB means ‘add-in board’.
It’s a noun, not an adjective, and it means basically any graphics card, sound card, capture card, network card, anything that is its own board that plugs into a motherboard.
Nvidia / AMD directly manufactured reference GPUs … are AiBs.
… I am probably fighting a losing linguistic battle on this one, as improper usage of AiB is now quite widespread, much like how ‘liminal space’ actually means ‘a space that is designed to be transited through, not inhabited for long periods of time’, but the common usage is now basically that anything creepy to anyone for any reason is a ‘liminal space’.
Given that Trump has now reiterated multiple times, at least once after he personally met with Nvidia CEO Huang, that he will indeed be going forward with tariffs on Taiwan…
If you’re in the US, I wouldn’t expect to be able to get any 5000 series Nvidia GPU any time soon, at least not for under $3000+.
EDIT:
AMD cards will also be affected by the Taiwan tariffs, but uh, they tend to price things a bit more affordably, and provide more actual stock volume… but as of right now, all we know is that the 9070XT was planned at some price below $899, with most people expecting $450 to $500… but there’s still no official price or date, and now the tariffs are a thing… so… maybe $899 actually is a realistic price estimate for the 9070XT now?
Who knows! All the gamers can now thank Trump for making all PC / Console components and likely video games themselves more expensive.
EDIT 2: Also, you can run professional production, CUDA style workloads on AMD cards:
https://repairspotter.com/computers/what-is-the-amd-radeon-equivalent-to-nvidias-cuda
YMMV.
Also also, the RDNA 4 architecture for AMD’s 9000 series cards seems to be rebalancing toward more raytracing performance, but that’s based on leaks so far.
To add a bit more context, it’s been used in the tech industry for at least 20 years, if not more.
There doesn’t seem to be too much actual proper etymological documentation on the first usage or history… as you say, it most likely derives from ordering something, and not getting it, and being left with only a paper invoice, from back when such things were mailed or faxed…
…it may have derived from the old layaway process retail stores used to do: you order and pay for something upfront, they hand you a voucher, and when they get the product, they hold it in inventory for you, as opposed to putting it on the sales floor for general purchase by anyone, and then you exchange the voucher for the item.
But that’s just a guess.
… So… Dragon Age is dead.
"But DA isn’t dead. There’s fic. There’s art. There’s the connections we made through the games and because of the games. Technically EA/BioWare owns the IP but you can’t own an idea, no matter how much they want to. DA isn’t dead because it’s yours now."
In a subsequent post, Chee wrote, "So someone just reposted my thing saying they’ll write a giant AU [alternate universe] and that’s what I’m talking about.
So, Dragon Age is not dead in the sense that you can still occasionally commission DeviantArtists for DA art, and you can still freely write DA fanfiction and not make any money off of it.
… And someone can make yet another medieval fantasy world, maybe actually turn it into an actual profitable and thus widespread IP … assuming you’ve quite considerably changed the basic concepts and don’t use any actually named characters or groups.
Wyvern Era.
Drake Epoch.
Dragon Age is dead, add yet another notch to EA’s IP/Studio murder count.
They’ll probably manage to finally kill Battlefield / DICE this or next year.
EDIT: I somehow missed the title.
Yes, you absolutely can own an idea; that’s what patents, trademarks and copyrights literally are.
This is actually delusional levels of cope.
You can talk all you want about French revolutionaries and Camus, but at the end of the day, you didn’t revolt against anything, you signed away your creative output to a soulless corporation for cash.
You could have found or founded a worker cooperative non-profit, you could have copylefted your world and story and characters, you could have MIT or GPL licensed the game code and still sold the finished game for money, but nope, you did none of that.
That’s an insanely bad analogy.
For starters, children are legally barred from entering casinos, unlike many, many video games which include MTX and dark patterns, such as layered currency obfuscation, pricing schemes that push you toward buying the next tier of in-game currency and always leave you with a bit left over, constantly asking the user if they want to buy something, designing the UI such that you always see what extra you’d be getting if you were levelling up the premium battle pass instead of the default one, etc. etc.… and which are either directly targeted at children, or at least allow children to play them.
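To make the ‘leftover currency’ pattern concrete, here’s a tiny illustration with completely made-up numbers (the pack sizes and price are hypothetical, not pulled from any specific game):

```python
# Hypothetical numbers illustrating the "leftover currency" dark pattern:
# coin packs are sized so they never line up cleanly with item prices.
packs = [500, 1100, 2400]   # purchasable coin bundles (made up)
item_price = 950            # the cosmetic you actually want (made up)

# The smallest single pack that covers the item...
pack = min(p for p in packs if p >= item_price)   # -> 1100
leftover = pack - item_price                      # -> 150 coins left over

# ...leaves you with coins that are too few to buy anything on their own,
# and just enough to nudge you toward the next top-up.
print(pack, leftover)
```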
Why are children banned from casinos?
Because their brains aren’t developed enough to properly understand risk vs reward.
Secondly, the social dynamics at play in both a casino and games with even only mtx cosmetics… both of these settings massively peer pressure the customer/consumer/mark into engaging with the gambling/mtx spending mechanics.
It’s not just that whales are a thing that somehow exist prior to and outside of mtx games or casinos, it’s that the entire system is designed to pressure everyone who interacts with it, in any capacity, into breaking down and spending more and more.
These systems are designed to produce more whales, more dolphins, to push everyone toward a higher class of spending irresponsibility regardless of their propensity upon entering.
Empirically, this works. Insanely well.
Saying ‘it’s all the whales’ fault’ is a cop-out.
It’s the fault of game execs and studios that hire psychologists to figure out how to design the most exploitative games possible, and lawyers to figure out how to legally justify this.
It’s the fault of regulators being asleep at the wheel and allowing nearly the entire industry to get away with this shady bullshit for so long, usually with only slaps on the wrist.
You’re blaming the drug addicts in a society that’s normalized addiction.
I’m blaming the drug manufacturers, the pushers, and the cops that look the other way because they or their boss get a kickback.
… Is this a joke?
The vast majority of the gaming market, if you go by generated sales revenue, has been mobile gacha games with lootbox / mtx / in game currency transactions for almost a decade now.
This is why basically every major AAA ‘live service’ game is primarily constructed and rushed out the door as a minimum viable product in terms of being a good game, but very robust and well developed in terms of the game’s primary purpose, which is to serve as a platform, a monopsonist (EDIT: monopolist) market for in game items, cosmetics, battle passes, etc.
It’s been like this for almost a decade.
Not quite KSP whole planet scale, but uh, Kenshi.
It’s a pretty damn big world; pretty sure it is significantly larger than Skyrim.
You’ve got world speed controls, rpg style mechanics and progression, and you can have multiple members of your party, and you can build your entire own town if you want to.
The game is filled with many roving factions, who all have a sort of reputation dynamic with all other factions, as well as yourself/party.
The game is full of many different storylines, many of which conflict with each other and cannot all be done, there is no such thing as a plot-armored, impossible-to-kill NPC, and there are tons of unique NPCs you can meet and have many kinds of interactions with.
If you want to take on a huge faction, you can, but you’re probably going to need to literally raise your own army to do so.
Main downside is the control scheme is fairly awkward / old school… it’s basically like an MMO from the early ’00s, but single player; click to tell your peeps where to go sort of thing, with awkward camera controls by modern standards for an ARPG.
You don’t directly control the combat of your character like in Skyrim; the game basically RNG-rolls based on your and your opponent’s stats to determine who uses what kind of attack or block or dodge… but you can set different combat stances, basically.
… So it’s not an ARPG in the sense of Skyrim or AssCreed or Dark Souls… but it is an ARPG in a looser sense: it’s an RPG-mechanics style game and world, without rigid turn-based combat, which all revolves around action.
But the scale you are looking for is there. If you don’t set the time to fast forward, it can easily take 15 minutes to an hour or more to walk between settlements or major landmarks, depending on what part of the map you’re in.
Nothing is really obvious from the onset of the game in terms of what you are supposed to do, beyond not getting murdered, and eating, drinking and sleeping to stay alive.
It’s very much a sandbox approach, but there’s tons and tons of stuff to do if you are capable of directing yourself.
Also, lots of mods that add more content, immersion, and deepen or alter gameplay mechanics.
Kenshi 2 is in the works with upgraded engine and graphics… ETA totally unknown.
The game looks absolutely amazing on an AMD GPU, without realtime raytracing.
Several years ago now, I managed to get it to 4k90fps on a 6900XT, with basically all settings on ultra/psycho via some ini tweaks, custom FSR values, just no ray tracing, using ‘old school’ cube maps and light sources and what not.
And that was all running in Proton, on linux, as well.
Nvidia absolutely barged in and said hey guys, guess what, we completely obliterated the entire history of how lighting works in game engines, here’s our new extremely pretty but extremely, astoundingly inefficient lighting engine, now go rewrite your entire game and the custom engine that’s been cooking for 10 years without any notion of this new lighting paradigm to make it work.
EDIT: I should add that that is 90 ‘real’ fps, no frame gen, just early FSR.
I was just complaining about this in a thread about the new Nvidia GPU line.
We’re gonna get 30 real FPS with 240 from frame gen… so is the future of FPS games, or anything that demands split-second timing, just… being completely unable to tell the difference between network lag and your own GPU hallucinating that a guy you just shot at wasn’t actually there?
… Well apparently the answer to that is not only yes, but also, all players will have built in aimbots and automacro scripts, I mean uh, predictive input.
We are literally just going backward toward advanced aim assist.
It used to be a crutch for the difficulty of fine controls on a controller; now it’s a crutch for ‘we have no idea how to actually develop a game that runs at 4K60 and can be played with affordable hardware.’
I look forward to Call Of Duty 28 being marketed as an idle autobattler, presuming you pay for the ultra elite premium tier battlepass that grants you the ability to unlock ultra elite premium predictive input.
Technically not… exactly a roguelike, but:
Deus Ex w/ the Randomizer mod and permadeath.
You can set it up fairly easily with the Steam version of DX and the Revision mod, which at this point is basically all the most popular DX mods, reconfigured to play nice with each other and be as mutually compatible as possible.
Someone already mentioned Caves of Qud, that one is amazing, Noita is really good, also StarSector is functionally a roguelike but in space.
Also, No Man’s Sky is basically a roguelike if you turn on permadeath and kick the difficulty up.
I hate it I hate it I hate it.
This AI hallucinated frame crap is bullshit.
Their own demos show things like ‘the game is running at 30ish FPS, but we are hallucinating that up to 240!’
Ok…great.
I will give you that that is wonderful for games that do not really depend on split second timing / hit detection and/or just have a pause function as part of normal gameplay.
Strategy games, 4x, city/colony builders, old school turn based RPGs… slow paced third person or first person games…
Sure, its a genuine benefit in these kinds of games.
But anything that does involve split second timing?
Shooters? ARPGs? Fighting games?
Are these just… all going to be designed around the idea that actually your input just has a delay?
That you’ll now be unable to figure out if you missed a shot or got shot from a guy behind a wall… due to network lag, or your own client rendering just lied to you?
…
I am all onboard with intelligent upscaling of frames.
If you can render natively at 5 or 10 or 15 % of the actual frame you see, and then upscale those frames and result in an actually higher true FPS?
Awesome.
But not predictive frame gen.
Here’s the quote, for people allergic to reading the update in the article.
Update: Nvidia sent us a statement: “We are aware of a reported performance issue related to Game Filters and are actively looking into it. You can turn off Game Filters from the NVIDIA App Settings > Features > Overlay > Game Filters and Photo Mode, and then relaunch your game.”
We have tested this and confirmed that disabling the Game Filters and Photo Mode does indeed work. The problem appears to stem from the filters causing a performance loss, even when they’re not being actively used. (With GeForce Experience, if you didn’t have any game filters enabled, it didn’t affect performance.) So, if you’re only after the video capture features or game optimizations offered by the Nvidia App, you can get ‘normal’ performance by disabling the filters and photo modes.
So, TomsHW (is at least claiming that they) did indeed test this, and found that it’s the filters and photo mode causing the performance hit.
Still a pretty stupid problem to have, considering the old filters did not cause this problem, but at least there’s a workaround.
… I’m curious if this new settings app even exists for, or has been tested on, Linux.
Neither of the two articles are well sourced.
But you acted like yours was credible, until I presented another one, whereupon you admitted they are both equally valid.
That’s assuming that the random article you found is correct, the veracity of which I can’t verify any more than the interesting engineering article…
That is to say, you cannot verify either of these articles at all, ie, they are both of dubious legitimacy.
You accused someone of being racist based on an article you admit you cannot verify, and posted a bunch of related research papers that indicate, sure, they’re trying to develop the thing your article claimed they did… but that doesn’t indicate that they actually developed it.
…
I can link you a patent for a triangular shaped aircraft, listed as filed by a US Navy Scientist that claims to outline how to create an electromagnetic, gravity negating field around the craft.
That would not be evidence that the US Navy officially announced that they basically built a UFO, that it works, and there’s a video of it, all officially documented and released.
But to you, it would be, if China had done all those things.
…
I am not saying China certainly has or has not developed a hypersonic passenger liner.
I am saying your source for this claim is dubious.
I am saying that you believe(d?) it credulously, without any skepticism, got very hostile with people who doubted its claim less tactfully than I did, and now you admit you got hostile based on a claim that you now admit is dubious, and shifted the burden of proof from the article making the claim to the skeptic questioning it.
Again, this is the logic of a fanatic.
If we just pick which dubiously sourced claims we believe based on vibes, truth stops existing.
So, you just assumed an unsourced, unverified story is true because you have a bias in favor of China, put the burden of proof onto the other person to disprove it, and are completely fine with calling the other person a ‘sad racist’, despite now admitting that the veracity of the claim they are skeptical of is in fact not well established.
This is the argument/personality style of a fanatic, a religious fundamentalist, a QAnon adherent, an Elon Musk simp.
This is how we got ‘the Trump assassination attempt was staged!’
Please stop posting trash-tier misinformation as ‘technology news’, please stop jumping to ‘everyone who disagrees with me is racist’; this level of unjustified vitriol only makes you appear manic.
Ok so 24+ hours later and I now see a few different websites I’ve never heard of before that basically have the same article as this:
https://scienceinfo.net/chinese-hypersonic-aircraft-prototype-reaches-mach-6-speed.html
Still no actual link to the apparently original source somewhere on some social media site.
Now what’s being said is that this was a flight test that actually occurred 3 years ago, and was classified until now.
And they do provide an image, and credit it to CAS (without an actual link; I still can’t find this on CAS’ English site, but again, maybe they are still writing a proper English post?)
This is a test article that doesn’t appear to have any intakes for a scramjet. I think I can make out two small rocket bells inside the thing, but the image quality is very low.
It’s just a test article, launched by a rocket, that I would guesstimate to have a wingspan of about… 4 meters, ish?
This new article also mentions that Cui, the team lead, did not mention anything about the current status of the hypersonic passenger jet which this was a test article for.
So… this test article got up to mach 6.5, 3 years ago.
Absolutely nothing about whether or not a successful test flight of a passenger jet sized craft achieved hypersonic speeds with an air breathing turbo ramjet / scram jet or something like that.
Completely different from the original report.
… This is why I wanted an actual source.
If this very poorly sourced article from a random, clickbait-style website is more accurate than the OP article (another poorly sourced article from another clickbait-style website), that would mean SCMP, and everyone in this thread saying China has built an air-breathing hypersonic jetliner, is wrong, and everyone saying that this is basically comparable to the X-15 is correct.
(The differences being that the X-15 was carried up to 45 thousand feet by a B-52 instead of being launched by a rocket, and the X-15 was manned, while this test article is presumably unmanned.)
I mean sure, that’s a related research paper, but that isn’t the same thing as an official press announcement or video saying ‘Hey, we actually built this thing, it works, take a look.’
I know that the CAS has specifically been researching/developing a hypersonic, passenger-liner-sized craft for around a decade… and the US has been doing the same with the SR-72, both attempting to develop… something like a turbo-ramjet that transitions to a scramjet at high speeds/altitudes.
But a link to a research paper from 6 years ago is not actually a primary source to what your original link claims, but does not actually source.
I am willing to believe that this may have actually been developed…
But a better source sure would be neat.
Interesting/Wonderful Engineering both claim this was posted on ‘Social Media’ by the Chinese Academy of Sciences… with no link.
South China Morning Post also claims a video posted by CAS on social media… with no link, no video.
The english version of the CAS website is updated every couple of days, but this isn’t on it.
Granted, they could be taking their time doing a proper translation.
Does anybody know where to see this video?
… I mean, in an academic sense, if you possess the ability to implement the method, sure you can make your own code and do this yourself on whatever hardware you want, train your own models, etc.
But from a practical standpoint of an average computer hardware user, no, I don’t think you can just use this method on any hardware you want with ease; you’ll be reliant on official drivers which just do not support / are not officially released for a whole ton of hardware.
Not many average users are going to have the time or skillset required to write their own implementations, and train and tweak the AI models for every different game at every different resolution for whichever GPUs / NPUs etc. the way massive corporations do.
It’ll be a ready-to-go feature of various GPUs and NPUs and SoCs and whatever, designed and manufactured by Intel, reliant on drivers released by Intel, unless a giant Proton-style open source project happens, with tens or hundreds or thousands of people dedicating themselves to making this method work on whatever hardware.
…
I think at one point someone tried to do something like this, figuring out how to hackily implement DLSS on AMD GPUs, but this seems to require compiling your own DLLs, is based off a random person’s implementation of DLSS, and is likely quite buggy and inefficient compared to an actual Nvidia GPU with official drivers.
https://github.com/PotatoOfDoom/DLSS/tree/981fff8e86274ab1519ecb4c01d0540566f8a70e
https://github.com/PotatoOfDoom/CyberFSR2
https://docs.google.com/spreadsheets/d/1XyIoSqo6JQxrpdS9l5l_nZUPvFo1kUaU_Uc2DzsFlQw/htmlview
Yeah, looks like a whole bunch of compatibility issues and complex operations for a ‘i just want play game’ end user to figure out.
…
Also hey! Your last bit there about the second patent I listed seems to describe how they’re going to do the real-time moderation between which frames are fully pipeline-rendered and which ones are extrapolated: use the described GPU kernel operation to estimate pipeline frame rendering times against a target FPS/refresh rate, and extrapolate whenever the FPS won’t hit the target.
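In rough pseudocode, my reading of that heuristic is something like the following (entirely my paraphrase of the patent language; the names and the 120 FPS target are made up):

```python
# My paraphrase of the render-vs-extrapolate idea, not Intel's actual code.
TARGET_FPS = 120
FRAME_BUDGET_MS = 1000.0 / TARGET_FPS   # ~8.33 ms per frame at a 120 Hz target

def choose_next_frame(estimated_render_ms: float, extrapolation_ms: float) -> str:
    """Decide whether the next frame is fully pipeline-rendered or extrapolated."""
    if estimated_render_ms <= FRAME_BUDGET_MS:
        return "render"       # the full pipeline can hit the target on its own
    if extrapolation_ms <= FRAME_BUDGET_MS:
        return "extrapolate"  # it can't, but the frame-gen model can
    return "render"           # neither fits the budget; render anyway and miss the target
```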
… Which would mean that the practical upshot for an average end user is that if they’re not using a GPU architecture designed with this method in mind, the method isn’t going to work very well, which means this is not some kind of magic ‘holy grail’, universal software upgrade for all old hardware (I know you haven’t said this, but others in this thread have speculated at this)…
And that means the average end user is still in a state of comparing cost vs performance/features of an increasingly architecture divergent selection of future GPUs/NPUs/SoCs/APUs.
And also the overhead of doing the calculation of predicting pipeline render times vs extrapolated frame render times is not being figured in with this paper, meaning that the article based on the paper is at least to some extent overstating this method’s practical quickness to the general public.
…
I think the disconnect we are having here is that I am coming at this from a ‘how does this actually impact your average gamer’ standpoint, and you are coming at it from much more academic standpoint, inclusive of all the things that are technically correct and possible, whereas I am focusing on how that universe of technically possible things is likely to condense into a practical reality for the vast majority of non experts.
Maybe ‘proprietary’ was not exactly the technically correct term to use.
What is a single word that means ‘this method is a feature that is likely to only be officially, out of the box supported and used by specific Intel GPUs/NPUs etc until Nvidia and/or AMD decide to officially support it out of the box as well, and/or a comprehensive open source team dedicates themselves to maintaining easy to install drivers that add the same functionality to non officially supported hardware’?
Either way, I do enjoy this discussion, and acknowledge that you seem to be more knowledgeable in the technicalities than myself.
The point of this method is that it takes less computations than going through the whole rendering pipeline, so it will always be able to render a frame faster than performing all the calculations unless we’re at extremes cases like very low resolution, very high fps, very slow GPU.
I feel this is a bit of an overstatement, otherwise you’d only render the first frame of a game level and then just use this method to extrapolate every single subsequent frame.
Realistically, the model has to return to actually fully pipeline-rendered frames from time to time to re-reference itself, otherwise you’d quickly end up with a lot of hallucination/artefacts, kind of an AI version of a shitty video codec that morphs into nonsense when it’s only generating partial new frames based on detected change from the previous frame.
It’s not clear at all, at least to me, from the paper alone, the average frequency, or under what conditions, reference frames are referred back to… after watching the video as well, it seems they are running 24-second, 30 FPS scenes, and functionally doubling this to 60 FPS, by referring to some number of history frames to extrapolate half of the frames in the completed videos.
So, that would be a 1:1 ratio of extrapolated frame to reference frame.
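Just to spell out the arithmetic behind that ratio (this is my reading of their video setup, not their exact pipeline):

```python
# Sanity check of the 1:1 extrapolated:reference ratio as I understand their demo videos.
clip_seconds = 24
captured_fps = 30
reference_frames    = clip_seconds * captured_fps   # 720 fully rendered frames
extrapolated_frames = reference_frames              # one extrapolated frame per reference frame
output_fps = (reference_frames + extrapolated_frames) / clip_seconds
print(output_fps)   # 60.0 -- the FPS doubling comes straight from the 1:1 ratio
```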
This doesn’t appear to actually be working in a kind of real time, moderated tandem between real time pipeline rendering and frame extrapolation.
It seems to just be running already captured videos as input, and then rendering double FPS videos as output.
…But I could be wrong about that?
I would love it if I missed this in the paper and you could point out to me where they describe in detail how they balance the ratio of, or conditions in which a reference frame is actually referred to… all I’m seeing is basically ‘we look at the history buffer.’
Although you did mention these are only rough estimates, it is worth saying that these numbers are only relevant to this specific test and this specific GPU (RTX 4070 TI).
That’s a good point, I missed that, and it’s worth mentioning they ran this on a 4070 Ti.
I doubt you will ever run into a situation where you can go through the whole rendering pipeline before this model finishes running, except for the cases I listed above.
Unfortunately they don’t actually list any baseline for frametimes generated through the normal rendering pipeline; it would have been nice to see that as a sort of ‘control’ column, where all the scores for the various ‘visual difference/error from standard fully rendered frames’ metrics are 0 or 100 or whatever, so we could compare some numbers on how much quality you lose for faster frames, at least on a 4070 Ti.
If you control for a single given GPU then sure, other than edge cases, this method will almost always result in greater FPS for a slight degradation in quality…
…but there’s almost no way this method is not proprietary, and thus your choice will be between price-comparing GPUs with their differing rendering capabilities, not something like ‘do I turn MSAA to 4x or 16x,’ available on basically any GPU.
More on that below.
This can run on whatever you want that can do math (CPU, NPU, GPU), they simply chose a GPU. Plus it is widely known that CPUs are not as good as GPUs at running models, so it would be useless to run this on a CPU.
Yes, this is why I said this is GPU tech; I did not figure that it needed to be stated that, oh, well, ok, yes, technically you can run it locally on a CPU or NPU or APU, but it’s only going to actually run well on something resembling a GPU.
I was aiming at the practical upshot for the average computer user, not a comprehensive breakdown for hardware/software developers and extreme enthusiasts.
Where did you get this information? This is an academic paper in the public domain. You are not only allowed, but encouraged to reproduce and iterate on the method that is described in the paper. Also, the experiment didn’t even use Intel hardware, it was NVIDIA GPU and AMD CPU.
To be fair, when I wrote it originally, I used ‘apparently’ as a qualifier, indicating lack of 100% certainty.
But uh, why did I assume this?
Because most of the names on the paper list the company they are employed by, there is no freely available source code, and just generally corporate funded research is always made proprietary unless explicitly indicated otherwise.
Much research done by Universities also ends up proprietary as well.
This paper only describes the actual method being used for frame gen in relatively broad strokes; the meat of the paper is devoted to analyzing its comparative utility, not thoroughly discussing and outlining exact opcodes or whatever.
Sure, you could try to implement this method based off of reading this paper, but that’s a far cry from ‘here’s our MIT-licensed alpha driver, go nuts.’
…And, now that you bring it up:
Intel filed what seem to me to be two different patent applications directly related to this academic publication, almost 9 months before the paper we are discussing came out, with 2 of the 3 credited inventors on the patents also having their names on this paper.
This one appears to be focused on the machine learning / frame gen method, the software:
https://patents.justia.com/patent/20240311950
And this one appears to be focused on the physical design of a GPU, the hardware made to leverage the software.
https://patents.justia.com/patent/20240311951
So yeah, looks to me like Intel is certainly aiming at this being proprietary.
I suppose it’s technically possible they do not actually get these patents awarded to them, but I find that extremely unlikely.
EDIT: Also, lol, video game journalism professional standards strike again; whoever wrote the article here could have looked this up and added this highly relevant ‘Intel is pursuing a patent on this technology’ information to their article in maybe a grand total of 15 to 30 extra minutes, but nah, too hard I guess.
The paper includes the following chart for average frame gen times at various resolutions, in various test scenarios they compared with other frame generation methods.
Here’s their new method’s frame gen times, averaged across all their scenarios.
540p: 2.34ms
720p: 3.66ms
1080p: 6.62ms
Converted to FPS, by assuming constant frametimes, that’s about…
540p: 427 FPS
720p: 273 FPS
1080p: 151 FPS
Now let’s try extrapolating pixels per frametime to guesstimate an efficiency factor:
540p: 518400 px / 2.34 ms = 221538 px/ms
720p: 921600 px / 3.66 ms = 251803 px/ms
1080p: 2073600 px / 6.62 ms = 313233 px/ms
Plugging pixels vs. efficiency factor into a graphing system and using a power-curve best fit, you get these estimated efficiency factors for the non-listed resolutions (a rough sketch of this math follows the numbers below):
1440p: 361423 px/ms
2160p: 443899 px/ms
Which works out to roughly the following frame times:
1440p: 10.20 ms
2160p: 18.69 ms
Or in FPS:
1440p: 98 FPS
2160p: 53 FPS
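Here’s a quick sketch of that back-of-the-envelope math, in case anyone wants to poke at it. It just fits eff = a * px^b to the three published data points in log-log space; the 1440p/2160p numbers are my extrapolation, not anything from the paper:

```python
# Rough reproduction of my numbers above. Assumes constant frametimes and that
# px/ms follows a power curve -- very much back-of-the-envelope, not from the paper.
import numpy as np

published = {            # resolution: (pixel count, avg frame-gen time in ms, per the paper)
    "540p":  (960 * 540,   2.34),
    "720p":  (1280 * 720,  3.66),
    "1080p": (1920 * 1080, 6.62),
}

px  = np.array([v[0] for v in published.values()], dtype=float)
ms  = np.array([v[1] for v in published.values()], dtype=float)
eff = px / ms                                    # "efficiency factor" in px/ms

# Fit eff = a * px**b via linear regression in log-log space.
b, log_a = np.polyfit(np.log(px), np.log(eff), 1)
a = np.exp(log_a)

for name, pixels in [("1440p", 2560 * 1440), ("2160p", 3840 * 2160)]:
    eff_est  = a * pixels ** b          # extrapolated px/ms
    frame_ms = pixels / eff_est         # estimated frame-gen time
    print(f"{name}: ~{eff_est:,.0f} px/ms, ~{frame_ms:.2f} ms, ~{1000 / frame_ms:.0f} FPS")
```

Running that spits out roughly the ~361k and ~444k px/ms figures above, i.e. roughly 10.2 ms / 98 FPS at 1440p and 18.7 ms / 53 FPS at 2160p.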
… Now this is all extremely rough math, but the basic takeaway is that frame gen, even this faster and higher quality frame gen, which doesn’t introduce input lag the way DLSS or FSR frame gen does, is only worth it if it can generate a frame faster than you could otherwise fully render it normally.
(I want to again stress here this is very rough math, but I am ironically forced to extrapolate performance at higher resolutions, as no such info exists in the paper.)
I.e., if your rig is already running 1080p at 240 FPS, 1440p at 120 FPS, or 4K at 60 FPS natively… this frame gen would be pointless.
I… guess if this could actually somehow be implemented at a driver level, as an upgrade to existing hardware, that would be good.
But … this is GPU tech.
Which, like DLSS, requires extensive AI training sets.
And is apparently proprietary to Intel… so it could only be rolled out on existing or new Intel GPUs (until or unless someone reverse engineers it for other GPUs) which basically everyone would have to buy new, as Intel only just started making GPUs.
It’s not gonna somehow be a driver/chipset upgrade to existing Intel CPUs.
Basically, this seems to be fundamental to Intel’s gambit to make its own new GPUs stand out: build GPUs for less cost, with less hardware devoted to G-buffering, and use this frame gen method in lieu of that.
It all depends on the price to performance ratio.
Star Citizen is certainly a gigantic ongoing delusion/scam of development hell…
But it doesn’t really meet the sort of ‘internal corporate contagion’ criteria, it won’t directly tank non Star Citizen teams or games.
While they have contracted out development work at various points… it’s not a giant conglomerate of different studios working on different projects.
If it finally liquidates and goes tits up, it only kills Star Citizen, and maybe Chris has to sell his yacht or w/e.
Star Citizen is its own special kind of nonsense.