Without more detail we can only guess, but I'd imagine it works the same way DLSS is (presumed?) to work.
Most of the upscaling is done by the TAA algorithm that's part of FSR 3.1, and then the image gets cleaned up by their "AI" component for more image stability.
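To make that split concrete, here's a toy sketch of the two-stage idea. To be clear, none of this is AMD's actual code or API: the function names, the nearest-neighbour upscale, and the box blur standing in for the "AI" pass are all made up purely to illustrate "temporal upscale first, cleanup pass second."

```python
import numpy as np

def upscale_temporal(low_res: np.ndarray, history: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Stage 1 (hand-waved): blow the low-res frame up to the history buffer's
    resolution and blend it into the accumulated history, which is roughly what
    a TAA-based upscaler like FSR 3.1's does (minus motion-vector reprojection,
    jitter, clamping, and everything else that actually makes it work)."""
    scale_y = history.shape[0] // low_res.shape[0]
    scale_x = history.shape[1] // low_res.shape[1]
    upscaled = np.kron(low_res, np.ones((scale_y, scale_x)))  # nearest-neighbour upscale
    return (1.0 - alpha) * history + alpha * upscaled

def stabilize(frame: np.ndarray) -> np.ndarray:
    """Stage 2 (pure placeholder): stand-in for the 'AI' cleanup pass.
    Here it's just a 3x3 box blur; the real thing would be a trained network
    targeting shimmer, ghosting, and disocclusion artifacts."""
    padded = np.pad(frame, 1, mode="edge")
    out = np.zeros_like(frame)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return out / 9.0

# Toy usage: quarter-res input frame accumulated into a 1080p history (grayscale for brevity).
history = np.zeros((1080, 1920))
low_res = np.random.rand(540, 960)
output = stabilize(upscale_temporal(low_res, history))
```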
Best CPU+iGPU, yes, but not only that. In the server realm they're doing incredibly well. EPYC remains unmatched by anything Intel has.
ARM has a future in servers too, but for a lot of companies it isn't there yet (personally, I hope it never is and they go to RISC-V instead, but yeah).
I’m sorry, what?
(I assume that's a typo)