Since the disastrous launch of the RTX 50 series, NVIDIA has been unable to escape negative headlines: scalper bots are snatching GPUs away from consumers before official sales even begin; power connectors continue to melt with no fix in sight; marketing is becoming increasingly deceptive; GPUs are leaving the factory with missing processing units; and the drivers NVIDIA has always been praised for are currently falling apart. And to top it all off, NVIDIA is becoming increasingly insistent that the media push a certain narrative when reporting on its hardware.
It adds rendering time, not “latency” btw.
DLSS improves framerates at basically no cost, letting people hit playable or high framerates at quality settings they couldn’t reach without it. It’s not for hitting 500fps; it’s for hitting 30/60/100, etc.
It doesn’t render anything, so it can’t add rendering time; it just generates an upscaled version of an already-rendered frame.
Ok so you definitely don’t understand how DLSS works lol.
DLSS has to be implemented by the game’s developers: they literally have to call the DLSS APIs in their game code, and DLSS requires data such as player input and motion vectors for all the scenes, materials, and objects in the frame. It adds time to the rendering pipeline, and the more powerful your GPU, the less time it adds.
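To make that integration point concrete, here is a rough, hypothetical C++ sketch of what hooking a temporal upscaler into a renderer tends to look like. The type and function names (Texture, Upscaler, evaluate, etc.) are made-up placeholders, not the actual NGX/Streamline API, and the stub just prints instead of doing GPU work; the point is the extra per-frame inputs the developer has to supply and the extra pass that runs every frame.

```cpp
#include <cstdio>

// Placeholder types standing in for real GPU resources. The actual NVIDIA
// SDK (NGX / Streamline) has its own handles and entry points; these are
// only meant to show what the developer has to wire up.
struct Texture     { int width = 0, height = 0; };
struct CommandList { };

// Per-frame inputs a temporal upscaler typically needs from the engine.
struct UpscaleInputs {
    Texture lowResColor;    // scene rendered at the reduced internal resolution
    Texture motionVectors;  // per-pixel motion for everything visible in the frame
    Texture depth;          // scene depth buffer
    float   jitterX = 0.0f; // sub-pixel camera jitter applied this frame
    float   jitterY = 0.0f;
};

// Stub upscaler. In a real integration this records a GPU pass, and that
// pass is the extra rendering-pipeline time being discussed above.
struct Upscaler {
    void evaluate(CommandList&, const UpscaleInputs& in, Texture& outHiRes) {
        std::printf("upscale %dx%d -> %dx%d\n",
                    in.lowResColor.width, in.lowResColor.height,
                    outHiRes.width, outHiRes.height);
    }
};

int main() {
    CommandList cmd;
    Upscaler upscaler;

    // The engine renders at a lower internal resolution and must also
    // produce motion vectors and depth for the upscaler.
    UpscaleInputs in;
    in.lowResColor   = {2560, 1440};
    in.motionVectors = {2560, 1440};
    in.depth         = {2560, 1440};

    // The upscale pass then reconstructs the full-resolution image.
    Texture hiRes{3840, 2160};
    upscaler.evaluate(cmd, in, hiRes);
    return 0;
}
```

The evaluate() call is the part that costs time: a faster GPU finishes that pass sooner, which is why the added rendering time shrinks on higher-end cards.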
We’re getting way off track now anyway, so to go back to the start: DLSS Super Resolution is amazing because it lets you get a framerate bump with anywhere from little-to-no visible change in IQ to a very noticeable degradation of IQ, depending on how big a framerate bump you get. It is one of the most significant advancements in gaming this century IMO.
On my PC with a 4070 Super, I can play COD BO6 at a near-locked 120fps on my 4K 120Hz VRR TV at “4K” using DLSS, whereas my PC definitely cannot do that without DLSS. It looks like native 4K, and believe me, I’ve taken many screenshots and compared them at 300% zoom lol.
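For anyone curious what “4K” with DLSS actually renders internally, here is a quick back-of-the-envelope sketch using the commonly cited per-axis render scales for the DLSS modes; individual games can and do override these, so treat the numbers as illustrative rather than exact.

```cpp
#include <cstdio>

int main() {
    const int outW = 3840, outH = 2160;  // 4K output resolution

    // Commonly cited per-axis render scales for the DLSS modes.
    // These are illustrative defaults; games can override them.
    struct Mode { const char* name; double scale; };
    const Mode modes[] = {
        {"Quality",           0.667},
        {"Balanced",          0.580},
        {"Performance",       0.500},
        {"Ultra Performance", 0.333},
    };

    for (const Mode& m : modes) {
        int w = static_cast<int>(outW * m.scale);
        int h = static_cast<int>(outH * m.scale);
        double pixelShare = m.scale * m.scale * 100.0;  // % of native pixels actually shaded
        std::printf("%-17s %4dx%-4d (~%.0f%% of the native pixel count)\n",
                    m.name, w, h, pixelShare);
    }
    return 0;
}
```

At 4K Quality that works out to roughly 2560x1440 internally, around 44% of the native pixel count, which is where the framerate headroom in the BO6 example comes from.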