wiseblood Posted August 15, 2022

Couldn't find a post about it, so I'm just sharing the workaround here: Nvidia added Deep Learning DSR (DL-DSR) modes for RTX cards a few driver revisions ago, which look very similar to brute-force downsampling but are more efficient. IL-2 BoX would crash at about 20% into loading every time I used a DL-DSR resolution, unless I set my desktop resolution to the DL-DSR one first, before launch. You can skip that step by setting the desktop scaling settings as shown here:

Weirdly, I only discovered this after going back to 1946, which breaks with DL-DSR in an entirely different way (as with all OpenGL apps, it seems) and just tries scaling it to the native display resolution regardless of the desktop resolution.
335th_GRbas_754 Posted October 15, 2022

Because I didn't get it... My 24" monitor's native resolution is 1920x1080 (1080p). DSR is active in the Nvidia panel, but when I set 2560x1440 in game it crashes. Do I need to change my desktop resolution to 2560x1440 before I launch the game?
335th_GRbas_754 Posted October 16, 2022

Got it.

In the Nvidia control panel: enable all DSR options, then go to screen resolution and change the desktop resolution to the one you want to use in game.

In Windows settings: go to screen settings, change the desktop resolution to the same one as in the Nvidia control panel, and click OK.

In the game: set the same resolution in the game's options, click play, and enjoy.
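If you'd rather not change the desktop resolution by hand before every session, a small script can make the switch, launch the game, and restore the native mode afterwards. The sketch below is untested against IL-2 itself and assumes Windows with the pywin32 package installed; the 2560x1440 DL-DSR mode and the game path are placeholders for illustration.

```python
# Rough, untested sketch: switch the desktop to the DL-DSR resolution, run the
# game, then restore the native mode once the game exits. Requires pywin32.
import subprocess
import win32api
import win32con

def set_desktop_resolution(width, height):
    """Switch the primary display to width x height (must be a mode the driver exposes)."""
    mode = win32api.EnumDisplaySettings(None, win32con.ENUM_CURRENT_SETTINGS)
    mode.PelsWidth = width
    mode.PelsHeight = height
    mode.Fields = win32con.DM_PELSWIDTH | win32con.DM_PELSHEIGHT
    return win32api.ChangeDisplaySettings(mode, 0)

# Remember the native desktop mode so it can be restored afterwards.
native = win32api.EnumDisplaySettings(None, win32con.ENUM_CURRENT_SETTINGS)

set_desktop_resolution(2560, 1440)  # the DL-DSR resolution chosen in the Nvidia panel
subprocess.run([r"C:\Games\IL-2\bin\game\Il-2.exe"])  # hypothetical install path
set_desktop_resolution(native.PelsWidth, native.PelsHeight)  # back to native
```

Because subprocess.run blocks until the game closes, the desktop only stays at the DSR resolution for the length of the session.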
BOO Posted October 17, 2022

For me it introduces instability. Not sure why, but the game simply won't run with DL-DSR enabled. Likely some of my other settings, but given the limited benefit it brings for me specifically, I can definitely leave it. Each to their own, though; the tip in the original post is good for those it does work for.
wiseblood Posted October 17, 2022 (edited)

18 hours ago, DragonDaddy said: What are the benefits to using this?

It's basically a pseudo-DLSS solution for games without a native model for deep learning supersampling. Rather than training an AI model on what super high resolution renders of a specific game ought to look like and approximating them (which requires some game-specific information about which parts of the scene to pay more or less attention to for the best performance vs. quality tradeoff), it renders natively at your monitor's maximum native resolution, AI-upscales that using a generic model of what high resolution renders generally look like, then downsamples and sharpens it. A good comparison would be something like running Topaz Gigapixel on every rendered frame.

The end result is a supersampled image that is for most purposes indistinguishable from a brute-force supersampled one, but at much greater performance. Cores on the graphics card that would otherwise be heavily taxed processing the shaders natively, or buried under the raster/fill limit of the card at high resolution, are more efficient taking a good guess at what the frame might look like at high resolution, then downsampling the image to a point where significant errors just get buried by averaging the results. The claim of 2x efficiency over brute-force dynamic super resolution will depend on what fundamental limits the game itself hits (memory, threading, etc.), but if there is no prior bottleneck it's not terribly far off.

Edited October 17, 2022 by wiseblood
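Purely to illustrate the order of operations described above (and emphatically not Nvidia's actual implementation), here is a crude sketch that fakes the pipeline with Pillow: take a natively rendered frame, upscale it with an ordinary Lanczos filter standing in for the deep-learning model, downsample back to native, and sharpen. The file names and the 1.5x-per-axis factor are just example values.

```python
# Illustrative stand-in only: Lanczos resizes in place of the AI model,
# showing the render -> upscale -> downsample -> sharpen sequence.
from PIL import Image, ImageFilter

SCALE = 1.5  # 1.5x per axis, roughly the 2.25x DL-DSR pixel factor

frame = Image.open("rendered_frame.png")  # hypothetical frame captured at native resolution
native_w, native_h = frame.size

# Stand-in for the AI upscale: a plain Lanczos resize to 1.5x per axis.
upscaled = frame.resize((int(native_w * SCALE), int(native_h * SCALE)), Image.LANCZOS)

# Average back down to native resolution, then sharpen, as described above.
downsampled = upscaled.resize((native_w, native_h), Image.LANCZOS)
final = downsampled.filter(ImageFilter.UnsharpMask(radius=2, percent=80))
final.save("output_frame.png")
```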
IckyATLAS Posted October 17, 2022

31 minutes ago, wiseblood said: It's basically a pseudo-DLSS solution for games without a native model for deep learning supersampling. [...]

Let me explain what I understand here. If you have a 4K native resolution screen, then you can have the game generate an HD (1080p) image and then scale it up (x4) to 4K by a kind of supersampling procedure through DSR. The idea being that instead of generating a 4K image for a 4K screen with your GPU, you generate an HD image which is then blown up to 4K for your screen. The reason to do that would be that generating an HD image and supersampling it is less taxing on the graphics card (and will allow higher FPS) than generating a 4K image directly. Is what I understand correct?
Wilhelm_Heidkamp Posted October 17, 2022 (edited)

10 minutes ago, IckyATLAS said: Let me explain what I understand here. [...] Is what I understand correct?

No. With DL-DSR you can use resolutions that your monitor does not support, with the idea of getting the benefit of, for example, supersampling anti-aliasing but without the cost of true SSAA. That is to say, DL-DSR is not intended to render below your monitor's native resolution and then scale up to 4K. It is intended to get the benefit of reduced aliasing by generating each frame at a much higher resolution than your monitor supports (eliminating jagged edges in the process) and then scaling it down to fit your monitor's resolution, all of this using an intelligent algorithm.

1 hour ago, wiseblood said: It's basically a pseudo-DLSS solution

No, it's not. DLSS is designed to give better performance when your game, at native resolution and with your settings applied, is running low on frames per second. For example, I play with DLSS on a 4K monitor with quality close to native but at a lower cost. With DLSS, the image is generated at a lower resolution than native and then scaled up to your monitor's maximum resolution. With DL-DSR it is just the opposite: the image is generated at several times your native resolution and then scaled down to fit your native resolution.

Edited October 17, 2022 by PA_Willy
wiseblood Posted October 17, 2022

1 hour ago, PA_Willy said: No. With DL-DSR you can use resolutions that your monitor does not support [...]

Well, it believes whatever you tell it about your monitor's capabilities - so there's no reason you can't just CRU your way into it working the way you want, then use whatever pixel scaler you want to push it back up. The difference is, as I said, that it has no idea what's expensive to do in the scene or how different render target resolutions for different surfaces work; it just pushes everything through a generic model.
Wilhelm_Heidkamp Posted October 17, 2022

DLSS uses a lower resolution than the one that will be displayed on screen, with the idea that rendering at a lower resolution will give better performance (FPS). DL-DSR uses a resolution several times higher than the one you are playing at and then reduces it to the resolution your monitor can display. One function is intended for cases where your GPU does not perform well enough at native resolution (DLSS), and the other is precisely the opposite: for when you have the headroom to render the image above native and then present it on screen having eliminated the jagged edges, as downsampling used to do by brute force. It's different.
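To put rough numbers on that difference for a 2560x1440 monitor, here is a small worked example. It assumes the commonly quoted figures (DLSS Quality renders at roughly 67% of native per axis; DL-DSR's published factors are 1.78x and 2.25x of the native pixel count); exact ratios vary by title and driver.

```python
# Worked numbers contrasting the two directions of scaling for a 1440p monitor.
# Ratios are the commonly quoted ones, used only to make the direction concrete.
native_w, native_h = 2560, 1440
native_pixels = native_w * native_h

# DLSS: render BELOW native, AI-upscale to native (recovers performance).
dlss_w, dlss_h = int(native_w * 0.67), int(native_h * 0.67)   # ~1715x964
dlss_pixels = dlss_w * dlss_h

# DL-DSR 2.25x: render ABOVE native, AI-downscale to native (spends performance on quality).
dldsr_w, dldsr_h = int(native_w * 1.5), int(native_h * 1.5)   # 3840x2160 internally
dldsr_pixels = dldsr_w * dldsr_h

print(f"native:        {native_pixels:>10,} px")
print(f"DLSS Quality:  {dlss_pixels:>10,} px ({dlss_pixels / native_pixels:.0%} of native work)")
print(f"DL-DSR 2.25x:  {dldsr_pixels:>10,} px ({dldsr_pixels / native_pixels:.0%} of native work)")
```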