Mysticpuma Posted September 20, 2020 VR always puts a strain on any system, and as I understand it, 90 FPS is the minimum target for smooth gameplay. Not sure if BoX can or would be able to run it, but the results and FPS boost are quite incredible and significantly more advanced than the first version. I imagine this with a Reverb G2 would be very impressive to see in action. I think it requires the developer to implement it with Nvidia, but to have high-fidelity graphics with 50% higher FPS... well, it must be worth considering?
SJ_Butcher Posted September 20, 2020 (edited) I think the easier approach would be to implement TAA anti-aliasing, since DLSS 3.0 is coming and it will run on every game that uses TAA anti-aliasing. Edited September 20, 2020 by SJ_Butcher
SharpeXB Posted September 20, 2020 (edited) If the CPU is limiting your frame rate then DLSS won’t help. Sims like this tend to be limited by CPU performance. Edited September 20, 2020 by SharpeXB
Ala13_UnopaUno_VR Posted September 20, 2020 I think it was mentioned that with the arrival of deferred shading a large part of the load moved to the GPU, freeing up the CPU a lot. DLSS would be the culmination: everyone could throw away their TrackIR 5 and enjoy super-stable FPS at 2K and 4K resolutions. I don't think this will come to IL-2 BoX soon, but it does open a window.
coconut Posted September 21, 2020 12 hours ago, SharpeXB said: If the CPU is limiting your frame rate then DLSS won’t help. Sims like this tend to be limited by CPU performance. True, but with high-res, wide-FOV headsets you typically need to be able to render 4K at 150 FPS, and ideally even more than that. I think VRSS is likely a more promising technique; resolution is wasted on the sides. I fear that the lack of control devs would have over DLSS could mean difficulties with contact identification and spotting.
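To put a rough number on that render budget, here is a back-of-envelope comparison of pixel throughput; the headset resolution and refresh rate below are illustrative assumptions, not the specs of any particular HMD:

```python
# Rough pixel-throughput comparison for the render budgets mentioned above.
# Headset figures are assumed example numbers, not real device specs.

def pixels_per_second(width, height, fps, eyes=1):
    """Raw pixels the GPU must shade per second (ignoring overdraw and supersampling)."""
    return width * height * fps * eyes

# A 4K monitor at 60 FPS:
monitor = pixels_per_second(3840, 2160, 60)

# A hypothetical high-res headset: 2160x2160 per eye at 90 FPS:
headset = pixels_per_second(2160, 2160, 90, eyes=2)

print(f"monitor: {monitor / 1e9:.2f} Gpx/s")
print(f"headset: {headset / 1e9:.2f} Gpx/s")
print(f"headset / monitor: {headset / monitor:.2f}x")
```

Even with these conservative assumptions the headset needs well over 1.5x the shading throughput of a 4K/60 monitor, before any supersampling is applied on top.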
E69_Qpassa_VR Posted September 21, 2020 (edited) Obviously it's the feature of the future, but it's something the team would have to implement and invest time and resources in. Jason may have other priorities, because I'm not sure how much money they would earn from newcomers. Edited September 21, 2020 by E69_Qpassa_VR
SharpeXB Posted September 21, 2020 6 hours ago, coconut said: True, but with high-res wide FOV headsets you typically need to be able to render 4K at 150FPS, Aren’t most HMDs 80-90 FPS? Monitors can get up to 144 Hz, but there aren’t too many of those in 4K; there are some, though. DLSS is fancy upscaling, though, not technically “supersampling”. I suppose “DLUS” didn’t sound sexy enough...
adonys Posted September 18, 2021 (edited) @SharpeXB it's obvious you don't have even a basic understanding of how DLSS works, for example how the trained deep-learning AI network is used in the process. Yet you are throwing out statements like the one above. Please have a read here; it's just an introductory article from NVIDIA, but if you're really interested in the subject, I'm sure you can google independent scholarly articles which detail the DLSS technology, like this one (with a lot of references to other independent scientific articles), or any from this list. Happy reading! Edited September 18, 2021 by adonys
C6_lefuneste Posted September 18, 2021 (edited) This remains a matter of extrapolating instead of computing. The best illustration we currently have in VR is reprojection: a smoother experience, but artifacts appear as soon as the extrapolation doesn't match the computation... Maybe this neural-network technology could reduce them, but it would need a network trained for the game in different configurations (summer, winter, sea and land landscapes, ...). Not sure a dev team will spend the time for only those customers who have the right hardware... Edited September 18, 2021 by c6_lefuneste
TCW_Brzi_Joe Posted September 18, 2021 (edited) Foveated rendering (a partially sharp picture, with the part outside the sweet spot blurred) was my favourite game-changer idea. It would give us a nice picture and excellent performance. Pimax made it for their VR goggles, but it does not work with IL-2. I assume the problem is the current spotting system, or single-point rendering for distant planes (as the same point would be rendered differently inside and outside the sweet spot, which crashes the game). Maybe a foveated-rendering solution should be a higher priority for the developer than DLSS for VR players with 30-series Nvidia cards... Edited September 18, 2021 by TCW_Brzi_Joe
SharpeXB Posted September 18, 2021 (edited) 13 hours ago, adonys said: it's obvious you don't have not even a basic understanding how DLSS works I’m not going to read a bunch of silly scientific papers to explain what I can see with my own eyes. I also know there’s no way upscaling will equal native-resolution quality. That’s just a fantasy. And yeah, DLSS is actually upscaling, not “supersampling”. Why is DLSS hyped as a magic performance boost? It’s like any other graphics setting: you choose a compromise between quality and performance. Edited September 18, 2021 by SharpeXB
AEthelraedUnraed Posted September 21, 2021 On 9/18/2021 at 10:51 AM, adonys said: @SharpeXB it's obvious you don't have not even a basic understanding how DLSS works, like for example how the trained deep learning Ai network is used in the process. Yet you are throwing statements like the above one.. Well, he's right that DLSS is just fancy upscaling. CNNs (also known in buzzword terms as "deep learning AI") are basically just common filters, except that these filters are created through machine learning. The fact remains that DLSS is just a set of nonlinear upscaling filters. Sure, it uses motion vectors, but all that does is increase the input tensor from size width*height*3 (3 because of RGB) to width*height*5 (3+dx+dy). And yes, I know what I'm talking about. I've designed and adjusted many convolutional neural networks, as well as "traditional" signal processing solutions. On 9/18/2021 at 2:36 PM, TCW_Brzi_Joe said: Foveated rendering (partialy sharp picture, with blured part out of sweet spot) was my favourite game changer idea. Foveated rendering is perfectly possible with the current system; there is no reason why it should crash. 
In fact, the very similar FidelityFX Super Resolution by AMD already works in-game (and I can really recommend it!), except that it doesn't do eye tracking. Given that for true foveated rendering you need an eye tracker, which currently only the expensive VR goggles support, I don't think it should be a priority. Given the amount of work needed to get IL-2 to work with DLSS, I don't think that should be a priority either.
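The input-tensor point above can be sketched in a few lines of NumPy; the frame size is a toy value, and the data is random, purely for illustration:

```python
import numpy as np

# Illustrative only: stacking a frame's RGB channels with per-pixel motion
# vectors (dx, dy), as described above, yields a width*height*5 input tensor.
h, w = 4, 4                     # tiny toy frame
rgb = np.random.rand(h, w, 3)   # colour channels
dx = np.random.rand(h, w, 1)    # horizontal motion component per pixel
dy = np.random.rand(h, w, 1)    # vertical motion component per pixel

# Concatenate along the channel axis: 3 + 1 + 1 = 5 channels per pixel.
net_input = np.concatenate([rgb, dx, dy], axis=-1)
print(net_input.shape)
```

This is only the shape of the input; the actual filters a network like DLSS learns on top of it are of course opaque.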
TCW_Brzi_Joe Posted September 21, 2021 (edited) 2 hours ago, AEthelraedUnraed said: Foveated rendering is perfectly possible with the current system Actually I use AMD FSR, and it is an excellent tool, but the foveated rendering there does not give enough of a performance boost. I have a Pimax, and there we have the possibility at driver level to select (fixed, in my case) foveated rendering (3 levels), with a sharp sweet spot and 2-3 areas around it rendered at lower levels. The performance boost with it is huge, up to ca. 30% (old link, now crashes the game: link). I am not sure how it works in detail, but the outer zones are in no way rendered like the middle one. I believe the outer zones are already at lower resolution at the start of the pipeline, as if there were 3 different resolutions for one picture. I can start it in IL-2, but it breaks the game within seconds or minutes. I wish I could have that kind of boost with the AMD tool... Edited September 21, 2021 by TCW_Brzi_Joe
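A rough sense of where that kind of boost comes from: with three fixed zones, only the sweet spot is shaded at full resolution. The zone sizes and scale factors below are made-up assumptions for illustration, not Pimax's actual settings:

```python
# Back-of-envelope pixel savings from fixed foveated rendering with three
# zones, loosely modelled on the driver setting described above.
# Zone fractions and resolution scales are assumed, not measured.

def foveated_pixels(width, height, zones):
    """zones: list of (area_fraction, resolution_scale) tuples covering the frame."""
    full = width * height
    return sum(frac * full * scale ** 2 for frac, scale in zones)

full_res = 2160 * 2160
# Assumed split: 25% of the frame at full res, 35% at 0.75x, 40% at 0.5x.
fov = foveated_pixels(2160, 2160, [(0.25, 1.0), (0.35, 0.75), (0.40, 0.5)])
print(f"shaded pixels vs full render: {fov / full_res:.0%}")
```

Under these assumed zones, well under two-thirds of the pixels get shaded, which makes a performance boost in the region of the quoted ~30% plausible.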