
SYN_Vander BENCHMARK v6 to measure IL-2 performance in monitor & VR



MilitantPotato
Posted (edited)

I'm not 100% sure, but I also think the game needs to run in full screen for Nvidia Control Panel tweaks to work properly. I can't remember exactly where that bit of information came from, so it may well be wrong. I've always run full screen and never task-switched, to avoid that awful memory-leak crash that comes from alt-tabbing out of IL-2, windowed or not.

 

 

Also, as a guess, I'd think the benefits would be more pronounced if you're CPU limited in some ways (render pipeline rather than AI load, maybe?) but not others.

If I recall, I was almost fully GPU limited aside from a select few seconds where the AI caused a slight CPU bind.

Edited by MilitantPotato
More typing
TG-55Panthercules
Posted
On 11/16/2022 at 9:33 PM, DBCOOPER011 said:

 

As an FYI with the G2, you should go into computer management and disable the "Holographicshell" app for VR ops. It's a current problem affecting the G2 causing stutters and fps loss...

 

Is that Holographicshell thing just a Windows 11 problem or does it need to be turned off for Windows 10 as well?

TG-55Panthercules
Posted (edited)

Well, I just tried running the v6 benchmark with VR settings (Option 1), and the experience was rather underwhelming.  It looked terrible (total shimmer-fest) and the FRAPS results were rather disappointing as well:

 

2022-11-24 22:17:33 - Il-2 (1st run using VR benchmark settings option 1)
Frames: 3755 - Time: 60000ms - Avg: 62.583 - Min: 54 - Max: 89
 

All the info on my rig is the same as I posted a page or two ago regarding the 1080P monitor benchmark, except that this run on VR settings was done on IL-2 GB ver. 5.003.

 

I suspect something must still be set wrong somewhere in my VR apps.  I could not see my mouse cursor with my headset on - I had to take it off to use my mouse to navigate the game menus, and then put it back on once the mission was ready to unpause.

Edited by TG-55Panthercules
MilitantPotato
Posted (edited)
2 hours ago, TG-55Panthercules said:

Well, I just tried running the v6 benchmark with VR settings (Option 1), and the experience was rather underwhelming. 

Your RAM is pretty slow for DDR5; you're likely missing out on 10-15% performance vs 6000+ MT/s RAM (probably not worth the upgrade), which would put you closer to Cooper's results. Those seem to be higher than average thanks to overclocking and very expensive supporting parts.

This benchmark in no way reflects actual gameplay, since in single player the game slows down, and in multiplayer you'll experience far less CPU load (no AI to run).

 

@TG-55Panthercules I sent you a PM here with some settings for a G2 on your hardware. I'd be happy to walk you through the more complex tweaking on Discord some time; you can add me with the name I sent in the PM.

Edited by MilitantPotato
MilitantPotato
Posted (edited)

Upgraded my CPU and did some RAM overclocking.
I'll have some VR results soonish and may do a few runs with hyperthreading off.

Full screen 4K results are interesting: consistently higher, if only a little. I may investigate further with fpsVR or OCAT for frametimes if VR shows the same improvements.

Motherboard: X570 AORUS MASTER

 CPU:                 Ryzen 7 5800X3D
 CPU Freq:        4.5 GHz
 Cores:               8 
 Threads:           16 
 RAM size:        32GB (2x16GB)
 RAM Freq:       3800 MHz 
 NB Freq:          1900 MHz 
 RAM timings:  16-16-16-35
 GPU:                 Gigabyte Gaming OC 4090
 HMD: Reverb G2

 

 


Win 1080 Frames: 7654 - Time: 60000ms - Avg: 127.567 - Min: 109 - Max: 168

Win 1080 Frames: 7640 - Time: 60000ms - Avg: 127.333 - Min: 109 - Max: 172

Full 1080 Frames: 7690 - Time: 60000ms - Avg: 128.167 - Min: 109 - Max: 168

Full 1080 Frames: 7607 - Time: 60000ms - Avg: 126.783 - Min: 110 - Max: 172

Win 4k Frames: 10283 - Time: 60000ms - Avg: 171.383 - Min: 148 - Max: 198

Win 4k Frames: 10378 - Time: 60000ms - Avg: 172.967 - Min: 150 - Max: 216

Full 4k Frames: 10748 - Time: 60000ms - Avg: 179.133 - Min: 150 - Max: 229

Full 4k Frames: 10778 - Time: 60000ms - Avg: 179.633 - Min: 155 - Max: 214

 

Here's 4K full screen with +150 core / +500 memory on my 4090. I would have expected *something* from overclocking; maybe my RAM was set too high?
Frames: 10775 - Time: 60000ms - Avg: 179.583 - Min: 154 - Max: 209
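
Side note for anyone collating these FRAPS lines: the reported Avg is just Frames / (Time / 1000). A minimal Python sketch for parsing the summary lines (the line format is assumed from the logs posted in this thread):

```
import re

# FRAPS summary line, e.g. "Frames: 10775 - Time: 60000ms - Avg: 179.583 - Min: 154 - Max: 209"
LINE = re.compile(
    r"Frames:\s*(\d+)\s*-\s*Time:\s*(\d+)ms\s*-\s*Avg:\s*([\d.]+)"
    r"\s*-\s*Min:\s*(\d+)\s*-\s*Max:\s*(\d+)"
)

def parse(line):
    m = LINE.search(line)
    if not m:
        raise ValueError(f"not a FRAPS summary line: {line!r}")
    frames, time_ms, avg, lo, hi = m.groups()
    return int(frames), int(time_ms), float(avg), int(lo), int(hi)

frames, time_ms, avg, lo, hi = parse(
    "Frames: 10775 - Time: 60000ms - Avg: 179.583 - Min: 154 - Max: 209"
)
print(frames / (time_ms / 1000))  # 179.583..., matches the reported Avg
```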

Edited by MilitantPotato
chiliwili69
Posted (edited)
On 11/25/2022 at 4:44 AM, TG-55Panthercules said:

Frames: 3755 - Time: 60000ms - Avg: 62.583 - Min: 54 - Max: 89
 

All the info on my rig is the same as I posted a page or two ago regarding the 1080P monitor benchmark, except that this run on VR settings was done on IL-2 GB ver. 5.003.

 

I suspect something must still be set wrong somewhere in my VR apps

 

Thank you for running this VR1 test. I don't think anything is wrong with your system. DBCOOPER's other tests were run with an OC'd CPU and faster RAM; he went to the very top. But in normal missions you will be at 90fps 99% of the time.

Take into account that all CPU settings are maxed out and the SYN_VANDER is a dense-object scenario. If you have a chance, just run the CPU test again with version 5.003, and also VRtest2, just to verify everything is fine.

You are running the test with "WMR for SteamVR"; in future you can switch to OpenXR and get a further improvement. For now, just enjoy your new system and G2.

On 11/25/2022 at 7:42 AM, MilitantPotato said:

Here's 4k full screen with +150 core +500 memory for my 4090, I would have expected *something* from overclocking maybe my ram was set too high ?
Frames: 10775 - Time: 60000ms - Avg: 179.583 - Min: 154 - Max: 209

 

Thank you very much for these tests. They are great. It is clear then that full screen gives a bit more juice.

Regarding your extra GPU OC test, you didn't obtain more. The reason could be that you are CPU limited in that test.

Edited by chiliwili69
TG-55Panthercules
Posted (edited)
10 hours ago, chiliwili69 said:

 

If you have a chance just run again the CUP test with the version 5.003 and also VRtest2. Just to verify everything is fine.

 

OK - ran the 1080P monitor test again, this time with game version 5.003.  I also explored around in the BIOS some more (very much more complicated for this rig than it was for my i7-7700K build), and managed to find and turn off the hyperthreading setting, as well as (I think) fixing the AVX offset and, while I was at it, disabling all the E-cores (I think) so only the P-cores would be used.  I have not yet found a way to manually specify a fixed CPU frequency for each core.  Anyway, the results were as set forth below (all other info same as in my previous posts on this rig):

 

2022-11-26 14:09:51 - Il-2 (1st run on v.5.003 on 1080P monitor settings - BIOS tweaked)
Frames: 8740 - Time: 60000ms - Avg: 145.667 - Min: 119 - Max: 200

 

Compared to my previous 1080P monitor run (on version 5.002b, and without the further BIOS tweaks - Avg: 132.933 - Min: 107 - Max: 187)  things got a little bit better on this run.  I'll try doing another run of VR Option 1 (and also try VR Option 2) while I still have these BIOS tweaks set.

 

[EDIT] - well, something I did in the BIOS tweaks managed to kill my VR (WMR reporting that my CPU won't work for VR/WMR) - I tried to back off some of the tweaks, but nothing helped, so I had to go back in and reset the BIOS completely to load all the defaults.  That got the VR working again, but I'll have to re-run all the benchmarks later tonight.

 

[EDIT] Re-ran the 1080P monitor benchmark with version 5.003 and my BIOS defaults:  Frames: 8263 - Time: 60000ms - Avg: 137.717 - Min: 110 - Max: 190  (pretty much the same as my first run on the 5.002b version). 

 

Also re-ran the VR benchmark (Option 1), again with my BIOS defaults - results were as noted below (not very different from my previous version 5.003 run for VR Option 1):

2022-11-26 22:30:10 - Il-2 (2nd run on v.5.003 using VR benchmark settings option 1; BIOS defaults)
Frames: 3925 - Time: 60000ms - Avg: 65.417 - Min: 55 - Max: 91

 

Also took my first run on ver. 5.003 using the VR Option 2 settings (again, BIOS defaults) - results below:

Frames: 3654 - Time: 60000ms - Avg: 60.900 - Min: 52 - Max: 82

 

BTW - all of my VR tests so far have been done using SteamVR (v.1.24.6)

Edited by TG-55Panthercules
RAAF492SQNOz_Steve
Posted
On 11/25/2022 at 5:42 PM, MilitantPotato said:

Upgraded my CPU and did some RAM overclocking. [full specs and results quoted above]

Interesting - I will soon have almost the same setup, when my Gigabyte Gaming OC 4090 gets delivered in the next couple of days.

The MB is different, but I have the same CPU and RAM timings. Will be interested in your VR2 results.

Most likely I will not bother with OC'ing the GPU beyond the standard, very small, factory overclock.

I was originally going to get an MSI Suprim, but cancelled that order when the overheating power-connector issue flared up. The MSI, which I would have preferred, is now out of stock, but as a positive I got the Gigabyte for 20% less.

chiliwili69
Posted
15 hours ago, TG-55Panthercules said:

Also took my first run on ver. 5.003 using the VR Option 2 settings (again, BIOS defaults) - results below:

Frames: 3654 - Time: 60000ms - Avg: 60.900 - Min: 52 - Max: 82

Thanks for this.  From all the tweaks you introduced, it seems there is one that WMR does not like.

For sure it is not Hyperthreading OFF or E-cores OFF, since DBCOOPER was running like that with his G2.

TG-55Panthercules
Posted (edited)
7 hours ago, chiliwili69 said:

Thanks for this.  From all the tweaks you introduced, it seems there is one that WMR does not like.

For sure it is not Hyperthreading OFF or E-cores OFF, since DBCOOPER was running like that with his G2.

 

OK - based on that info I went back into the BIOS and turned just hyperthreading and e-cores off.  Confirmed that WMR seems to tolerate that, but first I ran the 1080P monitor test - results below:

 

2022-11-27 13:32:29 - Il-2 (3rd run on v.5.003 on 1080P monitor settings - Hyperthreading and e-cores off)
Frames: 8428 - Time: 60000ms - Avg: 140.467 - Min: 121 - Max: 198

 

Just ran the Option 1 VR test - results below:

 

2022-11-27 13:44:21 - il-2 (3rd run on v.5.003 using VR benchmark settings option 1; Hyperthreading and e-cores off)
Frames: 3979 - Time: 60000ms - Avg: 66.317 - Min: 56 - Max: 91

 

And finally, the Option 2 VR test - results follow:

 

2022-11-27 13:53:10 - il-2 (2nd run on v.5.003 using VR benchmark settings option 2; Hyperthreading and e-cores off)
Frames: 3758 - Time: 60000ms - Avg: 62.633 - Min: 53 - Max: 84

 

Since it seems these results are basically the same without these particular BIOS tweaks (slightly better but probably statistically indistinguishable), I'm assuming I should go back in and remove those tweaks for normal playing and general PC use purposes?

Edited by TG-55Panthercules
MilitantPotato
Posted (edited)
1 hour ago, TG-55Panthercules said:

Since it seems these results are basically the same without these particular BIOS tweaks (slightly better but probably statistically indistinguishable), I'm assuming I should go back in and remove those tweaks for normal playing and general PC use purposes?

Just make sure XMP is on for your RAM; anything else would be marginally beneficial and potentially take way more fiddling than the reward is worth.  If you're not doing any encoding or compiling, having hyperthreading on isn't really helpful.  I keep it off until I need to encode video or something; a quick reboot to turn it on is no problem.

18 hours ago, RAAF492SQNOz_Steve said:

Interesting - I will soon have almost the same setup, when my Gigabyte Gaming OC 4090 gets delivered in the next couple of days. [...]

It runs a G2 at 100% supersampling, 2x MSAA, 90fps with really good settings in multiplayer. The only time I get frame drops is in certain super-dense multiplayer locations, and that's from the CPU, as the GPU hits 90-95% usage at most. Based on the benchmarks even the 13700K would drop frames there, so I'll continue waiting for the 7000X3D and hope it's amazing.

Edited by MilitantPotato
TG-55Panthercules
Posted (edited)
1 hour ago, MilitantPotato said:

Just make sure XMP is on for your RAM; anything else would be marginally beneficial and potentially take way more fiddling than the reward is worth.  If you're not doing any encoding or compiling, having hyperthreading on isn't really helpful.  I keep it off until I need to encode video or something; a quick reboot to turn it on is no problem.

 

 

Well, I decided to test that theory (leaving hyperthreading and e-cores off), so I ran the 3DMark Time Spy benchmark again with those settings off.   I lost more than 11% off the combined score, and more than 43% off the CPU score in that benchmark.  Given that the difference in the IL-2 benchmark was minuscule, I guess I'll leave them on.  Of course, it may have been the lack of e-cores more than the hyperthreading affecting those 3DMark results, but I don't feel like spending the time to reboot and re-run the benchmarks to find out, given the small impact they seem to have on IL-2.

 

[EDIT] I guess there are still a couple of other things to consider doing before wrapping up this spate of benchmarking - (1) update the nVidia driver from 516.94 to 526.98, and/or (2) update to Windows 11.  Is there any really good reason to do either of those at this point, or should I just stand pat for a while?

Edited by TG-55Panthercules
DBCOOPER011
Posted (edited)

For daily settings, I use a core ratio of 58x2 and 56x8 with 4400 MHz E-cores on my 13700KF. I use max droop for the LLC and adaptive voltage with a +35 offset. That's pretty much my sweet spot so far, since I want to keep my voltage under 1.4V during single-core boost.
It appears IL-2 is sensitive to the power plan used, and Process Lasso's high-performance power plan is the best I have used so far for IL-2. I don't turn off the E-cores/HT during gaming, but found out there is a little more performance to be gained without HT on. I mainly use Process Lasso to tweak the CPU/IO priority, induce performance mode and enable the ProBalance mode.  I just did an IL-2 CPU/VR2 test with these settings and got the results below.  I got HWiNFO64 screens of the CPU test and also a Time Spy test for reference...
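
(For the curious: the essence of what Process Lasso does for the game process here is raising priority and restricting affinity. A rough sketch of that idea using Python's psutil - assuming Windows, and assuming the executable is named "Il-2.exe"; check Task Manager for the real name. Process Lasso itself does much more, e.g. ProBalance and IO priority.)

```
import psutil

def boost(exe_name="Il-2.exe", cores=None):
    """Raise priority of the named process and optionally pin it to cores."""
    for p in psutil.process_iter(["name"]):
        if p.info["name"] == exe_name:
            p.nice(psutil.HIGH_PRIORITY_CLASS)  # Windows-only priority constant
            if cores:
                p.cpu_affinity(cores)  # e.g. [0, 2, 4, 6] to skip SMT siblings
            return p
    return None

# Example: pin to even-numbered logical CPUs (the physical cores when HT is on)
boost(cores=[0, 2, 4, 6, 8, 10, 12, 14])
```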


Also, there is a glitch with the holographic shell causing stutters with the G2. You can turn it off in Computer Management. Link describing the issue below:

 

https://forums.flightsimulator.com/t/win-11-update-causing-stutters/543631?page=9
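
If you just want to check whether the holographic shell is running before a session, a small psutil sketch (the process name is assumed to contain "holographic", e.g. HolographicShell.exe - verify in Task Manager; this only reports, it doesn't disable anything):

```
import psutil

# Look for any process whose name contains "holographic" (case-insensitive).
shell = [p.info["name"] for p in psutil.process_iter(["name"])
         if p.info["name"] and "holographic" in p.info["name"].lower()]
print("Holographic shell running:", shell if shell else "no")
```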

 

CPU Test:
Frames: 9124 - Time: 60000ms - Avg: 152.067 - Min: 135 - Max: 205

 

VR Test 2:
Frames: 5185 - Time: 60000ms - Avg: 86.417 - Min: 74 - Max: 91

 

 

(attached screenshots: IL2 CPU Test.png, Time Spy Test.png)

Edited by DBCOOPER011
chiliwili69
Posted
13 hours ago, TG-55Panthercules said:

Since it seems these results are basically the same without these particular BIOS tweaks (slightly better but probably statistically indistinguishable), I'm assuming I should go back in and remove those tweaks for normal playing and general PC use purposes?

 

I really don't know.  I was doing OC with my previous 4790K, but since I got my 5600X I have had no real need to go deeper into the OC thing.

So I am not much help in giving you advice about Intel 12th or 13th gen. But according to DBCOOPER's tests, it seems there is some juice there. Apparently the key thing is core ratios.

shirazjohn
Posted (edited)

Hi guys, I've finally put my 1080ti out to pasture and now have a 4080, which was a bit painful for the wallet, but at least I managed to skip two generations.

 

Anyway, here are the results of some tests I did. 

 

My system is the same as before, just with different memory timings.

Frequency 3800    16-16-16-34

Windows 10

OpenXR Runtime 112.2211.2002

CPU:  5800X3D

GPU:  Palit GameRock OmniBlack RTX 4080, stock settings

 

CPU test

2022-12-02 15:50:57 - Il-2
Frames: 7746 - Time: 60000ms - Avg: 129.100 - Min: 110 - Max: 176

2022-12-02 15:52:50 - Il-2
Frames: 7694 - Time: 60000ms - Avg: 128.233 - Min: 105 - Max: 180

2022-12-02 15:54:40 - Il-2
Frames: 7741 - Time: 60000ms - Avg: 129.017 - Min: 112 - Max: 170

Average of 3 runs = 128.78

 
VR tests using OpenComposite
OpenXR Runtime 112.2211.2002

 

vr test1

2022-12-02 16:29:53 - Il-2
Frames: 4212 - Time: 60000ms - Avg: 70.200 - Min: 59 - Max: 91

vr test2

2022-12-02 16:42:33 - Il-2
Frames: 4051 - Time: 60000ms - Avg: 67.517 - Min: 54 - Max: 91

vr test3

2022-12-02 16:52:08 - Il-2
Frames: 2564 - Time: 60000ms - Avg: 42.733 - Min: 36 - Max: 47

Edited by shirazjohn
chiliwili69
Posted
15 hours ago, shirazjohn said:

put my 1080ti out to pasture and now have a 4080

Thanks for running the tests. The first 4080 tested here.

Your CPU result is well aligned with other 5800X3Ds.

VRtest1 and 2 are clearly still bottlenecked by the CPU, so there is almost no difference between the 4080 and 4090.

I think the 4080 is a very good option for almost all VR headsets today and next year.

Youtch
Posted

What % performance gain should be expected between a 3080 and a 4080 for VR in IL-2?

Knowing that IL-2 gets no benefit from DLSS performance gains.

Posted
4 hours ago, Youtch said:

What % performance gain should be expected between a 3080 and a 4080 for VR in IL-2?

Knowing that IL-2 gets no benefit from DLSS performance gains.

Hi - I've gone from a 1080ti to the 4080, so I have quite a noticeable uplift in performance, but I can't comment on the 3080 as I've never owned one.

 

But I can give you an idea of performance after playing a few Normandy campaign missions.

 

Early Normandy V1 site bombing mission, frontline activity medium and medium difficulty.

Game settings: all Extreme or High, no AA, HDR or SSAO

OpenComposite: SS 110%

The whole mission runs at a solid 90fps, except that near the ground it sometimes drops to around 85 fps.

I have tried 2x MSAA, but frames drop to around 75-80 fps; if I reduce the game settings to High I am back to near 90 fps.

Hope this helps

 

chiliwili69
Posted
18 hours ago, Youtch said:

What % performance gain should be expected between a 3080 and a 4080 for VR in IL-2?

 

It depends on your PC specs, your desired IL-2 settings, and your VR headset and its settings.

 

For example, in my case (CPU: 5600X, High settings, no mirrors, High clouds, no MSAA, Index at 80Hz with 150% SS) I would obtain zero gain moving from the current 3080 to a 4080. That's why I have no plan to upgrade the GPU while using the Index. With another VR headset it might be a different story.

 

 

Youtch
Posted
31 minutes ago, chiliwili69 said:

 

For example, in my case (CPU: 5600X, High settings, no mirrors, High clouds, no MSAA, Index at 80Hz with 150% SS) I would obtain zero gain moving from the current 3080 to a 4080. That's why I have no plan to upgrade the GPU while using the Index. With another VR headset it might be a different story.

I understand that you mean that you are CPU limited, correct? I have G2 + 5600X + 3080

Youtch
Posted

 

What are the settings that have the most impact on the CPU and can be lowered?

MilitantPotato
Posted (edited)

@chiliwili69

SteamVR beta now lets you bypass the "recommended max resolution" for WMR headsets; I can confirm that resolutions over 100% apply correctly to IL-2 with the G2. 
Not super useful for the purposes of this thread, but it may allow folks to see what FPS they could expect on higher-res headsets vs the G2, without 8x MSAA (which seems to cause memory bandwidth issues), when the time comes.
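
For reference, SteamVR's resolution percentage scales total pixel count, so each axis scales by the square root of the percentage. A quick helper to turn a percentage into a per-eye render size (the 100% per-eye resolution below is an assumption - read the real value off the SteamVR video settings page for your headset):

```
from math import sqrt

def render_size(base_w, base_h, pct, eyes=2):
    """Per-eye render size and total pixels for a SteamVR SS percentage."""
    scale = sqrt(pct / 100.0)  # SteamVR percentages scale pixel count, not axes
    w, h = round(base_w * scale), round(base_h * scale)
    return w, h, w * h * eyes

w, h, total = render_size(3160, 3092, 150)  # assumed G2 100% per-eye size
print(f"{w}x{h} per eye, {total / 1e6:.1f} Mpixels total")
```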

(attached screenshot: extra ss.png)

 

Edited by MilitantPotato
chiliwili69
Posted
12 minutes ago, MilitantPotato said:

SteamVR beta now lets you bypass the "recommended max resolution" for WMR headsets

 

Thanks for this. 

This thread explains the cap limitation:  https://forum.il2sturmovik.com/topic/81095-steamvr-resolution-cap-maxrecommendedresolution/

When Valve moves that into standard SteamVR, we can then redesign VRtest3 to be like VRtest2 but at a higher resolution (like 30 million pixels).

 

Wilhelm_Heidkamp
Posted

Has anyone tested this game without Hyper-Threading (Intel HT) or Simultaneous Multithreading (AMD SMT)? 

I see improvements with 8 physical cores versus 16 threads.
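
A quick way to confirm from software whether HT/SMT actually ended up disabled after a BIOS change - if the logical CPU count equals the physical core count, it's off. A minimal sketch with Python's psutil:

```
import psutil

phys = psutil.cpu_count(logical=False)  # physical cores
logi = psutil.cpu_count(logical=True)   # logical CPUs (threads)
print(f"{phys} physical cores, {logi} logical CPUs ->",
      "SMT/HT off" if phys == logi else "SMT/HT on")
```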

Posted
27 minutes ago, PA_Willy said:

Has anyone tested this game without Hyper-Threading (Intel HT) or Simultaneous Multithreading (AMD SMT)?

I see improvements with 8 physical cores versus 16 threads.

 

I have mostly always run mine that way. Plus the CPU runs a little cooler. Hyperthreading does not really benefit the way I use my PC.

FerreroRocher
Posted

Got my 7900 XTX yesterday, so I figured I'd contribute.


Motherboard: Asus Prime X570-Pro
 CPU:         Ryzen 7 5800X3D
 CPU Freq:    4.5 GHz
 Cores:       8 
 Threads:     16 
 RAM size:    32GB (2x16GB)
 RAM Freq:    3600 MHz 
 NB Freq:     1800 MHz 
 RAM timings: 16-19-19-39
 GPU:         XFX 7900 XTX Merc310 Black Edition

HMD: Quest 2
Windows 10


CPU tests 1080p

2022-12-23 16:37:32 - Il-2 cpu1
Frames: 7333 - Time: 60000ms - Avg: 122.217 - Min: 109 - Max: 162

2022-12-23 16:41:57 - Il-2  cpu1
Frames: 7532 - Time: 60000ms - Avg: 125.533 - Min: 109 - Max: 171

2022-12-23 16:44:40 - Il-2 cpu1
Frames: 7392 - Time: 60000ms - Avg: 123.200 - Min: 110 - Max: 162 


VR tests using OpenComposite
VRtest1 with 1.4x SS for 10.5 million pixels
VRtest2/3 with 1.9x SS for 19.4 million pixels
 
2022-12-23 15:43:18 - Il-2 Vrtest1
Frames: 3269 - Time: 60000ms - Avg: 54.483 - Min: 42 - Max: 89

2022-12-23 15:58:38 - Il-2 Vrtest2
Frames: 3164 - Time: 60000ms - Avg: 52.733 - Min: 42 - Max: 90

2022-12-23 16:23:07 - Il-2     Vrtest3
Frames: 2525 - Time: 60000ms - Avg: 42.083 - Min: 36 - Max: 46

 

This is with bone-stock settings on CPU/GPU/RAM.
Comparing it to the 4080 results above, it's quite interesting how the 7900 XTX underperforms in VRtest1 and 2, yet is on par in test 3.
I also noticed that in VRtests 1 and 2 the clock speeds never went above 2000 MHz (with GPU utilisation at around 60%), whereas in VRtest 3 they consistently sat a hair above 3000 MHz!

After a few runs in VR, fps occasionally limited itself to 45, as if ASW was on, which resolved itself with a game reboot.

 

Cheers!

 

chiliwili69
Posted
On 12/23/2022 at 5:26 PM, FerreroRocher said:

GPU:         XFX 7900xtx merc310 black edition

Thank you. Those tests are interesting not only for being the first 7900 XTX card tested here but also the first Quest 2.

Since every VR headset has its own software (and compression, in the case of the Quest 2), it is difficult to compare results. You can take a look at previous results with the VP2, G2 and Index on the same system.

It is also surprising that the new AMD seems not to be affected by the issues of the previous AMD cards (we can see that in the CPU test, for example).

So, does it mean that the new AMD cards can now be OK as well for IL-2?

We need more tests from other people, with other headsets and Intel 13th gen CPUs.

Voyager
Posted
On 12/25/2022 at 4:40 AM, chiliwili69 said:

Thank you. Those tests are interesting not only for being the first 7900 XTX card tested here but also the first Quest 2. [...]

 

I've suspected that the RDNA2 problems were due to the memory architecture. Remember, their top-end card had the same non-cache bandwidth as the 5700 XT, and performed the same or slightly worse in IL-2.

 

One of the many things they changed in RDNA3 was to go from a 256-bit bus to a 384-bit bus, like the nVidia cards have had for a while. I sort of wonder if IL-2, while it uses less VRAM than other flight sims, ends up accessing far more of it than other flight sims tend to?

 

From what I'm hearing, RDNA3 seems to have actually stuck the landing on its new memory architecture, and the current problems are on the compute side of things. If that is true, it's huge, because it means that silicon is done and never has to be touched again, and they can develop a GDDR7 interface that can just drop in once they've sorted out the compute problems. Or even drop in a whole new compute unit without touching the memory controllers, if it's really a silicon problem.

 

They're having a rough launch, and things definitely did not go according to plan, but this really is going to be a watershed moment in GPUs. 

NoBreaks
Posted
2 hours ago, Voyager said:

Remember, their top-end card had the same non-cache bandwidth as the 5700 XT, and performed the same or slightly worse in IL-2.

 

Do you have a reference for this?  (Something authoritative, that is.  A link, if you have one, is preferred.)

 

Do you own both 6900/6950XT and 5700XT cards (or have you, at some point)?  Part of the reason I'm asking is that I actually do have examples of each (as well as a 6800XT and 6700XT), and I've intended for a while to test these high-end 6xxx cards in IL-2 (on a test bench; my own personal machine has a 3090).

 

But I'm curious, and more than a little surprised, to hear what you're saying.  Not at all to derail the thread here, but can you elaborate?

Voyager
Posted
10 hours ago, NoBreaks said:

 

Do you have a reference for this?  (Something authoritative, that is.  A link, if you have one, is preferred.) [...] Not at all to derail the thread here, but can you elaborate?

There are a very small number of 5700 XT cards in the SYN_Vander benchmarks, and from what I last checked, the few 6800/6900s performed about as well as them, or worse.

 

For the non-cache bandwidth, I'm just referencing the stock specs. The reference 5700 XT has a 256-bit GDDR6 bus at 448 GB/s, while the 6900 XT also has a 256-bit bus, at 512 GB/s.

 

The reference 1080 Ti had a 352-bit GDDR5X bus with 484 GB/s of bandwidth, and was a bit better than both in the Vander benchmarks.

 

The cache thrashing was just my speculation, since the major change from RDNA1 to RDNA2 was the addition of the cache system, so they didn't have to go to a wider bus yet.
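
Those bandwidth figures fall straight out of bus width times effective per-pin data rate. A quick sanity check in Python (the per-pin rates below are the reference-card specs: 14 Gbps GDDR6 on the 5700 XT, 16 Gbps on the 6900 XT, 11 Gbps GDDR5X on the 1080 Ti):

```
def bandwidth_gb_s(bus_bits, gbps_per_pin):
    """Peak memory bandwidth in GB/s: bytes per transfer x transfer rate."""
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gb_s(256, 14))  # 5700 XT  -> 448.0 GB/s
print(bandwidth_gb_s(256, 16))  # 6900 XT  -> 512.0 GB/s
print(bandwidth_gb_s(352, 11))  # 1080 Ti  -> 484.0 GB/s
```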

dgiatr
Posted (edited)
On 12/21/2022 at 10:30 AM, chiliwili69 said:

 

Thanks for this. 

This thread explains the cap limitation:  https://forum.il2sturmovik.com/topic/81095-steamvr-resolution-cap-maxrecommendedresolution/

When Valve moves that into standard SteamVR, we can then redesign VRtest3 to be like VRtest2 but at a higher resolution (like 30 million pixels).

 

 

Hello to all - I wish you the best things to happen during 2023!

 

I have a question to ask about RAM overclocking...

 

Since I am CPU bottlenecked, I decided to do some CPU overclocking, from 4.5 GHz to 5 GHz, on my 10th gen 10600K.

I gained about 10 fps in the 1080p CPU benchmark test (I will send exact results soon).

 

After that I tried some RAM overclocking, because I see that RAM speed affects IL-2 fps a lot.

So I used a useful Gigabyte utility called "EasyTune" (my motherboard is a Gigabyte Z590 Aorus Elite AX) and overclocked my G.SKILL F4-3200C14D (2x8GB) RAM from 3200 MHz at CL14 to 3766 MHz at CL17. The utility didn't give me the chance to set the latency numbers myself; it decides the best latency (CL) value and the other relevant timings itself. So for the 3766 MHz overclock I get 17-15-16-36 timings, versus the 14-14-14-34 of the XMP profile 1 at 3200 MHz.

 

After that I ran the 1080p CPU SYN_Vander test and had no gain at all in fps (my average fps in the 1080p CPU test is about 97-98 after the CPU OC to 5 GHz, up from 88 fps before the OC).

 

The strange thing is that I did some Cinebench testing and had a better score after the RAM overclocking.

 

Where do you think the problem is in not getting more fps after the RAM overclocking?

Do you think that the latency timing increasing from 14 to 17 is a possible explanation?

 

Thank you in advance. 

 

 

P.S. I didn't try RAM overclocking via the BIOS because I don't know what settings to use in there. It's also quite risky, since the memory timings aren't locked...

Edited by dgiatr
NoBreaks
Posted (edited)
On 12/28/2022 at 12:44 AM, Voyager said:

For the non-cache bandwidth, I'm just referencing the stock specs. The reference 5700 XT has a 256-bit GDDR6 bus at 448 GB/s, while the 6900 XT also has a 256-bit bus, at 512 GB/s.

 

So... the bandwidth is not the same then - correct?  And "their top end card" (as you put it) is the 6950 XT, on which bandwidth increases to 576 GB/s.  Perhaps a closer review of the specs is in order.

 

And you don't (or haven't) actually owned either of the cards?  (While I appreciate your answer concerning the number of cards in this particular study, I asked if you own either card or have.)  The reason it matters is that, unless you've tested them yourself, you're relying solely on others' results, and entirely on this one 60-second "benchmark", to draw your conclusion.

 

It may be that the two cards did about the same in this one test.  But I think it's a reach to say they performed "the same or slightly worse in IL-2" based on a single 60-second test run.  (I mean that there's only a single prescribed test, not that there's only one published result.)

 

I have done many (many) test runs using a particular map/scenario, during which I can predictably and reliably demonstrate that a 1080 will perform (at least according to frame rate) about the same as my 3090.  However, it's absolutely not true to say that the 1080 can keep up with the 3090 most of the time outside these particular scenarios.  It happens that the scenario manifests certain (very bad) CPU utilization in the sim, thus effectively 'leveling' the two GPUs.  During most of the mission, though, the 3090 will far outperform the 1080 (of course).

 

The point is, it's misleading to suggest that performance in any particular small segment (even a hundred records or two) will always reliably indicate performance overall.

 

Also, given there are only a few records showing the 5700XT/6800XT/6900XT... well, that's also a small sample size, meaning it's less likely the outcome represents the whole.  I've used spreadsheets professionally for about 35 years now, and maybe I'm missing something, but I only saw 12 records total with a 5700XT, 6800XT or 6900XT.  Of the 12 "monitor" tests, half (6 rows) show that only the "CPU" part of the test was done (presumably because the tester doesn't have a 4K monitor).  So, right off the bat, we're practically excluding* the very area in which the more recent, high-end cards are known to perform better: high resolution (and it's not a linear increase, I can tell you).  FWIW, it doesn't appear as if any 5700XTs were tested with VR (although I'm really only looking at monitor tests anyway).

 

*(One tester does show results for a 5700XT at 4K, but I think those records could be considered questionable... more on that below.)

 

Further, going back to the "CPU test" results, I believe the explicitly stated objective of that test is to create a situation where the system will be "CPU bound", and the records show very different CPUs being tested - as an example, the highest average FPS for a 5700XT in this CPU-bound test was 98.22 with an R5 5600X, posted by the same guy who posted a 66-68 range average (there are two rows) when using an R5 3600.  I don't think it's a surprise to anyone - it shouldn't be, anyhow - that a 5600X will run the pants off a 3600.  And we're testing them with the same 5700XT in a scenario that's explicitly intended to be CPU bound... so yes, I'd fully expect a better outcome with a (much) better CPU in this test.

 

Because testers who don't have 4K monitors cannot post higher-resolution (monitor) results, the test can't really offer a comparison where higher-end, newer GPUs are almost certain to perform better.  There's only *one* user who posted a 4K test (or VR, for that matter) using a 5700XT.  He posted two results with very different average FPS, where the only difference the data shows is that the Uncore changed from 800 to 2400.  Somehow, I just find that... well, odd.  This same person had also posted a result (the records aren't dated) showing that his RAM wasn't even running at rated speed (looking over his posts, it's because he wasn't using an XMP profile).  I'm not basing my overall conclusion on tests performed by someone who didn't even have the test platform configured properly.  TBH, these records seem... well, exceptional to me.  I'd want to know a lot more than the spreadsheet really shows before I'd say they were reliable.

 

If I were doing an analysis at work and had to show my boss this data, I'd more than likely toss the only records that show the 5700XT at 4K, because they appear questionable.  That's pending more detail, of course, but the point is that, where data analysis is concerned, you don't really want to include stuff that appears very unusual; often you throw out the highest and lowest results anyway.

 

The problem with all this is that someone reading these comments might very well conclude a 5700XT is as good as a 6900XT...and I'm fairly sure that's just not true.

 

Of course, as your comments have shown, if someone isn't actually looking at the details, they might conclude exactly that.

 

To me, the data says far more about the nature of (some parts of) this sim than it does about the hardware.  The fact is, there are times when the sim is pretty horrible at CPU utilization.  Since only about a third of the records show the tester has a 4K monitor, the testers without 4K monitors are inherently restricted to the test where, by design, a CPU-bound scenario is created, which means the GPU is going to be sitting there, bored.  As I explained above, I have seen, and can reliably reproduce, other scenarios that demonstrate this very clearly.

 

(Kindly note I am not considering VR tests here, as this thread indicates it represents "IL-2 performance in monitor & VR").

 

In summary, I don't think it's accurate to conclude a 6900XT is 'the same or slightly worse' than a 5700XT except *possibly* in a scenario that says more about the sim than the hardware...and, unless I'm missing something, the hardware specs (i.e. bandwidth) on the 5700XT and 6900XT are not the same.

Edited by NoBreaks
dgiatr
Posted (edited)
12 hours ago, dgiatr said:

I have a question to ask about RAM overclocking... [full post quoted above]
 

On 12/25/2022 at 12:40 PM, chiliwili69 said:

Thank you. Those tests are interesting not only for being the first 7900 XTX card tested here but also the first Quest 2. [...]

Just checked my memory benchmark scores via AIDA64:

 

before RAM overclocking (XMP profile 1, clock 3200 MHz, timings 14-14-14-34)

memory latency: 47.2 ns

memory copy: 41097 MB/s

memory read: 46672 MB/s

memory write: 45942 MB/s

 

after RAM overclocking (RAM clock 3733 MHz, timings 17-15-15-36)

memory latency: 45.6 ns

memory copy: 44841 MB/s

memory read: 52056 MB/s

memory write: 52535 MB/s

 

I don't think I can do any better with the memory overclocking...

What do you think?
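
In percentage terms, the two runs compare like this (a trivial Python sketch over the numbers as posted):

```
before = {"latency ns": 47.2, "copy": 41097, "read": 46672, "write": 45942}
after  = {"latency ns": 45.6, "copy": 44841, "read": 52056, "write": 52535}

for key in before:
    delta = (after[key] - before[key]) / before[key] * 100
    print(f"{key}: {delta:+.1f}%")
# latency ns: -3.4% (lower is better), copy: +9.1%, read: +11.5%, write: +14.4%
```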

 

 

 


Edited by dgiatr
dgiatr
Posted
Did some more overclocking, so the final CPU speed is 5.1 GHz, plus RAM overclocking to 3733 MHz with 17-15-15-36 timings, but I don't get any more fps in the 1080p CPU SYN_Vander benchmark than when I first overclocked from 4.5 GHz to 5 GHz (I had about a 9-10 fps gain then). I didn't overclock the CPU cache though, because it makes the CPU unstable, I don't know why. Should it make any difference if I manage to overclock the CPU cache?
 

 

 
2022-12-27 18:28:00 - Il-2
Frames: 5947 - Time: 60000ms - Avg: 99.117 - Min: 83 - Max: 140
 
2022-12-27 23:07:31 - Il-2
Frames: 5921 - Time: 60000ms - Avg: 98.683 - Min: 83 - Max: 138
 
2022-12-28 21:48:51 - Il-2
Frames: 5911 - Time: 60000ms - Avg: 98.517 - Min: 85 - Max: 137
 
2022-12-28 21:52:07 - Il-2
Frames: 5915 - Time: 60000ms - Avg: 98.583 - Min: 84 - Max: 136
NoBreaks
Posted (edited)

@dgiatr

 

What you're seeing is fairly common in spite of all the talk you might hear about "overclocking"...getting a little extra out of the CPU with a reasonable OC is fairly straightforward.  Worth it - to some people - if there's adequate cooling.  Still finite, though...there's only so much it can do.

 

RAM is typically best run at an XMP profile (unless you **really** want to tinker with it a lot).   I have been working on computers professionally for 40+ years and have overclocked a lot of them... but I basically quit messing with RAM because the gain honestly isn't usually worth the effort and the risk to stability - for me.

 

We also have to consider that any of these gains is *relatively* small, and at a certain point (as you're seeing) it's not going to make much of a *perceptible* difference in performance.  I defy *anyone* to show proof they can reliably discern between 98 and 99 FPS.
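
(For perspective, the frame-time gap between those two rates is about a tenth of a millisecond:)

```
# Frame-time difference between 98 and 99 FPS, in milliseconds per frame.
print(1000 / 98 - 1000 / 99)  # ~0.103 ms
```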

 

And yes, I suspect you're right about the CAS latency:  it's usually a tradeoff between clock speed and CAS, and changing one usually means the other will have to change to stay stable.  Technically, there's just not much gain (as you've seen) between 3200 CAS14 and 3733 CAS17.  In fact, there's a formula:

 

RL = CL * 2000 / DR

where RL = effective or "absolute" latency, in nanoseconds

CL = Column Address Strobe (CAS) rate

DR = data rate (speed, in MT/s)

17 * 2000 / 3733 = 9.108

14 * 2000 / 3200 = 8.75
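
The same arithmetic as a tiny Python helper, for plugging in other kits:

```
def first_word_latency_ns(cas, data_rate_mt_s):
    """Absolute (first word) latency in nanoseconds: RL = CL * 2000 / DR."""
    return cas * 2000 / data_rate_mt_s

print(first_word_latency_ns(17, 3733))  # ~9.11 ns - the OC profile
print(first_word_latency_ns(14, 3200))  #  8.75 ns - the XMP profile
```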

 

It's worth noting that you have actually *increased* the absolute latency, which is generally considered undesirable.

 

Still, it equates to around a 4% difference in absolute latency, which is still a small part of overall RAM performance - which is, itself, a small factor in overall system performance.  By the time it all boils down, I would imagine the difference is imperceptible for all intents and purposes... which is what I think you're seeing :)

 

 

Edited by NoBreaks
chiliwili69
Posted (edited)
On 12/27/2022 at 6:09 PM, Voyager said:

One of the many things they changed in RDNA3 was to go from a 256-bit bus to a 384-bit bus, like the nVidia cards have had for a while.

 

Good observation. The 7900 XTX has a 384-bit bus, and in the only test we have so far the CPU test result is not affected. It would be really good to have an alternative brand to NVIDIA for IL-2. Let's see if more tests confirm that.

 

In previous tests (about 12 tests) of the AMD cards (5700XT, 6800XT, 6900XT, 6950XT), the CPU test performance was always lower for the same CPU compared with the NVIDIA cards.

 

There were several clear samples:

 

Sample 1: Here @thermoregulator went from a 6800XT to a theoretically inferior 2080Ti (according to GPU Passmark) but got +25fps in the CPU test.


 

Sample 2: Here two people with the same CPU (@RAAF492SQNOz_Steve, @102nd-YU-cmirko) and the G2 got very different results in VRtest2.


 

Sample 3: Here @BeastyBaiter went from a 6950XT to a theoretically inferior 2080S but got +8fps in the CPU test and +9fps in VRtest1.


 

Sample 4: Here two people with the same CPU (@HankCzinaski, @shirazjohn) and theoretically equal GPUs, but the 1080Ti got +22fps in the CPU test.


 

If I remember correctly (I could be wrong), in all the AMD cards tested with SYN_VANDER and previous IL-2 benchmarks, the AMD cards were ALWAYS giving lower performance than the NVIDIA equivalent (Passmark-wise). That's why we were not recommending AMD cards for IL-2 in general.

 

This could now change with that promising first test of the 7900 XTX.

Edited by chiliwili69
chiliwili69
Posted
On 12/29/2022 at 4:55 PM, NoBreaks said:

We also have to consider that any of these gains is *relatively* small, and at a certain point (as you're seeing) it's not going to make much of a *perceptible* difference in performance.  I defy *anyone* to show proof they can reliably discern between 98 and 99 FPS.

And yes, I suspect you're right about the CAS latency... [formula snipped]

 

Yes, RAM tuning is more difficult territory (for me), and the gains you obtain beyond the XMP profile are very small.

 

The RL, or First Word Latency, is reported in column S of the SYN_Vander table.

 

This is always the first thing to look at when you acquire RAM. Some providers include that value on their websites, so you can sort by FWL for each kind of RAM (DDR4, DDR5, etc.):

https://pcpartpicker.com/products/memory/#sort=fwl&b=ddr4
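
The same sort can be done by hand on any shortlist, using the RL = CL * 2000 / DR formula from the post above (a small Python sketch; the kit specs are just illustrative numbers):

```
kits = [("3200 CL14", 3200, 14), ("3600 CL16", 3600, 16), ("3733 CL17", 3733, 17)]
for name, speed, cl in sorted(kits, key=lambda k: k[2] * 2000 / k[1]):
    print(f"{name}: {cl * 2000 / speed:.2f} ns first word latency")
```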

dburne
Posted (edited)

AMD will not be a competitor to Nvidia in VR anytime soon, as best I can tell.  I would never recommend an AMD GPU for VR play.

RAM: best to stick with its XMP profile.

Best overclocking gain is 1) the CPU, followed to a lesser degree by 2) the GPU.

However, with the latest Intel CPUs and motherboards, I'm not sure that much is to be gained from overclocking the CPU.

Edited by dburne
NoBreaks
Posted
3 hours ago, dburne said:

AMD will not be a competitor to Nvidia in VR anytime soon, as best I can tell.  I would never recommend an AMD GPU for VR play.

If I may ask, what are you basing this on?  It would seem apparent that you'd likely never own an AMD GPU... so it seems safe to rule out first-hand experience.

 

As much as I respect that everyone has a right to their own opinions, I don't think it's unfair to suggest that there ought at least to be some kind of valid *reason* for holding a certain opinion.

 

What's your reasoning?  Any empirical data to corroborate? Links or other references?

 
