
SYN_Vander BENCHMARK v6 to measure IL-2 performance in monitor & VR



Dustybritches
Posted

Results with all benchmark settings as specified, except Steam SS, which is much higher than all the others: 3848x3148 at 150% SS, so more rendering load than your test. I wonder how much your card could do at these settings, for a true comparison on VR benchmark 2?


2022-08-07 12:27:05 - Il-2
Frames: 4471 - Time: 60000ms - Avg: 74.517 - Min: 62 - Max: 91
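For what it's worth, the Avg in these FRAPS-style result lines is just Frames divided by the elapsed seconds. A minimal sketch (the parser helper is my own, not part of any tool mentioned in this thread) that checks the line above:

```python
import re

def parse_result(line):
    """Extract the numbers from a 'Frames: ... - Time: ...ms - ...' line."""
    m = re.search(
        r"Frames: (\d+) - Time: (\d+)ms - Avg: ([\d.]+) - Min: (\d+) - Max: (\d+)",
        line)
    frames, time_ms, avg, fmin, fmax = m.groups()
    return int(frames), int(time_ms), float(avg), int(fmin), int(fmax)

frames, time_ms, avg, fmin, fmax = parse_result(
    "Frames: 4471 - Time: 60000ms - Avg: 74.517 - Min: 62 - Max: 91")
# Avg is frames per elapsed second: 4471 / 60 ≈ 74.517
assert abs(frames / (time_ms / 1000) - avg) < 0.001
```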

Posted
On 8/7/2022 at 9:42 PM, Dustybritches said:

Results with all benchmark settings as specified, except Steam SS, which is much higher than all the others: 3848x3148 at 150% SS, so more rendering load than your test. I wonder how much your card could do at these settings, for a true comparison on VR benchmark 2?


2022-08-07 12:27:05 - Il-2
Frames: 4471 - Time: 60000ms - Avg: 74.517 - Min: 62 - Max: 91

 

Hey! This is new to me. Your 6900XT is performing better than in any previous test of an AMD card. I wonder if the issue has been partially solved.

 

The VRtest2 is set to render 19.5 million pixels in total. For your Pimax 5K Super this is equivalent to SS at 123%:

[table: SS% equivalent to the 19.5-million-pixel VRtest2 budget, per headset]

 

But you are running the test at 150% and you get 74 avg fps, which is better than the 59 fps from previous 6900XT tests.

 

Other tests with a 3090 are in the range of 89 fps (Index) and 84 fps (G2), so still below Nvidia but closer than before.

 

In order to compare apples with apples you should run the test exactly as the instructions say (123% SS). Thanks.
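The arithmetic behind that equivalence can be sketched as follows. This assumes SteamVR's SS percentage scales total pixel count (area) rather than per-axis resolution, and the 100% pixel count for the Pimax is back-computed from the 123% figure quoted above, so treat both as illustrative:

```python
TARGET_PIXELS = 19_500_000  # total pixel budget (both eyes) that VRtest2 is set to render

def equivalent_ss(pixels_at_100pct):
    """SS% at which a given headset renders exactly the VRtest2 budget."""
    return 100 * TARGET_PIXELS / pixels_at_100pct

# Illustrative 100% pixel count for the Pimax 5K Super, derived from the 123% above
pimax_at_100 = TARGET_PIXELS / 1.23   # ~15.85 million pixels

ss = round(equivalent_ss(pimax_at_100))   # back to 123
extra = 1.50 / 1.23 - 1                   # a 150% run renders ~22% more pixels
```

That ~22% extra pixel load is why a run at 150% is not directly comparable with results gathered at the specified SS.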

 

 

Posted (edited)

Time to IL-2 bench the new GPU. Just swapped my 2070 Super for a Gigabyte RTX 3070 Eagle OC (LHR), which I undervolted to 0.9 V following the advice of helpful members of this community. That improved my power draw and temps pretty significantly. The 3DMark Time Spy score improved slightly, while Passmark's benchmark rated the card slightly worse after the undervolt.

 

It's set to 1890 MHz@0.9V, but runs at 1905 MHz for some reason. GDDR is OCed to 7600 MHz effective (+600 MHz).

 

Everything else in the system is unchanged.

 

My Specs:
 Motherboard: MSI MAG Z690 Tomahawk Wifi DDR4
 CPU: Intel i5-12600K, slightly OCed via BIOS turbo-offset (+1) on the P- and E-cores, cooled by a Lian Li Galahad 240
 CPU Freq: P-cores max 1080 run: 4.98 GHz, P-cores max in VR test1: 4.78 GHz
 L3 cache: 20 MB
 Cores: 6P + 4E = 10
 Threads: 16
 RAM type: Corsair Vengeance Pro RGB, 3600 DDR4 (XMP is on of course)
 RAM size:  2x8 GB, dual channel
 NB Freq: 3600 MHz
 RAM Freq: 3600 MHz
 RAM Latency:  CL18
 GPU: Gigabyte RTX 3070 Eagle OC LHR. Effective clock speed: 1905 MHz @0.9V, GDDR: 7600 MHz.

 


OS is Win 11 Pro

 

Result in the 1080p run:

 

2022-08-14 15:06:38 - Il-2
Frames: 6555 - Time: 60000ms - Avg: 109.250 - Min: 96 - Max: 160

 

 

 

EDIT: And just because I was bored, I ran the 4k test on my 1440p monitor as well:

 

2022-08-14 15:28:26 - Il-2
Frames: 9565 - Time: 60000ms - Avg: 159.417 - Min: 137 - Max: 227

 

 

Haven't gotten around to running the VR test yet. It's still freakin hot around here and I don't feel like putting that thing on my face just yet...

 

S.

Edited by 1Sascha
Dustybritches
Posted (edited)

124% SS, as 123% couldn't be selected.

2022-08-14 17:17:00 - Il-2
Frames: 5380 - Time: 60000ms - Avg: 89.667 - Min: 78 - Max: 106    Balanced setting

2022-08-14 17:45:12 - Il-2
Frames: 5002 - Time: 60000ms - Avg: 83.367 - Min: 75 - Max: 105     High setting

150%SS High setting
2022-08-14 17:51:50 - Il-2
Frames: 4916 - Time: 60000ms - Avg: 81.933 - Min: 64 - Max: 95

Edited by Dustybritches
Posted
On 8/14/2022 at 3:18 PM, 1Sascha said:

 RAM Latency:  CL18

 

Thank you for re-running the test with your new 3070.

Could you give the other memory latencies reported by CPU-Z memory tab?

Let's see what you achieve in the VR tests.

On 8/15/2022 at 2:21 AM, Dustybritches said:

Frames: 5002 - Time: 60000ms - Avg: 83.367 - Min: 75 - Max: 105     High setting

 

Thanks for running the VR tests at 124%, but I think you ran them at 120 Hz, since the max fps is 105 (not 90 or 91).

The VR test must be done at 90Hz, otherwise the Avg value is affected.

Posted
1 hour ago, chiliwili69 said:

Could you give the other memory latencies reported by CPU-Z memory tab?

[screenshots: CPU-Z Memory and SPD tabs]

Posted (edited)
12 hours ago, chiliwili69 said:

Let´s see what you achieve in VR tests.

2022-08-17 23:15:42 - Il-2
Frames: 3191 - Time: 60000ms - Avg: 53.183 - Min: 38 - Max: 81

 

Rift-S, SS set to 184% in Steam VR.

 

What I don't quite get is that my GPU wasn't really working all that hard - neither during the benchmark nor with my in-game settings (reduced SS from the 184% used during the bench-run, increased eye-candy). It peaked at only 157W power draw and 90% utilization and temps peaked at 55°C/68.3°C (GPU/hotspot). And all that with a very stable 80 FPS with my gaming settings, which leads me to believe I could probably crank up the SS and/or switch on more eye-candy. Even at 184% SS the card never exceeded those values and didn't run close to its TDP. The card is undervolted but I did reach over 200W power consumption in benchmarks... weird.

 

 

 

S.

Edited by 1Sascha
Posted
40 minutes ago, 1Sascha said:

2022-08-17 23:15:42 - Il-2
Frames: 3191 - Time: 60000ms - Avg: 53.183 - Min: 38 - Max: 81

 

Rift-S, SS set to 184% in Steam VR.

 

What I don't quite get is that my GPU wasn't really working all that hard - neither during the benchmark nor with my in-game settings (reduced SS from the 184% used during the bench-run, increased eye-candy). It peaked at only 157W power draw and 90% utilization and temps peaked at 55°C/68.3°C (GPU/hotspot). And all that with a very stable 80 FPS with my gaming settings, which leads me to believe I could probably crank up the SS and/or switch on more eye-candy. Even at 184% SS the card never exceeded those values and didn't run close to its TDP. The card is undervolted but I did reach over 200W power consumption in benchmarks... weird.

 

 

 

S.

 

GPU is not going to work hard playing a trk file for benchmarking.

Posted (edited)
13 hours ago, dburne said:

GPU is not going to work hard playing a trk file for benchmarking.

Well, yeah... but I did play the game after the benchmark run (reduced SS to more sensible levels and upped the eye-candy after the bench run was done) and there too the card didn't really get pushed that hard. Like I said: It was the same during the benchmark-run and afterwards actually playing the game. Utilization never higher than 90% and power-draw and GPU heat were surprisingly low. Guess I'll crank the preset up to Ultra, switch on more bells and whistles and increase SS a bit more. Maybe that will push the card closer to its limits.

 

 

 

S.

Edited by 1Sascha
Posted
21 hours ago, 1Sascha said:

Avg: 53.183 - Min: 38 - Max: 81

 

Uhmm.  I would expect a bit more for your system.

There are no other i5-12600K CPUs to compare with, nor other Rift-S tests.

The VRtest1 is "only" rendering 9.5 million pixels and this should be handled quite well by a 3070. And if you say that the GPU was always below 90%, then something else is constraining this test from achieving 80 fps avg.

 

You can run the benchmark with just 100% SS and with 374% SS (VRtest2) and see what you achieve.

 

After all these years of testing IL-2, especially in VR, we know that it is sensitive to RAM latencies and also to CPU cache size.

 

Your i5-12600K has a cache of 20 MB, whereas the 12900K has a cache of 30 MB (and also a higher turbo, and the people who tested it had better RAM), but perhaps the cache is the bigger advantage.

 

RAM speed plays a role, but I think latencies are even more important for IL-2. It is not an exact science, since we have done many tests across many different CPU/GPU/device combinations, but if you have a chance, swap your RAM for a kit with better latencies, like any of these models tested for your mobo:

 

https://www.gskill.com/configurator?page=1&cls=1529635169&manufacturer=1524715120&chipset=1635734584&model=1635736507&adSearch2=Capacity§16GB (8GBx2),Tested_Speed§3600 MT/s,Tested_Speed§4000 MT/s,Tested_Latency§14-14-14-34,Tested_Latency§14-15-15-35,

 

For example:

https://www.gskill.com/product/165/166/1620975084/F4-3600C14D-16GTZRA

or

https://www.gskill.com/product/165/166/1620974880/F4-4000C14D-16GTZR

 

In the EU you always have a period of time to test the RAM, and if you are not happy you can send it back.

 

Posted
22 hours ago, 1Sascha said:

Well, yeah... but I did play the game after the benchmark run (reduced SS to more sensible levels and upped the eye-candy after the bench run was done) and there too the card didn't really get pushed that hard. Like I said: It was the same during the benchmark-run and afterwards actually playing the game. Utilization never higher than 90% and power-draw and GPU heat were surprisingly low. Guess I'll crank the preset up to Ultra, switch on more bells and whistles and increase SS a bit more. Maybe that will push the card closer to its limits.

 

 

 

S.

 

Main thing is to get it to where it is a smooth and enjoyable experience for you, and then just have fun and enjoy the game. After all the rest is just - chasing some numbers that may or may not mean anything. At least that is what I do and am happier for it. More time playing equals less time testing.

Posted (edited)
23 minutes ago, dburne said:

Main thing is to get it to where it is a smooth and enjoyable experience for you, and then just have fun and enjoy the game. After all the rest is just - chasing some numbers that may or may not mean anything. At least that is what I do and am happier for it. More time playing equals less time testing.

Oh yeah... I was just wondering about the card being.. errr.. underused? Maybe because I undervolted it, but I can't be sure. Perhaps I should revert to stock voltage and see what happens in the VR-run. But then again: I did get a few hundred marks more in Time Spy, a few hundred less in Passmark's 2d and 3d tests, so I was under the impression that, compared to stock, the undervolt gave me more or less comparable performance but reduced power draw and heat.

 

*shrug*

 

Should I rather crank up IL-2's graphic settings or up my SS to improve picture quality? I'm pretty sure I've got some headroom available since when I applied my own settings again (lowering pixel density to 1.1 or so), my FPS never dropped below 80 and, like I said, the card never got above 90% utilization. But I'm not sure I'll be able to go nuts on both in-game settings and SS.

 

 

 

2 hours ago, chiliwili69 said:

Uhmm.  I would expect a bit more for your system.

I'm fairly certain I set everything up as specified in your initial post... SS to 184% in Steam VR, Legacy Reprojection off in Steam VR, ASW disabled in Tray Tool and all in-game graphics settings according to your screenshot. But I'll double-check. My RAM seems to be running as it should (3600 more or less), but those sticks are only CL18s, so ... *shrug again*

 

S.

Edited by 1Sascha
Posted
22 hours ago, 1Sascha said:

I'm fairly certain I set everything up as specified in your initial post... SS to 184% in Steam VR, Legacy Reprojection off in Steam VR, ASW disabled in Tray Tool and all in-game graphics settings according to your screenshot. But I'll double check

Remember to set PD=1 in OTT, and check the pixel count in SteamVR: with 184% it should be something like 2235x2409 per eye (about 10.7 million pixels).
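Those quoted numbers are self-consistent, again assuming the SS percentage scales total pixel area; a quick check (the ~5.8 million figure also matches what is reported later in the thread for the 100% run):

```python
# SteamVR readout on the Rift-S at 184% SS: 2235x2409 per eye
per_eye = 2235 * 2409
both_eyes = 2 * per_eye        # 10,768,230 pixels -> "about 10.7 million"
at_100pct = both_eyes / 1.84   # ~5.85 million pixels at 100% SS
```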

But if you run those two extra tests (at 100%SS and at 374%) then we will have more data to try to guess why your system is giving that performance.

 

Please, if there is anyone else with a Rift-S or an i5-12600K, it would be good to have more tests just for comparison.

Posted (edited)
On 8/19/2022 at 9:36 PM, chiliwili69 said:

But if you run those two extra tests (at 100%SS and at 374%) then we will have more data to try to guess why your system is giving that performance.

Haven't done the VR test again, but I did run some more benchmarks after altering my undervolt and clock-speeds a bit.

 

Passmark does a whole system benchmark, including memory, and it doesn't seem to think my RAM is particularly slow.

 

[screenshot: Passmark full-system benchmark results]

 

It's not as highly rated as most of the rest of the system, but enough to get it into the 86th percentile... weird. Perhaps I've overlooked some other setting. Here are my settings in OTT; SS/pixel density seems to be set correctly in there.

 

[screenshot: OTT settings]

 

 

... and here are my SteamVR settings (not much to do here, except set SS and disable that legacy thing).

 

[screenshot: SteamVR settings]

 

S.

Re-ran test 1 with 184% SS:

 

[screenshot: test 1 re-run at 184% SS]

 

Result is virtually the same, even though I upped voltage, GPU freq and GDDR freq a bit compared to the last run.

 

2022-08-21 16:18:16 - Il-2
Frames: 3122 - Time: 60000ms - Avg: 52.033 - Min: 38 - Max: 81

 

 

Gonna edit here once I have the other two runs complete (don't want to keep my browser open for that...).

 

S.

 

 

Crap.. I wish the forum would stop merging my posts... makes things a bit confusing.

 

Update:

Ok.. this is officially weird. Re-ran the test at 100% SS in SteamVR (everything else unchanged) and:

 

2022-08-21 16:27:26 - Il-2
Frames: 3717 - Time: 60000ms - Avg: 61.950 - Min: 40 - Max: 81

 

Would've expected a bit more difference in results between 100 and 184%.

 

BTW: I did confirm via OTT that in-game PD was indeed at 1.36 with 184% and 1.00 with 100%. Also that ASW was disabled the whole time.

 

 

Temps and GPU power-draw were quite a bit lower though ... 54/67°C and 125W.

 

Going to run the 300+ SS insane one, hoping my PC won't blow up..

 

 

 

S.

 

 

 

 

Edited by 1Sascha
Posted (edited)

... getting weirder still.

 

With 374 % SS, it didn't do much worse. Confirmed again that PD was indeed at 1.96 according to OTT.

 

2022-08-21 16:37:11 - Il-2
Frames: 3357 - Time: 60000ms - Avg: 55.950 - Min: 39 - Max: 73

 

The card got taxed a bit more (up to 95% usage, before it never went above 90%) and power draw was higher at 196W. Temps were at 62 and 76 for GPU and hotspot.

 

 

EDIT:

And just to see what would happen, I set everything up as I would actually play the game and ran the test again. SS down to 140%, lots of eye-candy turned on (but no FXAA or MSAA) and:

 

2022-08-21 16:47:57 - Il-2
Frames: 3038 - Time: 60000ms - Avg: 50.633 - Min: 38 - Max: 81

 

This time the card wasn't even breaking a sweat. 70% usage, 143W max draw and 55/68°C on GPU/hotspot.

 

 

And now I'm thoroughly confused and thinking that I'm missing some sort of vital setting to get the most out of my GPU...

 

Maybe all this has to do with the undervolt, since with the Afterburner curve the card cannot boost beyond what I've set as its max frequency (1920 MHz IIRC). Never tried IL-2 without the undervolt, so I don't know how high it would've boosted with stock voltage. But in other games with just Afterburner's Auto-OC and without undervolting, I only got 1950 MHz max frequency (and the card would run into its power limit). Plus 3DMark and Passmark rate the card at basically the same performance with the Auto-OC and the undervolt: 3DMark's score is a little higher undervolted, Passmark's is about the same... *and* with the Auto-OC the video RAM was running at 7000 MHz (stock), while I now have it set to 7800.

 

 

S.

 

Edited by 1Sascha
Posted
On 8/21/2022 at 4:02 PM, 1Sascha said:

including memory, and it doesn't seem to think my RAM is particularly slow.

 

Thank you for providing additional data. Those percentiles are not really useful, since they compare your mark against all the RAM tests performed over the years. So it is easy to land in the 85th-95th percentile.

 

 

Posted
On 8/21/2022 at 4:41 PM, 1Sascha said:

And now I'm thoroughly confused and thinking that I'm missing some sort of vital setting to get the most out of my GPU

Thanks again for running those extra VR tests with your Rift-S. Let's analyze the results.

 

You got:

 

SS=100%:  Frames: 3717 - Time: 60000ms - Avg: 61.950 - Min: 40 - Max: 81

SS=184%:  Frames: 3122 - Time: 60000ms - Avg: 52.033 - Min: 38 - Max: 81

SS=374%:  Frames: 3357 - Time: 60000ms - Avg: 55.950 - Min: 39 - Max: 73

 

Based on the 374% result I would say that your 3070 should not be the bottleneck of your system in normal usage (for example, SS between 100% and 150%).

In the 374% test your GPU was really constraining the system, but it was still able to give an average of 56 fps, which is very good performance for that card (this resolution, 19.5 million pixels, is equivalent to the Reverb G2 at 100% and 90 Hz).

 

The second clue is given by the test at 100% SS: this is just 5.8 million pixels, so your GPU is really relaxed here, but even at that low resolution your system only reaches about 60 fps avg.

 

So I think the bottleneck is not in the GPU, and you can forget about tuning the GPU any further. The problem is not there.
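A back-of-the-envelope way to see the same thing: if the GPU were the limiter, avg fps should fall roughly in inverse proportion to pixels rendered. Using the approximate pixel counts quoted in this thread (the inverse-scaling model is a deliberate simplification, not a measured GPU characteristic):

```python
# SS% -> (approx. total megapixels, measured avg fps) for the three Rift-S runs
runs = {100: (5.8, 61.95), 184: (10.7, 52.03), 374: (19.5, 55.95)}

mp_100, fps_100 = runs[100]
mp_374, fps_374 = runs[374]

# If purely GPU-bound, fps at 374% would scale down with the pixel ratio:
predicted_374 = fps_100 * mp_100 / mp_374   # ~18 fps
# Measured avg at 374% is ~56 fps, about 3x the GPU-bound prediction,
# so something other than the GPU is setting the ceiling.
```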

 

So, where could it be?

 

I created a post to find some potential reasons, look here.

 

In your case I think it could be:

 

- Something related to the CPU (throttling due to temperature, amperage or wattage). Try disabling the E-cores, so IL-2 is forced to run on the P-cores for sure.

- Memory latency: your RAM runs at a good speed (3600) but the latency is not great. Do you have a friend or colleague who can lend you better RAM to try?

- Mobo bus speed: I don't know the details of your mobo, but sometimes the fast bus is shared with other devices (SSDs) and bottlenecks the system. Review your mobo manual for GPU bus speed and RAM slots.

- OTT: I used OTT in the past (many years ago), but I ended up uninstalling it since some settings were still present after changes. Do you really need OTT?

- W11: I noted you run W11. Not many tests have been done with W11, only @DBCOOPER011's, but he got the best benchmark results in VR to date. So perhaps W11 is not the reason.

- Related to Oculus software/hardware: could you try to run the test with another device borrowed from a friend or colleague?

 

For the forum: is there anyone with a Rift-S or a 12600K wanting to run a VR test?

Posted

@1Sascha

 

Some very basic things you might check before getting too deep in this.

Make sure both HAGS and Game Mode are set to off in Windows.

Have a profile in the Nvidia Control Panel for IL-2.exe, and set its power management mode to Prefer Maximum Performance.

In that same Nvidia Profile you might set VR Pre-rendered frames to 2 and see if that helps.

I would also note running a benchmark trk is not going to pump you up to maximum boost on GPU. More important is how it performs for you playing the game.

Your goal, I am sure, is a smooth, stutter-free experience. And there is a plethora of settings you can mess with to try and achieve that.

Be methodical in your changes and testing and shoot not necessarily for FPS, but a smooth experience. Frame-times will tell you more on this than FPS will.

 

Taking fps out of the equation you certainly should be able to achieve a relatively smooth VR experience in IL-2 with your system. Start very low with your GFX settings and slowly work your way up checking performance in between.

 

Just my nickel's worth...

;)

 

 

 

Posted
13 hours ago, chiliwili69 said:

- Mobo bus speed: I don't know the details of your mobo, but sometimes the fast bus is shared with other devices (SSDs) and bottlenecks the system. Review your mobo manual for GPU bus speed and RAM slots.

The GPU is in the correct slot - meaning the fastest slot on the board (the top one, up to 5.0 x16 supported). My system M.2 drive is in the CPU-connected top slot, which is also one of the fastest ones on the board (supports up to PCIe 4.0 x4 - my 970 Evo Plus NVMe only supports 3.0 though, so...). There are two other M.2 slots on the board that support 4.0, but those run through the chipset.

 

The mobo's manual makes it quite clear which RAM slots to use for dual channel operation and I just double-checked that the correct slots are indeed populated.

 

Considering my results in various benchmarks and in other games, I'm not really worried that the system isn't giving me optimal performance. What I will do is double-check my BIOS settings, since I'm using a slight auto overclock on my CPU. Might want to look into going back to stock and/or doing this stuff manually and applying an undervolt if that makes sense in this case. The mobo should be more than capable of giving me all sorts of manual options (MSI Z690 Tomahawk DDR4), but I'm kinda reluctant to start messing with things without fully understanding what I'm doing. Don't want to fry my CPU, MoBo or PSU... :)

 

 

S.

Posted
19 hours ago, dburne said:

I would also note running a benchmark trk is not going to pump you up to maximum boost on GPU.

 

Just for clarification: the current SYN_Vander benchmark (created by @SYN_Vander) is not simply a recorded track as we used to do in previous benchmarks (Remagen, Chili, Samuel, Balapan). The SYN_Vander is a mission, and all activity is computed as in a real mission, so CPU & GPU resources are used as in a normal mission. The GPU load will depend on the settings and SS% (which have been fixed in VRtest1 and VRtest2 so we compare apples with apples).

 

In fact, the VRtest2 has been designed to render 19.5 million pixels, which is a considerable amount and pushes most GPUs to full load (maybe the 3080 Ti, 3090 and 3090 Ti will not be fully loaded).

Posted (edited)
8 hours ago, 1Sascha said:

Don't want to fry my CPU, MoBo or PSU... :)

See...? I went back into the BIOS and remembered that I had simply activated MSI's game boost to overclock - which gives you a turbo offset of +1 on both the P- and E-cores and sets AVX to -3. I went back to stock and re-ran some CPU benchmarks and saw that game boost only got me a few percent of added score at the cost of a few Watts and slightly higher temps.

 

Went back to stock but, just to see what would happen and because my CPU never got into "turbo"-boost territory in any of the benchmarks or games I use (never reached close to the 12600K's 150W short-term TDP), I enabled MSI's "Enhanced Turbo Function". And ... oh boy ... I ran Passmark's CPU test, and when the encryption test rolled around, the CPU/core temps shot up to 100°C and turned red in HWMon ... I kinda, sorta freaked out a bit, so I aborted the benchmark and went back to stock in the BIOS.

 

Next I re-enabled the Game Boost Auto-OC, disabled both Windows Gaming Mode and Hags and didn't find any significant changes in the benchmark results. What did improve performance and slightly raised temps and power draw however was setting Windows power-plan from "high performance" to "balanced". Not only did the CPU now actually throttle down significantly at idle and under light loads but it also seems to get boosted a bit higher at high loads. Still didn't go into 150W/turbo-boost territory, but, hey... it was kinda weird to see that my result in Time Spy for example was actually a bit better with the balanced power plan than with the high performance one.

 

 

S.

 

Edited by 1Sascha
Posted

The important number from the Passmark bench for IL-2 is the single-threaded one. What value did you get? Is it aligned with other i5-12600Ks?

 

I normally use MSI Afterburner's monitoring graphs to track everything, not only GPU-related items but also CPU items like temperatures, clocks, etc. That way you can check when turbo or OC is applied while running a test or playing the game.

 

Here I was comparing the temps induced by the Prime95 stress test with those from IL-2 VR. In general, running IL-2 should not raise the temps a lot.

 

Posted (edited)
2 hours ago, chiliwili69 said:

What value did you get? is aligned with others i5-12600K?

3955 for me; the average result according to Passmark's stats for the 12600K is 3978, so I'm right where I should be, I reckon.

 

I should probably go for a per-core overclock (one P-core is a bit touchy, another can be OCed pretty far and the others seem average, judging by their temps under load). And I should also probably play around with voltage ... at least that's what most of the OC-videos and articles I found suggest for 12th gen Intel CPUs.

 

For now, I think I'm content with just the simple turbo-offset. Like I said: Don't want to start messing with voltages and accidentally killing any components... :)

 

 

 

S.

 

   
Edited by 1Sascha
Posted
6 hours ago, chiliwili69 said:

 

Just for clarification: the current SYN_Vander benchmark (created by @SYN_Vander) is not simply a recorded track as we used to do in previous benchmarks (Remagen, Chili, Samuel, Balapan). The SYN_Vander is a mission, and all activity is computed as in a real mission, so CPU & GPU resources are used as in a normal mission. The GPU load will depend on the settings and SS% (which have been fixed in VRtest1 and VRtest2 so we compare apples with apples).

 

In fact, the VRtest2 has been designed to render 19.5 million pixels, which is a considerable amount and pushes most GPUs to full load (maybe the 3080 Ti, 3090 and 3090 Ti will not be fully loaded).

 

 

It is easy enough to check, just compare boost clock running the benchmark versus boost clock actually playing the game. Hey if they are the same then yeah no negative impact. There are so many variables in systems, how they are set up, what programs are being run, what settings in Windows have been made, what overclocking of GPU, CPU and Ram have been done, etc etc. Really hard to get a solid apples to apples comparison amongst rigs, even if they seem identical.

Posted
9 hours ago, 1Sascha said:

3955 for me; the average result according to Passmark's stats for the 12600K is 3978, so I'm right where I should be, I reckon.

 

That's a very good mark for the 12600K, not too far from the 12900K. Your CPU is performing as expected, then.

 

My Ryzen 5600X scores 3357, quite a bit below your ST mark, but it delivers around 110 fps avg in the IL-2 SYN_Vander monitor test. You are also at almost exactly the same avg.

Normally, the higher the ST mark, the higher the avg fps in the IL-2 SYN_Vander monitor test.

 

There are two things that could have an influence:

 

Cache: yours is 20 MB and mine is 32 MB

 

RAM: yours is 3600  with 18-22-22-42, mine is 3600 with 16-16-16-36

 

I don't know if the reason is the cache, the RAM, both, or something else.

 

Perhaps, with more 12600K or RiftS sample tests from other users we can conclude something.

Posted (edited)

 

I have been taking a closer look at the tests performed by @Voyager (many thanks), since he was experimenting with cache (from a 5800X with 32 MB to a 5800X3D with 100 MB) and also with RAM frequencies/latencies on two VR devices (G1 and VP2).

 

The results were:

[table: Voyager's benchmark results across cache/RAM configurations]

 

For the L3 cache increase (32 MB → 100 MB):

He got +10 fps on the G1 in all tests (monitor, VRtest1, VRtest2), and surprisingly +27 fps in VRtest1 on the VP2.

 

For the RAM increase (2666 → 3600 MHz and CL20 → CL16):

He got +10 fps on the G1 in all VR tests, and +10 fps in VRtest1 on the VP2.

 

So, based on this data, we can conclude that cache and RAM are important factors for IL-2. That is fairly logical, but now we have quantified evidence.

Edited by chiliwili69
Posted

I have tested the new version 5.001 with the SYN_Vander benchmark.

 

CPU: 5600X stock settings

RAM: 32 GB at 3600 MHz, 16-16-16-36

GPU: 3080 stock settings

 

In principle all performance is as before, no gain/loss in fps.

Changes in the Damage Model are not checked here, since the benchmark is a dense scenario but with no bullets at all.

 

CPU 1080p test
Frames: 7008 - Time: 60000ms - Avg: 116.800 - Min: 98 - Max: 168
Frames: 7045 - Time: 60000ms - Avg: 117.417 - Min: 100 - Max: 168
Frames: 7074 - Time: 60000ms - Avg: 117.900 - Min: 97 - Max: 175

 

GPU 4K test
Frames: 7454 - Time: 60000ms - Avg: 124.233 - Min: 93 - Max: 182
Frames: 7490 - Time: 60000ms - Avg: 124.833 - Min: 91 - Max: 184
Frames: 7478 - Time: 60000ms - Avg: 124.633 - Min: 87 - Max: 186

 

VRTEST 1 (Index at 106%SS)
Frames: 5129 - Time: 60000ms - Avg: 85.483 - Min: 74 - Max: 91
Frames: 5124 - Time: 60000ms - Avg: 85.400 - Min: 73 - Max: 91
Frames: 5159 - Time: 60000ms - Avg: 85.983 - Min: 74 - Max: 91

 

VRTEST 2 (Index at 216%SS)
Frames: 4978 - Time: 60000ms - Avg: 82.967 - Min: 52 - Max: 92
Frames: 4935 - Time: 60000ms - Avg: 82.250 - Min: 57 - Max: 91
Frames: 4905 - Time: 60000ms - Avg: 81.750 - Min: 57 - Max: 91

 

I also have tested the Normandy map with my usual settings (Index at 150% SS and 80 Hz), and Quick Mission and Career run pretty well, even over dense cities or with 20 planes. My usual settings are: ULTRA preset, screen 1920x1080, Shadows High, Mirrors off, landscape x4, Horizon 150 km, Land filter off, Grass normal, Clouds High, FXAA x2, Sharpen ON, 4K textures ON.

Posted
On 1/15/2022 at 8:22 PM, HankCzinaski said:

Motherboard:   MSI B450 Tomahawk Max


CPU:                  Ryzen 5 3600
CPU Freq:         3.6 Ghz

L3 cache:          2x16 MB
Cores:               6
Threads:           12

Cooling:            Cooler Master Hyper 212 Black Edition

 

RAM:                 AData XPG GAMMIX D10

RAM type:         DDR4
RAM size:          16Gb (2x8GB)

Uncore Freq:    1600 MHz
RAM Freq:         DRAM 1600 MHz   ->   3200 MHz
RAM timings:    16-18-18-36-74

 

GPU:                  Gigabyte RX 5700 XT

 

Radeon drivers: 22.1.1

IL2 Version 4.702c

Benchmark V6

 

CPU Test 1080p:

Frames: 3963 - Time: 60000ms - Avg: 66.050 - Min: 57 - Max: 96

Before upgrading my Ryzen 3600 to a 5600X, I ran the benchmark again. Since the last test I re-pasted the CPU, installed new Radeon drivers (22.8.2), installed the new IL-2 version 5.001, and updated motherboard drivers in preparation for the CPU change.

 

Still with Ryzen 3600:

First test was with XMP off (I guess it was turned off after the mobo driver update):

Frames: 3867 - Time: 60000ms - Avg: 64.450 - Min: 55 - Max: 101

 

Second test with XMP on:

Frames: 4073 - Time: 60000ms - Avg: 67.883 - Min: 58 - Max: 105

 

 

Here's the results after upgrading to Ryzen 5600x:

Motherboard:   MSI B450 Tomahawk Max


CPU:                  Ryzen 5 5600X
CPU Freq:         3.7 Ghz

L3 cache:          32 MB
Cores:               6
Threads:           12

Cooling:            Cooler Master Hyper 212 Black Edition

 

RAM:                 AData XPG GAMMIX D10

RAM type:         DDR4
RAM size:          16Gb (2x8GB)

Uncore Freq:    1600 MHz
RAM Freq:         DRAM 1600 MHz   ->   3200 MHz
RAM timings:    16-18-18-36-74

 

GPU:                  Gigabyte RX 5700 XT

 

Radeon drivers: 22.8.2

IL2 Version 5.001

Benchmark V6

 

CPU Test 1080p:

Frames: 5893 - Time: 60000ms - Avg: 98.217 - Min: 83 - Max: 152

Posted
56 minutes ago, HankCzinaski said:

Frames: 5893 - Time: 60000ms - Avg: 98.217 - Min: 83 - Max: 152

 

Thanks and congrats on your achievement. You got about +30 fps just by changing the CPU from Zen 2 to Zen 3.

I believe you are still constrained by your GPU; as you may know, AMD GPUs have an issue with IL-2 and it impacts your performance.

If you just play on a monitor it doesn't matter, but if you play in VR then it is a different story.

Posted
On 9/18/2022 at 8:42 PM, chiliwili69 said:

 

Thanks and congrats on your achievement. You got about +30 fps just by changing the CPU from Zen 2 to Zen 3.

I believe you are still constrained by your GPU; as you may know, AMD GPUs have an issue with IL-2 and it impacts your performance.

If you just play on a monitor it doesn't matter, but if you play in VR then it is a different story.

Hello to all!

 

Just ran the 1080p CPU test 3 times; my results are below:

 

2022-09-20 14:41:26 - Il-2
Frames: 5329 - Time: 60000ms - Avg: 88.817 - Min: 77 - Max: 129

2022-09-20 15:06:14 - Il-2
Frames: 5038 - Time: 60000ms - Avg: 83.967 - Min: 73 - Max: 115

2022-09-20 15:09:10 - Il-2
Frames: 5092 - Time: 60000ms - Avg: 84.867 - Min: 74 - Max: 127

 

my specs are 

 

Motherboard: Z590 AORUS ELITE AX
 CPU:                 Intel Core i5-10600K
 CPU Freq:        4.5 GHz
 L3 cache:        12 MB
 Cores:              6  
 Threads:           12 
 RAM type:        DDR4
 RAM size:        16 GB
 NB Freq:          4300 MHz
 RAM Freq:        3200 MHz 
 RAM TIMING: 14-14-14-34 
 GPU:                MSI SUPRIM X 3090 Ti

 

No overclocking , hyperthreading=on

 

What do you think, is that OK for that CPU?

Is there anything else I can do to get better performance?

(I don't want to OC my CPU; I am still not that expert and have enough heat inside my tower due to my GPU...)

Will setting hyperthreading to OFF give me better fps?

My CPU single-threaded Passmark score is 71%

 

i will post other tests soon...

 

...I just want to thank chiliwili69 for these tests!!! Today, thanks to CPUID CPU-Z, I discovered that my RAM speed was not 3200 MHz but way lower, at about 2200 MHz!!

Posted (edited)

Just a quick update since I only now got to test the 3070 in IL-2/VR.

 

I went a bit nuts with the graphics settings because I wanted to see if the Rift-S could push the card to its limits while actually playing the game. So far, it hasn't been able to do so. Set SS to 150% in SteamVR (err.. 1.22? pixel density according to OTT). Then I upped the in-game preset to Ultra, upped clouds, viewing distance, FXAA 4x and some other stuff but no matter what I did, the framerate was only ever fluctuating between 79 and 80. With my initial settings, OTT reported performance headroom of over 30% and even when I went a bit nuts after that I still had 8 to 10% headroom left (Kuban map, quick missions with 4 friendlies and eight enemy planes and quite a few ground targets around).

 

When I checked HWMonitor after my session, I saw that the card had apparently never exceeded 98% usage on both GPU and memory, and max power draw was quite low at 178W. I run the card undervolted to 0.925V with clock speed set to 1935 MHz and memory OCed by 800 MHz. Temps were also surprisingly low, with GPU max reported at 58°C and hotspot at 66°C.

 

All in all, this 3070 performs pretty much how I hoped it would ... where the 2070 Super was getting pushed to its limit and still not managing stable 80 FPS, the 3070 doesn't really seem to be that challenged by the Rift-S. And since I didn't see much difference between my "ugliest" and "prettiest" graphics settings anyway, I might dial back some of the stuff again.

I'm actually starting to wonder whether this card would be capable of driving a Reverb G2 half-way decently - since NVidia seem to be going a bit crazy with their pricing (at least for the three 40xx cards whose prices they announced today), I might be looking to get a G2 rather than jumping on the Ada Lovelace train.. :)

 

 

S.

Edited by 1Sascha
Posted
On 9/20/2022 at 10:17 PM, dgiatr said:

What do you think, is that OK for that CPU?

Is there anything else I can do to get better performance?

(I don't want to OC my CPU; I am still not that expert and have enough heat inside my tower due to my GPU...)

Will setting hyperthreading to OFF give me better fps?

My CPU single-threaded Passmark score is 71%

 

Thank you for your test. Did you run Windows10 or Windows11?

 

You got performance aligned with other tests of the 10600K and 10th-gen Intel.

 

Your memory is already quite OK regarding frequency and latency. You will not gain too much by improving it, perhaps about +5 fps at best. But I really don't know.

 

Hyperthreading ON/OFF has no impact on IL-2 performance.

 

To properly answer your question: do you play on a monitor (1080p, 1440p, 4K) or in VR (what device)?

 

Posted (edited)
2 hours ago, chiliwili69 said:

 

Thank you for your test. Did you run Windows10 or Windows11?

 

You got performance aligned with other tests of the 10600K and 10th-gen Intel.

 

Your memory is already quite OK regarding frequency and latency. You will not gain too much by improving it, perhaps about +5 fps at best. But I really don't know.

 

Hyperthreading ON/OFF has no impact on IL-2 performance.

 

To properly answer your question: do you play on a monitor (1080p, 1440p, 4K) or in VR (what device)?

 

Hello chiliwili69

Thank you for your reply!

I play only in VR with the G2 at about 2700x2700 for better spotting, with 2xMSAA, OpenComposite and OpenXR Toolkit.

After setting my RAM to 3200 MHz I get 90 fps most of the time.

 

I am about to run your VR tests to see how it goes.

I guess I will run your VR tests with the same in-game settings as the 1080p CPU test, except of course the %SS setting, is that right?

I run Windows 10.

Is there any problem running the tests with OpenComposite instead of SteamVR?

If I run OpenComposite, what resolution should I set in OpenXR Toolkit in order to run the two VR tests properly?

Edited by dgiatr
Posted
4 hours ago, dgiatr said:

I play only in VR with the G2 at about 2700x2700 for better spotting and 2xMSAA

Having the top high-end card right now, you are applying an equivalent of 74% SS. I think you can go even higher, since your GPU would still have room for more SS, for example 90% or 100%. But it depends on what you want for spotting vs. details.
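
For reference, the equivalent SS figure can be reproduced from the pixel counts, since SteamVR's SS slider scales the total pixel count linearly. A sketch, where the G2's per-eye render resolution at 100% SteamVR SS (~3164x3092) is an assumption:

```python
def ss_percent(render_w, render_h, base_w, base_h):
    """Equivalent SteamVR SS%: the SS slider scales pixel count
    linearly, so it is the ratio of rendered pixels to the pixels
    at the 100% setting."""
    return 100 * (render_w * render_h) / (base_w * base_h)

# Assumed Reverb G2 per-eye resolution at 100% SteamVR SS:
G2_100 = (3164, 3092)

print(round(ss_percent(2700, 2700, *G2_100), 1))  # → 74.5, i.e. the ~74% above
```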

4 hours ago, dgiatr said:

I guess I will run your VR tests with the same in-game settings as the 1080p CPU test, except of course the %SS setting, is that right?

No, the two VR tests are run with different settings. Look at the first post in this thread.

4 hours ago, dgiatr said:

Is there any problem running the tests with OpenComposite instead of SteamVR?

The VR tests are meant to be run with SteamVR, so we can compare tests done by others.

But you can run them both with SteamVR and with OpenComposite so we can measure the gain.

4 hours ago, dgiatr said:

If I run OpenComposite, what resolution should I set in OpenXR Toolkit in order to run the two VR tests properly?

If you want to additionally run the two VR tests with OpenComposite, you need to adjust the resolution to have around the same total pixels as each test:

VRtest1: 9.78 total million pixels (about 2236x2187 per eye)

VRtest2: 19.5 total million pixels (about 3162x3093 per eye)
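
The per-eye figures above can be derived by splitting the total (both-eye) pixel budget in two and solving for width and height at a fixed aspect ratio. A sketch; the aspect ratio used is an assumption taken from the VRtest1 numbers, so VRtest2 lands within a few pixels of the quoted 3162x3093:

```python
import math

def per_eye_resolution(total_megapixels, aspect=2236/2187):
    """Split a total (both-eye) pixel budget into one eye's WxH
    at a given width:height aspect ratio."""
    pixels_per_eye = total_megapixels * 1e6 / 2
    w = math.sqrt(pixels_per_eye * aspect)
    return round(w), round(w / aspect)

print(per_eye_resolution(9.78))  # VRtest1 → (2236, 2187)
print(per_eye_resolution(19.5))  # VRtest2 → (3157, 3088)
```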

Posted (edited)
On 9/24/2022 at 8:36 PM, chiliwili69 said:

Having the top high-end card right now, you are applying an equivalent of 74% SS. I think you can go even higher, since your GPU would still have room for more SS, for example 90% or 100%. But it depends on what you want for spotting vs. details.

No, the two VR tests are run with different settings. Look at the first post in this thread.

The VR tests are meant to be run with SteamVR, so we can compare tests done by others.

But you can run them both with SteamVR and with OpenComposite so we can measure the gain.

If you want to additionally run the two VR tests with OpenComposite, you need to adjust the resolution to have around the same total pixels as each test:

VRtest1: 9.78 total million pixels (about 2236x2187 per eye)

VRtest2: 19.5 total million pixels (about 3162x3093 per eye)

 

 

So here are my vr1 test and vr2 test with steam vr

 

vr1 test

2022-09-27 16:51:04 - Il-2
Frames: 3559 - Time: 60000ms - Avg: 59.317 - Min: 49 - Max: 91

2022-09-27 16:53:17 - Il-2
Frames: 3361 - Time: 60000ms - Avg: 56.017 - Min: 46 - Max: 82

2022-09-27 16:55:47 - Il-2
Frames: 3468 - Time: 60000ms - Avg: 57.800 - Min: 48 - Max: 90


vr2 test

2022-09-27 17:14:00 - Il-2
Frames: 3101 - Time: 60000ms - Avg: 51.683 - Min: 43 - Max: 72

2022-09-27 17:18:27 - Il-2
Frames: 3424 - Time: 60000ms - Avg: 57.067 - Min: 48 - Max: 91

2022-09-27 17:20:22 - Il-2
Frames: 3409 - Time: 60000ms - Avg: 56.817 - Min: 47 - Max: 91

2022-09-27 17:25:50 - Il-2
Frames: 3362 - Time: 60000ms - Avg: 56.033 - Min: 47 - Max: 89
 

...I guess I am CPU bottlenecked, is that right?

During the VR1 and VR2 tests, watching fpsVR, I see my CPU frametime above 11.1 ms most of the time...

Edited by dgiatr
Posted
6 hours ago, dgiatr said:

I guess I am CPU bottlenecked, is that right?

During the VR1 and VR2 tests, watching fpsVR, I see my CPU frametime above 11.1 ms most of the time...

 

Exactly. In these SYN_Vander VR tests you were clearly CPU limited.

The SYN_Vander benchmark has a significant number of objects in the scene (houses, trains, planes, and terrain objects) which require the CPU to make an extra effort.

That's why it is a benchmark.

In normal gameplay you don't have this density of objects until you arrive at the center of the battle action, where you could have a similar number of objects around (AI and non-AI).

 

The 10th and 11th gen of Intel are not particularly good for IL-2 VR; the 12th gen is much better.

 

So your next upgrade would be the CPU, but with the right settings you can play IL-2 in VR well 95% of the time.
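
For context, the 11.1 ms threshold discussed above is simply the per-frame time budget at a 90 Hz headset refresh rate; a minimal sketch:

```python
def frame_budget_ms(refresh_hz):
    """Time available per frame at a given headset refresh rate.
    If the CPU frametime exceeds this budget, the headset cannot
    hold its native rate no matter how much GPU headroom is left."""
    return 1000 / refresh_hz

print(round(frame_budget_ms(90), 1))  # → 11.1 (Reverb G2 at 90 Hz)
print(round(frame_budget_ms(80), 1))  # → 12.5 (Rift S at 80 Hz)
```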

Posted (edited)
8 hours ago, chiliwili69 said:

So your next upgrade would be the CPU, but with the right settings you can play IL-2 in VR well 95% of the time.

You are right about that!

 

In conclusion: while flying in MP, where I want to be as competitive as I can (OK, I am not that much, but don't forget I am new to IL-2) and want the best possible spotting with reasonable identification, I run my G2 at about 2700x2600 with 2xMSAA, mirrors off, shadows off, etc. I also like to fly at high altitudes, so most of the time I get no less than 85 fps.

 

For me, the big gain from running your benchmark, apart from finding out that I am CPU limited, was suddenly discovering that for almost all of this year my RAM speed was lower than I thought, at about 2200 MHz. Now it runs as fast as it can at 3200 MHz with lower latency (XMP mode), I have no ghosting anymore when other planes fly near me, and I get better fps!

 

Of course in the near future i will go for a cpu upgrade..

 

Thank you again for such a useful tool!

Edited by dgiatr
Posted (edited)

Probably should've done this some time ago; will try to find some time to do the VR runs as well at some point:

 

Motherboard:   MSI X570 Unify (MS-7C35)

CPU:                  Ryzen 9 5900X
CPU Freq:         3.7 GHz (boost 4.95 GHz)

Cores:               12
Threads:           24

RAM size:         32Gb (2x16GB dual rank)

RAM Freq:         3733 MHz

Uncore Freq:    1866 MHz
RAM timings:    16-18-18-36-56

GPU:                  3080Ti

 

IL2 Version 5.001


Consecutive CPU test 1080p runs:

Frames: 7780 - Time: 60000ms - Avg: 129.667 - Min: 114 - Max: 193

Frames: 7791 - Time: 60000ms - Avg: 129.850 - Min: 110 - Max: 187

Frames: 7821 - Time: 60000ms - Avg: 130.350 - Min: 112 - Max: 196

 

Consecutive 4K GPU runs:

Frames: 8662 - Time: 60000ms - Avg: 144.367 - Min: 103 - Max: 208

Frames: 8669 - Time: 60000ms - Avg: 144.483 - Min: 105 - Max: 207

Frames: 8681 - Time: 60000ms - Avg: 144.683 - Min: 104 - Max: 217

 

Passmark:

https://www.passmark.com/baselines/V10/display.php?id=163845005948

 

 

Edited by Firdimigdi
Added 4K GPU test, Passmark
Posted
13 hours ago, Firdimigdi said:

Probably should've done this some time ago;

Many thanks! It is never too late! Did you run the test because you are going to upgrade your CPU? Do you run Windows 11 or Windows 10?

Posted (edited)
1 hour ago, chiliwili69 said:

Did you run the test because you are going to upgrade your CPU?

 

No, no upgrade considered for the time being. Just figured I should get running the benchmarks off my to-do list and it was a good chance since there aren't many done with 5.001.

 

This was done on Windows 10.

 

EDIT:


@chiliwili69, since I was waiting on a task to complete on another machine - here's also 3 consecutive runs of the 4K GPU on the same system as above:

 

Frames: 8662 - Time: 60000ms - Avg: 144.367 - Min: 103 - Max: 208

Frames: 8669 - Time: 60000ms - Avg: 144.483 - Min: 105 - Max: 207

Frames: 8681 - Time: 60000ms - Avg: 144.683 - Min: 104 - Max: 217

And a full passmark score:
https://www.passmark.com/baselines/V10/display.php?id=163845005948

Edited by Firdimigdi
