chiliwili69

Benchmark for CPU/RAM performance: Remagen 4.002 to 4.004


2 minutes ago, Jaws2002 said:

Last time I did the test just once and got a similar score, but on rerun I got a higher score, so I logged that one.

 

OK, thanks. Normally on Intel at a fixed clock speed it gives small differences (below 10), but it is weird that you get such a large difference (75) at the same clock speed.


Ryzen is very opportunistic with regard to temperature, much like a graphics card. If it records lower temperatures, it boosts higher and more frequently. That may be the reason for such big jumps.

2 hours ago, chiliwili69 said:

 

wju, I have just realized that your CPU supports 4-channel RAM!

I really want to know more about your machine:

1.- Are you actually using 4 channels? (i.e. is your RAM config 2x16 or 4x8?)

2.- What CPU cooler are you using?

3.- Have you tried to overclock further by turning Hyper-Threading off and/or disabling 2 or 4 of your 8 physical cores? (IL-2 runs well with just 4 cores.)

4.- Could you run the benchmark again with the new settings? (Clouds set to Low so as not to bottleneck the GPU.)

 

Thanks!

 

 

 

Well, here you are:

 

add 1) Yes, I have 4x8 GB Crucial, part no. BLE8G4D34AEEAK.K8F, XMP enabled

add 2) Noctua NH-D15S with 2 fans

add 3) My CPU is delidded; over 4.7 GHz no way, not even single-core at max voltage => overheated, unstable

add 4) See below - both high/low clouds 3 times just to be sure; a 4th run with MSI logging enabled as you can see - almost no impact, except GPU load is higher at high clouds, but rarely >95%

 

The MSI log HTML with GPU & all-core CPU load is attached. One can see that both CPU & GPU are slacking, or the code is not so well optimized 🙂

 

clouds high:

2019-12-09 20:29:03 - Il-2
Frames: 22625 - Time: 180000ms - Avg: 125.694 - Min: 84 - Max: 215

2019-12-09 20:33:06 - Il-2
Frames: 22576 - Time: 180000ms - Avg: 125.422 - Min: 84 - Max: 215

2019-12-09 20:37:36 - Il-2
Frames: 22341 - Time: 180000ms - Avg: 124.117 - Min: 83 - Max: 215

 

clouds low:

2019-12-09 20:43:00 - Il-2
Frames: 22696 - Time: 180000ms - Avg: 126.089 - Min: 83 - Max: 214

2019-12-09 20:47:24 - Il-2
Frames: 22361 - Time: 180000ms - Avg: 124.228 - Min: 81 - Max: 214

2019-12-09 20:51:17 - Il-2
Frames: 22324 - Time: 180000ms - Avg: 124.022 - Min: 83 - Max: 214

 

cloud high + MSI log
2019-12-09 21:13:27 - Il-2
Frames: 22210 - Time: 180000ms - Avg: 123.389 - Min: 82 - Max: 212


cloud low + MSI log
2019-12-09 21:22:57 - Il-2
Frames: 22145 - Time: 180000ms - Avg: 123.028 - Min: 82 - Max: 213

 

 

MSI_Log.rar

Edited by wju


Motherboard: ASRock Z270 Pro4
 CPU:                 7600K
 CPU Freq:        5 Ghz
 L3 cache:        6 MB
 RAM type:        DDR4
 RAM size:        16 GB
 RAM Freq:        2933 MHz
 RAM Latency: 16
 GPU:                 1080Ti
 STMark:           2940

 

First run with HIGH Clouds: Frames: 24393 - Time: 180000ms - Avg: 135.517 - Min: 94 - Max: 217 / STMark 2940

Second test with LOW Clouds: Frames: 24056 - Time: 180000ms - Avg: 133.644 - Min: 88 - Max: 217 / STMark 2946

And one more with LOW Clouds: Frames: 24392 - Time: 180000ms - Avg: 135.511 - Min: 85 - Max: 217

Edited by Hartigan

12 hours ago, wju said:

The MSI log HTML with GPU & all-core CPU load is attached,

 

Many thanks wju for your detailed answer.

My initial interest in 4-channel CPUs has been eroded by the facts you bring. I thought they could help in IL-2, but the data shows no significant gain over dual-channel CPUs at the same CPU clock. And, as you say, they are more difficult to overclock.

 

As expected, High vs. Low clouds shows no difference in fps, only lower GPU load. I have extracted this graph from your logs (all on the same scale):

So it is better to base the benchmark on Low clouds, since more people with lower-end GPUs can participate.

 

wju.thumb.png.f4c953c3006e8d5d79f907ce51ee69e4.png

35 minutes ago, Hartigan said:

First run with HIGH Clouds: Frames: 24393 - Time: 180000ms - Avg: 135.517 - Min: 94 - Max: 217 / STMark 2940

Second test with LOW Clouds: Frames: 24056 - Time: 180000ms - Avg: 133.644 - Min: 88 - Max: 217 / STMark 2946

And one more with LOW Clouds: Frames: 24392 - Time: 180000ms - Avg: 135.511 - Min: 85 - Max: 217

Thanks. Very much as expected. Your fps values are very much aligned with the STMark correlation.


Rerun with clouds=low:

Motherboard: ASUS MAXIMUS XI HERO
 CPU:             9700K
 CPU Freq:        5.0 Ghz
 L3 cache:        12 MB
 RAM type:        DDR4
 RAM size:        32 GB
 RAM Freq:        3200 MHz
 RAM Latency:     17
 GPU:             2070
 STMark:          3018

 

Results with graphic settings as specified (Screen resolution 1920x1080, Shadows quality = High, Mirrors = Medium, Distant landscape = x3, Horizon draw distance = 130 km, Landscape filter = Off, Grass quality = Distant, WAS Clouds = Medium / NOW Clouds = Low, AA = x2, Distant buildings = checked, Use 4K = checked, Gamma = 0.8, Fullscreen, Mods = OFF)

2019-11-30 06:07:30 - Il-2
Frames: 23616 - Time: 180000ms - Avg: 131.200 - Min: 94 - Max: 228 WAS clouds=medium

2019-12-10 13-03-01-22 - IL-2
Frames: 23952 - Time: 180000ms - Avg: 133.067 - Min: 91 - Max: 231 NOW clouds=low

3 hours ago, whitav8 said:

WAS clouds=medium, NOW clouds=low

 

Thanks for the rerun.

Your initial run (according to the previous instructions) was with Clouds High, not Medium.

Apparently your 2070 was not constraining the test as much as I thought, since your difference is really small.


I ran the benchmark 3 times with high clouds and 3 with low clouds. Scores are below. It doesn't look like there is much difference between low and high; in fact, max fps looks faster at high. It seems weird to me that this PC is producing these scores. I verified all game and Nvidia settings are as you instruct. Game and monitor resolution is 1920x1080, native for the HDTV.

 

Many thanks for running the tests again with the new settings (Clouds Low).

It is clear that your 1080 GPU was not bottlenecking the test much, since the average results of the 3 tests do not differ much.

Yes, your PC delivers quite a nice result. This is a fact. And I really don't know why, but I took a closer look at the specs of the i7-3820 CPU and saw it is a 4-channel-RAM CPU, allowing higher maximum memory bandwidth:
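As a side note, peak theoretical memory bandwidth is just channels x 8 bytes per transfer x transfer rate. A minimal sketch (the i7-3820 figures below match Intel's published spec of quad-channel DDR3-1600; treat the numbers as illustrative):

```python
# Sketch: theoretical peak memory bandwidth = channels x 8 bytes/transfer x MT/s.
# The i7-3820 example assumes quad-channel DDR3-1600 per Intel's spec sheet.

def peak_bandwidth_gbs(channels: int, rate_mts: int, bus_bytes: int = 8) -> float:
    """Peak bandwidth in GB/s for a given channel count and transfer rate."""
    return channels * bus_bytes * rate_mts / 1000.0

print(peak_bandwidth_gbs(4, 1600))  # 51.2 GB/s - i7-3820, quad channel
print(peak_bandwidth_gbs(2, 1600))  # 25.6 GB/s - same RAM in dual channel
```

Of course, whether the game can actually exploit that extra bandwidth is exactly what the benchmark data above is questioning.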

_______________________________________________________________________________________________________________________________________________________________________________________

Chilli, given the scores I recorded, is there anything they tell me about whether a CPU or GPU upgrade would make a significant difference for monitor and VR gaming in IL-2? What I do know is that things get very choppy when there are a lot of AI enemies around. I have to tone down the number of flights and planes in PWCG, and still it is choppy when the mission gets busy. That seems more CPU than GPU.

 

From what I've read it is going to take 80 fps average to keep VR fed sufficiently. I am guessing that despite the scores my rig gets in this benchmark, it probably can't support VR without a good amount of ASW help, which I don't want. What would be the recommendations to get where I need to be?

 

 

Edited by TheSNAFU

7 hours ago, TheSNAFU said:

Chilli, given the scores I recorded, is there anything they tell me about whether a CPU or GPU upgrade would make a significant difference for monitor and VR gaming in IL-2? What I do know is that things get very choppy when there are a lot of AI enemies around. I have to tone down the number of flights and planes in PWCG, and still it is choppy when the mission gets busy. That seems more CPU than GPU.

 

From what I've read it is going to take 80 fps average to keep VR fed sufficiently. I am guessing that despite the scores my rig gets in this benchmark, it probably can't support VR without a good amount of ASW help, which I don't want. What would be the recommendations to get where I need to be?

 

Well, what you ask is a difficult question. It depends on the monitor resolution or VR headset you want to use, and the settings you want to apply.

 

For a 1080p monitor or TV at 60Hz (this is what you are using now) I think you are quite OK.

If you go to higher monitor resolutions or frequencies then you might need an upgrade.

 

Regarding VR, I am in the same boat as you. I will upgrade next year and I am gathering data about the best CPU/RAM. The GPU is just a matter of how much SS (and clouds) you use, and which VR device you choose.

 

If you are new to VR you can first go with the Rift S, which is the least demanding device (80 Hz and 1280x1440 per eye) and gives the most for the money. I think you will be OK but will need to tweak the settings a bit.

If you choose a Reverb you will probably need to upgrade your GPU.

 

Overall, your CPU/RAM combo only delivers 10 fps less than a modern 9700K at 5.0 GHz with 3200 MHz RAM. I would wait for future CPUs in your case.

 

Your CPU/RAM delivers exceptional performance given its age (7 years old).

 

But extrapolating monitor results to VR is a bit dangerous. "Normally" the fps are halved, but there is a lot of variability from headsets, SS and many other factors.

But it is clear that a good CPU/RAM combo for monitor play will also be a good CPU/RAM combo for VR.

6 hours ago, 71st_AH_statuskuo said:

Chili - rerun with clouds at low:

Avg: 131.933 - Min: 90 - Max: 221

 

Thanks for the rerun. Very much aligned with the correlation. Your 1080 was bottlenecking the test only a little bit.

14 hours ago, TheSNAFU said:

 

Chilli, given the scores I recorded, is there anything they tell me about whether a CPU or GPU upgrade would make a significant difference for monitor and VR gaming in IL-2? What I do know is that things get very choppy when there are a lot of AI enemies around. I have to tone down the number of flights and planes in PWCG, and still it is choppy when the mission gets busy. That seems more CPU than GPU.

 

From what I've read it is going to take 80 fps average to keep VR fed sufficiently. I am guessing that despite the scores my rig gets in this benchmark, it probably can't support VR without a good amount of ASW help, which I don't want. What would be the recommendations to get where I need to be?

 

I think your best option, if possible, is to buy a VR headset from somewhere with a good "no questions asked" return policy. That way you can try the headset and decide if you like it.

 

Many pilots in my squadron are using a 1080 GPU, and the Rift S, and they are comfortable with 40 FPS + ASW. It's not as good as true 80 FPS, but the rig you need for full-time 80 FPS is very powerful indeed. If you can obtain a Rift S to try the 40 ASW mode, you might find it's acceptable. In my flying there are only a few moments where 40 ASW causes a problem -- when I have a fast moving target going across my vision, maybe to my left or right, then I can see the interpolated frames. But for 95% of my flying, I don't notice the interpolation.

 

All this also depends on your style of flying. PWCG and solo missions can actually be more intensive (for the CPU) than online multiplayer, depending on the number of AI planes or real players.


Alonzo, thanks for the thoughts. I think the Rift S is probably best for me to start with in VR, if I decide to do it.
 

And thanks Chilli for all your assessment of my rig.
 

Is it possible that while it scores very well in the benchmark, my CPU may not be able to manage a lot of AI activity nearly as well as something much newer like an i5-9600 or i7-9700? Could that cause the choppy performance? In missions things are butter smooth until enemy AI and additional friendlies show up. That's when the game is choppy, but FPS stays locked at 60. I don't know 🤔

Edited by TheSNAFU

14 hours ago, TheSNAFU said:

That’s when the game is choppy but FPS stay at locked 60

 

That's really strange. If you always have 60 fps on your 1080p TV, how can it be choppy at the same time?

 

I use fpsVR to monitor framerates in VR for GPU and CPU. I don't know a similar tool for monitor-only play that you could use to analyze what is happening on your PC.

Take a look at these recommendations: https://forum.il2sturmovik.com/topic/34107-items-to-review-if-you-have-low-performance-in-il-2-vr-test/

 

14 hours ago, TheSNAFU said:

Alonzo, thanks for the thoughts. I think Rift S is probably best for me to start with in vr if I decide to do it. 
 

And thanks Chilli for all your assessment of my rig.
 

Is it possible that while it scores very well in the benchmark, my CPU may not be able to manage a lot of AI activity nearly as well as something much newer like an i5-9600 or i7-9700? Could that cause the choppy performance? In missions things are butter smooth until enemy AI and additional friendlies show up. That's when the game is choppy, but FPS stays locked at 60. I don't know 🤔

Sounds like the stuttering many people are experiencing, mostly online but sometimes in single player. It's hard to fix. Check out the Stutters in Singleplayer and Stutters in Multiplayer threads in the Technical Issues forum. There are some tips there, but it's hard to completely fix. Basically a frame or two gets dropped because the FPS drops suddenly for some reason, but the FPS tracker doesn't show it in-game. Hard to say for sure that this is it without a video showing what you are seeing, though.


Re-run with clouds LOW and 3D Migoto mod disabled (all mods off)

Motherboard: MSI Z370 SLI Plus
CPU:         8086K
CPU freq:    5.0 Ghz        (hyper-threading disabled, 6-cores only)
L3 Cache:    12 MB
RAM Type:    DDR4
Ram size:    16 GB
Ram freq:    3333 Mhz (OC from 3200 stock)
Ram latency: 16
GPU:         RTX 2080  (EVGA 2080 XC Ultra, 'stock' pre-OC by manufacturer)
STMark:      3029

Benchmarks:   Frames / Avg / Min / Max
Run 1:        26686 / 148.256 / 102 / 240
Run 2:        26464 / 147.022 / 103 / 240
Run 3:        26432 / 146.844 / 103 / 240

The game was NOT restarted between runs; I simply exited to the main menu and then reloaded the track. This slight reduction in performance between runs is something I observed in VR benching too. Note that I have reduced my RAM frequency to 3333 from my previous setting (3466) due to instability.
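Incidentally, the Avg in those Fraps-style result lines is simply Frames divided by the elapsed seconds, so runs can be cross-checked (or averaged) with a few lines of Python. The sketch below assumes the exact "Frames: ... - Time: ...ms - ..." layout quoted in this thread:

```python
# Sketch: parse a Fraps-style benchmark line and verify Avg = Frames / seconds.
# The sample line is taken from the results quoted above.

def parse_run(line: str) -> dict:
    """Parse 'Frames: N - Time: Nms - Avg: x - Min: n - Max: n' into a dict."""
    fields = {}
    for part in line.split(" - "):
        key, value = part.split(": ")
        fields[key] = value
    return {
        "frames": int(fields["Frames"]),
        "seconds": int(fields["Time"].rstrip("ms")) / 1000,
        "avg": float(fields["Avg"]),
    }

run = parse_run("Frames: 26686 - Time: 180000ms - Avg: 148.256 - Min: 102 - Max: 240")
computed_avg = run["frames"] / run["seconds"]   # 26686 / 180 = 148.25...
assert abs(computed_avg - run["avg"]) < 0.01
```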


Hey Redkestrel, thanks. I looked through the SP stuttering thread and there isn't anything there I haven't tried. The choppiness actually predates the Bodenplatte update. It runs Stalingrad and Moscow well but was choppy in Kuban. That's why I was thinking about an upgrade of CPU, motherboard and RAM.
 

I ran a PWCG mission today with MSI Afterburner. The FPS did drop below 60 into the mid 50s a few times when things got busy. It also showed the CPU briefly at or near 100% load. That would seem to indicate the CPU could be the cause of the slowdowns.
 

In the Passmark test it scores at the lower end of the recorded scores, but in Chili's mission benchmark the FPS scores were up there with much newer, more powerful rigs.
 

Just looking for something definitive to decide what to do. I really appreciate all the great help here! Best community anywhere.

8 hours ago, Alonzo said:

Benchmarks:   Frames / Avg / Min / Max
Run 1:        26686 / 148.256 / 102 / 240
Run 2:        26464 / 147.022 / 103 / 240
Run 3:        26432 / 146.844 / 103 / 240

Many thanks for re-run.

With the mod off, clouds low and memory at 3333, you got about 10 fps more than before.

The RAM change probably has a very small (negative) impact, and going from High to Low clouds on your 2080 probably has a minimal (positive) impact.

So the mod cost would be around 8-10 fps.

 

Overall your 8086K at 5.0 GHz gives a very good framerate. Have you tried to go beyond 5.0 GHz? What Vcore voltage and CPU cooler do you use?

Edited by chiliwili69

8 hours ago, chiliwili69 said:

Overall your 8086K at 5.0 GHz gives a very good framerate. Have you tried to go beyond 5.0 GHz? What Vcore voltage and CPU cooler do you use?

 

It's a Noctua NH-U14S, which apparently is a decent air cooler, but the chip needs more. I'm running adaptive + offset mode in the BIOS so that the voltage is reduced when the chip doesn't need it, with a slight bump in LLC. In Prime95 AVX large FFTs (which stress memory more than the CPU) I see Vcore as reported in CPU-Z at 1.344 V. Core Temp reports 84°C for the hottest core.

 

I haven't gone beyond 5.0 GHz because unfortunately Prime95 in power-virus mode (small FFTs) makes the CPU hit 100°C and thermal throttle itself. This seems bad and scary. Every now and then I think about getting a large AIO and trying for 5.1 GHz, but it's an expense, and part of me still feels at the mercy of the developers (they could make an optimization next patch that suddenly gives us extra headroom, and my efforts would be wasted). Partly, I spend so much time on map making and admin for Combat Box that I would just like to fly when I can. I have taken to cranking up the eye candy and running at 40 FPS with ASW forced on. It works great most of the time, except with a plane moving quickly across my field of vision.


Alonzo, so even with your stout rig you have times when ASW is needed? If that's the case my rig doesn't have a prayer, lol. I take it planes get blurry when they cross your path in ASW. That kind of thing is why I hesitate to get into VR.

1 hour ago, TheSNAFU said:

Alonzo, so even with your stout rig you have times when ASW is needed? If that's the case my rig doesn't have a prayer, lol. I take it planes get blurry when they cross your path in ASW. That kind of thing is why I hesitate to get into VR.

 

There is no rig currently that can run this sim in VR without fps dropping below 80 or 90 at times.

The more powerful the rig, the less it happens, but it will happen, depending on graphics settings and the amount of activity in the displayed image.

 

1 hour ago, TheSNAFU said:

Alonzo, so even with your stout rig you have times when ASW is needed? If that's the case my rig doesn't have a prayer, lol. I take it planes get blurry when they cross your path in ASW. That kind of thing is why I hesitate to get into VR.

 

It also depends a lot on the map and how you're playing. The Moscow and Stalingrad maps are the least intensive, Kuban is in the middle, and the new Rhineland map is most intensive. If you're willing to lower the settings a bit, deal with ASW when you're very near to the ground, and don't expect miracles, you can have a great gameplay experience in VR. The feeling of being inside the plane is amazing. But yeah, there are times when the game drops down from the frame cap to something less.

 

Most of the made-for-VR games that avoid this are using a lot of tricks to make the rendering a lot simpler. But any not-made-for-VR game can have a problem. Look at SkyrimVR for example -- amazing game, old engine, can be an amazing experience but not always at the frame cap.


I thought I read somewhere that ASW won't be activated unless the FPS drops under 80 for some period of time. Is that true? What happens when ASW is turned off and FPS drops below 80?

1 hour ago, TheSNAFU said:

I thought I read somewhere that ASW won't be activated unless the FPS drops under 80 for some period of time. Is that true? What happens when ASW is turned off and FPS drops below 80?

 

Depending on how much it drops, you may see some slight stuttering.

This seems to vary somewhat between different machine configurations.

I can turn it off and still get a pretty smooth experience. In fact I used to fly with it always off.

I have since gone back to running with it on, and mine looks fine with it.

 

Oculus updated to ASW 2.0 back in May, and I think it does a better job. It only works in games that support depth buffering, and fortunately IL-2 does.

It is easy to turn off - in game just hit Ctrl+Numpad 1. Or use the Oculus Tray Tool, available over on the Oculus forums.

 

 

Edited by dburne


I did some flying today with ASW turned off (and deliberately reduced settings to allow for closer-to-80 FPS most of the time). Overall, I'm on the fence as to which I prefer.

 

Running at 40 FPS with ASW forced on, it's a more consistent experience. Very few things cause any kind of frame rate hiccup, because ASW is always active and giving a kind of performance buffer to the game. I can also run higher shadows and clouds like this. But there are some artifacts around the cockpit frame if you move your head rapidly, and if you zoom with the Migoto mod on a plane moving across your vision, you see double.

 

Running at mostly-80 FPS with ASW disabled, I encounter < 80 FPS fairly regularly. Still, 70 FPS is pretty smooth and you don't have any artifacts. But in some sections of flying, for example over a large town on Rhineland, multiplayer, in the cloud, with raindrops on the window, I was down to 55 FPS and really noticing the sluggish framerate. On the other hand, zooming with Migoto you don't get doubled frames, so identification is a bit better.

 

Then there's the problem that the Migoto mod, wonderful as it is, robs you of a few FPS. On my system it's about 5 FPS (measured just by looking at the frame rate, not scientific). That's actually a fair bit and could be the difference between a constant 80 and something less than that.

 

So overall, I'm still not sure what I prefer. 40-ASW for a more consistent experience, better eye candy, but ASW artifacts? Or 80-when-I-can-get-it with less eye candy and a less consistent experience? It's hard to decide.


Just for fun I ran the test with my Reverb, set SS to the native resolution of 2k x 2k, all other settings to the directions given and reprojection off.

Results: Avg: 52.067 - Min: 42 - Max: 91. 

This compares to the 131 fps average on the 1080p monitor. Not too bad considering it's like running two 4K monitors. There are stutters, but with WMR reprojection on, most of that is smoothed out.

I reran with Lefuneste's mod on and only saw a 1-2 fps drop, which is well worth it.

I also had fpsVR running (which hopefully didn't cause any fps hit).

My median frametime for the GPU was 18.1 ms and for the CPU 12.4 ms, so as expected my 1080 is clearly the limiter.

For those interested, here's the fpsVR report (note the CPU is OC'd to 5 GHz).

 

fpsVR Report:
App: Il-2 HMD: HP Reverb VR Headset VR1000-2xxx (90.0009994506836 Hz, IPD 61.0)
GPU: NVIDIA GeForce GTX 1080 (26.21.14.3630) CPU: Intel(R) Core(TM) i5-9600K CPU @ 3.70GHz
Delivered fps: 35.4  Duration: 3.6min.
GPU Frametimes:
Median: 18.1 ms
99th percentile: 23.4 ms
99.9th percentile: >30 ms
frametime <11.1ms(vsync): 30.1%
CPU frametime:
Median: 12.4 ms
99th percentile: >30 ms
99.9th percentile: >30 ms
frametime <11.1ms(vsync): 31.5%
Max. SteamVR SS: 192%
Render resolution per eye: 2228x2178(by SteamVR settings, Max.) (HMD driver recommended: 1608x1572)
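For anyone reading the report: frametime and fps are reciprocals, so each median frametime implies the frame rate that component could sustain on its own. A quick sketch using the medians above:

```python
# Sketch: convert fpsVR frametimes (ms) into the fps each component could
# sustain on its own. The inputs are the medians from the report above.

def frametime_to_fps(ms: float) -> float:
    """A frame finished every `ms` milliseconds sustains 1000/ms fps."""
    return 1000.0 / ms

gpu_median_ms = 18.1   # GPU median frametime from the report
cpu_median_ms = 12.4   # CPU median frametime from the report

print(round(frametime_to_fps(gpu_median_ms), 1))  # ~55.2 fps GPU-bound ceiling
print(round(frametime_to_fps(cpu_median_ms), 1))  # ~80.6 fps CPU-bound ceiling
```

The delivered 35.4 fps sits below both ceilings; presumably the 90 Hz refresh and reprojection further shape what the headset actually displays.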

 

Frametimes # Il-2 # HP Reverb VR Headset VR1000-2xxx # 12_14_2019 10_45_44 AM.png

Edited by 71st_AH_statuskuo

3 hours ago, 71st_AH_statuskuo said:

fpsVR Report

Thanks for being the first to run it in VR and report it. I still have to do it myself.

 

The histogram of the GPU and CPU frametimes in fpsVR is quite nice. It really gives you an idea of the distribution. Yes, clearly your 1080 is the weakest point now.


WallterScott just confirmed to me that he obtained exactly the same results with High and Low clouds. So his 2080Ti was not bottlenecking the test at any moment (as expected).

 

Taking a closer look now at the dozen tests done (too few to extract solid conclusions), we can see that the 9900K chips are clearly going well beyond what was expected.

I think this could be due to the L3 cache size. Or perhaps it is that they (Dburne and WallterScott) are the only ones with a 2080Ti and a 9900K.

 

Also, the difference between Dburne and WallterScott is difficult to explain, since both are running almost identical systems. The 5.2 vs. 5.3 GHz difference would only give 5 fps more, but the difference is 17 fps. The other difference is the RAM true latencies. This might influence the game more than we were expecting.

 

Again, there are still too few data to extract conclusions.

 

Anyone out there with a 2080Ti and a different CPU? Or anyone with a 9900K and another GPU?

2 hours ago, chiliwili69 said:

WallterScott just confirmed to me that he obtained exactly the same results with High and Low clouds. So his 2080Ti was not bottlenecking the test at any moment (as expected).

Taking a closer look now at the dozen tests done (too few to extract solid conclusions), we can see that the 9900K chips are clearly going well beyond what was expected.

I think this could be due to the L3 cache size. Or perhaps it is that they (Dburne and WallterScott) are the only ones with a 2080Ti and a 9900K.

Also, the difference between Dburne and WallterScott is difficult to explain, since both are running almost identical systems. The 5.2 vs. 5.3 GHz difference would only give 5 fps more, but the difference is 17 fps. The other difference is the RAM true latencies. This might influence the game more than we were expecting.

Again, there are still too few data to extract conclusions.

Anyone out there with a 2080Ti and a different CPU? Or anyone with a 9900K and another GPU?

 

Could be a number of things.

A combination of both the extra 100 MHz in CPU core speed and the faster RAM, even with higher latency, on his system.

It could also be a difference in what his 2080 Ti boosts to on GPU clock versus what mine boosts to. GPUs can certainly vary in silicon quality. Or what memory speed he has set for his 2080 Ti versus mine.

It may also have to do with cooling on the GPU, since with Nvidia Boost the clock can vary, sometimes by a lot, depending on GPU temperature.

It would be difficult to truly nail down the reason for the difference between the two. There are just too many possible scenarios on these PC systems.

 

 

 


I am very interested in the difference between the 9600K, 9700K and 9900K in IL-2. I notice I got similar single-thread Passmark results to the 9900K.

Unfortunately I do not have a 2080Ti for comparison 😎

Also, I have to apologise for not posting the complete test results; we have been having rolling blackouts in my country. Hopefully I will be able to run the benchmark when I have time (and power at the same time 🤨)

 

Cheers, Dakpilot 


Are there any ways to lock a 2080Ti to a particular boost clock? It doesn't need to be very fast; it could even be the standard non-OC speed, just to take out the variance. Or could the person with the 9900K and faster RAM deliberately downclock the RAM to a slower speed and looser timings? For science....

6 hours ago, Alonzo said:

Are there any ways to lock a 2080Ti to a particular boost clock? It doesn't need to be very fast; it could even be the standard non-OC speed, just to take out the variance. Or could the person with the 9900K and faster RAM deliberately downclock the RAM to a slower speed and looser timings? For science....

 

Yes with MSI Afterburner you can:

https://forums.evga.com/Guide-How-to-force-max-voltage-curve-overclock-with-msi-afterburner-m2820280.aspx

 

EVGA Precision X1 has a boost lock feature, but I do not find it very effective.

I think the card's boost can still be affected by GPU temps at some point, though.

 

Edited by dburne


Motherboard: ROG MAXIMUS XI HERO
 CPU:                 9700K
 CPU Freq:        5.0 Ghz
 L3 cache:        12 MB
 RAM type:        DDR4
 RAM size:        32 GB
 RAM Freq:        3200 MHz
 RAM Latency: 16
 GPU:                 2080Ti
 STMark:           3015

 

2019-12-17 22:05:16 - Il-2 4.003
Frames: 26487 - Time: 180000ms - Avg: 147.150 - Min: 102 - Max: 239

 

All settings as per OP.

 

System upgraded yesterday and fresh Windows 10.

37 minutes ago, marklar said:

2019-12-17 22:05:16 - Il-2 4.003
Frames: 26487 - Time: 180000ms - Avg: 147.150 - Min: 102 - Max: 239

 

Thank you so much for your test.

 

I confirm two things:

 

- Your PC delivers identical results to Alonzo's PC, both with the same overclock and the same cache. So there is no difference between a 2080 and a 2080Ti in this test.

 

- The difference with WallterScott and Dburne is only due to cache, overclock and memory latency, since you also have a 2080Ti but obtain fewer fps than they do.

 

Have you tried to overclock your CPU to 5.2 GHz by disabling 4 cores, so you only have 4 cores active?

 

Your test makes me think that the difference between Dburne and WallterScott is explained by OC (5-7 fps) and latency (10-12 fps).

 

We need more tests!!

4 hours ago, chiliwili69 said:

Your test makes me think that the difference between Dburne and WallterScott is explained by OC (5-7 fps) and latency (10-12 fps).

 

We need more tests!!

 

If Walterscott has time, it would be very interesting for him to increase his memory latency and decrease his clock speed (one at a time, independently) to see the effect. For example, is it more beneficial to be at 5.3 GHz instead of 5.0 GHz, or more beneficial to be at 8.xx ns latency instead of 10.xx ns memory latency?

 

I don't know how to do the chart, but it would be interesting to run an FPS regression against both clock speed and memory latency, since it seems like both of those relate to FPS.
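For what it's worth, the two-variable regression described above is essentially one call to numpy's least-squares solver. The data points below are purely hypothetical placeholders, not results from this thread; the point is only the shape of the calculation:

```python
# Sketch: two-variable linear regression of avg fps on CPU clock (GHz) and
# true RAM latency (ns), via numpy least squares.
# NOTE: these data points are HYPOTHETICAL, for illustration only.
import numpy as np

clock_ghz = np.array([4.7, 5.0, 5.0, 5.2, 5.3])
latency_ns = np.array([11.0, 10.6, 10.0, 9.3, 8.6])
avg_fps = np.array([128.0, 133.0, 147.0, 152.0, 164.0])

# Design matrix with an intercept column.
X = np.column_stack([np.ones_like(clock_ghz), clock_ghz, latency_ns])
coef, *_ = np.linalg.lstsq(X, avg_fps, rcond=None)
intercept, fps_per_ghz, fps_per_ns = coef
print(f"fps per GHz: {fps_per_ghz:.1f}, fps per ns of latency: {fps_per_ns:.1f}")
```

One caveat, visible even in this toy data: faster-clocked systems tend to also have lower latency, so the two predictors are correlated and the per-variable coefficients should be read with care until there are more independent data points.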


 

8 hours ago, chiliwili69 said:

Have you tried to overclock your CPU to 5.2 GHz by disabling 4 cores, so you only have 4 cores active?

 

 

Not yet. I only put the hardware together on Monday and overclocked it to 5 GHz yesterday. I will leave it for a week or two to check how stable the system is, and then I will try to add some extra MHz.

8 hours ago, marklar said:

Not yet. I only put the hardware together on Monday and overclocked it to 5 GHz yesterday. I will leave it for a week or two to check how stable the system is, and then I will try to add some extra MHz.

 

For stability testing I suggest a recent build of Prime95 (with AVX) and the "large FFT" option. This stresses the CPU and memory but doesn't put quite as much heat through as the smaller FFTs, which are more like power viruses. Large FFTs run overnight should be enough to approximate the stability you need for IL-2. In the past I've tried other tools such as OCCT, but I find IL-2 is actually quite picky about hardware stability, and the Prime95 test is unfortunately the one you need.

On 12/16/2019 at 12:23 PM, Dakpilot said:

Am very Interested in difference between 9600k, 9700k and 9900K in IL-2, I notice I got similar ST Passmark results to 9900K, 

Unfortunately do not have a 2080ti for comparison 😎

Also have to apologise for not posting the complete test results; we have been having rolling blackouts in my country. Hopefully will be able to run the benchmark when I have time (and power at the same time 🤨)

 

Cheers, Dakpilot 

This may not map entirely to Intel chips, but when I tried running with SMT disabled on my R7 3800X I lost a large chunk of frame rate, even though it is an 8-core system.

6 hours ago, Alonzo said:

For stability testing I suggest a recent build of Prime95 (with AVX) and then the "large FFT" option.

 

If it does not crash in games then it's stable enough for me :).

On 12/18/2019 at 4:56 AM, Alonzo said:

is it more beneficial to be at 5.3 GHz instead of 5.0 GHz, or to be at 8.xx ns instead of 10.xx ns memory latency?

 

I don't know how to do the chart, but it would be interesting to do a FPS regression against both clock speed and memory latency, since it seems like both of those relate to FPS

 

From past experiments with the old Balapan benchmark we saw that memory frequency influences IL-2, but we were only looking at frequency, not at the true latency.

 

That's why I created a dedicated column for true latency in the benchmark table.
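For reference, the true latency in that column can be derived from the module's CAS latency and DDR transfer rate. A minimal sketch (the function name is mine, not from the benchmark sheet) — since the memory clock is half the DDR transfer rate, the formula reduces to 2000 × CL / MT/s:

```python
def true_latency_ns(cas_latency: int, transfer_rate_mts: int) -> float:
    """True latency in ns: CAS cycles divided by the memory clock.

    The memory clock in MHz is half the DDR transfer rate (MT/s),
    so latency = CL / (MT/s / 2) * 1000 = 2000 * CL / MT/s.
    """
    return 2000 * cas_latency / transfer_rate_mts

print(true_latency_ns(16, 3200))            # DDR4-3200 CL16 -> 10.0 ns
print(round(true_latency_ns(15, 3600), 2))  # DDR4-3600 CL15 -> 8.33 ns
```

This is why a DDR4-3600 CL15 kit can beat a DDR4-3200 CL16 kit even at the same capacity: both the higher transfer rate and the lower CAS count reduce the true latency.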

 

I don't know how to do a two-variable regression with Google Docs (I have other multivariable tools for that, but not Google Docs). But before introducing a new variable we first need to see a clear influence as an independent variable, so I created another graph in the results table to visualize the tendency.

We can see some tendency, but it is also true that better PCs (equipped with better CPUs and more OC) also tend to have better true latency, so the two effects are hard to separate.

 

[Attached chart: average FPS plotted against true memory latency]
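For anyone who wants to try the two-variable regression outside of Google Docs, it only needs the 3x3 normal equations. A stdlib-only sketch, with invented example rows (not actual benchmark data):

```python
def fit_two_var(xs1, xs2, ys):
    """Ordinary least squares for y ~ b0 + b1*x1 + b2*x2,
    via the 3x3 normal equations solved by Gaussian elimination."""
    n = len(ys)
    cols = [[1.0] * n, list(xs1), list(xs2)]  # design matrix columns [1, x1, x2]
    # Build A^T A and A^T y
    A = [[sum(u * v for u, v in zip(cols[i], cols[j])) for j in range(3)]
         for i in range(3)]
    b = [sum(c * y for c, y in zip(cols[i], ys)) for i in range(3)]
    # Forward elimination with partial pivoting
    for k in range(3):
        p = max(range(k, 3), key=lambda r: abs(A[r][k]))
        A[k], A[p], b[k], b[p] = A[p], A[k], b[p], b[k]
        for r in range(k + 1, 3):
            f = A[r][k] / A[k][k]
            for c in range(k, 3):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    # Back substitution
    coef = [0.0, 0.0, 0.0]
    for k in range(2, -1, -1):
        coef[k] = (b[k] - sum(A[k][c] * coef[c] for c in range(k + 1, 3))) / A[k][k]
    return coef

# Hypothetical rows: (clock GHz, true latency ns, avg FPS) -- invented numbers
clock = [4.3, 4.7, 5.0, 5.0, 5.2]
lat = [12.1, 10.5, 10.0, 8.8, 8.3]
fps = [113.4, 131.0, 140.0, 147.2, 154.2]
b0, b1, b2 = fit_two_var(clock, lat, fps)
print(f"fps ~ {b0:.1f} + {b1:.1f}*GHz {b2:+.1f}*ns")
```

With real benchmark rows, the sign and size of the two slopes would show whether an extra 0.3 GHz or an extra 2 ns of latency matters more, even when the two are correlated across rigs.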


CPU: 3600x stock RAM: DDR4 16 GB 3200Mhz CAS Lat. 16 GPU: GTX 1080 Ti Mainboard: ASROCK B450 Pro4

 

STMark: 2841

 

Monitor:

2019-12-23 21:50:07 - Il-2 4.003
Frames: 23560 - Time: 180000ms - Avg: 130.889 - Min: 88 - Max: 232

 

VR:

2019-12-24 14:35:16 - Il-2 4.003
Frames: 10863 - Time: 180000ms - Avg: 60.350 - Min: 44 - Max: 91
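As a sanity check, the FRAPS "Avg" in these summary lines is simply frames divided by benchmark time; a quick parser sketch (the regex and field names are my own, matched to the line format above):

```python
import re

LINE = "Frames: 23560 - Time: 180000ms - Avg: 130.889 - Min: 88 - Max: 232"

def parse_fraps(line: str) -> dict:
    """Extract the numbers from a FRAPS benchmark summary line."""
    m = re.match(r"Frames: (\d+) - Time: (\d+)ms - "
                 r"Avg: ([\d.]+) - Min: (\d+) - Max: (\d+)", line)
    frames, time_ms, avg, fmin, fmax = m.groups()
    return {"frames": int(frames), "time_s": int(time_ms) / 1000,
            "avg": float(avg), "min": int(fmin), "max": int(fmax)}

r = parse_fraps(LINE)
print(round(r["frames"] / r["time_s"], 3))  # 130.889, matching the logged Avg
```

Running this over everyone's posted lines would make it easy to collect the thread's results into one table.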
 

 

 

fpsVR Report:
App: IL-2 Sturmovik: Battle of Stalingrad HMD: HP Reverb VR Headset VR1000-2xxx0 (90.001 Hz, IPD 63.2)
GPU: NVIDIA GeForce GTX 1080 Ti (26.21.14.3136) CPU: AMD Ryzen 5 3600X 6-Core Processor             
Delivered fps: 57.64  Duration: 3.2min.
GPU Frametimes:
Median: 14.8 ms
99th percentile: 20.7 ms
99.9th percentile: 27.1 ms
frametime <11.1ms(vsync): 11.8%
CPU frametime:
Median: 11.3 ms
99th percentile: 19.3 ms
99.9th percentile: 27.1 ms
frametime <11.1ms(vsync): 47.6%
Max. SteamVR SS: 100%
Render resolution per eye: 2205x2160(by SteamVR settings, Max.) (HMD driver recommended: 2205x2160)
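The fpsVR figures above (median, 99th percentile, share of frames under the 11.1 ms vsync budget) can be reproduced from any raw frametime capture. A minimal stdlib sketch — the sample list here is invented, not this capture:

```python
import statistics

def percentile(values, pct):
    """Nearest-rank percentile: smallest value with at least pct% of samples at or below it."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, round(pct / 100 * len(s)) - 1))
    return s[k]

# Hypothetical frametimes in ms
frametimes = [11.0, 12.5, 14.8, 14.8, 15.1, 16.0, 18.2, 20.7, 14.0, 13.3]
print(statistics.median(frametimes))   # 14.8
print(percentile(frametimes, 99))      # 20.7
under_vsync = sum(t < 11.1 for t in frametimes) / len(frametimes)
print(f"{under_vsync:.1%} of frames under 11.1 ms (90 Hz vsync)")
```

The 11.1 ms threshold is just 1000 / 90 for a 90 Hz headset; any frame above it misses vsync and gets reprojected.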

 

 

Frametimes # IL-2 Sturmovik_ Battle of Stalingrad # HP Reverb VR Headset VR1000-2xxx0 # 23.12.2019 22_08_13.png

 

 

Interesting:

Thanks to the systematic benchmarking I moved my installation from my second NVMe drive (Corsair Force MP510) to my first NVMe drive (Samsung 970 EVO Plus 1TB). While this did not improve my FPS, it did remove three repeatable little stutters during the benchmark replay (for example, a short FPS hang every time under the bridge on the second drive is now gone completely on the NVMe in the first slot). This seems to indicate that some stutters are related to loading assets during play.

 

Alternative VR settings (AA off, shadow low, 150% supersampling)

 

fpsVR Report:
App: IL-2 Sturmovik: Battle of Stalingrad HMD: HP Reverb VR Headset VR1000-2xxx0 (90.001 Hz, IPD 63.2)
GPU: NVIDIA GeForce GTX 1080 Ti (26.21.14.3136) CPU: AMD Ryzen 5 3600X 6-Core Processor             
Delivered fps: 59.16  Duration: 3.3min.
GPU Frametimes:
Median: 14.8 ms
99th percentile: 20.5 ms
99.9th percentile: 28.5 ms
frametime <11.1ms(vsync): 15.5%
CPU frametime:
Median: 10.7 ms
99th percentile: 19 ms
99.9th percentile: >30 ms
frametime <11.1ms(vsync): 56.3%
Max. SteamVR SS: 149%
Render resolution per eye: 2692x2637(by SteamVR settings, Max.) (HMD driver recommended: 2205x2160)

 

Frametimes # IL-2 Sturmovik_ Battle of Stalingrad # HP Reverb VR Headset VR1000-2xxx0 # 24.12.2019 18_14_46.png

Edited by Gomoto
VR added

