
SYN_Vander BENCHMARK v6 to measure IL-2 performance in monitor & VR


chiliwili69


=VARP=Tvrdi
5 hours ago, chiliwili69 said:

 

You have to delete the .msnbin file from the SYN_VANDER mission files in the mission folder, as explained by Alonzo above.

Thanks.

My rig.

CPU: i5 7600 (3.5 GHz, turbo 4.0 GHz)

GPU: GTX 1660 OC 6GB (latest NVIDIA drivers)

RAM: 16GB (2x8) DDR4 2400 MHz HyperX Fury

SSD: regular drive, 98% health (IL-2 installed directly on C:)

 

CPU test at 1080p resolution in game:
Frames: 4258 - Time: 60000ms - Avg: 70.967 - Min: 60 - Max: 92
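
A quick note on reading these FRAPS-style result lines: the Avg field is simply Frames divided by Time in seconds, which makes an easy sanity check on any reported score. A minimal sketch:

```python
import re

# Parse a FRAPS-style benchmark line and verify the Avg field
# (Avg = Frames / Time-in-seconds).
line = "Frames: 4258 - Time: 60000ms - Avg: 70.967 - Min: 60 - Max: 92"
m = re.search(r"Frames: (\d+) - Time: (\d+)ms - Avg: ([\d.]+)", line)
frames, time_ms, avg = int(m[1]), int(m[2]), float(m[3])

print(frames / (time_ms / 1000), avg)  # 70.9666..., matching the reported Avg
```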

 

When I'm online I play at 1440p and FPS is high (currently limited to 80 in the game settings). Still, on some servers I notice a slight slowdown when looking around with my EDTracker Pro. After some updates it was better, but lately it's like that. I have only FHD, distant buildings and the sharpen filter ON; HIGH settings in general, low shadows, no mirrors, 3x landscape detail, 100 km visibility distance, high clouds, 2x FXAA, no vsync (I have a G-Sync monitor)...

Is it my CPU or GPU?

 


Dutch2

BTW, read this below. It's in Dutch, but use your web translator: it seems NVIDIA is advising users to delete a specific Windows update that can cause frame drops.

https://nl.hardware.info/nieuws/76259/nvidia-raadt-windows-gebruikers-aan-nieuwste-patches-te-verwijderen

Just in case.

 

EDIT: The fix update is arriving: https://nl.hardware.info/nieuws/76282/microsoft-brengt-noodoplossing-voor-prestatieproblemen-kb5001330-uit


DBCOOPER011

I now know that IL-2 behaves better with Ryzen CPUs than with the Intel ones. I just bought a 5800X and an X570 board for a second system today and did a quick CPU test mission. This was a stock system other than the memory running at 3666 MHz. No tweaks, PBO, or anything else of that nature yet. The max and average FPS were higher than anything I had on the 11700K. Will run the VR tests tomorrow...

 

Frames: 7431 - Time: 60000ms - Avg: 123.850 - Min: 107 - Max: 173


DBCOOPER011

Below are the test results I obtained from my 5800X. This was conducted with stock settings, as any over/underclock I tried made a negligible difference. A -10 curve optimization was pretty close to stock, so I'll probably use that for routine purposes. I gained a few FPS on the CPU test over the 11700K, and the VR tests were almost the same.

 

CPU
Frames: 7206 - Time: 60000ms - Avg: 120.100 - Min: 103 - Max: 161

VR Test 1
Frames: 5271 - Time: 60000ms - Avg: 87.850 - Min: 78 - Max: 91
VR Test 2
2021-04-26 11:20:18 - Il-2
Frames: 5347 - Time: 60000ms - Avg: 89.117 - Min: 81 - Max: 92

 


-Six-Motivated

PC Specs

 

Motherboard: MSI MPG Z390 GAMING EDGE AC

CPU: i7 8700K
CPU Freq: 4.9 GHz

L3 cache: 12 MB

Cores: 6 

Threads: 12

RAM type: DDR4

RAM size: 16GB (2x8GB)
NB Freq: 4300 MHz
RAM Freq: 3200 MHz
RAM timings: 16-18-18-38

GPU: RTX 3080

CPU Cooler: NZXT Kraken Z63

RAM Model: G.SKILL Trident Z RGB
GPU Model: ASUS TUF RTX 3080 OC

 

CPU Test Results

 

First Run

2021-04-27 18:47:49 - Il-2
Frames: 5311 - Time: 60000ms - Avg: 88.517 - Min: 78 - Max: 117

 

Second Run

2021-04-27 18:50:09 - Il-2
Frames: 5303 - Time: 60000ms - Avg: 88.383 - Min: 74 - Max: 116


chiliwili69
On 4/22/2021 at 9:17 PM, =VARP=Tvrdi said:

Is it my CPU or GPU?

 

Thank you for your test. I think that, depending on the scene, your bottleneck could be the CPU or the GPU.

In dense scenarios (many planes, tanks, AI in general) you will be more CPU-bottlenecked (even most top CPUs are bottlenecked there).

But in other cases, with just a few planes but heavy clouds, your GPU might be at 100% load.

To know exactly how bottlenecked your GPU is, you can use the free tool MSI Afterburner and configure a trend line for GPU load. You can then watch the load while playing.

This is only valid for GPU load, not CPU load, since CPU% load values are meaningless in general (the IL-2 heavy thread jumps from core to core).
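
If you prefer a log you can scroll through rather than Afterburner's on-screen trend line, a minimal sketch like the following captures the same GPU-load information (this is not part of chiliwili69's procedure; it assumes an NVIDIA card with the nvidia-smi CLI on the PATH):

```python
import subprocess
import time

# Poll overall GPU utilization once per second for the 60 s benchmark run.
samples = []
for _ in range(60):
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True)
    samples.append(int(out.strip()))
    time.sleep(1)

print(f"GPU load: avg {sum(samples) / len(samples):.0f}%, max {max(samples)}%")
# Sustained ~100% load means the GPU is the bottleneck; a load well below
# that while fps is capped points to a limit elsewhere (usually the CPU).
```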

On 4/25/2021 at 7:34 AM, DBCOOPER011 said:

I now know that IL-2 behaves better with Ryzen CPUs than with the Intel ones.

 

Hey, you just demonstrated that this is not true anymore with the 11th-gen Intel.

On 4/26/2021 at 8:34 PM, DBCOOPER011 said:

CPU
Frames: 7206 - Time: 60000ms - Avg: 120.100 - Min: 103 - Max: 161

VR Test 1
Frames: 5271 - Time: 60000ms - Avg: 87.850 - Min: 78 - Max: 91
VR Test 2
2021-04-26 11:20:18 - Il-2
Frames: 5347 - Time: 60000ms - Avg: 89.117 - Min: 81 - Max: 92

 

Thanks for this test, could you just add your Mobo/RAM/cooling data for the Ryzen setup?

 

As you said, in VR the R5/R7 Ryzen are on par with the 11th-gen Intel. They are almost always at 90 fps in this heavy test.

 


chiliwili69
8 hours ago, -Six-Motivated said:

CPU: i7 8700K

Thanks, it is the first test of an 8700K. Pretty much aligned with similar CPUs.


-Six-Motivated
3 hours ago, chiliwili69 said:

Thanks, it is the first test of an 8700K. Pretty much aligned with similar CPUs.

 

I was honestly surprised to see how much better Ryzen CPUs perform in this game compared to Intel.


=VARP=Tvrdi
On 4/28/2021 at 8:59 AM, chiliwili69 said:

 

Thank you for your test. I think that, depending on the scene, your bottleneck could be the CPU or the GPU.

In dense scenarios (many planes, tanks, AI in general) you will be more CPU-bottlenecked (even most top CPUs are bottlenecked there).

But in other cases, with just a few planes but heavy clouds, your GPU might be at 100% load.

To know exactly how bottlenecked your GPU is, you can use the free tool MSI Afterburner and configure a trend line for GPU load. You can then watch the load while playing.

This is only valid for GPU load, not CPU load, since CPU% load values are meaningless in general (the IL-2 heavy thread jumps from core to core).

 

I did that test. If I limit FPS to 100, my GPU goes to 85%. That is at 1440p, online, on all servers, including full ones.


DBCOOPER011

Chili,

 

Sorry for the late reply. I have a 5800X / ASUS X570 TUF WiFi / Arctic Cool 2-360. I did a couple of CPU runs with tightened RAM and got a little better performance; attached are the ZenTimings and AIDA info. This CPU is strange in that the best performance I'm getting is with a negative curve on all cores and nothing else, except for tweaking the RAM.

 

CPU

Frames: 7279 - Time: 60000ms - Avg: 121.317 - Min: 107 - Max: 166

Frames: 7497 - Time: 60000ms - Avg: 124.950 - Min: 112 - Max: 173

 

 

[Attached: ZenTimings and AIDA64 screenshots]
 


biglouis

Small IL-2 patch, and I cannot load the benchmark mission now. It just doesn't load and goes back to the mission menu. No error message. Any ideas?

 

I got my 11900K working much better following some BIOS updates, but I haven't had the chance to retest IL-2 yet (I gained FPS in other CPU-bound titles, so I'm very excited to try IL-2!).


SYN_Vander
5 minutes ago, biglouis said:

Small IL-2 patch, and I cannot load the benchmark mission now. It just doesn't load and goes back to the mission menu. No error message. Any ideas?

I got my 11900K working much better following some BIOS updates, but I haven't had the chance to retest IL-2 yet (I gained FPS in other CPU-bound titles, so I'm very excited to try IL-2!).


I will re-save the mission with the latest game version.


chiliwili69
On 5/9/2021 at 2:46 PM, biglouis said:

Small IL-2 patch, and I cannot load the benchmark mission now. It just doesn't load and goes back to the mission menu. No error message. Any ideas?

 

I have run the benchmark today with no problems.

 

Have you tried deleting the .msnbin file, as explained here?

 

@SYN_Vander I think there is no need to recreate the mission, since it runs OK with the current version and patches.

 

BTW, someone told me that the issues with the latest AMD cards have been solved. Could anyone with a 6700 XT, 6800 XT or 6900 XT confirm that?

It would be good to know how they perform, given the scarcity of NVIDIA cards.


SYN_Vander
12 minutes ago, chiliwili69 said:

 

I have run the benchmark today with no problems.

 

Have you tried deleting the .msnbin file, as explained here?

 

@SYN_Vander I think there is no need to recreate the mission, since it runs OK with the current version and patches.

 

BTW, someone told me that the issues with the latest AMD cards have been solved. Could anyone with a 6700 XT, 6800 XT or 6900 XT confirm that?

It would be good to know how they perform, given the scarcity of NVIDIA cards.

 

Yeah, it doesn't load too slowly without the .msnbin anyway. But then I propose to remove the .msnbin from the download, and/or to add removing the .msnbin file to the instructions.


PA_Willy
6 hours ago, chiliwili69 said:

 

BTW, someone told me that the issues with the latest AMD cards have been solved. Could anyone with a 6700 XT, 6800 XT or 6900 XT confirm that?

It would be good to know how they perform, given the scarcity of NVIDIA cards.



Yes please!


chiliwili69
9 hours ago, SYN_Vander said:

But then I propose to remove the .msnbin from the download

 

Yes, I have modified the zip file from the post so there is no .msnbin file in it.

 

I was also thinking that most VRtest1 runs with Zen 3 and the new Intel 11th-gen CPUs will give a value very close to 90 fps, so there is no real measurement of performance.

I was thinking of raising the IL-2 settings for VRtest1 (which are the same for VRtest2), so that more typical settings are used and top rigs will deliver lower values. Then we can get more info about the gains from different CPUs, RAM, etc.

 

What do you all think?

 

(I will then need to generate a new tab so as not to mix results.)


Dutch2
11 hours ago, chiliwili69 said:

 

Yes, I have modified the zip file from the post so there is no .msnbin file in it.

 

I was also thinking that most VRtest1 runs with Zen 3 and the new Intel 11th-gen CPUs will give a value very close to 90 fps, so there is no real measurement of performance.

I was thinking of raising the IL-2 settings for VRtest1 (which are the same for VRtest2), so that more typical settings are used and top rigs will deliver lower values. Then we can get more info about the gains from different CPUs, RAM, etc.

 

What do you all think?

 

(I will then need to generate a new tab so as not to mix results.)

 

I must admit that's what I do: while testing I always keep my own precious adjustments, and when someone claims they have found the FPS holy grail (which is, BTW, 90% BS), I do two runs with this FPS holy grail and two runs without, and plot them in a frame-time graph to compare.

 

What I would like to see more of in the benchmark is the possibility to check adjustments to the visual aspects of the game in combination with FPS (in my case frame times 🧐). Visual aspects like jaggies and shimmer in the cockpit, terrain distance, clouds, plane edges/rims, and wing wires are a discussion with a long history, one that started from day one when RoF was released and that is surrounded by lots of myths.

 

 

 

 


JG1_loopy

I have a 2K monitor (2560x1440). An earlier post said I should use DSR x3, but the DSR range in the graphics settings is 0.5 to full.

What setting should I use?


chiliwili69
8 hours ago, JG1_loopy said:

I have a 2K monitor (2560x1440). An earlier post said I should use DSR x3, but the DSR range in the graphics settings is 0.5 to full.

What setting should I use?

 

The 4K test can only be done on native 4K monitors.

Initially we also used DSR to test with other monitor sizes, but DSR itself consumes some fps in the process, so the results cannot be compared.

You can try connecting your PC to a 4K TV instead.


chiliwili69
7 hours ago, ICDP said:

Why is MSAA used and set at 8X in the 4K test?  It seems pointless to me.

 

Well, the settings used in this benchmark are a bit unrealistic for normal gameplay (nobody uses MSAAx8 at 4K).

The monitor tests are designed to load either just the CPU to the max (CPU test), leaving the GPU unloaded, or just the GPU (4K test), leaving the CPU unloaded.

MSAA loads the GPU heavily, and that's why it has been maxed out: so the test hits the GPU limit before reaching the CPU limit.

For VR, VRtest1 is a mix of CPU & GPU load, and VRtest2 brings the GPU load up to the G2 requirements.


ICDP

I would say that 4K with max graphical settings already shifts the focus from CPU to GPU without the need for 8x MSAA. In my experience, if you push GPU loads in unrealistic ways you get unrealistic results. For example, I could push unnecessary levels of supersampling on an RTX 3070 at 4K until it overloaded the limited VRAM capacity and I was getting 20 FPS with lows of 1 FPS. An AMD 6800 could get 50 FPS at the same settings because it has 16 GB of VRAM, but it would not be a realistic result.


-Six-Motivated

Has anyone tested whether Hardware Accelerated GPU Scheduling (HAGS) and/or Windows Game Mode has an effect on your benchmark runs?


chiliwili69
11 hours ago, ICDP said:

I would say that 4K with max graphical settings already shifts the focus from CPU to GPU without the need for 8x MSAA. In my experience, if you push GPU loads in unrealistic ways you get unrealistic results. For example, I could push unnecessary levels of supersampling on an RTX 3070 at 4K until it overloaded the limited VRAM capacity and I was getting 20 FPS with lows of 1 FPS. An AMD 6800 could get 50 FPS at the same settings because it has 16 GB of VRAM, but it would not be a realistic result.

 

I would agree, but 4K is 8.3 million pixels, and some top GPUs can manage all those pixels relatively well, producing very high fps (more than 150 fps) and then hitting the CPU limit before the GPU limit.

So by using 4K with 8x MSAA we are loading the GPU to the max (without hitting the usual VRAM limit), to try to hit the GPU limit and not the CPU limit.

Even so, there could be some cases (imagine a 3080 with a slow CPU like an R5 3600) where the current 4K test with MSAAx8 will still be constrained by the CPU. (There is a test like that in the table, and I marked it in orange to flag it.)

 

11 hours ago, -Six-Motivated said:

Has anyone tested whether Hardware Accelerated GPU Scheduling (HAGS) and/or Windows Game Mode has an effect on your benchmark runs?

 

So far, I think nobody has tested that.

You or anyone else can be the first to do it...  😉 

 

Since HAGS shifts CPU software scheduling load to GPU hardware, the results could vary depending on which one is the bottleneck.


ICDP
2 hours ago, chiliwili69 said:

 

I would agree, but 4K is 8.3 million pixels, and some top GPUs can manage all those pixels relatively well, producing very high fps (more than 150 fps) and then hitting the CPU limit before the GPU limit.

 

So by using 4K with 8x MSAA we are loading the GPU to the max (without hitting the usual VRAM limit), to try to hit the GPU limit and not the CPU limit.

 

Even so, there could be some cases (imagine a 3080 with a slow CPU like an R5 3600) where the current 4K test with MSAAx8 will still be constrained by the CPU. (There is a test like that in the table, and I marked it in orange to flag it.)

 

I get the concept of overloading the GPU, but we can't lose sight of the fact that benchmarks are there to give relative performance under normal conditions. So if I want to get a 4K GPU and CPU for IL-2, I can look at a chart that tells me the realistic performance I will get with all details set to max, for example. Or an outline on optimising for performance, such as the impact of SSAO or HDR on FPS. To be frank, if we are testing at 4K and worrying about a CPU being bottlenecked, then it is because the CPU is so old it is not relevant at 4K anyway. Or to put it another way, we need to let people know when a CPU becomes the bottleneck rather than the GPU. It helps them ensure they have a balanced PC, which is critical for gamers in 2D or VR.

 

Please take this as constructive criticism, because I do value the work you have been doing over the years for our community. I would suggest a benchmark table for all settings maxed at 1080p, 1440p and 4K, and dropping all the prerequisites for latency tests and other useless stuff. People would just list system details, resolution and results at max settings. In my opinion the results table is a mess, with so much info that has no bearing on what most people look for in benchmark tables.


ghyslain

Hello gents! Not sure if this has ever been suggested, but I just did the following tweak and I can finally get stable 90 fps in multiplayer in IL-2.

 

In the NVIDIA Control Panel > Manage 3D settings > bottom of the list, look for 'Virtual Reality pre-rendered frames'. I've bumped mine from 1 to 3, and holy moly: I can now run even IL-2 at a perfectly stable 90 fps on the High graphics preset, which was something I could only dream of previously.
The only drawback of this tweak is that it adds some end-to-end latency; two extra queued frames at 90 Hz is 2/90 s ≈ 22 ms of added latency in this case.

 

Furthermore, I just bumped the Virtual Reality pre-rendered frames back down from 3 to 2, and my fps is still fully stable at 90.
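
To spell out the latency arithmetic above: each extra queued pre-rendered frame costs one refresh period at the headset rate. A tiny sketch (the helper name is just for illustration):

```python
# Added end-to-end latency from extra queued pre-rendered frames:
# each extra frame in the queue costs one refresh period at the headset rate.
def added_latency_ms(extra_frames: int, refresh_hz: float) -> float:
    return extra_frames / refresh_hz * 1000.0

print(added_latency_ms(2, 90))  # setting 3 vs 1: two extra frames, ~22.2 ms
print(added_latency_ms(1, 90))  # setting 2 vs 1: one extra frame, ~11.1 ms
```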

 

Specs:

R9 5900X

EVGA RTX 3080 FTW3

4x8 GB Corsair RAM, 3600 MHz

 

And before anyone concludes 'oh, of course you have a 5900X, of course you'll get good fps in VR': I will add that I was getting awful, constant stutter before doing the above-mentioned tweak.

 

Cheers!


ICDP

Just as an example, I did the same 4K test using your benchmark mission at the settings below in-sim, along with max building draw distance. This gives good graphics IQ and what I would call typical settings that are certainly mainly GPU- rather than CPU-bound.

 

[Attached: 4K settings screenshot]

 

 

System 1:  RTX 3080 FE, 5900X CPU. 32GB DDR 3600 CL16.

System 2:  6700 XT, 5600X CPU, 32GB DDR 3400 CL14.

 

[Attached: results screenshot]

 

So these numbers have a delta that is right in line with what we should expect to see at 4K for these GPUs, as it matches the results I get from other games. This also indicates that neither CPU is causing a bottleneck in IL-2. The 6700 XT has a 192-bit bus that begins to suffer at 4K, yet it still delivers very playable performance in IL-2 at max settings in 4K by reaching the 60 FPS baseline. I would not call the 6700 XT a 4K card, of course, but for IL-2 at max settings I would say it is good enough.

 

When I ran the benchmark using your suggested settings, I got results that utterly killed performance (minimum FPS) on both GPUs and widened the delta from ~40% to 100% in favour of the 3080, which is not a realistic result, nor will it ever be for pure rasterisation performance. It also highlighted an issue with some of the results reported in your spreadsheet. Or am I to believe a 2070S or a 1080 Ti is faster than a 3080 at 4K, or that a 3090 is over twice as fast as my 3080 at the same settings? So it looks to me that many people have run their 4K test at much lower MSAA settings (as a guess). I'm not blaming you for that, just highlighting that if we rely on people to follow too many instructions, there is a much higher chance they get it wrong.

 

I think your benchmark is trying to test too much at once and needs a more limited scope and a better, more realistic baseline for testing at 1080p, 1440p, 4K and in VR. We should be showing what people can expect as baseline performance on typical but well-balanced PCs. Well, assuming people can buy the actual parts, of course.

 

So, for example, if someone manages to grab a 3080 FE and finds they are 25% slower than the expected numbers, we can help them find the problem. I had a guy on another forum argue that I was lying when I stated my old RTX 2080 was only ~15% slower than his RTX 3080 in Cyberpunk at 1440p. I pointed out that his old 3770K CPU was making his 3080 perform ~30% below where it should be, as he was massively CPU-constrained.

 

Let me finish by again stressing I wish this to be taken purely as constructive criticism.


chiliwili69
9 hours ago, ICDP said:

we can't lose sight of the fact that benchmarks are there to give relative performance under normal conditions.

 

Thanks for your constructive feedback. It is always good to hear all points of view and areas for improvement.

 

In the OP I put the purposes of this particular benchmark:


1- To know if your hardware & software is performing as expected (compared to peers)
2- To know the best CPUs/RAM and GPUs for IL-2.
3- To know the performance cost of every graphics setting and the trade-offs (visual gain vs performance)

 

Item 1 is currently covered for people who have similar CPUs and/or similar GPUs. For example, I have seen that my 5600X performs like other 5600Xs in the CPU test (regardless of the GPU), and also that my 3080 performs like other 3080s in the GPU test (regardless of the CPU, when not CPU-limited).

For example, if we build a correlation between column AC (public GPU benchmark) and column AE (4K avg fps), we see that all are OK except one outlier, @WheelwrightPL, whose 2080 Ti is not performing as expected. So he can work on that.

 

[Attached: correlation chart of public GPU benchmark score vs IL-2 4K avg fps]
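
That outlier check boils down to fitting a line through (GPU benchmark score, 4K avg fps) pairs and flagging points that fall far below the trend. A minimal sketch, with made-up numbers standing in for the real spreadsheet columns:

```python
import numpy as np

# Hypothetical data standing in for the spreadsheet columns:
# public GPU benchmark score (column AC) and IL-2 4K avg fps (column AE).
gpu_score = np.array([100.0, 135.0, 160.0, 180.0, 210.0])
il2_fps   = np.array([80.0, 105.0, 128.0, 95.0, 165.0])  # one deliberate outlier

# Least-squares line through the points, then flag large negative residuals.
slope, intercept = np.polyfit(gpu_score, il2_fps, 1)
predicted = slope * gpu_score + intercept
residual = (il2_fps - predicted) / predicted

for score, fps, r in zip(gpu_score, il2_fps, residual):
    flag = "  <-- underperforming?" if r < -0.15 else ""
    print(f"score {score:5.0f}: {fps:5.1f} fps ({r:+.0%} vs trend){flag}")
```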

 

Item 2 is to know individually which are the best CPUs/RAM (assuming the GPU is not the constraint) for executing the particular IL-2 code that builds the scene.

Also to know the best GPU (assuming the CPU is not the constraint) for rendering the IL-2 scenes.

Currently the top CPU is the 5950X and the top GPU is the 3090.

Item 3 can be used individually by each person with their own settings. In the past, with an older benchmark (before deferred rendering), I ran a large set of tests to calculate the impact of each graphics option. This was shown here.

 

Having said that, in your sentence you use "under normal conditions". How do you define "normal conditions" in a way that is valid for everyone?

There are people who pair a top GPU with an old CPU.

There are people who go for a top CPU but keep their old GPU.

There are people who play at 1080p, others at 4K or at other wide resolutions or on multiple monitors, and others use a large variety of VR devices.

So the IL-2 settings have many combinations of values, and a different combination will be "normal" for each individual.

 

 

10 hours ago, ICDP said:

So if I want to get a 4K GPU and CPU for IL-2, I can look at a chart that tells me the realistic performance I will get with all details set to max, for example.

 

That is not the purpose of this benchmark. The table tells you just the individual performance of the CPU and the individual performance of the GPU.

The real gain obtained when upgrading the CPU and/or GPU will depend on the CPU, the GPU, the resolution, the IL-2 settings, SP or MP, the plane, the scenario, etc.

So it will be almost impossible to know that.

10 hours ago, ICDP said:

To be frank, if we are testing at 4K and worrying about a CPU being bottlenecked, then it is because the CPU is so old it is not relevant at 4K anyway.

 

The 4K test using MSAAx8 is purely synthetic; it just stresses the GPU more. So there will be more room to test it with weak CPUs, not necessarily old ones.

10 hours ago, ICDP said:

Or to put it another way, we need to let people know when a CPU becomes the bottleneck rather than the GPU.

This is beyond the purpose of the benchmark.

For a given CPU & GPU, people need to play with the IL-2 settings that affect only the CPU and those that affect only the GPU, according to each individual's taste for eye candy vs performance.

The procedure is simple:

1.- Go to a low-resolution mode (1080p), low clouds and no MSAA.

2.- Then start to raise and test all the settings you like most; the more you raise them, the lower your fps. Use your typical plane, scenario and SP/MP.

3.- Once step 2 is fine, start to increase your monitor resolution (or SS in VR) and clouds (and MSAA, if used) until the GPU is 80-90% loaded. Then you are fine.

10 hours ago, ICDP said:

I would suggest a benchmark table for all settings maxed at 1080p, 1440p and 4K

If I do that, the goal of item 2 above will not be achieved. We (well, at least I) want to know the best CPU for IL-2 and, separately, the best GPU for IL-2.

It is not a table of typical performance under "normal conditions".

10 hours ago, ICDP said:

dropping all the prerequisites for latency tests and other useless stuff.

 

Here you are quite right. The AIDA test is too much; I will try to drop some columns to simplify the info displayed. I also think that the CPU cooler, RAM model and GPU brand are a bit irrelevant. I will drop those columns as well and update the instructions.


chiliwili69
9 hours ago, ICDP said:

This also indicates that neither CPU are causing a bottleneck in IL2.

 

First of all, thanks for making this comparison. It is interesting.

Here, how do you know exactly that the CPU (5900X) is not bottlenecking the test with the 3080?

According to other tests (mine as well), a non-CPU-constrained 3080 at 4K with MSAAx8 (!) and Extreme clouds (but the Low preset) gives around 141-144 fps. Your test is only 107 with no MSAA.

7 minutes ago, chiliwili69 said:

Here, how do you know exactly that the CPU (5900X) is not bottlenecking the test with the 3080?

 

One way to know would be to repeat the test exactly, but at a lower resolution, for example 2.5K. If you get something close to 107, it means the bottleneck was the CPU. If you get something closer to 180 fps, the bottleneck was not the CPU.

9 hours ago, ICDP said:

When I ran the benchmark using your suggested settings

 

Please, if you don't mind, let us know your CPU and GPU test results for each PC, just to have more data for the analysis. Thanks (even if you think the values are wrong).

9 hours ago, ICDP said:

a 2070S or a 1080 Ti is faster than a 3080 at 4K

 

A 2070S or a 1080 Ti is not faster than a 3080 at 4K. In the GPU test (with an unconstrained CPU), a 3080 should give about 141-144 fps, a 3070 gives 116 fps, a 1080 Ti gives about 90-95 fps, and a 2070S about 80 fps.

 

If your 3080 is not giving about 140 fps at 4K with the instructions/settings of the OP, then we should investigate that first.

 


chiliwili69
9 hours ago, ICDP said:

I'm not blaming you for that, just highlighting that if we rely on people to follow too many instructions, there is a much higher chance they get it wrong.

 

Yes, we all make mistakes (for example, I ran my own VRtest2 with the wrong SS; it is marked in yellow for that).

When something is odd we review it, and people re-run tests and re-check. But I would not put my hand in the fire and say the table is 100% error-free.

Outliers are quickly identified.

9 hours ago, ICDP said:

So, for example, if someone manages to grab a 3080 FE and finds they are 25% slower than the expected numbers, we can help them find the problem

 

This is exactly what we do with this benchmark. Because the tests isolate CPU performance from GPU performance, we can diagnose where the problem is.

Also, for example, if someone runs the CPU test and GPU test and their results are aligned with their peers, but then runs the VR test and gets bad performance, it would be due to VR-specific issues, not the CPU or GPU. So we can isolate the issue.


RufusK

Because @chiliwili69 noted a possible improvement with AMD GPUs, I reran the benchmark with the RX 6900XT. My hardware is the same as the previous test, quoted below. I have updated drivers and the game is now version 4.601.

 

CPU test
Frames 6019 - Avg. 100.3 - Min. 83 - Max. 143 -> about a 5% improvement

 

GPU test
Frames 4560 - Avg. 77.1 - Min. 66 - Max. 90 -> very little change

 

 

 

On 3/11/2021 at 4:12 PM, RufusK said:

Motherboard: ASUS TUF Gaming X570
CPU: 5800X
CPU Freq: 4.49 GHz
L3 cache: 32 MB
Cores: 8
Threads: 16
RAM type: DDR4
RAM size: 32GB (2x16GB)
NB Freq: 1796.4 MHz
DRAM Freq: 1796.4 MHz
RAM timings: 20-20-20-40-85

 

MCLK:      1800
FCLK:       1800
UCLK:      1800

 

GPU:        6900XT

 

CPU test
Frames 5701 - Avg. 95.0 - Min. 81 - Max. 137

 

GPU test
Frames 4560 - Avg. 76.0 - Min. 65 - Max. 90

 

These results are pretty disappointing. Do I need to get better RAM or improve the timings? Or is this simply the 6000-series curse?
I have no experience with RAM overclocking, so any advice would be appreciated.

 


ICDP
3 hours ago, RufusK said:

Because @chiliwili69 noted a possible improvement with AMD GPUs, I reran the benchmark with the RX 6900XT. My hardware is the same as the previous test, quoted below. I have updated drivers and the game is now version 4.601.

 

CPU test
Frames 6019 - Avg. 100.3 - Min. 83 - Max. 143 -> about a 5% improvement

 

GPU test
Frames 4560 - Avg. 77.1 - Min. 66 - Max. 90 -> very little change

 

For reference, my 6700 XT got Avg 106, Min 89 and Max 147. I was also able to increase my CPU test score by 6% on the 3080 rig by overclocking. If this test were truly not GPU-bound, then these anomalies should not happen. When I use the 4K settings below, the performance delta between the 3080 and 6700 XT PCs is right where it should be, with the 3080 roughly 40-45% ahead at 4K.

 

Humour me and do a test using the following settings to see where your 6900 XT lies. If you don't have a 4K monitor, just enable AMD Virtual Super Resolution (VSR). Make sure to enable max building draw distance as well. At least it will give you something to compare against.

 

[Attached: 4K settings screenshot]

 

 

System 1:  RTX 3080 FE, 5900X CPU. 32GB DDR 3600 CL16.

System 2:  6700 XT, 5600X CPU, 32GB DDR 3400 CL14.

 

[Attached: results screenshot]

 


chiliwili69
10 hours ago, RufusK said:

I have updated drivers and the game is now version 4.601.

 

CPU test
Frames 6019 - Avg. 100.3 - Min. 83 - Max. 143 -> about a 5% improvement

 

GPU test
Frames 4560 - Avg. 77.1 - Min. 66 - Max. 90 -> very little change

 

Thank you very much for re-running the CPU and GPU 4K tests. (I assume it is native 4K, right? I ask because DSR affects performance.)

So the issue is still there. @LF_Gallahad told me that his squad mates, who also have the latest AMD cards, told him the issue was solved. But apparently it is still there.

 

 

7 hours ago, ICDP said:

For reference, my 6700 XT got Avg 106, Min 89 and Max 147.

 

This is interesting. In the 4K test you got significantly more than any other 6800XT or 6900XT. What did you obtain with the CPU test? And what Mobo do you use?

7 hours ago, ICDP said:

by overclocking.

 

I understand that means overclocking the GPU?

7 hours ago, ICDP said:

Humour me and do a test using the following settings to see where your 6900XT lies.

 

Yes, it will be interesting to know what you achieve with those settings.

Perhaps the issue is with the synthetic settings used for the 4K test and CPU test.


ICDP

Hi Chili. Those numbers are for the CPU test on the 6700 XT. From memory, my 3080 got 126 avg, 105 min and 178 max.

 

For a test on the 3080, I set an overclock of +750 memory and +150 core, and also set the fan speed to 100% to prevent throttling. I then ran your CPU test and settings, and the result was a 6% increase over stock. I found this interesting because if the CPU test is not GPU-limited, then a GPU overclock should have no effect. So the fact that my OC gave a measurable increase, and the fact that 6000-series GPUs are seriously underperforming, leads me to believe something is loading the GPU.

 

I plan to test later with all settings low at 720p, then work up from there to see what causes the most impact.


LF_Gallahad

@ICDP and @chiliwili69 my Radeon 6700XT arrived.

 

I was scared after seeing that this problem was not solved (or that it was a problem after all), so the first thing I tried was firing up the mission "Fire in the river" from the Bodenplatte single missions.

 

My old result with the 5600X and GTX 1080, all maxed out with MSAAx2 at 1440p, was 65 fps at the ramp start.

With my new 6700 XT and the Intel 11700K, all maxed out with MSAAx2 at 1440p, it is 105 fps at the ramp start;

with MSAAx4 it is 86 fps.

 

This one with MSAAx2:

[Attached: screenshot with MSAAx2]

This one with MSAAx4:

[Attached: screenshot with MSAAx4]

 

So far, this is in line with what I have seen in some other games/tests.

 

From 65 fps to 108

 


 

As a side note, I disabled every AMD extra in the drivers (like I usually do with NVIDIA).

 

The only thing I have different from other systems is that I changed my AMD CPU to an Intel.

I will try and do the Benchmarks ASAP.

 

 


ICDP

Glad to hear you are getting the performance you expected and that it is right in line with the average we see in other games. This matches my own testing and results.


LF_Gallahad
6 minutes ago, ICDP said:

Glad to hear you are getting the performance you expected and that it is right in line with the average we see in other games.  This matches my own testing and results.

Thanks for your information and research, friend!

 

 

I am having a blast with other sims too. So far, the most important thing is that I am seeing an improvement, and I am happy with it on my new 1440p screen. Also, no artifacts or strange things to be seen 😃

 

I was lucky enough to get this XFX Merc 319 6700 XT for €50 over MSRP, and selling my old NVIDIA card and RAM made up the total cost.


ICDP
37 minutes ago, LF_Gallahad said:

Thanks for your information and research, friend!

 

 

I am having a blast with other sims too. So far, the most important thing is that I am seeing an improvement, and I am happy with it on my new 1440p screen. Also, no artifacts or strange things to be seen 😃

 

I was lucky enough to get this XFX Merc 319 6700 XT for €50 over MSRP, and selling my old NVIDIA card and RAM made up the total cost.

 

I did the same test and settings on my 3080 and it gives me ~160 FPS on the ramp. So I would say that at lower resolutions IL-2 favours NVIDIA by ~15%, but when you are getting over 100 FPS at 1440p it becomes fine margins. At 4K, and with my Pimax 8KX, it seems about where it should be in relative performance between AMD and NVIDIA.

 

I limit the FPS in all my games to 58 to force them to stay within my 4K FreeSync monitor's VRR range. VRR monitors have been the biggest tech improvement in decades, in my opinion, and the fact that AMD's open FreeSync standard beat NVIDIA's proprietary G-Sync was great for consumers.


chiliwili69
On 5/22/2021 at 9:22 AM, ICDP said:

Those numbers are for the CPU test on the 6700 XT. From memory, my 3080 got 126 avg, 105 min and 178 max.

 

OK, thanks. I thought they were from the GPU test. So, to recap your SYN_Vander tests using the OP settings, you achieved:

 

System 1: RTX 3080 FE, 5900X CPU, 32GB DDR 3600 CL16 (with stock settings)

CPU Test: 126 avg, 105 min and 178 max (from memory)

GPU 4K Test: ??

VR test: Not done

 

System 2: 6700 XT, 5600X CPU, 32GB DDR 3400 CL14 (with stock settings)

CPU Test: Avg 106, min 89 and max 147

GPU 4K Test: ??

VR test: Not done

 

Please confirm whether my understanding is correct. Just for completeness, could you provide the GPU 4K tests?

 

Then, if the above figures are correct:

In your System 1, your 5900X is delivering 126 avg with RAM at 3600. This is aligned with other tests (in avg, min and max).

In your System 2, your 5600X is delivering 106 avg with RAM at 3400. Here you are below what is expected from a 5600X (which is around 115-120 avg). In fact, these numbers are in line with the RufusK CPU test (Avg 100, Min 83, Max 143) using a 5900X with a 6900 XT. This is where we think there is a conflict with the AMD cards.

You could try putting your 3080 in your System 2 and running the CPU test; you should then get 115-120 fps.

 

 

On 5/22/2021 at 9:22 AM, ICDP said:

I then ran your CPU test and settings, and the result was a 6% increase over stock. I found this interesting because if the CPU test is not GPU-limited, then a GPU overclock should have no effect. So the fact that my OC gave a measurable increase, and the fact that 6000-series GPUs are seriously underperforming, leads me to believe something is loading the GPU.

 

I am glad you did that. Could you show exactly the numbers you achieved, especially the min and max?

Firstly, I don't know how many repeated tests you did (normally I do 3 or more runs for this kind of analysis, since the noise level in fps is around 3-4%).

Secondly, there could be a very, very small effect from the rendered clouds at Low, which are calculated by the GPU. But you could monitor your 3080's GPU load while you run the test, just to be sure it is ALWAYS below 80% (or lower). I mean, a 3080 at 1080p should never constrain this test.
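
That repeat-run logic can be made concrete with a small sketch; the fps values below are invented, and the point is only to compare an observed gain against the run-to-run spread before calling a tweak real:

```python
import statistics

# Hypothetical avg-fps results from repeated 60 s benchmark runs.
stock = [120.1, 123.8, 121.3]
overclocked = [127.5, 129.0, 128.2]

def summary(runs):
    mean = statistics.mean(runs)
    noise = statistics.stdev(runs) / mean  # run-to-run spread as a fraction
    return mean, noise

stock_mean, stock_noise = summary(stock)
oc_mean, _ = summary(overclocked)
gain = (oc_mean - stock_mean) / stock_mean

print(f"stock {stock_mean:.1f} fps (noise {stock_noise:.1%}), gain {gain:+.1%}")
# Only treat the gain as real if it clearly exceeds the ~3-4% noise level.
```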

 

