
SYN_Vander BENCHMARK v6 to measure IL-2 performance in monitor & VR


Recommended Posts

Posted (edited)
10 hours ago, chiliwili69 said:

 

Here I would not go with the 13600K, if budget allows. In all previous tests across the Intel generations the i5s were always below the i7s and i9s.

I think the cache sizes matter for IL-2, especially in VR. I could go to the table to find examples, but this is what comes to mind if someone asks me.

 

And if budget allows, I would go for the i9 for the same reason; look at the cache sizes and the higher stock turbo:

 

https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-13700KF&id=5061

https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i9-13900KF&id=5055

 

The single-threaded number has correlated quite well with IL-2 VR performance (if the GPU is not a constraint).

 

I agree that a properly tuned 13900K would have an advantage over a properly tuned 13700K/13600K by a certain margin, but by how much is the question. The 13900K has slightly more L1 to L3 cache and a higher boost frequency at stock, but it also depends on silicon quality and therefore on the amount of voltage required to reach those frequencies.

 

The additional E-cores on the 13900K are, for the most part, irrelevant for gaming purposes. I have a constant 90 fps during online/offline gaming in VR with the settings below on the G2 using OpenXR. There are a few settings I keep off due to personal preference (shadow quality, mirrors, etc.), but overall I don't seem to have a CPU bottleneck at all with this CPU. Very rarely it may dip slightly in situations with an excessive number of aircraft/AI/ground objects that affect CPU usage, but it is rare.

 

I do, however, have a slight GPU bottleneck now with the 4090 when upping MSAA from 2x to 4x or higher. Not too much at 4x, but at 8x it's way too much.

 

 

 

[Attached image: Screenshot 2023-01-02 201640.png (in-game settings)]

Edited by DBCOOPER011
  • Like 1
Posted (edited)
21 hours ago, DBCOOPER011 said:

 

My recommendation would be a 13700K or 13600K processor, and either a Z790-D4 or Z690-D4 motherboard. Raptor Lake can obtain/sustain higher frequencies than Alder Lake, and the 13700K is the same price as a 12900K. Or you can save 100 bucks with the 13600K. I'm using a cheap Asus Prime 790-P mobo, and it's working great for me; I get 150+ FPS on the settings I have above. I prefer the Asus BIOS as there are a lot of options available to configure. I believe that upgrading to DDR5 might be a negligible gain at this time.

Thank you DBCOOPER011 for your recommendations. I will possibly go the route you indicate regarding DDR4 vs DDR5, choosing for now to stay on the DDR4 route.

https://www.techpowerup.com/review/intel-core-i9-12900k-alder-lake-ddr4-vs-ddr5/7.html

 

I want to ask you about Asus motherboards: are you happy with them? I have read that there have been some issues with their manufacturing quality, and that they didn't update their BIOS (talking about the Asus Prime 690-P mobo, I think), so the boards couldn't work with the new-generation Intel CPUs or wouldn't run XMP RAM profiles...

What do you think, should I trust Asus or should I go with Gigabyte?

 

I already own a Gigabyte Z590 Aorus Elite AX and have had zero problems so far... what do you think about the Gigabyte Z690 Aorus Master? Is it worth the extra money compared to the Asus Prime 690-P mobo?

Edited by dgiatr
Posted (edited)
On 12/28/2022 at 5:20 PM, NoBreaks said:

Since only about 1/3rd the records show the tester has a 4k monitor, that inherently restricts those testers to only the test where, by design, a CPU bound scenario is created, which means the GPU is going to be sitting there, bored.

I did not quite get it. For me, those playing at full 4K or more put an additional load on the GPU compared to those playing at full HD (1080p). Logical, as there are four times more pixels.

When the CPU sends an instruction to the GPU, the GPU needs to process all the additional pixels of a 4K display.

At a given instant in the sim, the CPU load stays basically the same, but the GPU load increases if you go from an HD to a 4K monitor.

So why would the 4K testers create a CPU-bound scenario with the GPU waiting, bored? I would say the CPU will have to wait if the load on the GPU is high.

 

 

Edited by IckyATLAS
Posted (edited)
4 hours ago, dgiatr said:

I want to ask you about Asus motherboards: are you happy with them? I have read that there have been some issues with their manufacturing quality, and that they didn't update their BIOS (talking about the Asus Prime 690-P mobo, I think), so the boards couldn't work with the new-generation Intel CPUs or wouldn't run XMP RAM profiles...

What do you think, should I trust Asus or should I go with Gigabyte?

 

I already own a Gigabyte Z590 Aorus Elite AX and have had zero problems so far... what do you think about the Gigabyte Z690 Aorus Master? Is it worth the extra money compared to the Asus Prime 690-P mobo?

It depends on how you configure your motherboard, I guess. I prefer using adaptive voltage, which lets the VRM load lines dictate how much voltage is used. I also use Thermal Velocity Boost (TVB), which allows the CPU cores to boost to higher frequencies based on temperature. For me, this needs the least amount of voltage for all-core workloads and also keeps the temps down, while additionally allowing much higher frequencies to be reached during light-load situations.

 

ASUS has a really good BIOS for adaptive/TVB tuning IMO, which is why I have an ASUS motherboard now. I haven't had any problems with the BIOS or with XMP on any of the ASUS boards I've had so far. But if you're doing a static OC, I believe any motherboard brand will do.

 

Edited by DBCOOPER011
Posted
2 hours ago, IckyATLAS said:

I did not quite get it. For me, those playing at full 4K or more put an additional load on the GPU compared to those playing at full HD (1080p). Logical, as there are four times more pixels.

When the CPU sends an instruction to the GPU, the GPU needs to process all the additional pixels of a 4K display.

At a given instant in the sim, the CPU load stays basically the same, but the GPU load increases if you go from an HD to a 4K monitor.

So why would the 4K testers create a CPU-bound scenario with the GPU waiting, bored? I would say the CPU will have to wait if the load on the GPU is high.

 

 

Hi....I understand your question, and I'm sorry if my original statement caused confusion.  In retrospect, it was poorly worded.  I have edited it, so that it reads

 

Since only about 1/3rd the records show the tester has a 4k monitor, that inherently restricts testers without 4k monitors to only the test where, by design, a CPU bound scenario is created, which means the GPU is going to be sitting there, bored. 

 

It was explained better, earlier in that same post (unedited):

 

Because testers who don't have 4k monitors cannot post results showing higher-resolution (monitor) results, the test can't really offer a comparison where higher-end, newer GPUs are almost certain to perform better. 

 

I hope this clarifies, and I appreciate your bringing it to my attention.

  • Thanks 1
Posted
20 hours ago, NoBreaks said:

Bottom line: It's already happened (at least once; actually more) that a person factually looked at the test results as proof of their own absolutely wrong idea concerning AMD hardware (actual published specs confirm this).  If it happened once, it can happen any number of times.  So, it is factual that these test results have actually been misinterpreted - whether that's a fault of the test or not is another matter, I suppose.  But it is definitely what happened.

 

I don't think this happened. You took some words from @Voyager out of context and drew your own wrong interpretation from them.

 

Let me recap, but please don't make a mountain out of this molehill.

 

Voyager just said this after the latest test of a 7900XTX:

 

I've suspected that the RDNA2 problems were due to their memory architecture. Remember their top end card had the same non-cache bandwidth as the 5700 XT, and performed the same or slightly worse in Il-2.

One of the many things they changed in RDNA3 was to go from a 256 bit bus to a 384 bit bus like the nVidia cards have had for a while. I sort of wonder if Il-2, while it uses less vram than other flight sims, ends up accessing far more of it than other flight sims tend to?

 

And his words were well founded. For IL-2 (both in VR and on a monitor, but especially in VR) the 5700XT performs as badly as a 6900XT. In this forum thread we assume we only talk about IL-2, not other games. You can test this yourself: take a 5700XT and a 6900XT and perform as many runs as you wish on any kind of PC.

 

Voyager was making a good observation about the bus size of both cards (256-bit on the 6900XT and 5700XT), and he was wondering about it as a potential explanation for the bad performance of all the previous AMD 5XXX and 6XXX series cards.

 

I have cropped some of the few tests with the 5700XT and 6XXX cards, grouped by Intel/AMD CPU. They gave similar numbers in the CPU test, as expected, since the purpose there is to bottleneck the CPU (Voyager knows that pretty well, I can tell you that). And you will see that in the GPU 4K test the 5700XT is as bad as the other 6XXX cards:

 

 

[Attached image: amds-GPUs.png (cropped table of 5700XT and 6XXX test results)]

 

20 hours ago, NoBreaks said:

It happens that this sim shows terrible CPU utilization

 

BTW, how do you measure CPU utilization when running IL-2 on a monitor?

Posted
20 hours ago, NoBreaks said:

you have to consider what "equivalent" GPUs even means

 

"Equivalent" meant what I said when I used the word, i.e. the public Passmark index for GPUs:

https://www.videocardbenchmark.net/high_end_gpus.html

 

It has a large dataset of tested GPUs in different machines, but you can take any other public synthetic benchmark for that.

 

The real unknown is that we don't know why the AMD cards perform particularly badly in IL-2, both on monitor and in VR, compared with their "equivalents" from NVIDIA.

 

Posted (edited)
1 hour ago, chiliwili69 said:

 

I don't think this happened. You took some words from @Voyager out of context and drew your own wrong interpretation from them.

 

Let me recap, but please don't make a mountain out of this molehill.

 

Voyager just said this after the latest test of a 7900XTX:

 

I've suspected that the RDNA2 problems were due to their memory architecture. Remember their top end card had the same non-cache bandwidth as the 5700 XT, and performed the same or slightly worse in Il-2.

One of the many things they changed in RDNA3 was to go from a 256 bit bus to a 384 bit bus like the nVidia cards have had for a while. I sort of wonder if Il-2, while it uses less vram than other flight sims, ends up accessing far more of it than other flight sims tend to?

 

And his words were well founded. For IL-2 (both in VR and on a monitor, but especially in VR) the 5700XT performs as badly as a 6900XT. In this forum thread we assume we only talk about IL-2, not other games. You can test this yourself: take a 5700XT and a 6900XT and perform as many runs as you wish on any kind of PC.

 

Voyager was making a good observation about the bus size of both cards (256-bit on the 6900XT and 5700XT), and he was wondering about it as a potential explanation for the bad performance of all the previous AMD 5XXX and 6XXX series cards.

 

I have cropped some of the few tests with the 5700XT and 6XXX cards, grouped by Intel/AMD CPU. They gave similar numbers in the CPU test, as expected, since the purpose there is to bottleneck the CPU (Voyager knows that pretty well, I can tell you that). And you will see that in the GPU 4K test the 5700XT is as bad as the other 6XXX cards:

 

 

[Attached image: amds-GPUs.png (cropped table of 5700XT and 6XXX test results)]

 

 

BTW, how do you measure CPU utilization when running IL-2 on a monitor?

I'm sorry, but that's not correct.

 

What Voyager said (and this is a direct cut and paste) is:  Remember their top end card had the same non-cache bandwidth as the 5700 XT, and performed the same or slightly worse in Il-2.

 

The cards do not, in fact, have the same bandwidth.  If you read what I already posted - where I did the legwork to get the factual data from the specs:

 

The bandwidth of a 5700XT is 448 GB/s.  The bandwidth of a 6900XT is 512 GB/s (14.28% greater than a 5700XT).  Their "top end card" (RDNA2) was a 6950XT, which has a bandwidth of 576 GB/s - a whopping 28.5% greater than a 5700XT.

 

So, you see, the statement was inaccurate.  I didn't interpret anything, or take anything out of context.  Bandwidth and bus are not the same thing, but the confusion there isn't mine.
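
For reference, a minimal sketch of where those bandwidth figures come from (assuming the published 256-bit buses and 14/16/18 Gbps GDDR6 speeds for these cards; bandwidth = bus width / 8 * effective memory speed):

# memory bandwidth (GB/s) = bus width (bits) / 8 * effective speed (Gbps)
cards = {
    "5700 XT": (256, 14.0),  # 256-bit GDDR6 at 14 Gbps
    "6900 XT": (256, 16.0),  # 256-bit GDDR6 at 16 Gbps
    "6950 XT": (256, 18.0),  # 256-bit GDDR6 at 18 Gbps
}
for name, (bus_bits, gbps) in cards.items():
    print(f"{name}: {bus_bits // 8 * gbps:.0f} GB/s")
# prints 448, 512 and 576 GB/s: same bus width, different bandwidth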

 

Edited by NoBreaks
Posted

Incidentally, I *have* run (multiple) tests with both 5700XT and 6900XT, on the exact same platform...and I can assure you that (again, excluding the "CPU" 1080p test and looking at the actual GPU test)...a 6900XT scores *twice* what a 5700XT does.   As expected.  And, this is absolutely consistent with what several reliable online sources indicate about comparing the two: The 6900 scores about twice what the 5700XT does.

 

But, since everyone here is convinced IL2 is that much different...well, here's your benchmark.  And yes, I've run the tests many times, and confirmed the settings more than once.

 

6900XT
1080p
2022-12-31 16:55:09 - Il-2
Frames: 6066 - Time: 60000ms - Avg: 101.100 - Min: 83 - Max: 145

4k
2022-12-31 17:18:17 - Il-2
Frames: 5342 - Time: 60000ms - Avg: 89.033 - Min: 80 - Max: 97

 

 

5700XT

1080p

2023-01-01 12:39:41 - Il-2
Frames: 5988 - Time: 60000ms - Avg: 99.800 - Min: 81 - Max: 142

4k
2022-12-31 17:47:44 - Il-2
Frames: 2712 - Time: 60000ms - Avg: 45.200 - Min: 39 - Max: 49

 

Now, I realize most of you think I'm a liar or something (already accused me of such)...but if need be, I can possibly arrange to have the results confirmed by a third party.

 

Or, we could just accept what several reliable online sources can tell us about 5700XT vs 6900XT.

Posted
3 hours ago, chiliwili69 said:

BTW, how do you measure CPU utilization when running IL-2 on a monitor?

 

Well, to be honest, if you're just going to dismiss it as "meaningless" (as you said already) there's not really much point in my telling you...is there?


I use tools for testing that are *widely* recognized throughout the industry as reliable.  As with any other tool and any other tester, I draw conclusions based upon evidence, data and experience.


When conditions in a game cause a 3090 to run about the same as a 1080 due to load on the CPU - while the CPU is scarcely loaded 20% and has *several* threads/cores sitting there doing nothing...


...well, that's poor CPU utilization.  The 3090 and the 1080 get very similar frame rates during this state, but the key is that the 3090 is reduced to ~30% load while the 1080 is still up around 90%.  This is observable to be associated with load changes on the CPU; it happens predictably and reliably.  The 3090 is sitting there bored because the CPU is bound - not unlike your "CPU test".

 

The problem is that this occurs at a *relatively* small amount of CPU load, while both my 5600X3D and 3090 are barely even doing anything.

 

Also, unlike your CPU test, this scenario wasn't specifically designed to create a bound CPU...it just happens under certain conditions.


And yes, I've seen it, and I can duplicate it at will, across multiple platforms.  And the 3090 is a fairly current high-end Nvidia GPU, not the junk everyone here seems to think AMD is.


But, as I said, you've already indicated it's meaningless (although I doubt the rest of the industry or the tool's creators would agree).

Posted
4 hours ago, NoBreaks said:

Well, to be honest, if you're just going to dismiss it as "meaningless" (as you said already) there's not really much point in my telling you...is there?

 

Are you going to name the tool, or just derail the entire thread with your pedantry?

 

This is the problem. You're acting like you don't actually want to engage with the folks on the thread, that you prefer to score internet points and prove how superior you are. You haughtily call people out for making ad hominem attacks, when no such attacks were made.

 

My prediction is you'll keep doing this for a few more pages, but you'll find people stop responding to you and write you off as a troll, or someone with a weird idea of social norms, or someone "on the spectrum" or some such. Alternatively you could start communicating like a normal person rather than trying to 'win' each interaction.

 

For anyone still reading this who cares, you can look at IL2 CPU performance using the built-in Windows tooling. With perfmon you can ask it to graph individual threads CPU usage, rather than overall usage. If you do that, you'll find there's one 'hot' thread that is using close to 100% of a CPU, a second 'warm' thread that is using maybe 60-80%, and then a bunch of other threads that aren't really doing much. This is a common problem in simulators, that the hot thread limits the overall performance. This is why, as the data in Chili's spreadsheet has shown, if you want to play IL2 with high framerates you need very good single core performance. The spreadsheet also shows that memory latency and frequency are a factor, which is further implied by the big jump in performance going from an AMD 5800X to a 5800X3D with its large 96MB cache but lower clock speed.
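
For a rough cross-check of the same thing without perfmon, here is a minimal Python sketch (it assumes the psutil package is installed and that the game process is named Il-2.exe; adjust the name if yours differs). Run it while the benchmark track is playing:

import time
import psutil

PROC_NAME = "Il-2.exe"   # assumption: adjust to the actual executable name
INTERVAL = 10.0          # seconds to sample while the track runs

proc = next(p for p in psutil.process_iter(["name"])
            if p.info["name"] and p.info["name"].lower() == PROC_NAME.lower())

def per_thread_cpu_seconds(p):
    # total CPU time (user + system) consumed by each thread so far
    return {t.id: t.user_time + t.system_time for t in p.threads()}

before = per_thread_cpu_seconds(proc)
time.sleep(INTERVAL)
after = per_thread_cpu_seconds(proc)

# fraction of one core used by each thread over the interval
usage = {tid: (after[tid] - before[tid]) / INTERVAL
         for tid in before if tid in after}
for tid, load in sorted(usage.items(), key=lambda kv: kv[1], reverse=True)[:8]:
    print(f"thread {tid}: {load:.0%} of one core")

If the hot-thread picture above is right, you would expect one thread near 100% of a core, a second one well below it, and the rest close to idle.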

  • Upvote 4
Posted
10 hours ago, NoBreaks said:

So, you see, the statement was inaccurate.

 

Yes, you are right on that point. In the first sentence he mentioned bandwidth, and in the second he talked about bus size. He might have used bus size in the first sentence to be more precise.

10 hours ago, NoBreaks said:

And, this is absolutely consistent with what several reliable online sources indicate about comparing the two: The 6900 scores about twice what the 5700XT does.

 

Nobody doubts that. In fact, if you go to the Passmark GPU benchmark you will see that the 6900XT scores 26555 and the 5700XT scores 16728. So, yes, there is a big difference between the two in all other sources. Our surprise came when we were testing the AMD cards (in general) in IL-2.

10 hours ago, NoBreaks said:

And yes, I've run the tests many times, and confirmed the settings more than once.

 

Thank you for running the test. Please provide all the other details of the PC as indicated in the instructions, so I can add this info to the table.

Posted
10 hours ago, NoBreaks said:

Now, I realize most of you think I'm a liar or something (already accused me of such)...but if need be, I can possibly arrange to have the results confirmed by a third party.

 

No, I don't think you are a liar. You are another source of information and facts for the table.

What you are obtaining is interesting.

As you said, the CPU tests are as expected (both are CPU constrained).

Your 4K results are interesting. The 6900XT performs much better than in previous tests (this could be due to changes in the IL-2 engine or better AMD drivers), which is good news for all 6900XTs in IL-2.

On the contrary, the 5700XT performs worse than in the previous test we had (only one). Again, this could be due to changes in the IL-2 engine, or who knows.

But this gives us new info and enriches the table.

 

Although the 6900XT still gave you a lower score in the 4K test than an "equivalent" NVIDIA card (for example a 3080Ti, with a Passmark score of 25435).

 

It is a pity that you don't use VR; it would be interesting to see how both cards perform in VR. But the info you brought is interesting.

10 hours ago, NoBreaks said:

but if need be, I can possibly arrange to have the results confirmed by a third party

 

If possible, yes; another data point for the table is always welcome.

Posted
8 hours ago, NoBreaks said:

while the CPU is scarcely loaded 20% and has *several* threads/cores sitting there doing nothing...

 

What I wanted to say is that the typical CPU utilization reported by Task Manager or MSI Afterburner is a number that does not represent how much the CPU is constraining the IL-2 engine.

The IL-2 engine has several threads, and they are distributed across the cores. But it seems there are one or two threads which are very heavy and bottleneck a core, yet only for a small fraction of time on any single core, since the threads keep jumping from core to core. That's why you will not see one core at 100% (for example, a single fully saturated thread on a 16-thread CPU shows up as only about 6% overall utilization).

 

You can read more here and the subsequent posts:

 

Posted (edited)
6 hours ago, chiliwili69 said:

 

Yes, you are right on that point. In the first sentence he mentioned bandwidth, and in the second he talked about bus size. He might have used bus size in the first sentence to be more precise.

 

Nobody doubts that. In fact, if you go to the Passmark GPU benchmark you will see that the 6900XT scores 26555 and the 5700XT scores 16728. So, yes, there is a big difference between the two in all other sources. Our surprise came when we were testing the AMD cards (in general) in IL-2.

 

Thank you for running the test. Please provide all the other details of the PC as indicated in the instructions, so I can add this info to the table.

 

Well, I'm glad we can finally agree that someone did actually make inaccurate statements that were (as I said) very misleading.  I'm sorry, but there it is.  As I mentioned earlier, the problem with this is that we don't know or control who's reading this (and it's not just members).  Inaccurate technical advice is inaccurate technical advice; it matters to me and it darn well ought to matter to everyone in *any* gaming/sim community.

 

I'm also glad I'm able to share proof that even in this IL2 benchmark, the 5700XT vs 6900XT comparison is very consistent with most other sources' comparisons of the two...they're just not the same, contrary to the original claim.  I'll gladly put in the details shortly.

 

But, if I may - and this has really been my point all along:

 

In the software/technology business, we have something called a 'sanity check'.  Even if the software (benchmark) tells us something, we're still always responsible to see if that data stands up to 'common sense'.  When the records that *appeared* to indicate a 5700XT is the same as a 6900XT were made available, the 'sanity check' should have kicked in:

 

"Hang on a sec...every reputable source on the planet says a 6900XT is twice as capable as a 5700XT ...so why are these numbers not consistent with that?"  When I ran the test, I got results that are consistent with every source I've looked at...so it makes perfect sense.  Just can't say that about the other records.

 

Instead of applying common sense and realizing there's just something wrong...we assume it's because "AMD sucks" - moreover, we actually begin to fabricate totally inaccurate reasons they suck:  "Well, you know these RDNA2 cards...even the top end cards have the same bandwidth..."

 

See what happened there?  It just went from complete fallacy to (supposedly) absolute fact, based on nothing more than inaccurate information.

 

What should have happened - IMHO - is the questionable data should be vetted.  (This is not at all unusual in the technology industry, nor is it unreasonable).  In fact, if you care about the usefulness of your study, you absolutely must be prepared to challenge data that doesn't seem to make sense.  And, sorry, but even in IL2, any GPU performance test that concludes a 5700XT is the same as a 6900XT...well, that's a red flag, period.

 

Instead, what's actually happened is it's all been lumped into "AMD sucks in this sim".  People begin to see conclusions that just aren't supported by the actual data, just because of the inherent (and misguided) bias against anything AMD.

 

I also spent time last night running some tests on the same platform, AMD vs an equivalent Nvidia card - it is exactly as you said, you'll be pleased to hear.  I will also post that here in a bit - but please keep in mind, not once did I say anything about AMD vs Nvidia before now.  I never claimed AMD is as good as Nvidia in this sim (or anywhere else), but somehow it got turned into that.  Wasn't me.

 

I do think that any comparison of AMD vs Nvidia has to include factors the benchmark and spreadsheet just don't show...but yes, my (single) test also concludes that an AMD card doesn't run the benchmark as well as a comparable, 'equivalent' Nvidia card.

 

The thing is, though - and I touched on this before - when we say 'comparable', what are we actually considering?  Is 'equivalent' the same thing?  There are ways this benchmark and spreadsheet don't test/record in which AMD cards might actually be "better" than Nvidia...even in IL2.  I do intend to do more test runs to illustrate this - but please keep in mind it takes considerable time and some effort to swap back and forth on the same platform, change settings, run multiple tests, confirm settings and results, etc...

 

...and, if I'm being honest, I'm very hesitant to even utter the suggestion in this forum that AMD might compete with Nvidia in some ways.

 

 

 

 

 

 

Edited by NoBreaks
  • Like 1
Posted
8 hours ago, Alonzo said:

For anyone still reading this who cares, you can look at IL2 CPU performance using the built-in Windows tooling. With perfmon you can ask it to graph individual threads CPU usage, rather than overall usage. If you do that, you'll find there's one 'hot' thread that is using close to 100% of a CPU, a second 'warm' thread that is using maybe 60-80%, and then a bunch of other threads that aren't really doing much. This is a common problem in simulators, that the hot thread limits the overall performance. This is why, as the data in Chili's spreadsheet has shown, if you want to play IL2 with high framerates you need very good single core performance. The spreadsheet also shows that memory latency and frequency are a factor, which is further implied by the big jump in performance going from an AMD 5800X to a 5800X3D with its large 96MB cache but lower clock speed.

 

Unfortunately, perfmon's clunkiness makes it hard to identify threads and the like. IMO the quickest way to get a snapshot view is probably Process Explorer from Sysinternals, or you can record a fixed-duration sample with Very Sleepy (it works well in conjunction with the benchmark test). To save everyone the trouble: I believe your results will look very much like this, which shows that a thread making calls to a function of the UI's middleware chews through CPU cycles like nobody's business.

Posted (edited)
4 hours ago, chiliwili69 said:

 

What I wanted to say is that the typical CPU utilization reported by Task Manager or MSI Afterburner is a number that does not represent how much the CPU is constraining the IL-2 engine.

The IL-2 engine has several threads, and they are distributed across the cores. But it seems there are one or two threads which are very heavy and bottleneck a core, yet only for a small fraction of time on any single core, since the threads keep jumping from core to core. That's why you will not see one core at 100% (for example, a single fully saturated thread on a 16-thread CPU shows up as only about 6% overall utilization).

 

You can read more here and the subsequent posts:

 

I appreciate the clarification - but please be aware I didn't look at any data and conclude the CPU was constraining the IL2 engine.  Quite the opposite, in fact.  I'm saying that when these conditions occur, the sim frame rate plummets while the GPU is at a very low state of utilization (~30%), and you can see in the CPU load that it gets *slightly* busier but nothing even close to what would explain why the GPU is being starved.

 

I agree about bottlenecking the core, however, what I have seen on multiple platforms and can reproduce at will shows it will continue for as long as the conditions which cause it are present - not a small fraction of time only.

 

I'm not expecting to see a core at 100%.  What I see (and can replicate at will) shows that every time the CPU load approaches "X" due to conditions in the sim, the GPU will become 'starved' for data to process and (provided the conditions continue) the GPU load will drop (in my case, to as little as 30%) - the net effect of which is a frame rate that is essentially the same as a 1080 in the same situation.

 

And if you're not looking at CPU and GPU load during the test runs (which the benchmark doesn't do of itself) then you'd never see this taking place.  You  might just as easily conclude (strictly as an example) "this card sucks in IL2", when actually this seems to show far more about the sim than it does about the hardware.

 

As I said earlier the problem when this occurs is that the CPU is actually sitting there with **plenty** of capacity left.  It got a little busy, yes, but it's still only loaded at a relatively small part of what it can do - but meanwhile, my 3090 has practically fallen asleep because it's not getting data.

 

Once again: When it gets to the point that a 3090 is being caused to run on the level of a 1080 (and yes, I have proof) while the 3090 is only at ~30% load...well, to me at least *that is* poor CPU utilization.

 

Hope that makes sense.

40 minutes ago, firdimigdi said:

Unfortunately, perfmon's clunkiness makes it hard to identify threads and the like. IMO the quickest way to get a snapshot view is probably Process Explorer from Sysinternals, or you can record a fixed-duration sample with Very Sleepy (it works well in conjunction with the benchmark test). To save everyone the trouble: I believe your results will look very much like this, which shows that a thread making calls to a function of the UI's middleware chews through CPU cycles like nobody's business.

*This.*  Thank you!

 

Although it appears firdimigdi is more experienced and knowledgeable in this area than I am, I believe he's making the point I just lack the technical skill to illuminate.

 

@ firdimigdi I don't want to misunderstand, so please let me ask you: Are you saying it looks like the "chewing through CPU cycles" is happening when it isn't/shouldn't be necessary?  IOW, is there some reason (that you can see) that this is problematic?  Would I be correct in saying it's just (sorry to oversimplify) "poor CPU utilization"?

 

Your time and effort in replying are appreciated.

Edited by NoBreaks
Posted
8 hours ago, Alonzo said:

 

Are you going to name the tool, or just derail the entire thread with your pedantry?

 

This is the problem. You're acting like you don't actually want to engage with the folks on the thread, that you prefer to score internet points and prove how superior you are. You haughtily call people out for making ad hominem attacks, when no such attacks were made.

 

My prediction is you'll keep doing this for a few more pages, but you'll find people stop responding to you and write you off as a troll, or someone with a weird idea of social norms, or someone "on the spectrum" or some such. Alternatively you could start communicating like a normal person rather than trying to 'win' each interaction.

 

For anyone still reading this who cares, you can look at IL2 CPU performance using the built-in Windows tooling. With perfmon you can ask it to graph individual threads CPU usage, rather than overall usage. If you do that, you'll find there's one 'hot' thread that is using close to 100% of a CPU, a second 'warm' thread that is using maybe 60-80%, and then a bunch of other threads that aren't really doing much. This is a common problem in simulators, that the hot thread limits the overall performance. This is why, as the data in Chili's spreadsheet has shown, if you want to play IL2 with high framerates you need very good single core performance. The spreadsheet also shows that memory latency and frequency are a factor, which is further implied by the big jump in performance going from an AMD 5800X to a 5800X3D with its large 96MB cache but lower clock speed.

 

ad ho·mi·nem
/ˌad ˈhämənəm/
adjective

(of an argument or reaction) directed against a person rather than the position they are maintaining.

 

Asking if I'm some guy who posted 10 years ago, and name-calling ("haughtily", etc.) are directed at me personally and thus, by definition, ad hominem.

 

I've asked politely.   Please discuss on the technical merits and leave the personal attacks out of it.

 

As for perfmon - I guess I just failed to include that among the "widely respected" tools ...as firdimigdi said: "clunkiness" (for lack of a better word).  Strictly my opinion, of course.

 

 

 

  • Like 1
  • Haha 1
Posted
20 minutes ago, NoBreaks said:

@ firdimigdi I don't want to misunderstand, so please let me ask you: Are you saying it looks like the "chewing through CPU cycles" is happening when it isn't/shouldn't be necessary?  IOW, is there some reason (that you can see) that this is problematic?  Would I be correct in saying it's just (sorry to oversimplify) "poor CPU utilization"?

 

Unfortunately all I can do is guess.

 

And my guess would be biased from having been in the unfortunate position of having worked with Scaleform (the middleware in question) on a project years ago, for a title released on gaming consoles. As I recall from my colleagues at the time, who were in charge of integrating said middleware, it was a pain in the ass to optimize.

 

So, going by that, and considering that it appears to be a UI-related function for placing/rendering UI elements on the screen, and that it exhibits this behavior even when there are no UI elements on screen that I can see, I can only guess that it is either a function that's in a high-frequency loop for a reason, or a function that gets polled by multiple loops (it seems to be tied to the number of AIs present). I am certain that the developers are well aware of this, which is also why there was specific mention of a UI change correlated with performance in recent interview videos (and also, as I've mentioned before, there was an ad on the Russian forum for a UI developer knowledgeable in Noesis, which is a different UI middleware).
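
As a toy illustration of that guess (not IL-2 code, just the pattern of a per-frame call polled once per AI), the cost of such a function scales linearly with the AI count, which is enough to keep one thread hot even with nothing visible on screen:

import time

def ui_update(elements=50):
    # stand-in for a middleware call that walks UI elements on every poll
    total = 0
    for i in range(elements):
        total += i * i
    return total

def frame(ai_count):
    # if each AI's loop polls the UI function once per frame...
    for _ in range(ai_count):
        ui_update()

for ai_count in (8, 32, 128):
    t0 = time.perf_counter()
    for _ in range(600):  # simulate 600 frames
        frame(ai_count)
    print(f"{ai_count:>3} AIs: {time.perf_counter() - t0:.2f} s for 600 frames")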

Posted

So, as promised, here are the details of the 5700XT v 6900XT test:

 

From CPUZ:

 

[Attached image: CPUZ.jpg (CPU-Z screenshot with system details)]

 

I believe this shows all the info asked for.

 

As before, the results are:

6900XT
1080p
2022-12-31 16:55:09 - Il-2
Frames: 6066 - Time: 60000ms - Avg: 101.100 - Min: 83 - Max: 145

4k
2022-12-31 17:18:17 - Il-2
Frames: 5342 - Time: 60000ms - Avg: 89.033 - Min: 80 - Max: 97

 

5700XT

1080p

2023-01-01 12:39:41 - Il-2
Frames: 5988 - Time: 60000ms - Avg: 99.800 - Min: 81 - Max: 142

4k
2022-12-31 17:47:44 - Il-2
Frames: 2712 - Time: 60000ms - Avg: 45.200 - Min: 39 - Max: 49

 

(This is the first time I've posted this 'result' so kindly indulge me if I'm overlooking something)

3 minutes ago, firdimigdi said:

 

Unfortunately all I can do is guess.

 

And my guess would be biased from having been in the unfortunate position of having worked with Scaleform (the middleware in question) on a project years ago, for a title released on gaming consoles. As I recall from my colleagues at the time, who were in charge of integrating said middleware, it was a pain in the ass to optimize.

 

So, going by that, and considering that it appears to be a UI-related function for placing/rendering UI elements on the screen, and that it exhibits this behavior even when there are no UI elements on screen that I can see, I can only guess that it is either a function that's in a high-frequency loop for a reason, or a function that gets polled by multiple loops (it seems to be tied to the number of AIs present). I am certain that the developers are well aware of this, which is also why there was specific mention of a UI change correlated with performance in recent interview videos (and also, as I've mentioned before, there was an ad on the Russian forum for a UI developer knowledgeable in Noesis, which is a different UI middleware).

Thank you...and if I might press you just another moment or two...

 

Would your guess allow for the conditions I've described above (assuming they're even related, of course)?

 

Put another way:  Does what I described above (concerning CPU utilization) seem like something that actually can (does) happen in the sim, in your opinion?

Posted

And lol I do sincerely appreciate the "pain in the ass" factor.  I've worked in computers my entire adult life, was a National Service Manager for a very large international technology interest...and I always marveled at what the software folks had to deal with.

 

But, with that said, my comments aren't concerned with how difficult it is (and I'm sure it is)...I just want to know if a. It is what it appears to be, and b. (possibly) if there's even a remote chance it could improve at some point.

Posted (edited)
17 minutes ago, NoBreaks said:

Put another way:  Does what I described above (concerning CPU utilization) seem like something that actually can (does) happen in the sim, in your opinion?

 

What you describe is possible if thread locking occurs or if a thread is stalled waiting for whatever reason:

https://learn.microsoft.com/en-us/gaming/gdk/_content/gc/system/overviews/finding-threading-issues/threads-locking

 

I have no idea if this is the case with IL-2, because checking would require investing too much time in profiling it, and even if one did that and proved it, there's nothing a player can do about it. But it's possible, sure. And once more, if it is the case, then the developers are surely aware of it.
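
For what it's worth, a toy sketch of the kind of stall that page describes (not IL-2 code): the "render" loop below is capped by how long a "sim" thread holds a shared lock, so frame output drops even though the machine as a whole is barely loaded:

import threading
import time

lock = threading.Lock()
deadline = time.monotonic() + 5   # run the toy for 5 seconds
frames = 0

def sim_thread():
    # "hot" thread: holds the lock for ~9 ms of work per iteration
    while time.monotonic() < deadline:
        with lock:
            t0 = time.monotonic()
            while time.monotonic() - t0 < 0.009:
                pass
        time.sleep(0.001)          # brief window where the lock is free

def render_thread():
    global frames
    while time.monotonic() < deadline:
        with lock:                 # stalls here while the sim thread works
            pass
        frames += 1                # a real GPU would sit idle during the stall
        time.sleep(0.001)          # stand-in for submitting a cheap frame

workers = [threading.Thread(target=sim_thread), threading.Thread(target=render_thread)]
for t in workers:
    t.start()
for t in workers:
    t.join()
print(f"frames in 5 s: {frames} (~{frames / 5:.0f} fps)")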

 

EDIT:

I must say tho, this tangent is severely derailing a very specific and very useful thread.

 

Edited by firdimigdi
  • Like 1
Posted (edited)

...there's nothing a player can do about it but it's possible, sure.

 

*sigh* Would that it were.

 

Thanks.

Edited by NoBreaks
Posted
29 minutes ago, firdimigdi said:

I must say tho, this tangent is severely derailing a very specific and very useful thread.

 

 

I appreciate your perspective. I'll try again to explain: My perspective is that the benchmark has been misinterpreted before, and cited as proof of inaccurate conclusions concerning hardware performance. 

 

Meanwhile, the benchmark might actually be causing the same issues I've seen outside the test runs. That part is yet unknown, but I don't think anyone can say it's not, for the very simple reason that no one really knows what causes it.

 

How could anyone say - accurately - that this is *not* happening? We simply have to acknowledge the possibility exists.

 

In any event, I'm not sure how a simple discussion (provided it's kept civil) is harming anything.  Suppose there is some relationship between the two...are we actually afraid to consider that?

 

If technical people don't ask questions and consider answers, how does anything ever improve?

 

If there is that much opposition to considering potential for improvement, then perhaps my time and contribution here are misguided. I'll gladly step out.

 

But I haven't seen anyone else willing to consider things outside the popular (sometimes inaccurate) opinions here.

Posted
19 minutes ago, NoBreaks said:

In any event, I'm not sure how a simple discussion (provided it's kept civil) is harming anything.  Suppose there is some relationship between the two...are we actually afraid to consider that?

 

Not at all, but it warrants its own thread instead of hijacking the thread that is very specifically about "which hardware and which tweaks yield the best performance for IL-2 in its current state". This thread is excellent help for people who are focused on just this game, with limited free time and budget, and who would like information about what they can change in terms of hardware or hardware settings to increase the game's performance and their enjoyment of it.

  • Thanks 1
  • Upvote 6
Posted (edited)

Hi guys, I've just discovered something regarding the HUD settings. I know there is a performance penalty when displaying the HUD, but I am still getting the performance drop even with Settings / Flight interface / Show HUD by default set to off.

I have always had this set to off, as I don't use the HUD and prefer to keep things as realistic as possible.

I have run some benchmark tests to show what I mean.

 

CPU test

Test 1 is with the HUD on by default in settings, but disabled with the H key, then the test run with P.

Test 2 is with the HUD off by default in settings (confirmed by the "HUD disabled" message at the start of the test), then press P.

Test 3 is with the HUD off by default in settings, but by pressing H to enable and then H again to disable, you gain the performance lift as in Test 1.

 

I'm not as technically minded as others, but it seems like something is still running in the background even when the HUD is off in the settings, unless you enable it with H and then disable it with H again. To rule out something being wrong at my end, maybe someone could check this also. I hope all this makes sense.

 

Test1
Settings/flight interface/show hud by default = Yes (hud enabled)

press H (hud disabled)
press P

2023-01-06 11:49:22 - Il-2
Frames: 7951 - Time: 60000ms - Avg: 132.517 - Min: 112 - Max: 181

2023-01-06 11:51:33 - Il-2
Frames: 7961 - Time: 60000ms - Avg: 132.683 - Min: 111 - Max: 183

2023-01-06 11:53:47 - Il-2
Frames: 7946 - Time: 60000ms - Avg: 132.433 - Min: 112 - Max: 181

 

Test2
Settings/flight interface/show hud by default = No (hud should be disabled)

Press P

2023-01-06 12:13:01 - Il-2
Frames: 7524 - Time: 60000ms - Avg: 125.400 - Min: 104 - Max: 166

2023-01-06 12:15:06 - Il-2
Frames: 7578 - Time: 60000ms - Avg: 126.300 - Min: 103 - Max: 166

2023-01-06 12:17:06 - Il-2
Frames: 7701 - Time: 60000ms - Avg: 128.350 - Min: 105 - Max: 170

 

Test3
Settings/flight interface/show hud by default = No (hud disabled)

press H (hud enabled) press H (hud disabled)
press P

2023-01-06 12:19:21 - Il-2
Frames: 7955 - Time: 60000ms - Avg: 132.583 - Min: 113 - Max: 188

2023-01-06 12:21:39 - Il-2
Frames: 7962 - Time: 60000ms - Avg: 132.700 - Min: 111 - Max: 183

2023-01-06 12:23:43 - Il-2
Frames: 7941 - Time: 60000ms - Avg: 132.350 - Min: 108 - Max: 186

Edited by shirazjohn
  • Like 1
  • Upvote 4
Posted

With 9 PC builds over 20+ years (build 10 is coming in March 2023), this is what I have found to get the best experience:

  • Buy the best hardware you can afford
  • Invest in the best cooling you can afford (...yes, I owned a phase change cooling unit at one time in the late to mid 2000s that ended up in the trash can)
  • Run game at max graphic settings
  • Keep your frames per second where you don't notice stuttering with VSync enabled

Running any game over an avg of 60 fps with no stutters at max settings is the same experience as running at an avg of 120 fps at max settings.

 

That said, figuring out which settings give the best experience is where I think the conversation here should lie (running HDR, SSAO, Sharpen, etc.), which is what I see here a lot, and it is a very good use of the time of the smart people who have provided very constructive criticism over the years and given us cool tools to help us chase the dragon! (Yes - I love Steely Dan, he's a lyrical genius.)

 

What I think is a waste of our time here are the "rate my new system" posts.

 

Keep up the good work, contributors, helping out the less knowledgeable and those who ask for help BEFORE buying. LOL

  • Confused 1
Posted (edited)
6 hours ago, shirazjohn said:

Hi guys, I've just discovered something regarding the HUD settings. I know there is a performance penalty when displaying the HUD, but I am still getting the performance drop even with Settings / Flight interface / Show HUD by default set to off.

I have always had this set to off, as I don't use the HUD and prefer to keep things as realistic as possible.

I have run some benchmark tests to show what I mean.

 

 

Nice find, I just ran the CPU test and got another 3 fps or so and about 1 fps in the VR2 test..

 

2023-01-06 11:58:51 - Il-2
Frames: 9488 - Time: 60000ms - Avg: 158.133 - Min: 138 - Max: 211 (hud off)

2023-01-06 12:08:19 - Il-2
Frames: 9256 - Time: 60000ms - Avg: 154.267 - Min: 132 - Max: 207 (hud on)

 

VR Test 2:

2023-01-06 12:34:03 - Il-2
Frames: 5168 - Time: 60000ms - Avg: 86.133 - Min: 67 - Max: 91 (hud off)

2023-01-06 12:37:43 - Il-2
Frames: 5101 - Time: 60000ms - Avg: 85.017 - Min: 69 - Max: 91 (hud on)

Edited by DBCOOPER011
added info
  • Upvote 1
Posted (edited)
59 minutes ago, DBCOOPER011 said:

Nice find, I just ran the CPU test and got another 3 fps or so and about 1 fps in the VR2 test..

It's not very useful that when you disable the HUD in settings it still uses resources as if it were enabled, but at least I know now that I have to use the H key regardless.

 

Also, I'm not sure if the benchmark is supposed to be run with the HUD on or off; there aren't any instructions for this as far as I can see, just map and icons.

 

But I do get more consistent results with the HUD off.

Edited by shirazjohn
Posted
22 hours ago, shirazjohn said:

Also, I'm not sure if the benchmark is supposed to be run with the HUD on or off; there aren't any instructions for this as far as I can see, just map and icons.

 

We really can agree on whatever we want. In principle it should be the one which gives better performance, which corresponds to the Test 1 or Test 3 methods you described.

 

Perhaps it is simpler to have the option checked on (Settings / Flight interface --> Show HUD by default = Yes (HUD enabled)) and then press H to turn off the HUD before pressing P.

Posted (edited)
2 hours ago, chiliwili69 said:

Perhaps it is simpler to have the option checked on (Settings / Flight interface --> Show HUD by default = Yes (HUD enabled)) and then press H to turn off the HUD before pressing P.

Hi Chilli, yes, I agree; I think this would be the best solution for the benchmark.

 

For me, part of the reason for disabling the HUD in the flight interface was to avoid mapping the H key to my HOTAS, as I don't use the HUD anyway, but I think it will be simpler to do this now.

Edited by shirazjohn
Posted
On 1/2/2023 at 10:25 AM, NoBreaks said:

 

Ah, excellent - an actual link!  :)  :)  :) Thank you.

 

Having that reference makes it crystal clear where the idea comes from, thus allowing the reader to make a more informed decision about what's being presented.  There appears to be a good deal of testing in that review, and it would *seem* to warrant at least some consideration.

 

Now that there is an actual reference, let's look at something closely...here's a quote from that review:

 

One major issue although affecting relatively few gamers is poor VR RX 7900 XTX performance compared with the RTX 4080.  It’s going to need some attention from AMD’s driver team before we can recommend the RX 7900 XTX for the best VR gaming.

 

It says "affecting relatively few gamers" ...so, it doesn't seem appropriate to base our entire opinion on what one review says about 'relatively few gamers'...at least to me.  I don't know if this really proves "AMD sucks compared to Nvidia for VR" lol

 

In fact it even goes on to say that more attention from AMD's driver team is needed...these days, it's not at all uncommon for hardware to ship with immature drivers...everything (including Nvidia and my stupid TV for that matter) is *constantly* being updated.  So to be more accurate, we might say (for example) "AMD might not be as good as Nvidia for VR right now, but then the product's just been released and driver updates are all but certain, at which point AMD might actually be even better than Nvidia."

 

And, incidentally, I'm not sure I can agree with the idea of generalizing "all AMD sucks for VR" based on a review that's only covering the very latest GPU from AMD - if that is the actual source; we don't know since no sources have been given.  That's among the problems with not giving sources.

 

And the biggest problem with not giving sources, of course, is that a reader can't make an informed decision when there's nothing for them to actually consider (except opinion, that is).

 

Babeltech is about the only professional reviewer that I've seen who is benchmarking AMD and nVidia GPUs in VR with frame time graphs. They also have frame time graphs of the 6950 XT and 3080 Ti from the 4080 FE review:

 

The $1199 RTX 4090 Founders Edition Performance of 50+ Games
https://babeltechreviews.com/is-the-rtx-4080-fe-worth-1200-today-50-game-analysis/

 

If you compare the lines used in the 4080 to the 7900XTX review, they are the same data set. 

 

You seem to be under the misunderstanding that we are professional paid reviewers with money to spend. Yes, if I were doing these for work, of course I would have purchased a 5700 XT, 6900 XT, 2080 Ti, 3080, 3090 and at least one 12*00 and 13*00 CPU with the appropriate RAM version.

 

And I would have expensed it to the company and/or written it off for tax purposes (and I could certainly use a tax write-off like that).

 

As it is, I've only been able to test on the 1080 TI and 3080 Ti with the I7-4770K, R7-3800X, 5800X, 5800X3D, against the Reverb G1 and Vive Pro 2 headsets. One has to draw the line somewhere. 

 

The reality is, we have to make judgements based on the data we've got. And based on the data available at the time, the 5700 XT appeared to be comparable to a 1080 TI in Il-2 (which is reasonable since it is comparable in other games) and the 6900 XT was significantly underperforming. 

 

With your test results, it appears I had seriously overestimated the 5700 XT's performance in Il-2. The few 1080 Ti 4K benchmarks show it running about on par with your new 6900 XT 4K result, which in turn is about twice your 5700 XT.

 

As for the 448 GB/s vs 512 GB/s vs 484 GB/s, yes, they are not identical, but when the 30-series, 40-series and RDNA3 cards are running at 936 GB/s, 1008 GB/s and 960 GB/s respectively, it's not a meaningful difference.

 

Now, that said, the cache thrash theory was based on the mistaken belief that the 5700 XT was performing on par with a 1080 Ti. But with the 4K testing in Il-2 it is pretty far behind it, and performance seems to correspond more directly with the number of compute units (the 6900 XT's 80 CUs vs the 5700 XT's 40), though I'm not yet seeing anything that lines up with how the nVidia cards perform.

 

Addendum: re-reviewing the available test results, are you sure your 5700 XT results are good? The only other 4K 5700 XT result I can see significantly outperforms your card in 4K, and that CPU is weaker. It performs much closer to the 1080 Ti and 2080 Ti cards, so I'm wondering if something might have munged your test runs? Did the card meet expected performance in other benchmarks, or was it slow there too?

  • Upvote 1
MilitantPotato
Posted (edited)

I'd like to politely re-request a benchmark that doesn't stall the render engine trying to manage AI, so we can have an actual GPU-biased test. We could avoid extremely memory-intensive and unreasonable settings like 8x MSAA and still get a viable comparison between graphics cards. We could even use the same benchmark, but with all AI except the chased Il-2 removed.

 

It would make both the 4K and VR3 tests actually meaningful and more in line with what folks might expect during multiplayer or the majority of single-player missions, where time stretching covers up the really CPU-demanding portions.

Edited by MilitantPotato
MilitantPotato
Posted (edited)

Anecdotally, I've never had framerates nearly as low as this benchmark gives, except in areas online with several fires and lots of ground targets, and even then, when fully CPU bound, I get 50% more fps. In single player I'll get some odd warping of high-speed AI and a sense of slow motion at the height of a mission, but my framerates are maxed with the expected GPU usage.

 

Also, given the way AMD drivers offload a fair amount of the render pipeline to CPU cores, the unusually CPU-bound nature of this benchmark could be skewing results in a way that doesn't reflect most real gameplay experiences, maybe.

Edited by MilitantPotato
Posted
23 hours ago, MilitantPotato said:

I'd like to politely re-request a benchmark that doesn't stall the render engine trying to manage AI, so we can have an actual GPU-biased test. We could avoid extremely memory-intensive and unreasonable settings like 8x MSAA and still get a viable comparison between graphics cards.

 

The 8x MSAA setting was used as a way to bottleneck the GPU. I was experimenting with MSAA in 4K and saw that 8x put the card at 100%:

 

Today we have extremely powerful GPUs like the 4080 and 4090. So, as you observed, in the GPU test at 4K the bottleneck is not the GPU (as intended) but the CPU.

 

Ideally we should have a way to load the GPU more, so the test is constrained by the GPU and not the CPU, but all the GPU-demanding items have already been maxed out (clouds, MSAA, and 4K).

 

If we had 8K monitors that would be perfect for testing, but they are not common. I was thinking of using supersampling in the NVIDIA panel for the 4K test.

 

Does anybody in the forum know how we could saturate the GPU running the SYN_Vander benchmark?

Posted

4x/8x SGSSAA through Nvidia Inspector with 4x MSAA or 8x MSAA. I have got a 3080 at 4K and my GPU is at 98-100%.

Check this:
 

 

 

Posted

I did CPU tests on version 5.004b.

 

Motherboard:   Asus Rog Maximus Z790 Apex
 CPU:                 Intel Core i9 13900k
 CPU Freq:         6.1 Ghz
 Cores:               8P+16E
 Threads:           32 
 RAM size:         32GB (2x16GB)
 RAM Freq:        8000 MHz 
 NB Freq:           5000 MHz 
 RAM timings:   36-46-46-58 CR2
 GPU:                 Nvidia 4090 FE
 HMD:                Varjo Aero

 

2023-01-12 20:58:58 - Il-2
Frames: 9857 - Time: 60000ms - Avg: 164.283 - Min: 141 - Max: 221

2023-01-12 21:01:02 - Il-2
Frames: 9701 - Time: 60000ms - Avg: 161.683 - Min: 140 - Max: 213

2023-01-12 21:03:00 - Il-2
Frames: 9793 - Time: 60000ms - Avg: 163.217 - Min: 140 - Max: 218
 

  • Thanks 2
Posted
14 hours ago, WallterScott said:

Intel Core i9 13900k

 

Thanks! It is a top machine, and as usual you have set the top bar for IL-2 performance again.

It is the first 13900K tested here.

 

Would you be so kind as to run the VR tests? You would be the first to run the IL-2 tests with an Aero.

  • Upvote 1
Posted
20 hours ago, WallterScott said:

I did CPU tests on version 5.004b.

 

Motherboard:   Asus Rog Maximus Z790 Apex
 CPU:                 Intel Core i9 13900k
 CPU Freq:         6.1 Ghz
 Cores:               8P+16E
 Threads:           32 
 RAM size:         32GB (2x16GB)
 RAM Freq:        8000 MHz 
 NB Freq:           5000 MHz 
 RAM timings:   36-46-46-58 CR2
 GPU:                 Nvidia 4090 FE
 HMD:                Varjo Aero

 

 

 

Nice! Are you running that 6.1 GHz all-core?

Posted
8 hours ago, chiliwili69 said:

Would you be so kind as to run the VR tests? You would be the first to run the IL-2 tests with an Aero.

I'm very interested myself. I will conduct tests with Aero in the near future.

 

2 hours ago, DBCOOPER011 said:

Nice! Are you running that 6.1 GHz all-core?

Thanks. I'm setting 6.1 for only two cores, 6.0 for 4 cores, 4.9 for 6, and 4.8 for all. Adaptive 1.5 V, LDC 4 (Asus).
