chiliwili69 (Author) Posted November 8, 2022 15 hours ago, DBCOOPER011 said: VR Test 2: Frames: 5316 - Time: 60000ms - Avg: 88.600 - Min: 83 - Max: 92 HEY! This performance is amazing. You are again at the top limit in VR. No more options can be raised to load the CPU. All the data you provided above is gold for future 13th gen owners.
-3RW-ghyslain Posted November 11, 2022 Anecdotal report. Specs: CPU: i9 13900KF; GPU: RTX 4090 (baseline Gigabyte edition card); RAM: 2x16GB 5200MHz DDR5; PSU: 1200W; Storage: 2TB M.2 SSD; HMD: Pimax 8KX, normal FOV, 75Hz. Everything running at stock speed, other than the RAM running at its designated XMP OC profile. Graphics settings: High preset with maxed-out textures (4K textures unchecked, though, since there is no noticeable difference yet a massive performance drop), shadows disabled, AA disabled (I prefer sharpness to the blurriness induced by anti-aliasing), mirrors disabled. PiTool running at 1.0 render resolution and SteamVR at 100%. I flew today on the Finnish MP server, over busy frontline areas with 10+ planes involved in furballs. My frame time kept around 8-9ms, with some very rare frames sinking to 15-20ms (think 1% lows, if not 0.1% lows). Overall I had a really smooth experience. I could probably run 90Hz with those settings, given that would require an 11.1ms or lower frametime. The screen clarity we get with those settings is out of this world... Furthermore, I tried to run the large FOV preset, again at 1.0 render resolution and 100% SteamVR supersampling, but failed to get it stable below 13.3ms frame time (I'd have it around 13-14ms frametime in the Berloga server, which is devoid of any real assets). So the large preset is still a no-go at that resolution, but to be fair, we only lose 20 degrees of FOV. And said outer edge of the FOV is blurry anyway due to lens distortion, so no big loss. Cheers!
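For reference, the frame-time budgets quoted above follow directly from the refresh rate: the budget in milliseconds is just 1000 / Hz. A minimal sketch of that conversion (the function name is only for illustration):

```python
# Frame-time budget for a given refresh rate: 1000 ms divided by Hz.
def frametime_budget_ms(refresh_hz: float) -> float:
    """Max per-frame render time (ms) that still sustains the refresh rate."""
    return 1000.0 / refresh_hz

for hz in (75, 90):
    print(f"{hz} Hz -> {frametime_budget_ms(hz):.1f} ms per frame")
# 75 Hz -> 13.3 ms per frame
# 90 Hz -> 11.1 ms per frame
```

So an 8-9 ms frametime leaves headroom at 75Hz (13.3 ms budget) and would also fit inside the 11.1 ms budget that 90Hz requires.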
1Sascha Posted November 12, 2022 Going to throw in more anecdotal stuff here: I've been playing on Stalingrad summer and Moscow winter in VR these past few days, and even with slightly upped graphics settings, 1.23 effective PD, and flights of multi-engine bombers and lots of fighters on the map, everything was smooth for me. No comparison to Kuban and especially Normandy. I'm going to guess that Kuban and Normandy are either generally more detailed and thus more demanding on the hardware than the older maps, or that it's some combination of a more detailed map and bomber AI that leads to the stutters I'm encountering. Perhaps we should have a new benchmark mission/track set on either Kuban or Normandy - even if only for the purpose of trying to isolate which factor or factors (or combination of factors) are the probable culprits here. I'm very sure this isn't a fluke, since I've played both older maps all week now for at least an hour per day and never encountered anything but 79/80 FPS and very smooth flying. Whereas with Kuban and especially Normandy it's a bit of a lottery. Some career missions run fine, others are near unplayable once there are lots of planes around me. I still suspect the AI is somehow involved, but I cannot be certain of that. S.
MilitantPotato Posted November 12, 2022 7 hours ago, 1Sascha said: Perhaps we should have a new benchmark mission/track set on either Kuban or Normandy […] The biggest source of stutters for me is having distant buildings on. I've debated moving IL-2 to a slow mechanical drive to see if that helps, since it seems IL-2 doesn't load things gracefully, and the NVMe just causes extreme CPU usage every time a new asset loads. Southern Normandy is especially brutal, both for frame rate and stutter.
chiliwili69 (Author) Posted November 14, 2022 On 11/12/2022 at 12:12 PM, 1Sascha said: Perhaps we should have a new benchmark mission/track set on either Kuban or Normandy - even if only for the purpose of trying to isolate which factor or factors (or combination of factors) are the probable culprits here. The initial purpose of this benchmark was to analyze which CPU (with the CPU test) and GPU (with the GPU test) deliver better performance, and also to compare performance with your peers. What you are looking for is a different thing. You want to know what kinds of objects (passive or AI) or map items are generating stutters and low performance. To do that you can go to the IL-2 Mission Editor and create a basic flight mission with no objects, and from there try different maps, then add different types of objects and see when the performance of that basic flight degrades or stutters appear. This requires a lot of time, of course. Any volunteers?
1Sascha Posted November 15, 2022 12 hours ago, chiliwili69 said: What you are looking for is a different thing. I'm aware of that, and I wasn't implying a replacement or change of purpose of the test. Maybe call it an expansion of the purpose of this test. It is rather remarkable to me that FPS aren't an issue for my setup on some maps, while it struggles rather badly on others - without me having changed anything, and during the same IL-2 session. I'd really like to isolate the cause of this, and I'd also like to know if other setups are having the same problems under the same circumstances. Sadly, I'm not knowledgeable enough to come up with a "scientific" way of looking into this. Heck, I'm not even sure what's causing this on my system - especially since it's not all missions on Kuban or Normandy that produce stutters/FPS drops for me. S.
firdimigdi Posted November 15, 2022 (edited) 13 minutes ago, 1Sascha said: It is rather remarkable to me that FPS aren't an issue for my setup on some maps, while it struggles rather badly on others […] 13 hours ago, chiliwili69 said: You want to know what kinds of objects (passive or AI) or map items are generating stutters and low performance. […] Any volunteers? I've made a sample mission that partly does this, available below - get the "stuttertest - plane spawner on flare.zip" file, then you can shoot a flare and an FW 190 will spawn and start circling - keep spawning them until it induces stutter (note: this is on the smallest map the game has to offer, to discount map-related assets causing extra load): There are other stutter-test missions in that thread dealing with vehicles (they don't induce stutter) and statics/passive/unlinked AI (they don't induce stutter). What does induce it eventually is plane AIs (and IIRC, in a Kuban test map I had done, I also found that moving ships will do this as well, even if they are far away and not visible to you) - they also cause the Scaleform function mentioned in that thread to consume more and more CPU time with each plane AI present. Read all about it in there. Edited November 15, 2022 by firdimigdi typo
-3RW-ghyslain Posted November 16, 2022 16 hours ago, 1Sascha said: I'd really like to isolate the cause of this, and I'd also like to know if other setups are having the same problems under the same circumstances. […] An anecdotal report in that regard: I used to experience very frequent stutters on an R9 5900X rig paired with an RTX 3080, despite being nowhere near GPU-bound. I tweaked my NVIDIA Control Panel settings to 2 virtual reality pre-rendered frames (from a default of 1). The stutters went away with that change.
chiliwili69 (Author) Posted November 16, 2022 On 10/28/2022 at 10:02 PM, MilitantPotato said: 4K Test Frames: 9857 - Time: 60000ms - Avg: 164.283 - Min: 137 - Max: 205 Thank you for updating your tests with the 4K GPU test. In the table there is another 4K test performed by @=SFG=BoostedStig where 185fps was obtained with a 4090 and a 5800X3D. When the 4K test was designed for monitors, it was conceived as a test to bottleneck the GPU before the CPU; that's why it is in 4K, with all options that load the GPU at max and all the others that load the CPU at the minimum. But since the 4090 is so powerful, in this test the CPU became a bottleneck before the GPU. I think this is the reason why you obtain 20fps less than your peer with the 4090. In fact, I also don't know whether the 185fps mark is itself constrained by the 5800X3D. We will need a 4K test from someone with a 13900K and a 4090, just to verify that. 22 hours ago, firdimigdi said: I've made a sample mission that partly does this, available below […] Thanks for bringing this here. That thread goes exactly to this matter, and with a procedure to test it. I remember reading it in May but didn't test it, since I was quite happy with the performance of my Index, but I will take a second look when I can afford enough time.
TG-55Panthercules Posted November 16, 2022 Just got my new PC up and running (the PSU was DOA and I had to deal with that and some travel IRL). I haven't had a chance to unbox and set up my new Reverb G2, so I haven't been able to run any VR tests, but I did run the v6 benchmark for a 1080 monitor (from the first post in this thread) just to see how my rig was performing out of the box (haven't done any OC tweaking either just yet), and got the following: Frames: 7976 - Time: 60000ms - Avg: 132.933 - Min: 107 - Max: 187 This was with the following rig (Skytech Prism II): Motherboard: ASRock Z690-C/D5; CPU: Intel Core i9 12900K; CPU Freq: 5087.56 MHz; Cores: 8P+8E; Threads: 24; RAM Size: 32 GB (2x16GB); RAM Freq: 4800 MHz (2x2394.1 MHz); NB Freq: 3591.2 MHz; RAM Timings: 38-38-38-70; GPU: 3090; Windows 10; IL-2 GB ver. 5.002b. I haven't figured out how to address the hyperthreading or fast/slow core concepts for this CPU yet, but if I am interpreting the spreadsheet correctly, it seems the rig is running pretty much as would be expected out of the box. Guess I'll try to get my VR working and run those benchmarks for baselines before starting to tweak/overclock anything.
DBCOOPER011 Posted November 17, 2022 (edited) 7 hours ago, TG-55Panthercules said: Just got my new PC up and running […] That looks like a very suitable system for IL-2 in VR, and a pretty good score for stock settings w/o XMP. You should be able to hit a fairly constant 90Hz with the G2, no problem, after a bit of tweaking. As an FYI with the G2: you should go into Computer Management and disable the "HolographicShell" app for VR ops. It's a current problem affecting the G2, causing stutters and fps loss... Edited November 17, 2022 by DBCOOPER011
chiliwili69 (Author) Posted November 17, 2022 12 hours ago, TG-55Panthercules said: This was with the following rig (Skytech Prism II): Motherboard: ASRock Z690-C/D5 […] Thank you for testing it just out of the box. Yes, as you saw, you are well aligned. I saw you finally went for this pre-built PC (for $2,999 I guess). For that price I think it is a good deal, and you have future upgrade paths to 13th gen CPUs and better DDR5 RAM as it improves over time. As said above, it is a solid PC for IL-2 VR, and a 3090 for the G2 is perfect. Regarding RAM, what is the model of the RAM? (CPU-Z will tell you.) Is the XMP profile ON or OFF?
TG-55Panthercules Posted November 17, 2022 (edited) 9 hours ago, chiliwili69 said: Regarding RAM, what is the model of the RAM? (CPU-Z will tell you.) Is the XMP profile ON or OFF? The RAM is Kingston Fury DDR5. Not sure how to check about the XMP profile - is that done in the startup BIOS? CPU-Z reports SPD Ext. as XMP 3.0, and has a column for timings headed XMP-4800 (that reads 38-38-38-70), but I don't know if that means it's on or off. [EDIT] I checked in the startup BIOS and there is a setting for XMP Profile, but it only has 2 options - "Auto" and "Profile 1" (the latter reading as DDR5-4800 38-38-38-70 1.10V). It was set on Profile 1. Edited November 17, 2022 by TG-55Panthercules
chiliwili69 (Author) Posted November 17, 2022 10 minutes ago, TG-55Panthercules said: Not sure how to check about the XMP profile Your RAM is this one then: https://www.kingston.com/en/memory/gaming/kingston-fury-beast-ddr5-memory?speed=4800mt%2Fs&total (kit) capacity=16gb&kit=kit of 2&dram density=16gbit&profile type=intel xmp But what Part Number does your RAM show in the SPD tab?
TG-55Panthercules Posted November 17, 2022 Posted November 17, 2022 Same as the one you showed - KF548C38-16 (but the DRAM is shown as manufactured by Micron Technology rather than Samsung as in your screenshot).
chiliwili69 (Author) Posted November 17, 2022 7 minutes ago, TG-55Panthercules said: KF548C38-16 OK, then you have one XMP profile to activate, and it is already ON. This is changed from the BIOS, and also from some dedicated tuning tools like Intel XTU, I think. https://www.kingston.com/datasheets/KF548C38BB-16.pdf Other Kingston RAM modules have more XMP memory profiles to choose from: https://www.kingston.com/datasheets/KF560C40BB-8.pdf The RAM kit you have is quite OK for now. If one day, when DDR5 RAM is cheaper, you want to switch your RAM for a better kit, you can look at this table: https://www.memorybenchmark.net/latency_ddr5_intel.html or go to PCPartPicker and sort by First Word Latency. I really know nothing about RAM tuning; it is something that requires some time. But the good thing about an XMP profile is that it works and has been tested before - it is a proven profile.
TG-55Panthercules Posted November 17, 2022 Thanks for that info. I don't think I've messed with my RAM settings in any of my previous overclocking/tweaking efforts. I did get sporty when I OC'd my previous i7-7700K - even de-lidded it to help with the heat it was generating - but I'm hoping to avoid such complicated shenanigans with this new rig.
dburne Posted November 18, 2022 6 hours ago, chiliwili69 said: OK, then you have one XMP profile to activate, and it is already ON. […] Typically one just sets the RAM to its XMP profile and is done. Some like to get into overclocking, pushing the RAM further or tightening the timings some; me personally, whilst I do that with my CPUs and to a smaller extent my GPUs, I have always preferred to just set my RAM to its XMP profile.
Voyager Posted November 18, 2022 I do wonder how we could expect Zen 4 with DDR5 to perform. A bunch of Microcenters are doing 32GB of DDR5-6000 for free with a Zen 4 7700X-and-up CPU, and $50 off an AM5 motherboard. Really tempting, even though some of my other stuff runs best with 64GB, and I don't really need a CPU upgrade as much as a GPU upgrade. But still, I don't think we're going to see sales like this for another year at least.
MilitantPotato Posted November 19, 2022 I wonder if the spreadsheet should include Fraps' "frames" output too. I've noticed over the course of a few runs I can have fps numbers that are 4 or more fps different with nearly identical total frames rendered, which seems like a more consistent metric, and at the least a much easier way to find percentage differences between chips. One of Fraps' shortcomings is not showing average, 1%, and 0.1% stats, with the latter two being far more useful for performance comparison. Frames is sort of a way to see the overall performance of someone's run, even if for some reason they had a system stutter that caused a frame or two to be very slow.
chiliwili69 (Author) Posted November 19, 2022 16 hours ago, Voyager said: I do wonder how we could expect Zen 4 with DDR5 to perform. A bunch of Microcenters are doing 32GB of DDR5-6000 for free with a Zen 4 7700X-and-up CPU, and $50 off an AM5 motherboard. Yes, we have no data yet for IL-2. Previous Zen 3 RAM speed was limited to 4000MHz, so it is still to be seen whether those Zen 4 chips will beat Intel 13th gen in IL-2. 6 hours ago, MilitantPotato said: I wonder if the spreadsheet should include Fraps' "frames" output too. In the past benchmarks (Remagen, Chili, Samuel, Balapan - they are all in different tabs of the same Google spreadsheet as SYN_VANDER's) you will notice that we were also reporting the total frames for the benchmark run. But this total frames number is just equivalent to the formula Average_fps = Total_Frames / total_seconds, where total_seconds is the duration of the run - 60 seconds in the case of SYN_VANDER. Please show me those runs you say have different fps but the same total frames. Regarding stats, it is a pity that Fraps doesn't report those 99% or 99.9% percentiles, but you can calculate them separately in Excel (or other tools) if you export your frametimes output.
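For anyone who prefers not to use Excel, here is a minimal sketch in Python of the same calculation. It assumes the Fraps frametimes export format (a frame number plus a cumulative timestamp in ms on each row); the file name is a placeholder, and the nearest-rank percentile is a simplification that is fine for benchmark comparisons:

```python
import csv
import statistics

def load_frametimes_ms(path: str) -> list[float]:
    """Read a Fraps-style frametimes CSV (frame number, cumulative ms) and
    return per-frame times as differences of consecutive timestamps."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        stamps = [float(row[1]) for row in reader if len(row) >= 2]
    return [b - a for a, b in zip(stamps, stamps[1:])]

def percentile(vals: list[float], pct: float) -> float:
    """Nearest-rank percentile of a list of values."""
    ordered = sorted(vals)
    idx = min(len(ordered) - 1, round(pct / 100 * len(ordered)))
    return ordered[idx]

frames = load_frametimes_ms("frametimes.csv")  # placeholder file name
print(f"Avg: {1000.0 / statistics.mean(frames):.1f} fps")
print(f"99% frametime: {percentile(frames, 99):.2f} ms")
print(f"99.9% frametime: {percentile(frames, 99.9):.2f} ms")
```

Note that Average_fps = Total_Frames / total_seconds falls out of the same data: it is just 1000 divided by the mean frametime in ms.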
MilitantPotato Posted November 19, 2022 13 hours ago, chiliwili69 said: But this total frames number is just equivalent to the formula Average_fps = Total_Frames / total_seconds […] Ah, this holds true in my test results also. I guess I was putting too much weight on the min/max framerates when looking at the spreadsheet, which can vary by 5+% seemingly randomly for me. I don't have the results from the largest difference I had seen, but 4-5% variance seems common in the min/max values with a nearly identical average. Benchmarking is annoying!
BladeMeister Posted November 20, 2022 Wouldn't it be worth adding a section for 2560x1440, as many people run that as their go-to resolution? S!Blade<><
chiliwili69 (Author) Posted November 20, 2022 9 hours ago, MilitantPotato said: but 4-5% variance seems common in the min/max values with a nearly identical average. Even with exactly the same PC it is difficult to make two identical runs. There are many background processes from Windows and many other background processes from other apps that affect the results. The tracking of the headset is never the same, either inside-out or with base stations. Also, the applied overclock is not always exactly the same; it depends on the temperature of each core, etc. So we can see differences of 1-2% in the avg_fps, and bigger ones in the min/max values. That's why I usually run the bench three times, just to check that the background "noise" is OK. So, using this benchmark to quantify small differences of 1 or 2 fps is really not useful. The important thing is that we can identify things like "Zen 3 is much, much better than Zen 2", or "Zen 3 is better than Intel 10th gen", or "RAM is an important factor", and also verify that the numbers obtained by each of us are "aligned" with other tests. 8 hours ago, BladeMeister said: Wouldn't it be worth adding a section for 2560x1440, as many people run that as their go-to resolution? The problem with that 2.5K test is that you will be bottlenecked by the CPU in many cases. In fact, some tests of the 4090 at 4K were most likely bottlenecked by the CPU, despite the settings being chosen to minimize the CPU load and maximize the GPU load.
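To make that noise check concrete, you can compare the spread of your repeated runs against the difference you are trying to measure. A tiny sketch, with placeholder numbers standing in for three real runs:

```python
# Average fps from three repeated runs of the same benchmark (placeholders).
runs = [176.9, 178.2, 175.4]

mean = sum(runs) / len(runs)
spread_pct = (max(runs) - min(runs)) / mean * 100
print(f"mean: {mean:.1f} fps, run-to-run spread: {spread_pct:.1f}%")

# If the difference between two settings is smaller than this spread,
# it is inside the noise and not a real effect.
```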
BladeMeister Posted November 20, 2022 (edited) 4 hours ago, chiliwili69 said: The problem with that 2.5K test is that you will be bottlenecked by the CPU in many cases. In fact, some tests of the 4090 at 4K were most likely bottlenecked by the CPU, despite the settings being chosen to minimize the CPU load and maximize the GPU load. Yep! My friend and I have been testing the exact same career mission as each other at 2560x1440 - a D-Day invasion mission - and this is what we are concluding. I will probably make a thread to share what we have experienced/learned and the conclusions we have come to, so as not to sidetrack this thread, but generally: until our CPUs can deliver more IPC and actually utilize multiple cores/threads more efficiently, or at least optimize and spread the load across the multiple cores/threads, our GPUs are completely bored waiting on the CPUs to send more data to display. Of course, offloading/sharing a lot of the data processing/physics calculations to the GPU would greatly help, but sadly we are not there yet. Generally, I think it says a lot that the devs have probably optimized this engine about as far as they can, given current CPU performance/IPC handling routines and their sim engine's core design. We have not seen a truly new flight sim engine in well over a decade (MSFS? Never used it), and even DCS is a rewrite, so maybe this new big development from 1C is a new engine with a different approach to multicore CPUs and how to better utilize/optimize IPC and share the load with our GPUs. I wish I had read this complete thread earlier; it would have saved me some time. S!Blade<>< One more question please, Sir. My monitor's native resolution is 2560x1440. Would changing that to a 1080 resolution and running this test skew the results in any way? S!Blade<>< Edited November 20, 2022 by BladeMeister
NoBreaks Posted November 20, 2022 (edited) 6 hours ago, BladeMeister said: One more question please, Sir. My monitor's native resolution is 2560x1440. Would changing that to a 1080 resolution and running this test skew the results in any way? S!Blade<>< Sorry to jump in uninvited, but unless I'm mistaken, running at 1080 would alter the result (though I'm not sure if that's what you mean by 'skew'). It won't skew the outcome into showing "artificially false" results (it will be what it is, and in proportion, I'm sure), but it will definitely alter the outcome at least a little, because this whole thing is about GPU vs CPU. So, if you've run tests before at 1440, then going down to 1080 would put even *less* load on the GPU (although, to be clear, we do have to consider that at some point there's a limit to how low one can load the GPU). You can still do the test at 1080 (or any other res), just factoring in that the less work we give the GPU, the more we add to what is already a disparate situation. Admittedly I didn't read much of this at all, but above there is discussion of changing settings to put more load on the CPU, which would help to "balance" things; also mention of running at 4K (which *should* have the same effect: give the GPU more to do). So now, my thoughts lean toward two things: 1. Testing at 4K - though I don't have a 4090, I do have a 3090, and it's definitely affected by this, and I have a 4K 120Hz monitor, so it's a reasonable test case. 2. Testing using a (much) lesser GPU, the idea being to 'find the bottom', so to speak; IOW, at what GPU level will the load be roughly the same both during and outside the periods where the CPU becomes loaded (at the same res, settings, etc.)? I think there is such a point, and fortunately I have the hardware on hand to try it. Incidentally, FWIW: I've tested now across several platforms, Intel and AMD - a 9900K and a 5600X, 5800X, and 5800X3D - and I feel confident in saying there's just not a lot of difference between these (otherwise comparable) CPUs. Although this isn't the SYN_Vander test prescribed in this thread, it is a very 'standardized' test I've been using to keep things consistent test-to-test across various platforms, so I feel it's proven to be fairly reliable. I'm not comparing my data to others' (except one); in fact, I wasn't really aware of this thread/discussion until most recently, whereas I've been looking at this issue with my buddy for some time now. As long as we're consistent "in-house", our internal method is valid. Point is (IMHO, of course), I think it's going to require a *LOT* more CPU (and/or better usage of resources by the game) before the situation approaches 'balanced'. I will also say that this all seems to confirm the idea that these high-end GPUs just aren't built for (relatively) low-end resolutions any more (like 1080 and even perhaps 1440), as the CPUs can't keep 'em busy. What's interesting to me about that is that (broadly and generally speaking) these lower resolutions *still* far outweigh others in terms of installed user base... last I looked, 1080 still outnumbers all other resolutions *combined* among gamers, with 1440 just starting to gain a little ground over the past few years. 4K, though it is also gaining over time, is only a small fraction of the total. The problem, I think, is that for the majority the cost is prohibitive: not only a *good* high-res monitor, but also the GPU to drive it. Those two components can easily dominate the cost of a system and put the total into the $3500+ range - just out of reach for a lot of people. Meanwhile, the GPUs are making huge strides in high-res support. Most anything since the high-end 20-series, even a 1080 Ti, can actually support higher resolutions and still get decent frame rates... and at the upper end, they've surpassed 4K and are now routinely discussing 8K support. Crazy, but there it is, and it was bound to happen. Anyhow, I'll be trying to work in the couple of extra tests soon, and it'll help show what's what (though I think many here already know what is to be expected). Edited November 20, 2022 by NoBreaks
chiliwili69 (Author) Posted November 20, 2022 6 hours ago, BladeMeister said: One more question please, Sir. My monitor's native resolution is 2560x1440. Would changing that to a 1080 resolution and running this test skew the results in any way? This SYN_Vander benchmark has 5 tests: 2 of them are done on a monitor and 3 of them in VR. For the monitor tests: the first is called the CPU test (since the purpose is to bottleneck the PC by the CPU only) and is performed at a low resolution (1080p); the second is called the GPU test (since the purpose is to bottleneck the PC by the GPU only) and is performed at the highest resolution (4K). I think your question here refers to the CPU test, right? If so, any monitor with a resolution higher than 1080p can run IL-2 at a lower resolution without affecting the results of the CPU test. I have run the CPU test on my 4K monitor (Windows at 4K with 100% scale) while running IL-2 at 1080p (no full screen), and also run the CPU test on another 1080p monitor I have at home, and they deliver the same results. So, when you run the CPU test you should leave Windows at your native resolution (2560x1440) but configure IL-2 for 1080p with no full screen.
BladeMeister Posted November 20, 2022 33 minutes ago, chiliwili69 said: So, when you run the CPU test you should leave Windows at your native resolution (2560x1440) but configure IL-2 for 1080p with no full screen. Why no full screen during this benchmark mission? I have read that GBs is more efficient if full screen is checked in the graphics options while playing it. For some reason I had always run GBs without the full screen option engaged, until recently while doing my own testing. Is it better to run full screen or not while playing GBs for enjoyment? S!Blade<><
NoBreaks Posted November 20, 2022 41 minutes ago, BladeMeister said: Why no full screen during this benchmark mission? […] From the game loading screen - the tip that claims up to 15% better performance with full screen mode on. Can't say exactly why the game's developers say that, but that's what it says. FWIW, it's also discussed all over the internet; everything I've read about it corroborates that fullscreen is faster, just that it sacrifices task-switching stability (which I personally don't do or recommend while playing a game anyway).
chiliwili69 (Author) Posted November 21, 2022 11 hours ago, BladeMeister said: Why no full screen during this benchmark mission? Full screen is set OFF to have a common setting for all tests. In VR, and depending on the device and the monitor's native resolution, some full screen modes don't work well. Also, when the monitor resolution is bigger than the resolution of the test (the CPU test on 4K or 2.5K monitors), the system has to rescale the frame drawn at 1080p to fit the 4K or 2.5K monitor, or any other resolution. I did my own testing of all the options in the past (look here) and saw that full screen OFF gives a bit better performance (for a 1080p test on a 4K monitor). Normally when I play in VR (I always play in VR; IL-2 makes no sense to me on a monitor) I set it OFF. I don't fully understand why that note says full screen gives 15% more performance; I don't know under what circumstances they measured it. But anyone can test it themselves: just make three runs with it ON and three runs with it OFF and compare.
BladeMeister Posted November 21, 2022 Posted November 21, 2022 Thanks for the information Gents! S!Blade<>< 1
MilitantPotato Posted November 23, 2022 (edited) I did a quick test with the 4K settings and gained 3-4% switching to full screen, according to Fraps. Min and Max values are consistently higher in full screen, though, by 6-10%. This isn't valid for the spreadsheet since I'm using a new CPU (5800X3D) and completely different RAM speed and timings from my earlier 5800X benchmarks. Full: Frames: 10982 - Time: 60000ms - Avg: 183.033 - Min: 155 - Max: 228 Windowed: Frames: 10613 - Time: 60000ms - Avg: 176.883 - Min: 141 - Max: 216 Running OCAT instead of Fraps shows some noticeable uplifts in 99th and 99.9th percentile frametimes, of ~10%. Average FPS in OCAT is closer, about a 2% difference (within the margin of error, IMO). So I think there is some improvement running fullscreen (not sure about VR yet), but it's pretty minimal. Edited November 23, 2022 by MilitantPotato
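Since OCAT writes PresentMon-style CSVs with per-frame times already computed, pulling the average and percentiles out of a run is even simpler than with Fraps. A sketch, assuming the usual "MsBetweenPresents" column and a placeholder file name:

```python
import csv

# Read per-frame times (ms) from an OCAT/PresentMon-style capture.
with open("ocat_run.csv", newline="") as f:  # placeholder file name
    times = sorted(float(row["MsBetweenPresents"]) for row in csv.DictReader(f))

avg_fps = 1000.0 * len(times) / sum(times)  # total frames / total seconds
p99 = times[min(len(times) - 1, round(0.99 * len(times)))]
print(f"frames: {len(times)}, avg: {avg_fps:.1f} fps, 99% frametime: {p99:.2f} ms")
```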
firdimigdi Posted November 24, 2022 11 hours ago, MilitantPotato said: So I think there is some improvement running fullscreen (not sure about VR yet), but it's pretty minimal. It carries over to VR as well - it accounts for an average of about 0.1 to 0.2 ms frametime decrease. There is also a much smaller benefit from running 720p fullscreen vs 1080p for VR; if you don't have spectators watching your monitor, or you turn it off, you might as well opt for 720p fullscreen.
chiliwili69 (Author) Posted November 24, 2022 (edited) 13 hours ago, MilitantPotato said: Running OCAT instead of Fraps Thank you for these tests. I don't know why the Fraps tests showed a difference in fps but the OCAT test did not. Perhaps it is the variability of the tests? Normally I do at least 3 tests per option, just to see how big the noise of the test is. In any case, it is not the 15% that was claimed by the launch pop-up. 2 hours ago, firdimigdi said: It carries over to VR as well - it accounts for an average of about 0.1 to 0.2 ms frametime decrease At 90Hz this is just a 1-2% improvement (0.1-0.2 ms of an 11.1 ms frame budget) - difficult to measure given the variability of the test. How did you measure that? Also the 1080p to 720p improvement? Edited November 24, 2022 by chiliwili69
firdimigdi Posted November 24, 2022 (edited) 25 minutes ago, chiliwili69 said: At 90Hz this is just a 1-2% improvement […] How did you measure that? Also the 1080p to 720p improvement? In the past I did it with multiple runs using the SteamVR dev data output, then the OpenXR Toolkit data output when that became available, and also CapFrameX in both instances (when I was using SteamVR and then when I started using OpenComposite), and it all consistently aligned with that conclusion. In fact, you can see the reduction of frametime in the main menu right after the game loads (which is much less variable), but it also translates to an in-flight reduction. Honestly, though, I wouldn't fret over it too much; it's more a matter of convenience (do you alt-tab frequently, do you have other people watching as you play or vice versa, do you capture for a live stream, have you pinned any desktop windows inside your VR gamespace [changing screen mode can affect those], etc.) than performance. But if none of the above matter, then I see no reason not to take advantage. Edited November 24, 2022 by firdimigdi typo, formatting
NoBreaks Posted November 24, 2022 3 hours ago, chiliwili69 said: In any case, it is not the 15% that was claimed by the launch pop-up. It says "up to 15%", and it's exceptionally unlikely they'd put it there without any reason. If there's doubt, the obvious thing to do is ask them what the figure is based on. I'm certain it will vary depending on a number of factors (like everything else in all games: 'YMMV'). And - again - the concept is discussed and corroborated all over the internet... it's not just an isolated claim in this game. You just have to be willing to do the legwork. As mentioned above, it comes down to preference regarding task-switching.
chiliwili69 (Author) Posted November 24, 2022 I have always used Fullscreen OFF, but I will try it ON. Any gain in performance is always good.
dburne Posted November 24, 2022 I never really saw any difference with full screen off or on, but it has been a while since I checked it.
NoBreaks Posted November 24, 2022 Well, again (and this is key): it varies - one system to another (because of other factors/variables on that system vs others), one game to another (for obvious reasons), and so forth. Also, we have to consider that most people aren't really able to see a difference on the order of 6 frames out of 60, 10 frames out of 80, or 15 out of 100 (all of which would be ~10-15%). It can be *measured* (though even that is subject to differences)... but being noticeable is another matter. Some people *claim* to be able to see small differences in frame rates, but that's generally misguided and very easy to prove inaccurate. All that said, however, I think it's still true that the vast majority of sources reflect that exclusive full screen yields better performance (even if perhaps at the expense of reliable task switching).
firdimigdi Posted November 24, 2022 1 hour ago, dburne said: I never really saw any difference with full screen off or on, but it has been a while since I checked it. You won't really see a difference without a frametime overlay; 0.1 to 0.2 ms won't show up as a framerate reduction unless you're verging on the upper limit - if you are doing >10.5 ms on a 90Hz display, for example.