FoxbatRU Posted March 1, 2020 (edited) Motherboard: GIGABYTE Z390 AORUS PRO CPU: 8600K CPU Freq: 4.9 GHz (5.0/5.1) L3 cache: 9 MB Cores: 6 (number of active physical Cores) HT: Off (not equipped) RAM type: DDR4 RAM size: 32 GB NB Freq: 4700 MHz RAM Freq: 3866 MHz (this is 4xDRAM freq if Dual channel) RAM Latency: 18 (CAS Latency CL) (17/17) GPU: Palit GeForce RTX 2080 JetStream STMark: 2973 (3033/3098) 2020-03-02 00:34:45 - Il-2 Frames: 26313 (27288/27730) - Time: 180000ms - Avg: 149.883 (151.600/154.056) - Min: 104 (106/107) - Max: 238 (239/241) Redid the test since I initially forgot to disable "H". Edited March 2, 2020 by FoxbatRU
chiliwili69 Posted March 3, 2020 Author On 3/1/2020 at 11:03 PM, FoxbatRU said: Frames: 26313 (27288/27730) - Time: 180000ms - Avg: 149.883 (151.600/154.056) - Min: 104 (106/107) - Max: 238 (239/241) Thanks for your triple test! It clearly shows how you gain about 2 fps for each 0.1 GHz of extra overclock. Your memory at 3866 MHz is also great. It is nice to see a relatively cheap, two-year-old chip (i5-8600K) delivering performance close to the top of the table. Which frequency do you normally use for playing: 4.9, 5.0 or 5.1? I wonder how your system would perform with 4400 MHz RAM.
FoxbatRU Posted March 3, 2020 Judging by the minimum frame rate, yes, each 100 MHz above 4.9 GHz adds 1-2 fps (within measurement error). Lowering memory latency by one step perhaps adds another 1 fps. My memory is average and now inexpensive: PVS416G400C9K. According to its profile it should run at 4000 MHz (19-19-19-39), but on my motherboard it only works reliably at 3866 MHz. I settled on 18-17-17-35 timings, plus a little tuning of the secondary timings. With 17-17-17-35, as I used for the test, the benchmark passes normally, but TestMem sometimes detects errors, so for everyday use I run 3866 (18-17-17-35). As for the processor: when I bought it a year ago, I deliberately took the 8600K rather than the 9600K, because the 8600K was easier to delid. Had I known then that the chance of a good die is higher with the 9700K or 9900K, I would probably have chosen one of those, although there certainly was no spare money, and at the time I was also wary of liquid cooling. For daily use I run the processor at 4.9 GHz, always and across all cores; on air cooling there are no problems or noise in games. At 5 and 5.1 GHz programs basically work fine, but in Prime95 one or two cores sooner or later show an error, and in IL-2 at those frequencies I have had a couple of crashes to desktop after a while. This is not due to temperature or insufficient voltage (I experimented); I simply drew 4.9 in the silicon lottery, which I am basically satisfied with. I do not know how the system would behave with 4400 MHz memory; perhaps it would not change much. In other games, judging by other people's tests, anything beyond 3200-3600 does not add much. Sorry, I did not think to run such a test, and I have already rolled my Nvidia and in-game settings back to the pretty ones. I'll have to try someday.
FoxbatRU Posted March 3, 2020 (edited) I was wrong: a further increase in memory frequency still gives small advantages. Perhaps it also somehow affects the smoothness of the picture, but for that you would probably need to look at the finer metrics (1% low & 0.1% low). I tested with my own Nvidia and in-game graphics settings, which are a bit more demanding: for example, extreme clouds and SSAO, but shadows on high rather than ultra (ultra shimmers). All of this at 2560x1440 with G-Sync turned on, so the data cannot be compared with the general results in the table, only internally. For the Low run I did not lower the other settings, I only switched the Low/Ultra preset in the game itself. I ran the test once at each frequency; it would be more correct to do three to five runs and take the average, but I was lazy, so assume an error of +/- 1 fps in the minimum FPS. @chiliwili69 thank you for the good track! With it you can check various settings, both on the processor side and on the video card load. I will use it to optimize my settings. Edited March 3, 2020 by FoxbatRU
hayraddin Posted March 3, 2020 I have been running the tests. I will paste all the information in the correct format when I get home, but basically, with a Ryzen 3900X, RTX 2070 Super, and 32 GB RAM overclocked to 3600 CL14 with fast, tight timings (66 ns AIDA64 latency) and a 3093 STMark score, all I manage to get is 134 average FPS. How is it possible that Intel, even an older CPU like the 8600K with a lower STMark score, runs faster than AMD? Something must be badly optimized somewhere. I know Intel is generally better for single thread than Ryzen, but the 3900X is a newer CPU with increased IPC, and in benchmarks it actually matches the 9900K very closely in most games: maybe a 5 FPS difference in some titles, and in others Ryzen actually gets higher FPS than Intel. I don't think it's just Intel's higher frequency, because Ryzen and Intel frequencies are not directly comparable, due to Infinity Fabric and other things running in the background, and the STMark score of my 3900X is higher than many Intel CPUs, yet I still get 20-30 fps less than an 8600K or 9700K. I don't know if this is something we should flag to the devs? Maybe it would get better if we could get Vulkan? The Ryzen 3900X has been an upgrade for me, coming from a 2700X, which was giving me around 110 average FPS, but to be honest I expected a bigger jump. I wish we could reach the 150-160 fps those Intels are doing on high-end AMD CPUs as well. I don't know, just throwing some questions in the air. Have a good afternoon
hayraddin Posted March 4, 2020 Motherboard: MSI B450 Tomahawk CPU: Ryzen 9 3900X CPU Freq: 4.6 GHz L3 cache: 4 x 16 MB Cores: 12 HT: On RAM type: DDR4 RAM size: 32 GB NB Freq: 1800 MHz RAM Freq: 3600 MHz RAM Latency: 14 GPU: RTX 2070 Super STMark: 3060 2020-03-03 00:37:47 - Il-2 Frames: 24019 - Time: 180000ms - Avg: 133.439 - Min: 93 - Max: 235
Alonzo Posted March 4, 2020 21 hours ago, hayraddin said: How is it possible that Intel, even old CPUs like the 8600k, even if it has lower ST Mark score, runs faster than AMD, something must be badly optimized somewhere? Well, IL2 does seem to crave memory frequency and low latency much more than other games. My suspicion is there's some high contention locking going on in there somewhere, and the lock/unlock stuff simply runs faster on Intel chips than on Infinity Fabric. As for suggestions, first AMD does interesting dynamic boost stuff. Are you sure you're hitting the same frequencies (and maintaining them) as when running STmark? Second, some people have had success disabling a chiplet (or something) which means less cross-infinity fabric chatter for the Ryzen CPU. This might help if all you're trying to do is get IL2 to run well, but it does kind of make those extra cores a paperweight. I don't think anyone credible on the forum here would ever advise AMD if the only job is to run IL2. If that's the only job, Intel is the best choice. But we all do more than just running IL2 on our rigs, so it gets more complicated.
hayraddin Posted March 4, 2020 Yeah, I agree with you. I suspect there is something else going on in the background. The difference between Intel and AMD in DCS seems to be much smaller than in IL-2; 40 fps between a 3900X and a 9900K is a really big gap. In benchmarks of old and new games, where they test the 9900K vs the 3900X across up to 40 titles, Intel is generally better, but not by 40 fps. The biggest difference I've seen was 14 fps in StarCraft, and even other old single-threaded games run only around 1-3% faster on the 9900K, while 30-40 fps is a 30% difference. As you said, it's complicated to check the exact frequency with Ryzen CPUs, as PBO tends to shift it up or down based on CPU load and temps, among other things, and I noticed IL-2 seems to use more than one main core, but I mainly run the main core at 4.5-4.6 GHz and the others a bit lower. I wish I could just buy an Intel for IL-2, but sadly I do CG/VFX work and need the extra threads/cores, so I'm stuck on this side of things.
chiliwili69 Posted March 5, 2020 Author (edited) On 3/3/2020 at 12:45 PM, FoxbatRU said: I do not know how the system would work with 4400 MHz, perhaps it would not have changed much. In other games, judging by other people's tests, more than 3200-3600 does not add much. Sorry, I did not think of conducting such a test. I was not asking you to run at 4400, I was just wondering about it. The top performer (WallterScott) is using 4400 memory with the 9900K, so I wonder how his system would run at 3866 MHz. I think memory also matters even above 3200. Just compare tests with the same CPU and GPU but different RAM speed. For example, Dburne at 5.2 GHz (160.2 fps) versus Wallterscott at 5.0 GHz (166.5 fps) and 5.3 GHz (173.4 fps): we can interpolate that Wallterscott would achieve about 170 fps at 5.2 GHz, so roughly +10 fps for +1200 MHz of RAM. According to Greif's correlation, the gain factor of memory speed is 0.0167, so 0.0167*1200 should theoretically give +20 fps: more than the +10 fps of the real test, but still substantial. On 3/3/2020 at 7:41 PM, FoxbatRU said: I was wrong, a further increase in the memory frequency still gives small advantages I read this now. Yes, your tests show the gain with RAM speed, although the gain is small with those settings. On 3/3/2020 at 7:41 PM, FoxbatRU said: thank you for the good track! In it, you can check various settings, both on the processor and on the load on the video card. I will use it to optimize my settings. You are welcome. That's the purpose of the benchmark: to have a rule to measure with. On 3/4/2020 at 4:31 AM, hayraddin said: CPU: Ryzen 9 3900x Thank you for your test. Your performance is in line with other 3900X results. Just a comment: when you report 4.6 GHz, is this frequency maintained at that fixed value during the whole track? The lower performance of Ryzen chips could be due to lower NB frequency. Just a thought.
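That back-of-the-envelope RAM interpolation can be written down in a few lines. This is just a sketch of the arithmetic above; the 0.0167 fps-per-MHz gain factor is the correlation figure quoted in this thread, an assumption rather than a measured constant:

```python
# Sketch of the RAM-speed estimate discussed above. The 0.0167 fps/MHz
# gain factor comes from the thread's correlation fit (an assumption).
def predicted_fps_gain(mhz_delta, factor=0.0167):
    """Predicted average-fps gain for a given RAM frequency increase (MHz)."""
    return mhz_delta * factor

# Going from 3200 MHz to 4400 MHz RAM is a 1200 MHz step:
print(predicted_fps_gain(4400 - 3200))  # ~20 fps by the correlation
```

Note the real Dburne/Wallterscott comparison above suggests roughly half that (+10 fps), so the correlation factor is better read as an optimistic upper bound than a prediction.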
13 hours ago, hayraddin said: It's complicated as you said to check the exact frequency with Ryzen CPUs, as PBO tends to shift it up or down based on CPU load and Temps, amongst other things, and Il2 seems to use more than 1 main core I noticed, but I mainly run the main core at 4.5-4.6Ghz and the other ones at a bit less than that. You can use MSI Afterburner to trend the frequency of each core while you run the test. You can also experiment with disabling 6 of your 12 cores (and also disabling SMT). There will be less heat generated, so you should be able to hold higher frequencies. In theory. Edited March 5, 2020 by chiliwili69
hayraddin Posted March 5, 2020 Yeah, I tried disabling half of the cores and it was almost the same; then I also disabled SMT and it was still the same, +/- 1-2 fps, so no noticeable difference. My CPU runs at 60 °C max during the benchmark: not cool, but not super hot either; Ryzen can go up to 90. I'll do one more run and try to log the average frequency the CPU is running at; I'll try to find software that does that.
VR-DriftaholiC Posted March 11, 2020 (edited) Motherboard: Asus ROG STRIX X570-I GAMING CPU: AMD Ryzen 7 3800X CPU Freq: 4590 Mhz L3 cache: 2x 16MB Cores: 8 Core - 16 Threads HT: On RAM type: DDR4 RAM size: 32 GB NB Freq: 1600 MHz RAM Freq: 3200 MHz RAM Latency: 16 GPU: RTX 2070 Super STMark: 2662 https://pcpartpicker.com/list/Mrxvsk https://valid.x86.fr/91kcet 2020-03-11 15:50:26 - Il-2 Frames: 25895 - Time: 180000ms - Avg: 143.861 - Min: 95 - Max: 258 Edited March 11, 2020 by driftaholic
chiliwili69 Posted March 12, 2020 Author On 3/11/2020 at 11:41 PM, driftaholic said: STMark: 2662 Thank you for your test. It is the first 3800X being tested. Your performance is in line with other Ryzen chips, although a bit above them. The strange thing is your STMark result: according to other Ryzens at similar frequencies, you should achieve more in STMark.
VR-DriftaholiC Posted March 13, 2020 (edited) It's at 4.5 GHz; stock only boosts to 4.2. I can set it back to stock and test again if that helps. Anyone not running manual clocks will see performance greatly affected by temperatures, just like a GPU. Edited March 13, 2020 by driftaholic
ICDP Posted March 14, 2020 (edited) IL-2 version: 4.004 Motherboard: Gigabyte X570 Aorus Pro CPU: Ryzen 3900X CPU Freq: 4.5 Ghz L3 cache: 4x16 MB Cores: 12 HT: On RAM type: DDR4 RAM size: 32 GB NB Freq: 1863 MHz RAM Freq: 3733 MHz RAM Latency: 16 (CAS Latency CL) GPU: RTX 2080 STMark: 3063 HP Reverb: Resolution Per Eye - 2208x2160 Frames: 12303 - Time: 180000ms - Avg: 68.350 - Min: 47 - Max: 91 Going from the results for similar systems my scores are right in line with 5GHz 8700K with 1080Ti (my 2080 has similar performance to 1080Ti). The faster results for the Reverb on the 9900KS at 5.4GHz are far more likely to be due to the 2080Ti on that system. So it seems Reverb is GPU limited rather than CPU, it was the same with my Pimax 5K+. Edited March 14, 2020 by ICDP
SCG_Wulfe Posted March 15, 2020 (edited) Box 4.004 ******************************************************************************* Motherboard: Gigabyte Z390 Gaming X CPU: 9600K CPU Freq: 5.1 Ghz L3 cache: 9 MB Cores: 6 (number of active physical Cores) HT: Off (HyperThreading. If Cores=Threads then HT is Off) RAM type: DDR4 RAM size: 32 GB NB Freq: 4300 MHz RAM Freq: 2600 MHz (this is 2xDRAM freq if Dual channel) RAM Latency: 18 (CAS Latency CL) GPU: 1080 STMark: 2962 ******************************************************************************** Non - VR (regular test) 2020-03-15 13:16:22 - Il-2 Frames: 22799 - Time: 180000ms - Avg: 126.661 - Min: 90 - Max: 215 ******************************************************************************** In VR (1660x2084 per eye) 2020-03-15 13:36:40 - Il-2 Frames: 8327 - Time: 180000ms - Avg: 46.261 - Min: 43 - Max: 91 ********************************************************************************* Edited March 15, 2020 by SCG_Wulfe
chiliwili69 Posted March 15, 2020 Author On 3/14/2020 at 6:44 PM, ICDP said: Going from the results for similar systems my scores are right in line with 5GHz 8700K with 1080Ti (my 2080 has similar performance to 1080Ti). The faster results for the Reverb on the 9900KS at 5.4GHz are far more likely to be due to the 2080Ti on that system. So it seems Reverb is GPU limited rather than CPU, it was the same with my Pimax 5K+. Thanks for reporting the test in VR. With the track you can really tell whether your CPU is the bottleneck in VR: just run it with lower supersampling, for example at 50% SS. If your performance improves, then the guilty party is the GPU.
jarg1 Posted March 15, 2020 (edited) FYI, version 10 of PassMark has been released, but you can still find version 9 in the download section. The CPU Single Threaded test in version 10 has changed, so results will not be comparable to the previous tests. Edited March 15, 2020 by jarg1
chiliwili69 Posted March 15, 2020 Author 2 hours ago, SCG_Wulfe said: STMark: 2962 Thanks for your test. It is strange that a 9600K at 5.1 gives below 3000; it should give around 3080 or 3090. Perhaps the frequency is not maintained during the test. For the VR test, did you use an Odyssey+ at 128% SS? 10 minutes ago, jarg1 said: FYI version 10 of PassMark has been released but you can still find version 9 in the download section. The CPU Single Threaded test in version 10 has changed so results will not be comparable to the previous tests. Arghhh! That's the reason why STMarks don't fit with previous tests. The same applies to all GPU Passmark values. We can do two things now to keep coherent numbers: - Option A: Remove all STMark values from the list and update all the Passmark G3D values for each video card. - Option B: Keep the values of Performance Test V9 and indicate that in the instructions, so we keep consistency with v9 values.
jarg1 Posted March 15, 2020 14 minutes ago, chiliwili69 said: Arghhh! That the reason why STMarks don´t fit with previous tests. The same for all GPU Passmark. We can do two things now to keep coherent numbers: - Option A: Remove all STMark values from list and update all the Passmark G3D values for each video card. - Option b: Keep values of Performance Test V9 and indicate that in the instructions. So we keep consistency with v9 values Your instructions do reference version 9, but it is likely that anyone running this recently would have downloaded the newer version. Also, I'm getting higher scores now, around 3300 at 5.3 GHz, but the change may depend on the CPU type.
SCG_Wulfe Posted March 15, 2020 4 hours ago, chiliwili69 said: Thanks for your test. IT is strange that a 9600K at 5.1 give below 3000, it should give around 3080 or 3090. Perhaps the freq is not maintained during the test. For the VR test, did you an Odyssey+ at 128%SS? Well, I actually realized that and downloaded v9 for the test, so that removes that variable. But yes, it's possible the frequency fluctuated a bit and cost me a bit of score, though I ran it twice with pretty much the same result. The VR test was an Odyssey+ at 100% SS.
whitav8 Posted March 17, 2020 Small detail - WRT supersampling, please list the resolution (e.g. 2400x2600) instead of the SS percentage, which changes frequently with new revisions of SteamVR.
chiliwili69 Posted April 9, 2020 Author Today I downloaded the new version 4.005 and, before enjoying it, I decided to run the Remagen test to see if it is still valid. Yes, it still runs. I ran it three times, but forgot to disable the G-Sync of my monitor, so I ran it again three times with G-Sync disabled. These are the results: WITH G-SYNC ON: Frames: 22162 - Time: 180000ms - Avg: 123.122 - Min: 86 - Max: 215 Frames: 22215 - Time: 180000ms - Avg: 123.417 - Min: 83 - Max: 214 Frames: 21929 - Time: 180000ms - Avg: 121.828 - Min: 81 - Max: 213 WITH G-SYNC OFF (this is how the test should be run): Frames: 22644 - Time: 180000ms - Avg: 125.800 - Min: 83 - Max: 219 Frames: 23072 - Time: 180000ms - Avg: 128.178 - Min: 86 - Max: 218 Frames: 22790 - Time: 180000ms - Avg: 126.611 - Min: 85 - Max: 214 Averaging the three latest tests: Frames: 22835 - Time: 180000ms - Avg: 126.9 - Min: 85 - Max: 217 My last test with 4.003b was 22362 83 212 124.2. So a bit better! It is good that the dev team has been able to improve the damage model without impacting performance (even improving it a bit!). I didn't record a video of the track, but I think (maybe placebo) that there are now a few more broken pieces of the bombers falling through the air.
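Averaging several Fraps result lines by hand is easy to get wrong, so here is a minimal Python sketch of the same calculation, fed with the three G-Sync-off runs above (the rounding conventions are my assumption, chosen to match the summary line):

```python
# Average several Fraps benchmark runs; each tuple is
# (frames, min_fps, max_fps, avg_fps) taken from the G-Sync-off runs above.
runs = [
    (22644, 83, 219, 125.800),
    (23072, 86, 218, 128.178),
    (22790, 85, 214, 126.611),
]

def average_runs(runs):
    n = len(runs)
    frames = round(sum(r[0] for r in runs) / n)
    min_fps = round(sum(r[1] for r in runs) / n)
    max_fps = round(sum(r[2] for r in runs) / n)
    avg_fps = round(sum(r[3] for r in runs) / n, 1)  # one decimal, like the post
    return frames, min_fps, max_fps, avg_fps

print(average_runs(runs))  # (22835, 85, 217, 126.9) — the summary line above
```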
dburne Posted April 9, 2020 Not real sure I would consider a 2 fps differential as showing an improvement. I would look at it more as performance remained the same myself. Which is always a good thing with a major game update such as this.
CSW_Tommy544 Posted May 11, 2020 Hi everyone, I am currently running an Oculus Rift S on my laptop with an external Thunderbolt 3 GPU. Considering my CPU boosts on all 4 cores only to 3.2 GHz and there is a GTX 1070 in the eGPU enclosure, I am quite surprised that I can get close to 40 fps in the air in not-so-crowded areas. On the ground and above big cities, the performance is of course worse than that. I am a big fan of the eGPU idea and, to be honest, if I wasn't trying to run relatively CPU-hungry simulators in VR, my setup would be perfectly fine for casual gaming. However, I would like to upgrade to a more comfortable VR experience, so I decided to build a desktop PC and, for a start, use my 1070 with a future GPU upgrade in mind. From the Google Sheets results it looks like Intel CPUs are still a bit better for this particular use case. If I understand correctly, the game graphics settings used for this benchmark are set quite high, and it should be possible to lower some of them, i.e. draw distance, to get a consistent 80/90 fps, as it seems even the best rigs are having trouble reaching this performance level. My main concern is choosing a new CPU. For IL-2, the i9-9900K seems to have no measurable advantage over the i7-9700K and is quite a bit more expensive. I really don't like Intel's move to remove Hyper-Threading from the i7 this generation, and other than top performance in IL-2, Ryzen CPUs seem to have better performance overall and are much better value. Would you advise against the R7 3800X/3700X for VR in IL-2? Are there any other specific areas to be careful about when choosing the rest of the components? I am of course planning to get some of the faster RAM sticks and run the system off an M.2 NVMe drive. I'm just wondering whether there are any other peculiarities that I might have missed in the results sheets. PS: It also seems like the switch to deferred rendering is happening quite soon.
I am very interested in seeing how that will impact VR performance and whether it redistributes CPU/GPU load. Thank you
Alonzo Posted May 12, 2020 17 hours ago, FMF-Tommy544 said: Would you advise against R7 3800X/3700X for VR in IL-2? Are there any other specific areas to be careful about when choosing the rest of the components? I am of course planning to get some of the faster RAM sticks and run the system of of a m.2 nvme drive. I'm just wondering whether there are any other peculiarities that I might have missed in the results sheets. PS: It also seems like the the switch to deferred rendering is happening quite soon. I am very interested in seeing how that will impact VR performance and whether it redistributes CPU/GPU load. Everything might change with deferred rendering, so take this advice with that in mind. If your only use-case is IL-2 in VR, get an Intel CPU. Despite all the other scumbaggery in the Intel camp (sockets continue to be invalidated each generation, unlike AMD) Intel still has the lead for single threaded performance, and that's what IL-2 likes. The recipe for best IL-2 VR performance is the fastest Intel single-thread CPU you can afford, a good 280/360mm AIO cooler, fast RAM (3600 or better) and, critically, a good motherboard that will overclock the CPU and the RAM nicely. If you're trying to get best bang-for-buck with Intel, get the 9700K. Skip the NVMe drive, they make zero difference for gaming, just get a generic m.2 SSD. Put that extra money into the rest of the rig. If you have use-cases beyond IL-2 in VR, then yes, AMD may make sense, but you will also be reducing IL-2 VR performance. Think about those use-cases and decide whether they are actually real or not. For me, I have no use-case (really) that can use more than the 6 cores / 12 threads that I get with my Intel. The upgrade path sucks, but I have to live with that. If I want to stream games, I'm going to use NVENC not software encoding. Etc etc.
CSW_Tommy544 Posted May 14, 2020 On 5/12/2020 at 5:33 PM, Alonzo said: Everything might change with deferred rendering, so take this advice with that in mind. If your only use-case is IL-2 in VR, get an Intel CPU. Despite all the other scumbaggery in the Intel camp (sockets continue to be invalidated each generation, unlike AMD) Intel still has the lead for single threaded performance, and that's what IL-2 likes. The recipe for best IL-2 VR performance is the fastest Intel single-thread CPU you can afford, a good 280/360mm AIO cooler, fast RAM (3600 or better) and, critically, a good motherboard that will overclock the CPU and the RAM nicely. If you're trying to get best bang-for-buck with Intel, get the 9700K. Skip the NVMe drive, they make zero difference for gaming, just get a generic m.2 SSD. Put that extra money into the rest of the rig. Thank you Alonzo! I came to a similar conclusion and just have to decide whether getting the maximum possible frame rate in IL-2 is worth buying a CPU that I would not normally recommend to anyone. I disagree about the M.2 SSD though. Sure, the newest PCIe 4 NVMe drives might not be the best bang for buck, but I see no point in buying SATA M.2 drives when PCIe 3 NVMe drives have been around for quite a few years now and give at least 5x the data transfer speed of SATA. You probably won't notice a difference in IL-2 other than loading times, but for general system performance it is, in my opinion, crucial.
Alonzo Posted May 14, 2020 6 hours ago, FMF-Tommy544 said: Thank you Alonzo! I came to a similar conclusion and just have to decide whether getting max possible frame rate in IL-2 is worth buying a CPU that I would not normally recommend to anyone. I disagree with the m.2 SSD though. Sure, the newest ones PCIe 4 NVMe drives might not be the best bang-for-buck, but I see no point in buying SATA m.2 drives when PCIe 3 NVMe drives have been around for quite a few years now and you have at least 5x the data transfer speed compared to SATA. You probably won't notice a difference in IL-2 other than loading times, but for the general system performance it is in my opinion crucial. I think my info might be outdated, but when I was building a PC ~18 months ago there was a giant price difference between "NVMe" (I don't know if it was PCIe 3 or 4) and "m.2 SSD". There's also a great video on YouTube showing that the two made almost no difference to load times on games, which is where you'd expect to see a difference. If today you can get NVMe for only a slight price premium, and it's not stopping you putting money into other important components, then go for it. I have seen some parts lists where people spend literally double on the drive and then can't move up a model on the CPU or GPU, or can't afford liquid cooling, which seems like a poor tradeoff to me.
HunDread Posted May 14, 2020 13 minutes ago, Alonzo said: I think my info might be outdated, but when I was building a PC ~18 months ago there was a giant price difference between "NVMe" (I don't know if it was PCIe 3 or 4) and "m.2 SSD". There's also a great video on YouTube showing that the two made almost no difference to load times on games, which is where you'd expect to see a difference. If today you can get "NVMe" for only a slight price premium, and it's not stopping you putting money into other important components, then go for it. I have seen some parts list where people are spending literally double on the drive and then not having cash to move up a model on the CPU or GPU or not being able to afford liquid cooling, which seems like a poor tradeoff to me. I can confirm that. I have an NVME SSD and haven't seen the slightest practical improvement over the old school regular SSD
Voyager Posted May 20, 2020 On 5/14/2020 at 1:54 PM, HunDread said: I can confirm that. I have an NVME SSD and haven't seen the slightest practical improvement over the old school regular SSD Given some of the things coming out on the new consoles, we are more likely to see value from NVMe drives in the next year or so, when some of the dynamically scaling* engines start heading to PC games, but we probably aren't going to see that in the IL-2 or DCS lines until their successor products. It's very different from anything games have done lately, and you'd want entirely new art assets before you even considered going down that rabbit hole. Now, I've got a 7-year-old SATA SSD as my boot drive and an NVMe as my games drive, and while I haven't seen much difference for gaming, moving large files between drives has made a noticeable difference. If NVMe drives are about the same price as SATA parts, it's probably worth getting one anyway rather than saving a few dollars by going with a SATA M.2 drive. *I'm not even sure it has a name yet beyond "that thing the UE5 just demoed", really; that's just the best way I can think of to describe what they were showing.
LLv34_Camouflage Posted May 28, 2020 On 4/9/2020 at 1:43 PM, chiliwili69 said: Today I have downloaded the new version 4.005 and before enjoy it I decided to run the Remagen test to see if it is still valid. So averaging the three latest tests: Frames: 22835 - Time: 180000ms - Avg: 126.9 - Min: 85 - Max: 217 My last test with 4.003b was 22362 83 212 124.2 So a bit better! It is good that the dev team has been able to improve the damage model without impacting performance (even improving it a bit!). Hi, thanks for the test and the whole benchmarking project, this is a great help for the community. Did you also happen to test if there is a difference in VR performance with the new update? Camo
A_radek Posted May 28, 2020 1 hour ago, LLv34_Camouflage said: Hi, thanks for the test and the whole benchmarking project, this is a great help for the community. Did you also happen to test if there is a difference in VR performance with the new update? Camo I suspect most are so awed playing in 80 fps glory that proper benchmarking has been temporarily shelved. Obviously the big change was a load transfer from CPU to GPU, but that was a good thing and something most wished for. I'm temporarily running a 2070S Strix OC, comparable to a stock 2080 except for ray tracing. Even with rather high settings, my GPU usage averaged 55-65% pre-deferred, and went up to 85-95% after, with 2x MSAA as the only change in settings. Unscientifically measured on a regular night of multiplayer, same server as always. I'd say ultra settings on a soon-to-be previous-gen, mid-range GPU are still modest system requirements.
Alonzo Posted May 28, 2020 7 hours ago, LLv34_Camouflage said: Did you also happen to test if there is a difference in VR performance with the new update? My CPU frame times went from ~11-12ms down to ~6-7ms for the new patch. This enables me to run High preset, 4x MSAA, High clouds, Medium mirrors, High shadows, and still maintain 80 FPS in an 8-aircraft dogfight. This is on an 8086K @ 5ghz / RTX 2080 GPU.
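As a rough rule of thumb (my simplification, which ignores GPU time and compositor overhead), a CPU frame time translates into a frame-rate ceiling of 1000 ms divided by that time, which is why the drop from ~12 ms to ~6-7 ms matters so much for an 80 Hz headset:

```python
# Convert a CPU frame time (ms) into the frame-rate ceiling the CPU alone
# could sustain. Ignores GPU time and compositor overhead (a simplification).
def cpu_fps_ceiling(frametime_ms):
    return 1000.0 / frametime_ms

print(cpu_fps_ceiling(12.0))  # ~83 fps: barely above an 80 Hz headset
print(cpu_fps_ceiling(6.5))   # ~154 fps: plenty of CPU headroom at 80 Hz
```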
TUS_Samuel Posted May 28, 2020 No way! I can't believe this finally happened. Congratulations guys!
wju Posted May 30, 2020 (edited) Hi all, the latest fpsVR has a new hotkey function to start/stop logging CPU/GPU frametimes and loads. You can find it under Settings/Advanced/Frames data logging. So far beta only, but it works. https://store.steampowered.com/app/908520/fpsVR/ EDIT 2020-06-03: the logging function is now also available in the latest stable version. Edited June 3, 2020 by wju
SCG_Wulfe Posted June 13, 2020 On 3/15/2020 at 3:48 PM, chiliwili69 said: Thanks for your test. IT is strange that a 9600K at 5.1 give below 3000, it should give around 3080 or 3090. Perhaps the freq is not maintained during the test. For the VR test, did you an Odyssey+ at 128%SS? Well... let this be a cautionary tale for everyone... I did not reinstall Windows after changing my motherboard and CPU before I ran this benchmark. I thought, meh, everything seems to be running fine, it should be OK... I knew it was not best practice. Today I finally decided to try wiping and reinstalling just to see... My ST Passmark score went from 2962 to 3155.
SCG_Fenris_Wolf Posted June 13, 2020 200 pts up! Well, what to say but congratulations!
chiliwili69 Posted June 15, 2020 Author Good lesson learnt! There is always an explanation for everything, but sometimes it is not found.
Ala13_UnopaUno_VR Posted June 15, 2020 On 13/6/2020 at 18:04, SCG_Wulfe said: Well... Let this be a warning for everyone... I did not reinstall Windows after changing my motherboard and CPU before running this benchmark. I thought, meh, everything seems to be working fine. It should be OK... I knew it was not best practice. Today I finally decided to try wiping and reinstalling just to see... My ST Passmark score went from 2962 to 3155... you reinstalled the game and now you are doing better?
SCG_Wulfe Posted June 15, 2020 15 hours ago, Ala13_UnopaUno_VR said: you reinstalled the game and now you are doing better? I reinstalled my entire OS (Windows) and the game, yes. In my case, I had changed my motherboard and CPU and had not touched the system afterwards, as it seemed to be OK. But it was evidently costing me some performance, perhaps through suboptimal drivers or conflicts.
CSW_Tommy544 Posted June 19, 2020 So guys, if I understand correctly, the Remagen 4.0 benchmarking method is not compatible with the newest patch, so I can't look at the performance numbers in the Google Sheets file to see the effect the newest patch had on the distribution of PC power requirements between CPU and GPU. From various comments I've seen in some threads, and partially from my own experience, it looks like the load was shifted more towards the GPU side. My main question is: how useful is it, in the current state of the game engine, to go for the fastest clock speeds on the CPU? Is it worth aiming for a 5 GHz+ overclocked Intel CPU, or is it now sufficient to run, let's say, a Ryzen 7 at 4.6 GHz and spend more cash on a more powerful GPU in order to maintain relatively stable 80-90 fps? Can some of you share your experience with the couple of newest patches? Thanks!