chiliwili69 (Author) Posted June 18, 2024 Wow! Thanks for this testing. This is a top PC in all senses: Mobo, CPU, RAM, GPU! The extra +20 fps you got with the RAM upgrade and the KS is significant. Hope you recover soon from your health issue and are able to enjoy that machine in VR!
wju Posted August 14, 2024 (edited)

Hi friends, I have upgraded my system, so my new results are as follows:

CPU TEST:
2024-08-14 12:25:01 - Il-2 Frames: 8392 - Time: 60000ms - Avg: 139.867 - Min: 118 - Max: 178
2024-08-14 12:27:21 - Il-2 Frames: 8421 - Time: 60000ms - Avg: 140.350 - Min: 114 - Max: 183
2024-08-14 12:29:29 - Il-2 Frames: 8560 - Time: 60000ms - Avg: 142.667 - Min: 120 - Max: 190

GPU 4K TEST:
2024-08-14 12:36:16 - Il-2 Frames: 11500 - Time: 60000ms - Avg: 191.667 - Min: 173 - Max: 234
2024-08-14 12:38:16 - Il-2 Frames: 11510 - Time: 60000ms - Avg: 191.833 - Min: 165 - Max: 234
2024-08-14 12:40:19 - Il-2 Frames: 11510 - Time: 60000ms - Avg: 191.833 - Min: 167 - Max: 237

VR TEST2; HP REVERB G2 100% via OpenXR with OpenXR Toolkit disabled:
2024-08-14 12:10:53 - Il-2 Frames: 4857 - Time: 60000ms - Avg: 80.950 - Min: 63 - Max: 91
2024-08-14 12:12:54 - Il-2 Frames: 4690 - Time: 60000ms - Avg: 78.167 - Min: 61 - Max: 91
2024-08-14 12:15:06 - Il-2 Frames: 4721 - Time: 60000ms - Avg: 78.683 - Min: 62 - Max: 91

VR TEST3; HP REVERB G2 100% via OpenXR with OpenXR Toolkit disabled:
2024-08-16 11:36:05 - Il-2 Frames: 3306 - Time: 60000ms - Avg: 55.100 - Min: 46 - Max: 62
2024-08-16 11:38:00 - Il-2 Frames: 3247 - Time: 60000ms - Avg: 54.117 - Min: 46 - Max: 62
2024-08-16 11:39:57 - Il-2 Frames: 3283 - Time: 60000ms - Avg: 54.717 - Min: 44 - Max: 63

SPEC:
Motherboard: ASUS ROG STRIX X670E-F GAMING WIFI rev 1.xx
CPU: Ryzen 7 7800X3D
CPU Freq: 4.5 GHz
Cores: 8 Threads: 16
RAM size: 64 GB (2x32GB)
RAM Freq: 6000 MHz
NB Freq: 1500 MHz
RAM timings: 30-40-40-96
GPU: RTX 4090
SYSTEM: WIN11 23H2
IL2: 5.204b

*******************************

I also measured VR TEST2 via OpenXR Toolkit (FRAPS off) to see the frametimes for both CPU and GPU; the graph and average values are here:
GPU average frametime: 9.8 ms
CPU average frametime: 11.3 ms
FPS: 81.3
NOTE: FPS is not simply 1/max(GPU, CPU) frametime, because that does not include measurement-handling overheads. Ditto for VR TEST3.

Edited August 16, 2024 by wju: VR3 test added
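To make the arithmetic behind wju's note concrete, here is a minimal Python sketch of the relationship between the Toolkit's per-frame counters and the reported FPS. The frametime and FPS values are copied from the post above; the "overhead" term is derived from them under the simple assumption that CPU and GPU work overlap, so it is an illustration, not a separate measurement:

```python
# Why measured FPS is not simply 1000 / max(CPU, GPU) frametime.
# Average frametimes reported by OpenXR Toolkit (from the post above):
gpu_ms = 9.8
cpu_ms = 11.3

# If the pipeline were fully overlapped with zero extra cost per frame:
naive_fps = 1000.0 / max(gpu_ms, cpu_ms)   # ~88.5 fps

# What the Toolkit actually reported:
measured_fps = 81.3

# The gap implies per-frame time not captured by either counter
# (measurement/compositor handling overhead, per wju's note):
overhead_ms = 1000.0 / measured_fps - max(gpu_ms, cpu_ms)
print(f"naive: {naive_fps:.1f} fps, implied overhead: {overhead_ms:.2f} ms/frame")
```

With these numbers the naive estimate lands well above the measured 81.3 fps, and the difference works out to about 1 ms of unaccounted time per frame.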
chiliwili69 (Author) Posted August 30, 2024

On 8/14/2024 at 3:08 PM, wju said:
I also measured VR TEST2 via OpenXR Toolkit (FRAPS off) to see the frametimes for both CPU and GPU; the graph and average values are here: GPU average frametime: 9.8 ms / CPU average frametime: 11.3 ms / FPS: 81.3

Thank you very much for your testing. Your results are quite well aligned with other tests, although those used previous versions of the game. It is nice to see that the average value provided by Fraps matches the value provided by OpenXR Toolkit, both around 81 fps. The OpenXR Toolkit is a nice tool to record GPU and CPU frametimes and see what is bottlenecking the whole thing. Is it Excel that you used for the plots? What is the grey thing (rCPU)?
wju Posted August 30, 2024 (edited)

Not Excel, it is a Python plot; the rCPU definition is below, whatever it means:

app CPU: The time spent (on CPU) by the application to produce a frame. This may include game logic. This value is not always measurable (and may not be displayed).
rdr CPU: The time spent (on CPU) by the application for the rendering part of a frame. This may not include game logic.
app GPU: The time spent (on GPU) by the application to produce a frame. Note that if the application is CPU-bound, this value may be incorrect or inaccurate.

So says the great mbucchia himself here: https://mbucchia.github.io/OpenXR-Toolkit/overlay.html#advanced-metrics

After the Windows 11 update KB5041587, which claims better AMD CPU branch prediction, I have measured noticeably better CPU results:

CPU TEST:
2024-08-30 17:17:34 - Il-2 Frames: 8972 - Time: 60000ms - Avg: 149.533 - Min: 129 - Max: 192
2024-08-30 17:19:29 - Il-2 Frames: 9003 - Time: 60000ms - Avg: 150.050 - Min: 125 - Max: 196

BEFORE KB5041587 (see my previous post above):
CPU TEST:
2024-08-14 12:25:01 - Il-2 Frames: 8392 - Time: 60000ms - Avg: 139.867 - Min: 118 - Max: 178
2024-08-14 12:27:21 - Il-2 Frames: 8421 - Time: 60000ms - Avg: 140.350 - Min: 114 - Max: 183
2024-08-14 12:29:29 - Il-2 Frames: 8560 - Time: 60000ms - Avg: 142.667 - Min: 120 - Max: 190

Edited August 31, 2024 by wju
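For what it's worth, the uplift from KB5041587 in these runs works out to roughly 6%. A quick back-of-the-envelope check in Python, with the averages copied from the logs above (a sketch of the arithmetic only, not a statistically rigorous comparison since the run counts differ):

```python
# Average the "Avg" column of the CPU-test runs before and after KB5041587.
before = (139.867 + 140.350 + 142.667) / 3   # 2024-08-14 runs
after = (149.533 + 150.050) / 2              # 2024-08-30 runs

uplift_pct = (after - before) / before * 100
print(f"{before:.1f} -> {after:.1f} fps, +{uplift_pct:.1f}%")
```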
chiliwili69 (Author) Posted September 2, 2024

On 8/30/2024 at 8:49 PM, wju said:
app CPU: The time spent (on CPU) by the application to produce a frame. This may include game logic. This value is not always measurable (and may not be displayed).
rdr CPU: The time spent (on CPU) by the application for the rendering part of a frame. This may not include game logic.
app GPU: The time spent (on GPU) by the application to produce a frame. Note that if the application is CPU-bound, this value may be incorrect or inaccurate.

Hmm, those definitions are really a bit cryptic; I don't understand them exactly. @mbucchia Could you please give some more explanation of them, using the above graphs as an example?

On 8/30/2024 at 8:49 PM, wju said:
2024-08-30 17:17:34 - Il-2 Frames: 8972 - Time: 60000ms - Avg: 149.533 - Min: 129 - Max: 192
2024-08-30 17:19:29 - Il-2 Frames: 9003 - Time: 60000ms - Avg: 150.050 - Min: 125 - Max: 196

Wow, not bad! I assume that your VRTest2 has also improved a bit.
chiliwili69 (Author) Posted September 2, 2024

Since I also have the 7800X3D, I applied the Windows update as well to see if I got the same improvement:

BEFORE Windows update:
2024-09-02 09:14:01 - Il-2 Frames: 8509 - Time: 60000ms - Avg: 141.817 - Min: 122 - Max: 180
2024-09-02 09:16:08 - Il-2 Frames: 8562 - Time: 60000ms - Avg: 142.700 - Min: 119 - Max: 188
2024-09-02 09:19:26 - Il-2 Frames: 8584 - Time: 60000ms - Avg: 143.067 - Min: 121 - Max: 191

AFTER Windows update (Windows 11 23H2, 27-August-2024, KB5041587 preview version, builds OS 22621.4112 and OS 22631.4112):
2024-09-02 10:20:44 - Il-2 Frames: 8569 - Time: 60000ms - Avg: 142.817 - Min: 97 - Max: 190
2024-09-02 10:22:45 - Il-2 Frames: 8643 - Time: 60000ms - Avg: 144.050 - Min: 117 - Max: 191
2024-09-02 10:24:44 - Il-2 Frames: 8565 - Time: 60000ms - Avg: 142.750 - Min: 117 - Max: 188

I did not see the same improvement; perhaps it is not the same update, or I am missing something...
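As an aside, the "Avg" column in these benchmark logs is simply the frame count divided by the 60-second test window, which makes an easy sanity check when comparing runs. A minimal Python illustration, using one of the frame counts logged above:

```python
# The benchmark logs report Avg = Frames / test duration (60 s).
frames = 8643        # frame count from one 60-second run above
duration_s = 60.0

avg_fps = frames / duration_s
print(f"{avg_fps:.3f} fps")   # matches the logged Avg of 144.050
```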
wju Posted September 2, 2024 (edited)

11 hours ago, chiliwili69 said:
Wow, not bad! I assume that your VRTest2 has also improved a bit.

Yes. To be sure, I ran the CPU test again; the improvement is there, and in VR2 also:

CPU TEST
2024-09-02 18:56:01 - Il-2 Frames: 8758 - Time: 60000ms - Avg: 145.967 - Min: 120 - Max: 186
2024-09-02 18:58:14 - Il-2 Frames: 8964 - Time: 60000ms - Avg: 149.400 - Min: 127 - Max: 192
2024-09-02 19:00:22 - Il-2 Frames: 8821 - Time: 60000ms - Avg: 147.017 - Min: 124 - Max: 187
2024-09-02 19:02:38 - Il-2 Frames: 8942 - Time: 60000ms - Avg: 149.033 - Min: 122 - Max: 195

VR2 TEST
2024-09-02 19:10:47 - Il-2 Frames: 4737 - Time: 60000ms - Avg: 78.950 - Min: 60 - Max: 92
2024-09-02 19:13:31 - Il-2 Frames: 4988 - Time: 60000ms - Avg: 83.133 - Min: 67 - Max: 91
2024-09-02 19:16:00 - Il-2 Frames: 5080 - Time: 60000ms - Avg: 84.667 - Min: 70 - Max: 91
2024-09-02 19:18:28 - Il-2 Frames: 4996 - Time: 60000ms - Avg: 83.267 - Min: 67 - Max: 91

The update is the last one in my update history:

Edited September 2, 2024 by wju
FTC_ChilliBalls Posted September 7, 2024

Allegedly the newest Win 10 22H2 update also includes the branch-prediction fix, but I haven't seen any such statement from either Microsoft or AMD, nor any confirmation that Win 10 even suffered from the same issues.
1Sascha Posted September 9, 2024

As I'm currently contemplating upgrading my GPU, I checked out the results spreadsheet again and thought my latest numbers on there looked a bit off. I guess I probably messed up some of the OpenXR Tools settings when I ran the tests in the past, so I just re-ran VR1 and VR2 and... yeah... I think I screwed up. 😄

Anyhoo: other than putting a slightly higher OC on my card, using Intel's recommended "Performance" settings for my CPU's power limits... err... and upgrading to the 0x129-based BIOS for my board (because... Intel 13th/14th-gen disaster), nothing in my system has changed since I last reported a result. So I won't repeat all that stuff here in detail, as what's in the spreadsheet is still accurate. Short version:

- MSI Z690 Tomahawk WiFi DDR4, flashed to the "safe" microcode 0x129-based BIOS
- 2x16 GB G.Skill TridentZ RGB, 3600/CL16, running in XMP mode
- Intel i7-14700KF, PL1/2 253/253 W, ICCmax 307 A, no manual OC
- GB GF RTX 4070 Gaming OC, manually OCed by +175 MHz core and +1000 MHz on VRAM. Max core boost hits 3000 MHz with this OC, IIRC. Makes a difference in benchmark scores; not sure it makes a noticeable difference in games.
- HP Reverb G2 and OpenXR

But since I'm still not 100% sure I did everything right, here's how I made my adjustments to prep the rig for the test this time (OpenXR Tools and Toolkit can be pretty confusing, if you ask me 😄):

In-game: used CPU-Test graphics settings, as required in the original post.
OpenXR Tools for WMR: 100% render scale; other than "use latest runtime", nothing is checked/activated here.
OpenXR Toolkit (in-HMD menu): FSR or any other upscaling stuff: OFF. "System" tab: "Override Resolution" = "Yes", then set 2236x2188 (or something very close to it, as I couldn't get that exact resolution) as the custom resolution for the VR1 test. For VR2: override set to 3162x3093 (again: couldn't get the *exact* res, so I chose something very close to it).
I hope I've finally done it the correct way. 😄 Results:

VR1: 2024-09-09 17:20:59 - Il-2 Frames: 3996 - Time: 60000ms - Avg: 66.600 - Min: 53 - Max: 91
VR2: 2024-09-09 17:28:55 - Il-2 Frames: 3633 - Time: 60000ms - Avg: 60.550 - Min: 53 - Max: 69

Also: what's the correct way of upscaling/reducing res with OpenXR? Do I leave the "Override Resolution" setting on "No" and then reduce my res using FSR? With the 4070 (non-Super), I seem to barely get away with 80% like that with pretty low in-game settings, although career missions can still produce some hiccups when there are a lot of planes/vehicles around me. Or is it better to override resolution to, say, 80% and then leave FSR off?

Cheers, S.
LuftManu Posted September 9, 2024

Hey guys! Just a sitrep. Seems like the new Arrow Lake is dropping in October. I'll be getting one at launch and will also do some tests. The motherboard will probably be a Z890 Hero, with the unlocked Core Ultra 7 and Core Ultra 9. I will check them against my tuned KS and others. Desktop only, but it will help to see how they would behave in VR. Kind regards!
chiliwili69 (Author) Posted September 10, 2024

15 hours ago, 1Sascha said:
I hope I've finally done it the correct way. 😄

Thank you for re-running the VR1 and VR2 tests. Did you also run the monitor CPU test at 1080p? (Just for completeness.)

I have never used the OpenXR Toolkit or any upscaling tools (I have never had the need to, since in the past I used the Index and currently the Quest3), so I am not familiar with those settings. Judging by your results, I think you are doing it the right way, since they are more or less aligned with DBCOOPER's when he was testing a 12900K with a 4070 card.

11 hours ago, LuftManu said:
Desktop only, but it will help to see how they would behave in VR.

Yes sure, thanks for this. I saw in another post that you got access to the Zen5 CPUs. Any chance to test them with the SYN_Vander test, just on monitor?
1Sascha Posted September 10, 2024 (edited)

4 hours ago, chiliwili69 said:
I have never used the OpenXR Toolkit or any upscaling tools (I have never had the need to, since in the past I used the Index and currently the Quest3), so I am not familiar with those settings.

Thanks Chili. The thing with OpenXR Tools and OpenXR Toolkit (which are two different things/apps, AFAIK) is that I have... err... three different "switches" to play around with for resolution:

There's this in "OpenXR Tools":

^ Which they clumsily translated into German (sadly), so I have to take educated guesses as to what those settings mean, because 99.9 percent of the info out there on the web refers to these settings by their English names. In any case: this is "user-defined rendering scale", which I usually have set to "off"; I only activated it here to highlight it and make the 100% entry appear. As I understand it, this should lower (or increase) the resolution before it gets sent to the headset, but I'm not sure. I've tried reducing my resolution in here before, but the only discernible difference was that when I did this and then let Nvidia's overlay recording capture my screen for monitoring purposes, the game would slow down considerably while the recording was running. Almost like in the old days of capturing to an HDD, when Fraps or Bandicam could slow down your game while they were running. I don't get these slow-downs when I change resolution settings in the "OpenXR Toolkit", which is the in-headset app.

In that "Toolkit" thing, you have two more resolution "switches" to manipulate: one is under the System tab, where you can choose to override the resolution and then set your own. At 100% this is set to (or extremely close to) the G2's native res. The second switch is under "Performance", where you can select upscaling methods (FSR and NIS)... if you enable one of those, you can then set another target resolution.
Now, the odd thing about that setting is that while it starts at 100% (~native res) and you can set values both below and above 100%, once I go above 100%, the resolution value it shows me still decreases rather than increases. Meaning 90% in there is shown as the same resolution as when I set it to 110%. Not my settings, screenshot grabbed off the web, just to show what I'm talking about:

I figured that behavior might be upscaling-related, and since using 80% resulted in stutters in career mode, I went and tried 120% (which shows the same resolution in that menu as 80%), and it ran *a lot* smoother. I guess I don't quite get the mechanics involved here, but I suspect that 80% is "upscaling" (resulting in more pixels rendered) while 120% is actually "downscaling" (reducing the total number of pixels rendered and thus the load on the GPU). *shrug*

For day-to-day use, I currently have it overriding the native resolution to roughly 80% of native in the System tab to reduce the load on the GPU, and I switched off FSR in the Performance tab. Instead I'm using the CAS sharpening filter to counter blurriness/jaggedness, as that's what some folks in other forums have suggested. The image looks better and sharper to me this way than what I could get with FSR, plus the G2's sweet-spot behavior isn't as pronounced as it was before I tried these settings.

My problem is that I still don't understand half of what's going on here and feel like I'm just bumbling about... and it's hard to find any good explanations/guides on how to set this stuff up. I've found so many contradicting opinions/statements from users that I'm getting the feeling that a lot of folks out there also have no real clue about what's truly going on and which method of setting this stuff up is the preferred one. 😄

S. Edited September 10, 2024 by 1Sascha
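One thing worth keeping in mind with all of these render-scale percentages: they scale each axis of the image, so the actual pixel load falls with the square of the percentage. A minimal Python sketch, assuming the G2's 2160x2160 per-eye panel resolution as the 100% baseline (this only illustrates the pixel arithmetic; it does not model how OpenXR Toolkit maps its >100% values internally, which remains a Toolkit quirk):

```python
# A "percentage" render scale applies to width and height separately,
# so the pixel count (and roughly the GPU fill load) scales with its square.
native_w, native_h = 2160, 2160   # HP Reverb G2 per-eye panel resolution

for scale in (0.80, 1.00, 1.20):
    w, h = int(native_w * scale), int(native_h * scale)
    rel = (w * h) / (native_w * native_h)
    print(f"{scale:.0%}: {w}x{h} per eye, {rel:.0%} of native pixel load")
```

So an 80% scale renders only 64% of the native pixels, which is why a seemingly small step down in the percentage can make a noticeable difference in smoothness.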
1Sascha Posted September 10, 2024 (edited)

8 hours ago, chiliwili69 said:
Did you also run the monitor CPU test at 1080p? (Just for completeness.)

I was just doing that, but I noticed that my FPS were locked at 165, even while in the menus. The first run I did also reported an FPS max of 165... which seemed a bit convenient. 😄 165 Hz is my monitor's max refresh rate, so I double-checked IL-2's settings in Nvidia CP and also deactivated V-Sync in there (it was set to "default (off)" before), but I'm still getting this:

^ This is while switching IL-2 back to the CPU profile settings, but before the restart, so it is using lower settings than what is shown. Which really shouldn't constantly max out at *precisely* 165 Hz/FPS... so I wonder if I'm missing something here. Are there any other switches that could still turn on V-Sync even though it's deactivated in the in-game settings *and* in Nvidia CP? I don't think I'd be getting much more than 165 FPS, but this looks to me like my framerate is getting capped somehow, which would probably skew the result.

EDIT: Seems that switching it off in Nvidia CP (both for IL-2 and in the general settings tab) did the trick.

2024-09-10 17:37:44 - Il-2 Frames: 7995 - Time: 60000ms - Avg: 133.250 - Min: 116 - Max: 180

S. Edited September 10, 2024 by 1Sascha
LuftManu Posted September 10, 2024

On 9/10/2024 at 9:11 AM, chiliwili69 said:
I saw in another post that you got access to the Zen5 CPUs. Any chance to test them with the SYN_Vander test, just on monitor?

Hi! I do not have access to those PCs, but the difference would be so little that it's just not worth it. The new increased TDP somehow even reduces performance in a few games; I didn't test this myself, but my friend did. Anyway, I think we should wait for the 9800X3D to re-run other tests. If things go well and I can also test the new Korea series, I will run tests there too with different setups, so I can check with the team if something is wrong.
chiliwili69 (Author) Posted September 11, 2024

16 hours ago, 1Sascha said:
2024-09-10 17:37:44 - Il-2 Frames: 7995 - Time: 60000ms - Avg: 133.250 - Min: 116 - Max: 180

Thanks! My monitor is 60 Hz, and when I am in the menu sections of IL-2, the fps shown by Fraps sits at 60. But later, when running the benchmark mission, the fps goes higher.
wju Posted September 11, 2024 (edited)

On 9/9/2024 at 6:01 PM, 1Sascha said:
But since I'm still not 100% sure I did everything right, here's how I made my adjustments to prep the rig for the test this time... Also: what's the correct way of upscaling/reducing res with OpenXR? Do I leave the "Override Resolution" setting on "No" and then reduce my res using FSR? ... Or is it better to override resolution to, say, 80% and then leave FSR off?

Regarding this benchmark, IMHO the best way to set the correct values is as easy as follows:
1) completely disable OpenXR Toolkit
2) set the appropriate resolution via the custom render scale in OpenXR Tools for WMR; for the HP Reverb G2 it is 100%

Tweaking for playing is another endless story, but off-topic in this thread. 🤗

Edited September 11, 2024 by wju
1Sascha Posted September 11, 2024 (edited)

3 hours ago, chiliwili69 said:
My monitor is 60 Hz, and when I am in the menu sections of IL-2, the fps shown by Fraps sits at 60. But later, when running the benchmark mission, the fps goes higher.

That thought had crossed my mind (meaning that the menus max out at the monitor's max refresh rate), but as the earlier test run I did also showed a max FPS of 165, I got a bit suspicious.

Quote: 2024-09-10 17:02:38 - Il-2 Frames: 7567 - Time: 60000ms - Avg: 126.117 - Min: 95 - Max: 165
^ Older run with a "V-Sync-ON-esque" max framerate...

Especially since my older CPU runs with this card had always produced peaks higher than 165. But it's all good, since FPS on that second run I did spiked well above 165, so I'm pretty sure there wasn't any V-Sync going on this time.

Just ordered a new GPU, BTW, which should be here in two days, so I'll be posting more benchmarks soon. It's a 4070 Super, which might sound like a silly upgrade, but as I'm still unwilling to dish out over €800 for a graphics card (that's what the Ti Supers cost over here currently), yet still think my non-Super 4070 is a bit too weak for IL-2 VR (and I wanted to treat myself 😄), I didn't have much of a choice. I guess if it doesn't work out I can always send the card back and, perhaps, grab a Ti Super instead. 😄

S. Edited September 11, 2024 by 1Sascha
1Sascha Posted September 13, 2024 (edited)

Well... this is odd. Plopped in the new GPU today (Gigabyte 4070 Super Gaming OC), manually OCed the core and the VRAM, benched and stress-tested it, and in 3DMark etc. I got the expected uplift. Depending on the benchmark, I managed 15-20% (manually OCed) vs. the also manually OCed 4070 non-Super. I stopped pushing it when I got my first "Legendary" result in Port Royal, but I don't think I could go much higher, as that highest OC crashed the Port Royal stress test after 11 loops. The Speed Way stress test passed and ran with 99.1% stability, though... weird.

Anyway... the results in IL-2 are a bit odd. Haven't tried gaming with my own in-game and OpenXR settings, and I guess the CPU test doesn't tell me all that much about the GPU, but the VR test results are a bit underwhelming. 😄

CPU: 2024-09-13 19:16:10 - Il-2 Frames: 7669 - Time: 60000ms - Avg: 127.817 - Min: 107 - Max: 178
VR1: 2024-09-13 19:28:39 - Il-2 Frames: 3977 - Time: 60000ms - Avg: 66.283 - Min: 50 - Max: 91
VR2: 2024-09-13 19:33:31 - Il-2 Frames: 3846 - Time: 60000ms - Avg: 64.100 - Min: 53 - Max: 84

Looks like only VR2 shows some minor improvement; I wonder what will happen in actual gaming. Maybe I need to return the card and grab a Ti Super instead, after all... This was done with a slightly milder OC on the core than what I achieved manually (Afterburner auto-OC curve with +175 MHz core; manually I managed +250 MHz core) and a manual VRAM OC of +1150 MHz.

S. Edited September 14, 2024 by 1Sascha
chiliwili69 (Author) Posted September 16, 2024

On 9/13/2024 at 7:51 PM, 1Sascha said:
Looks like only VR2 shows some minor improvement; I wonder what will happen in actual gaming.

Thank you for re-testing the IL-2 benchmark with the new card. The results are very much as expected. The CPU test on the monitor benches only the CPU, so it should not be affected. VRTest1 is a mix of CPU and GPU; it depends on the combo. In your case you are mostly constrained by the CPU, and that's why you didn't get any improvement. VRTest2 is mostly a GPU test (it is 19.5 million pixels at 90 Hz, which is a lot!), and that's why you saw the improvement going from the 4070 to the 4070S. In actual gaming you will see an improvement in the cases where you were GPU-constrained. Not sure going to the 4070 Ti would improve things too much; if it is for the same price, then yes, it looks like the 4070 Ti is a bit better than the 4070S.
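The "19.5 million pixels at 90 Hz" figure for VRTest2 follows directly from the test's override resolution, which is per eye, with both eyes rendered every frame. A quick Python check of the arithmetic (a sketch of the pixel throughput only; it ignores any hidden-area mask the runtime may apply):

```python
# VRTest2 override resolution (per eye), from the test instructions.
per_eye_w, per_eye_h = 3162, 3093

pixels_per_frame = per_eye_w * per_eye_h * 2   # two eyes per frame
print(f"{pixels_per_frame / 1e6:.2f} M pixels per frame")
print(f"{pixels_per_frame * 90 / 1e9:.2f} G pixels/s at 90 Hz")
```

That works out to roughly 19.6 million pixels per frame, or on the order of 1.8 gigapixels per second at 90 Hz, which is why VRTest2 leans so heavily on the GPU.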
1Sascha Posted September 16, 2024 (edited)

1 hour ago, chiliwili69 said:
In your case you are mostly constrained by the CPU, and that's why you didn't get any improvement.

Yeah... that's what I figured. Although it's crazy to think that the 14700K/F is bottlenecking anything in such an old title, lol! I've tried to play around with OCing the CPU a bit, but I haven't found a way yet to push single-core performance without going into crazy power-draw/VID/temp territory. I'm talking scary levels here of around 350 W and near-instant thermal throttling. 😄 All I did for now was leave the PLs at Intel's recommended "Performance" settings and up ICCmax to 360 A to get the CPU to not current-throttle. It does reach the 253 W limit now under heavy load (was more like 200 W with the recommended ICCmax of 307 A), and the CB23 multi-core score went up from 33,500 to 35,500, but I'm not sure this does anything for single-core performance. *shrug*

1 hour ago, chiliwili69 said:
In actual gaming you will see an improvement in the cases where you were GPU-constrained.

Yeah... before, I had to run at slightly under 80% resolution (~76/77% via "Override Resolution" in the Toolkit; I've stopped using upscalers, as they tend to make the image look ugly/blurry and don't seem to offer any performance boost over simply reducing resolution). Now I'm able to run at around 84/85 percent in a full quick mission; I might have to reduce that slightly for career, but the uplift is pretty much what I expected.

1 hour ago, chiliwili69 said:
Not sure going to the 4070 Ti would improve things too much.

Oh no... I wasn't talking about the Ti but about the Ti Super (God, I hate those stupid Nvidia names 😄). If YouTubers' results and my own are any indication, the 4070 Super is *very* close to the OG 4070 Ti in performance. With my GPU and VRAM OCed manually, I did manage 3DMark results sitting inside the "Top 15%" for systems running a 4070 Ti.
The more GPU-heavy runs like Speed Way and Port Royal are Top 18% and Top 26% compared to 4070 Ti systems, so not too shabby for this kind of card. It's only when I compare my results to 4070 Ti Super systems that they start to stink. 😄

I *was* looking at getting a second-hand 4070 Ti instead of the 4070 Super, but those would've still been more expensive than a brand-new 4070 Super, with only minor advantages in performance (on paper... I naturally couldn't test this myself), even though I went for another GB "Gaming OC" model, which is above MSRP. I paid €630 for the card brand new, and the best-looking used 4070 Tis (non-Super) would've been just under €700, or €750 for a brand-new one. Which, I decided, wasn't worth the extra cost or risk given how small the Ti's lead over the 4070 Super seems to be.

In all honesty, I doubt that even the 4070 Ti Super would be enough to go full native res in IL-2 on the G2. I suspect for that you would need either a 4080 or a 4090, but that's just a hunch.

S. Edited September 16, 2024 by 1Sascha
1Sascha Posted October 3, 2024 (edited)

Upgraded to a Quest 3 yesterday (and boy... is it an upgrade!), so I thought I'd run the VR tests again (nothing else in the system has changed since the last test). So:

i7-14700KF
32 GB G.Skill TridentZ DDR4 3600, XMP enabled
GB RTX 4070 Super Gaming OC with a manual OC on core and VRAM
Win 11

The Quest 3 runs tethered with a pretty fast USB link/charging cable (Oculus puts it at 2.9 Gbps) going into a 3.2 Gen 1 Type-A socket on the board. And I did make some tweaks in ODT and in Nvidia CP for IL-2, as recommended by the community.

As I had to switch back to SteamVR last night to get another title running that I wasn't even going to play in VR (damn you, Hitman!!), I thought I'd run the test in SteamVR first and then once more after switching back to OpenComposite/XR.

Starting with my SteamVR results:
VR Test 1: 2024-10-03 18:22:27 - Il-2 Frames: 3334 - Time: 60000ms - Avg: 55.567 - Min: 43 - Max: 91
VR Test 2: 2024-10-03 18:29:33 - Il-2 Frames: 3154 - Time: 60000ms - Avg: 52.567 - Min: 36 - Max: 89

EDIT: And here are my OpenComposite/OpenXR results. Render res set to 1.0x in the Oculus app (90 Hz), then in OpenXR Toolkit (in-HMD menu) "Override Resolution" set to YES and the resolution set to the values specified in the VR1/2 test instructions. No other filters/scalers active in OpenXR Toolkit (no CAS/NIS/FSR, etc.):
VR Test 1: 2024-10-03 19:03:31 - Il-2 Frames: 3512 - Time: 60000ms - Avg: 58.533 - Min: 41 - Max: 91
VR Test 2: 2024-10-03 19:07:51 - Il-2 Frames: 3258 - Time: 60000ms - Avg: 54.300 - Min: 41 - Max: 91

Seems SteamVR will stay very much disabled for me. 😄

EDIT: I know this probably isn't the place for this, but having compared my last results with the G2/WMR and these first Q3 results, I should add that these numbers are nowhere near representative of the difference in performance I'm seeing with the Q3 in IL-2.
Judging from the benchmark results, you might think the G2 blows the Q3 out of the water WRT framerate, when I find it's quite the opposite when actually playing the game. The Q3 just runs so much nicer/smoother and provides a much better picture than anything I ever got from the G2. It's not just the lenses, edge-to-edge clarity and all that; it's also the fact that there are no more large or tiny hiccups. The game just *runs* at a rock-steady 90 FPS no matter what offline mission I throw at it.

S. Edited October 4, 2024 by 1Sascha
chiliwili69 (Author) Posted October 5, 2024

On 10/3/2024 at 6:41 PM, 1Sascha said:
The game just *runs* at a rock-steady 90 FPS no matter what offline mission I throw at it.

Thank you very much for doing all this. Firstly, it is the first time I have seen an objective comparison of SteamVR and OpenComposite-OpenXR, and your tests show +3 fps and +2 fps, which, although small, are there and are for free. Secondly, it is the first Quest3 benchmark I have seen here (apart from mine), and you also show similar results in VR1 (around 55 for SteamVR) and better results in VR2 (your 4070S card is above my 3080). But, as you said, it is much lower than what you were obtaining with the G2 (or I with the Index), and it really does not represent the fps in actual gameplay.

I normally use SteamVR (with the Oculus XR runtime set) and Airlink with my Wifi6 router at 72 Hz, and I am always at 72 fps except in some dense scenarios (15+ planes/vehicles) where my 7800X3D cannot hold 72 fps, but it is all as smooth and pleasant as with my previous Index (with better images).

If you look at my previous posts, I was comparing the four different ways to use the Quest3 (USB-cable QuestLink, Airlink, Virtual Desktop, SteamLink); I tested all of them with SteamVR. SteamLink gave higher results. This is something I still don't understand well; perhaps it is the heavy fixed foveated rendering applied. Benchmarking the Quest3 is a bit more complex than it was with the Index (only one method, DP cable, and you are done).

I haven't tried SteamLink lately; it was improved in the latest releases. Have you tried that?
1Sascha Posted October 5, 2024 Posted October 5, 2024 (edited) 43 minutes ago, chiliwili69 said: Your tests show +3 fps and +2 fps, which, although small, are there and come for free. That's what I thought, but considering the near-extreme in-game settings required by the test (and the high SS in Test 2), I do wonder if it won't make more of a difference with more "realistic" day-to-day settings. Not sure if this is down to the Quest 3, but when I switch to 80 Hz (and leave Render Resolution in the Oculus app at maximum ... err... 5400-something), I can just get away with enabling MSAA x2 with my settings (resulting in 81-85 FPS in-game). I wonder if that would be possible via SteamVR on this GPU. 43 minutes ago, chiliwili69 said: I haven't tried SteamLink lately; it was improved in the latest releases. Have you tried that? LOL... I wasn't even aware of SteamLink. I've only ever run the Q3 with cable link (via the Oculus app), as that was what was recommended in all the how-to videos I could find. EDIT: Just looked into SteamLink, and I'm not sure it would be an option for me. My router is a pretty basic German Telekom job that's probably 10 years old or so (it supports 802.11 on 2.4/5 GHz simultaneously, but it's not what I'd call "cutting edge" 😄 ). Plus, due to "architectural issues", I'm connected to it not via cable but over a power-line adapter - Wi-Fi reception in my gaming room is pretty bad. All of which works great for browsing/gaming and doesn't really matter, as I'm on a pretty crappy internet connection speed-wise (nothing faster available in my region), but I'm not sure it would be enough for any sort of wireless connection to the Q3. 
Getting 2.9 Gbps connection speed (according to the built-in tester), and I'm using these settings in ODT (copied from Virtual Warbirds/DerSheriff's how-to videos): ^ Perhaps these are outdated already, as the videos were both at least 5 or 6 months old and there seem to have been improvements in newer versions of the Oculus app. For instance, Sheriff uses a render resolution in the Oculus app one step lower than I do, and he said he couldn't go up to the render resolution that I'm using. But on his system (in the video), his setting (with a slightly lower render resolution than mine) is labeled as "higher rated"/more demanding than on my system... and he's on a 4070 Ti Super, so he has more GPU power than me, and I think we're both on the same CPU (14700K/F). I think what is shown for me in the app as 1.4x is 1.6x or 1.7x for him on the older Oculus-app version. And if I understand things correctly, those 1.3x, 1.4x numbers in the app aren't exact resolution multipliers but more like an indicator of how hard the setting is to run on your hardware. Considering the 4070 Super vs 4070 Ti Super thing, this would only make sense if it were an issue with the older Oculus version he was using compared to mine (v69). His video is... 5 months old, I believe. S. Edited October 5, 2024 by 1Sascha
chiliwili69 Posted October 6, 2024 Author Posted October 6, 2024 On 10/5/2024 at 11:02 AM, 1Sascha said: I do wonder if it won't make more of a difference with more "realistic" day-to-day settings. The main purpose of the benchmark was to compare CPU performance and look for the best CPUs for IL-2 (basically in the CPU test). The settings selected for the CPU test are simply those which try to load the CPU (and not the GPU; that's why they are a bit unrealistic). The VR1 test uses a low resolution (just 9.5 million pixels, no MSAA, low clouds) because it also tries to load only the CPU. The VR2 test then adds extra load to the GPU just by increasing the resolution. The purpose is NOT to know what the fps would be at "normal" settings during gameplay, since there will be many "normal" combinations of settings for each CPU, GPU and device (and everyone's taste). On 10/5/2024 at 11:02 AM, 1Sascha said: LOL... I wasn't even aware of SteamLink. I've only ever run the Q3 with cable link (via the Oculus app), as that was what was recommended in all the how-to videos I could find. Yes, there are many ways to use the Quest 3 with IL-2. I tried to draw a scheme here. Virtual Desktop, SteamLink and Airlink are via Wi-Fi, so if you don't have a good router you can forget them and use only QuestLink (USB cable), either with OpenComposite or with SteamVR (with Oculus set as the XR runtime). On 10/5/2024 at 11:02 AM, 1Sascha said: I think what is shown for me in the app as 1.4x is 1.6x or 1.7x for him on the older Oculus-app version The Pixels per Display, or Pixel Density, used by Oculus is the factor that you apply to each axis (horizontal and vertical). So, for example, if you apply 1.2, you render 20% more pixels horizontally and 20% more pixels vertically; since 1.2 x 1.2 is 1.44, in total you render 44% more pixels. SteamVR SS% doesn't use pixel density, just the % increase in the total number of pixels. Summary: 1.2 in Oculus means 144% in SteamVR. 1
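To make the arithmetic concrete, here is a small Python sketch (purely my own illustration; the function name is made up and this is not part of any Oculus or SteamVR API) that converts an Oculus pixel-density factor into the equivalent SteamVR SS%:

```python
def oculus_pd_to_steamvr_ss(pd: float) -> float:
    """Oculus Pixel Density scales each axis, so the total pixel count
    scales by pd squared; SteamVR SS% expresses that total directly
    as a percentage."""
    return round(pd * pd * 100, 2)

# 1.2 PD -> 44% more pixels in total, i.e. 144% in SteamVR terms
print(oculus_pd_to_steamvr_ss(1.2))  # 144.0
# 1.4 PD -> 196%, i.e. 96% more pixels in total
print(oculus_pd_to_steamvr_ss(1.4))  # 196.0
```

Going the other way is just the square root: SteamVR 150% corresponds to an Oculus pixel density of about 1.22.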
1Sascha Posted October 7, 2024 Posted October 7, 2024 (edited) 12 hours ago, chiliwili69 said: The purpose is NOT to know what the fps would be at "normal" settings during gameplay, since there will be many "normal" combinations of settings for each CPU, GPU and device (and everyone's taste). I understand all of that, and I wasn't questioning the tests themselves or their methodology. I was just stating that the results don't necessarily indicate how well or badly a system will perform while actually playing the game with different, more day-to-day settings, just as your experiences and mine with the Q3 seem to demonstrate. Like I said before: Looking at my raw benchmark results, one could get the impression that I was doing better with the Reverb than with the Quest 3, when my actual "in-game-using-my-own-settings" experience is quite the opposite. 12 hours ago, chiliwili69 said: Summary: 1.2 in Oculus means 144% in SteamVR. Ah... OK... so there is more to those settings in the app than just an estimation of how hard they will be to run. So with my current setting of 1.4x / 5152x2752 @ 90 Hz I'll be doing 1.4 x 1.4 = 1.96, so 96 percent more pixels... I guess. This stuff is still making my head spin... 😄 What kinda baffles me is that I've tried all refresh-rate settings now from 72 to 120... and I don't really see much difference. Even 72 vs 120 isn't nearly as pronounced as 60 vs 90 Hz was on the G2. Seeing how in DCS some folks with 4090s run the Q3 at 72 Hz, I'm starting to wonder if I'm leaving performance/picture quality on the table by sticking with 90 Hz. As mentioned before, I can just get away with MSAA x2 at 72 or even 80 Hz, which is probably the single most "effective" way to improve visual quality for me - although it does seem to make spotting much harder than FXAA x4, and it is quite taxing on the GPU. The only other method I can think of (outside of other in-game settings) is adding more pixels/increasing resolution ... 
somewhere. I do wonder what the preferred method is here ... going max res in the Oculus app? Going 1.0x in the app and then using "Override Resolution" in OpenXR Toolkit? Or maybe using the PD setting in the Oculus Debug Tool? So many settings to fiddle with, and I don't really know what the most effective way of doing this is. I do remember, watching the benchmark run at the VR2 test settings, that the game looked very impressive (especially noticeable on the skin and model of the "player's" IL-2 in the benchmark). But I don't rightly know if that was down to the high in-game settings or the high SS setting. Is there a general priority "list" or even consensus on what settings/changes will have the biggest effect on picture quality? Like render res in Oculus vs. PD setting vs. res setting in SteamVR/OpenXR Toolkit vs. in-game settings? Meaning: Where should I spend my limited GPU resources to ensure maximum visual quality while maintaining playability? S. Edited October 7, 2024 by 1Sascha
chiliwili69 Posted October 7, 2024 Author Posted October 7, 2024 9 hours ago, 1Sascha said: Like I said before: Looking at my raw benchmark results, one could get the impression that I was doing better with the Reverb than with the Quest 3, when my actual "in-game-using-my-own-settings" experience is quite the opposite. Yes, you are right. And it is good that you pointed that out, so people fully understand it. 9 hours ago, 1Sascha said: Is there a general priority "list" or even consensus on what settings/changes will have the biggest effect on picture quality? Like render res in Oculus vs. PD setting vs. res setting in SteamVR/OpenXR Toolkit vs. in-game settings? Meaning: Where should I spend my limited GPU resources to ensure maximum visual quality while maintaining playability? This is the million-dollar question.... 😉 To complicate it a bit more, you should know that, depending on the refresh rate (72 Hz, 80 Hz, etc.), Oculus internally adjusts the default resolution. So, if you use 1.4 pixel density, the number of pixels you render at 72 Hz is not the same as at 80 Hz or 90 Hz:
chiliwili69 Posted October 7, 2024 Author Posted October 7, 2024 Regarding image quality (and performance; both go together), there are many decisions to take when using the Quest 3 with IL-2: 1- Connection method: QuestLink, AirLink, VD or SteamLink 2- Refresh rate: 72, 80, 90 or 120 Hz 3- Compression method: H264, H265 or AV1 4- Resolution (Oculus PD or SteamVR SS%) 5- In-game settings: MSAA/FXAA, sharpen, etc. With just the first three points you already have 4x4x3 = 48 different ways to use the Quest 3, and then you still have to add the multiple resolutions you can use and the in-game options. And if you add the OpenXR Toolkit, there could be over 1000 ways to use it. My decision tree has been: 1.- I use AirLink. It works well. I like Wi-Fi 6E. No cables, and with an extra battery I can play for 3-4 hours. 2.- I use 72 Hz, since it gives better images for the same bitrate (200 Mbps is the limit for the XR2 Gen 2): fewer frames, but better ones. 3.- I use H265. My card doesn't support AV1 (only the 4000 series, like yours). 4.- I use SteamVR SS 150%; for my 3080 that is where I can always keep 72 fps with some margin. 5.- I use FXAA x2 (no cost on GPU) and sharpen. I don't use OpenComposite or the OpenXR Toolkit. Is my combination the absolute best? Most likely not, but I am having plenty of fun with it when playing IL-2 with the Quest 3. 😉 1 1
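For what it's worth, the combination count above can be checked with a few lines of Python (purely illustrative; the option lists simply mirror points 1-3):

```python
from itertools import product

connections = ["QuestLink", "AirLink", "Virtual Desktop", "SteamLink"]
refresh_rates_hz = [72, 80, 90, 120]
codecs = ["H264", "H265", "AV1"]

# Every (connection, refresh rate, codec) triple is a distinct setup
combos = list(product(connections, refresh_rates_hz, codecs))
print(len(combos))  # 48, before even picking a resolution or in-game settings
```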
1Sascha Posted October 7, 2024 Posted October 7, 2024 (edited) Gracias chili! 😄 Inspired by the settings for the VR test, I stayed at 90 Hz, set 1.0x in the Oculus app, and then went a bit nuts with "Override Resolution" in OpenXR Toolkit. Tried 3020 (so 6040 in Oculus-app terms), which is way more than I could set in there (the slider maxes out at 5400-something). That caused some performance issues in career missions, so I reduced it to ~2900 (5800), which is still a bit more than what the Oculus app lets me set. Especially the higher (but problematic) setting resulted in pretty gorgeous visuals - 2900 is still pretty damn good. I also went down to 80 Hz, which also helped to speed things up a bit. As for settings: I'm using H.264 at 500/600 Mbps bitrate, and I'm connected via a link/charging cable - like I said: my router is too ancient for Air Link to make much sense, and I don't mind the cable. Not sure how I would set AV1, because the Video Codec entry in ODT only shows "Default", "H.264" and "H.265"... no AV1 entry there for me. EDIT: Looks like AV1, while supported by my card and by the Q3, isn't available when using a link cable... for whatever reason. https://communityforums.atmeta.com/t5/Get-Help/When-is-wired-Quest-3-link-getting-AV1-encoding/td-p/1130603 I did manage to run MSAA x2 at 80 Hz (at 1.4x in Oculus, IIRC) with some in-game stuff reduced, and it looked gorgeous ... but spotting seems a lot harder with it on (with FXAA x2 or x4 I can see dots beyond icon range)... here's a video of a career mission (don't watch the whole thing) where, right at the start, you can see a group of planes at my 1 o'clock, out of icon range, which were *very* obvious to spot in VR (less so in the video, and probably even less so after I converted it for YT). And I usually suck at spotting, no matter whether it's on a monitor or in VR... 😄 I captured left-eye only and at 1440p, but we all know what compression does to the details. 
This was at 80 Hz with the slightly lower res I described above and FXAA x4 ... even if the FPS counter seems to fluctuate at times, the experience inside the HMD was very fluid. S. Edited October 8, 2024 by 1Sascha
LuftManu Posted October 11, 2024 Posted October 11, 2024 Hello everybody, @chiliwili69 Core Ultra Benchmarks will be done as soon as I can. Kind regards, 2
1Sascha Posted October 13, 2024 Posted October 13, 2024 On 10/11/2024 at 8:53 PM, LuftManu said: Core Ultra Benchmarks will be done as soon as I can. Nice! Can't wait to see if Intel's admission is correct and those things are actually a bit slower than 14th gen in games... S.
chiliwili69 Posted October 14, 2024 Author Posted October 14, 2024 It looks like it (only one sample for now) will be above the 14900KS while consuming less power
Aapje Posted October 14, 2024 Posted October 14, 2024 Passmark is not a game. The 14900K was already beating the 9950X in gaming on average, although in F1 2024 that was not the case: But Intel's own tests for F1 2024 with the new Core Ultra chips are worse, so I would expect a regression for simming titles:
LuftManu Posted October 14, 2024 Posted October 14, 2024 5 hours ago, Aapje said: Passmark is not a game. But Intel's own tests for F1 2024 with the new Core Ultra chips are worse, so I would expect a regression for simming titles: Hi! Sims are built different 😉 Tests with the showcase were done with: Processor: Intel® Core™ Ultra 9 285K 24C/24T; PL1=PL2=250W; Memory: G.Skill 2x16GB 6400 MHz 32-39-39-102 DDR5; Motherboard: ASUS ROG MAXIMUS Hero Z890; BIOS: 8001. We are forgetting why the K SKUs were made: recent generations had so little headroom that overclocking was hardly useful. Thermal headroom and performance gains are coming back. Thermal improvements with the new IHS are notable. There is also CUDIMM (a new RAM format) with a clock driver to achieve even higher speeds. Aim for 8000 MT/s and get a 360 AIO. An Ultra 7 265K in Prime95 small FFT: 240 W @ 5.2 GHz with a max temp of 75°C. Also: 19 hours ago, 1Sascha said: Nice! Can't wait to see if Intel's admission is correct and those things are actually a bit slower than 14th gen in games... S. Surprise incoming! 1
Aapje Posted October 14, 2024 Posted October 14, 2024 1 hour ago, LuftManu said: Hi! Sims are built different 😉 Yes, but looking at a sim benchmark is far better than looking at a general compute benchmark. Of course an IL-2 benchmark would be better still, but we don't have any yet. 1 hour ago, LuftManu said: Aim for 8000 MT/s and get a 360 AIO. Then I still don't see it beating a 9800X3D, and you are producing a ton of heat, causing discomfort, thermal issues and high electricity bills. Also, the first CUDIMMs are almost certainly going to be so pricey that they make little sense, unless you want the absolute fastest. But if you want the absolute fastest in gaming, you go for AMD. Quote Surprise incoming! Do you seriously think that they wouldn't claim the win if they could? This just seems like copium.
LuftManu Posted October 14, 2024 Posted October 14, 2024 15 minutes ago, Aapje said: Then I still don't see it beating a 9800X3D, and you are producing a ton of heat, causing discomfort, thermal issues and high electricity bills. The 9800X3D won't be a 20-25% performance uplift like the one from Zen 3 to Zen 4. Sure, it will be better, but it's a smaller gain with a bigger cache. Be prepared with your wallet; the crown is pricey. (...) if you want the absolute fastest in gaming, you go for AMD. This is not true. To be specific, a 14900KS with fast (pricey) RAM will perform better and not be CPU-limited in some cases, unlike the 7800X3D with its FPS dips. Check the benchmarks. The 7800X3D is simply the best price/performance out there, though not so much anymore, since the price here in Europe went through the roof (450€ and more). On AM4, the 5700X3D is still the best value right now with DDR4. Do you seriously think that they wouldn't claim the win if they could? This just seems like copium. Yes, I'm just an old man yelling at clouds, copiuming 😁 Performance will be similar on the high SKUs. AMD still beats it on wattage and might have a 5% lead, depending on the situation. With the limits removed on the i9 (now that you have thermal headroom) you can improve performance without pulling 50-80 W more, keeping single-core equal and multi-core better. Just to clarify, I have different setups, both AMD and Intel; I just care about performance and like OC. So I don't win anything if my "preferred" one is the king. Kind regards,
1Sascha Posted October 17, 2024 Posted October 17, 2024 (edited) Slightly OT, but I thought I'd share it in here. A couple of days ago I was trying to get DCS to run halfway decently in MP (good luck with that!) and did some recording with HWiNFO graphs open in the background - as I can't seem to get the Afterburner/RivaTuner overlay to show up in either DCS's or IL-2's mirror window anymore... it used to work, but now it doesn't. Anyhoo: As a reality check and for general comparison, I also did some recordings in IL-2 while I was at it. This is what I saw during a very busy career mission on the Kuban map - for some reason, the graph windows had re-arranged themselves during this video, so the GPU-usage window was covered up by the "Available VRAM" graph window, but this is more about CPU usage. Airfield-defense mission, and the shot was taken right as I was making contact with the enemy near the field - there were ~4 IL-2s and 6 or more La-5s at my low 1 o'clock when this shot was taken, I was accompanied by the max number of wingmen I could get (7 other 109s), *and* there was another friendly AI flight (not sure how many planes in that one - probably 2 or 4) taking off from the airfield that was being attacked. So... a pretty busy mission, I'd say. The Quest was set to 1.0x, 80 Hz, no upscaling in OpenXR (only CAS active) and override resolution set to 2850/5700xsomething - which gives very nice visuals and *just* leaves enough GPU headroom to avoid stutters. Think I had one or two FPS drops in this mission during the large furball that followed, but those weren't really noticeable while flying. I was quite surprised to see that the CPU was barely breaking a sweat in IL-2 - I mean... I know that on my setup I should be GPU-bound all the time, more or less (14700KF + 4070 Super), but I hadn't expected the CPU load to be *this* low. I was also pleasantly surprised at how little VRAM the game seems to need with my medium-to-high settings. 
Especially compared to DCS (which eats it all), and considering that just having the Quest 3 operational already uses 2 to 3 GB of VRAM, even without any games launched. On 10/14/2024 at 4:34 PM, Aapje said: Do you seriously think that they wouldn't claim the win if they could? I'm probably being too naive here, but I can't help but wonder if they are planning to launch more of those "Core Ultra" models in the near future. The reason I'm saying this is the names they've chosen for the new chips. Why call the 14600's successor "245" or the 14700's "265"? Not only is it confusing, odd numbers like that are certainly not what the marketing department would advocate from a psychological POV ... AFAIK, even, round numbers like, oh, I don't know... 14600 or 260 or 270 are considered more psychologically "pleasing". Again: this might be totally wrong and just a simple screw-up on Intel's part regarding the new naming conventions for this generation, but... maybe one of the reasons why they freely admitted the relative lack of uplift in games is that there are other new CPUs in the works already? I'm probably under- and overthinking this at the same time ... 😄 S. Edited October 17, 2024 by 1Sascha
LuftManu Posted October 18, 2024 On 10/17/2024 at 9:54 AM, 1Sascha said: I'm probably being too naive here, but I can't help but wonder if they are planning to launch more of those "Core Ultra" models in the near future. The reason I'm saying this is the names they've chosen for the new chips. Why call the 14600's successor "245" or the 14700's "265"? Not only is it confusing, odd numbers like that are certainly not what the marketing department would advocate from a psychological POV ... AFAIK, even, round numbers like, oh, I don't know... 14600 or 260 or 270 are considered more psychologically "pleasing". Again: this might be totally wrong and just a simple screw-up on Intel's part regarding the new naming conventions for this generation, but... maybe one of the reasons why they freely admitted the relative lack of uplift in games is that there are other new CPUs in the works already? I'm probably under- and overthinking this at the same time ... 😄 S. Yeah! VRAM usage in DCS is nuts! Not so in IL-2, and I don't expect a huge uplift in IL-2 Korea. We might jump from a maximum of 5-6 GB in VR/high resolution to 8 GB. That's my expectation, but I don't think we are going to surpass 12 GB yet. The same goes for RAM. I've seen up to 10 GB in VR; maybe we are getting close to 16. With 32 being the new norm (and 16 not really suitable for DCS), I don't expect this to be an issue. About the new Intel naming scheme... yeah, I also find it kind of off. But we have to understand they want to start anew. There were so many issues with 13th/14th gen, and this is new tech for them. These CPUs are also made by TSMC on the N3B node, so different from Intel's own fabs. (This should also help those who are wary of the recent oxidation mishap.) In other words, my new Intel rig is coming. I'm going for an ASUS APEX overclocking board, and I also ordered the new CUDIMM RAM kits at 8800 MHz. 
My idea is to get them to 9000 MHz+. But I feel like this new gen might not need 2-DIMM boards: the new CKD tech will enable faster speeds and make them more reliable/easier to reach, even on 4-DIMM boards. Can't wait to test the new Intel generation + hyper-fast RAM in sims. Will also test this with the new MSFS 2024, and with Korea when it is available for testing. Will update as soon as I can. Kind regards! 1
=SFG=BoostedStig Posted November 13, 2024 Posted November 13, 2024 Got a 9800X3D on release. So far I've only had a chance to run the CPU benchmark. Motherboard: Asus ROG Strix X670E-F CPU: AMD R7 9800X3D CPU Freq: 5.4 GHz (PBO enabled, +200 MHz offset) Cores: 8 Threads: 16 RAM size: 64GB (2x32GB) RAM Freq: 6200 MHz (DDR5) FCLK: 2066 MHz RAM timings: 30-36-35-40 GPU: Gigabyte Gaming OC 4090 Windows 10 - 23H2 - 22631.4391 - the most recent version that isn't 24H2 or an early-release/access build, I'm pretty sure. 1080p CPU Test: 2024-11-12 19:52:14 - Il-2 Frames: 10118 - Time: 60000ms - Avg: 168.633 - Min: 147 - Max: 215 1 1
DBCOOPER011 Posted November 13, 2024 Posted November 13, 2024 Just got my 9800X3D in today and did a couple of runs with PBO +200 and -30 CO at XMP settings, and got the results below. I still need to tweak my RAM settings, since they didn't carry over from my 7800X3D setup. Seems like there is a lot of potential in this chip. My last 7800X3D run at -30 CO is below as well. 9800X3D: 2024-11-12 23:29:55 - Il-2 Frames: 10960 - Time: 60000ms - Avg: 182.667 - Min: 159 - Max: 231 2024-11-12 23:31:59 - Il-2 Frames: 10723 - Time: 60000ms - Avg: 178.717 - Min: 157 - Max: 221 7800X3D (-30 CO): 2024-11-08 05:41:58 - Il-2 Frames: 9490 - Time: 60000ms - Avg: 158.167 - Min: 135 - Max: 203
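As an aside, the Avg value in these FRAPS lines is simply Frames divided by the benchmark length in seconds. A small Python sketch (the regex is my own guess at the log format quoted in this thread, not anything official from FRAPS) can double-check any of the runs posted here:

```python
import re

# Matches lines like:
# "2024-11-12 23:29:55 - Il-2 Frames: 10960 - Time: 60000ms - Avg: 182.667"
LINE = re.compile(r"Frames:\s*(\d+)\s*-\s*Time:\s*(\d+)ms\s*-\s*Avg:\s*([\d.]+)")

def recompute_avg(log_line: str) -> float:
    """Recompute average fps as frames / seconds and compare with the logged Avg."""
    frames, time_ms, logged_avg = LINE.search(log_line).groups()
    avg = int(frames) / (int(time_ms) / 1000)
    assert abs(avg - float(logged_avg)) < 0.001  # logged Avg is rounded to 3 decimals
    return round(avg, 3)

print(recompute_avg("2024-11-12 23:29:55 - Il-2 Frames: 10960 - Time: 60000ms - Avg: 182.667"))  # 182.667
```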
chiliwili69 Posted November 13, 2024 Author Posted November 13, 2024 6 hours ago, =SFG=BoostedStig said: Frames: 10118 - Time: 60000ms - Avg: 168.633 - Min: 147 - Max: 215 Many thanks for this test. It gives you +16 fps with respect to your previous Ryzen 9 X3D at 5.7 GHz. Not bad at all. And perhaps you'll also save some watts on your electricity bill.