chiliwili69 Posted October 10, 2024

Since VR was implemented in IL-2 (in 2017), we all quickly realized that our CPUs of that era became the bottleneck. The resolutions of the initial HTC Vive and Rift were only around 4-5 million pixels in total (at 100% SteamVR SS), and the top GPUs (1080 Ti) were able to handle them well at 90 Hz.

A number of IL-2 benchmarks have been run over the years (Balapan, Samuel, Remagen and the current Syn_Vander) to check IL-2 performance on monitor and in VR. One thing that has always been observed is that the fps are halved when playing in VR (with the GPU not loaded). In the picture below, red dots are fps on a 1080p monitor, and blue dots are fps in VR for Rifts and Vives.

So it looks like the CPU has to do double the work in VR. But in VR the 3D scene is calculated just once, and it is the GPU that has to do two renders, one for each eye. I was not understanding why the CPU should have to do about double the work, so I asked my expert friend (ChatGPT) about it, and this is what he told me:

1. Increased CPU Workload in VR

In VR, the CPU has to handle more tasks compared to running the game on a traditional monitor. Even though the game logic might be similar, VR introduces additional processing demands that can overwhelm the CPU:

- Handling Two Camera Views (Two View Matrices): In VR, the CPU needs to calculate two separate view matrices, one for each eye. While this is not highly intensive on its own, it does add extra work for every frame. Each camera view might require different occlusion culling (deciding which objects are visible and need rendering) or object updates for each perspective. If the scene is complex or has a lot of objects, this can become CPU-intensive.
- Head and Body Tracking: VR requires constant real-time tracking of the user's head and controllers. The CPU has to frequently process this input and adjust the scene and game logic based on the user's position, orientation, and actions. This tracking data needs to be processed at very high frequencies (well above 90 Hz to maintain low latency), which adds overhead to the CPU.
- Physics and Game Logic Computation for Two Views: In some cases, the game engine may calculate physics and game logic separately for each camera view to ensure precise interaction and collision detection from both perspectives. This could effectively double the amount of work the CPU is doing in terms of physics, animation, and other scene updates.
- Input Processing: VR often involves more complex input processing (e.g., hand controllers, eye tracking, etc.) compared to playing with a mouse and keyboard or game controller on a standard monitor. This extra input data requires more frequent updates and can further strain the CPU.

2. Higher Update Rates for Sensor and Tracking Data

VR systems often require sensor data updates at higher rates compared to a standard game played on a monitor. This involves:

- Headset position tracking (for accurate head movement and orientation).
- Controller tracking (for user input).
- Motion prediction (to reduce latency).

All these updates need to be processed in real time, adding significant CPU overhead. This demand increases in complexity as the game world and interactions become more intricate.

3. Multithreading and Engine Optimization

Single-threaded vs Multithreaded Performance: Many game engines are not perfectly optimized for parallel processing (multithreading). This means that even though your CPU may have multiple cores, the main game loop (which handles physics, AI, and input) might still run primarily on a single core or a limited number of threads. VR, with its extra workload, can push that primary thread beyond its limits, causing CPU bottlenecks. On a 4K monitor, you may not see this issue because the CPU workload is lower and the GPU is likely the limiting factor. In VR, the extra processing for multiple views and input tracking can overwhelm the CPU, even if the GPU can keep up.

4. Higher Frame Rate and Latency Sensitivity

- VR Demands Consistent Low Latency: In VR, frame timing and latency sensitivity are much stricter compared to traditional gaming. The CPU needs to prepare each frame in time for the GPU to render it at a consistent frame rate (e.g., 90 FPS or 120 FPS), and this needs to happen with minimal delay. If the CPU cannot consistently keep up with the high demand for updates (due to tracking data, game logic, physics, etc.), it will cause frame drops or limit the frame rate, even if the GPU is capable of rendering at a higher rate.
- Frame Time Deadlines: In a non-VR game, frame timing might be less strict, and the CPU can sometimes take longer to process certain frames without a noticeable impact on the user experience. In VR, however, any lag or delay in frame processing can immediately cause discomfort or motion sickness, so the CPU needs to process frames quickly and consistently.

5. VR-Specific CPU Overhead (Asynchronous Time Warp, Reprojection)

Many VR systems use techniques like asynchronous time warp (ATW) or asynchronous reprojection to smooth out frames and reduce the perception of latency. These techniques require the CPU to perform extra calculations to predict where the user's head will be in the next frame and adjust the rendering accordingly. While these methods reduce the strain on the GPU, they add additional work for the CPU, potentially reducing the overall frame rate if the CPU cannot keep up.

In IL-2 VR we don't use VR controllers or VR body tracking, only head tracking. So my question is: what exactly is the task that doubles the work of the CPU? I marked above in green one potential reason: is physics calculated for each view? Do we really need that?
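To put the "two view matrices" item in perspective: building the per-eye matrices really is only a handful of arithmetic operations per frame. A minimal, hypothetical sketch (generic graphics math, not IL-2's actual code; the translation-only matrix, the function name and the 64 mm IPD are my own assumptions for illustration):

```python
def view_matrix(head_pos, ipd_offset):
    """Build a trivial translation-only view matrix for one eye.

    head_pos: (x, y, z) world-space head position.
    ipd_offset: signed half-IPD shift along the head's x axis.
    Illustrative only -- a real engine also applies head orientation,
    and IL-2's internals are not public.
    """
    ex = head_pos[0] + ipd_offset
    ey, ez = head_pos[1], head_pos[2]
    # Row-major 4x4: identity rotation, inverse translation (world -> eye).
    return [
        [1.0, 0.0, 0.0, -ex],
        [0.0, 1.0, 0.0, -ey],
        [0.0, 0.0, 1.0, -ez],
        [0.0, 0.0, 0.0, 1.0],
    ]

head = (0.0, 1.7, 0.0)   # head roughly 1.7 m above the ground
ipd = 0.064              # a typical 64 mm interpupillary distance

left_view = view_matrix(head, -ipd / 2)
right_view = view_matrix(head, +ipd / 2)

# Building both matrices is a few additions per frame -- nowhere near
# enough on its own to double the CPU's workload. The expensive per-eye
# candidates are culling and draw-call submission, not the matrices.
```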
DBCOOPER011 Posted October 10, 2024

Just did a Syn_Vander CPU run in CapFrameX and it looks like only two cores are doing the majority of the work...
1PL-Husar-1Esk Posted October 11, 2024

18 hours ago, chiliwili69 said: physics calculated for each view?

I don't think physics is calculated for each view; it's the rendering that is done twice. I don't know whether that can be optimized with a pixel shift or whether the scene needs to be drawn separately for each viewport.
jollyjack Posted October 11, 2024

How does hyperthreading affect IL-2? https://www.reddit.com/r/LocalLLaMA/comments/1cl278t/if_you_are_using_cpu_this_one_simple_trick_will/
Aapje Posted October 11, 2024

1 hour ago, jollyjack said: How does hyperthreading affect IL2?

Hyperthreading tends to speed up some games and slow down others. The difference is usually not going to be that big. And that Reddit post is not even about running games, but about running an AI model, so...
1Sascha Posted October 11, 2024 (edited)

13 hours ago, DBCOOPER011 said: Just did a Syn_Vander CPU run in capframex and it looks like only 2 cores are doing a majority of the work...

That's the general impression I get looking at HWInfo during a VR session. My CPU doesn't seem to get taxed *that* much by IL-2 in VR. Since I lost the ability to display the RivaTuner overlay on IL-2's monitor window in VR (still no idea why, as this always used to work), it's a bit hard to follow exactly what's going on at any point in time, though. While I do see *spikes* of up to 100 W power draw from the CPU, and higher loads on one or two cores, the average CPU load seems quite a bit lower over the entire session. Of course, this could just be me being perpetually GPU-bound with my combo of a 4070 Super and i7-14700KF, but I do wonder if this couldn't be improved if IL-2 used more cores/threads, as most of mine seem to be barely doing anything most of the time. If it weren't such a chore, I'd almost be tempted to swap that i7 out and replace it with the i5-12600K I used before, just to see what would happen to CPU load and overall performance. 😄 S.

Edited October 11, 2024 by 1Sascha
1PL-Husar-1Esk Posted October 11, 2024 (edited)

45 minutes ago, 1Sascha said: couldn't be improved if IL-2

The current engine is finished; no major redesign of its architecture will be done. They have the new engine for that. That's what I understood when I listened to them speak. If it is a bug that can be fixed without major changes, then that's an option, I believe.

Edited October 11, 2024 by 1PL-Husar-1Esk
chiliwili69 (Author) Posted October 17, 2024

On 10/10/2024 at 6:00 PM, firdimigdi said: See no. 3

I could understand that VR requires some extra CPU load, but not double the work. When we do a CPU test in IL-2 on a monitor (maxing all settings that load the CPU, but using a very low resolution like 1080p on a 4090 GPU), we obtain X fps. But when we do the same in VR (in 120 Hz mode, for example, and using a very, very low resolution on a 4090 GPU), we obtain X/2 fps. So it looks like VR requires double the work, but it should not.

On 10/10/2024 at 9:37 PM, DBCOOPER011 said: capframex

Nice tool. It looks better than the old Fraps. If you run the same test in VR, I assume you will obtain the same result: two cores loaded.
firdimigdi Posted October 18, 2024

18 hours ago, chiliwili69 said: I could understand that VR requires some extra CPU load, but not double the work.

It's not doing double work. The game's main thread is blocked because almost everything is done there, from AI processing to UI; VR simply adds just enough extra work to tip the scale. That's the reason higher single-core performance matters for this game: it allows quicker computation of the main thread's workload.
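This "tip the scale" effect would also explain why the drop shows up as exactly half the fps rather than, say, 10% less: VR runtimes typically lock output to integer divisors of the refresh rate, so missing the frame budget by even a millisecond halves the rate. A toy model of that behaviour (hypothetical numbers and function, not IL-2's or any runtime's actual scheduler):

```python
def effective_fps(cpu_frametime_ms, refresh_hz=90):
    """Effective frame rate when a VR compositor pins output to an
    integer divisor of the refresh rate, as reprojection modes do.
    A simplified model for illustration only."""
    budget_ms = 1000.0 / refresh_hz
    divisor = 1
    # Fall to the next divisor (90 -> 45 -> 30 ...) until the frame fits.
    while cpu_frametime_ms > budget_ms * divisor:
        divisor += 1
    return refresh_hz / divisor

# A main thread taking 10 ms per frame holds 90 fps...
print(effective_fps(10.0))   # 90.0
# ...but ~1.5 ms of extra VR work pushes it past the 11.11 ms budget,
# and the compositor halves the rate to 45 fps.
print(effective_fps(11.5))   # 45.0
```

Under this model a CPU that is merely "slightly" overloaded in VR still reports half the monitor fps, which matches the red-dots/blue-dots observation at the top of the thread.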
HazMatt Posted October 18, 2024

I find this interesting. I was told that my 12100 was bottlenecking my 1080 Ti even though it wasn't showing high usage, so I got a 12700 and nothing changed. I then upgraded to a 4070 Super with the 12700 and there was an improvement in detail, but nothing earthshaking.

Hardware:
- i3-12100, 1080 Ti, Quest 2
- i7-12700, 4070 Super, Quest 2

The performance difference between the two doesn't justify the price difference, in my opinion.
chiliwili69 (Author) Posted October 19, 2024

15 hours ago, HazMatt said: I was told

Every combination of CPU, GPU, VR device and settings is different. If someone tells you something (including me), it could be wrong. The best way I know to see how your GPU or CPU is bottlenecked for your VR device and your settings (game, VR device, etc.) is to run fpsVR while you are playing. That way you can view the CPU frametimes, the GPU frametimes, and the reprojection threshold (for 90 Hz it is 1000/90 = 11.11 milliseconds). You will see that your CPU frametimes are higher in AI/smoke/physics-dense scenes, and your GPU frametimes are higher depending on the objects in the scene to render.

There is not a big difference in single-core performance between the i3-12100 (turbo at 4.3 GHz) and the i7-12700 (turbo at 4.9 GHz), so you should only expect to observe that modest difference, and only when your GPU is not loaded.
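The reprojection threshold mentioned above works out the same way at any refresh rate: the budget is simply 1000 ms divided by the headset's refresh rate. A quick sketch to tabulate it (the helper name is made up; fpsVR just displays these values for you):

```python
def reprojection_threshold_ms(refresh_hz):
    """Frame-time budget in ms before a VR runtime has to drop to
    reprojection at the given refresh rate."""
    return 1000.0 / refresh_hz

# Common headset refresh rates and their per-frame budgets.
for hz in (72, 80, 90, 120):
    print(f"{hz:>3} Hz -> {reprojection_threshold_ms(hz):.2f} ms")
# At 90 Hz the budget is the 11.11 ms figure quoted above; at 120 Hz
# both CPU and GPU frametimes have to fit in a much tighter 8.33 ms.
```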
HazMatt Posted October 19, 2024

Aye, I figured that out. The reason I posted it is so that somebody doesn't make the same mistake as me and buy an i7 thinking it would make a big difference over an i3. I used the advice that people gave me and the bottleneck calculators online and found them to be lacking. In my experience in IL-2, War Thunder, DCS and AH3, the 12100 seems to be the best bang for the buck in the 12th-gen CPU series. Of course, as you said, there are many other variables between systems. I'm merely giving my experience in the event that it may help others.