JG1_Greif Posted January 14, 2020

25 minutes ago, Alonzo said: Can I just multiply up? So the above line is the same as

Yes, that works, since 0.01669 * (freq in MHz) = 0.01669 * (1000 * freq in GHz) = 16.69 * (freq in GHz).

39 minutes ago, Alonzo said: I agree that peculiarity of setup doesn't help us. For example, my actual FPS upgrades are: RAM from 3200 to 3600 (both CAS-16) gave +3.06 fps; CPU from 5.0 GHz to 5.2 GHz (unfortunately with already 3600-CL15 RAM, so not a "base case" test) gave +2.73 fps. And we have margin of error on all the measurements and probably some effects as we approach the limits of the improvements.

Agreed, error margins are inevitable. The model lets you calculate the predicted FPS to a precision of thousandths of an FPS. Seeing the predicted FPS at that precision may give the false impression that the model's predictions are accurate to the same degree, which is obviously not the case.

Regarding the confidence of the model, I included the confidence intervals per predictor in the last two columns of the model, under "99% confidence". There you can see that the real multiplication factor (this is unknown, and it is what the model tries to estimate from the sample data) per MHz of RAM frequency lies, with 99% confidence, between 0.01326 and 0.0201 according to the model; for the CPU, the 99% confidence interval is between 13.05553 and 24.63155 per GHz. Using this information, a safer interpretation of the model would be that a 400 MHz RAM upgrade should translate to an increase of between 5.3 and 8 FPS, and a 0.2 GHz CPU increase to an increase of between 2.6 and 4.9 FPS, all with 99% confidence. Your example shows that the measured CPU increase is within its range, while the RAM-speed increase falls outside its range, all the more reason to be cautious.

Obviously, as said before, the model statistically presupposes all kinds of conditions that we know do not hold (variables being completely uncorrelated; linearity of the results, i.e. a 200 MHz increase always resulting in the same FPS increase, no matter your starting point and no matter the rest of your setup). This doesn't mean the model is useless; it just shows that one needs to be cautious.

3 hours ago, chiliwili69 said: With your correlation all of us will know the fps we will achieve before buying the PC, so we can do a wiser purchase.

As said in my response to Alonzo, and as demonstrated by the data, caution is needed, though I am quite convinced the model will do OK if you don't expect a 100% perfect prediction. I estimate that a) the model will give you a (quite) educated guess, and b) some refinement of the model would be a good idea (more data!). I have forced the model to use STMark instead of CPU GHz, but the R^2 was lower and the model needed more variables to make a prediction, which in general is worse. BTW, there is no need to separate 4.002 and 4.003, as I can use the version as an input to the model. Actually, I already did this, and at least in the current setup with the current data it did not come up as (the best) predictor.
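To make the arithmetic concrete, here is a minimal sketch (the coefficient bounds are the ones quoted in the post above; the variable and function names are purely illustrative, not part of the actual spreadsheet model) of how the 99% confidence intervals translate into predicted FPS ranges for a given upgrade:

```python
# Turn the quoted 99% confidence intervals into predicted FPS gain ranges.
# Coefficient bounds are taken from the post above; everything else is illustrative.

RAM_COEF_LOW, RAM_COEF_HIGH = 0.01326, 0.0201      # FPS per MHz of RAM frequency
CPU_COEF_LOW, CPU_COEF_HIGH = 13.05553, 24.63155   # FPS per GHz of CPU frequency

def fps_gain_range(delta, coef_low, coef_high):
    """Predicted FPS gain interval for an upgrade of size `delta`."""
    return coef_low * delta, coef_high * delta

# 400 MHz RAM upgrade (e.g. 3200 -> 3600 MHz):
print(fps_gain_range(400, RAM_COEF_LOW, RAM_COEF_HIGH))    # ~ (5.3, 8.0) FPS

# 0.2 GHz CPU upgrade (e.g. 5.0 -> 5.2 GHz):
print(fps_gain_range(0.2, CPU_COEF_LOW, CPU_COEF_HIGH))    # ~ (2.6, 4.9) FPS
```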
Alonzo Posted January 14, 2020

11 minutes ago, JG1_Greif said: I included the confidence intervals per predictor in the last two columns of the model, under "99% confidence". There you can see that the real multiplication factor (this is unknown, and it is what the model tries to estimate from the sample data) per MHz of RAM frequency lies, with 99% confidence, between 0.01326 and 0.0201 according to the model; for the CPU, the 99% confidence interval is between 13.05553 and 24.63155 per GHz. Using this information, a safer interpretation of the model would be that a 400 MHz RAM upgrade should translate to an increase of between 5.3 and 8 FPS, and a 0.2 GHz CPU increase to an increase of between 2.6 and 4.9 FPS, all with 99% confidence. Your example shows that the measured CPU increase is within its range, while the RAM-speed increase falls outside its range, all the more reason to be cautious.

Ah, great. I understand all those words separately, but I had not really understood the flow-through.

Quote: Obviously, as said before, the model statistically presupposes all kinds of conditions that we know do not hold (variables being completely uncorrelated; linearity of the results, i.e. a 200 MHz increase always resulting in the same FPS increase, no matter your starting point and no matter the rest of your setup). This doesn't mean the model is useless; it just shows that one needs to be cautious.

Definitely not a useless model; in fact, a pretty good basic indicator for people to understand how the factors correlate. I don't really see anything in this thread that has changed my previous overall advice for getting good IL-2 performance, maybe just better data to back up the suggestions: "Get an Intel CPU at the highest clock speed you can reasonably manage. IL-2 likes fast memory, it makes a big difference, get some fast memory. Spend the rest on the GPU."
JG1_Greif Posted January 14, 2020

12 minutes ago, Alonzo said: Definitely not a useless model; in fact, a pretty good basic indicator for people to understand how the factors correlate. I don't really see anything in this thread that has changed my previous overall advice for getting good IL-2 performance, maybe just better data to back up the suggestions: "Get an Intel CPU at the highest clock speed you can reasonably manage. IL-2 likes fast memory, it makes a big difference, get some fast memory. Spend the rest on the GPU."

It is in general reassuring if a model makes sense given your observations so far. If the initial results of the model seem counterintuitive, your model has either identified some structure that you hadn't seen yourself or, quite likely, there is some unwanted effect in your model and/or data. In either case, further investigation is a good idea.
dburne Posted January 15, 2020

OK, I have now run the benchmark with version 4.003. All settings exactly as specified in the OP. I did not run Passmark again as I have not changed any settings in my rig.

Motherboard: EVGA Z390 Dark
CPU: i9 9900K
CPU Freq: 5.2 GHz
L3 cache: 16 MB
RAM type: DDR4
RAM size: 32 GB
RAM Freq: 3200 MHz
RAM Latency: 14
GPU: EVGA 2080 Ti FTW3
STMark: 3154 (from a previous run at the same settings)

4.003 results:
2020-01-15 06:54:36 - Il-2
Frames: 28841 - Time: 180000ms - Avg: 160.228 - Min: 110 - Max: 266
ZachariasX Posted January 15, 2020

Nice score. I guess... I'll have to put in an i9-10990XE. It supposedly goes to 5 GHz on 22(!) cores, plus HT. 380 watts, they say. Hm. My cooker can be as powerful on 10 cores. So maybe I'd have to give it some headroom.

I'll have to buy a better cooler. I would have thought that Intel had learned something from the Prescott mess. But this seems to be what sheer desperation does.
chiliwili69 Posted January 15, 2020

7 hours ago, dburne said: 4.003 results: 2020-01-15 06:54:36 - Il-2 Frames: 28841 - Time: 180000ms - Avg: 160.228 - Min: 110 - Max: 266

Thanks, Don, for re-running in version 4.003. Good for you that the new version didn't take away the 3 fps that others have lost.
chiliwili69 Posted January 15, 2020

On 1/13/2020 at 2:58 PM, Niiranen-VR said: test: Frames: 23817 - Time: 180000ms - Avg: 132.317 - Min: 83 - Max: 214 VR: Frames: 9991 - Time: 180000ms - Avg: 55.506 - Min: 44 - Max: 91

I think you said in another post that you did your test with the new RAM using a mod, so this data is affected. I will not include it in the table, then. Let me know when you run without the mod; you should be at around 150 fps.
WallterScott Posted January 15, 2020

Repeated in the new version:
**************************************************************************************
Motherboard: ASUS ROG MAXIMUS XI APEX
CPU: i9 9900K
CPU Freq: 5.4 GHz, AVX=0, HT=OFF, ring=4.9
L3 cache: 16 MB
RAM type: DDR4
RAM size: 16 GB
RAM Freq: 4400 MHz
RAM Latency: 17
GPU: 2080 Ti MSI, max. core 2130 MHz
STMark: 3270
2020-01-16 00:30:50 - Il-2
Frames: 34904 - Time: 194813ms - Avg: 179.167 - Min: 120 - Max: 287
BOX 4.003
**************************************************************************************
And Max: 287 looks like the ceiling; in all tests it is always Max: 287.
NiiranenVR Posted January 15, 2020

Chiliwili, yes, sorry, I forgot I have played a little in the meantime. I will make a new, honest test later.
Alonzo Posted January 15, 2020

1 hour ago, WallterScott said: CPU: i9 9900K CPU Freq: 5.4 GHz, AVX=0, HT=OFF, ring=4.9 RAM Freq: 4400 MHz RAM Latency: 17 GPU: 2080 Ti MSI, max. core 2130 MHz Frames: 34904 - Time: 194813ms - Avg: 179.167 - Min: 120 - Max: 287

DAMN that is some hot PC sexiness. DAMN!
dburne Posted January 15, 2020

Yeah, that is fantastic, both the CPU clock and the ring ratio! How are you keeping that CPU cool?
WallterScott Posted January 16, 2020

5 hours ago, dburne said: Yeah, that is fantastic, both the CPU clock and the ring ratio! How are you keeping that CPU cool?

Two water loops (CPU and GPU), but the main thing is a pre-tested (binned) CPU. Not everyone can do that.
chiliwili69 Posted January 16, 2020

11 hours ago, WallterScott said: CPU Freq: 5.4 GHz, AVX=0, HT=OFF, ring=4.9

WOW! 5.4 GHz is a nice frequency. Thanks for pushing this to the top and re-running the test!!
WallterScott Posted January 16, 2020

20 hours ago, ZachariasX said: I'll have to buy a better cooler.

Here's something like this that I built at home.
apollon01 Posted January 16, 2020

On 12/29/2019 at 11:27 AM, chiliwili69 said: I have used the fpsVR tool stats in a second run. I think it is mainly CPU limited. **************************** Quote: Now the fpsVR stats improved; the CPU has more time below 12.5 ms and allows the GPU to do its job more effectively.

Chili, you have me lost here with these charts and your comments. Why do you say that the first run appears to be CPU limited? Intuitively, I would say that it is more GPU limited, since the blue GPU chart is shifted more to the right, and therefore the GPU takes more time to render a frame. Am I missing something obvious here, or am I misinterpreting the charts? Thanks.
ZachariasX Posted January 16, 2020

19 hours ago, WallterScott said: CPU Freq: 5.4 GHz, AVX=0, HT=OFF, ring=4.9
chiliwili69 Posted January 16, 2020

1 hour ago, apollon01 said: Why do you say that the first run appears to be CPU limited? Intuitively, I would say that it is more GPU limited, since the blue GPU chart is shifted more to the right, and therefore the GPU takes more time to render a frame.

Thank you for asking this, since I also want feedback on how other people think IL-2 VR and fpsVR report frametimes for the CPU and GPU.

As I understand it, for every cycle (80 cycles per second on my Index): the CPU first calculates, for that instant of time, the flight model, damage model, the AI, etc., and based on all this produces a geometry (simply where all the 3D objects are situated in space). After that, and only after, the GPU takes that geometry and renders the scene (think of rendering as taking a picture of the scene, i.e. assigning a colour to every pixel based on lights, geometry, textures, etc.).

So, based on this, the CPU frametimes should always be below the GPU frametimes. This is what I have always seen in the fpsVR frametimes. It means that the frametimes shown for the GPU would include the time taken by the CPU plus the time taken by the GPU. This is my interpretation. If it is wrong, it means there could be cases where the CPU frametimes are above the GPU frametimes in IL-2 VR with fpsVR. (If anyone has seen this, please let me know.)

In the first chart, the CPU shows its main peak around 11 ms, and the GPU (which needs about 4 ms) ends up above 12.5 ms most of the time. But in the second chart, since the CPU's main peak has dropped to around 8.5 ms, the GPU is now able to complete the frames below 12.5 ms. So it was not a GPU problem; it was a CPU problem.
apollon01 Posted January 16, 2020

29 minutes ago, chiliwili69 said: It means that the frametimes shown for the GPU would include the time taken by the CPU plus the time taken by the GPU.

OK... I am not sure about that statement. The correct interpretation makes all the difference. Maybe someone more knowledgeable could chip in?

Btw, recently I supported a feature request that would make fpsVR log the CPU and GPU frame times as well, so the result would not be just the distributions. For a given frame / time unit we would see the CPU and GPU frame times and hopefully also the CPU and GPU usage. Sergio, the developer, acknowledged it is on his to-do list. https://steamcommunity.com/app/908520/discussions/0/1750142176477291142/
chiliwili69 Posted January 16, 2020

25 minutes ago, apollon01 said: OK... I am not sure about that statement. The correct interpretation makes all the difference.

What I am saying is that the GPU frametimes are directly shifted by the CPU frametimes. Taking the second chart I made:

1. The first two peaks (I think it is the moment of the loop, or the hangar, so clean sky) are separated by just 3 ms. These 3 ms are the time the GPU needs to do its job (render the scene), i.e. the GPU frametimes.
2. The big peaks are where the track spends most of its time. Again the GPU adds about 3 ms to the CPU frametimes.
3. Once the CPU exceeds the 12.5 ms limit, the added delay is 0 ms, since the GPU has a full cycle (12.5 ms) to do its job.
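To illustrate the arithmetic behind this reading of the charts (a sketch only, assuming the serial CPU-then-GPU interpretation described in the posts above, which is questioned further down the thread), the 80 Hz refresh of the Index gives a 12.5 ms budget per frame:

```python
# Frame-budget arithmetic for an 80 Hz headset (12.5 ms per frame), under the
# serial CPU-then-GPU interpretation described above. The millisecond values
# are the approximate peaks read off the charts in the posts; purely illustrative.

REFRESH_HZ = 80
BUDGET_MS = 1000 / REFRESH_HZ    # 12.5 ms per frame

def fits_budget(cpu_ms, gpu_ms, budget_ms=BUDGET_MS):
    """True if CPU work plus GPU work completes within one refresh interval."""
    return cpu_ms + gpu_ms <= budget_ms

print(fits_budget(11.0, 4.0))   # first run:  15.0 ms -> False, frames are missed
print(fits_budget(8.5, 3.0))    # second run: 11.5 ms -> True, frames fit the budget
```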
jarg1 Posted January 17, 2020

Motherboard: ASUSTeK ROG MAXIMUS XI HERO (WI-FI)
CPU: Intel Core i9 9900KS
CPU Freq: 5.4 GHz
L3 cache: 16 MB
Cores: 8 (number of active physical cores)
HT: Off (HyperThreading. If Cores = Threads then HT is Off)
RAM type: DDR4
RAM size: 32 GB
NB Freq: 4300 MHz
RAM Freq: 4000 MHz (this is 2x DRAM freq if dual channel)
RAM Latency: 17 (CAS Latency CL)
GPU: 2080Ti
STMark: 3274
Frames: 28466 - Time: 180000ms - Avg: 158.144 - Min: 110 - Max: 246
chiliwili69 Posted January 17, 2020

4 hours ago, jarg1 said: CPU: Intel Core i9 9900KS CPU Freq: 5.4 GHz

Thank you very much for bringing the first 9900KS results!! The 5.4 GHz frequency is impressive as well.

Now looking at the details, your PC is very similar to WallterScott's. The only differences are:
- Mobo: Hero vs Apex
- RAM freq: 4000 vs 4400
- NB freq: 4300 vs 4900
And there is a 20 fps difference, so it is not explained by those differences. Just three questions:
- Is the AVX offset = 0?
- Is the 5.4 GHz maintained during the run of the benchmark track? (To be sure there is no throttling.)
- Did you switch off mods?
NiiranenVR Posted January 17, 2020

Please, amigo, I'll be in the crowd again, and this time it's a new, honest test without the mod, as described on page 1. ---- Amar Halshug

Motherboard: ROG MAXIMUS CODE
CPU: i5 9600K
CPU Freq: 5.2 GHz
L3 cache: 9 MB
Cores: 6
HT: Off (not possible to toggle in BIOS)
RAM type: DDR4
RAM size: 16 GB (dual channel)
NB Freq: 4300 MHz
RAM Freq: 2x1800 = 3600 MHz (this is 2x DRAM freq if dual channel)
RAM Latency: 15
GPU: 2080 Super
STMark: 3141
Frames: 24613 - Time: 180000ms - Avg: 136.739 - Min: 86 - Max: 219

Thanks.
dburne Posted January 17, 2020

3 hours ago, chiliwili69 said: Thank you very much for bringing the first 9900KS results!! The 5.4 GHz frequency is impressive as well. Now looking at the details, your PC is very similar to WallterScott's. The only differences are: Mobo: Hero vs Apex; RAM freq: 4000 vs 4400; NB freq: 4300 vs 4900. And there is a 20 fps difference, so it is not explained by those differences.

Most likely the NB frequency.
ZachariasX Posted January 17, 2020

2 hours ago, Niiranen-VR said: Frames: 24613 - Time: 180000ms - Avg: 136.739 - Min: 86 - Max: 219

I get about that at 4.6 GHz or so. Are you sure you have not set some further graphics tweaks, for instance in the NVIDIA control panel? You should certainly average about 150 FPS or so. Can you check your CPU temperature while running the track? HWiNFO will tell you the actual frequencies and temps.
jarg1 Posted January 17, 2020

6 hours ago, chiliwili69 said: Just three questions:
- Is the AVX offset = 0? Yes.
- Is the 5.4 GHz maintained during the run of the benchmark track? (To be sure there is no throttling.) Yes.
- Did you switch off mods? No mods.

See the answers above. I noticed this also. My best guess is that the main difference might be related to the RTX 2080 Ti. Mine is an Nvidia FE with a water block. The memory has a 1000 MHz overclock, but I am not able to get a great overclock on the GPU.
apollon01 Posted January 17, 2020

17 hours ago, chiliwili69 said: What I am saying is that the GPU frametimes are directly shifted by the CPU frametimes. Taking the second chart I made: 1. The first two peaks (I think it is the moment of the loop, or the hangar, so clean sky) are separated by just 3 ms. These 3 ms are the time the GPU needs to do its job (render the scene), i.e. the GPU frametimes. 2. The big peaks are where the track spends most of its time. Again the GPU adds about 3 ms to the CPU frametimes. 3. Once the CPU exceeds the 12.5 ms limit, the added delay is 0 ms, since the GPU has a full cycle (12.5 ms) to do its job.

FYI Chili, I just ran Prepar3D with fpsVR, and pretty much all the time the CPU frame time was around 20+ ms while the GPU frame time was around 9 ms. So to me it seems that the GPU frame time does not include the CPU frame time.
NiiranenVR Posted January 17, 2020

Zach, yes, during the test it's only the power maximum setting in NVIDIA. Normally I do some more in NVIDIA... I read this and that in blogs, so I try this and that, but I'm not at all sure what really helps... But I'm doing something. And I made the test again.
ZachariasX Posted January 17, 2020

27 minutes ago, Niiranen-VR said: Yes, during the test it's only the power maximum setting in NVIDIA.

You have global settings in the NVIDIA settings plus program-specific settings. Did you make adjustments in the global settings? I had that once before I ran a track, and of course it impacted my FPS. Also, you might want to have GPU-Z running to log your GPU activity. With Chili's settings, your card should not be used more than 50% or so.

But I see that you *maybe* have not synced your cores. You need to set your Turbo Boost in the BIOS as "Sync All Cores" (or whatever your mobo maker calls it). You have to force all cores to clock up (and down) in the same way. In your case, I can see that the minimum frequency is not equal. If I force the cores to equal frequencies, I also get similar minimum frequencies. There might be something related to that.
NiiranenVR Posted January 17, 2020

Zach, yes, in NVIDIA it is only BOS I change... And oooooo yes, the cores are not in sync, and the game is very much single-core, as told, but cores 2 and 5 are up and 'ring the bell' (I learn more and more, good for my 14-year-old son who has a similar PC).
ZachariasX Posted January 17, 2020

2 minutes ago, Niiranen-VR said: The cores are not in sync.

If they are not, then in my case the game runs at the slowest Turbo speed specified for all-core use. In your case, probably 4.7 GHz.
jarg1 Posted January 17, 2020

6 hours ago, dburne said: Most likely the NB frequency.

I may try a memory overclock this evening and retest.
NiiranenVR Posted January 17, 2020

Zach, well, I tried syncing the cores and it was 'not good'... my CPU went down to 4600 MHz: Avg: 128.094 - Min: 75 - Max: 212. So now I'm back again.
WokeUpDead Posted January 18, 2020

Motherboard: ASUSTeK Computer Inc. G55VW
CPU: i7 3630QM
CPU Freq: 1.2 GHz
L3 cache: 6 MB, 12-way
Cores: 4, threads: 8
RAM type: DDR3
RAM size: 8 GB
RAM Freq: n/a
RAM Latency: 16
DRAM Frequency: 798 MHz
GPU: NVIDIA GeForce GTX 660M
STMark: 1777
FRAPS TXT: Frames: 1851 - Time: 180000ms - Avg: 10.283 - Min: 2 - Max: 23
BOX 4.003
ZachariasX Posted January 18, 2020

7 hours ago, Niiranen-VR said: Zach, well, I tried syncing the cores and it was 'not good'... my CPU went down to 4600

The temperatures seem to be OK. Does it say in your BIOS that, after syncing the cores, the desired maximum Turbo Boost is still at 5.4 GHz? Because it now looks like your cores got synced to what was the lowest boost selected for all-core usage, and 4.6 GHz is just about the ballpark for your initial scores.
jarg1 Posted January 18, 2020

23 hours ago, chiliwili69 said: Thank you very much for bringing the first 9900KS results!! The 5.4 GHz frequency is impressive as well. Now looking at the details, your PC is very similar to WallterScott's. The only differences are: Mobo: Hero vs Apex; RAM freq: 4000 vs 4400; NB freq: 4300 vs 4900. And there is a 20 fps difference, so it is not explained by those differences. Just three questions: Is the AVX offset = 0? Is the 5.4 GHz maintained during the run of the benchmark track? Did you switch off mods?

I spent a couple of hours trying to work this out, with some success. First, I happened to notice that my graphics interface was only running at x8 even though the card was in a PCIe x16 slot. I was able to move the card to another slot and it is now at x16. I removed two sticks of RAM and was able to bump the speed up a bit to 4100 MHz. Also, the NB frequency is now 4900 MHz (my original number may have been the result of not checking while overclocked). I think with a bit of tweaking I might be able to improve the numbers further. Anyway, below is the revised data:

Motherboard: ASUSTeK ROG MAXIMUS XI HERO (WI-FI)
CPU: Intel Core i9 9900KS
CPU Freq: 5.4 GHz
L3 cache: 16 MB
Cores: 8 (number of active physical cores)
HT: Off (HyperThreading. If Cores = Threads then HT is Off)
RAM type: DDR4
RAM size: 16 GB
NB Freq: 4900 MHz
RAM Freq: 4100 MHz (this is 2x DRAM freq if dual channel)
RAM Latency: 17 (CAS Latency CL)
GPU: 2080Ti
STMark: 3269
2020-01-17 22:05:53 - Il-2
Frames: 30753 - Time: 180000ms - Avg: 170.850 - Min: 118 - Max: 273
chiliwili69 Posted January 18, 2020

18 hours ago, apollon01 said: FYI Chili, I just ran Prepar3D with fpsVR, and pretty much all the time the CPU frame time was around 20+ ms while the GPU frame time was around 9 ms. So to me it seems that the GPU frame time does not include the CPU frame time.

OK, thanks for letting me know; I wanted to see exactly this. So my assumption must be wrong. I could then configure IL-2 VR in such a way that the CPU frametimes are above the GPU frametimes, maybe with a very low SS but a high CPU load. I will try.
NiiranenVR Posted January 18, 2020

Zach, no more spamming this page with BIOS settings and what to do, so I will start a new thread about the BIOS and what to adjust to get the machine running better. I go into the BIOS, and many times I really need 'for Dummies' help... so...
chiliwili69 Posted January 18, 2020

17 hours ago, ZachariasX said: But I see that you *maybe* have not synced your cores. You need to set your Turbo Boost in the BIOS as "Sync All Cores" (or whatever your mobo maker calls it). You have to force all cores to clock up (and down) in the same way. In your case, I can see that the minimum frequency is not equal. If I force the cores to equal frequencies, I also get similar minimum frequencies. There might be something related to that.

Well spotted! Perhaps I should indicate in the instructions that all cores must run at the same manually fixed frequency (better than letting the Turbo Boost technology decide the maximum frequency).

4 hours ago, WokeUpDead said: i7 3630QM

Thanks for running the test. I see your PC is a laptop. Since laptops are very different from desktop PCs (in terms of processors, overclocking, cooling capacity, graphics card), I think we should not mix laptop data with desktop data, so I will put it at the bottom of the page.

2 hours ago, jarg1 said: First, I happened to notice that my graphics interface was only running at x8 even though the card was in a PCIe x16 slot. I was able to move the card to another slot and it is now at x16. I removed two sticks of RAM and was able to bump the speed up a bit to 4100 MHz. Also, the NB frequency is now 4900 MHz (my original number may have been the result of not checking while overclocked).

The PCIe thing is strange; good that you sorted it out. Regarding RAM, it seems that less RAM (16 GB vs 32 GB) makes it easier to reach higher RAM frequencies. Since IL-2 doesn't need more than 16 GB, that is a good lesson learnt. With these two things I see you got a big bump in fps, exactly as predicted by Greif's correlation! Nice for you.
haltux Posted January 18, 2020

On 1/16/2020 at 8:44 PM, chiliwili69 said: As I understand it, for every cycle (80 cycles per second on my Index): the CPU first calculates, for that instant of time, the flight model, damage model, the AI, etc., and based on all this produces a geometry (simply where all the 3D objects are situated in space). After that, and only after, the GPU takes that geometry and renders the scene (think of rendering as taking a picture of the scene, i.e. assigning a colour to every pixel based on lights, geometry, textures, etc.). So, based on this, the CPU frametimes should always be below the GPU frametimes. This is what I have always seen in the fpsVR frametimes. It means that the frametimes shown for the GPU would include the time taken by the CPU plus the time taken by the GPU. This is my interpretation. If it is wrong, it means there could be cases where the CPU frametimes are above the GPU frametimes in IL-2 VR with fpsVR. (If anyone has seen this, please let me know.)

I am not an expert, but I am pretty sure it does not work that way. The CPU has a complete frame time (12 ms) to finish its computation, and then passes its output to the GPU, which processes it during the next frame and also has a complete frame time. They work in parallel: the CPU works on frame t while the GPU works on frame t-1. That involves a one-frame latency, which is fine. Actually, you can even set more than one "pre-rendered frame" provided by the CPU to the GPU in order to smooth the process, at the cost of higher latency.

If the GPU waited until the CPU had done its job before doing its own, first, that would be a silly waste of resources (the GPU would spend a lot of time idling), and second, there would be no clear notion of a CPU or GPU bottleneck, because a better GPU would compensate for a weaker CPU, which is clearly not the case in IL-2.
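A minimal sketch of the consequence of that pipelining (assuming the parallel scheme described above; the millisecond values are the approximate frametimes quoted earlier in the thread and are purely illustrative): with the CPU on frame t and the GPU on frame t-1, the achievable frame time is limited by the slower of the two stages, not by their sum.

```python
# Contrast of the two readings discussed above. Under the serial reading the GPU
# waits for the CPU each frame; under the pipelined reading (CPU on frame t,
# GPU on frame t-1) the frame time is set by the slower stage. Values are
# approximate numbers quoted in the thread, used only for illustration.

def serial_frame_time(cpu_ms, gpu_ms):
    # GPU starts only after the CPU has finished the same frame
    return cpu_ms + gpu_ms

def pipelined_frame_time(cpu_ms, gpu_ms):
    # CPU and GPU work on consecutive frames at the same time
    return max(cpu_ms, gpu_ms)

print(serial_frame_time(8.5, 3.0))       # 11.5 ms under the serial reading
print(pipelined_frame_time(8.5, 3.0))    # IL-2 example: 8.5 ms -> CPU-bound
print(pipelined_frame_time(20.0, 9.0))   # Prepar3D example: 20 ms -> clearly CPU-bound
```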
chiliwili69 Posted January 18, 2020

1 hour ago, haltux said: They work in parallel: the CPU works on frame t while the GPU works on frame t-1.

Thanks for explaining that; it makes a lot of sense. Then it should be possible to reproduce these two extreme cases in IL-2 VR with my Index:

- CPU bottleneck: shadows High, mirrors complex, but very low resolution (SS well below 100%). CPU frametimes close to 12.5 ms (or above) and GPU frametimes below 10 ms.
- GPU bottleneck: shadows off, mirrors off, but very high resolution (SS well above 100%). CPU frametimes below 10 ms and GPU frametimes close to 12.5 ms (or above).

I have to try that with fpsVR.