chiliwili69

Benchmark for CPU/RAM performance: Remagen 4.002 to 4.004


25 minutes ago, Alonzo said:

Can I just multiply up? So the above line is the same as 

 Yes, that works: 0.01669 × (MHz) = 0.01669 × (1000 × GHz) = 16.69 × (GHz)
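A one-line sanity check of that conversion; the 0.01669 per-MHz coefficient is the one from the model above:

```python
# The model's RAM coefficient, expressed as extra FPS per MHz of RAM frequency.
per_mhz = 0.01669

# "Multiplying up": 1 GHz = 1000 MHz, so the same coefficient per GHz is 1000x larger.
per_ghz = per_mhz * 1000

print(round(per_ghz, 2))  # 16.69
```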

 

39 minutes ago, Alonzo said:

I agree that peculiarity of setup doesn't help us. For example, my actual FPS upgrades are:

  • RAM from 3200 to 3600 (both CAS-16) gave +3.06 fps
  • CPU from 5.0ghz to 5.2ghz (unfortunately with already 3600-CL15 RAM, so not "base case" test) gave +2.73 fps

And we have margin of error on all the measurements and probably some effects as we approach the limits of the improvements.

Agreed. It is inevitable that there are error margins. The model lets you calculate the predicted FPS to a precision of 1/1000th of an FPS. Seeing predictions at that precision may give the false impression that the model itself is accurate to the same degree; this is obviously not the case.

 

Regarding the confidence of the model, I included the confidence intervals per predictor in the last two columns of the model, under "99% confidence". There you can see that the real multiplication factor (this is unknown; it is what the model tries to estimate from the sample data) per MHz of RAM is, with 99% confidence, between 0.01326 and 0.0201 according to the model. For the CPU, the 99% confidence interval is between 13.05553 and 24.63155 per GHz.

 

Using this information, a safer interpretation of the model would be that a RAM upgrade of 400 MHz should translate to an increase of between 5.3 and 8 FPS, and a CPU increase of 0.2 GHz to an increase of between 2.6 and 4.9 FPS, all with 99% confidence. Your example shows that the estimated CPU increase is within its range while the RAM one is outside its range; all the more reason to be cautious.
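The ranges above can be reproduced directly from the quoted interval endpoints; a minimal sketch:

```python
# 99% confidence intervals for the model coefficients quoted above.
ram_ci = (0.01326, 0.0201)     # extra FPS per MHz of RAM frequency
cpu_ci = (13.05553, 24.63155)  # extra FPS per GHz of CPU frequency

def fps_gain_range(delta, ci):
    """Predicted FPS gain interval for an upgrade of `delta` units."""
    lo, hi = ci
    return (delta * lo, delta * hi)

ram_gain = fps_gain_range(400, ram_ci)   # +400 MHz RAM
cpu_gain = fps_gain_range(0.2, cpu_ci)   # +0.2 GHz CPU

print(ram_gain)  # roughly (5.3, 8.0) FPS
print(cpu_gain)  # roughly (2.6, 4.9) FPS
```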

 

Obviously, as said before, the model statistically presupposes all kinds of conditions that we know do not hold (variables being completely uncorrelated; linearity of the results, i.e. a 200 MHz increase always yielding the same FPS increase, no matter your starting point and no matter the rest of your setup). This doesn't mean the model is useless; it just shows that one needs to be cautious.

 

3 hours ago, chiliwili69 said:

With your correlation all of us will know the fps we will achieve before buying the PC, so we can do a wiser purchase.

As said in my response to Alonzo, and as demonstrated by the data, caution is needed, though I am quite convinced the model will do OK if you don't expect a 100% perfect prediction. I estimate that a) the model will give you a (quite) educated guess, and b) some refinement of the model would be a good idea (more data!).

 

I have forced the model to use STMark instead of CPU GHz, but the R^2 was lower and the model needed more variables to make a prediction, which in general is worse. BTW, there is no need to separate 4.002 and 4.003, as I can use the version as an input to the model. In fact I already did, and at least with the current setup and data it did not come out as (the best) predictor.

11 minutes ago, JG1_G_Greif said:

I included the confidence intervals per predictor in the last two columns of the model, under "99% confidence". There you can see that the real multiplication factor (this is unknown; it is what the model tries to estimate from the sample data) per MHz of RAM is, with 99% confidence, between 0.01326 and 0.0201 according to the model. For the CPU, the 99% confidence interval is between 13.05553 and 24.63155 per GHz.

 

Using this information, a safer interpretation of the model would be that a RAM upgrade of 400 MHz should translate to an increase of between 5.3 and 8 FPS, and a CPU increase of 0.2 GHz to an increase of between 2.6 and 4.9 FPS, all with 99% confidence. Your example shows that the estimated CPU increase is within its range while the RAM one is outside its range; all the more reason to be cautious.

 

Ah, great. I understand all those words separately but I had not really understood the flow-through. 

 

Quote

Obviously, as said before, statistically the model presupposes all kinds of conditions that we know are not the case (variables being completely uncorrelated, linearity of the results, i.e. a 200 MHz increase will always result in the same FPS increase, no matter your starting point and no matter the rest of your setup). This doesn't mean that the model is useless, it just shows that one needs to be cautious.

 

Definitely not a useless model, in fact a pretty good basic indicator for people to understand how the factors correlate. I don't really see anything in this thread that has changed my previous overall advice for getting good IL2 performance, maybe just better data to back up the suggestions. "Get an Intel CPU at the highest clock speed you can reasonably manage. IL2 likes fast memory, it makes a big difference, get some fast memory. Spend the rest on the GPU."

12 minutes ago, Alonzo said:

Definitely not a useless model, in fact a pretty good basic indicator for people to understand how the factors correlate. I don't really see anything in this thread that has changed my previous overall advice for getting good IL2 performance, maybe just better data to back up the suggestions. "Get an Intel CPU at the highest clock speed you can reasonably manage. IL2 likes fast memory, it makes a big difference, get some fast memory. Spend the rest on the GPU."

It is in general reassuring if a model makes sense given your observations so far. If the initial results of the model seem counterintuitive, your model has either identified some structure that you hadn't seen yourself or, quite likely, there is some unwanted effect in your model and/or data. In all cases, further investigation is a good idea.


Ok I have run the benchmark now with version 4.003.

All setting exactly as specified in the OP.

I did not run Passmark again as I have not changed any settings in my rig.

 

Motherboard: EVGA Z390 Dark
 CPU:                 i9 9900k
 CPU Freq:        5.2 GHz
 L3 cache:        16 MB
 RAM type:        DDR4
 RAM size:        32 GB
 RAM Freq:        3200 MHz
 RAM Latency: 14
 GPU:                 EVGA 2080 Ti FTW3
 STMark:           3154 - from previous run at same settings.

 

4.003 results:

2020-01-15 06:54:36 - Il-2
Frames: 28841 - Time: 180000ms - Avg: 160.228 - Min: 110 - Max: 266
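As a side note, the FRAPS average is just frames divided by elapsed seconds, so the log line can be sanity-checked:

```python
# FRAPS reports total Frames and Time (in ms); the average is frames per second.
frames = 28841
time_ms = 180000

avg_fps = frames / (time_ms / 1000)
print(round(avg_fps, 3))  # 160.228, matching the log line
```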

 


Nice score. I guess... I'll have to put in an i9-10990XE. It supposedly runs at 5 GHz on 22(!) cores, plus HT. 380 watts, they say. Hm. My cooker can be just as power-hungry on 10 cores, so maybe I'd have to give it some headroom.

 

I'll have to buy a better cooler:

[image]

 

I would have thought that Intel would have learned something from the Prescott mess. But this seems to be what sheer desperation does.

7 hours ago, dburne said:

4.003 results:

2020-01-15 06:54:36 - Il-2
Frames: 28841 - Time: 180000ms - Avg: 160.228 - Min: 110 - Max: 266

 

Thanks, Don, for re-running in version 4.003. Good for you that the new version didn't steal the ~3 fps others have seen.

 

On 1/13/2020 at 2:58 PM, Niiranen-VR said:

test       :     Frames: 23817 - Time: 180000ms - Avg: 132.317 - Min: 83 - Max: 214

VR         :     Frames: 9991 -   Time: 180000ms - Avg: 55.506 - Min: 44 - Max: 91

 

I think you said in another post that you ran your test of the new RAM with a mod enabled, so this data is affected.

I will not include it in the table, then. Let me know when you run without the mod; you should be at around 150 fps.


Repeated in the new version

**************************************************************************************

Motherboard: ASUS ROG MAXIMUS XI APEX
 CPU:                 i9 9900k
 CPU Freq:        5.4 Ghz  AVX=0, HT=OFF, ring=4.9
 L3 cache:        16 MB
 RAM type:        DDR4
 RAM size:        16 GB
 RAM Freq:        4400 MHz
 RAM Latency:         17
 GPU:                 2080Ti MSI, max. core 2130Mhz
 STMark:           3270
 
2020-01-16 00:30:50 - Il-2
Frames: 34904 - Time: 194813ms - Avg: 179.167 - Min: 120 - Max: 287

 

BOX  4.003

**************************************************************************************

And Max: 287 looks like the ceiling; in all tests, always Max: 287.


Chiliwili

Yes, sorry, I forgot I had played a little in the meantime 🤦‍♂️

I'll do a new, honest test later 🙏

 

1 hour ago, WallterScott said:

 CPU:                 i9 9900k
 CPU Freq:        5.4 Ghz  AVX=0, HT=OFF, ring=4.9
 RAM Freq:        4400 MHz
 RAM Latency:         17
 GPU:                 2080Ti MSI, max. core 2130Mhz
Frames: 34904 - Time: 194813ms - Avg: 179.167 - Min: 120 - Max: 287

 

DAMN that is some hot PC sexiness. DAMN!


Yeah, that is fantastic, both the CPU clock and the ring ratio!

How are you keeping that CPU cool?

5 hours ago, dburne said:

Yeah, that is fantastic, both the CPU clock and the ring ratio!

How are you keeping that CPU cool?

Two water loops (CPU and GPU), but the main thing is the pre-tested CPU. Not everyone can do that.

 

11 hours ago, WallterScott said:

CPU Freq:        5.4 Ghz  AVX=0, HT=OFF, ring=4.9

WOW! 5.4 GHz, nice freq.

Thanks for pushing this to the top and re-running the test!!

20 hours ago, ZachariasX said:

I'll have to buy a better cooler:

[image]

 

I built something like this at home.

On 12/29/2019 at 11:27 AM, chiliwili69 said:

I have used the fpsVR tool stats in a second run. I think it is mainly CPU limited.

[fpsVR frametimes chart]

 

[fpsVR GPU/CPU chart]

 

****************************

Quote

Now the fpsVR stats improved: the CPU is below 12.5 ms more of the time, which allows the GPU to do its job more effectively:

 

[fpsVR frametimes chart]

 

[fpsVR GPU/CPU chart]

 

Chili, you have me lost here with these charts and your comments 🤔

 

Why do you say the first run appears CPU limited? Intuitively, I would say it is more GPU limited, since the blue GPU chart is shifted further to the right; the GPU therefore takes more time to render a frame.

 

Am I missing something obvious here, or am I misinterpreting the charts?

 

Thanks.

19 hours ago, WallterScott said:

CPU Freq:        5.4 Ghz  AVX=0, HT=OFF, ring=4.9

🤯

1 hour ago, apollon01 said:

Why do you say the first run appears CPU limited? Intuitively, I would say it is more GPU limited, since the blue GPU chart is shifted further to the right; the GPU therefore takes more time to render a frame.

 

Thank you for asking this, since I also want feedback on how IL-2 VR and fpsVR report frametimes for the CPU and GPU.

 

As I understand it, for every cycle (80 cycles per second on my Index):

 

The CPU first calculates, for this instant in time, the flight model, damage model, the AI, etc., and from all that produces the geometry (where all 3D objects are situated in space).

After that, and only after, the GPU takes that geometry and renders the scene (think of rendering as taking a picture of the scene, i.e. assigning a color to every pixel based on lights, geometry, textures, etc.).

 

So, based on this, the CPU frametimes should always be below the GPU frametimes. This is what I have always seen in fpsVR.

It means that the frametimes shown for the GPU include the time taken by the CPU plus the time taken by the GPU.

 

This is my interpretation. If it is wrong, it means there could be cases where the CPU frametimes are above the GPU frametimes in IL-2 VR with fpsVR. (If anyone has seen this, please let me know.)

 

In the first chart, the CPU shows its main peak around 11 ms, and the GPU (which needs about 4 ms) ends up above 12.5 ms most of the time.

 

But in the second chart, since the CPU's main peak has dropped to around 8.5 ms, the GPU is now able to complete frames below 12.5 ms.

So it was not a GPU problem; it was a CPU problem.

29 minutes ago, chiliwili69 said:

It means that the frametimes shown for the GPU include the time taken by the CPU plus the time taken by the GPU.

OK... I am not sure about this statement above. The correct interpretation makes all the difference 🤔

 

Maybe someone more knowledgeable could chip in?

 

Btw, I recently supported a feature request that would make fpsVR log the CPU and GPU frame times as well, so the result would not be just the distributions. For a given frame / time unit we would see the CPU and GPU frame times and hopefully also CPU and GPU usage. Sergio, the developer, acknowledged it is on his to-do list.

 

https://steamcommunity.com/app/908520/discussions/0/1750142176477291142/

 

25 minutes ago, apollon01 said:

OK... I am not sure about this statement above. The correct interpretation makes all the difference

 

What I am saying is that the GPU frametimes are directly shifted by the CPU frametimes.

Taking the second chart I made:

1.- The first two peaks (I think it is the moment of the loop or the hangar, so clean sky) are separated by just 3 ms. These 3 ms are the time the GPU needs to do its job (render the scene): the pure GPU frametime.

2.- The big peaks are where the track spends most of its time. Again, the GPU adds about 3 ms to the CPU frametimes.

3.- Once the CPU exceeds the 12.5 ms limit, the added delay is 0 ms, since the GPU has a full cycle (12.5 ms) to do its job.

[fpsVR frametimes chart]
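The interpretation above can be condensed into a small formula. This is only a sketch of the hypothesis being proposed here (the 12.5 ms budget is one cycle of an 80 Hz Index), not confirmed fpsVR behavior:

```python
FRAME_BUDGET_MS = 12.5  # one cycle of an 80 Hz Index headset

def reported_gpu_ms(cpu_ms, gpu_render_ms, budget=FRAME_BUDGET_MS):
    """Hypothesis: the GPU figure shown by fpsVR trails the CPU figure
    by the pure render time while the CPU stays within the frame budget;
    once the CPU blows the budget, the GPU has a whole cycle to itself
    and adds no delay."""
    if cpu_ms <= budget:
        return cpu_ms + gpu_render_ms
    return gpu_render_ms

print(reported_gpu_ms(8.5, 3.0))   # 11.5 -> frame still completes within 12.5 ms
print(reported_gpu_ms(13.0, 3.0))  # 3.0  -> CPU already over budget
```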


Motherboard: ASUSTek ROG MAXIMUS XI HERO (WI-FI)
 CPU:                 Intel Core i9 9900KS
 CPU Freq:        5.4 Ghz
 L3 cache:        16 MB
 Cores:               8  (number of active physical Cores)
 HT:                    Off (HyperThreading. If Cores=Threads then HT is Off)
 RAM type:        DDR4
 RAM size:        32 GB
 NB Freq:           4300 MHz
 RAM Freq:        4000 MHz (this is 2xDRAM freq if Dual channel)
 RAM Latency:  17  (CAS Latency CL)
 GPU:                  2080Ti
 STMark             3274
 Frames: 28466 - Time: 180000ms - Avg: 158.144 - Min: 110 - Max: 246

4 hours ago, jarg1 said:

 CPU:                 Intel Core i9 9900KS
 CPU Freq:        5.4 Ghz

 

Thank you very much for bringing the first 9900KS results!! The 5.4 GHz freq is impressive as well.

 

Now, looking at the details, your PC is very similar to Walterscott's. The only differences are:

 

-Mobo: Hero vs Apex

-RAM freq: 4000 vs 4400

-NB freq: 4300 vs 4900

 

And there is a 20 fps difference, so it is not explained by those differences.

 

Just three questions:

- Is the AVX offset=0?

- Is the 5.4 GHz maintained during the run of the benchmark track? (to be sure there is no throttling)

- Did you switch off mods?

 


Por favor amigo - I'll be in the crowd again,

and this time it's a new, honest test without the mod, as page 1 asks ---- Amar Halshug

 

Motherboard:  ROG MAXIMUS CODE
 CPU:                 i5 9600k
 CPU Freq:       5.2 Ghz
 L3 cache:        9 MB

 Cores:             6  

 HT:                    Off (not possible to toggle in BIOS)
 RAM type:        DDR4
 RAM size:        16 GB ( dual )

 NB Freq:          4300 MHz
 RAM Freq:       2x1800 =  3600  MHz (this is 2xDRAM freq if Dual channel)
 RAM Latency:  15  
 GPU:                 2080super
 STMark:          3141

 

Frames: 24613 - Time: 180000ms - Avg: 136.739 - Min: 86 - Max: 219

 

gracias

3 hours ago, chiliwili69 said:

 

Thank you very much for bringing the first 9900KS results!! The 5.4 GHz freq is impressive as well.

 

Now, looking at the details, your PC is very similar to Walterscott's. The only differences are:

 

-Mobo: Hero vs Apex

-RAM freq: 4000 vs 4400

-NB freq: 4300 vs 4900

 

And there is a 20 fps difference, so it is not explained by those differences.

 

 

 

 

Most likely the NB frequency.

2 hours ago, Niiranen-VR said:

Frames: 24613 - Time: 180000ms - Avg: 136.739 - Min: 86 - Max: 219

I get about that at 4.6 GHz or so. Are you sure you have not set some further graphics tweaks, for instance in the NVIDIA control panel? You should certainly average about 150 FPS. Can you check your CPU temp while running the track? HWiNFO will tell you actual frequencies and temps.

6 hours ago, chiliwili69 said:

Just three questions:

- Is the AVX offset=0? Yes

- Is the 5.4 GHz maintained during the run of the benchmark track? (to be sure there is no throttling) Yes

- Did you switch off mods? No Mods

 

See answers above. I noticed this too. My best guess is that the main difference might be related to the RTX 2080 Ti: mine is an NVIDIA FE with a water block. The memory has a 1000 MHz overclock, but I am not able to get a great overclock on the GPU core.

17 hours ago, chiliwili69 said:

What I say is that the GPU frametimes are directly shifted by the CPU frametimes.

I took the second chart I made:

1.- The first two peaks (I think it is the moment of the loop or the hangar, so clean sky) are separated just 3ms. This 3ms is the time that the GPU needs to do its job (render scene). So GPU frametimes.

2.- The big peaks are where the track is most of the time. Again the GPU adds about 3ms to the frametimes of the CPU

3.- Once the CPU exceed the 12.5 limit, then the delay added is 0 ms, since the GPU has a full cycle (12.5) to do his job.

FYI Chili, I just ran Prepar3D with fpsVR, and pretty much the whole time the CPU frame time was around 20+ ms while the GPU frame time was around 9 ms. So it seems the GPU frame time does not include the CPU frame time.


Zach     

Yes, during the test it's only the power-maximum setting in NVIDIA.

Normally I do some more in NVIDIA... I read this and that in blogs, so I try this and that, but I'm not really sure what actually helps... But I'm doing something 🤔🥴

But I ran the test again.

[screenshots]

27 minutes ago, Niiranen-VR said:

Yes, during the test it's only the power-maximum setting in NVIDIA.

You have global settings in the NVIDIA settings plus program-specific settings. Did you make adjustments in the global settings? I did that once before running a track, and it of course impacted my FPS. You might also want to have GPU-Z running to log your GPU activity. With Chili's settings, your card should not be loaded more than 50% or so.

 

But I see that you *maybe* have not synched your cores. You need to set your Turbo Boost in the BIOS to "Sync All Cores" (or whatever your mobo maker calls it). You have to force all cores to clock up (and down) in the same way. In your case, I can see that the min. frequency is not equal across cores. If I force my cores to equal frequencies, I also get similar min. frequencies. There might be something related to that.


Zach

Yes, in NVIDIA I only change BOS ...

And ... oh yes ... the cores are not in sync, and the game is very much single-core, as you said, but cores 2 and 5 spike up and 'ring the bell'.

(I learn more and more 👍, good for my 14-year-old son, who has a similar PC 😁)

2 minutes ago, Niiranen-VR said:

the cores are not in sync

If they are not, then (in my case at least) the game runs at the slowest Turbo speed specified for all-core use. In your case, probably 4.7 GHz.

6 hours ago, dburne said:

 

Most likely the NB frequency.

 

 I may try a memory overclock this evening and retest.


Zach 

Well, I tried synching the cores and it was 'not good': my CPU dropped to 4600 MHz.                     avg: 128.094 - min: 75 - Max: 212

So now I'm back again.

[screenshot]

 

Motherboard: ASUSTeK Computer Inc. G55VW
 CPU:             i7 3630QM
 CPU Freq:        1.2GHz
 L3 cache:        6MB, 12-way

 Cores: 4, threads: 8
 RAM type:        DDR3
 RAM size:        8 GB
 RAM Freq:        Na
 RAM Latency:         16

 DRAM Frequency: 798MHz
 GPU:                 NVIDIA GeForce GTX 660M
 STMark:           1777
 
FRAPS TXT: Frames: 1851 - Time: 180000ms - Avg: 10.283 - Min: 2 - Max: 23

 

BOX  4.003

 

7 hours ago, Niiranen-VR said:

Zach 

Well - I tried synch the cores and ' not good '  ....- my CPU get down lower to 4600

The temperatures seem to be OK. Does it say in your BIOS that, after synching the cores, the desired max. turbo boost is still at 5.4 GHz?

Because now it looks like your cores got synched to the lowest boost selected for all-core usage, and 4.6 GHz is just about in the ballpark for your initial scores.

23 hours ago, chiliwili69 said:

 

Thank you very much for bringing the first 9900KS results!! The 5.4 GHz freq is impressive as well.

 

Now looking to details, your PC is very similar to the one of Walterscott. The only differences are:

 

-Mobo: Hero vs Apex

-RAM freq: 4000 vs 4400

-NB freq: 4300 vs 4900

 

And the is 20fps difference. So it is not explained by those differences.

 

Just three questions:

- Is the AVX offset=0?

- Is the 5.4 GHz maintained during the run of the benchmark track? (to be sure there is no throttling)

- Did you switch off mods?

 

 

I spent a couple of hours trying to work this out, with some success.

 

First, I happened to notice that my graphics interface was only running at x8 even though the card was in a PCIe x16 slot. I was able to move the card to another slot, and it is now at x16.

 

I removed 2 sticks of RAM and was able to bump the speed up a bit to 4100 MHz.  Also the NB Frequency is now 4900 MHz (My original number may have been the result of not checking while overclocked).

 

I think with a bit of tweaking I might be able to improve the numbers further.  Anyway, below is the revised data:

 

Motherboard: ASUSTek ROG MAXIMUS XI HERO (WI-FI)
 CPU:                 Intel Core i9 9900KS
 CPU Freq:        5.4 Ghz
 L3 cache:        16 MB
 Cores:               8  (number of active physical Cores)
 HT:                    Off (HyperThreading. If Cores=Threads then HT is Off)
 RAM type:        DDR4
 RAM size:        16 GB
 NB Freq:           4900 MHz
 RAM Freq:        4100 MHz (this is 2xDRAM freq if Dual channel)
 RAM Latency: 17  (CAS Latency CL)
 GPU:                 2080Ti
 STMark            3269

 

2020-01-17 22:05:53 - Il-2
Frames: 30753 - Time: 180000ms - Avg: 170.850 - Min: 118 - Max: 273

18 hours ago, apollon01 said:

FYI Chili, I just ran Prepar3D with the fpsVR and pretty much all the time the CPU frame time was around 20+ while the GPU frame time was around 9. So to me it seems like the GPU frame time does not include the CPU frame time

OK, thanks for letting me know; I wanted to see exactly this. So my assumption must be wrong.

I could then configure IL-2 VR in such a way that the CPU frametimes are above the GPU frametimes, maybe with a very low SS but a high CPU load. I will try.


Zach 

No more spamming this page with BIOS questions; I'll start a new thread about the BIOS and what to tweak to get the machine running better.

I go into the BIOS, and many times I really need 'for Dummies' help... so... 🤔

17 hours ago, ZachariasX said:

But I see that you *maybe* have not synched your cores. You need to set your Turbo Boost in the BIOS to "Sync All Cores" (or whatever your mobo maker calls it). You have to force all cores to clock up (and down) in the same way. In your case, I can see that the min. frequency is not equal across cores. If I force my cores to equal frequencies, I also get similar min. frequencies. There might be something related to that.

 

Well seen! Perhaps I should indicate in the instructions that all cores must run at the same, manually fixed frequency (better than letting the Turbo Boost tech decide the max freq).

4 hours ago, WokeUpDead said:

i7 3630QM

 

Thanks for running this test.

I see your PC is a laptop.

Since laptops are very different from desktop PCs (in terms of processors, overclocking, cooling capacity, graphics card), I think we should not mix laptop data with PC data.

So I will put them at the bottom of the page.

2 hours ago, jarg1 said:

First, I happened to notice that my graphics interface was only running at x8 even though the card was in a PCIe x16 slot. I was able to move the card to another slot, and it is now at x16.

 

I removed 2 sticks of RAM and was able to bump the speed up a bit to 4100 MHz.  Also the NB Frequency is now 4900 MHz (My original number may have been the result of not checking while overclocked).

 

The PCIe thing is strange; good that you sorted it out.

 

Regarding RAM, it seems that less RAM (16 vs 32 GB) makes it easier to reach higher RAM frequencies.

Since IL-2 doesn't need more than 16 GB, that is a good lesson learned.

With these two things I see you got a big bump in fps. Exactly as predicted by Greif's correlation! Nice for you.

On 1/16/2020 at 8:44 PM, chiliwili69 said:

As I understand it for every cycle (80 cycles per second in my Index):

 

The CPU first calculates, for this instant in time, the flight model, damage model, the AI, etc., and from all that produces the geometry (where all 3D objects are situated in space).

After that, and only after, the GPU takes that geometry and renders the scene (think of rendering as taking a picture of the scene, i.e. assigning a color to every pixel based on lights, geometry, textures, etc.).

 

So, based on this, the CPU frametimes should always be below the GPU frametimes. This is what I have always seen in fpsVR.

It means that the frametimes shown for the GPU include the time taken by the CPU plus the time taken by the GPU.

 

This is my interpretation. If it is wrong, it means there could be cases where the CPU frametimes are above the GPU frametimes in IL-2 VR with fpsVR. (If anyone has seen this, please let me know.)

 

I am not an expert, but I am pretty sure it does not work that way. The CPU has a complete frame time (12 ms) to finish its computation, and then passes its output to the GPU, which processes it during the next frame and also has a complete frame time. They work in parallel: the CPU works on frame t while the GPU works on frame t-1. That involves one frame of latency, which is fine. Actually, you can even set more than one "pre-rendered frame" provided by the CPU to the GPU in order to smooth the process, at the cost of higher latency.

 

If the GPU waited until the CPU had done its job before doing its own, first, that would be a silly waste of resources (the GPU would spend a lot of time idling); second, there would be no clear notion of a CPU or GPU bottleneck, because a better GPU would compensate for a weaker CPU, which is clearly not the case in IL-2.
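The difference between the two views is easy to state; a sketch comparing the serial assumption with the pipelined one (the numbers are illustrative only):

```python
def serial_frame_ms(cpu_ms, gpu_ms):
    # If the GPU only started once the CPU had finished, the times would add up.
    return cpu_ms + gpu_ms

def pipelined_frame_ms(cpu_ms, gpu_ms):
    # CPU works on frame t while the GPU renders frame t-1:
    # throughput is set by the slower stage, at the cost of one frame of latency.
    return max(cpu_ms, gpu_ms)

# Illustrative numbers: a CPU-heavy scene on an 80 Hz (12.5 ms) headset.
cpu, gpu = 11.0, 4.0
print(serial_frame_ms(cpu, gpu))     # 15.0 -> would miss the 12.5 ms cycle
print(pipelined_frame_ms(cpu, gpu))  # 11.0 -> still fits the cycle
```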

1 hour ago, haltux said:

They work in parallel, the CPU works on frame t while the gpu works on frame t-1.

 

Thanks for explaining that. This makes a lot of sense.

Then it should be possible to reproduce these two extreme cases in IL-2 VR with my Index:

 

CPU bottleneck: (shadows High, mirrors complex) but very low resolution (SS well below 100%):

CPU frametimes close to 12.5 ms (or above) and GPU frametimes below 10 ms.

 

GPU bottleneck: (shadows off, mirrors off) but very high resolution (SS well above 100%):

CPU frametimes below 10 ms and GPU frametimes close to 12.5 ms (or above).

 

I have to try that with fpsVR.

 

