JG1_Greif

Members
  • Content Count

    6
  • Joined

  • Last visited

Community Reputation

2 Neutral

1 Follower

About JG1_Greif

  1. @chiliwili69 As mentioned, I was somewhat surprised that the model favours raw RAM MHz over true latency as a predictor of average FPS, as it seems counterintuitive. However, when I had a look at the spectacular 5.5 GHz data from jarg1, I couldn't help noticing an observation that supports this result even further. Obviously, this is just one example, and I would feel comfortable considering that the variation in outcome falls well within the error margin. That said, when you compare Walterscott's configuration and resulting average FPS with jarg1's, you can see that all of Walterscott's values are lower than or equal to jarg1's (I will consider jarg1's true latency of 7.74 ns virtually equal to Walterscott's 7.73 ns), except his 4400 MHz RAM vs. jarg1's 4100 MHz. Nonetheless, Walterscott's average FPS is more than 5 FPS higher (179.2 vs. 173.9). I found this result quite striking, especially given that jarg1 actually has a higher CPU speed, which correlates positively with average FPS (it is one of the other predictors in the model), as well as a (somewhat) higher STMark. I'd be curious to see whether additional data can disprove this counterintuitive result that RAM MHz has a stronger correlation with average FPS than true latency.
  2. @chiliwili69 I know this probably only irks data nerds like me, but because of statistics (descriptive statistics vs. predictive analytics, small sample size, etc.), and more importantly because of managing expectations, I would advise different wording: "Going to that 2400 MHz (check your mobo can support it) you should gain about 16 fps in monitor". ... Nonetheless, this is very cool and satisfying. Really nice to see that the model helps identify possible bottlenecks in people's configurations! Too bad about the 0.496 deviation between the predicted and observed FPS 😄
  3. It is in general reassuring if a model makes sense given your observations so far. If the initial results of the model seem counterintuitive, either your model has identified some structure that you hadn't seen yourself or, quite likely, there is some unwanted effect in your model and/or data. In either case, further investigation is a good idea.
  4. Yes, that works, as 0.01669 × RAM MHz = 0.01669 × (1000 × RAM GHz) = 16.69 × RAM GHz.

Agreed, it is inevitable that there are error margins. The model lets you calculate the predicted FPS to a precision of 1/1000th of an FPS. Seeing the predicted FPS at such a precision may give the false impression that the predictions of the model are accurate to the same precision; this obviously is not the case.

Regarding the confidence of the model: I included the confidence intervals per predictor in the last two columns of the model, under "99% confidence". There you can see that the real multiplication factor per (RAM) MHz (this is unknown; it is what the model tries to estimate from the sample data) lies with 99% confidence between 0.01326 and 0.02011 according to the model, and that for CPU the 99% confidence interval is between 13.05553 and 24.63155. Using this information, a safer interpretation of the model would be that an upgrade of 400 MHz should translate to an increase of between 5.3 and 8 FPS, and an increase of 0.2 GHz CPU to an increase of between 2.6 and 4.9 FPS, all with 99% confidence. Your example shows that the estimated CPU increase is within its range, while the RAM-speed estimate is outside its range: all the more reason to be cautious.

Obviously, as said before, the model statistically presupposes all kinds of conditions that we know do not hold (predictors being completely uncorrelated; linearity of the results, i.e. a 200 MHz increase always resulting in the same FPS increase, no matter your starting point and no matter the rest of your setup). This doesn't mean that the model is useless; it just shows that one needs to be cautious. As said in my response to Alonzo, and as demonstrated by the data, caution is needed, though I am quite convinced the model will do OK if you don't expect a 100% perfect prediction.
I do estimate that (a) the model will give you a (quite) educated guess and (b) some refinement of the model would be a good idea (more data!). I have forced the model to use STMark instead of CPU GHz, but the R² was lower and the model needed more variables to make a prediction, which in general is worse. By the way, there is no need to separate 4.002 and 4.003, as I can use the version as an input for the model. Actually, I already did this, and at least in the current setup with the current data it did not come up as (the best) predictor.
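The back-of-the-envelope interval arithmetic in the post above can be sketched as follows. The coefficient bounds are taken from the model's "99% confidence" columns; the function and variable names are mine, purely for illustration.

```python
# 99% confidence limits per predictor, from the regression output above.
RAM_CI = (0.01326, 0.02011)    # FPS gained per additional RAM MHz
CPU_CI = (13.05553, 24.63155)  # FPS gained per additional CPU GHz

def fps_gain_range(delta, ci):
    """Predicted FPS-gain interval for a change `delta` in one predictor,
    ignoring all other variables (a simplification, as the post warns)."""
    lo, hi = ci
    return (round(delta * lo, 1), round(delta * hi, 1))

print(fps_gain_range(400, RAM_CI))  # +400 MHz RAM  -> (5.3, 8.0)
print(fps_gain_range(0.2, CPU_CI))  # +0.2 GHz CPU  -> (2.6, 4.9)
```

These ranges match the "between 5.3 and 8 FPS" and "between 2.6 and 4.9 FPS" figures quoted in the post.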
  5. OK, with all these new entries, I thought I'd make an update of the multivariate regression. The best model I got used CPU GHz, (RAM) MHz, GPU Passmark and STMark, with an R² of 0.938. However, because I considered the collinearity between CPU GHz and STMark too severe, I think the better model is the one using only CPU GHz, (RAM) MHz and GPU Passmark, with a still very comfortable R² of 0.929. This is the updated model:

Parameter Estimates
Variable      DF  Parameter Estimate  Standard Error  t Value  Pr > |t|  Tolerance  Variance Inflation  99% Confidence Limits
Intercept      1            -58.1902        10.26161    -5.67    <.0001          .                   0  -86.09648  -30.28392
CPU GHz        1            18.84354         2.12835     8.85    <.0001    0.86813              1.1519   13.05553   24.63155
MHz            1             0.01669         0.00126    13.24    <.0001    0.96012             1.04154    0.01326    0.02011
GPU Passmark   1             0.00356       0.0004605     7.73    <.0001     0.8432             1.18596    0.00231    0.00481

In other words:

avg FPS = -58.1902 + (18.84354 × CPU GHz) + (0.01669 × (RAM) MHz) + (0.00356 × GPU Passmark)

Obviously there are caveats to the model. For example, the number of observations is somewhat limited, combined with the fact that some people have contributed multiple lines concerning the same computer configuration with some changes. More data is (generally) better, but this can mean that particularities of their setup end up being over-represented in the model.

Normalizing the data so that the predictors all have the same scale yields an equivalent model, but gives insight into the variables that have the most influence on the result. Again, the relatively high factor for MHz (as well as the higher t-value) suggests that if improving RAM speed and upgrading the CPU came at the same cost, improving your RAM speed would give you the most bang for your buck. Given the limitations of the data, and because of the counterintuitive result of raw RAM MHz being a better predictor than true latency, I remain skeptical, though obviously faster RAM will never hurt.
Parameter     DF   Estimate  Standard Error  t Value  Pr > |t|
Intercept      1    136.643         0.72123   189.46    <.0001
CPU GHz        1    6.94063         0.78393     8.85    <.0001
MHz            1    9.87021         0.74544    13.24    <.0001
GPU Passmark   1     6.1469         0.79544     7.73    <.0001
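The updated equation above is straightforward to evaluate. A minimal sketch (the example configuration below is hypothetical, not a row from the data set):

```python
def predict_avg_fps(cpu_ghz, ram_mhz, gpu_passmark):
    """Evaluate the updated regression equation from the post:
    avg FPS = -58.1902 + 18.84354*CPU GHz + 0.01669*RAM MHz + 0.00356*GPU Passmark
    """
    return (-58.1902
            + 18.84354 * cpu_ghz
            + 0.01669 * ram_mhz
            + 0.00356 * gpu_passmark)

# Hypothetical rig: 5.0 GHz CPU, 4000 MHz RAM, 2080 Ti (Passmark 16694).
print(round(predict_avg_fps(5.0, 4000, 16694), 1))  # -> 162.2
```

As the post stresses, the three decimal places in the coefficients do not mean the prediction itself is accurate to fractions of an FPS.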
  6. I really enjoy this thread and the study of the numbers, so I thought I'd add my $0.02, as I thought I could help out by running a regression on multiple variables using SAS. My objective was to see whether it is possible to improve on the results of the univariate regression in order to get a better impression of the factors that influence average FPS. I used the 15 data lines from the Remagen 4.0 tab in IL2 VR online sheet.xlsx; I skipped the line from Gomoto because of missing data. I used stepwise selection for the variable selection, with a significance criterion of 0.05 for adding or removing variables. Because I could reasonably expect the graphics card to have an influence on the average FPS, I added a column with the Passmark score per GPU from https://www.videocardbenchmark.net/high_end_gpus.html and added that variable to the model. This is the Passmark data translation I used:

GPU     Passmark
2080Ti     16694
2080       15502
2070       14502
1080Ti     14221
1080       12457
1060        9097

I ended up with this model:

Parameter Estimates
Variable   DF  Parameter Estimate  Standard Error  t Value  Pr > |t|
Intercept   1            -77.4523         15.9487    -4.86    0.0005
CPU GHz     1             22.0593          4.5072     4.89    0.0005
RAM MHz     1             0.01709          0.0026     6.68    <.0001
Passmark    1              0.0037          0.0008     4.76    0.0006

In other words:

avg fps = -77.45233 + (22.05925 × CPU GHz) + (0.0037 × Passmark) + (0.01709 × RAM MHz)

This results in a very comfortable R² of >96%. In order to get an idea of the relative importance per variable, I ran a model using standardized data.
This yields a model with the same R², but the size of the estimated parameters expresses the relative importance of each variable:

Parameter   DF    Estimate  Standard Error  t Value  Pr > |t|
Intercept    1  132.433333        1.276271   103.77    <.0001
CPU GHz      1    8.818447        1.801808     4.89    0.0005
RAM MHz      1   10.529343        1.575338     6.68    <.0001
Passmark     1    7.537976        1.582208     4.76    0.0006

I notice that the model favors including raw RAM MHz instead of true latency (which, by the way, are obviously correlated), which I found surprising. It also suggests that RAM MHz is relatively more important than CPU GHz (though they do not necessarily come at the same cost). Forcing the model to use true latency instead of RAM MHz yielded a lower-performing model that does use true latency (R² = 0.87), suggesting that raw RAM MHz matters more than true latency, which goes against my intuition. Please share your thoughts on these results. I will be able to run alternative models based on your suggestions and/or when new data is added to the data set. However, this will be after Christmas. Happy holidays, everyone!
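The SAS workflow described above (an OLS fit, then a second fit on standardized predictors to compare relative importance) can be reproduced with plain NumPy. The data below is synthetic, generated around the post's coefficients purely to illustrate the mechanics, not to reproduce the actual benchmark results.

```python
import numpy as np

# Synthetic stand-in for the 15 benchmark rows (illustrative only).
rng = np.random.default_rng(0)
n = 15
cpu = rng.uniform(3.5, 5.0, n)      # CPU GHz
ram = rng.uniform(2400, 4400, n)    # RAM MHz
gpu = rng.uniform(9000, 17000, n)   # GPU Passmark
fps = -77.45 + 22.06*cpu + 0.01709*ram + 0.0037*gpu + rng.normal(0, 2, n)

def ols(X, y):
    """Least-squares fit with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

X = np.column_stack([cpu, ram, gpu])
raw = ols(X, fps)  # raw-scale coefficients (comparable to the first table)

# Standardize predictors (zero mean, unit sample std), then refit:
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
std = ols(Z, fps)  # now coefficient magnitudes are directly comparable

print("raw coefficients:         ", np.round(raw, 5))
print("standardized coefficients:", np.round(std, 3))
```

With centered predictors, the standardized fit's intercept equals the mean observed FPS, which is why the intercepts in the two standardized tables above (136.6 and 132.4) sit near the sample's average frame rate.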