SCG_Wulfe

RTX 3080 and VR performance


Well, I figure a bunch of you will be curious about this. 

 

I got an RTX 3080 yesterday and installed it today. I am currently running an Odyssey+ with SS at 150% (1772x2220).

 

(I'm waiting on my Reverb G2 Pre-Order.) 

 

With the following settings, I am able to achieve a mostly stable 90 FPS that does drop into the mid 80s in congested areas on Bodenplatte servers. I am hopeful that I will get a bit more performance out of the Reverb G2 due to running at native res... and if not, I can run it at full settings with a 60 Hz refresh.

 

[KEY = graphics]
    3dhud = 0
    adapter = 0
    bloom_enable = 0
    canopy_ref = 0
    desktop_center = 1
    detail_rt_res = 1024
    draw_distance = 0.81800
    far_blocks = 0
    fps_counter = 0
    fps_limit = 60
    full_height = 768
    full_width = 1280
    fullscreen = 0
    gamma = 0.80000
    grass_distance = 0.00000
    hdr_enable = 1
    land_anisotropy = 2
    land_tex_lods = 2
    max_cache_res = 1
    max_clouds_quality = 2
    mgpu_compatible = 0
    mirrors = 2
    msaa = 1
    multisampling = 2
    or_ca = 0.00184
    or_dummy = 0
    or_enable = 1
    or_height = 2220
    or_hud_rad = 1.50000
    or_hud_size = 0.75000
    or_ipd = 0.06447
    or_sipdc = 0.00000
    or_width = 1772
    post_sharpen = 1
    preset = 2
    prop_blur_max_rpm_for_vr = 155
    rescale_target = 1.00000
    shadows_quality = 3
    ssao_enable = 1
    stereo_dof = 5.00000
    vsync = 0
    win_height = 768
    win_width = 1024
[END]

 

 


Hi Wulfe! Thanks for this.

 

What I'm really interested in is the relative performance vs another graphics card. Are you able to record some kind of test track, like Chili has used before, then run it to make reproducible (to you) performance numbers? That way you could compare the performance of the 3080 to your previous graphics card.

 

If I had a 3080 on hand, this is what I'd do:

  • Install old GPU (RTX 2080 in my case).
  • Record a 2-minute benchmark track including other planes, some combat, some ground stuff.
  • Set graphics high enough that I don't hit 90 FPS. I have a Valve Index, so what I actually do for benchmarking is use 144 Hz mode with all reprojection switched off. This gives a real framerate. (You probably want High or Ultra with everything switched on, and some supersampling.)
  • Benchmark the 2080, maybe 3 runs.
  • Install the new 3080, benchmark again, maybe 3 runs.

The reason I suggest setting the graphics high enough that you don't hit 90 FPS is that it gives a real differential between the cards. You also want to disable the 45 FPS interpolated mode for benchmarking; what we're looking for is the "raw FPS" that each card can deliver.
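
If you want to put numbers on the comparison afterwards, the arithmetic is just averaging the runs and taking the ratio. A minimal Python sketch, where the run values and card names are placeholders rather than real measurements:

    # Average several benchmark runs per card and compute the raw-FPS uplift.
    # The FPS values below are placeholders, not real measurements.
    def average(runs):
        return sum(runs) / len(runs)

    old_card_runs = [72.4, 71.9, 73.1]   # e.g. RTX 2080, three runs of the same track
    new_card_runs = [98.6, 99.2, 97.8]   # e.g. RTX 3080, three runs of the same track

    old_avg = average(old_card_runs)
    new_avg = average(new_card_runs)
    print(f"Old card: {old_avg:.1f} FPS, new card: {new_avg:.1f} FPS")
    print(f"Uplift: {(new_avg / old_avg - 1) * 100:+.1f}%")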


Would it be worth upgrading from a 2080 TI to a 3080, primarily for VR, or hold off until 3090s are less expensive?


In addition to what Alonzo said, you can use the free MSI Afterburner (with RivaTuner) to monitor the FPS, GPU load, VRAM, etc. for your comparison.

Look at this comparison: https://forum.il2sturmovik.com/topic/53603-valve-index-vs-hp-reverb-through-the-lens-pictures/?do=findComment&comment=815845

 

 

To effectively compare the performance increase of both cards (your old one and the new 3080), you will need to run a case where the CPU is never the bottleneck.

 

So, probably a recorded track with just a few transport planes (less AI work for the CPU), no use of guns (the damage model is done by the CPU), no mirrors (reflections are calculated by the CPU), and no mountains (I believe more complex terrain surfaces load the CPU).

 

You can then set clouds to Extreme and use MSAA x8. This will load the GPU.
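
If you log the run to a plain CSV, a quick sanity check that the test really was GPU-bound could look like the rough sketch below; the "fps" and "gpu_load" column names are assumptions, since Afterburner's own log format is different and would need its own parser.

    # Rough sketch: check whether a logged run was GPU-bound.
    # Assumes a plain CSV with hypothetical "fps" and "gpu_load" columns.
    import csv

    def summarize(path):
        fps, load = [], []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                fps.append(float(row["fps"]))
                load.append(float(row["gpu_load"]))
        avg_fps = sum(fps) / len(fps)
        avg_load = sum(load) / len(load)
        bound = "GPU-bound (good for comparing cards)" if avg_load > 95 else "probably CPU-bound somewhere"
        print(f"{path}: {avg_fps:.1f} FPS average, {avg_load:.0f}% GPU load -> {bound}")

    # summarize("benchmark_run1.csv")   # hypothetical log file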

 

Hey, but don't feel like you have to do all this. It takes time, and it is your time. In any case, congrats on getting the new card; you were lucky!

26 minutes ago, chiliwili69 said:

no mountains (I believe more complex terrain surfaces load the CPU)

 

Collision detection with the ground and with objects like trees will most likely stress the CPU.

16 hours ago, nimitstexan said:

Would it be worth upgrading from a 2080 TI to a 3080, primarily for VR, or hold off until 3090s are less expensive?

 

A 2080 Ti to 3080 upgrade gives about +25% performance, for whatever the cost differential is between selling your old 2080 Ti and buying the 3080. If you can buy one, that is.

 

3080 to 3090 gives about +10-15% performance, for +$700. Again, if you can buy one.

 

(These numbers are general, not specific to IL-2. It could be better or worse. No one has actually done any science on IL-2 as far as I can tell; maybe someone eventually will.)

 

Whether it's "worth" upgrading is your own personal value calculus. Does +25% performance (3080) or +40% performance (3090) allow you to hit specific graphical settings and frame rates with your VR setup? If so, it could definitely be worth it, to you.
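
One way to frame that calculus is dollars per percent of performance gained. A back-of-envelope sketch using the rough figures above; the prices and resale value are placeholders you'd replace with your own:

    # Back-of-envelope "value calculus" -- all prices are placeholders.
    resale_2080ti = 500     # whatever your old card actually fetches
    price_3080 = 700        # rough reference price
    price_3090 = 1500       # rough reference price
    uplift_3080 = 25        # % over a 2080 Ti, per the estimate above
    uplift_3090 = 40        # % over a 2080 Ti

    for name, price, uplift in [("3080", price_3080, uplift_3080),
                                ("3090", price_3090, uplift_3090)]:
        net = price - resale_2080ti
        print(f"{name}: ~${net} net for ~{uplift}% -> ${net / uplift:.0f} per percent")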

 

For me, I have a 2080 (non-Ti) and a Valve Index. I run the Index at ~2300 vertical pixels, shadows off. I'd like to be able to run shadows on, or MSAA x2, or to support a Reverb G2. For me, the 2080 to 3080 upgrade price is "worth it", but the 3090 seems like a lot of money for not much more performance. But that's just me; everyone is different.


My Pimax 8KX and MSI RTX 3080 Gaming X Trio (currently the fastest 3080) are scheduled for October 1st.

 

I'll see if it suffices. If not, I will return it and look for a 3090. I've heard great things about the 3080 from Wulfe though, so I doubt that will be necessary.

2 hours ago, SCG_Fenris_Wolf said:

MSI RTX 3080 Gaming X Trio

 

Let us know if your card has 1 or 2 MLCC groups. It seems they may vary, or they are changing the original design:

https://www.reddit.com/r/MSI_Gaming/comments/izyb21/img_of_3080_gaming_x_trio_with_2_mlcc_and_4/

 

Probably just 1 MLCC group will solve the crash problems, so you should be OK. But also let us know how well it supports overclocking.


I got a 2080 Ti and will most likely be waiting for a 3080 Ti, which will most likely come and be much cheaper than the 3090.
I hope I can run the G2 decently with the 2080 Ti.

1 hour ago, Winger said:

I got a 2080 Ti and will most likely be waiting for a 3080 Ti, which will most likely come and be much cheaper than the 3090. I hope I can run the G2 decently with the 2080 Ti.

 

Same approach I am taking now as well. Think I am going to wait and see if something comes out in between the 3080 and 3090. 


Is anyone going to get the AMD RDNA2 card? Or does AMD not run IL2 well?

3 hours ago, CDRSEABEE said:

Is anyone going to get the AMD RDNA2 card? Or does AMD not run IL2 well?

 

I believe it's fine at the moment, but AMD has had some driver problems, and it looks like players have had fewer issues with Nvidia cards in IL-2 (I think the deferred rendering patch had problems with AMD cards). Personally I would rather have an Nvidia card, but I don't want to pay through the nose for the privilege. It will be interesting to see if the RDNA cards force any kind of price change from Nvidia.

On 9/26/2020 at 2:32 PM, Alonzo said:

What I'm really interested in is the relative performance vs another graphics card. Are you able to record some kind of test track, like Chili has used before, then run it to make reproducible (to you) performance numbers? That way you could compare the performance of the 3080 to your previous graphics card.

 

 

Hi Alonzo, 

I will see if I get a chance to do some more scientific tests this week. I can tell you anecdotally that with my previous GTX 1080, the current settings I'm using would get me about 45-50 FPS in a one-on-one QMB over western Europe with heavy clouds (this is with no reprojection enabled). These settings now get me 90 FPS in the same QMB scenario.

 

On Combat Box with heavy clouds and an aircraft equipped with a mirror:

 

  • 90 FPS when flying high or low with sparse activity
  • mid 80s when low and in a busy dogfight
  • high 70s when over a large town and in a busy dogfight

 

Not the locked 90 FPS I hoped for, but I am hopeful that the Reverb G2 will either give me a nice flicker-free 60 Hz, or will be such a resolution improvement that I can slightly lower res or AA and maintain 90 FPS with still better visuals.

 

I will say this. Even on my Odyssey+, running the game in VR with 4x MSAA, clouds on high, shadows on Ultra, and SS at 150%... it looks jaw-dropping.

25 minutes ago, SCG_Wulfe said:

On Combat Box with heavy clouds and an aircraft equipped with a mirror: 90 FPS when flying high or low with sparse activity, mid 80s when low and in a busy dogfight, high 70s when over a large town and in a busy dogfight.

 

Awesome. I think the lack of a locked-90 is probably also due to CPU hitches in the game (dogfight, or near a large town). There's an every-5-seconds multiplayer frame hitch caused by the CPU that I've reported in the bugs forum, and that kind of thing seems to crop up every now and then in IL-2. Maintaining 80 or 90 FPS frame pacing seems to be as much a CPU challenge as a GPU one in this simulator.

3 minutes ago, Alonzo said:

Awesome. I think the lack of a locked-90 is probably also due to CPU hitches in the game (dogfight, or near a large town).

 

Yes, this is quite possible. I need to do a bit more analysis to see my GPU utilization during these frame drops. I also dropped my CPU OC from 5.1 to 5.0 GHz a little while ago due to intermittent crashes. I just recently upgraded my PSU (which was barely enough before) for the 3080, and I'm tempted to try to get back my 5.1 OC and see if the crashes might have been PSU-related. It just might buy me a few more frames.

 

That said, if the Reverb G2 looks great at 60 Hz and doesn't have the nasty strobe that my Odyssey does... I think running 8x MSAA and maxed-out settings at 100% SS might be achievable at a locked 60 FPS.


A 3080 should be able to run max graphics settings. If you are dropping frames with one, it is on the CPU. Higher clocks over 5 GHz will help.

1 hour ago, Bernard_IV said:

A 3080 should be able to run max graphics settings. If you are dropping frames with one, it is on the CPU. Higher clocks over 5 GHz will help.

 

Well, it's definitely losing frames when I try to run 8x MSAA or SSAO. I don't think either of these would be affected much by the CPU.

That said, I'm sure if I can get a bit more clock speed it will help keep my 90 FPS stable for the times it drops due to action.

 

1 hour ago, DD_Llama_Thumper said:

Wulfe, what are the rest of your specs? 

 

9600K at 5.0 GHz

16 GB of DDR4 RAM at 3800 MHz

Samsung EVO SSD

 


Time to call upon der8auer for me then. He sells pre-binned, high-performing Intel chips from Germany.

 

My 7700K's stability at 5.0 GHz has deteriorated: to 4.9 GHz in 2019, and 4.8 GHz now.

 

It was an insane chip though, for almost 4 years. I had really won the silicon lottery with that bad boy.

 

 

Wulfe's 9600K is the perfect combination for IL-2. It OCs incredibly well, and IL-2 neither needs nor uses a Kraken's core count.

1 hour ago, SCG_Fenris_Wolf said:

My 7700K's stability at 5.0 GHz has deteriorated: to 4.9 GHz in 2019, and 4.8 GHz now.

 

I can't remember whether I posted it on here, but I finally worked up the courage to delid my 8086K. It worked great; now I am at 5.1 GHz (+200 MHz over before) and running about 10°C cooler. On a Noctua air cooler! I also increased my RAM to 3800 MHz (it was 3500) by adding some voltage.

 

So 8086K / 5.1 GHz / 3800 MHz DDR4 / RTX 2080, and I still get CPU frame time glitches. 😞


I just bought a 10700K and overclocked it to 5.0 GHz. I still get those CPU frame time spikes in VR, but only in multiplayer. My CPU frametime is obviously lower than with my 3700X, but the frame time spikes are still there. It gets progressively worse if the server is really busy and/or the ping is high. Are you getting these spikes in both single and multiplayer mode? I wish I could figure out how to fix this; everything runs smooth in single player mode.

52 minutes ago, UperDinero said:

I still get those CPU frame time spikes in VR, but only in multiplayer. Are you getting these spikes in both single and multiplayer mode?

 

I only get the hard 'spikes' to about 16ms in multiplayer, and they are exactly every 5 seconds. In singleplayer I get "noisy" CPU frame times, but nothing that makes a hard spike and causes a frame drop.
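
For anyone trying to confirm the same pattern, here is a small sketch that flags spikes in a frame-time capture and reports the spacing between them. The capture source and the 14 ms threshold are assumptions; anything well above the ~11 ms budget at 90 Hz will do.

    # Sketch: find frame-time spikes and the intervals between them.
    # frame_times_ms would come from your own capture; the threshold is arbitrary.
    def spike_intervals(frame_times_ms, threshold_ms=14.0):
        t, spikes = 0.0, []
        for ft in frame_times_ms:
            t += ft / 1000.0          # running time in seconds
            if ft > threshold_ms:
                spikes.append(t)
        return [b - a for a, b in zip(spikes, spikes[1:])]

    # intervals = spike_intervals(my_capture)   # hypothetical capture data
    # If the intervals cluster around 5.0 s, it's the same multiplayer hitch.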

16 hours ago, SCG_Wulfe said:

4xMSAA, clouds on high, shadows on Ultra, and SS at 150%

 

Maybe a bit off topic, but as far as I understand, the MSAA technique is equivalent to applying SteamVR SS (or Pixel Density in Oculus). Both load the GPU in the same way, by supersampling each pixel by a factor. It is like the RenderQuality parameter in PiTool.

 

So why touch both?

14 hours ago, Alonzo said:

So 8086K / 5.1 GHz / 3800 MHz DDR4 / RTX 2080, and I still get CPU frame time glitches. 😞

The CPU frame glitches are notorious. I have also seen them in HLA. Are they in all SteamVR applications? Do they happen in 2D gaming too?

 

If yes to one of those, just bite the bullet and format c....

 

I haven't done that in a while, but a buddy said it fixed his issues. So far I've shied away from it, but I will probably do it this weekend. A clean, new Windows 10 2004 installation.

6 hours ago, chiliwili69 said:

Maybe a bit off topic, but as far as I understand, the MSAA technique is equivalent to applying SteamVR SS (or Pixel Density in Oculus). So why touch both?

 

My understanding is that super-sampling is essentially SSAA, which renders the entire scene at a higher resolution and then scales it back down to your display resolution, giving better detail throughout the image. Basically it synthetically replicates a higher resolution... though obviously still not the same as, or as good as, an actual higher resolution.

MSAA, on the other hand, is an algorithm that focuses only on super-sampling the edges/borders of textures and should not really touch the interior of textures, making it less expensive than full-on SSAA or supersampling.

 

All of that said, I find my VR image absolutely looks best with some MSAA combined with SS. It reduces flickering objects like lines and clouds and makes things like surface reflections, fire, and clouds look just beautiful. 

 

Now, I haven't actually tried disabling AA and cranking my SS to achieve a similar image to the combo. It's probably possible and would probably look even better, since it would work on the interiors of textures as well... But my assumption is that it would also be more costly, in terms of performance, to achieve the same edge anti-aliasing I am getting with the combo of the two.

 

I'm no expert on this, so please feel free to correct me if I am wrong. 
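
To make the cost argument concrete, here is a very rough model of the extra work, following the simplified "edges only" description above. The edge fraction and supersample factor are made-up illustrative numbers, and real MSAA is cheaper still because the pixel shader usually runs only once per pixel.

    # Very rough cost model, not a renderer: relative work for SSAA vs MSAA.
    # Edge fraction and supersample scale are made-up illustrative numbers.
    width, height = 1772, 2220            # per-eye render target from the config above
    pixels = width * height

    ssaa_scale = 1.5                      # illustrative per-axis supersample factor
    ssaa_work = pixels * ssaa_scale ** 2  # whole scene rendered bigger, then downscaled

    msaa_samples = 4                      # MSAA x4
    edge_fraction = 0.10                  # assume ~10% of pixels sit on triangle edges
    msaa_work = pixels * (1 - edge_fraction) + pixels * edge_fraction * msaa_samples

    print(f"SSAA 1.5x per axis: ~{ssaa_work / pixels:.2f}x the base work")
    print(f"MSAA x4:            ~{msaa_work / pixels:.2f}x (extra samples only at edges)")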

17 hours ago, SCG_Fenris_Wolf said:

Time to call upon der8auer for me then. He sells pre-binned, high-performing Intel chips from Germany.

 

Not necessary. I got my i9 9900K via amazon.de. It runs at 5.2 GHz just fine.

13 minutes ago, sevenless said:

 

Not necessary. I got my i9 9900K via amazon.de. It runs at 5.2 GHz just fine.

 

Mine is as well; loving this CPU.

 

 

27 minutes ago, SCG_Wulfe said:

All of that said, I find my VR image absolutely looks best with some MSAA combined with SS.

 

Same thing I have found as well.

No amount of SS on my end will look as good as some SS combined with MSAA.

I run MSAA x2 and 1.2 SS in Oculus.

7 hours ago, chiliwili69 said:

Maybe a bit off topic, but as far as I understand, the MSAA technique is equivalent to applying SteamVR SS (or Pixel Density in Oculus). So why touch both?

 

1 hour ago, SCG_Wulfe said:

MSAA, on the other hand, is an algorithm that focuses only on super-sampling the edges/borders of textures and should not really touch the interior of textures, making it less expensive than full-on SSAA or supersampling.

 

I think you're right, Wulfe: MSAA knows a bit about the geometry in the scene, so it knows where to apply extra sampling to reduce aliasing. Basically, if a pixel is entirely within a triangle in the scene, no super-sampling is performed. If a pixel is near multiple triangles, then it does the extra sampling work.

 

The net effect in-game is that MSAA smooths jaggies while increasing local contrast near object boundaries, making spotting easier. SSAA also smooths jaggies, but it does that by rendering the entire scene bigger and then crushing it back down. This is computationally more expensive and tends to blur pixels together, making spotting harder.

 

Talon did some comparisons and found that MSAA x2 helped make contacts easier to see, but MSAA x4 gave no spotting improvement. It might give reduced jaggies; I'm not sure.

 

If GPU power were limitless and competitive multiplayer your goal, you'd want MSAA x2, sharpen on, and then as little SSAA as you can bear. For simply the best picture with the least aliasing, I guess you could basically skip the MSAA and crank the SSAA.

 

And in order to add something maybe useful to the thread, here's a review of 12 VR games (sadly not IL-2) using FCAT to determine frame pacing. The Elite Dangerous, No Man's Sky and Project Cars 2 results are interesting:

 

Elite Dangerous: The RTX 2080 Ti delivered 95.99 unconstrained FPS with 2 Warp Misses and 2 dropped frames, but 39% (2515) of its frames had to be synthesized. In contrast, the RTX 3080 delivered 136.99 unconstrained FPS with 1 Warp Miss and 1 dropped frame, but it only required 4 synthetic frames.  The experience playing Elite Dangerous at Ultra settings is far superior on the RTX 3080 than it is playing on the RTX 2080 Ti.

 

No Man's Sky: The RTX 2080 Ti delivered 82.30 unconstrained FPS with 1 Warp Miss and 1 dropped frame, but 50% (3162) of its frames had to be synthesized.  It wasn’t a great experience and we would recommend playing it on Enhanced instead of Ultra. In contrast, the RTX 3080 delivered 115.07 unconstrained FPS with no dropped frames, and it only required 403 (6%) synthetic frames.  The experience playing No Man’s Sky using Ultra settings is much better on the RTX 3080 than it is playing with the RTX 2080 Ti.

 

Project Cars 2: The RTX 2080 Ti delivered 99.36 unconstrained FPS, and 21% (1596) of its frames had to be synthesized.  It isn’t a great experience playing with Motion Smoothing on, and we would recommend lowering settings instead. In contrast, the RTX 3080 delivered 121.99 unconstrained FPS with no dropped frames, and it didn’t require any synthetic frames either.  The experience playing Project CARS 2 on our chosen near-maximum settings is significantly better on the RTX 3080 than it is on the RTX 2080 Ti.

 

So that's +43% unconstrained FPS in Elite, +40% in NMS, and +23% in Project Cars 2. That review has other games too, but I don't consider benchmarks of 5ms frame-time games to be very useful for our purposes.
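
Those percentages fall straight out of the unconstrained FPS figures quoted above; a quick check of the arithmetic:

    # Reproduce the uplift figures from the review's unconstrained FPS numbers.
    results = {
        "Elite Dangerous": (95.99, 136.99),   # (RTX 2080 Ti, RTX 3080)
        "No Man's Sky":    (82.30, 115.07),
        "Project Cars 2":  (99.36, 121.99),
    }
    for game, (old, new) in results.items():
        print(f"{game}: {(new / old - 1) * 100:+.0f}%")
    # -> +43%, +40%, +23%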

1 hour ago, Alonzo said:

So that's +43% unconstrained FPS in Elite, +40% in NMS, and +23% in Project Cars 2.
Wow, those performance gains are stellar...


Probably Project Cars 2 and Elite Dangerous would come closest to what we may see in flight sims.

If not for the 10 GB VRAM limitation of the 3080, I would probably be trying to snag one.

Can't see the 3090 being worth the huge price premium for me.

 

I was measuring my VRAM usage in IL-2 earlier today; it seems to run around 9-9.5 GB with the settings I am running - basically Ultra for the most part.

This was in a PWCG campaign with a fair amount of action going on.

I want more than 10 GB of VRAM, so I may hold out and see if a better card for what I want gets released a little down the road. Heck, my current 2080 Ti has 11 GB of VRAM.

If I get too impatient though I may grab a 3090 - hoping not as I feel I would likely have wasted a few hundred bucks.
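
If anyone wants to log VRAM the lazy way instead of watching an overlay, here is a small sketch using nvidia-smi; it assumes an NVIDIA card with nvidia-smi on the PATH, and Afterburner's graphs give you the same information.

    # Sketch: record peak VRAM usage once per second while you fly.
    # Requires nvidia-smi on the PATH; values are reported in MiB.
    import subprocess, time

    def log_peak_vram(seconds=300):
        peak = 0
        for _ in range(seconds):
            out = subprocess.run(
                ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"],
                capture_output=True, text=True, check=True,
            )
            peak = max(peak, int(out.stdout.strip().splitlines()[0]))
            time.sleep(1)
        print(f"Peak VRAM over {seconds}s: {peak} MiB ({peak / 1024:.1f} GiB)")

    # log_peak_vram(300)   # e.g. a five-minute flight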

On 9/26/2020 at 11:32 AM, Alonzo said:

I have a Valve Index, so what I actually do for benchmarking is use 144 Hz mode with all reprojection switched off. This gives a real framerate.

 

How do you turn off reprojection with the Index? Is it beneficial to run it in 144 Hz mode with it off, or 90/80 Hz with it on? I'm on a 1080 Ti but will be upgrading to a 3080 as soon as I can get my hands on one.

On 9/29/2020 at 11:25 AM, dburne said:

If not for the 10 GB Vram limitation of the 3080 I would probably be trying to snag one.

Can't see the 3090 being worth the huge price premium for me.

 

I might have replied elsewhere, but there are strong, persistent rumors that a 20 GB version of the 3080 will be available, rumored price $849 - $999, sometime in December. The AMD lineup supposedly has 16 GB too, and might be within spitting distance of the 3080 for performance.


Very exciting, that's a crazy amount of RAM, wow! A bit out of my price range, however; I'll have to wait till it comes down to around $500 or get something used.


It has been leaked that these are cancelled, AFAIK. So no 3080 20GB or 3070 16GB 😕
