VeganMobster

Poor FPS in battles in VR, otherwise fine. Why?


Hi. For the past two days I've been experimenting with settings to get close to a stable 90 FPS in VR. I have no problem reaching that on the ground or when flying with no activity around, but battles with 10 planes or so pull my FPS down into the 40s and 50s, causing stutter and making combat a nauseating experience.

 

My setup:
- Core i7 8700 (not K)

- GTX 1070

- 16 GB 3000 MHz RAM

- Oculus Rift

- Windows 10 64 bit

 

I took measurements of CPU and GPU usage during a mission with probably about 12 planes in combat. In the attached graphs the gameplay starts at around a quarter of the way in (as seen in the GPU graph). In that stage (takeoff, flying to waypoint) FPS was a stable 90. The last half of the run is during combat where FPS dipped a lot. As you can see, GPU usage (2nd from top) is quite high all the way, but CPU usage (3rd from bottom) is surprisingly low:
 

[Attached image: CPU and GPU usage graphs recorded during the mission]
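For reference, here's roughly the kind of logging I mean, as a minimal sketch (this uses psutil and nvidia-smi, which is an assumption on my part and not the monitoring tool behind the graphs above). It samples per-core CPU load and overall GPU utilization once per second and appends them to a CSV, which is enough to tell an averaged-out CPU number from one pegged core:

```python
# Sketch: log per-core CPU load and GPU utilization once per second to a CSV.
# Stop it with Ctrl+C after the mission.
import csv, subprocess, time
import psutil  # pip install psutil

def gpu_util_percent():
    """Query overall GPU utilization (%) through nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"])
    return float(out.decode().strip().splitlines()[0])

with open("usage_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    n_cores = psutil.cpu_count(logical=True)
    writer.writerow(["time_s", "gpu_pct", "cpu_avg_pct"]
                    + [f"cpu{i}_pct" for i in range(n_cores)])
    start = time.time()
    while True:
        per_core = psutil.cpu_percent(interval=1.0, percpu=True)  # blocks ~1 s
        avg = sum(per_core) / len(per_core)
        writer.writerow([round(time.time() - start, 1),
                         gpu_util_percent(), round(avg, 1)]
                        + per_core)
        f.flush()
```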

 

I've tried many different settings based on tips from this forum. These results were recorded with Oculus supersampling at 1.2 and ASW off. I have the latest Oculus (beta) drivers. In game I used the Low graphics preset, with 1024x768 resolution, 4x AA, 70 km view distance, and HBAO off. NVIDIA power management is set to maximum performance for the game. Overall I've fiddled with the graphics settings a lot but keep running into poor FPS in combat.

How should I interpret the usage graphs? Am I being limited by my GPU in combat situations? I thought that this game was above all demanding on the CPU. I know my temperatures are not ideal, but I don't think they're high enough to cause throttling.

 

Thanks in advance for any help! Flying in VR is wonderful but trying to get it to work well is not.


You're falling for the old multi-core averaging trick. Your CPU is overloaded on the one core that really counts; the rest are on vacation, useless. Try turning off Hyper-Threading and see if that core picks up a bit of slack from its Siamese-twin logical core.


Never-ending story 😃

Average one maxed-out core across 6 cores and you'll get your ~22% overall CPU usage.

The CPU is the bottleneck; draw a K next to the 8700 and overclock it to 5.0 GHz.
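To put numbers on that averaging point, here is a tiny illustration with made-up but plausible per-core loads (one saturated core, five nearly idle ones):

```python
# Illustration only: hypothetical per-core loads on a 6-core CPU where the
# game's main thread has one core pegged and the rest are nearly idle.
per_core = [100, 12, 8, 6, 4, 2]   # percent load per core (made up)

average = sum(per_core) / len(per_core)
print(f"average CPU usage: {average:.0f}%")   # ~22% - looks harmless
print(f"busiest core:      {max(per_core)}%") # 100% - the real bottleneck
```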

5 hours ago, TUS_Samuel said:

The CPU is the bottleneck; draw a K next to the 8700 and overclock it to 5.0 GHz.

Don't get his hopes up; this still happens with my system. I play on Balanced settings with 100% SS and 2x AA and still get massive frame drops in dogfights. VR performance has been pretty bad lately.


Thanks for the replies. Yes, I took another look at CPU usage and it is actually between 20 and 30 percent.

 

I just upgraded to the 8700, so an even more expensive CPU just to play this game in VR is not a possibility, given that the 8700 blazes through every other game I have. Is an i9 really a requirement for VR in this game? 

 

I will keep experimenting to see if I get better FPS. It's a bit difficult to gather tips from this forum because there's a lot of contradictory information. Is there a summary, maybe from the developers, of settings that have been proven to be optimal?

 

 

1 hour ago, JonRedcorn said:

still get massive frame drops in dogfights

I agree, many people complain about stutters. In the short term we have to wait for an update; maybe the devs will fix it.

There are some measurements of FPS versus settings here:

https://docs.google.com/spreadsheets/d/1gJmnz_nVxI6_dG_UYNCCpZVK2-f8NBy-y1gia77Hu_k/edit#gid=334824301

But the settings topic is still very subjective. One person plays on Ultra and is happy with a constant 45 FPS; another OCs their CPU and uses Balanced settings.


Maybe I'll go with ASW and mod the propeller away to get rid of the visual artifacts 😐

 

To be clear, I'm very happy that VR was introduced to this game. I've played VR games since last summer, and I very rarely get that VR "wow" factor anymore. But getting in that cockpit with the Rift on gave me just that. The sense of space and height is tremendous. I hadn't played IL2 for maybe two, three years, but I feel like VR could lure me back in. And for some reason, flying in VR finally enabled me to land a plane without crashing! And to do it several times over. 


VR requires a crapload of horsepower from both the CPU and the GPU.

To get the best VR experience in what we do, get the best motherboard, CPU, RAM, and GPU you can afford.

No matter what you buy, it will never maintain a constant 90 FPS in all situations with today's technology. But you can get close.

 

A 2080 Ti thrown into an i7 DDR3 system is not going to be all that great; however, throw that same 2080 Ti into a new i9 9xxx system with DDR4 RAM and it will be much better.

But it comes at a price, obviously.

 

 


The cheapest way is probably to hack dburne for his location, buy plane tickets there and back, break in and take his computer, then ship it home with max insurance. That'll still be cheaper than buying a system that can run IL-2 properly at a constant 90 FPS 😂

Posted (edited)

I am curious what the 9900K has over the Coffee Lake series in regard to single-core performance and IPC.

According to this site (I know it's not a great source), the IPC difference between the two is within the margin of error.

 

https://cpu.userbenchmark.com/Compare/Intel-Core-i9-9900K-vs-Intel-Core-i7-8700K/4028vs3937

 

1-2%

 

For anyone on Coffee Lake or even the 7000 series it's not even worth it.

 

A 9700K or 9900K at 5.0 GHz will score the same as an 8700K or 7700K at 5 GHz.

 

That 2080 Ti though, that's a solid improvement, for a very heavy price.

Edited by JonRedcorn

Watching CPU development over the last 5 years, it is safe to assume that single-core clock speed won't make any real leaps in the future. The same goes for Intel's new chiplet tech: they'll run a single main core, as they do right now, along with a few Atom-class cores beside it.

 

Hence, the only way to get IL-2 running properly in VR is for the developers to upgrade and optimize their engine to use multiple cores and DX12.

 

There is no other way. And there won't be any other way.

Posted (edited)
7 hours ago, JonRedcorn said:

I am curious what the 9900K has over the Coffee Lake series in regard to single-core performance and IPC.

 

There are four things that affect CPU single-thread performance (with which IL-2 VR performance is heavily correlated, as demonstrated in our previous tests):

 

- Clock speed (cycles per second)

- Instructions per cycle (IPC)

- Cache sizes/latencies

- RAM speed (assuming the code to execute is larger than cache size)

 

From the Haswell to Coffee Lake architectures (so from the 4th to the 9th generation) the CPU has a theoretical peak of 32 single-precision FLOPs per cycle:

https://en.wikipedia.org/wiki/Instructions_per_cycle

 

So for the same clock speed the only remaining influences are cache sizes/latencies and RAM speed. Here is a table of cache sizes and supported RAM speeds:

[Attached image: table of cache sizes and supported RAM speeds]

 

So there could be a gain going from the 9700K/8086K/8700K to the 9900K (16 MB cache), but apparently it is very small according to the peak overclocked (5.2 GHz) SC mark.

When I look at the CPUbenchmark page I tend to look at the peak overclocked SC mark. You will see that at 5.2 GHz the 9900K, 8086K and 8700K give basically the same SC mark (only 1% difference; the top is the 8086K at 159):

[Attached image: 9900K vs 8086K peak overclocked single-core mark]

 

[Attached image: 9900K vs 8700K peak overclocked single-core mark]

 

and against the 8350K, which has less cache and slower RAM, the difference is 5%:

 

[Attached image: 9900K vs 8350K peak overclocked single-core mark]
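As a back-of-the-envelope check on the logic above (my own sketch, using only the clocks and marks already quoted in this thread): with equal IPC, the first-order term is just the clock ratio, and whatever is left over in the measured SC marks is what cache and RAM contribute.

```python
# Rough single-thread model following the reasoning above: same IPC means
# performance scales with clock; the residual in the SC marks is cache/RAM.
def clock_scaling(clock_a_ghz, clock_b_ghz):
    """First-order estimate: same IPC -> performance scales with clock."""
    return clock_a_ghz / clock_b_ghz

# All three chips compared at the same 5.2 GHz peak overclock, so the clock
# term is exactly 1.0 and the observed ~1% gap (8086K at 159 on top) is all
# cache/RAM.
print(clock_scaling(5.2, 5.2))           # 1.00 -> expect near-identical marks

# A 7700K at 4.8 GHz versus the same chips at 5.2 GHz, by clock alone:
print(f"{clock_scaling(5.2, 4.8):.3f}")  # ~1.083 -> ~8% from clock speed
```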

 

 

Edited by chiliwili69
Posted (edited)

So basically non-existent. That's why I stopped throwing money at the game to try to improve my framerate and basically gave up on VR. Sad truth, fellas. Honestly, I wish I'd never bought the Samsung; it's a poor HMD with horrible comfort and ergonomics. The Rift blows it out of the water in that department. The Samsung starts hurting my face after minutes of use, whereas I could wear the Rift for a very long time before it became uncomfortable. IMO the halo design is garbage: it works for people with basketballs for heads, and last time I checked my head wasn't a sphere.

 

I ordered a VR cover, maybe that will help.

Edited by JonRedcorn

9 hours ago, SCG_Fenris_Wolf said:

Hence, the only way to get IL-2 running properly in VR is for the developers to upgrade and optimize their engine to use multiple cores and DX12.

Not DX12; let's open the field up a little and use something that really delivers the goods and is cross-platform: Vulkan.

 

DX12 has shown itself to be a non-performer when it comes to improvements over DX11 on modern CPU/GPU tech. Titles that have gone the Vulkan route have shown much more promise, and with Sony behind it on the PlayStation side as well as Steam, Vulkan offers more platforms to easily target one's software at. It also opens the door to GPUs beyond Nvidia, since AMD has shown strong performance on their hardware in Vulkan titles.

 

I can't understand why coders from Eastern countries would want to embrace proprietary tech anyway.

 

You are right though: core frequency has stagnated over the last 5 or so years. Dreaming of a 10 GHz core clock to drive your IL-2 VR experience is just that, a dream, as the industry moves more and more to parallelism and chiplet-based designs to deliver performance increases in computing.

10 hours ago, chiliwili69 said:

 

When I look at the CPUbenchmark page I tend to look at the peak overclocked SC mark. At 5.2 GHz the 9900K, 8086K and 8700K give basically the same SC mark (only 1% difference; the top is the 8086K at 159).

 

I have an i7-7700K overclocked to 4.8 GHz. According to this site, if overclocked further (to 5 GHz, I think?) it would bring me to a “peak overclocking bench” of 153 pts. Note the i7-9700K, i5-9600K and i7-8700 at 159, 156 and 155 pts. Does this tell me that upgrading from my 7700K would not have any effect if I could reach 5 GHz? To me there does not seem to be much difference between those points.

Posted (edited)
3 hours ago, Dutch2 said:

 

Does this tell me that upgrading from my 7700K would not have any effect if I could reach 5 GHz?

You would see a difference within the margin of error, i.e. not worth it at all. Dburne went from a much older Intel CPU to a new one; that's why he saw a performance increase.
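For what it's worth, the gaps Dutch2 quotes work out to low single digits (this is just arithmetic on the points he listed, not new measurements):

```python
# Percentage gaps between the peak-overclock single-core points quoted above
# (7700K at 153 pts versus the newer parts at 155-159 pts).
baseline = 153          # i7-7700K, per the figures quoted above
others = {"i7-9700K": 159, "i5-9600K": 156, "i7-8700": 155}

for name, pts in others.items():
    gain = (pts - baseline) / baseline * 100
    print(f"{name}: +{gain:.1f}%")   # roughly +1.3% to +3.9%
```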

Edited by JonRedcorn


To the OP: try using Oculus Tray Tool in-game to see the performance numbers. You're looking for the app render timing; CPU and GPU both need to be under 11 ms for 90 fps. Get into some action and see which one is slow. If it's the GPU, decrease your supersampling. If it's the CPU, drop to Balanced settings or overclock your 8700 (you should be able to get about 3% out of it with a BCLK overclock).
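The 11 ms figure is just the frame-time budget at 90 Hz; here is the same arithmetic for the refresh rates mentioned in this thread (assuming one frame has to be fully rendered per refresh):

```python
# Frame-time budget per refresh: if either the CPU or the GPU render time
# exceeds this, the headset misses the refresh and FPS drops (or ASW kicks in).
for hz in (90, 80, 45):
    budget_ms = 1000.0 / hz
    print(f"{hz} Hz -> {budget_ms:.1f} ms per frame")
# 90 Hz -> 11.1 ms, 80 Hz (Rift S) -> 12.5 ms, 45 Hz (ASW) -> 22.2 ms
```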


@blitze Vulkan is another framework, and the devs have used DirectX so far. A switch in frameworks doesn't just require rewriting a lot of code; it also throws away accumulated coding experience. That means a lot of man-hours put into relearning, a loss of effectiveness and speed in the workflow, and probably some bugs along the way. It's a monetary mountain that 777, with its tight budget and tight schedule for integrating and releasing new expansions, cannot afford to climb.

 

DirectX 12, however, is within the realm of possibility and is also open to both Nvidia and AMD, even if Vulkan may be the better framework from a technical point of view.

Posted (edited)
On 4/25/2019 at 1:24 AM, JonRedcorn said:

You would see a difference within the margin of error, i.e. not worth it at all. Dburne went from a much older Intel CPU to a new one; that's why he saw a performance increase.

 

So if this makes no sense, then I think I will not upgrade to an i7-9700K. I'll stay with my i7-7700K but have the CPU delidded to get the last drop of OC and reach 5 GHz.

Edited by Dutch2

Posted (edited)
15 hours ago, [CPT]Crunch said:

He never stated what kind of drive he had; that too can make an impact.

 

I installed the game on a PCIe NVMe SSD to minimize latency from the storage media. After posting my original question I had to reinstall Windows due to a system glitch. I was hoping the reinstall would improve performance, but today I fired up the game again in VR and it's the same as before: FPS in the 40s and 50s in combat situations, using low graphics settings with 2x AA. Maybe I'll just wait for better days (optimization, the ASW 2.0 implementation, etc.) and come back later.

Edited by VeganMobster
3 hours ago, VeganMobster said:

 

Today I fired up the game again in VR and it's the same as before: FPS in the 40s and 50s in combat situations, using low graphics settings with 2x AA.

Why did you go for the i7-8700 and not the similarly priced i5-8600K or the i7-8700K? Is it that you don't have a mobo that can handle the K-series CPUs?

 

 

On 4/26/2019 at 10:13 AM, SCG_Fenris_Wolf said:

DirectX 12, however, is within the realm of possibility and is also open to both Nvidia and AMD, even if Vulkan may be the better framework from a technical point of view.

 

I agree with your point that the migration to DirectX 12 would be easier, but from what I've seen DirectX 12 doesn't seem to bring much to the table over DirectX 11. Definitely not the same performance gains that Vulkan brings over DirectX 11 in titles that support both.

 

Another thing is that with Vulkan, IL-2 could more easily be migrated to other operating systems. I know, the year of Linux was 2004, but strides have been made to make Linux much more gaming-friendly compared to years past. Then there is OS X, or its underlying BSD base. Having platform options is nice, especially given how MS has been going recently.

 

Lastly, Vulkan seems to perform very well on GPUs that are compute-heavy, and that means AMD GPUs, which allows them to be competitive with Nvidia. That would be nice: competitive GPUs would give us a few more options beyond the price-gouging Nvidia cards.

 

Anyway, even sticking with DirectX 11, offloading other CPU-intensive elements like physics and AI across threads that scale with one's CPU core count would help. Maybe even the weather could be offloaded onto a separate thread. It all helps.

 

Just some thoughts to make a great thing better.

Posted (edited)

Haha, IL-2 isn't getting Vulkan support ever. I wouldn't even want it, since Nvidia cards seem to run like garbage on it.

Edited by JonRedcorn

On 4/27/2019 at 10:27 PM, Dutch2 said:

Why did you go for the i7-8700 and not the similarly priced i5-8600K or the i7-8700K? Is it that you don't have a mobo that can handle the K-series CPUs?

 

 

My motherboard is a pretty good one, so that wasn't the reason. I got the i7 8700 because it seemed to yield more stable benchmark performance in games compared to the i5. And I didn't get the K variant because it was more expensive and I didn't expect to have to immediately overclock an 8th gen i7. And I don't need to in any other game I play. 

 

I don't wanna throw more money at this or spend more time looking for the perfect settings, so I'll focus my time and energy on other things. 

 

8 hours ago, VeganMobster said:

I didn't get the K variant because it was more expensive and I didn't expect to have to immediately overclock an 8th gen i7.

 

It was a poor decision really; the 8600K would have given better performance in nearly all the titles you can play. Like every single game. Buying a non-K Intel chip is a very poor idea. Clock speed is everything in gaming.

10 hours ago, JonRedcorn said:

It was a poor decision really; the 8600K would have given better performance in nearly all the titles you can play. Like every single game. Buying a non-K Intel chip is a very poor idea. Clock speed is everything in gaming.

 

Uhhh...thanks? 


Virus* changed something else as well; you've just shown us that his entry is faulty, not ceteris paribus. Maybe the old entry should be removed since it's misleading; the game doesn't use that much RAM anyway.

 

Moving to an i9 9900K would only give you good FPS if you overclocked it to at least 4.9 GHz, or better yet 5 GHz. The i5 9600K is the secret champion for IL-2: most of them go up to 5 GHz, their heat spreaders are soldered on (much better thermals), and they don't make the VRM spike as much. They are also easier on the wallet; better to put the extra money into good fast RAM and a good quality mainboard (don't skimp on the mainboard!).

Posted (edited)
9 hours ago, BlackMambaMowTin said:

I'm using an i7 6700K that won't overclock much. My GPU is a 2080 Ti on the Rift S.

 

Would moving to an i9 9900K give me a steady 80 fps?

 

In this spreadsheet, I notice one guy, Virus*, goes from 57 fps to 87 fps when he changes from 16 GB to 32 GB of RAM.

 

https://docs.google.com/spreadsheets/d/1gJmnz_nVxI6_dG_UYNCCpZVK2-f8NBy-y1gia77Hu_k/edit#gid=46936995

Like my more or less identical 7700K, this i7-6700K needs to be delidded; after that it should reach the sweet spot of 5 GHz, and you could even use a good air cooler, no need for water cooling on those quad cores. In my case the whole delidding cost me €30 for the der8auer tool and €15 for the thermal paste and liquid metal.

My 7700K is only a refresh of your 6700K.

 

Edited by Dutch2

On 6/8/2019 at 10:59 PM, BlackMambaMowTin said:

In this spreadsheet, I notice one guy, Virus*, goes from 57 fps to 87 fps when he changes from 16 GB to 32 GB of RAM.

 

https://docs.google.com/spreadsheets/d/1gJmnz_nVxI6_dG_UYNCCpZVK2-f8NBy-y1gia77Hu_k/edit#gid=46936995

 

The initial test of Virus* was a bit weird; that's why it is marked in yellow (yellow means "not as expected" or "something wrong here"). So don't take that first test as valid.

 

There is no gain from going from 16 GB to 32 GB.

Also, there is no gain from going from whatever CPU at 5.0 GHz to a 9900K at 5.0 GHz.

Posted (edited)
On 6/10/2019 at 3:51 AM, chiliwili69 said:

 

The initial test of Virus* was a bit weird; that's why it is marked in yellow (yellow means "not as expected" or "something wrong here"). So don't take that first test as valid.

There is no gain from going from 16 GB to 32 GB.

Also, there is no gain from going from whatever CPU at 5.0 GHz to a 9900K at 5.0 GHz.

 

Thanks for clearing that up. If I move from my i7 6700K @ 4.2 GHz to an i9 9900K, will I be able to run at a steady 80 fps on the Rift S with my 2080 Ti? My i7 6700K is a dud in terms of OC.

Also, isn't the i9 9900K at 5.0 GHz?

Edited by BlackMambaMowTin

2 hours ago, BlackMambaMowTin said:

 

If I move from my i7 6700K @ 4.2 GHz to an i9 9900K, will I be able to run at a steady 80 fps on the Rift S with my 2080 Ti?

 

I have my i9 9900K running at 5.1 GHz on all 8 cores, no AVX offset.

With my 2080 Ti and Rift S, I see 80 fps the majority of the time in IL-2. Even on a cold start and taxi to takeoff in PWCG, I get 80 fps.

There will be some occasional dips in heavy combat with lots of planes, but it is still very smooth.

Usually a 4-on-4 results in a mostly steady 80 fps for me.

 

At stock settings for the i9 9900K, one core boosts to 5 GHz and the remaining cores run at 3.6 GHz.

It is very easy to overclock with a good cooler (I use a 360 mm AIO) and a good overclocking motherboard.

 

I am running graphics on Ultra with 2x AA, and SS at 1.4 via Oculus Tray Tool.

I recently switched from SteamVR to OpenComposite and that helped my performance. For some reason the last SteamVR update I got last week hurt it.

7 hours ago, dburne said:

 

I recently switched from SteamVR to OpenComposite and that helped my performance. For some reason the last SteamVR update I got last week hurt it.

 

I too noticed a performance hit after the last SteamVR update. That did it for me, so I'm now using OpenComposite.


SteamVR... I reinstall it every week. It's too damn inconsistent, you're absolutely right.

15 hours ago, dburne said:

I am running graphics on Ultra with 2x AA, and SS at 1.4 via Oculus Tray Tool.

I recently switched from SteamVR to OpenComposite and that helped my performance.

 

Any views on image quality at 4x AA vs. resulting performance hit? I always use 4x, didn't like the image quality at 0x, but never tried at 2x...

 

 


2x AA on a 4K 55" TV looks bad. There's a serious amount of shimmer on planes and other objects.

 

Graphics on Ultra. I can't say how 2x looks in VR, but 0x AA in VR looks horrible; everything crawls and shimmers.

20 hours ago, BlackMambaMowTin said:

If I move from my i7 6700K @ 4.2 GHz to an i9 9900K, will I be able to run at a steady 80 fps on the Rift S with my 2080 Ti?

 

You probably don't need to get that spendy and could go with the 9600K or 9700K. Get a 360 mm cooler as dburne suggests. That still won't guarantee 80 FPS everywhere, but it'll be close.

 

2 hours ago, Mephisto said:

Any views on image quality at 4x AA vs. resulting performance hit? I always use 4x, didn't like the image quality at 0x, but never tried at 2x...

 

2 hours ago, 307_Banzai said:

2x AA on a 4K 55" TV looks bad. There's a serious amount of shimmer on planes and other objects.

Graphics on Ultra. I can't say how 2x looks in VR, but 0x AA in VR looks horrible; everything crawls and shimmers.

 

dburne is using 1.4 pixel density, so his 2080 Ti is actually rendering nearly 2x the number of pixels (with 2x AA) and then downsampling to the resolution of the Rift S. That should do a hell of a job of reducing jaggies and shimmer (and also probably hurt your spotting ability if you're playing competitive multiplayer...). Personally I run a Rift S and a 2080 (non-Ti) with 4x AA and 1.1 pixel density (= 20% more pixels rendered and then downsampled). It looks better with more AA and more pixel density, for sure.
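A quick note on why 1.4 is "nearly 2x the pixels": the supersampling/pixel-density value is, as far as I understand it, a linear scale applied per axis, so the rendered pixel count grows with its square. That assumption matches the 2x and 20% figures above:

```python
# Pixel density / supersampling is applied per axis, so total rendered
# pixels scale with the square of the setting.
for density in (1.0, 1.1, 1.2, 1.4):
    scale = density ** 2
    print(f"SS {density}: {scale:.2f}x pixels ({(scale - 1) * 100:+.0f}%)")
# 1.1 -> 1.21x (+21%), 1.2 -> 1.44x (+44%), 1.4 -> 1.96x (+96%, "nearly 2x")
```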

