i7-8086K at 5.0 GHz with turbo... our new star?


Recommended Posts

chiliwili69
Posted (edited)

For those thinking to have a new CPU for IL-2 in VR....:

 

https://www.theverge.com/2018/6/5/17428534/intel-core-i7-8086k-anniversary-limited-edition-computex-2018

https://ark.intel.com/products/148263/Intel-Core-i7-8086K-Processor-12M-Cache-up-to-5_00-GHz

 

If default Turbo goes to 5.0GHz, I wonder how far you can go with some OC...

Let's see the price.

 

If you are from one of those lucky countries....? (not my case)

 

https://game.intel.com/8086sweepstakes/#

 

Edited by chiliwili69
Posted (edited)

IMHO it would be a waste if your only requirement is for IL-2 VR.

 

Regardless of what is claimed by most on this board, IL-2 in VR does not require highly overclocked Intel CPUs to reach the holy grail of VR performance nirvana.  It uses a poorly optimised engine that runs on a single CPU core, and once you reach 4.2GHz+ you start to get diminishing returns.

 

With my 4770K at 4.2GHz I got similar minimum FPS (in VR) to when I overclocked to 4.8GHz.  Getting to 4.8GHz was not easy, and it was just a quick balls-to-the-wall test at the time to check this "holy grail" claim in BoX.

 

When I upgraded (or downgraded, as many here would claim) from my 4770K to a Ryzen 1700X at only 3.8GHz, the performance in 2D and in VR was identical.

 

Your own 4790K thread shows this diminishing return perfectly.  Once you prevent your CPU from going under 90Hz in all situations (if that is your aim for VR), there is no point overclocking further, because there is zero benefit: the HMD is locked at 90Hz.  You were getting over 90FPS for every OC you used from 4GHz up to 4.7GHz.  The only time it will be of benefit is if you turn ASW off, raise the graphics settings and allow FPS to drop below 90FPS.  Or if you are exclusively playing IL-2 on a 1080p 144Hz 2D monitor.

Edited by ICDP
  • Upvote 1
Posted

 It uses a poorly optimised engine that uses a single CPU core.

 

 

Well, IL-2 maybe uses just one CPU core, but IMO that doesn't mean this sim is poorly optimised.

About the new CPU, I don't think it's a good idea to overclock it, as by default it requires a bit more voltage than other i7s.

Posted (edited)

I think it's a binned and pre-overclocked CPU, so there may not be much more headroom for overclocking.

 

By definition, if a game engine is using a single core in 2018, then it is poorly optimised.  Multi-core CPUs have been around for decades now and have been mainstream in the PC CPU market for well over a decade.

Edited by ICDP
  • Upvote 2
Posted (edited)
1 hour ago, ICDP said:

IMHO it would be a waste if your only requirement is for IL-2 VR.

 

Regardless of what is claimed by most on this board, IL-2 in VR does not require highly overclocked Intel CPUs to reach the holy grail of VR performance nirvana.  It uses a poorly optimised engine that runs on a single CPU core, and once you reach 4.2GHz+ you start to get diminishing returns.

 

 

I think that's true, to be honest. I don't really see or feel much of a difference between stock or a 5GHz OC with the i7-8700K. I might not even bother any more. I kind of realized this when I discovered my OC wasn't actually even running!

 

I don't even bother with a powerful OC any more and just use one set to about 4.5GHz, as I've got to the point of thinking it was all just placebo! There may have been an FPS gain, but as you say, being dependent on reprojection makes whatever difference it does make rather difficult to discern once you get past assuming that it must be making one. And I need reprojection!

 

The performance problems both flight sims (this and the other one) have at times are also not going to be solved by an OC, no matter how high. They are more to do with sudden and rapid FPS drops, which are relatively rare in BoS but still occur on certain campaign missions when there is simply too much going on for the game to handle in VR. Outside of campaign missions, for example, the HUD can be used with no problem, whereas during campaigns it causes large FPS drops which make the game flash.

 

Performance is great by and large, but I do hope the developers won't just stop where they are with BoS and VR development, as there are still areas like the zoom (FPS drops) and campaign performance that could be improved. I still think 3.002 has many more performance problems (sudden FPS drops) than 3.001 had.

Edited by Wolf8312
Posted

Yeah, no matter how you cut it, VR basically halves the FPS versus a monitor.

It is very demanding, as it renders the scene twice and merges the results.

That makes it tough on complex games that were initially developed without VR being much of a thought.

Historically, much of flight simming was spent tweaking one's system to get the last ounce of FPS out of it.

VR is more about tweaking to reach an acceptable level of performance one can live with, rather than an extra 5-10 FPS.

 

I think for the future much will depend on the VR Hardware makers optimizing their devices and software, along with Nvidia maybe optimizing drivers some more for VR also.

 

  • Upvote 1
Posted

Exactly guys, in VR once you are into the reprojection zone, a few % increase in FPS is not going to make a real difference.

 

I read somewhere on the board that the devs were planning to implement a Vulkan version of BoX.  Now this would make a difference: a modern engine using more cores.  Was I imagining this statement?

Posted (edited)
20 minutes ago, dburne said:

Yeah, no matter how you cut it, VR basically halves the FPS versus a monitor.

It is very demanding, as it renders the scene twice and merges the results.

That makes it tough on complex games that were initially developed without VR being much of a thought.

Historically, much of flight simming was spent tweaking one's system to get the last ounce of FPS out of it.

VR is more about tweaking to reach an acceptable level of performance one can live with, rather than an extra 5-10 FPS.

 

I think for the future much will depend on the VR Hardware makers optimizing their devices and software, along with Nvidia maybe optimizing drivers some more for VR also.

 

 

I sometimes pity new VR users because I know how long they will need to get adjusted to it and understand its limitations! The endless tweaking!

 

I'm putting my hopes in the next 1180 or Volta Ti or whatever it's going to be called. VR is great, but I just can't wait for the day when we are at a constant 90 FPS and, especially with the other sim, no mission is off limits!

 

4 minutes ago, ICDP said:

Exactly guys, in VR once you are into the reprojection zone, a few % increase in FPS is not going to make a real difference.

 

I read somewhere on the board that the devs were planning to implement a Vulkan version of BoX.  Now this would make a difference: a modern engine using more cores.  Was I imagining this statement?

 

I think that's DCS, buddy!

 

But that would be great, and maybe if DCS does it BoS will follow suit, because you're right: being limited to one core is a real hindrance to future development.

Edited by Wolf8312
chiliwili69
Posted
4 hours ago, ICDP said:

With my 4770K at 4.2GHz I got similar minimum FPS (in VR) to when I overclocked to 4.8GHz

 

I don't think the OC is a placebo effect, since it can be measured.

I conducted a recent OC test on a monitor with the Samuel track over a wider range, and the results were measurable (about 4 FPS per 0.2 GHz) up to 5.0GHz:

https://forum.il2sturmovik.com/topic/29322-measuring-rig-performance-common-baseline-for-il-2-v3/?do=findComment&comment=603630

 

And also here with the old Balapan track (about 5 FPS per 0.2 GHz):

https://forum.il2sturmovik.com/topic/29881-overclocking-a-4790k-for-better-bos-performance/?do=findComment&comment=485112

 

In VR it really depends on whether you use ASW ON or OFF. In my case I always have ASW OFF.

In that case, this is the test in VR I did with the 1070 card:

https://forum.il2sturmovik.com/topic/29881-overclocking-a-4790k-for-better-bos-performance/?do=findComment&comment=485296

There were other factors that decreased the gain beyond 4.3 GHz.

One of those factors is that the closer you are to 90 FPS, the less gain you achieve (in a monitor test you don't have this problem).

Other factors could be the GPU (it was a 1070) or RAM.

 

One of these days I will re-run this OC test in VR with the Samuel track and my 1080Ti.
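To put those numbers in perspective, here is a back-of-envelope sketch (purely illustrative, not game code): the ~4 FPS per 0.2 GHz slope from the monitor tests, extrapolated linearly. The scaling is an assumption that breaks down near the HMD cap, exactly the diminishing return discussed above.

```python
# Linear extrapolation of CPU-bound FPS vs. core clock, using the
# measured monitor-test slope of ~4 FPS per 0.2 GHz (= 20 FPS/GHz).
# Illustrative only: real gains flatten as you approach the HMD's
# 90 Hz cap, and GPU/RAM limits kick in as well.
def estimated_fps(base_fps: float, base_ghz: float,
                  target_ghz: float, fps_per_ghz: float = 20.0) -> float:
    return base_fps + (target_ghz - base_ghz) * fps_per_ghz

# e.g. a rig doing 70 FPS at 4.0 GHz would be expected near 84 FPS at 4.7 GHz
print(round(estimated_fps(70.0, 4.0, 4.7), 1))  # 84.0
```

In a monitor test this slope held all the way up to 5.0 GHz; in VR it only holds while you are comfortably below the refresh cap.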

4 hours ago, ICDP said:

Your own 4790K thread shows this diminishing return perfectly.  Once you prevent your CPU going under 90Hz in all situations (if that is your aim for VR), then there is no point overclocking further, because there is zero benefit as the HMD is locked at 90Hz.  You were getting over 90FPS for every OC you used from 4GHz up to 4.7GHz. 

I think the test you refer to was run on just a monitor; it is this one, posted above:

https://forum.il2sturmovik.com/topic/29881-overclocking-a-4790k-for-better-bos-performance/?do=findComment&comment=485112

 

Obviously, if you have a rig that runs IL-2 VR at 90 FPS at all times, you don't need to overclock it any more, since you have achieved Nirvana. That rig doesn't exist today without using liquid nitrogen.

chiliwili69
Posted (edited)
4 hours ago, ICDP said:

I think it's a binned and pre-overclocked CPU, so there may not be much more headroom for overclocking.

 

Yeah, very much that. It is a pre-overclocked 8700K (let's see how 8700K prices drop...).

 

 

 

Edited by chiliwili69
Posted (edited)
1 hour ago, chiliwili69 said:

 

I don't think the OC is a placebo effect, since it can be measured.

I conducted a recent OC test on a monitor with the Samuel track over a wider range, and the results were measurable (about 4 FPS per 0.2 GHz) up to 5.0GHz:

https://forum.il2sturmovik.com/topic/29322-measuring-rig-performance-common-baseline-for-il-2-v3/?do=findComment&comment=603630

 

And also here with the old Balapan track (about 5 FPS per 0.2 GHz):

https://forum.il2sturmovik.com/topic/29881-overclocking-a-4790k-for-better-bos-performance/?do=findComment&comment=485112

 

In VR it really depends on whether you use ASW ON or OFF. In my case I always have ASW OFF.

In that case, this is the test in VR I did with the 1070 card:

https://forum.il2sturmovik.com/topic/29881-overclocking-a-4790k-for-better-bos-performance/?do=findComment&comment=485296

There were other factors that decreased the gain beyond 4.3 GHz.

One of those factors is that the closer you are to 90 FPS, the less gain you achieve (in a monitor test you don't have this problem).

Other factors could be the GPU (it was a 1070) or RAM.

 

One of these days I will re-run this OC test in VR with the Samuel track and my 1080Ti.

I think the test you refer to was run on just a monitor; it is this one, posted above:

https://forum.il2sturmovik.com/topic/29881-overclocking-a-4790k-for-better-bos-performance/?do=findComment&comment=485112

 

Obviously, if you have a rig that runs IL-2 VR at 90 FPS at all times, you don't need to overclock it any more, since you have achieved Nirvana. That rig doesn't exist today without using liquid nitrogen.

 

Apologies, I thought at first that was a VR test you did, though in hindsight I should have realised the numbers were far too high.  I read on down the page and found your VR results.

 

I won't (and indeed can't) deny that the results of overclocking are measurable; your excellent work in this regard proves it.  The issue I am pointing out is that, looking purely at VR use, an overclock beyond ~4.3GHz doesn't make any difference.  There is no current or future CPU that will run IL-2 VR without issues, because even Intel is going for more cores rather than GHz now.  The IL-2 graphics engine is simply not designed optimally for VR use.  Your own 4790K testing shows the minimums for VR are all pretty much identical at 43-44 FPS regardless of overclock.  VR is measured differently to normal 2D benchmarks.

 

Simply saying that in VR a CPU overclock yields a 20% increase in average FPS does not mean much.  With VR it is the minimums that matter, whether ASW is on or off.  When I had my Rift, IL-2 was either smooth with artifacts and ASW on, or ASW off and stutters.  There is a reason ASW/reprojection is on by default in VR HMDs: VR needs constant smooth FPS, not fluctuations.
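A quick way to see why averages mislead is to compute the 1% low from a frame-time log. This is just an illustrative sketch with made-up frame times, not output from any real benchmark tool:

```python
# Illustrative: average FPS vs. 1% low FPS from a frame-time log (ms).
def avg_fps(frame_times_ms):
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def one_percent_low_fps(frame_times_ms):
    worst = sorted(frame_times_ms, reverse=True)   # slowest frames first
    n = max(1, len(worst) // 100)                  # the slowest 1%
    return 1000.0 / (sum(worst[:n]) / n)

# Hypothetical log: 99 smooth frames (~11.1 ms) and one 22.2 ms stutter.
log = [11.1] * 99 + [22.2]
print(round(avg_fps(log)))              # 89 -- looks fine on paper
print(round(one_percent_low_fps(log)))  # 45 -- the stutter you actually feel
```

One frame in a hundred is enough to make the headset judder, even though the average barely moves.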

 

 

Edited by ICDP
BeastyBaiter
Posted (edited)

Well, at 4.7GHz I get 90 fps 95% of the time and 45 fps the other 5%. If I run it stock (4.3 GHz), I get 45 fps 95% of the time and 90 fps 5% of the time. Yes, the minimums are the same, but one is clearly a lot better than the other.

 

On topic, this 8086 is just a binned 8700k from what I've read. It even uses thermal toothpaste still.

Edited by BeastyBaiter
Posted (edited)
21 minutes ago, BeastyBaiter said:

Well, at 4.7GHz I get 90 fps 95% of the time and 45 fps the other 5%. If I run it stock (4.3 GHz), I get 45 fps 95% of the time and 90 fps 5% of the time. Yes, the minimums are the same, but one is clearly a lot better than the other.

 

On topic, this 8086 is just a binned 8700k from what I've read. It even uses thermal toothpaste still.

 

Yes, the number of times your FPS will drop to 45 is reduced with an OC, but I very much doubt there is such a dramatic shift in percentages as you claim.  While I only tested a 4770K up to 4.8GHz, there was nowhere near that much of an improvement; for that matter, at no point was I getting 45 FPS 95% of the time.  Can you post some results from a FRAPS run on the VR test mission, please?

Edited by ICDP
BeastyBaiter
Posted

The VR test mission is bullshit and completely detached from actual gameplay. It is useless for benchmarking purposes. The settings used in that benchmark thread are also completely unplayable without icons. So no, I won't do that. I will say that with the high preset, high clouds, 2x AA, most everything else set to medium-ish and 1.3 PD, 90 fps is the norm in career missions, including in combat with my system.

Posted
4 minutes ago, BeastyBaiter said:

The VR test mission is bullshit and completely detached from actual gameplay. It is useless for benchmarking purposes. The settings used in that benchmark thread are also completely unplayable without icons. So no, I won't do that. I will say that with the high preset, high clouds, 2x AA, most everything else set to medium-ish and 1.3 PD, 90 fps is the norm in career missions, including in combat with my system.

 

 I agree with you on the VR test mission for the most part, but it is a common baseline for all.  How about you do a test in a mission of your choice at 4.3GHz compared to 4.7GHz, with all other GPU and RAM settings kept identical?  I genuinely find it intriguing that you are getting such a dramatic shift with less than a 10% CPU overclock.  If you were exaggerating for dramatic effect, then I understand ;)

Posted (edited)
19 minutes ago, BeastyBaiter said:

The VR test mission is bullshit and completely detached from actual gameplay. It is useless for benchmarking purposes. The settings used in that benchmark thread are also completely unplayable without icons. So no, I won't do that. I will say that with the high preset, high clouds, 2x AA, most everything else set to medium-ish and 1.3 PD, 90 fps is the norm in career missions, including in combat with my system.

 

Yep, fully agree. That benchmark is good for some things, I guess, but it is not an accurate representation of what one experiences actually playing the game. They are two very different things.

I certainly would not base any hardware buying decisions off of it.

;)

 

I fully plan on a new build this year; I already have the funds set aside for it, and this new CPU caught my eye, but I think I am still going to hold off until late in the year and see what might be coming out around then.

Edited by dburne
Posted

I think it is a sign that the rumours are true about Intel getting nowhere with their 10nm die shrink.  They are stuck at 14nm and need to keep people thinking there is a genuine upgrade path coming, and of course to keep the shareholders happy.  It is pure marketing BS, to be frank.

BeastyBaiter
Posted
1 hour ago, ICDP said:

 

 I agree with you on the VR test mission for the most part, but it is a common baseline for all.  How about you do a test in a mission of your choice at 4.3GHz compared to 4.7GHz, with all other GPU and RAM settings kept identical?  I genuinely find it intriguing that you are getting such a dramatic shift with less than a 10% CPU overclock.  If you were exaggerating for dramatic effect, then I understand ;)

 

That can be arranged, but it will have to wait till the weekend. I can't tell you the exact difference between 4.3 and 4.7 GHz, but it's pretty extreme at the settings I use. I think I ran this same type of test with the old test track: it went from somewhere around 50 FPS average to 88 FPS average, if I'm remembering correctly. The settings I use currently give a fairly similar result.

 

The reason this happens is that 4.3 GHz is slightly under what is needed to hit 90 FPS with any degree of consistency, so even with ASW off, the system is largely locked to 45 FPS. Pushing to 4.7 GHz reverses the situation: the system has just ever so slightly more than is needed to consistently hit 90 FPS. It will dip with lots of shooting going on, but normally just barely scrapes by at 90 FPS. Of course, the situation with a monitor is totally different; in that case it's something truly trivial like 160 FPS vs 170 FPS (made-up numbers, I don't know offhand what it is). But since VR is an all-or-nothing system when it comes to 90 FPS, it has a bigger effect.
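That all-or-nothing behaviour can be sketched as a simple step function (illustrative only, modelled on Oculus-style ASW with a 90 Hz panel, not actual runtime code):

```python
# Illustrative: how reprojection turns raw render rate into the rate
# the HMD actually displays. If the renderer can't hold the panel's
# 90 Hz, ASW-style reprojection drops it to half rate (45) and
# synthesizes the in-between frames.
def delivered_fps(raw_fps: float, refresh_hz: float = 90.0) -> float:
    return refresh_hz if raw_fps >= refresh_hz else refresh_hz / 2.0

# A small clock bump that lifts raw FPS from 88 to 92 flips the HMD
# from 45 to 90, while the same bump from 95 to 99 changes nothing.
for raw in (80.0, 88.0, 92.0, 99.0):
    print(raw, "->", delivered_fps(raw))
```

So the same ~10% overclock is either worthless or transformative, depending entirely on which side of the 90 FPS threshold it lands you.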

Posted

That does make more sense because my experience shows VR throws all existing benchmark "truths" out the window.

TUS_Samuel
Posted
15 hours ago, chiliwili69 said:

If default Turbo goes to 5.0GHz, I wonder how far you can go with some OC...

 

Our CPUs already work at around 4.8 GHz. Ideally min FPS should double, but no CPU can go up to 9 GHz. So our only hope is the devs splitting the load across cores.

From my overclocking experience, a 10-20% FPS gain is not visible by eye and can rarely be a reason for an upgrade.

 

VR benchmarks helped to locate the bottleneck and find out which CPUs are suitable for IL-2 and which are not. Some CPUs unexpectedly turned out to be junk in IL-2, and people were saved from buying them. So the usefulness of chilli's thread is not even in question. The dogs bark, but the caravan moves on.

  • Upvote 2
Posted
16 hours ago, ICDP said:

I think it's a binned and pre-overclocked CPU, so there may not be much more headroom for overclocking.

 

By definition, if a game engine is using a single core in 2018, then it is poorly optimised.  Multi-core CPUs have been around for decades now and have been mainstream in the PC CPU market for well over a decade.

 

It seems too unfair to just say the IL-2 engine is "poorly optimised"; surely the issue is DX11 rather than the proprietary IL-2 game/graphics engine...

 

Pretty sure the rumours of Vulkan were for DCS, but they took well over 5 years for the development and change to DX11, so I personally would not hold my breath for that one.

 

The number of games released in 2017/18 that are designed around DX12 or Vulkan is not exactly mainstream.

The change to DX11 for IL-2 was fairly recent. I am sure development for DX12 is being considered, but an immediate switch right now would leave out far more customers than there are VR users at this time (hardware limitations). Personally I would love the jump to DX12 or Vulkan, but it is not very simple or cheap.

 

Or perhaps I am missing something? 

 

Cheers, Dakpilot 

Posted

Being multi-threaded is not exclusively a DX12 or Vulkan feature.  Granted, multi-threaded code is harder to implement, and DX12 and Vulkan do make it easier.  Stating that the IL-2 engine is poorly optimised for multi-core CPUs is a fact; fairness has nothing to do with it.

  • Upvote 1
Posted
2 hours ago, ICDP said:

Being multi-threaded is not exclusively a DX12 or Vulkan feature.  Granted, multi-threaded code is harder to implement, and DX12 and Vulkan do make it easier.  Stating that the IL-2 engine is poorly optimised for multi-core CPUs is a fact; fairness has nothing to do with it.

 

My understanding of the (often misused) phrase "poorly optimized" just seems different to yours, no worries!

 

Cheers, Dakpilot 

Posted
On 6/6/2018 at 8:32 AM, chiliwili69 said:

If default Turbo goes to 5.0GHz, I wonder how far you can go with some OC...

Why the Hell would you have to overclock that?

=362nd_FS=Hiromachi
Posted

There is no guarantee it will clock any higher. While it may be easier to hit 5.0-5.2 GHz, beyond that you may be reaching silicon limitations. And well, yeah, there would be no reason to OC it beyond that. An overclocked 8700K at 4.9 GHz has power consumption of around 170W under heavy loads.

Posted (edited)

Hello !

 

On 6/6/2018 at 4:32 PM, dburne said:

I think for the future much will depend on the VR Hardware makers optimizing their devices and software, along with Nvidia maybe optimizing drivers some more for VR also.

 

Totally agree! I guess Oculus and HTC/Steam would already have released much higher density HMDs if they weren't also looking for reliable ways to alleviate CPU and GPU workload (eye tracking, etc.).

 

Not exactly related to nVidia driver optimization, but I remember nVidia presented new VR APIs during the launch of the 10x0 generation (Multi-Res Shading and Lens Matched Shading, making it possible to lower GPU load and render both eye views in a single pass, IIRC), and I was wondering if IL-2 (or DCS) make any use of them...

Edited by ShugNinx
Added links to nvidia VRWorks
  • Thanks 1
BeastyBaiter
Posted

DCS will not use any hardware-specific features (with an exception for the Oculus Rift, since it predates SteamVR), so any part of Nvidia's GameWorks is automatically rejected. Pretty sure BoX does the same. Doing otherwise is an absolute nightmare for the programmers, so such things are typically only done by the largest studios or by game engine makers (Unreal, Unity...). That said, the biggest limitation for VR in both BoX and DCS is the lack of CPU utilization. They have to move to a multi-threaded graphics engine in order to feed the graphics card fully.

Posted
9 hours ago, BeastyBaiter said:

Doing otherwise is an absolute nightmare for the programmers and so such things are typically only done by the largest studios or by game engine makers (Unreal, Unity...). 

I suspected only major studios and game engine developers could afford to fully optimize their rendering engines for VR, yet I wish I knew why implementing nVidia VRWorks is such a nightmare for small studios (you guessed it, I'm no developer).

 

VR is by far the biggest leap in combat flight simulation we've seen in years, if not decades; ED and 1C will have to optimize their respective graphics engines sooner or later...

chiliwili69
Posted
On 6/7/2018 at 9:39 PM, Poochnboo said:

Why the Hell would you have to overclock that?

The more OC --> the less bottlenecked the CPU --> the more FPS we get in VR:

 

https://forum.il2sturmovik.com/topic/29881-overclocking-a-4790k-for-better-bos-performance/?do=findComment&comment=485112

 

Or take a look at these tests:

https://docs.google.com/spreadsheets/d/1gJmnz_nVxI6_dG_UYNCCpZVK2-f8NBy-y1gia77Hu_k

BeastyBaiter
Posted
13 hours ago, ShugNinx said:

I suspected only major studios and game engine developers could afford to fully optimize their rendering engines for VR, yet I wish I knew why implementing nVidia VRWorks is such a nightmare for small studios (you guessed it, I'm no developer).

 

VR is by far the biggest leap in combat flight simulation we've seen in years, if not decades; ED and 1C will have to optimize their respective graphics engines sooner or later...

 

A fair question, and the reason is pretty straightforward: you do not want to make your game only work on one company's components. AMD is a major player in the gaming PC space. They currently hold around 20-30% of GPU market share, held 50% not that long ago, and are very likely to be around 50% again within a few years. Implementing a vendor-specific coding solution in this situation is extremely problematic: any game company that implements Nvidia's GameWorks must also create non-GameWorks code for all GPUs not compatible with it, or simply lose those potential customers, as they cannot run the program. So it's a choice between not using it, doubling your workload for that component, or losing half your customers. For small to mid-sized studios, that's a very easy choice.
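The "double your workload" point boils down to maintaining two render paths. A toy sketch (every name here is invented for illustration and does not correspond to any real graphics API):

```python
# Toy sketch of vendor-specific fast paths: the function and path names
# are hypothetical. The point is that shipping a vendor-only feature
# forces a portable fallback path to exist (and be maintained) too.
def choose_stereo_path(vendor: str, has_single_pass_stereo: bool) -> str:
    if vendor == "nvidia" and has_single_pass_stereo:
        return "single_pass"   # vendor fast path (VRWorks-style)
    return "two_pass"          # portable fallback every GPU must have

print(choose_stereo_path("nvidia", True))   # single_pass
print(choose_stereo_path("amd", False))     # two_pass
```

Two code paths means two sets of testing, debugging, and tuning for every release, which is exactly the cost a small studio avoids by writing only the portable path.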

 

It is worth pointing out that Nvidia's GameWorks doesn't even work on most Nvidia cards. Every time they release a new generation of graphics card they add new GameWorks features, but those don't necessarily work on the previous generation. Their upcoming ray-tracing thing is a prime example: it will work on the 1180 but not the 1080 Ti, despite the two appearing to be the same card with not much more than a name change and slightly different VRAM.

 

10 hours ago, chiliwili69 said:

 

 

Exactly. VR performance is a step function: below a certain point you will get 45 FPS no matter what in BoX, and above it you'll get 90 FPS instead. With the i7-8700K, that point is around 4.5-5.0 GHz. There currently isn't a single CPU on the market with that clock speed (all-core turbo, as that's the real clock speed).

Posted
9 hours ago, BeastyBaiter said:

 

A fair question, and the reason is pretty straightforward [...]

Indeed, I could have figured that one out with little effort!

 

This is a consequence of the lack of a VR standard, though I guess OpenVR (used by SteamVR, IIRC) might become one. In that case, could the OpenVR dev team implement AMD/nVidia proprietary solutions into a standard set of APIs?

SCG_Fenris_Wolf
Posted (edited)
On 6/10/2018 at 9:51 PM, ShugNinx said:

Hello !

 

 

Totally agree! I guess Oculus and HTC/Steam would already have released much higher density HMDs if they weren't also looking for reliable ways to alleviate CPU and GPU workload (eye tracking, etc.).

 

Not exactly related to nVidia driver optimization, but I remember nVidia presented new VR APIs during the launch of the 10x0 generation (Multi-Res Shading and Lens Matched Shading, making it possible to lower GPU load and render both eye views in a single pass, IIRC), and I was wondering if IL-2 (or DCS) make any use of them...

I tried propagating this a year ago, but the response was even hostile, because it "would favor Nvidia over AMD".

 

Even though AMD had no comparable technology available for such cases.

Edited by SCG_Fenris_Wolf
SCG_Fenris_Wolf
Posted (edited)

Yep, but even that is new ;)

https://gpuopen.com/gaming-product/liquidvr/?webSyncID=91090c0c-6164-3697-95cd-2928c5a80821&sessionGUID=4134ea7e-a2aa-6404-5284-a433b0cc146b

Checking this, however, AMD does not have the same tech as NVidia for calculating one scene and casting it onto both eyes with an offset. They just rebranded the VR-related part of their drivers to "LiquidVR", technology which is part of the standard package with NVidia GeForce drivers. AMD's strategy in the market isn't differentiation like NVidia's, which is sad IMO, as competition is always good. Instead, both companies have carved out a piece of the cake: one goes for new technology development, the other for the best bang for the buck.

 

 

Still, even for AMD users there may be some improvements with it. And by now we could have had NVidia's API implemented, with AMD's implemented as soon as they're ready as well...

Edited by SCG_Fenris_Wolf
Posted

I'm a nobody here, but I agree with SCG_Fenris_Wolf.
