
Voyager

Members
  • Content Count

    538
  • Joined

  • Last visited

Community Reputation

165 Excellent

1 Follower

About Voyager

  • Rank
    Member

Recent Profile Visitors

819 profile views
  1. He's running into a different problem. Basically IL-2 can handle up to a certain number of controller devices, and then it can't talk to any new ones. I want to say the number was around 16(?). I ran into something similar a few months back, and this was basically what I had to do to fix it too. Though in my case it was a new pair of MFG pedals that pushed it over the edge lol. This comes up more with the CH Control Manager because it can create a lot of different virtual controllers, which makes it easier to accidentally fill up your inputs list.
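If you want to check how many devices Windows is actually exposing before IL-2 ever sees them, a quick way is just to enumerate them. Here's a rough Python/pygame sketch (assuming pygame is installed; the ~16 figure above is from memory, not anything official):

```python
# Minimal sketch: count the game controllers the OS currently exposes.
# Assumes pygame is installed; the ~16-device figure mentioned above is
# from memory, not a documented IL-2 limit.
import pygame

pygame.init()
pygame.joystick.init()

count = pygame.joystick.get_count()
print(f"Detected {count} controller(s)")
for i in range(count):
    joy = pygame.joystick.Joystick(i)
    joy.init()  # harmless on pygame 2, required on older versions
    print(f"  {i}: {joy.get_name()} "
          f"(axes={joy.get_numaxes()}, buttons={joy.get_numbuttons()})")
```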
  2. Honestly, I had been hoping performance would scale better with the additional CUDA cores, but now I'm wondering if we've passed the point of diminishing returns.
  3. @chiliwili69 If those numbers bear out, I'm wondering if the 30 series has passed the point of diminishing returns of parallelism? I think the review embargo should lift tomorrow morning and we will see official numbers. I'm hoping we can see some tests from people in the Il-2 community soon too. What I'm hearing from the other flight sims hasn't been very encouraging. 😕
  4. The standard 3070 uses standard GDDR6, so I'd expect a 16GB 3070 to retain the GDDR6, and a 3080 20GB to use GDDR6X. Seems odd to nerf the 3080 by using slower memory.
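For a rough sense of the gap between the two memory types, here's the usual back-of-envelope bandwidth math (the bus widths and data rates below are the commonly reported figures for these cards, so treat them as assumptions):

```python
# Rough sketch: theoretical memory bandwidth = (bus width / 8) * per-pin data rate.
# Bus widths and data rates are the commonly reported figures for these cards,
# used here only to illustrate the GDDR6 vs GDDR6X gap.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps  # GB/s

print(f"3070, 256-bit GDDR6  @ 14 Gbps: {bandwidth_gb_s(256, 14):.0f} GB/s")
print(f"3080, 320-bit GDDR6X @ 19 Gbps: {bandwidth_gb_s(320, 19):.0f} GB/s")
```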
  5. Apparently some people were writing bots to submit false bids on scalper cards, so even if you had gotten one and tried to flip it, it would probably have resulted in a set of no-pay bids.
  6. I've resigned myself to getting one a few months from now when they are actually available. Or if the RDNA2 chips turn out to be good.
  7. The 3090 is *not* a cut down Titan card. It replaces the Titan line, which was an nVidia-exclusive part, with a part that can be sold through other board vendors. You can tell that it is a Titan-class card because it requires all of the memory controller units filled and has all SMs active. As I understand it, the issue nVidia had with doing anything between 10 and 24GB of VRAM is that only 1GB and 2GB GDDR6X modules have been designed. I think a normal Super-style refresh is currently held up on Micron developing 1.5GB modules. From the current most likely rumors, AMD is using standard GDDR6 on their cards, so they are not limited in that way. But, to blow cold with the same breath, nVidia generally has had better compression techniques, so historically it seems to use less VRAM for the same amount of actual data. The interesting thing is, if the 6900 is an 80 CU card with 16GB of VRAM, that would imply that the RDNA CUs are not tied to the memory controllers as closely as they seem to be in the nVidia architectures. That's just something that hit me right now, but I haven't looked at any numbers for it yet.
     @ZachariasX From what I understand, only Doom Eternal with the maximum resolution textures ends up exceeding 8GB of VRAM usage. It shows up as a fairly significant drop in frame rates relative to other cards in its performance class when those textures are enabled. I think Hardware Unboxed mentioned it in their reviews. The allocation is the program saying "Don't let anyone else have this space." The usage is the program actually stuffing data into that space. If it runs out of space in VRAM it stores the rest of the textures in system memory and pulls those parts from there when it needs to, but that is a *lot* slower than keeping them locally in VRAM.
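If the allocation vs usage distinction isn't clear, here's a minimal sketch using PyTorch's CUDA memory counters. It's only an analogy for the reserve-vs-fill behaviour, not how any game actually manages its VRAM (assumes PyTorch with a CUDA GPU):

```python
# Minimal sketch of "allocation" vs "usage", via PyTorch's caching allocator.
# Assumes PyTorch with CUDA is available; this is only an analogy for the
# reserve-vs-fill distinction, not how any particular game manages VRAM.
import torch

assert torch.cuda.is_available()

x = torch.zeros(1024, 1024, 64, device="cuda")  # ~256 MiB of live data
y = torch.zeros(1024, 1024, 64, device="cuda")  # another ~256 MiB
del y  # the data is gone, but the space stays claimed by the process

# "Allocation": VRAM the process has claimed from the driver (~512 MiB here).
print(f"reserved : {torch.cuda.memory_reserved()  / 2**20:.0f} MiB")
# "Usage": VRAM actually occupied by live data (~256 MiB here).
print(f"allocated: {torch.cuda.memory_allocated() / 2**20:.0f} MiB")
```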
  8. I'd be careful of assuming that without knowing the number of polygons and the CPU-specific calculations being performed. Recall, the big hammers are New York City and aircraft avionics; we don't know the relative difference in complexity between the aerodynamic models, and the weather model is significantly more complex than anything I've seen before. Without being able to compare the complexity between competitors, all we can say for certain is that it is very single-thread-performance bound and that it hits CPU hardware harder than other flight sims; it is much harder to compare the relative optimization of the engines.
  9. Haven't seen that one yet, but I think he is seriously misinterpreting it. If you take a look at the 2080 Ti, it lists 4352 cores, but each of those cores is a paired FP32 and Int32 unit, and can do both in a single pass. In contrast, when the 3080 lists "8704" cores, they are actually talking about 4352 FP32-only cores paired with 4352 FP32/Int32 cores. If you have an even split of FP32 and Int32 operations, it takes exactly the same number of passes to complete as a 2080 Ti does. If, in contrast, you have a purely FP32-driven render, you should be able to do it in half the passes that a 2080 Ti would. That's why the type of operation we're looking at will matter, and we really aren't able to tell without either benchmarks or access to the source code and enough specialist knowledge to understand the ratio of floating point vs integer math it's using. I believe nVidia thinks that most rendering is limited by floating point operations, and feels that being able to give up integer operations in exchange for more FP32 ones will be a big advantage. I'm honestly a bit bugged that it got tangled up in marketing speak, but I understand why marketing is pushing it that way 😕
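To put numbers on the pass-count argument, here's a toy model of one SM (64 FP32 + 64 INT32 units on Turing vs 64 FP32 + 64 flexible units on Ampere; scheduling, occupancy and memory are all ignored, so this only illustrates the reasoning, it isn't a performance model):

```python
# Toy model of the FP32/INT32 pass-count argument above. Per SM, per clock:
#   Turing: 64 FP32 units + 64 INT32-only units
#   Ampere: 64 FP32 units + 64 units that can do either FP32 or INT32
# Ignores scheduling, occupancy, memory, etc. -- it only illustrates the
# "even mix vs pure FP32" reasoning in the post.
import math

def passes_turing(fp32_ops: int, int32_ops: int) -> int:
    # Each pass retires up to 64 FP32 ops and 64 INT32 ops in parallel.
    return max(math.ceil(fp32_ops / 64), math.ceil(int32_ops / 64))

def passes_ampere(fp32_ops: int, int32_ops: int) -> int:
    # INT32 work must use the flexible units; leftover flexible slots take FP32.
    passes = 0
    while fp32_ops > 0 or int32_ops > 0:
        ints = min(int32_ops, 64)
        fps = min(fp32_ops, 128 - ints)
        int32_ops -= ints
        fp32_ops -= fps
        passes += 1
    return passes

for mix in [(1000, 1000), (2000, 0)]:  # even split vs pure FP32
    print(mix, "Turing passes:", passes_turing(*mix),
               "Ampere passes:", passes_ampere(*mix))
```

With an even split both take the same number of passes; with pure FP32 the Ampere-style SM finishes in half the passes, which is the whole argument above.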
  10. @chiliwili69 It's actually a bit more interesting than that. Apparently each CUDA core used to have an FP32 and an Int32 module. What they've done here is modified the Int32 part so it can now run either Int32 or FP32 instructions. So we should actually see pretty big lifts in games that are very dependent on FP32 instructions, but much less in the ones that live in Int32 land. At the moment I'm probably going to get the 3090 myself, but I am going to be very interested in seeing the DX11 and FS2020 benchmarks* of the card before I actually buy it. Depending on what is and is not essential to Il-2, and what is expected of Big Navi (2.23 GHz, 80 CU), it is conceivable that it could be faster than the 3080, but again, that will depend on how flight sims use the CUDA cores. We could, just as easily, see a near perfect 2x lift. Or none at all. Just don't know, yet. It's going to be interesting times.
      Harry Voyager
      *From what I've seen of the FS2020 and Il-2 benchmarks, FS2020 does seem to take a pile driver to the same sorts of things that Il-2 and other major flight sims on the market do, and as good as Il-2 is, I suspect the only flight sim we're going to see benchmarks of will be FS2020
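For a rough sense of scale on the Big Navi question, the usual paper-FLOPS arithmetic looks like this (assuming 64 shaders per RDNA2 CU and the rumored 80 CU / 2.23 GHz figures quoted above; the 3080 numbers use its advertised core count and boost clock, and paper FLOPS say nothing about how a flight sim actually feeds the cores):

```python
# Back-of-envelope peak FP32 throughput: shaders * 2 FLOPs/clock * clock (GHz).
# Assumes 64 shaders per RDNA2 CU and the rumored 80 CU / 2.23 GHz figures;
# 3080 values use its advertised core count and boost clock. Paper FLOPS only.
def peak_tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000

print(f"Rumored Big Navi (80 CU x 64 shaders): {peak_tflops(80 * 64, 2.23):.1f} TFLOPS")
print(f"RTX 3080 (8704 cores, ~1.71 GHz boost): {peak_tflops(8704, 1.71):.1f} TFLOPS")
```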
  11. I actually prefer it myself. I've found that sometimes you need to re-center everything when you spawn in, particularly in VR, and I would rather have that moment after load-in where I can cycle all the controls and get the view locked in before the launch. There's nothing quite as disorienting as spawning in and finding your viewpoint is behind the headrest. In some other flight sims, it still takes long enough to load in that I still step away after the launch and wait for the pause.
  12. @Alonzo So apparently an nVidia rep explained it here: https://www.nvidia.com/en-us/geforce/news/rtx-30-series-community-qa/ I think what they are saying is that they used to have a 16x FP32 + 16x INT32 path, and made it capable of acting as a 32x FP32 path to get the doubling effect, but I don't think I really understand it yet.
  13. And one thing I didn't notice initially: it looks like nVidia has done something with the CUDA cores, in that they're reporting twice the number of CUDA cores than one would expect from the transistor count. Rumors were that the 3090 was going to be an 82 SM part, corresponding to a 5248 CUDA core part. Bigger than the 2080 Ti, yes, but it's reporting in as a 10496 CUDA core unit. That's a lot of raster performance, but they only increased the transistor count by 50%. Given everything else they're packing into the cards, I find that hard to believe, unless they've found some way to double up the CUDA cores, either through some sort of hyper-threading-like thing, or something else. I think I want to see the 3080 benchmarks before I decide to buy anything. If it's really getting a near 100% lift in raster performance over the 2080, I may well go for the 3090 and start tracking the Pimax 8KX+ (which I had previously figured was un-drivable for at least another GPU generation). But I want to see real, impartial numbers first.
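The doubling falls straight out of how many "cores" get counted per SM; here's a quick sanity check of the figures above (the 64 vs 128 per-SM split is the commonly reported explanation, not something I've verified against nVidia's own documentation):

```python
# Sanity check of the quoted figures: same 82 SMs, different "CUDA cores per SM"
# depending on whether the flexible FP32/INT32 units are counted as FP32 cores.
# The 64 vs 128 per-SM split is the commonly reported explanation, not official docs.
sms = 82

dedicated_only = sms * 64   # counting only the dedicated FP32 units
with_flexible  = sms * 128  # also counting the flexible FP32/INT32 units

print(f"82 SMs x 64  = {dedicated_only}")   # 5248  (the rumored figure)
print(f"82 SMs x 128 = {with_flexible}")    # 10496 (the announced figure)
```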
  14. Honestly, changing the threshold for everyone would leave everyone in the same relative spot we're at now. I'm happy with it the way it is. I just need to learn the plane's and pilot's limits and respect them.
  15. @Alonzo I expect that for Il-2, PCIe 4.0 is not going to be a factor. Last I checked it wasn't maxing out either my system memory or the VRAM on my 1080 Ti, so I don't think it will be a problem. As I think about it, it will likely make more sense for me to stick with a 3080, wait for a 3080 Ti, or see what RDNA2 does, and do an early step up to Zen 3, than to go for a 3090. We shall see. Addendum: Has anyone actually benchmarked Il-2 with a 5700 card yet? I just noticed there were none in chiliwili69's test results.
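For reference on the PCIe question, here's the rough bandwidth math (the per-lane rates are the standard spec figures after encoding overhead, and the 1080 Ti number is its published memory bandwidth):

```python
# Rough comparison of bus vs local VRAM bandwidth, to show why PCIe 3.0 vs 4.0
# rarely matters unless a game is constantly streaming assets over the bus.
# Per-lane rates are the standard spec figures after encoding overhead
# (~0.985 GB/s for 3.0, ~1.969 GB/s for 4.0); 1080 Ti figure is its published spec.
pcie3_x16 = 16 * 0.985   # ~15.8 GB/s
pcie4_x16 = 16 * 1.969   # ~31.5 GB/s
gtx1080ti_vram = 484.0   # GB/s, GDDR5X on a 352-bit bus

print(f"PCIe 3.0 x16 : {pcie3_x16:6.1f} GB/s")
print(f"PCIe 4.0 x16 : {pcie4_x16:6.1f} GB/s")
print(f"1080 Ti VRAM : {gtx1080ti_vram:6.1f} GB/s")
```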