
Nvidia teases a 21-day countdown to the unveil or release of the RTX 3000



9 hours ago, Confused_2018 said:

I will wait to see what team red does. 

 

Question: I saw this techtuber and he makes a point about Intel's lack of PCIe gen 4 support and PCIe 3 saturation with the high-end cards. Skip to 7:50.

 

Video by UFD tech

https://www.youtube.com/watch?v=VgCdeGDXZVo

 

Is this much of an issue?

 

Thanks Intel?

 

 

 

 

I'm almost certain the 3090 is able to saturate the gen 3.0 PCIE interface. Maybe even 3080 can. 

If that's the case, these cards installed in recent AMD motherboards would be able to perform at full potential, while gen 3.0 PCIe Intel motherboards could bottleneck them.

   We'll have to wait for the benchmarks and reviews after release. 

3 hours ago, Jaws2002 said:

I'm almost certain the 3090 is able to saturate the gen 3.0 PCIE interface. Maybe even 3080 can. 

Haven't seen a game yet that takes much FPS impact in running current high end cards at 8 lanes PCIe 3.0 instead of 16 lanes. All tests I've seen are in the 1% range when cutting 16 lanes down to 8 lanes.

 

I would be very much surprised if bandwidth really mattered now. Typically, textures get fed into VRAM when a level loads and then they are processed there. It is just this loading process that *might* take longer and *might* result in a quick stutter, but other than that, there is very little streaming to VRAM.

 

It could become a bit of an issue if a game decides to use 16 GB of VRAM because you chose to run a scene at a much higher resolution. Then the bus would need to be twice as fast to fill those 16 GB as it was to fill 8 GB previously, rendering essentially the same thing, just nicer.

 

All of this certainly looks different when doing workstation tasks...
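As a back-of-the-envelope sketch of why this mostly shows up at load time rather than in frame rates, here is the fill-time arithmetic using nominal peak x16 link bandwidths (real transfers never hit these numbers, and protocol overhead is ignored):

```python
# Rough VRAM fill-time estimate over PCIe. Nominal x16 link bandwidths,
# ignoring protocol overhead, so real numbers will be somewhat worse.
PCIE3_X16_GBPS = 15.75   # GB/s, nominal
PCIE4_X16_GBPS = 31.5    # GB/s, nominal

def fill_time(vram_gb, bus_gbps):
    """Seconds to stream vram_gb of assets over the bus at nominal speed."""
    return vram_gb / bus_gbps

for vram in (8, 16):
    for name, bw in (("PCIe 3.0 x16", PCIE3_X16_GBPS), ("PCIe 4.0 x16", PCIE4_X16_GBPS)):
        print(f"{vram} GB over {name}: ~{fill_time(vram, bw):.2f} s")
```

Even filling 16 GB is on the order of a second either way, which is why the difference tends to appear as slightly longer loads or an occasional stutter rather than lower FPS.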

8 hours ago, RAAF492SQNOz_Steve said:

Hmmm..... The review posted in this forum (by CDRSEABEE) suggested that the 3080 would be about 1.8 times the performance of a standard 2080 GPU, depending on the game.

 

The 2080 comparison card results were not for a Super or a Ti card, so nope, not twice as powerful as either of them (but still an improvement).

Will have to wait for some proper test results to find out how much better the 3080 is.

 

My power supply falls short of the recommended 750 watt rating, damn!

 

Different (pre) reviews saying different things. All I can tell you is what I heard. 

 

I have the 750W PSU. But do I have the $1000 for the card or the inclination to spend it? Hmmmm...


Impressive showing by Nvidia yesterday, especially the pricing of the 3070 and 3080.  Most knew the 3090 would be salty but WOW it sounds like a Monster.

 

I'm happy to wait until the AMD Big Navi is released.

 

All of the Horsepower just announced is wasted if you don't have the high end monitor to utilize it.

1 hour ago, skline00 said:

Impressive showing by Nvidia yesterday, especially the pricing of the 3070 and 3080.  Most knew the 3090 would be salty but WOW it sounds like a Monster.

 

I'm happy to wait until the AMD Big Navi is released.

 

All of the Horsepower just announced is wasted if you don't have the high end monitor to utilize it.

 

Or VR Headset - unfortunately...

:wacko:


What is the equivalent of the HP G2 on a screen? Just so we can get some idea of what to get, and check out the upcoming benchmarks with some idea of what to look for. I assume there is a 4K screen resolution that would require the same horsepower as the G2 will.

 

Also, I read this on the Nvidia website. 

GEFORCE RTX VR FEATURES
ENHANCED VRWORKS GRAPHICS:
Variable Rate Shading (VRS): This new technique increases rendering performance and quality by applying full GPU shading horsepower to areas of the VR scene that need it most, and less GPU horsepower to areas that don’t.

Single Pass Multi-view: Single Pass Stereo was introduced with GeForce GTX 1080 and accelerates rendering by drawing geometry only once, then simultaneously projecting it to both right-eye and left-eye views. This let developers almost double the geometric complexity of VR applications, increasing the richness and detail of their virtual worlds. Single Pass Multi-view improves on this technique by simultaneously projecting up to four views to accelerate next-generation VR headsets with an ultra-wide field of view.

ACCELERATED RAY-TRACED VR AUDIO:
NVIDIA VRWorks Audio is a ray-traced audio solution that creates a complete acoustic image of the virtual environment in real-time, delivering physically realistic audio that conveys the size, shape, and material properties of the virtual environment. It’s accelerated on GeForce RTX’s new hardware-based real-time ray tracing technology, resulting in substantially increased performance and quality.

14 minutes ago, CDRSEABEE said:

What is the equivalent of the HP G2 on a screen? Just so we can get some idea of what to get, and check out the upcoming benchmarks with some idea of what to look for. I assume there is a 4K screen resolution that would require the same horsepower as the G2 will.

 

Also, I read this on the Nvidia website. 

GEFORCE RTX VR FEATURES
ENHANCED VRWORKS GRAPHICS:
Variable Rate Shading (VRS): This new technique increases rendering performance and quality by applying full GPU shading horsepower to areas of the VR scene that need it most, and less GPU horsepower to areas that don’t.

Single Pass Multi-view: Single Pass Stereo was introduced with GeForce GTX 1080 and accelerates rendering by drawing geometry only once, then simultaneously projecting it to both right-eye and left-eye views. This let developers almost double the geometric complexity of VR applications, increasing the richness and detail of their virtual worlds. Single Pass Multi-view improves on this technique by simultaneously projecting up to four views to accelerate next-generation VR headsets with an ultra-wide field of view.

ACCELERATED RAY-TRACED VR AUDIO:
NVIDIA VRWorks Audio is a ray-traced audio solution that creates a complete acoustic image of the virtual environment in real-time, delivering physically realistic audio that conveys the size, shape, and material properties of the virtual environment. It’s accelerated on GeForce RTX’s new hardware-based real-time ray tracing technology, resulting in substantially increased performance and quality.

4k is 4k, screen size shouldn't matter. It's just the resolution.

Looks like the G2 has roughly the same pixels as a 4K monitor (2160X2160 per eye). So a 4K monitor running the same refresh rate as the G2 might get you close. 

8 hours ago, Voyager said:

The thing I'm wondering most about is how they will scale with DirectX 11 titles, like IL-2 and the other major flight and flight-like games, especially in VR. 

 

While the 2080 Ti did end up performing significantly better than the 1080 Ti in DirectX 12 titles, I've noticed that the benchmark differences are much smaller for the DX11 ones (even comparing Fire Strike Ultra vs Time Spy Extreme, we see a big gain in DX12 and a smaller one in DX11).

 

I'm hoping that the wonderful @dburne (and others) will do the comparisons so we get a good idea of what to expect. I believe he has a heavily overclocked 2080ti at the moment and is planning on purchasing a 3090 card (again, heavily overclocked). Maybe he and others who are going to get cards early can do a comparison benchmark for us.

 

My suspicion is that the 3090 is going to be a bit more money than I want to pay, and not enough of a performance delta from the 3080 to warrant the cost. But we'll see, it could be that the 3000-series silicon can be clocked really high if you can give it enough power and deal with the heat, in which case I may have my eye on one of those 3080 'hybrid' cards.

 

There's also the question of whether the 3080/3090 will have a slight bottleneck on PCIe3 and whether we notice that in IL2. That might become a bit of a conundrum actually -- Intel chips are better for IL2 in VR because the single thread speed is better, but AMD chips give you access to PCIe4 and allow the GPU to stretch its legs more. I guess it will depend on whether PCIe3 actually is a bottleneck at the kinds of frame rates we need for simulators.

1 hour ago, RedKestrel said:

4k is 4k, screen size shouldn't matter. It's just the resolution.

Looks like the G2 has roughly the same pixels as a 4K monitor (2160X2160 per eye). So a 4K monitor running the same refresh rate as the G2 might get you close. 

 

Reverb @ 90 Hz: 2160 x 2160 x 2 (eyes) x 90 = 839,808,000 pixels/sec.

4K monitor @ 60 Hz: 3840 x 2160 x 60 = 497,664,000 pixels/sec.

4K monitor @ 100 Hz: 3840 x 2160 x 100 = 829,440,000 pixels/sec.

 

So a Reverb headset is roughly equivalent to a 4K monitor @ 100 Hz.

 

Edit: I should emphasize roughly equivalent. VR games are generally doing more CPU and geometry work because each eye sees a different view of the world, so the overall effort is slightly higher than with a 4K monitor where you just have one view and the CPU says to the GPU "now render that really big".
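For anyone who wants to plug in their own display, the same arithmetic as a tiny script (panel numbers are the ones quoted above; the extra per-eye CPU/geometry cost mentioned in the edit is ignored):

```python
# Pixel throughput: pixels per second = width * height * views * refresh rate.
def pixels_per_second(width, height, refresh_hz, views=1):
    return width * height * views * refresh_hz

displays = {
    "Reverb G2 @ 90 Hz":   pixels_per_second(2160, 2160, 90, views=2),
    "4K monitor @ 60 Hz":  pixels_per_second(3840, 2160, 60),
    "4K monitor @ 100 Hz": pixels_per_second(3840, 2160, 100),
}

for name, pps in displays.items():
    print(f"{name}: {pps:,} pixels/sec")
```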

Edited by Alonzo

@Alonzo I expect that for IL-2, PCIe 4.0 is not going to be a factor. Last I checked it didn't cap out either my local memory or my VRAM on my 1080 Ti, so I don't think it will be a problem. 

 

As I think about it, it will likely make more sense for me to stick with a 3080, wait for a 3080 Ti, or see what RDNA2 does, and do an early step up to Zen 3, than to go for a 3090.

 

We shall see. 

 

Addendum: Has anyone actually benchmarked IL-2 with a 5700 card yet? I just noticed there were none in Chiliwili's test results. 

Edited by Voyager

Based on what I have read so far, PCIe3 is nowhere near bottlenecked yet, and the gain with PCIe4 would be minimal.

I personally have no idea though. I certainly am not going to replace my current rig anytime soon just for that.

 

 

4 hours ago, RedKestrel said:

Single Pass Multi-view: Single Pass Stereo was introduced with GeForce GTX 1080 and accelerates rendering by drawing geometry only once, then simultaneously projecting it to both right-eye and left-eye views. This let developers almost double the geometric complexity of VR applications, increasing the richness and detail of their virtual worlds. Single Pass Multi-view improves on this technique by simultaneously projecting up to four views to accelerate next-generation VR headsets with an ultra-wide field of view.

 

3 hours ago, Alonzo said:

Reverb @ 90 Hz: 2160 x 2160 x 2 (eyes) x 90 = 839,808,000 pixels/sec.

4K monitor @ 60 Hz: 3840 x 2160 x 60 = 497,664,000 pixels/sec.

4K monitor @ 100 Hz: 3840 x 2160 x 100 = 829,440,000 pixels/sec.

 

I hope someone can educate me on this. In my mind the 4K monitor is WAY bigger than the screens in the G2, yet there is the same number of pixels? I don't understand that. It seems the bigger screen would have way more pixels. Are the VR screens that much more densely packed with pixels? 

 

And with Single Pass Multi-view, would that help in processing each screen? 

 

I am seeing the 3080 doing 60 FPS with a 4K screen on a few YouTube channels. 

 

Thanks for helping me understand.

6 minutes ago, CDRSEABEE said:

 

 

I hope someone can educate me on this. In my mind the 4K monitor is WAY bigger than the screens in the G2, yet there is the same number of pixels? I don't understand that. It seems the bigger screen would have way more pixels. Are the VR screens that much more densely packed with pixels? 

 

 

On a bigger screen with the same resolution as a smaller one, a pixel is literally bigger. The GPU just renders pixels and supplies the data to the monitor and doesn't care how big the pixels are - to a certain extent the GPU doesn't 'know' how big your monitor is, only how many pixels it has to make. You could play a game on a ten foot wide monitor at 4K and it would have the same impact on the GPU as a monitor the size of a phone screen, because the number of pixels is the same. Of course both scenarios are just stupid, because in both cases the resolution doesn't work with the size.

That's why, when you buy a monitor, the size has to match the resolution in a way. If you buy a huge monitor but it's 1080p, then it will look pixelated and blocky or blurry. A higher resolution at the same size will look crisper because there are more pixels available to display information per square inch of screen.

 

With big TVs, you don't notice pixelation because you sit farther away. This is why resolution is so important in VR, low resolution right against your eyes is extremely noticeable.

If you are seeing the 3080 do 60 FPS on a 4K monitor, it might actually be capable of doing better than that, but the monitor itself may only have a max refresh rate of 60 Hz, which is true of a lot of 4K monitors - not many people have a GPU that can render 4K frames much faster than that, so there's not a huge market for high-refresh 4K monitors. But you can find 1080p monitors that run at 200+ FPS, because even mid-range GPUs can generate high frame rates at 1080p. See Alonzo's numbers above about "pixels per second" for the difference in data processing needed at different framerates. 

High framerate + high resolution is the magic needed for VR, which is why you need beastly cards.
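To put rough numbers on "a pixel is literally bigger", here is a pixel-density calculation. The diagonal sizes are illustrative examples, not specific products; the GPU's workload depends only on the pixel count, while PPI only determines how sharp it looks to your eye:

```python
import math

# Pixel density (PPI) = diagonal in pixels / diagonal in inches.
# Sizes below are illustrative, not any particular product.
def ppi(width_px, height_px, diagonal_in):
    diagonal_px = math.hypot(width_px, height_px)
    return diagonal_px / diagonal_in

print(f'4K on a 27" monitor:            {ppi(3840, 2160, 27):.0f} PPI')
print(f'4K on a 55" TV:                 {ppi(3840, 2160, 55):.0f} PPI')
print(f'1080p on a 27" panel:           {ppi(1920, 1080, 27):.0f} PPI')
print(f'2160x2160 on a ~2.9" VR panel:  {ppi(2160, 2160, 2.9):.0f} PPI')
```

Roughly 160 PPI for 4K at 27", about 80 PPI at 55", and on the order of 1000 PPI for a small VR panel, which is why low resolution right against your eyes is so noticeable.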

5 hours ago, CDRSEABEE said:

ENHANCED VRWORKS GRAPHICS:
Variable Rate Shading (VRS): This new technique increases rendering performance and quality by applying full GPU shading horsepower to areas of the VR scene that need it most, and less GPU horsepower to areas that don’t.

It's already available on 20x0 cards, and doesn't work with / isn't implemented by the game. In general, tech that is vendor-specific isn't used by IL-2's devs, so unless AMD also implements it, and an API common with NVidia is added to DirectX 11, we won't see it. There might be hope if this is implemented driver-side by NVidia themselves, but I doubt they'll be interested in testing and making it work for a "small" game like IL-2 GB.

 

5 hours ago, CDRSEABEE said:

Single Pass Multi-view: Single Pass Stereo was introduced with GeForce GTX 1080 and accelerates rendering by drawing geometry only once, then simultaneously projecting it to both right-eye and left-eye views. This let developers almost double the geometric complexity of VR applications, increasing the richness and detail of their virtual worlds. Single Pass Multi-view improves on this technique by simultaneously projecting up to four views to accelerate next-generation VR headsets with an ultra-wide field of view.

Same as above: Vendor-specific feature, unlikely to be used by the devs.

 

5 hours ago, CDRSEABEE said:

ACCELERATED RAY-TRACED VR AUDIO:

That's just audio, and not super useful outdoors. Could be cool to simulate sound bouncing in the cockpit, but it would require a detailed modelling of the various surface properties, likely too much work. And it's vendor-specific.

 

So... don't expect too much on the APIs' side of things when it comes to this game.

1 minute ago, coconut said:

It's already available on 20x0 cards, and doesn't work with / isn't implemented by the game. In general, tech that is vendor-specific isn't used by IL-2's devs, so unless AMD also implements it, and an API common with NVidia is added to DirectX 11, we won't see it. There might be hope if this is implemented driver-side by NVidia themselves, but I doubt they'll be interested in testing and making it work for a "small" game like IL-2 GB.

 

Same as above: Vendor-specific feature, unlikely to be used by the devs.

 

Yep. It's unfortunate, because especially things like VRS could really really help the game in both VR and non-VR modes. For example, the cockpit instruments would benefit from a higher shading rate, the sky maybe only needs a lower shading rate, the wings as you look outside need a different shading rate, and again the landscape a different shading rate. Arguably you actually want less of a shading rate on the landscape because crushing pixels together makes spotting worse. But because it's NVidia-specific tech, I can see why the dev team considers it a lower priority feature.
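To put a toy number on why that kind of per-region shading would be attractive, here is a rough estimate of the work saved; the screen-area fractions and rates below are invented purely for illustration, not measured from the game:

```python
# Toy variable-rate-shading estimate. Each region covers some fraction of the
# screen and is shaded at a rate relative to full per-pixel shading
# (1.0 = every pixel, 0.25 = one shade per 2x2 block). Numbers are invented.
regions = {
    "cockpit instruments": (0.15, 1.0),   # (screen fraction, shading rate)
    "wings / airframe":    (0.20, 0.5),
    "landscape":           (0.40, 0.5),
    "sky":                 (0.25, 0.25),
}

full_cost = 1.0  # everything shaded per-pixel
vrs_cost = sum(frac * rate for frac, rate in regions.values())
print(f"Relative shading cost with VRS: {vrs_cost:.2f} (vs {full_cost:.2f} full rate)")
print(f"Approximate shading work saved: {(1 - vrs_cost) * 100:.0f}%")
```

Even with made-up numbers the saving is on the order of half the shading work, which is why it's a shame the feature is unlikely to land in this game.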

 

That's why people are interested in the 3090. Once you've dropped a thousand bucks (or more) on flight sim peripherals, $600 on a VR headset, $2k on a gaming rig, spending $1500 on a 3090 to brute-force the graphics actually seems like a sane thing to do. My suspicion is that the 3090 is going to be 20-25% faster than a 3080 but for more than double the cost, so I might have to put up with (gasp) not enabling MSAA on my rig. But it depends on reviews. 3080 hybrid card with an overclocking bios might do a good job, or a 3090 hybrid might do even better.

26 minutes ago, Alonzo said:

3090 hybrid

??? hybrid ... dual 3090 cards ??? do you mean SLI, NVLink 2 or more cards with that ???

Buying anything less than a 3090 if going VR is wrong ... you will want to upgrade the 3080 by next year !

That 3090 is going to sell like hotcakes, regardless of the insane price, because it is the only true 90 Hz (never mind 144 Hz!) "VR GPU" (IMHO)

7 minutes ago, Alonzo said:

 

Yep. It's unfortunate, because especially things like VRS could really really help the game in both VR and non-VR modes. For example, the cockpit instruments would benefit from a higher shading rate, the sky maybe only needs a lower shading rate, the wings as you look outside need a different shading rate, and again the landscape a different shading rate. Arguably you actually want less of a shading rate on the landscape because crushing pixels together makes spotting worse. But because it's NVidia-specific tech, I can see why the dev team considers it a lower priority feature.

 

That's why people are interested in the 3090. Once you've dropped a thousand bucks (or more) on flight sim peripherals, $600 on a VR headset, $2k on a gaming rig, spending $1500 on a 3090 to brute-force the graphics actually seems like a sane thing to do. My suspicion is that the 3090 is going to be 20-25% faster than a 3080 but for more than double the cost, so I might have to put up with (gasp) not enabling MSAA on my rig. But it depends on reviews. 3080 hybrid card with an overclocking bios might do a good job, or a 3090 hybrid might do even better.

 

This is all USD right? I'm seeing 2080TIs going for $1500+ right now. I was thinking they would drop but maybe the 3000s will be out of stock so fast people will grit their teeth and buy last gen at standard prices rather than wait weeks or months.

 

Out of curiosity I estimated my total spending on hardware over 4 years or so; it's:
$600 for stick, pedals, throttle and throttle quadrant (CH stick, throttle, pedals and Logitech quadrant)

$1200 for gaming PC including (cheapo) monitor

$450 for GPU (1660 Super + RAM upgrade)
$400 for good 1440p monitor

With upgrades needed for pedals, processor/Mobo, case and power supply - basically so I can build a secondary PC from the old parts so the kid can minecraft on her own rig instead of mine, LMAO. Oh, and schoolwork I guess, but we both know that's secondary.

If only the gold she mined was real, we could afford a 3090. :(

39 minutes ago, simfan2015 said:

??? hybrid ... dual 3090 cards ??? do you mean SLI, NVLink 2 or more cards with that ???

Buying anything less than a 3090 if going VR is wrong ... you will want to upgrade the 3080 by next year !

That 3090 is going to sell like hotcakes, regardless of the insane price, because it is the only true 90 Hz (never mind 144 Hz!) "VR GPU" (IMHO)

 

"Hybrid" in EVGA language is the air/water combined cooler cards. They have an AIO liquid cooler for the GPU, and an air cooler for the memory and other components. Because GPUs depend so much on staying cool to keep their boost clocks, a hybrid card can sometimes give you better or more consistent performance. Other times it does nothing for performance but might be preferable for other reasons (silence, venting hot air directly out of the case, bragging rights).

 

I think the 3090 is going to sell like hotcakes, but not because it's truly required for VR. The 3080 will be enough for most games, and foveated rendering should begin to catch up now that many manufacturers are doing eye tracking. I think the 3090 is going to sell because there are tons of people who enjoy using their PC as a hobby and are happy to pay for the best stuff, even if it's objectively bad value. I could never quite bring myself to buy the 2080ti because I knew it was crummy value (having said that, though, I bought the 8086K because history, even though it's just an 8700K really).

 

Twice the price on the 3090 over the 3080 for 25% performance uplift is like saying "is 150% supersample + 2xMSAA worth twice the price over 150% supersample and no MSAA?" Personally I think it's not worth it, I'd rather run without the MSAA. And remember, every non-simulator game is going to be much better optimized and do a really great job in VR on the 3080. Maybe with the exception of Skyrim and FO4.

 

38 minutes ago, RedKestrel said:

This is all USD right? I'm seeing 2080TIs going for $1500+ right now. I was thinking they would drop but maybe the 3000s will be out of stock so fast people will grit their teeth and buy last gen at standard prices rather than wait weeks or months.

 

I think given that NVidia is touting the 3070 as "faster than a 2080ti" anyone buying 2000-series cards as of yesterday is just wasting their money. Anyone who knows what they are doing is now waiting for the new cards, or if they really really need one today they should buy an EVGA and use their step up program.

1 hour ago, simfan2015 said:
1 hour ago, Alonzo said:

3090 hybrid

??? hybrid ... dual 3090 cards ??? do you mean SLI, NVLink 2 or more cards with that ???

 

He's talking about a hybrid cooling solution. Many partners have hybrid cards already. It typically means they are using an all-in-one water cooler to cool the GPU and a blower to cool the memory and the rest of the PCB.

 

Agree with Alonzo. EVGA step up is a good solution for someone who wants a 2000 series right now.

Another solution is eBay. Some bloggers were reporting 2080 Ti cards showing up on eBay for 500 USD.

Edited by Jaws2002
13 hours ago, ZachariasX said:

Haven't seen a game yet that takes much FPS impact in running current high end cards at 8 lanes PCIe 3.0 instead of 16 lanes. All tests I've seen are in the 1% range when cutting 16 lanes down to 8 lanes.

 

I would be very much surprised if bandwidth really mattered now. Typically, textures get fed into VRAM when a level loads and then they are processed there. It is just this loading process that *might* take longer and *might* result in a quick stutter, but other than that, there is very little streaming to VRAM.

 

It could become a bit of an issue if a game decides to use 16 GB of VRAM because you chose to run a scene at a much higher resolution. Then the bus would need to be twice as fast to fill those 16 GB as it was to fill 8 GB previously, rendering essentially the same thing, just nicer.

 

All of this certainly looks different when doing workstation tasks...

 

It will matter over the next few years, because streaming everything all the time is exactly what the Xbox and PS5 will be doing. This is why leather jacket man spent a decent amount of time talking about GPU-side texture decompression.

 

5 hours ago, coconut said:

It's already available on 20x0 cards, and doesn't work with / isn't implemented by the game. In general, tech that is vendor-specific isn't used by IL-2's devs, so unless AMD also implements it, and an API common with NVidia is added to DirectX 11, we won't see it. There might be hope if this is implemented driver-side by NVidia themselves, but I doubt they'll be interested in testing and making it work for a "small" game like IL-2 GB.

 

VRS is a DX12 feature.

40 minutes ago, LizLemon said:

VRS is a DX12 feature.

 

It's also in DX11 but it's not as full-featured. I read through NVidia's info page so I could argue with someone in the VR forums a few months ago, can't remember the details. Basically although it's "in" DX11 it's a lot more usable and useful (and less effort for developers) in DX12.


Guys keep this thread going.

By the time I’m ready to do another build this page will be my one stop research source. :)

 

I remember back when the 900 series was released, the 970 was the price/performance sweet spot. The 970 overclocked was equal to a 980, so that's the way I went. That was a great decision.

 

Might be the same way this time around.

3 hours ago, Gambit21 said:

price/performance sweet spot

That is quite easy to determine (although it still also ... depends on personal needs of course).

Today the 3080 seems more than OK for IL-2 (even VR as others here wrote).

However ... we do not buy an expensive PC setup for today only ...

How crazy it may seem I, personally, would go for the best system (possible), regardless of price/performance today.

4K 144 Hz, even 360 Hz, and even 8K monitors are already a reality.

BTW it seems the 3090 is not even OK for all games that already exist today ... you can't get a constant 60 FPS at 4K Ultra, let alone 120, 144 or more!

https://www.youtube.com/watch?v=4b-djsb9w2U

Being 'overpowered' is not my problem (never will be) ... having to deal with my low FPS today however is a constant fight against my low end PC config.

Therefore a ... 4090 can't come soon enough.

Edited by simfan2015

And one thing I didn't notice initially: it looks like nVidia has done something with the CUDA cores, in that they're reporting twice the number of CUDA cores one would expect from the transistor count. Rumors were that the 3090 was going to be an 82 SM part, corresponding to a 5248 CUDA core part. Bigger than the 2080 Ti, yes, but it's reporting in as a 10496 CUDA core unit. 

 

That's a lot of raster performance, but they only increased the transistor count by 50%. Given everything else they're packing into the cards, I find that hard to believe, unless they've found some way to double up the CUDA cores, either through some sort of hyper-threading-like scheme, or something else. 

 

I think I want to see the 3080 benchmarks before I decide to buy anything. If it's really getting a near 100% lift in raster performance over the 2080, I may well go for the 3090 and start tracking the Pimax 8KX+. (which I had previously figured was un-drivable for at least another GPU generation).

 

But I want to see real, impartial numbers first. 


Some more info about the new cards.

First, Nvidia posted a performance comparison between the 2080 Ti and the 3080 in Doom Eternal at 4K resolution, and it looks like the new 3080 is destroying the old card.

https://videocardz.com/newz/nvidia-publishes-doom-eternal-4k-gameplay-captured-with-geforce-rtx-3080

 

The 3080 is able to sustain around 120 FPS at 4K in heavy play. That looks impressive.

Another interesting piece of information came from a Q&A session. Someone asked Nvidia to clarify what they mean when they say the new 3070 will outperform the 2080 Ti, and I quote:

 

"

GeForce RTX 3070

tldrdoto – Please clarify if the slide saying RTX 3070 is equal or faster than 2080 Ti is referring to traditional rasterization or DLSS/RT workloads?

Very important if you could clear it up, since no traditional rasterization benchmarks were shown, only RT/DLSS supporting games.

[Justin Walker] We’re talking about both. Games that only support traditional rasterization and games that support RTX (RT+DLSS). You can see this in our launch article here https://www.nvidia.com/en-us/geforce/news/introducing-rtx-30-series-graphics-cards/

 

So it looks like the old king 2080 Ti will be outperformed by the 3070, 3080 and 3090 in old games as well, not only in new games with ray tracing.

Cool.

1 hour ago, LLv34_Flanker said:

S!

 

As I suspected, launch prices of the 3090 here are ~2000€. *puts on the trolololo song* The way you're meant to be shafted!

On a straight currency conversion...that's the equivalent of 50% more expensive.

Are they couriering it over from the US on first class or something?
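Rough numbers behind the "50% more expensive" remark, assuming a ~0.84 USD-to-EUR exchange rate and ~21% VAT (both are assumptions for illustration; rates vary by country):

```python
# Straight conversion vs. quoted EU street price for the 3090.
# Exchange rate and VAT are assumptions; actual figures vary by country.
USD_TO_EUR = 0.84
VAT = 0.21

msrp_usd = 1499                      # US MSRP quoted for the 3090 (pre-tax)
straight_eur = msrp_usd * USD_TO_EUR
with_vat_eur = straight_eur * (1 + VAT)
quoted_eur = 2000

print(f"Straight conversion: ~{straight_eur:.0f} EUR")
print(f"With ~21% VAT:       ~{with_vat_eur:.0f} EUR")
print(f"Quoted launch price:  {quoted_eur} EUR "
      f"({quoted_eur / straight_eur - 1:.0%} over the straight conversion)")
```

So even after adding VAT to the straight conversion, a ~2000€ launch price leaves a sizeable local markup on top.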


That was my prediction too... up to 2000 euro or 2200 USD on the old continent. 

The RTX 2080 Ti already got to 1500 euro here, so 2000 euro for the 3090 is no surprise! 

If I had been able to visit the US this fall, I could have made the trip... for free, by simply buying the 3090 in the US. This goes for a lot of electronics. Photography gear is also far cheaper in the US and Asia and can save you thousands of dollars getting it over there! 

In the more socialist countries people are taken care of better than in the US, but in the end someone has to pay... The consumer! 

Edited by simfan2015
8 hours ago, Voyager said:

And one thing I didn't notice initially: it looks like nVidia has done something with the CUDA cores, in that they're reporting twice the number of CUDA cores one would expect from the transistor count. Rumors were that the 3090 was going to be an 82 SM part, corresponding to a 5248 CUDA core part. Bigger than the 2080 Ti, yes, but it's reporting in as a 10496 CUDA core unit. 

 

That's a lot of raster performance, but they only increased the transistor count by 50%. Given everything else they're packing into the cards, I find that hard to believe, unless they've found some way to double up the CUDA cores, either through some sort of hyper-threading-like scheme, or something else.

 

Hardware Unboxed is speculating that each SM now has two FP32 datapaths, one of which can also handle INT32 (up from one FP32 path plus one INT32 path), so depending on whether your application can keep both FP32 paths busy, you effectively get double the cores. It's better than hyper-threading, as it looks like it's giving almost double the effective performance, whereas HT gets you only about 15-20% depending on the application.
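For what it's worth, the core-count arithmetic lines up with that reading. A sketch using only the SM figure from the rumor quoted above (not an official breakdown of the hardware):

```python
# CUDA core count = SMs x FP32 lanes per SM.
# A Turing-style SM exposes 64 FP32 lanes; the Ampere announcement implies
# 128 per SM once the second datapath is counted as FP32-capable.
SMS_3090 = 82  # SM count from the rumor quoted above

turing_style = SMS_3090 * 64    # 5248  -> the rumored figure
ampere_style = SMS_3090 * 128   # 10496 -> the listed CUDA core count

print(f"82 SMs x 64 FP32/SM  = {turing_style}")
print(f"82 SMs x 128 FP32/SM = {ampere_style}")
```

Whether that second datapath translates into anything like double the real-world throughput is exactly what the independent benchmarks will have to show.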


@Alonzo So apparently an nVidia rep explained it here: 

 

I think what they are saying is that they used to have a 16x FP32 / 16x INT32 path, and made them capable of being 32x FP32 paths to get the doubling effect, but I don't think I really understand it yet.

Edited by Voyager
Let's try the nVidia homepage link for a less kaboom page...

So the 3090 is twice the price of the 3080, that means it performs twice as fast doesn't it ;)  :)

 

I bought a 1080ti just before bitcoin bollox.  I paid £650, which used to be the cost of a top tier GFX card only 2 years ago.

2 years later I can sell my 1080 for more than I paid for it new.

 

Until Nvidia/AMD get their heads out of their arses and start selling at sensible prices again, I'm out.

 

 

Edited by J2_SteveF
6 minutes ago, J2_SteveF said:

So the 3090 is twice the price of the 3080, that means it performs twice as fast doesn't it ;)  :)

 

I bought a 1080ti just before bitcoin bollox.  I paid £650, which used to be the cost of a top tier GFX card only 2 years ago.

2 years later I can sell my 1080 for more than I paid for it new.

 

Until Nvidia/AMD get their heads out of their arses and start selling at sensible prices again, I'm out.

 

 

 

The 3080 cards are certainly more reasonable in price than their 20x0 predecessors. 

Going to hurt the resale value of my 2080 Ti for sure.

Edited by dburne
2 hours ago, sevenless said:

3070, that will do the trick for me, I guess.

 

 

That’s what I’m thinking - sweet spot once again just like the 970 was.

 

Especially since my 4K monitor has a refresh rate of 60 Hz.

On 9/1/2020 at 6:53 PM, Alonzo said:

RTX 3090: 24GB GDDR6X, $1499, claims 36 TFlops shader performance.

RTX 3080: 10GB GDDR6X, $699, claims "up to twice as fast" as the 2080, 30 TFlops shader performance.

RTX 3070: 8GB GDDR6, $499, "faster than 2080ti", 20 TFlops shader performance

 

We should be cautious about what performance increase we would get in IL-2, a non-ray-tracing game.

NVidia was mixing RT and non-RT performance in those tables; I'm not so sure what the effective gain is for non-RT games.

 

So I will stick to my 1080Ti until we have solid proof of sufficient performance gain in IL-2 VR. 

Remember that NVIDIA is very, very clever with marketing presentations, and at making you believe things that are not really real.

