Voyager Posted October 27, 2022 Wondering how difficult it would be to implement ray tracing as an option for the aircraft mirrors? It occurs to me that right now we've got a surplus of GPU power, and both Nvidia and AMD GPUs support ray tracing. Given that the mirrors are currently one of the heavier CPU hits, I'm wondering if it would be possible to render the mirrors using GPU ray tracing? I also find myself wondering whether that might be something to look into for AI vision testing, and whether it could reduce the impact of having lots of AI active at the same time?
1PL-Husar-1Esk Posted October 27, 2022 Refraction is a technology that would benefit the game, but ray tracing? Dynamic lighting isn't fully implemented yet; watch how things burn, or how engine exhaust flames look at different times of day.
Voyager Posted October 27, 2022 (Author) 6 hours ago, 1PL-Husar-1Esk said: Refraction is a technology that would benefit the game, but ray tracing? Dynamic lighting isn't fully implemented yet; watch how things burn, or how engine exhaust flames look at different times of day. I mean, sure, but as near as I can tell the current mirror system generates the reflection using software ray casting, so having an option to do that on the GPU should at least reduce that load and possibly work better. And I'm only talking about using it to generate the mirror image, not as a full-blown lighting system.
1PL-Husar-1Esk Posted October 27, 2022 If that could free up some CPU cycles, it would be a good idea.
AEthelraedUnraed Posted October 28, 2022 (edited)

20 hours ago, Voyager said: Wondering how difficult it would be to implement ray tracing as an option for the aircraft mirrors?

Very. You'd basically have to write a completely different rendering system in addition to the already existing one.

13 hours ago, Voyager said: I mean, sure, but as near as I can tell the current mirror system generates the reflection using software ray casting, so having an option to do that on the GPU should at least reduce that load and possibly work better.

I'm curious as to why you think the mirrors are done using ray casting; there are other existing techniques for rendering mirrors. Also, ray casting is in itself not that different from ray tracing: basically the only difference is that ray tracing "continues" the ray after it hits something, sometimes even splitting it up, which allows advanced effects such as reflections and refractions. That would actually make ray tracing fairly easy to implement, since a big part of the infrastructure would already be in place. But I doubt they use ray casting.

Regardless of whether they use ray casting or not, are you completely sure the mirror shader executes on the CPU? That sounds rather... unlikely to me. Yes, a mirror shader is software, but so is literally every shader. Shaders, including the ones a mirror effect would consist of, but also the cockpit material, wings, terrain, sky, etc., consist of small pieces of code (i.e. software) that run on the GPU. There is no reason I know of why a mirror shader would run on the CPU.

Note that in essence a GPU is not much different from a CPU; they're just optimised for different things. Basically, a CPU is made to do single calculations really fast, while a GPU is slower but can do lots of calculations simultaneously.
In theory, there's nothing stopping you from running background Windows processes on your GPU while doing the desktop rendering on the CPU (apart from the fact that it would be really, really stupid as well as really, really slow).

That is not to say that a shader runs *entirely* on the GPU. There's always some data transfer going on between the CPU and GPU, and it's plausible that some of the data pre-processing is done on the former. But applications where you need to do similar calculations for lots of pixels are best done on the GPU, no matter whether they use ray casting or other techniques. Regardless of where it is executed, anywhere between a large part and the entirety of the shading would be considered software.

20 hours ago, Voyager said: I also find myself wondering whether that might be something to look into for AI vision testing, and whether it could reduce the impact of having lots of AI active at the same time?

Not really. The AI vision testing is more likely than not already ray based.

Edited October 28, 2022 by AEthelraedUnraed
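To make the ray casting versus ray tracing distinction above concrete, here is a toy one-dimensional sketch (purely illustrative, with made-up names; it is not how this, or any real, engine works): casting stops at the first hit, while tracing follows the ray onward after it hits a mirror.

```python
# Toy 1D scene: walls sit at positions along the x axis.
walls = [
    {"x": 2,  "color": "silver", "mirror": True},   # a mirror in front of us
    {"x": 5,  "color": "red",    "mirror": False},  # hidden behind the mirror
    {"x": -3, "color": "blue",   "mirror": False},  # behind our back
]

def cast(x, dx, walls):
    """Ray casting: return the nearest wall in the ray's direction, and stop."""
    ahead = [w for w in walls if (w["x"] - x) * dx > 0]
    return min(ahead, key=lambda w: abs(w["x"] - x)) if ahead else None

def trace(x, dx, walls, depth=3):
    """Ray tracing: same idea, but the ray continues after hitting a mirror."""
    hit = cast(x, dx, walls)
    if hit is None:
        return "sky"
    if hit["mirror"] and depth > 0:
        return trace(hit["x"], -dx, walls, depth - 1)  # reflect and keep going
    return hit["color"]

print(cast(0, +1, walls)["color"])  # casting stops at the mirror: "silver"
print(trace(0, +1, walls))          # tracing follows the bounce: "blue"
```

In a real renderer this happens in 3D and per pixel, and a traced ray can split into several (reflection plus refraction), which is where the extra cost comes from.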
firdimigdi Posted October 28, 2022 (edited) 14 hours ago, Voyager said: generating the reflection using software ray casting Interesting conclusion; I'll offer you a simpler one. Just like the camera strapped to your pilot's face that renders the forward view, there's a similar camera (lower resolution, with different LOD settings controlled by the "mirror quality" setting) strapped to the back of your pilot's head. The output of the latter is rendered as a texture on the area the model defines as the plane's mirror, via a material assignment. Edited October 28, 2022 by firdimigdi
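The second-camera technique described above is usually driven by a camera pose derived from the player's head each frame. Here is a minimal sketch of one common variant, which mirrors the head across the mirror plane before rendering the scene into a texture (all names are hypothetical; this is not the sim's actual code):

```python
def reflect(v, n):
    """Reflect direction v across a plane with unit normal n: v - 2(v.n)n."""
    d = sum(a * b for a, b in zip(v, n))
    return tuple(a - 2 * d * b for a, b in zip(v, n))

def mirror_camera(head_pos, head_forward, mirror_point, mirror_normal):
    """Virtual mirror camera: mirror the head position and view direction
    across the mirror plane; the scene rendered from this pose becomes the
    texture pasted onto the mirror polygon."""
    to_head = tuple(h - m for h, m in zip(head_pos, mirror_point))
    dist = sum(a * b for a, b in zip(to_head, mirror_normal))  # signed distance
    cam_pos = tuple(h - 2 * dist * n for h, n in zip(head_pos, mirror_normal))
    cam_fwd = reflect(head_forward, mirror_normal)
    return cam_pos, cam_fwd

# Head 2 m in front of a mirror at the origin (plane x = 0), looking at it:
pos, fwd = mirror_camera((2.0, 0.0, 0.0), (-1.0, 0.0, 0.0),
                         (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
print(pos, fwd)  # camera behind the plane, looking back out along +x
```

Because the camera pose is recomputed from the head every frame, the image in the mirror shifts as the head moves, which is the effect discussed later in the thread.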
Voyager Posted October 28, 2022 (Author) @AEthelraedUnraed I was thinking that because, the last time I used the mirrors, they were a big CPU hit and had the weird blocky look you usually get from ray-casting-type systems. However, that was a while ago, and the mirrors now look very different from what I recall. I can say they are *not* using ray tracing in the AI vision system, because the AI sees through obscuration; the whole point of ray-traced vision is to correctly tell when the vision source can and cannot see something. And they certainly are not using GPU-accelerated ray tracing for it, because they support GPUs that don't support ray tracing. @firdimigdi It is more complicated than that, because the view angle changes with head position. I have not checked whether it does the angles properly, but it's definitely more complicated than just a camera strapped to the back of the head.
firdimigdi Posted October 28, 2022 53 minutes ago, Voyager said: because the view angle changes with head position Precisely why the rear-view camera is linked to the invisible virtual head. Trust me, that's how this sausage is made, and pretty much how any game with a rear-view mirror does it.
AEthelraedUnraed Posted October 28, 2022

2 minutes ago, Voyager said: @AEthelraedUnraed I was thinking that because, the last time I used the mirrors, they were a big CPU hit and had the weird blocky look you usually get from ray-casting-type systems.

Are you sure they were a CPU hit, rather than a GPU or combined GPU/CPU hit? For me they definitely have a GPU hit, although I didn't really check my CPU usage. The mirrors certainly render at a lower resolution, which would also give a "blockier" view.

3 minutes ago, Voyager said: And they certainly are not using GPU-accelerated ray tracing for it, because they support GPUs that don't support ray tracing.

The keyword here is "accelerated"; it essentially means specialised hardware. Even if a GPU doesn't have dedicated hardware for ray-tracing calculations, it can still do those calculations on "generic" hardware, albeit less efficiently. Remember that, as I said above, a GPU can do whatever calculations you want, even ones normally done on a CPU (look up GPGPU). Although it's entirely possible that a shader has *some* CPU impact because of data (pre-)processing (some popular solutions for rendering mirrors basically involve rendering the scene twice, which includes all of the CPU-side overhead), it would strike me as highly unlikely for the mirror shader to be CPU-based, whether it uses ray casting or not.

10 minutes ago, Voyager said: I can say they are *not* using ray tracing in the AI vision system, because the AI sees through obscuration; the whole point of ray-traced vision is to correctly tell when the vision source can and cannot see something.

Actually, they cannot see through objects, and more likely than not they do use ray tracing. Note that ray visibility doesn't necessarily mean "screen" visibility. Here, let me explain: the left image is what is rendered on your screen.

Although some objects partially obscure the tank, enough of it is visible for a human player to detect it, and likely even hit it. Since there's no easy way to know whether an AI can see something without rendering the scene, an equally good visibility check would require doing part of the rendering process for *every* AI. Specifically, you'd need to do the z-buffer pass, including all the texture lookups needed to check whether a texture's alpha channel is transparent (the trees, for example, use partially transparent textures). Texture lookups are *very* expensive. As you yourself noted above, rendering the scene twice (i.e. for the mirror) already has a noticeable impact, whether on the GPU or CPU; rendering the scene separately for every AI would turn the game into a slideshow.

Therefore, AI visibility calculations in games usually use hitbox calculations, similar to those employed to determine whether a shell hits the target. These rely on relatively simple geometric calculations and avoid the texture lookups that would otherwise be necessary. That doesn't mean the system doesn't use rays, however: here too, a ray is cast from the AI "eye" to any prospective target, usually at one or a few sample points per target (the blue dots in the image above).

A couple of things are important to note:
- The hitboxes don't necessarily give the same occlusion as a true render would. In particular, very likely no texture lookups are done, which explains the different shape of the trees in the example, and why the centre dot doesn't return a "visible" verdict in the right image while it would at the same position in the left image.
- It doesn't cast a ray for every pixel and then check whether a target is visible (left image), which would be terribly inefficient since by *far* most pixels don't contain any target. Rather, it casts just a few rays towards each target and then checks whether those rays were blocked (right image, blue dots). That is incredibly more efficient, but for complex scenes with lots of partial occlusion it can give inaccurate results.
- Again, these calculations could be performed on the GPU if desired.

TL;DR:
- AI visibility calculations do use a form of ray casting.
- AI visibility ray casting is usually highly optimised for its purpose, which can give slightly worse results, but at a much, much lower CPU/GPU cost.

DISCLAIMER: there are multiple ways to check for AI visibility, and I can't be certain this is how IL-2 does it, as I'm not associated with the devs. However, this is a popular and efficient way to do it.

1 hour ago, Voyager said: @firdimigdi It is more complicated than that, because the view angle changes with head position. I have not checked whether it does the angles properly, but it's definitely more complicated than just a camera strapped to the back of the head.

Since you have to create a second camera anyhow, there's no reason why you couldn't move its position according to the player's head position. Firdimigdi's suggestion is definitely a valid (and popular) way to do mirrors, although certainly not the only one.
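The sample-point visibility check described above can be sketched in a few lines, with occluders reduced to spheres and a handful of sample points per target (hypothetical names, and, per the disclaimer above, not necessarily what IL-2 actually does):

```python
def segment_blocked(p0, p1, centre, radius):
    """True if the segment from eye p0 to sample point p1 passes through a
    spherical occluder (a stand-in for a real hitbox)."""
    d = tuple(b - a for a, b in zip(p0, p1))           # segment direction
    f = tuple(a - c for a, c in zip(p0, centre))       # eye relative to sphere
    len2 = sum(x * x for x in d)
    t = max(0.0, min(1.0, -sum(a * b for a, b in zip(f, d)) / len2))
    closest = tuple(a + t * x for a, x in zip(p0, d))  # nearest point on segment
    dist2 = sum((a - c) ** 2 for a, c in zip(closest, centre))
    return dist2 < radius * radius

def can_see(eye, sample_points, occluders):
    """The AI 'sees' the target if any sample ray reaches it unblocked."""
    return any(
        not any(segment_blocked(eye, pt, c, r) for c, r in occluders)
        for pt in sample_points
    )

tree = ((5.0, 0.0, 0.0), 0.5)  # one spherical occluder between eye and target
print(can_see((0.0, 0.0, 0.0), [(10.0, 0.0, 0.0)], [tree]))                    # False
print(can_see((0.0, 0.0, 0.0), [(10.0, 0.0, 0.0), (10.0, 2.0, 0.0)], [tree]))  # True
```

Adding a second sample point already catches the partially occluded case; a per-pixel render-based check would be more accurate still, but, as the post explains, far more expensive.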
354thFG_Drewm3i-VR Posted November 1, 2022 Lol, we don't even have multi-threading, FSR/DLSS, or DX12/Vulkan support, and you want ray tracing?! One thing at a time!