DirectX 12 and Xbox One
Read some of the forums out there, and the authority with which people express opinions on the ramifications of DX12 for the Xbox One, and you'd think a degree in computer science was just street-level education. Except that people are making vastly unjustifiable claims that expose their complete and utter lack of knowledge of the subject.
Will DirectX 12 on the Xbox One make the console faster? That actually depends on a large number of things, and on your definition of "faster". The components will run at the same speeds they always have (obviously). If you run the exact same code, optimized the exact same way, you should in theory get the exact same results. And if the PS4 had the exact same SDK and implementation, you should expect the PS4 to run those exact same processes faster.
That is one simple answer. If you can actually find an apples to apples comparison, the PS4 will always win. The PS4 has faster hardware where the two share common components, so if the exact same code is optimized the exact same way across both platforms (which means eliminating the eSRAM and any difference in instruction set support), there is quite simply no chance that the Xbox One can win.
Of course, the real world is not so cut and dried. There are other factors at play, including certainly the eSRAM and potentially differences in supported hardware-level instruction sets. So limiting yourself to a strictly technical apples to apples comparison of the hardware misses the point. What people, even hardcore gamers, seem to care about is perceived performance: how many FPS, at what resolution, combined with how good the image looks.
And that is also, unfortunately, where any chance of an apples to apples comparison disintegrates. The SDKs are different, there are subtle differences in the hardware beyond the common elements (such as the aforementioned eSRAM), and there are unknowns in any given scenario. For instance, what new hardware instructions does DX12 enable (if any), and does the Xbox One GPU support them? The other big one: how dependent are developers on the supplied SDK and its functionality, which could be further optimized?
The first is the biggest question. Does the Xbox One contain hardware support for new instructions which DX12 will enable? If the answer is yes, the Xbox One's effective performance could actually improve quite substantially.
To understand why this is such a crucial question, you actually need to understand what it means to support or not support an instruction set at the hardware level.
Basically, within a given CPU or GPU architecture there are core instruction sets (like x86, x64 and ARM). These represent the basics of what that chip needs to implement. Everything the chip can do must be describable using one or a combination of those instructions. But there are often functions which are used over and over again that require many instructions to complete and "waste" tons of cycles. The answer is to extend the supported instruction set and add hardware on the chip dedicated to processing certain advanced tasks. Thus began the era of extended instruction sets.
Extended instructions are generally processed by special dedicated logic gates in the processor which can take something that would otherwise require many computations and do it in one (or perhaps a couple, but always fewer than the equivalent without the extension). As a result, code with hardware support for an instruction can run as much as a couple of orders of magnitude faster than code without it.
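To make that concrete, here is a minimal C++ sketch using a real x86 example, the POPCNT extension. Nothing here is DX12-specific; the same principle applies to GPU instruction extensions.

```cpp
// Illustrative sketch: counting set bits with and without the x86 POPCNT
// extension. The same principle applies to GPU instruction extensions.
#include <cstdint>
#include <nmmintrin.h>   // _mm_popcnt_u64 (requires SSE4.2/POPCNT support)

// Without the extension: a loop of shifts, masks and adds -- dozens of
// basic instructions to count the bits in one 64-bit value.
int popcount_basic(uint64_t x) {
    int count = 0;
    while (x) {
        count += static_cast<int>(x & 1u);
        x >>= 1;
    }
    return count;
}

// With the extension: the chip has dedicated logic, so this compiles to a
// single POPCNT instruction (build with -mpopcnt or equivalent).
int popcount_hw(uint64_t x) {
    return static_cast<int>(_mm_popcnt_u64(x));
}
```

Same answer either way; the version backed by dedicated hardware just gets there in one instruction instead of a long loop.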
This can represent such a huge difference in performance that in many cases software vendors will simply refuse to support systems that lack certain instruction sets. And extensions are arguably even more relevant in GPUs than in CPUs. Special instructions are added for texture mapping, lighting calculations, tessellation and virtually everything else you can think of.
So, if you add support for new critical instruction sets in hardware and optimize code around them, it won't actually make the hardware physically any faster, but you can absolutely end up with scenarios where much more is processed in the same number of cycles. Or, put another way, the hardware is effectively faster. And this is the best way to improve effective performance, because it isn't at the cost of anything. The chip is doing the same work it was doing without those instruction sets, just MUCH more effectively.
To give an example of this reality in action: I had a 17-inch Alienware laptop with dual NVidia graphics cards in SLI supporting DX9. I later bought a laptop with integrated Intel graphics that supported DX11. In general, my massively more powerful Alienware crushed the laptop with integrated graphics when it came to gaming. But as games became more and more optimized for DX11, a funny thing started happening. In some games, or parts of games, my crappy integrated graphics would outperform my vastly more powerful SLI Alienware laptop. My Alienware laptop didn't get any slower, and the Intel graphics sure as hell didn't get any faster. But as games became more and more optimized for DirectX 11, the code for those new instructions had to be interpreted in software first, then broken into many more older instructions, before being processed by my DX9-era graphics card.
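That software fallback is the same pattern you can see on the CPU side. Below is a sketch of it in C++ for x86 with GCC/Clang; the function names are made up for illustration. When the chip implements an extension, one instruction does the work; when it doesn't, the same math has to be rebuilt from many more basic instructions.

```cpp
// Sketch of the detect-and-dispatch pattern (GCC/Clang, x86-64).
#include <immintrin.h>
#include <cstddef>

// Fallback path: a separate multiply and add per element, the way a chip
// without the extension has to do it.
static void mul_add_fallback(const float* a, const float* b, float* acc, size_t n) {
    for (size_t i = 0; i < n; ++i)
        acc[i] += a[i] * b[i];
}

// Hardware path: one fused multiply-add instruction covers 8 floats. The
// target attribute lets the compiler emit FMA code just for this function.
__attribute__((target("avx2,fma")))
static void mul_add_fma(const float* a, const float* b, float* acc, size_t n) {
    size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        __m256 vc = _mm256_loadu_ps(acc + i);
        _mm256_storeu_ps(acc + i, _mm256_fmadd_ps(va, vb, vc));
    }
    mul_add_fallback(a + i, b + i, acc + i, n - i);  // leftover elements
}

void mul_add(const float* a, const float* b, float* acc, size_t n) {
    if (__builtin_cpu_supports("fma"))   // does this chip have the extension?
        mul_add_fma(a, b, acc, n);
    else
        mul_add_fallback(a, b, acc, n);  // the "DX9-era card" situation:
                                         // same result, many more instructions
}
```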
So, if the Xbox One truly has hardware support for new DX12 instructions, then as games optimized for DX12 are released for the Xbox One, and if there are enough new heavily used instructions, it really could put the effective performance of the Xbox One GPU ahead of the PS4's. On the other hand, if the instructions are for more obscure calculations, they may have little to no impact in general.
To stress again: even in the best case scenario, where DX12 adds a whack of massively beneficial instructions which the Xbox One supports in hardware, this would not make the Xbox One physically any faster than it is. That would be apparent in any existing game. A game not optimized for the new instructions would execute the exact same instructions it did prior to DX12 and run at the same speed. The same game re-optimized (effectively just recompiled, without any code changes), on the other hand, might run drastically faster.
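The "recompiled without code changes" point is easy to demonstrate on a PC toolchain. Assuming GCC or Clang on x86 (the console toolchain obviously differs, but the idea carries over), the exact same source can be compiled against two different instruction-set targets:

```cpp
// scale.cpp -- untouched source, compiled two ways (illustrative):
//
//   g++ -O3 -mno-avx scale.cpp   -> scalar/SSE code, up to 4 floats per op
//   g++ -O3 -mavx2   scale.cpp   -> AVX2 code, 8 floats per op
//
// No code changes: the compiler simply emits the newer instructions once
// it is told the target hardware supports them.
#include <cstddef>

void scale(float* data, float factor, size_t n) {
    for (size_t i = 0; i < n; ++i)
        data[i] *= factor;   // auto-vectorized to match the chosen target
}
```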
Does it have that hardware support? I don't know; I haven't followed DX12 closely enough to even know what DX12 consists of compared to DX11.2. But as you can see, it could have a tangible impact on performance if the answer is yes.
The next thing in DX12 that was supposed to be a game changer is resource tiling. This is what was supposed to make that 32MB of eSRAM better than opting for GDDR5 for main memory. Tiling is effectively a resource usage optimization technique. Put another way: in games, things are often processed which don't need to be, and this is a waste of CPU and GPU power. Things like trying to fully render details on something so far in the distance that the user can't even see them. Tiling doesn't actually make the hardware run any faster, either physically or even effectively. It reduces the effort the system invests in areas of the screen where the detail is unlikely, or perhaps even impossible, to notice (or where the developer just doesn't care whether it looks pretty at a certain distance), resulting in a perceived performance gain simply by reducing the amount of work needed to render the scene.
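As a rough illustration of the idea, here is a conceptual sketch in C++. This is not the actual D3D12 tiled resources API; the function names are mine, and the tile math simply assumes the standard 64KB tile (128x128 texels at 32 bits per texel).

```cpp
// Conceptual sketch of tiling: keep only the detail a surface actually
// needs resident in fast memory, and skip the rest entirely.
#include <algorithm>
#include <cmath>
#include <cstdint>

// Pick a mip level from how big the surface is on screen: an object far in
// the distance maps many texels to one pixel, so full detail is wasted.
uint32_t selectMip(float texelsPerPixel, uint32_t mipCount) {
    float lod = std::log2(std::max(texelsPerPixel, 1.0f));
    return std::min(static_cast<uint32_t>(lod), mipCount - 1);
}

// Count the 64KB tiles (128x128 texels at 32 bits per texel) that a given
// mip level needs. Only those tiles would be mapped into fast memory, such
// as the Xbox One's 32MB of eSRAM; the rest of the texture stays unmapped,
// so the GPU never spends bandwidth on it.
uint64_t residentTiles(uint32_t width, uint32_t height, uint32_t mip) {
    const uint32_t tileDim = 128;
    uint32_t w = std::max(width >> mip, 1u);
    uint32_t h = std::max(height >> mip, 1u);
    return static_cast<uint64_t>((w + tileDim - 1) / tileDim) *
           ((h + tileDim - 1) / tileDim);
}
```

For a sense of scale under those assumptions: a 4096x4096 texture needs 1024 tiles (64MB) at full detail, but only 16 tiles (1MB) at mip 3, which is plenty for a distant object. That is the kind of saving that makes a small pool of fast memory like the eSRAM workable.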
Tiling then falls into that apples to apples problem. Tiling may be able to make the Xbox One run the same games at higher resolutions and faster frame rates, and even make them look better, than on the PS4. But in those cases it is because the Xbox One is actually doing less work to achieve the same effective result as the PS4. It wouldn't physically make the Xbox One any faster. But it could allow developers to get more out of it without needing hardware technically as fast as the PS4's.
And of course, there is really nothing (aside from an investment of time) stopping devs on the PS4 from implementing a similar solution in their own rendering engines, or stopping Sony from implementing an equivalent in their own SDK at some point down the road. But those are theoretical ideas which we'll avoid for now.
How much can the Xbox One get out of this? I don't know. The onus is likely on the devs to actually make use of it. So it matters whether or not they do make use of it, and probably also how good they are at doing so.
To sum up: DirectX 12 will likely not affect existing games at all (at least not without a patch), as existing games will not have been optimized to take advantage of any new instruction sets or of tiling. So it isn't a magic bullet in that respect. Without hardware instruction support, or serious optimization of existing instructions, it is unlikely there will be an improvement at the firmware level (effectively the hardware level). Tiling may have a significant impact if used, and used well. But exactly how much it can and will help has yet to be seen in a real-world example.
So where do we land? Well, effectively we know nothing until we get a number of examples of games that run on both the PS4 and Xbox One where the Xbox One versions have been recompiled to make use of DX12 and tiling. Until then, we can make assumptions all day long. Maybe it will make no impact overall, maybe it will level the playing field, and there is even a remote possibility that it will catapult the Xbox One into overall perceived performance supremacy.
I sincerely doubt the latter. It may give the Xbox One a small edge, but it has been a long time since a single generation of DirectX instruction changes has meant such a huge leap in gaming, and even that is only relevant if the instructions are implemented at the hardware level on the console.
Tiling offers some hope, depending on how easy the API ends up being to use. But it is a Band-Aid and won't likely give orders-of-magnitude improvement.