AMD on Gaming

This month we had a chat with AMD’s Nick Thibieroz, Ritche Corpus, and Robert Hallock.

The trio provided diverse perspectives on a wide range of ideas, including proprietary technology and open platforms, the future of graphics, and the inner workings of the graphics industry. Here’s the entire interview – long, but worth the read.

Thanks for speaking with us – could you guys quickly introduce yourselves, for the benefit of the audience?

Nick: I’m the Worldwide Operations Manager for Gaming Engineering at AMD, so I oversee the team whose work includes collaborating with game developers to make their games run faster and look better on our hardware.

Ritche: I’m the Director of Developer Relations. My team focuses on business engagement with developers and publishers, spanning marketing and sales.

Robert: I’m the PR Manager for high-end graphics and gaming here at AMD.

Glad to have you all here. Without further ado, first question – beyond gaming splash screens, tech companies like yours can often feel faceless to many gamers. What do you think distinguishes your company? What’s the face you’d like gamers to be familiar with?

Ritche: I’ll chime in, but I’m sure Nick will have something to add – from a business perspective, the splash screens signify our guarantee of the experience a customer is going to have when they play the game on our hardware. It’s a commitment that we’ve always stated, not just to the gamers who are our customers but also to the developers and publishers. Our intent is to improve the overall gaming experience on the PC whenever possible. Sometimes we do that by introducing new technologies, like TressFX Hair or our multi-monitor support, AMD Eyefinity, and we always work to ensure that the game performs and runs at its very best, 60 frames per second or greater. Those are some of the things we use to distinguish ourselves. We may not get splash screens in every game, but there are always other identifiers, such as logos on the boxes, in the manuals, or even free game bundles like Never Settle Reloaded.

Nick: I would say that one thing which is a clear differentiator for AMD compared to other companies is the fact that we are embracing open standards. Not everybody may know that, but it’s very present in the way we work with developers and the way we try to make a better experience for gamers around the world. Adopting open standards is really something that’s dear to us, because we think it encourages innovation in the creation of new features, and it really drives creativity.

What distinguishes AMD’s products on a technological level? A tech layman might simply look at which card will net the highest framerate for the lowest cost – what is the layman overlooking?

Robert: There are a couple of things which uniquely differentiate us from the competition. Ritche mentioned AMD Eyefinity technology, which is something we brought to the market in 2009. It was the first time a gaming graphics card could support more than two displays simultaneously. Since then, we’ve actually supported up to six monitors at the same time on all of our high-end graphics cards. There are competing solutions which support multi-display, but AMD still holds the crown for supporting the most displays at one time. We’ve worked very hard and very closely with game developers to include this feature in titles. And if you look at our AMD Gaming Evolved program, a program Ritche and Nick work on extensively, there are a lot of games coming out supporting Eyefinity. We also care a lot about giving enthusiast users multi-GPU performance – you can combine up to four of our highest-end graphics cards and use them at the same time. That’s AMD CrossFire technology. Another big thing we’re working on is our Graphics Core Next architecture, which is the heart and soul of not just our desktop graphics cards but also our mobile graphics cards, our workstation graphics cards, our new cloud rendering graphics cards, and it’s in the next-gen consoles as well. It’s become the fabric of a huge cross-section of the game industry, and going forward you’re going to see a lot of optimization for this architecture, because it’s so omnipresent in the game industry. That’s something unique to us, which nobody else can offer.

To follow up quickly on Eyefinity – what makes it special besides the number of monitors?

Robert: It allows you to combine four, five, six screens while, as far as Windows is concerned, they act as a single monitor. So, kind of the ultimate version for games is this Eyefinity 5x1 portrait setup, which is five monitors oriented in portrait mode, side by side by side. That’s something only we can do. We can do landscape or portrait mode, both have their pros and cons, and many games support this. This ultra-wide perspective, in an ultra-high resolution – it’s 20% larger than even these new Ultra-HD displays – that’s the kind of gaming experience that we alone provide. I see it as an opportunity for people to break into 4K-style resolutions for much less than 4K monitors go for now, and I see it as an opportunity for gamers to even exceed 4K resolutions, to get performance and an experience that they can’t get with anybody else or on any other platform.

Nick: On top of the unique scale we offer, the fact that we have hardware powerful enough to drive so many screens is something that makes us unique. If you look at all the benchmarks which measure the performance of multi-monitor games, you’ll see that in most cases, if not all cases, our cards come out on top at the highest resolutions. We have high-bandwidth, very efficient memory controllers, which are able to drive games at a high resolution and a high framerate compared to the competition.

Ritche: Eyefinity is just another example of us leading the charge in terms of bringing new technologies to the forefront. That, again, is our commitment to the gamers and the publishers. A lot of what we do is collaborate with the industry, so when we work on new technologies it’s based on feedback and desires from our gaming partners and publishers. Eyefinity is just one example of that come to fruition. Another example would be TressFX, where we brought realistic hair into Tomb Raider. That’s technology which is an open standard; it’ll work on all hardware.

I’m glad you brought open standards back up – has AMD embraced OpenCL as its parallel computing solution?

Nick: Yes, OpenCL and DirectCompute. OpenCL is available on multiple OSes, including Linux, while DirectCompute is available on Windows platforms.

Robert: Both are critically important, but for different reasons.

Nick: Right. They’ve got very similar functionality; gamers wouldn’t be likely to perceive a difference, and most of the functionality that developers want is available on both. What’s most important is that developers have access to whatever they need on their platform, and that developer tools are available across the board and on multiple platforms. That’s something that our support of open standards enables us to provide.

What other open standards do you support for game developers?

Nick: We tend not to restrict the use of our technology or hardware, for multiple reasons that I’ve already stated. Instead of giving you an example I’ll give you a counterexample: Nvidia chose to restrict PhysX. It’s an API restricted to running on their hardware, but there’s absolutely nothing in that API which couldn’t be done with OpenCL or DirectCompute. That’s not the approach that AMD takes. It’s very important to us to expose new features and new ways to make games better, like TressFX for instance using open APIs so that everybody can benefit from it, including our competition.

Robert: To Nick’s point, another open standard that we have is AMD HD 3D Technology, which is our stereoscopic 3D. The approach we took was to release a quad-buffer SDK, which allows developers and hardware vendors to design equipment which is simply compatible with our 3D pipeline. Nvidia chose the alternative approach, where they do have an SDK but if there’s a monitor that needs to be compatible with the Nvidia solution, it is, as a rule, not compatible with the AMD solution. There’s no technical reason why that is, it’s a completely arbitrary decision. We like to give hardware vendors the ability to create monitors that will work with any solution, glasses that will work with any piece of hardware, games that will work with any piece of hardware. We just think it’s better for everyone if there’s more choice in the industry. Especially consumers. Giving them the option to choose from multiple glasses, multiple monitors, and to know that those things will work no matter what hardware platform they ultimately take them to – that’s very important for consumers at the end of the day.

It’s very clear how that’s to the benefit of consumers – could you say more about the specific ways it benefits developers?

Robert: So, a game developer is really interested in being efficient with their development process, both from a cost and a time perspective. Obviously they’re on really tight timetables, and they all want to keep their budgets down. To go back to the TressFX example, that was the first time anybody had given a realtime hair effects system to a character in a videogame. Lots of people have put together demos to show what it could be like one day, but this is the first time it’s shipped with a real game. And correct me if I’m wrong, Nick, but it’s based on a DirectCompute effect.

Nick: That’s correct.

Robert: The advantage of designing this in open DirectCompute is that anyone can take advantage of it. Anyone with a graphics card who fires up Tomb Raider can see this effect in action. So, we’ve made a conscious, obvious contribution to the fidelity of not just Tomb Raider but perhaps future games by creating this technology that anybody can take advantage of. We could have designed it in such a way that it only runs on AMD hardware. That is an option for anybody that creates a technology like this. But our decision was to create it in such a way that anybody can benefit from it. It’s a compute-based application.

Nick: From a pure technological perspective, DirectCompute basically grants full access to the graphics hardware. That’s obviously very important to developers. To date, I think DirectCompute has been used in a fairly limited manner, but I do think going forward there will be more and more use of this API. Just because you have full access, again, to the graphics hardware, and you have full access to the shared memory, to basically share workloads between threads. You’ve got a lot of synchronization between threads that is fully controlled by the developer. So really, DirectCompute is a programmer’s dream. It’s quite complex, to be fair, so sometimes it will take developers time to adapt, but once they’ve taken the leap of faith and start to understand DirectCompute and OpenCL and what they can help with, we’ll see more and more usage of lots of different effects based on those APIs. Especially with the new consoles. We know that the PlayStation 4 and Xbox One are based on the same architecture we have on PCs. With all those console developers embracing the power of these consoles via DirectCompute, we’re going to see a lot of those games, once ported to the PC, being able to share the same benefits from research and development.
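To make Nick’s point about shared memory and thread synchronization a bit more concrete, here is a minimal sketch of the pattern he’s describing. It’s written in CUDA rather than DirectCompute purely for brevity – the concepts map almost one-to-one, with HLSL’s groupshared memory and GroupMemoryBarrierWithGroupSync() corresponding roughly to CUDA’s __shared__ and __syncthreads() – and the kernel, names, and sizes are illustrative, not taken from any AMD or game code:

```cuda
// Illustrative sketch only -- CUDA standing in for DirectCompute.
// Each thread block loads a tile of data into fast on-chip shared memory,
// synchronizes, then cooperatively reduces the tile to a single sum.
#include <cstdio>
#include <cuda_runtime.h>

#define BLOCK 256

__global__ void blockSum(const float* in, float* out, int n)
{
    __shared__ float tile[BLOCK];          // "groupshared" memory, in HLSL terms

    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (gid < n) ? in[gid] : 0.0f;
    __syncthreads();                       // barrier, like GroupMemoryBarrierWithGroupSync()

    // Tree reduction within the block: threads share partial results through
    // shared memory, synchronizing at every step under the developer's control.
    for (int stride = BLOCK / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        __syncthreads();
    }

    if (threadIdx.x == 0)
        out[blockIdx.x] = tile[0];         // one partial sum per block
}

int main()
{
    const int n = 1 << 20;
    const int blocks = (n + BLOCK - 1) / BLOCK;

    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, blocks * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    blockSum<<<blocks, BLOCK>>>(in, out, n);
    cudaDeviceSynchronize();

    float total = 0.0f;
    for (int i = 0; i < blocks; ++i) total += out[i];
    printf("sum = %.0f (expected %d)\n", total, n);

    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

The reduction itself is beside the point; what matters is that the developer explicitly decides which data lives in fast on-chip memory and exactly when threads synchronize – the kind of low-level control Nick calls a programmer’s dream.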

Robert: You know, two other compute-based technologies that contribute pretty strongly to modern games are diffusion depth-of-field and global illumination. Diffusion depth-of-field allows you to simulate the focal length of a camera or an eye: if you stare at something up close, objects in the background are blurred out, and vice versa. This effect is done best and most efficiently with DirectCompute. Global illumination, a new feature we helped pioneer, improves the quality of lighting in a scene by taking light sources and helping them realistically bounce off of objects – such as from a mirror to a metal surface and then onto the ground. Until DiRT Showdown, a game that came out 14 months ago now, global illumination had never made it into a game, certainly not at that magnitude. So, creating the list of objects that light sources should bounce off of is DirectCompute-accelerated, and it’s only through DirectCompute that the procedure can be done efficiently. There are all kinds of effects making it into modern DirectX 11 games that heavily leverage the DirectCompute capabilities of a graphics card like the Radeon HD 7000 series to do the job. It’s just a powerful way to get these effects done.
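Robert’s two examples are full production techniques, but the core idea behind any depth-of-field effect – blur each pixel by an amount driven by how far it sits from the focal plane – is easy to sketch. The kernel below is a deliberately naive, gather-based version, again using CUDA as a stand-in for DirectCompute; it is not AMD’s diffusion depth-of-field (which works by solving a diffusion equation rather than gathering samples), and all of the names and constants are made up for the example:

```cuda
// Toy depth-of-field sketch -- CUDA standing in for DirectCompute.
// Not AMD's diffusion DOF; just the basic idea of a per-pixel blur radius
// driven by distance from the camera's focal plane.
#include <cuda_runtime.h>

__device__ float blurRadius(float depth, float focalDepth, float strength)
{
    // Further from the focal plane -> larger blur radius (in pixels), capped.
    return fminf(strength * fabsf(depth - focalDepth) / fmaxf(depth, 1e-3f), 8.0f);
}

__global__ void depthOfField(const float3* color, const float* depth,
                             float3* out, int width, int height,
                             float focalDepth, float strength)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int radius = (int)blurRadius(depth[y * width + x], focalDepth, strength);

    // Naive variable-radius box gather. Production code would use
    // shared-memory tiles or separable passes to cut memory traffic.
    float3 sum = make_float3(0.0f, 0.0f, 0.0f);
    int count = 0;
    for (int dy = -radius; dy <= radius; ++dy) {
        for (int dx = -radius; dx <= radius; ++dx) {
            int sx = min(max(x + dx, 0), width - 1);
            int sy = min(max(y + dy, 0), height - 1);
            float3 c = color[sy * width + sx];
            sum.x += c.x; sum.y += c.y; sum.z += c.z;
            ++count;
        }
    }
    out[y * width + x] = make_float3(sum.x / count, sum.y / count, sum.z / count);
}
```

The interesting detail is in the last comment: the thread-group cooperation and shared memory described above are exactly what let a compute shader do this kind of wide gather more efficiently than a traditional pixel shader.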

AMD is big in the mobile space, as well – in terms of helping developers, are there any synergies you offer those developing simultaneously for mobile and other platforms?

Robert: Probably the biggest synergy point is the APU. The APU is now driving into the tablet space – VIZIO is a great company that just launched an awesome Windows 8 Pro-based tablet that runs a 4-watt AMD APU, and it has all the capabilities offered by much larger APUs for both desktops and notebooks. So, for the first time, there’s a common graphics architecture and a common CPU architecture that spans the PC industry from the smallest to the biggest devices. That’s going to enable developers to use a common language, a common engine. That’ll really make things easy for them, and to that point, those same architectures are also in the console. We mentioned earlier that Graphics Core Next is the fabric of every kind of device out there, and that’s going to usher in an era of commonality that reduces costs and time-to-market for game developers.

Any thoughts on Nvidia’s SHIELD project?

Robert: SHIELD is certainly innovative for Nvidia, but I think the bigger story in terms of gaming is that consoles are still very much alive. And according to the industry’s three biggest players – Microsoft, Sony and Nintendo – this form factor is still the future of gaming. So I think the biggest story with regards to platforms is that in conjunction with those three companies, we’re still raising the bar for graphics quality by bringing DX11-class graphics solutions to console gaming for the first time, and enabling these consoles with massive video RAM buffers for the first time. Instead of 512 MB or 256 MB with the current consoles, we’re talking 8 GB with next-gen consoles. We’re coming to a point where the overall fidelity of the gaming industry is about to take a big leap, and that’s something that’s going to be unique to the PCs and the consoles, not the SHIELD.

You feel that there will be a big leap on the PC, not just the consoles?

Robert: Absolutely. When you empower game developers with more powerful hardware, when you raise the baseline, they’re empowered with a new quality of tools that they’ve never had before. Instead of dealing with seven-year-old hardware when programming a game, they’re programming with here-and-now 2013 hardware. The graphics industry has come a long way in the past seven years, so now these developers will be able to use these modern tools that they haven’t had access to before. All these DirectCompute effects that we’re talking about couldn’t be done on the old consoles, so by bringing up the baseline in a really powerful way, you’re going to see PCs benefit from that as well.

Nick: To give another example, our graphics hardware in the Xbox One and the PlayStation 4 takes the form of APUs, which streamlines the communication between memory, CPU and GPU. If you look at the DirectX 11.2 preview which was just released with the Windows 8.1 preview, already we’re seeing new features being exposed which basically allow developers to leverage this computing model. That’s obviously very important going forward – it will allow new types of algorithms to be developed, and we’re already seeing how consoles and our APUs are leading the way, and how those functionalities are now enabled in an official API from Microsoft. I think it’s quite exciting – going forward, we can expect a lot of advances in this area.

I asked that question because I’ve heard some disagreement within the industry – some claim that the highest-end PCs will already offer the most graphically advanced experience, even at the outset of this console generation. I take it you disagree.

Nick: When a new console generation is released, you’ll always have fanboys and lots of diverging opinions about which is more powerful between the PC and consoles. I would say that right now, the consoles are in a very good position. However, you can always build a very expensive PC with multiple GPUs working in parallel and obviously achieve better performance if you have the budget for it. That’s definitely possible. So, it really comes down to a question of budget and how much money you want to spend. As I said, the current graphics APUs in the PlayStation 4 and Xbox One are very powerful. The fact that they’re tied to GDDR5 memory in the case of the PS4, or ESRAM in the case of the Xbox One, gives them additional advantages compared to the PC. And, again, the platform architectures of these consoles open the door to new types of algorithms which may not immediately be available on the PC. So, even though the hardware may be mid-to-high range compared to what you can find on a PC today, the additional functionality you can find on a console makes them a bit more interesting, at least in the short term, compared to PCs.

Robert: It’s really hard to judge a timeline. One claim I see often is, “it’s a PC in a box.” Well, not really. There are PC components in a console, but the way they’re pieced together, the platform architecture that they’re based on, and the software it runs are in many ways a world apart from PCs. So, judging the timeline is extremely difficult to do, because we still don’t know what secrets developers will be able to unlock in two years or in five years or eight years. So, I think it’s in many ways too early to tell.

Ritche: I think this generation is different in the sense that, as Robert mentioned, while there are a lot of PC parts in it, it’s still a unique console in itself. One thing that will help this generation, I think, is that there’s a lot of familiarity with much of the technology in these consoles, since it came from the PC. We have the opportunity to see much better games within the first two years of the consoles launching, rather than having to wait until the midpoint of the cycle to start seeing a lot of that power harnessed, as we did with previous generations.

Let’s shift gears a bit, and go behind the scenes a little – regardless of AMD’s specific contributions, your products at the end of the day need to do much the same job as those of your competitors. So what’s competition like, in that kind of industry? How do you get the upper hand?

Robert: Well, the easiest way to put it… say you’ve got two car engines, representing our architecture, Graphics Core Next, and, for example, Nvidia’s architecture, Kepler. In the automotive industry, there are many ways to skin a cat. Hyundai has a V8, Ford has a V8, Chevrolet has a V8 – they all have V8 engines, they all combust gas, and they all drive wheels, but when you get down to brass tacks they all do it a little bit differently. They’re all ultimately running the same “code,” too: gasoline fuel, petrol fuel, but the way they do it is different. Once you take that engine and start applying it to games, you can optimize the way the game is running – say, the compression ratio or the piston design. Part of it is making sure that your architecture is ready for the way games are going to be designed a couple of years out – when we sat down with this Graphics Core Next architecture several years ago, we thought that DirectCompute would become very important in gaming, and looking at 2012 and 2013, we were right. The prominence of DirectCompute has become quite high. But then you have to go a step further, and say, “Okay, how can we help game developers optimize for the way we do our engine, without crippling the other guy too?” Obviously we don’t want to impair performance for anybody else. That’s the basic idea of how you can get an edge without speaking a completely different language from somebody else.

Ritche: Using the car analogy, outside of the engine there are other things you can do, utilizing suspension and things like that. Those are some of the things that we can enhance in terms of specific features on the cards, like the multi-monitor support, or TressFX. These are things that we’ve enhanced – we’ve enabled those features to run better and to be more easily executed. And we were able to accomplish that because of the feedback we’ve received from developers. It’s the same key point we’ve been coming back to – that’s how we work differently, that’s how we provide benefits to all our partners. When we develop technology on our hardware, we don’t just incubate it in a box by ourselves. We actually solicit feedback, we go out there and ask game developers. What is it that they lack? What is it that they need to develop their games in a better, more realistic fashion?

Nick: Exactly. We thought DirectCompute was going to be important to developers, and it turned out we were right. We optimized our architecture to be efficient when it comes to local memory access, and all those aspects of DirectCompute which are essential to achieve best performance. Another point I wanted to add is that unlike our competitors, we don’t have to rely on artificial advantages in the form of locked APIs. We aren’t afraid of direct competition, because we believe we’ve got the best hardware. We believe we’ve made the right choice when it comes to optimizing our hardware for the right features. It shows – we don’t have to rely on PhysX or any other artificially locked API.

In that case, on the note of anticipating developers’ desires – what do you see on the horizon? Where do you see graphics tech in five years?

Nick: That’s a very good question. I think we’re seeing only the very beginning of global illumination techniques like the ones Robert mentioned earlier. There are many ways of achieving global illumination, many of which we may not even have discovered yet. Most of them will probably utilize DirectCompute, alongside very large memory structures, like voxels for instance. We’re going to see a lot of development in that area in the next two to three years. On top of that, now that we’ve pioneered realtime hair simulation and rendering in a real game, I think TressFX or similar techniques will start to appear in a lot more games. That’s something I’m personally looking forward to seeing.

Robert: I think Nick is right that lighting is something of an undiscovered country – having many complicated light sources is sort of new in the graphics space, and something that can still be explored. I also think that textural complexity, both in terms of size and quantity, is something we’re going to see a lot of going forward, especially now that consoles have these massive VRAM buffers to store all those textures. If you go look at a DirectX 9 game or a DirectX 10 game you’ll see a lot of the same textures repeated over and over, but there are now technologies and technological capabilities that can support greater texture diversity. Geometric complexity is also still something we can pursue. Not necessarily with characters, which are becoming quite detailed, but with the environment. We can now make environments more interactive, more lush, and more detailed. I would definitely say that the era of hair-splitting won’t be upon us for a good while yet – obviously, TressFX hair is just one example of a feature that these new architectures allow us to explore. Lots of stuff is always around the corner.

Definitely. There are still light-years to go not only in terms of visual fidelity, but also in compute power – the kinds of situations you can simulate...

Nick: Yes, absolutely. DirectCompute is not restricted purely to rendering; it can also be used to lighten the load on the renderer. For example, you can use it to parse the tree structure which contains all your objects and all your lights. You can use it to do a fast, coarse render of your scene just to determine what the occluders for your scene are, because it lets you isolate the objects that are actually visible on your screen and only spend time on those. AI is another avenue where DirectCompute is going to be more and more useful. Definitely physics simulation as well, and lots of other areas which are not directly tied to graphics but help to make graphics better.
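As a rough illustration of the “isolate what’s visible” idea, here is a toy visibility-filtering kernel, once more in CUDA as a stand-in for DirectCompute. It only frustum-culls bounding spheres and compacts the survivors with an atomic counter – real engines layer occlusion testing and hierarchy traversal on top of this – and every struct and parameter name here is hypothetical:

```cuda
// Sketch of GPU-side visibility filtering -- CUDA standing in for DirectCompute.
// Each thread tests one object's bounding sphere against the six frustum planes
// and, if visible, appends the object's index to a compact list.
#include <cuda_runtime.h>

struct Sphere { float x, y, z, radius; };
struct Plane  { float nx, ny, nz, d; };   // plane equation n.p + d = 0, normal pointing inward

__global__ void cullSpheres(const Sphere* objects, int numObjects,
                            const Plane* frustum,          // 6 planes
                            unsigned int* visibleIndices,
                            unsigned int* visibleCount)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numObjects) return;

    Sphere s = objects[i];
    bool inside = true;
    for (int p = 0; p < 6; ++p) {
        float dist = frustum[p].nx * s.x + frustum[p].ny * s.y +
                     frustum[p].nz * s.z + frustum[p].d;
        if (dist < -s.radius) { inside = false; break; }   // fully outside this plane
    }

    if (inside) {
        // Atomically reserve a slot and record the index, producing a tightly
        // packed list of visible objects for the later rendering passes.
        unsigned int slot = atomicAdd(visibleCount, 1u);
        visibleIndices[slot] = i;
    }
}
```

The output is exactly the kind of compact “what actually needs drawing” list Nick describes, built without any round trip to the CPU.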

Thanks again to Nick Thibieroz, Ritche Corpus, Robert Hallock, and to AMD in general. We’re looking forward to continuing to work with you. Stay tuned for the next Spotlight!
