
The technology of Killzone Shadow Fall

Inside Guerrilla Games - a next-gen deep-dive special.

It was an irresistible opportunity. Just before the launch of Killzone Shadow Fall, Guerrilla Games offered us the chance to visit its Amsterdam studio - to meet and talk with every major design and technology discipline within its walls. As journalists we are almost always outsiders looking in, and not privy to the design process itself, so this unprecedented level of access would make for a very different article. During the course of our visit, we swiftly discovered that next-gen isn't just about better graphics and sound, although clearly those play an important part; it's about the opportunities this new level of power offers to developers, and how it allows them to more fully express themselves as creators.

A key part of this is the procedural generation of in-game assets - animation and audio are the two systems that benefit most. Killzone Shadow Fall has seen a radical change in how these game elements are handled: instead of programmers writing bespoke high-level code to order for the creative team, Guerrilla has built a lower-level system that gives designers direct access to raw game data. When designers sculpt new assets, those assets are translated into procedurally generated code executed on the PS4's x86 processors.

"On this project we've tried to move away from lots of work being done by the sound programmer and to a direction where if sound engineers have an idea for how a certain system is supposed to work they should be able to get something like that up and running by themselves," says senior tech programmer, Andreas Varga.

"Previously sound engineers were the guys who make these wave files, and the sound programmer would put them in the game and make them all work and connect everything together. Now we're moving to a point where everything is state-driven. We can do all of these things in a tool and the actual sound programming now happens at a lower level."

In essence, coders now create the tools that the designers use, rather than writing specific code based on an idea that may or may not work. Killzone Shadow Fall is a remarkable audio experience precisely because of this new workflow. A key example is how the sound of the player's gunshots is shaped by the current environment, just as it would be in real life.

"Everywhere in the game you have materials - walls, rocks, different things - that for geometry purposes are tagged with what material they are... We bounce the shock waves of the gun off every surface in the game, all the time."

One step beyond 'HDR' audio: Killzone Shadow Fall's surround audio system has commonalities with DICE's acclaimed surround sound set-up, but adds MADDER - Material Dependent Early Reflections - which shapes the sound of your gunfire according to the surrounding environment.

"Everywhere in the game you have materials - walls, rocks, different things - that for geometry purposes are tagged with what material they are," says lead sound designer, Lewis James. "Now in the real world, when you fire a gun, the sound is just a byproduct of what what's happening inside the gun. That's the only part of the actual event that games tend to care about normally - the sound of the shot.

"But there's all sorts of things happening - a pressure wave that comes out of the gun interacts with the surfaces it touches when it has sufficient force. So that's what we do. It's a system called MADDER - Material Dependent Early Reflections. We bounce the shockwaves of the gun off every surface in the game, all the time. That defines the sound. The point is that there should be no illusion that it's reverb - because it isn't. It's real-time reflections based on geometry."

MADDER is an excellent example of the new relationship between designers and programmers, illustrating how giving lower-level access opens the door to more raw creativity.

"For the first MADDER prototype we did, we only had one ray-cast so it would give us the closest wall, the material and the angle of that wall. So yeah, we tried it, it sounds amazing and no other game has something like this but actually we need it for all the different walls," says senior sound designer, Anton Woldhek.

"It's half an hour of work for Andreas to just expose it for multiple angles. For me, it's a lot more work because I have to make the content work with all the different angles and make sure that our end of the logic makes sense. It gives you a lot more freedom to try ideas we always had but would always runs into having to convince coders that this is worth the effort when if you're not really sure if it's worth the effort, it's a lot harder to do because their time is so precious."

"The objective of the coders is to make sure that the designers don't need to talk to them. Not because we don't like designers, but if the programmer is in the creative loop, then mostly the programmer's iteration time is a lot lower than anybody else's."

In essence then, all of the assets the designers generate are now processed by a low-level system and actually run as code in-game - something only possible thanks to the increased CPU power available on PlayStation 4. The same procedurally driven workflow extends to animation, where Killzone Shadow Fall represents something of an "unseen" revolution: there's far more variety in the way characters look and move in the new game precisely because of it.

"What we have is a basic skeleton, then we add the procedural skeleton on top. And then we have something that is called PD - positional-based dynamics. That is basically the format that we've written now," explains senior tech artist, Daniele Antinolfi.

"You have to think of the procedural bones just like an armour for the characters, so we can share the same characters from a rigging point of view but we can add on top of it, like an armour of joints for adding better behaviour. From character to character, we add this armour to create a better, believable character. In this way we can create more robust, believable characters. The armour of the procedural joints is fast and easy."

Natural and believable

Essentially, there's a base animation rig that is extremely difficult for the developers to build upon just for the sake of a small additional detail on a single character. In the past, new character variations would be shelved rather than going through the effort of adjusting and adding to that shared base skeleton. With the new procedural workflow, each character has an additional layer of code running, adding that character-specific "armour" to the base skeleton - again, code executed in real-time on the x86 cores. On top of that, position-based dynamics allows close-range simulation of cloth and other elements, so they react realistically to the forces in motion.
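Position-based dynamics itself is a published technique (Müller et al.), so its inner loop can be sketched independently of Guerrilla's implementation. This toy version - all names illustrative - simulates particles joined by distance constraints, the essence of cloth flapping against a character:

    #include <cmath>
    #include <vector>

    struct Vec3 { float x = 0, y = 0, z = 0; };
    static Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    static Vec3 operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3 operator*(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
    static float Length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

    struct Particle {
        Vec3 position, previous;
        float inverseMass = 1.0f; // 0 pins the particle to the skeleton
    };

    struct DistanceConstraint {
        int a, b;         // indices of the two particles
        float restLength; // distance the solver tries to restore
    };

    void StepPBD(std::vector<Particle>& particles,
                 const std::vector<DistanceConstraint>& constraints,
                 Vec3 gravity, float dt, int solverIterations)
    {
        // 1. Predict new positions with simple Verlet-style integration.
        for (Particle& p : particles) {
            Vec3 velocity = p.position - p.previous;
            p.previous = p.position;
            if (p.inverseMass > 0)
                p.position = p.position + velocity + gravity * (dt * dt);
        }
        // 2. Repeatedly project the positions back onto the constraints.
        for (int i = 0; i < solverIterations; ++i) {
            for (const DistanceConstraint& c : constraints) {
                Particle& pa = particles[c.a];
                Particle& pb = particles[c.b];
                Vec3 delta = pb.position - pa.position;
                float len = Length(delta);
                float wSum = pa.inverseMass + pb.inverseMass;
                if (len <= 0 || wSum <= 0) continue;
                float correction = (len - c.restLength) / (len * wSum);
                pa.position = pa.position + delta * (correction * pa.inverseMass);
                pb.position = pb.position - delta * (correction * pb.inverseMass);
            }
        }
    }

Pinning particles to procedural joints (inverseMass = 0) is what lets the simulated layer ride on top of the animated skeleton - and, as Perry Leijten describes later with the zippers, severing a constraint set removes it from the calculation entirely.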

Technical director Michiel van der Leeuw clarifies the problem and how the new system works:

"The problem is that we have 2500 animations for the character. It's impractical for that guy to have his own animation set. We'd need to add those bones to the root skeleton the whole game uses, then re-export the whole animation set," he explains.

"And people would need to come in on the weekend when nobody else is working. If you break the animation set-up of the entire game, everything would break. Some of these things hadn't been exported for like, half a year, because the game grows and grows. We'd have these ridiculously complex discussions about things like making a heavy armoured guy with a tank on his back, but we'd need to add two joints. Very often we'd just scrap the character and go back to game design and say, 'no we can't do it - at this point in the game we can't export the animations'.

"But now with the logical skeleton being so clean, these guys can do very expressive things on top of that. It looks cool but it means we can go back to game design and say 'yes' a lot more often if they want a guy who's a little bit taller, a little bit fatter or has a special shield."

"A lot of this stuff isn't noticeable, but it makes the characters believable - and that is what we really want. Guys like me that work with characters want to make them believable."

Here we see the basic 'logical' skeleton on the far right of the screen, the procedurally generated additions in the centre, with the full model featuring position-based dynamics on the left.

It's another example of how next-gen power opens up new opportunities for developers, making for a more natural and believable game.

"This is what a lot of people don't see when they talk about next-gen. The fact you wouldn't see it is because normally you are limited in design choices - to not have a coat, to not have all of these things. You wouldn't see them," van der Leeuw continues.

"You get characters that are all the same - you get a soldier and a slightly different soldier. Now the boundaries are removed. It's not the fact that a guy with a coat is next-gen but... to put a guy with a coat, a guy with a gas mask, a guy with a canister on his back into the same game because we have the time to do it."

"We're happy with this system," adds junior tech artist Perry Leijten. "In the beginning we didn't know how much of the position-based dynamics we could use, so we went all out, right down to doing zippers. Later on we figured out that wasn't going to work, so broke the connection with the position-based dynamics, re-export it, and it's not even calculated."

"A lot of this stuff isn't noticeable, but it makes the characters believable," concludes Daniele Antinolfi. "And that is what we really want. Guys like me that work with characters want to make them believable."

Guerrilla's pioneering spirit in exploring this level of new technology is all the more impressive given how much of Killzone Shadow Fall would have been developed on PC hardware while the PS4 technology was still work-in-progress (indeed, the Killzone engine also has a PC version - though as PS4 development scaled up, the parallel PC build got slower and slower by comparison). Add in the pressures of producing a launch title, and Guerrilla's choice to essentially rip out key systems and rebuild them from scratch was a bold one - but the approach has clearly paid off.

"It made sense to be part of the same workflow that everyone else here is using... we benefit from what everyone else benefits from and that really makes us more part of the development team than we were before."

"As designers, it made sense to be part of the same workflow that everyone else here is using," says sound man Anton Woldhek. "The sound engineers are pretty much in the centre of the studio so we can interact with all the other departments very easily. And with these tools we're part of the same workflow too which means that we benefit from what everyone else benefits from and that really makes us more part of the development team than we were before."

Michiel van der Leeuw sums up the new approach succinctly:

"The objective of the coders is to make sure that the designers don't need to talk to them. Not because we don't like designers, but if the programmer is in the creative loop, then mostly the programmer's iteration time is a lot lower than anybody else's iteration time," he explains.

"A shader artist can paint faster or a sound designer can make sound faster than a coder can recompile the engine, start the game and listen to something. Whenever a coder is in somebody's creative loop, the creative loop is relatively slow. If we make a relatively technical system that allows someone to do much more, we move the programmers out of the loop and people can experiment without having to go to a coder."

The challenges of a physically based rendering model

From the "unseen" to the complete opposite: despite the radical change in workflow behind the scenes, what is patently obvious from the first seconds of the campaign is that Killzone Shadow Fall truly is a beautiful game. There are two key elements to what makes it look so special over and above the increased poly counts you would expect from a next-gen title. Materials and lighting are quite sublime: two systems working in concert to create an aesthetic that is already being adopted by a great many other next-gen titles currently in development.

To a great extent, lighting is defined by the materials utilised within the environments. The same 'roughness' on objects that informs the MADDER audio system also helps define how light bounces off the materials, in combination with other aspects of their make-up. Creation of these materials involved a significant change in the way the artists worked.

"From the early days of making Doom 1, everyone would paint a shadow into their textures. At some point in the next generation, people would have to learn not to paint anything into their textures. They have to make a normal map and an albedo map," Michiel van der Leeuw recalls.

"Now everyone needs to learn how to draw your roughness, your albedo [the reflective power of the material], your bump and your specular intensity. You have to think about the material in its naked form. You have to remove the bias and be a lot more analytical. That was a lot of the training that people had to do."

"On the PlayStation 3, we'd have a maximum of 7000-8000 building blocks and now we're seeing that on PS4 that we're pushing 26,000 building blocks. We'd have two LOD steps for these elements, now we have seven LOD steps."

The first 35 minutes of Killzone Shadow Fall, complete with performance analysis. Concentrating on the game's first few levels demonstrates the phenomenal job Guerrilla has done, with lighting and materials in particular standing out.

Every object in the game needed to be created with the new materials-based lighting system in mind - no small task considering the scale of the new game. Guerrilla has created much larger levels in Killzone Shadow Fall, splitting them up into different scenes so that multiple environment designers can work on stages in parallel.

"For a scene - what we call a section - on the PlayStation 3, we'd have a maximum of 7000-8000 building blocks and now we're seeing that on PS4 that we're pushing 26,000 building blocks. We'd have two LOD steps for these elements, now we have seven LOD steps," says lead environment artist, Kim van Heest.

"It's become a lot easier to make levels look beautiful. We can really focus on getting as many triangles as possible out there... If you look at the square metres of stuff art did, I think it's roughly four times more than it was on Killzone 3. And that means we can build these environments - but lighting still needs the same attention."

With the materials element in place, it becomes a question of how light interacts with all those different attributes, and how that affects the creation of the environments. Killzone Shadow Fall uses a system heavily dependent on realistic real-time lighting, supported by pre-computed data, the two blending together to produce some stunning results.

"Lighting in our game consists of one key element which is basically the real-time dynamic lighting which is supported by pre-baked lighting for static and dynamic objects as well as a reflection system," says senior lighting artist, Julian Fries. "We have pre-calculated reflections plus real-time reflections but it all starts with the dynamic lighting features we have for dynamic lights and the rest is basically supporting it."

In this case, "the rest" primarily consists of directional lightmaps, localised cube-maps and a volumetric grid of light probes. But the utilisation of those light probes is part of the real-time component, and it's processed in a rather novel manner.

"Every pixel on-screen that doesn't use lightmaps, we search through this giant grid of light probes and choose the most appropriate lighting," says lead tech coder, Michal Valient.

"We have pre-calculated reflections plus real-time reflections but it all starts with the dynamic lighting features we have... and the rest is basically supporting it."

Real-time lighting is sampled and blended from a mammoth array of light probes dotted around the level - a small sampling of which you can see here. Each pixel draws upon a blend of the light probes in order to produce the most realistic effect.

"Previously we had a couple of thousand light probes per level and they would pick and blend the three closest ones per object. Now we have about two orders of magnitude more. We approximate and find the closest four light probes per pixel so there's no boundary," adds Michiel van der Leeuw.

"The boundaries between where the light probes end and where the light maps start is diminishing and converging - we'd like to get to get to the point where we have one or two more orders of magnitude more light probes - a couple of million per level and do away with lightmaps."

The ray-traced reflection system

Shadow Fall's reflection system also contributes to the often spectacular lighting effects work. Michal Valient previously explained the basics in Guerrilla's post-mortem of the PlayStation Meeting demo, but was on-hand to go into more depth during our visit to the studio.

"What we do on-screen for every pixel we run a proper ray-tracing - or ray-marching - step. We find a reflection vector, we look at the surface roughness and if you have a very rough surface, that means that your reflection is very fuzzy in that case," he explains.

"So what we do is find a reflection for every pixel on screen, we find a reflection vector that goes into the screen and then basically start stepping every second pixel until we find something that's a hit. It's a 2.5D ray-trace... We can compute a rough approximation of where the vector would go and we can find pixels on-screen that represent that surface. This is all integrated into our lighting model."

"It's difficult to see where one system stops and another begins. We have pre-baked cube maps and we have real-time ray-traced reflections and then we have reflecting light sources and they all blend together in the same scene," adds Michiel van der Leeuw.

The system of lighting fallbacks is exceptionally cool, though the ray-casting technique admittedly has limitations. Light sources behind an object wouldn't be available to the reflection system, but the algorithm knows that - and it can drop back to pre-computed lighting data to pick up the necessary info. It won't be quite so mathematically correct, but to the human eye, it's more than good enough. Materials-based lighting is swiftly becoming the standard on next-gen, and Killzone Shadow Fall is our first taste of this new technology in action on PlayStation 4.

"If you make everything absolutely physically correct, we don't have any possibility to fake something. We're not making something photo-realistic or hyper-realistic, we're looking to make an image that's as pleasant as possible."

Performance analysis of the multiplayer section of Killzone Shadow Fall, using pre-launch footage supplied by Guerrilla. We see frame-rates between 40-60fps, averaging out around the mid-point across the run of play. You certainly feel the additional fluidity and the reduced input lag - useful for a competitive online shooter.

"I would say that this is going to become the standard for next-gen games. Everyone will do a variation of it because it's sort of like the same as SSAO [screen-space ambient occlusion]. It's a very crude approximation to a natural phenomena... it's something that's sort of plausibly looks like it's real," explains van der Leeuw.

"It is technically feasible and adds so much compared to what it costs that everybody will have their own variation of it. Everybody will be experimenting with something similar. Everybody who's doing a physically more plausible, energy preserving lighting model will have a roughness value and it naturally fits the lighting model to have a reflection on everything, every pixel - you know, the cone of the reflection."

It also opens up some interesting new challenges and opportunities when sculpting a level:

"We had one level where initially we wanted these dark materials and we found that these dark materials didn't respond that well to lighting, so we put in lighter materials to show off the lighting better," explains Kim van Heest, before revealing how the system also opens up new ways for the lighting team to improve the way a level looks.

"The way we worry about lighting is for a lighting guy to come up to us and say, 'can you open up that roof so the light will come in?' and it will look better, and you'll have nice shadows on the wall. It's more about what would look cool and less about technical tweaking. "

Global illumination, anti-aliasing and ambient occlusion

Recently, Nvidia revealed a tech demo showing full real-time global illumination - a mathematically precise model that has long been seen as the future of in-game lighting. In our recent Crytek interview, Cevat Yerli expressed concern that a full mathematical solution to lighting could see designers lose too much control of the look of their games - a school of thought Guerrilla subscribes to.

"It's absolutely true," explains senior lighting artist, Julian Fries. "If you make everything absolutely physically correct, we don't have any possibility to fake something. We're not making something photo-realistic or hyper-realistic, we're looking to make an image that's as pleasant as possible.

"So for example, you want to have extra [light] bounce on one room and you don't want it in another because of contrast and brightness, we have the possibility to bake these things as long as we are not using this type of system. Iteration may be faster but it costs more performance-wise, of course. The quality would probably drop - the benefit of pre-baking certain things is that if you don't need it for your gameplay element in this area and you need a static light that is not adjustable, it's much more efficient to bake this sort of thing because of performance or quality I guess."

Baking and faking come with their own challenges, though - pre-computation requires a colossal amount of processing before final assets can be introduced into the game, demanding an innovative solution at Guerrilla.

"At night the entire company would become a render farm, and still we had three guys pushing the renders through. The quality is most important."

You may have seen Guerrilla's profiling tools via screenshots in the past. Now you can see both the CPU and GPU profilers in action, revealing the sheer level of detail the developers have available when tracking which systems are consuming how much processing time.

"A lot of people dropped light maps in the previous generation because they are so annoying to work with and we experienced it again on this project, where we had an incredibly long turnaround time for light maps," Michiel van der Leeuw explains.

"The game is incredibly big compared to the previous game... we had to turn the entire company into a massive render farm. When people left their computer at night at 5pm it would say, 'hey, do you want to enter the render group?' and if you didn't respond, or didn't opt out, you'd opt in - and at night the entire company would become a render farm, and still we had three guys pushing the renders through. The quality is most important - the atmosphere in the game."

Adding to the quality is Guerrilla's chosen anti-aliasing solution. Back at the February reveal, FXAA was utilised, with the firm hinting at a more refined TMAA solution under development at Sony's Advanced Technology Group. TMAA has since become TSSAA - temporal super-sampling anti-aliasing.

"It is still very similar to FXAA. It's not FXAA, it's a similar filter but much higher quality," explains Michal Valient.

"It's still screen-space - TSSAA - we've lost the sub-sampling of depth. We had that sub-sampling as well, reconstructing edges with that but it was costly and didn't really add that much so we removed that," adds van der Leeuw.

"We collect samples from the previous frames and try to blend that into the mix on a basis that is a derivative of FXAA. It's something that blends in a little bit of temporal math, trying to get something from previous frames."

"We're very careful in what we use from the previous frames. If it's a good frame, why throw it away? We basically gather as much of the pixels from the previous frames that you can apply to this one as well and re-project them," adds Valient. "We get more samples per pixels. Normally you'd need to apply super-sampling or multi-sampling to get a similar effect. We try to get the data from the previous frame."

Re-using existing data across multiple systems is a common developer technique - Guerrilla applies it not just to anti-aliasing, but to ambient occlusion too. And this isn't your standard screen-space approach:

"It's called directional occlusion. It's a by-product of our reflection system. We use the calculation for reflections to determine how much of a space is actually visible from a certain point and we re-use the same information for AO," explains Michal Valient. "AO, if you think about it, is the same as reflection but with a much wider lobe. You want to know about the entire hemisphere around any point, how much is visible, so you can re-use that information. I wouldn't say it's SSAO because there's way too many tricks in it to make it look good."

Guerrilla Games, GPU compute - and the future of PlayStation 4

GPU compute - seen as the area where PS4 is best equipped to give long-term results - has also been explored by Guerrilla for its first game. In the PlayStation Meeting demo, only memory defragmentation was handled by compute. In the final game, colour corrections and "force-fields" are also handled by the graphics core. Force-fields are another example of a fairly unnoticeable system that helps make the game look more natural.

"So this is not a system that you can see in some of those demos, like pouring water into a tank and it makes nice waves. That's a bit too uncontrollable. We wanted something artistically controlled," explains Michal Valient.

"These force-fields are objects that artists can place. They place it around a character's feet, for example, so when a character walks through the bush, it moves a bit. We attach it to an explosion. If you throw a grenade, for a fraction of a second it creates a force-field. If you shoot a rocket, it creates a force-field. This actually allows a much better level of control for the artists and it also performs better, and from the technical point of view, what we do is that around you as a player we have this huge volume of I don't know how many thousands points."

"We still have a long way to go. We're very early in the cycle. The PS4 is very easy to work with and I think we got quite far, but whatever we do next will be even better because we always constantly push the technology."

"Every frame, we actually simulate these forces, so we gather the force-fields and for each point in this volume around you we do the simulation - and then anything can use it," Valient continues. "The plants use it in a shader, the trees use it, if characters have capes, they can use it to jitter the capes if there is an explosion. It's a small thing which you consider natural, but also one of the things that makes the world a little more believable."

It's the kind of workload that benefits immensely from running on a massively parallel processing system - something PS4 specialises in via GPU compute.

"We try to utilise as much [of the GPU] as possible. There are synchronous things which need to happen in sync with the rendering, but there are asynchronous things which can happen at any point so we try to utilise everything," says Valient.

"We've only touched the surface. We picked the force-fields, the colour correction system, memory defragmentation - that's what we use for the texture streaming. Those are a couple of things we picked as isolated systems that really work well with compute, so we tried those. Most of our other stuff works with regular pixel shaders, even some post-processing effects, so we will be improving.

"It's natural that we pick the post-processing effects and turn them into compute as that's much more efficient, so that's our next step - but we only had so much time for this game. I'm pretty sure we can improve a lot - there's a lot of room to explore."

So, what of the future? When we visited Guerrilla, the team was still on the high of having gone gold with Killzone Shadow Fall, and while there was some talk of further enhancing existing systems like its innovative solution to animation, we could only get a broad outline of where the firm is heading for its next PlayStation 4 title.

"We still have a long way to go. We're very early in the cycle. The PS4 is very easy to work with and I think we got quite far, but whatever we do next will be even better because we always constantly push the technology," says Michal Valient.

"On Killzone 2, we were thinking OK, we went really far with this. Killzone 3 showed that we could pretty much double our budgets because we got so much better with the technology that we could use more. I'm pretty sure we have much more room to improve here."

This article was based on a trip to the Guerrilla Games studio in Amsterdam. Sony paid for travel and accommodation.
