
Inside Immortals of Aveum: the Digital Foundry tech interview

Behind the scenes on the first non-Epic game to ship with all Unreal Engine 5 next-gen features.

Image credit: Ascendant Studios

From a technological perspective, Ascendant Studios' Immortals of Aveum is a highly important game. Excluding Fortnite, created by Epic Games itself, it's the first triple-A game to ship with all of Unreal Engine 5's cutting-edge features in place. That's the microgeometry-based Nanite, capable of astonishing levels of detail, along with Lumen - a ray tracing-based global illumination solution - plus virtual shadow maps. Prior to Fortnite, The Matrix Awakens was the only console-based UE5 release we saw with those features - it looked amazing, but with obvious performance concerns.

Not only is Ascendant using all of UE5's features, it's also aiming for 60fps on consoles. It's a highly ambitious game then, but as yesterday's DF review revealed, GPU compute can only go so far, so image quality necessarily has to take a hit, with FSR 2.2.1 used to upscale from a base 720p resolution on Xbox Series X and PlayStation 5.

Going into the review process, we were highly motivated to find out more about what makes the game tick, so when EA offered us the opportunity to talk with the development team, we jumped at the chance. Alex Battaglia and Tom Morgan spoke with Ascendant Studios' Mark Maratea, Julia Lichtblau and Joe Hall - and here's that conversation.

Immortals of Aveum: the Digital Foundry console tech review. We'll have a look at the PC version soon. Watch on YouTube

How long has Immortals been in development - and was the idea always to produce something akin to a magic first-person shooter, or did it evolve during the project?

Mark Maratea: The company was formed to make this game, and it took a significant amount of time to staff up... then Covid hit, so the [time spent on] staffing became a very interesting time period. This game is five years old, for all intents and purposes.

Julia Lichtblau: A couple of months after Brett [Robbins] actually founded the studio, he handed Dave Bogan, the senior art director, and me this 60-page pitch document, which ostensibly was a magic shooter. It had the basic premise of [protagonist] Jak and this big story arc. Obviously, things did sort of change, like the intricacies of the combat, but Brett knew he wanted to make this magic shooter game because it was something that didn't exist before, and it was a game he really wanted to play and wanted to make.

Based on the timing, I'd imagine that the game didn't start on Unreal Engine 5 as it didn't exist - so did it start on Unreal Engine 4, and did it migrate to UE5? What was that like?

Mark Maratea: The project started using UE 4.20, which came out in July, and the company was formed in August [2018]. When the company first started, everything was done in blueprints with no engine code changes, as if it was a smaller indie project to prototype it out... This could have changed; the reality of the business decisions around this is that if someone stepped up as a publisher three months in and said, "you're coming, but you're using Frostbite or whatever", that would have [required] a pivot. We are lucky enough to have a very generous funding source... an amazing partner, so that's given us the autonomy to make these decisions. We started out with vanilla 4.20, vanilla 4.21, 4.23 custom engine, 4.25, 4.26, UE5 preview, UE5 actual, UE5.1, and now we are shipping as UE 5.1.1.

So you mentioned that you started off with blueprints and minimal code changes, but moved to a custom engine with 4.23 - what were the larger changes that made it diverge?

Mark Maratea: You know, the beginning of all of these changes tended to be "...and then a designer or an artist asked for this impossible thing, and so we had to make an engine change." For the ruins level in particular, we had to do some magic on the UE4 version because of the way the level was laid out; we actually had to move the sun in the sky. This doesn't sound like a big deal until you remember that UE4 is [based on] baked lighting, so that actually became a huge deal and required a lot of work.

This game has always been 10 pounds of game in a five-pound bag. Our combat system has evolved and become the genesis of a lot of our engine changes, as we built our combat system on the Unreal Gameplay Ability System, which was designed for a MOBA. It's a good framework for a network-replicated ability system and we use it for every gun-type thing in the game - rapid-fire weapons, aimed and controlled weapons - so we really went outside of the bounds of that system. So the first huge set of changes was taking the Gameplay Ability System and making it work the way we wanted combat to work.

The next big thing was world composition - how does the game stream [data], where does everything live? We made huge changes in that system in order to accommodate these extra-large worlds and transition through them and keep them within our memory budget... and then the magic of UE5 happens, and world composition doesn't exist anymore because they deprecated it. As you can imagine, that was a transition for all of us - engineering, all of our content teams, level artists and level designers in particular. That was us saying "here's the new bible of how to build a game" and them going "first off, you guys need to learn to write... and then second of all, that's crazy." It was a huge studio-level innovation project: we had to prototype what the right way was, establish the rules, figure out what Epic didn't do right, change how our game worked, rewrite the rules again - and that was an ongoing process up until even the last six months.

We took 5.1 over Christmas and I spent my Christmas vacation integrating [that version]... when we moved up to 5.1, we couldn't load any assets in the game so I had to basically rewrite the entire asset loader in two days to not do that so that we didn't lose all of our progress.

A look at how the console versions compare in identical content.

That's the cost of staying up to date then, isn't it?

Mark Maratea: Yup, but it was absolutely worth it.

Did you cherry-pick any shader pre-caching [features] from 5.1.1 or 5.2?

I'm actually in the middle of a Slack discussion about whether we need to pull something from 5.2 to help with this. So PSO caching in 5.1 works, but if you go into the code, the first line is a comment that says, "this doesn't work, don't use this", and it has early outs to go to other sections of code. They were trying to do an adaptive pre-cache, which caused memory leaks and eventually crashes when releasing assets - which is bad - but in 5.2 this is all fixed. Unfortunately, we're right on the cusp between the 5.1.0 and 5.2 versions, [which each] work a certain way, and 5.1.1, which they tried to make work but doesn't quite.

So we've had to change some things around - we are using PSO pre-caching combined with the DX12 normal cache. There is a throwaway line in [Epic's] documentation that says, "when you do PSO caching, by default it will cache every PSO it can find and then load them all at once." And that "load them all at once" part [is problematic]. We have a lot of shaders in our game - we built an incredible, different system that uses dynamic branching, and we actually rewrote part of the shader pipeline, which gives us 3-4ms off our rendering times, a real serious deal. The downside is that it increases shader permutations, so we have 5.7 million shaders with full permutations... that generates 563 thousand PSO objects, so when you start the game up it tries to load half a million PSOs.
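To picture why "load them all at once" hurts, here's a minimal sketch of one way to spread that work out instead - batching PSO compilation across frames rather than firing off half a million requests at boot. This is purely illustrative, not Ascendant's or Epic's code; the class and type names are invented.

```cpp
// Hypothetical sketch: compile a bounded number of cached PSOs per frame so
// a ~563k-entry cache doesn't all hit the disk and compiler in one go.
#include <cstddef>
#include <queue>
#include <vector>

struct FPsoCacheEntry { /* serialized pipeline state description */ };

class FBatchedPsoPrecacher
{
public:
    FBatchedPsoPrecacher(std::vector<FPsoCacheEntry> Entries, std::size_t InBatchSize)
        : BatchSize(InBatchSize)
    {
        for (auto& Entry : Entries) { Pending.push(std::move(Entry)); }
    }

    // Called once per frame: compiles at most BatchSize PSOs this tick.
    void Tick()
    {
        for (std::size_t i = 0; i < BatchSize && !Pending.empty(); ++i)
        {
            CompilePso(Pending.front());
            Pending.pop();
        }
    }

    bool IsFinished() const { return Pending.empty(); }

private:
    void CompilePso(const FPsoCacheEntry&) { /* hand off to the RHI/driver */ }

    std::queue<FPsoCacheEntry> Pending;
    std::size_t BatchSize;
};
```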

Does that explain what I saw when I loaded up the game without the .ini file? [note: PC review code shipped without a crucial .ini file, causing initial issues that were quickly resolved] I'm used to seeing the CPU go full-on crunch [when doing shader pre-compilation], but here it was like the CPU was having trouble even reaching full utilisation.

Mark Maratea: Yeah, it's all I/O. It's kind of the worst version of I/O because you're loading the PSOs from the cache into memory, processing them, and saving them back to your C: drive. And so you're bypassing all of the optimisations because it has to go to a different drive - most people don't have their Steam library on their C: drive anymore - and you're bypassing DMA things and DirectStorage doesn't work, so you take a bit of a penalty.

You did mention DirectStorage - are you using this on Xbox Series consoles or PC?

Mark Maratea: Yes, if [DirectStorage] is there, Unreal will automatically try to leverage it. We also leverage async compute, which is a wonderful speed boost when coupled with DirectStorage as it allows us to load compute shaders directly on the GPU and then do magical math and make the game look better and be more awesome. It allows us to run GPU particles off of Niagara without causing an async load on the CPU and doesn't cause the game thread to hit.

I played the first 25 minutes of the game and didn't have any shader compilation issues, which is rare for an Unreal Engine game.

Mark Maratea: You will have no PSO hitching in the entire game. I'm saying that out loud right now, publicly, it's recorded.

What was it like implementing Nanite in the game, and how did it compare to the usual LOD authoring? Was it easy to work with?

Julia Lichtblau: On the art side, we actually started building this game in Unreal 4, so a lot of our early kits were started along that traditional pipeline. When we switched over to Nanite and UE5, there was a lot of excitement because we could pack much more detail into the assets themselves... [Originally] there was a high poly [model] done in ZBrush, which was baked down into the classic texture and material to get the shape of it while maintaining a low poly count. But when we switched over to Nanite, suddenly we were able to just bump back to the high poly assets. Figuring out how to unwrap something of that density was definitely a big kind of rethinking of the workflow, because now you're dealing with so many more polygons, in the millions... Once it was in the engine, it was pretty incredible just to start adding millions and millions and millions of polys into the engine and not have it completely crash.

There were a few times on import when the Nanite flag would not stick, and we'd be re-importing a multi-million poly bookshelf and wondering why the engine was stuttering when we looked in that direction. You could see Unreal not knowing how to handle that, but as soon as you check that box, it just handles everything. It's doing the dynamic LODs and the clusters, to the point where our workflow was sped up so significantly, and it was pretty amazing to jump into the Nanite view and see the clusters be adjusted in real time and see how it's optimised. It allowed us to build this game much faster with that small of a team.

Immortals of Aveum doesn't just leverage all key Unreal Engine 5 features, it also targets 60fps but can dip under load. Here's the lay of the land across all consoles.

How did you decide what was done via Nanite?

Julia Lichtblau: When we first switched over to Unreal [5], we turned Nanite on everything we could. I think 5.0 wasn't [compatible with] foliage, that came a bit later, but as soon as we were able to have Nanite foliage we turned it on. Then we started to pull back from that and [asked] "does this really need to be Nanite?" We were encountering some assets that [were problematic due to] their construction and the way that the UV shells were set up, so we had to rework some of those assets to make it workable with Nanite or change it up entirely and go back to the traditional method. We couldn't use [Nanite] for things that would move, like flags. We really threw everything into the Nanite bucket, to learn from it. Now we've been able to build a huge Confluence [corporate wiki] page on how Nanite should be handled going forward.

How do Nanite and virtual shadow maps translate to consoles like PS5 and Series X/S?

Julia Lichtblau: On the art side, we haven't really had to adjust anything, but perhaps Mark has been doing stuff on the back end to make that work so us artists don't have to worry as much!

Mark Maratea: I would say in some ways it works better on consoles, weirdly enough. Nanite virtualised geometry is very stream-heavy, it is all about disk I/O. So UE5's rewritten I/O pipeline and the optimisations they made for NVMe SSDs - it's designed to work on consoles. On PC, I have no idea what anybody's I/O bandwidth is... The only downside on console is that when you're using Nanite, you really need to use streaming virtual textures, and you really need a very large virtual texture pool. Consoles have fixed memory, but [a single graphics card] can have more memory than the PS5. So optimising for both of these is really difficult.

Nanite does a good job of using the memory it has available, but the exception to that is that virtual texture pools in UE cannot be resized - they have to be initialised at engine startup and cannot be touched again, [which provides] fully allocated contiguous memory, which is wonderful from a performance standpoint, but [you can have problems where, for example] there's a goblet way off in the distance, two pixels, and it needs one piece of one texture [from a 500MB pool allocation], and you don't get any of that back until the texture goes away. PC doesn't care [if you run out of memory]; worst case, it goes into virtual memory. Console goes "I don't have virtual memory, I'm done." And it won't crash, but it will cause substantial issues. This caused what was internally known as the infamous landscape bug, where you would just walk into certain parts of the game and it would look like someone had painted an anime landscape on the ground, because it couldn't allocate for the virtual texture pool.
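To make the failure mode concrete, here's a minimal sketch of a fixed, contiguous pool allocated once at startup, assuming behaviour along the lines described above - it is not the engine's actual allocator, and the type and member names are invented.

```cpp
// Illustrative only: a fixed-size pool that is never resized after init.
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

class FFixedVirtualTexturePool
{
public:
    explicit FFixedVirtualTexturePool(std::size_t PoolBytes)
        : Backing(PoolBytes), Cursor(0) {}   // one contiguous allocation at engine startup

    // Returns an offset into the pool, or nothing if the pool is exhausted.
    // On PC the engine could spill to virtual memory; on console the caller
    // has to cope - which is how "landscape bug"-style artefacts can appear.
    std::optional<std::size_t> Allocate(std::size_t Bytes)
    {
        if (Cursor + Bytes > Backing.size())
        {
            return std::nullopt; // even a two-pixel goblet's single tile request fails here
        }
        const std::size_t Offset = Cursor;
        Cursor += Bytes;
        return Offset;
    }

    // Memory only comes back when textures are released - simplified here to a full reset.
    void Reset() { Cursor = 0; }

private:
    std::vector<uint8_t> Backing;
    std::size_t Cursor;
};
```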

Nanite seems to have made life easier on the art side; is there a point in the game you're especially proud of, a showcase moment for what you can do with Nanite you wouldn't have been able to do without UE 5.1?

Julia Lichtblau: Every level, we really have to push that. There's this one level with a giant colossus, this giant mech, and it has this very curved kind of geometry with a lot of detail; inside, every little rivet is there and slits [in the] floor are modelled in, so that you can see all of these crazy moving, cylindrical, high-poly assets that we were just able to keep adding more and more detail to... Nanite has just opened the world to making levels that much more beautiful, because we're no longer having to rely on normal maps and that sort of fakery that falls apart [at a certain viewing angle]. That [issue] doesn't exist anymore, because the geometry is actually there... you can get up really close to it and it will still have all of that curvature and detail to it. So it's been really cool to experiment with a variety of different textures, surfaces, shapes and architecture styles. It was really fun on the art side to really push that across the game.

Joe Hall: With the VFX in UE5.1, they introduced transparency with Nanite which we definitely needed. There's a library level where this metal needs to erode away based on the progression of the player, so those are Nanite assets that actually erode away with emissives and particle effects and utilising the fallback mesh within it. It's really impressive.

Immortals of Aveum's official launch trailer. Watch on YouTube

We're going to have to capture these moments for our coverage...

Mark Maratea: To be honest, the first 10 minutes of the game after the theatre cinematic, we do the pan of the Seren Underbridge - that's all real-time, in-game, fully lit Nanite geometry in a world partitioned area. That thing's huge - I think a lot of us forget just how breathtaking it is; every other game would have [made it] pre-rendered. Real-time lighting, windmills on these buildings spinning in real-time, animated meshes... every time I look at that, I pick up a new detail. Our art team's just phenomenal with what they've done with UE5. It skips uncanny valley, it goes right into "yeah, that's a normal city."

With Nanite, you can use replication, rotation and scale of the exact same object to build areas - so how do you introduce variation in the environments? Is there a wear and tear system? Is there blendable detail to add variation to certain assets?

Julia Lichtblau: We don't have a wear and tear system; we have heavily used different material instances to add [for example] different paint colours to buildings, and a whole suite of decals that allow you to have, for example, certain areas be more crumbling stucco, or dirt slashes, or graffiti or that kind of thing. A lot of it was just adding [hand-crafted] geometry - take a building and add a bunch of props around it. We did build out these slum blueprints that had a lot of interchangeable props, so these didn't start hand-placed - but we were able to go in and add that artistic detail, that sort of lived-in feel, to do some environmental storytelling, so I feel like that let us add some variety to most of these assets.

With 60fps on consoles, are you using software Lumen? And with PC, do you get software Lumen automatically, or does it choose between hardware and software Lumen based on your configuration?

Mark Maratea: Yes [to software Lumen on consoles]. [On PC] it's currently software; we have options for both. This gets back into the PSO caching issues - it turns out that as soon as you turn on hardware [Lumen], it doubles your shader permutations because it builds the hardware and software versions, and I decided we shouldn't do that to people right away.

One of the things you can do with PSO caching to avoid some of these problems is [use] a masking system. So I need to go back through and recapture all of the [10 million] PSOs in the game. And I need to instrument the game with a uint64 bitflag that says, this is stage one, this is stage two, this is stage three, throughout the entire game - and then do the precache, in order, at different times, at different levels. So I need to build that little system and shim that in, then we'll be able to put hardware [Lumen] on without it causing a five-minute shader rebuild in the beginning of the game.
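As a rough illustration of that masking idea - our own sketch, not code from the game - each PSO could carry a uint64 bitflag of the stages that use it, so precaching only compiles the entries a given chunk of the game actually needs. The enum, struct and function names below are invented.

```cpp
// Hypothetical stage-masked precache: tag each PSO with the stages that
// reference it, then only compile the PSOs whose mask overlaps the stages
// about to be played, instead of the full cache at boot.
#include <cstdint>
#include <vector>

enum EGameStage : uint64_t
{
    Stage_Prologue = 1ull << 0,
    Stage_Seren    = 1ull << 1,
    Stage_Colossus = 1ull << 2,
    // ...one bit per stage, up to 64
};

struct FMaskedPso
{
    uint64_t StageMask = 0;   // which stages reference this pipeline state
    // ...serialized PSO description would live here
};

void PrecacheForStages(const std::vector<FMaskedPso>& AllPsos, uint64_t WantedStages)
{
    for (const FMaskedPso& Pso : AllPsos)
    {
        if (Pso.StageMask & WantedStages)
        {
            // hand off to the driver/RHI for compilation
        }
    }
}
```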

With hardware Lumen, with all of its distant detail, are you going to run into CPU issues?

Yes. The draw thread gets pinged very heavily. The biggest change from software to hardware Lumen is that it goes from 200m to 1000m. There's a surprising secondary problem when you start adding light sources from five times the distance away, which is you're now getting reflections of lights from five times the distance away. Lumen really, really likes to handle reflections on everything, and we have a lot of lights, so now you start playing with other parameters to stop having too many reflections on all your shiny materials. Julia's team loves shiny materials, we have a lot, so it's a constant balance between having too few and too many reflections. If you optimise in the wrong direction, it either becomes muddled or it becomes like a glitter sparkle fest, neither of which are awesome. So we're threading the needle really tightly to [balance frame-rate and visual fidelity]. That requires a lot of testing, and you can't break other things while doing that.

Our current min spec is an RX 5700 XT and RTX 2080. I assume the updated specs are somewhere in people's hands; we've actually been able to lower minimum specs a little. We've been pushing really hard to drop that down, well outside of anything that supports RTX. The side effect of that, obviously, is we need to make sure that software [Lumen] on PC does not go bad as we begin to optimise for hardware [Lumen].

Coming back to Lumen, it goes up to the ultra setting for global illumination and reflections. Where do the consoles fall within these settings?

Mark Maratea: They're basically at medium, targeting 1080p 60fps with upscaling.

Ascendant Studios discuss the way they designed the world of Immortals of Aveum. Watch on YouTube

It's incredible to see the game supporting Nanite and Lumen out of the gate on consoles at 60fps. Was that always part of the plan? Was there a moment where you considered going for 30fps?

Mark Maratea: We spent the first two years chasing our visual target lock [rather than an fps target], with the art team leading that charge. In parallel, we had the combat team and [game director] Brett working, and Joe's team making sure that the visuals of combat met the design goals. Once those two things firmed up, we had a machine running consistently at 45-50fps in dev mode - and Brett was playing that and said, "OK, no, I don't like 30 anymore. This game needs to be 60, combat doesn't feel right at 30." And so as a team, we pivoted.

We've since eliminated a lot of those hitches, which is what's allowed us to move our min specs down a little bit. I don't recommend anyone do this, but if someone was to throw [a GTX] 1060 at this game, it would actually run. It's not optimised for that; it's also not an 8GB card, so you would have to be really careful. Unreal gets very mad if you don't have enough virtual shadow memory for the resolution you're running at. If you're running at 1080p, it's not such a big deal, but as a PC elite exclusive 4K gamer, I can tell you it is a very big deal when I run out of shadow memory. That's the technical side of this, but this is really an art question. Once we decided on 60, both Julia and Joe's teams went down some awesome paths to get the RT running at that point.

What about other console-equivalent settings? Are PS5 and Series X at medium for most settings, or are they able to go to ultra for anything?

Mark Maratea: Despite [having performance] parity, Series X and PS5 handle things differently. Async compute works really well on one but not as well on the other, which changes the GPU burden. Part of the console tuning [process] caused us to build the performance tool we have on PC. We charted out every single rendering variable that exists in the Unreal tuning system, all of the possible ranges, and we ran the game [with every combination of settings], 17,000 times. And we understood the performance and visual trade-off of all of these things. Then we sat down with the art department and got into a happy medium where we have what I consider to be one of the best-looking console games ever created that runs at a very very good frame-rate.
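For a sense of what a sweep like that could look like in practice, here's a minimal, hypothetical sketch: enumerate every allowed value of every tunable rendering variable and record a sample for each combination. It's not the studio's tool - the types and the RunSample callback are invented - but it shows how thousands of runs fall out of a handful of variables and ranges.

```cpp
// Illustrative settings sweep: walk the cartesian product of all tunables and
// invoke RunSample once per combination (launch the game, log ms/frame, etc.).
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

struct FTunable
{
    std::string CVarName;            // e.g. a shadow or reflection quality variable
    std::vector<int> AllowedValues;  // the range the art/tech teams agreed on
};

void SweepSettings(const std::vector<FTunable>& Tunables,
                   const std::function<void(const std::vector<int>&)>& RunSample,
                   std::vector<int>& Current, std::size_t Index = 0)
{
    if (Index == Tunables.size())
    {
        RunSample(Current);          // one of the ~17,000 runs in the team's case
        return;
    }
    for (int Value : Tunables[Index].AllowedValues)
    {
        Current.push_back(Value);
        SweepSettings(Tunables, RunSample, Current, Index + 1);
        Current.pop_back();
    }
}
```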

The PC graphics menu calculates a score for your CPU and GPU. What is this based on?

Mark Maratea: Epic built a synthetic benchmark program [that we use] to pull real [user-facing] numbers, so that's where that comes from. CPU is essentially single-core performance, GPU is full-bore everything GPU performance. A min-spec CPU is somewhere around 180 or 200, ultra is around 300; a min-spec GPU is around 500, ultra starts from around 1200, which is where a 7900 XT or a 4080 shows up.

The confession is that this came out of our tuning for consoles, which, if you read between the lines, means I was having a shower a couple of weeks ago and went "it'd be really cool to build this performance tool". Now, all this data is coming in because of that, [which lets us] expose data [on various graphics setting costs] to users... [If] somebody's got a new processor or video card, that gives us new data points and that can change the numbers. This is basically us aggregating a massive amount of data over a wide variety of hardware and then making really good guesses.
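To make those score bands concrete, here's a small hedged sketch of how numbers like these could map onto suggested presets. The thresholds come from Maratea's rough figures above; the functions themselves are invented for illustration.

```cpp
// Hypothetical mapping from benchmark scores to suggested presets.
#include <string>

std::string SuggestGpuPreset(int GpuScore)
{
    if (GpuScore >= 1200) return "Ultra";   // roughly 7900 XT / 4080 territory
    if (GpuScore >= 500)  return "Medium";  // around min spec and above
    return "Low";                           // below the published minimum
}

std::string SuggestCpuPreset(int CpuScore)
{
    if (CpuScore >= 300) return "Ultra";    // strong single-core performance
    if (CpuScore >= 180) return "Medium";   // min-spec band (~180-200)
    return "Low";
}
```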

Is this future-proofed against future hardware, or does it still involve a lot of manual tweaking?

Mark Maratea: Today's version is very manual, but the version you're going to see will be substantially better. If this game sells where I hope it sells by day 60, it'll actually count for various upscaling algorithms, different resolutions, RT on and off, and make essentially ML predictions on where frame-rates are going to go.

That reminds me of the Windows Experience Index [introduced with Windows Vista]. I was always so sad that games didn't end up using it because I thought it was a really good idea. Your blog mentions using FSR 2 on console - what made you choose FSR 2 versus TSR?

Mark Maratea: Performance. We're using FSR 2.2.1, we are [on the] bleeding edge [version] from AMD. It has substantially better performance on the upscale, and TSR's upscaling has significantly more ghosting than DLSS or FSR does. We're constantly talking with AMD and Intel and Nvidia about how to minimise problems - for each of their individual GPUs, they have their compatibility labs running all of this - and we spent a lot of effort working with them. It's a little bit harder to have Epic release an engine change to [fix TSR problems].

What about specific resolution targets on consoles - is it FSR 2 on ultra performance targeting 4K, or is it a higher setting?

Mark Maratea: On consoles only, it does an adaptive upscale - so we look at what you connected from a monitor/TV standpoint... and there's a slot in the logic that says if a PS5 Pro comes out, it'll actually upscale to different quality levels - it'll be FSR 2 quality rather than standard FSR 2 performance.
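As a loose illustration of that adaptive logic - again, a sketch under our own assumptions rather than the game's actual code - the upscaler quality mode could be chosen from the detected output resolution, with a branch reserved for more capable future hardware. The enum and function are hypothetical.

```cpp
// Hypothetical console-side choice of FSR 2 quality mode.
#include <cstdint>

enum class EFsr2Mode { Quality, Balanced, Performance };

EFsr2Mode ChooseConsoleUpscaleMode(uint32_t OutputHeight, bool bHighEndConsole)
{
    if (bHighEndConsole)      return EFsr2Mode::Quality;     // e.g. a future "Pro" tier
    if (OutputHeight <= 1080) return EFsr2Mode::Quality;     // smaller upscale factor needed
    if (OutputHeight <= 1440) return EFsr2Mode::Balanced;
    return EFsr2Mode::Performance;                           // 4K output from a ~720p base
}
```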

So if you play the game on a 1080p screen, you could potentially have different performance to playing on a 4K screen?

Mark Maratea: Yup.

For the dynamic branching shaders you mentioned, can you explain the differences between base UE5 and what you're doing?

Mark Maratea: Yes, so I want to credit [this] to Joe Weyland, this is his baby. This is like a Siggraph white paper level thing. We've been working on this for about 3.5 years. We have a hierarchical system of increasing complexity as you build out your shaders - you can imagine it's like the basic [parameter] is "is it shiny", and then there's a sub-child which is "is it shiny or rough" or whatever. So with this kind of system, there are a lot of instructions you don't need; they are going to evaluate to "don't use"... but the way the pipelining works, the GPU still has to evaluate them, which seems like a waste of GPU resources.

So we built a smart dynamic branching system that allows us to pre-prune node paths that are not going to be used for a particular material. It's a run-time decision, which allows us to do a lot of things in the editor where people can immediately tweak something and see the performance and visual changes in real time. But those decisions are ultimately then run at run-time. This allows us to have different branches with IHV-specific extensions, like Nvidia-only shader extensions where we don't even run the check on an AMD card. So that nets us, depending on scene complexity, somewhere around 2-5ms. This is a bunch of very smart people spending a long period of time to build their life's work and frankly it's amazing.
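Without seeing the implementation, here's a simplified sketch of the general technique being described: rather than compiling a separate shader permutation per feature combination, the per-material decisions are packed into flags the shader branches on at run-time, and paths that can never apply (such as vendor-specific extensions on other hardware) are pruned before dispatch. Every name below is hypothetical.

```cpp
// Hypothetical CPU side of a dynamic-branching material system: build a flag
// word per material that the shader reads as a constant, instead of baking
// each combination into its own permutation.
#include <cstdint>

enum EMaterialFeature : uint32_t
{
    Feature_Shiny           = 1u << 0,
    Feature_Rough           = 1u << 1,
    Feature_Emissive        = 1u << 2,
    Feature_NvExtensionPath = 1u << 3,  // only meaningful on NVIDIA hardware
};

struct FMaterialBranchFlags
{
    uint32_t Flags = 0;
};

// Decided at run-time per material, so a feature toggle in the editor shows
// its cost and visual change immediately, without a shader recompile.
FMaterialBranchFlags BuildBranchFlags(bool bShiny, bool bRough, bool bEmissive, bool bIsNvidiaGpu)
{
    FMaterialBranchFlags Out;
    if (bShiny)       Out.Flags |= Feature_Shiny;
    if (bRough)       Out.Flags |= Feature_Rough;
    if (bEmissive)    Out.Flags |= Feature_Emissive;
    if (bIsNvidiaGpu) Out.Flags |= Feature_NvExtensionPath; // pruned entirely on other vendors
    return Out;
}
// The flags would be uploaded as a shader constant; unused paths cost a
// predictable branch rather than a whole extra permutation.
```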

Is it even possible to consider a PS4 or Xbox One version of the game, in light of Jedi: Survivor being announced for last-gen consoles?

Absolutely not. There's not a version of Lumen that works on last-gen, even in software. If someone drove up with a dump truck full of cash [and said] "we want you to rip apart all of your levels, make it work with baked lighting, and dumb down all of your textures so that you can fit in the last-gen console memory footprint"... that would [have to] be a big dump truck. And that's after Joe and Julia and I went on our tropical vacation for six months - then we would come back and I would make your last-gen port. I mean, you're essentially asking: can we rebuild the entire game, turning off a bunch of key features, and cut our art budget down to a quarter?

Unreal Engine 5's cutting-edge features debuted - inevitably - in Fortnite, and here's how they looked at launch. Watch on YouTube

Absolutely. I would say at Digital Foundry... we haven't seen enough next-gen releases, especially being three or four years into the generation. We're big proponents of 60fps play, but for the console versions, is a higher quality or higher resolution 30fps mode on the horizon?

Mark Maratea: We basically do that for the user; we have non-combat spaces where we have traded frame-rate for visual fidelity... you're going to get areas that dip a bit, [with] no combat there [and] they're going to look extra gorgeous. They're already in quality mode. Then as you start going into areas that are combat spaces, we begin to trade off - there are fewer dynamic lights, there is less unique stuff, we start removing ancillary characters that may show up on an RTX 4090 to maintain frame-rate.
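To sketch the kind of trade-off being described - purely as an invented example, not the game's actual systems - a per-space budget might cap dynamic lights and ancillary characters more aggressively in combat areas than in quiet ones, and more aggressively still on lower-end hardware.

```cpp
// Hypothetical per-space quality budget: non-combat spaces spend on fidelity,
// combat spaces trim extras to hold frame-rate. Numbers are made up.
struct FSpaceBudget
{
    int MaxDynamicLights;
    int MaxAncillaryCharacters;
};

FSpaceBudget BudgetForSpace(bool bCombatSpace, bool bHighEndGpu)
{
    if (!bCombatSpace)
    {
        return {64, 24};                      // quality-mode behaviour, dips tolerated
    }
    return bHighEndGpu ? FSpaceBudget{32, 12} // a top-end GPU keeps more of the extras
                       : FSpaceBudget{16, 4}; // consoles/min spec trim to hold 60fps
}
```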

Joe [Hall], is there anything you'd like to say about the VFX that you were really proud of, or something that you got into the game that you didn't think was going to be possible?

Joe Hall: Bringing colour back. Bringing colour back to combat, in a grounded way but also magical. This is definitely something. Keeping [the game] at 60fps, pushing for high-fidelity, high-quality moment to moment is something to be proud of. I'm proud of the team and all the effort they put in, how far we've pushed the bar. I'm definitely thankful that UE 5.1 exists. It's exciting for the players to jump in on!
