
Digital Foundry vs. E3: Microsoft

A technical look at yesterday's games and announcements.

While Tomb Raider was a revolution compared to its predecessors, the Mass Effect 3 demo put us on much more familiar territory. As usual, Unreal Engine 3 powers the visuals, but clear upgrades are apparent in terms of the lighting. Despite BioWare's claims that the PS3 version of ME2 runs on the same newer tech, the look of the game as seen in the demo was very different - it may simply be the case that the developer is using the newer light shaft and Lightmass technologies that Epic has added to the core engine in recent times.

Kinect appeared to be the focus, however, with the player able to issue squad-based commands during combat via voice control. Kinect character interactions were also showcased with Mass Effect's multiple choice-style conversation system. Instead of selecting Shepard's response, you say it, with your words picked up by Kinect's multi-array mic and processed accordingly.

While there were some notable tech enhancements in Mass Effect 3, specifically in terms of lighting, Kinect voice control appeared to be the focus of the presentation.

In truth, it was difficult to see this as more than a gimmick. Voice controls typically don't enhance gameplay: they are slower, more ponderous and far more prone to error than the existing pad-based controls, and in the case of Mass Effect 3 there is an obvious disconnect between the player's voice and the in-game speech. As regards voice control in general, you have to wonder whether it really requires Kinect - don't Xbox 360s ship with headsets? Voice control as a concept is hardly new, either. What would have been really impressive would have been an ultra-high-detail capture of your face as part of the character customisation element we usually find in the Mass Effect games.

This segues nicely into what turned out to be the main thrust of Microsoft's pitch this year - that Kinect has a place with hardcore gamers. The platform holder followed up the Mass Effect 3 demo with a Ghost Recon: Future Soldier showcase. The action kicked off with a pre-rendered trailer that looked to have been based on the in-game engine, albeit in full-on "bullshot" mode, and then some.

Following this, Ubisoft revealed how it is using Kinect to make weapons customisation far more intuitive, with what looked like a fast and responsive gesture-based system - almost, dare we suggest it, akin to a Minority Report-style interface with added voice control. However, the in-game implementation of Kinect looked very disappointing and a perfect example of how the system can only add latency to an action game: aiming is achieved by tracking the player's left hand, shooting occurs when the player opens his fist, and the sniper scope is engaged by raising a hand into the air.

In a presentation that went on to talk about how Kinect precision has improved, Ghost Recon: Future Soldier only seemed to emphasise the shortcomings of the technology for core games. Later on, Kudo Tsunoda would talk about finger tracking being used to detect a player pulling a trigger, but there was still little actual evidence that this is possible, and Ghost Recon's hand-tracking seemed to demonstrate that relatively large movements are required... bad for latency and hardly immersive.

Movement in Kinect Star Wars seemed to be on rails in a lot of respects. Maybe it was down to the processing required to drive the big screen, but latency between player and action also appeared to be rather noticeable.

It was also unclear how the player was supposed to move through the environment - only aiming and shooting were shown, and Star Wars appeared to suggest that traversal is mostly on-rails. Fable's exploration looked rather non-interactive in the demo too, though Peter Molyneux has stressed in subsequent interviews that Fable allows full exploration by putting you in a horse and cart, with the player navigating via the reins - an intriguing solution. (Updated: Peter Molyneux information added)

Fable definitely came across as the better game, exhibiting some imaginative uses of the Kinect interface. While Star Wars did a decent job of conjuring up gameplay renditions of the various Force powers and lightsaber play, latency definitely looked like an issue, and the game's rendering technology didn't really impress: obvious cascaded shadow maps, really noticeable screen-space ambient occlusion (SSAO), off-putting aliasing and a variable frame-rate.

Where Kinect really worked was in new reworkings of existing concepts. Dance Central 2 did exactly what it needed to do with the inclusion of simultaneous two-player gaming and the ability to import your existing songs into the new game. Kinect Disneyland Adventures seemed to be a spiritual sequel of sorts to Kinect Adventures, with the pleasing CG recreation of the theme park working nicely as a hub between the various mini-games.

Also pleasantly surprising was Sesame Street: Once Upon a Monster. Despite a teeth-grindingly fake and borderline-sinister demo from a pretend father and son, there was a feeling that Double Fine's latest could be this year's Kinectimals: certainly the rendering tech did the job with impressive style, particularly in the recreation of the Muppets themselves. While user interface interactions seemed to have some latency, playing the game itself looked fairly smooth and responsive.

Kinect Fun Labs - in effect, tech demos inspired by the PC homebrew community - proved to be one of the hits of the conference. Elements such as custom Avatars scanned in via the Kinect camera and object capture were genuinely impressive, and the finger-tracked 3D "painting" proved to be a really cool addition too.