Ubisoft's Frederic Blais is overseeing the company's Kinect output and, like Blitz Games, Ubisoft too is stepping away from the Microsoft libraries, though for different reasons.
"The process is not so complicated: we take the video signal from Natal and we put the player on-screen as a mesh in real time. This method gives smoother human animation," Blais reveals in this French-language interview.
"The other method is to take the skeleton from Natal and map it onto a 3D Avatar, but that's less smooth. The principal challenge with Natal is the video signal: we have to optimise the signal to get the best result in terms of on-screen movement."
For both of Ubisoft's titles - Your Shape and the forthcoming Michael Jackson dance game (and, we suspect, the Blitz game too) - the developers are working with the raw depth map, setting aside Microsoft's fully featured skeleton creation system in favour of a simpler model that better fits the requirements of the game itself.
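To illustrate the idea - and this is purely our own sketch, not Ubisoft's code - working from the raw depth map can be as simple in principle as treating every depth reading within a "player" band as geometry, and discarding everything else as background. The function name and the depth thresholds here are illustrative assumptions:

```python
# Hypothetical sketch of the raw depth-map approach: pixels within an
# assumed "player" depth band become vertices of a screen-space mesh;
# readings outside the band are treated as background and dropped.

def depth_map_to_mesh(depth, near=800, far=2500):
    """Turn a 2D depth map (readings in millimetres) into vertices.

    depth: list of rows, each a list of depth readings in mm.
    near/far: band assumed to contain the player.
    """
    vertices = []
    for y, row in enumerate(depth):
        for x, d in enumerate(row):
            if near <= d <= far:          # keep only "player" pixels
                vertices.append((x, y, d))
    return vertices

# Tiny 3x3 example: only the centre reading falls in the player band.
sample = [
    [3000, 3000, 3000],
    [3000, 1200, 3000],
    [3000, 3000, 3000],
]
print(depth_map_to_mesh(sample))  # [(1, 1, 1200)]
```

In practice the hardware's depth data is noisier and lower-resolution than this suggests - which is exactly the signal optimisation challenge Blais describes.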
As Blais says, the default solution provided by the platform holder is also tied closely to Rare's Avatar system: if a developer wants to use skeletal data to recreate player movements on-screen, the visual representation has to be in the form of an Avatar. If the game-maker wants to re-map the skeleton onto another 3D object, essentially the only solution is to "go it alone" and use a bespoke solution.
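The "go it alone" remapping amounts to re-targeting tracked joint positions onto the bones of a bespoke character rig. A minimal sketch of the concept - the joint names, rig bone names and data layout here are all our own illustrative assumptions, not Microsoft's actual API:

```python
# Hypothetical re-targeting table: tracked skeleton joints -> the bones
# of a custom (non-Avatar) character rig. Names are illustrative.
KINECT_TO_RIG = {
    "ShoulderLeft": "L_UpperArm",
    "ElbowLeft": "L_Forearm",
    "HandLeft": "L_Hand",
}

def retarget(skeleton_joints):
    """Map tracked joint positions onto the custom rig's bones.

    skeleton_joints: dict of joint name -> (x, y, z) position.
    Joints with no mapping for this rig are simply dropped.
    """
    return {
        KINECT_TO_RIG[name]: pos
        for name, pos in skeleton_joints.items()
        if name in KINECT_TO_RIG
    }

frame = {
    "ShoulderLeft": (0.2, 1.4, 2.1),
    "ElbowLeft": (0.3, 1.1, 2.0),
    "Head": (0.0, 1.7, 2.1),  # no mapping on this rig, so ignored
}
print(retarget(frame))
```

A shipping game would also need to convert positions into bone rotations and handle differing limb proportions between player and character - which is why this is non-trivial engineering rather than a drop-in.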
Back in Face-Off 27, we talked about the fact that Tiger Woods PGA Tour 11 features Move support but thus far, no Kinect compatibility has been announced. It's an interesting example of how something you would assume to be a "no-brainer" actually poses a huge technical challenge for the new Microsoft control system, necessitating a level of engineering far beyond that of the forthcoming PlayStation Move patch.
First up, PGA Tour 11 relies heavily on its own avatar system. Sure, you can "be" Tiger Woods or any other major golfer, but there's also a powerful character creation system that is obviously completely divorced from Microsoft's own. While Move support is a fairly straightforward "drop-in", to really work well, EA Tiburon would need to develop its own system for remapping the skeleton data onto its own in-game characters. More than that, new animation systems would be required for the realisation of full 1:1 body motion. Move, on the other hand, can be grafted in with no such upgrades.
In short, there's no easy way to add Kinect support to some existing titles, but the potential for the system in a new PGA game is remarkable - golf is as much about the stance as it is about the swing, and Kinect can obviously provide a level of fidelity in that regard that none of the competing motion control systems can get close to.
Going it alone with custom data interpretation may well be required for other scenarios too. Kinect is a 3D camera, but it can only acquire 3D data from one perspective. If parts of the body are obscured for any length of time it obviously loses the ability to track them, and a golfer typically stands side-on to the camera, giving an effective silhouette to work from with interpolation filling in the rest. So, again, PGA Kinect would require extensive, bespoke engineering.
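To give a flavour of that interpolation problem, here is a deliberately simplified sketch - our own, under stated assumptions, and nothing like a production tracker: when a joint is occluded for some frames, estimate its position linearly between the last and next known readings, or hold the last known value at the end of a gap.

```python
# Illustrative gap-filling for an occluded joint: linear interpolation
# between known readings, hold-last-known at the edges. A real tracker
# would use motion models and confidence weighting, not this.

def interpolate_gaps(track):
    """Fill None entries in a 1D position track by linear interpolation."""
    filled = list(track)
    known = [i for i, v in enumerate(filled) if v is not None]
    for i, v in enumerate(filled):
        if v is None:
            prev = max((k for k in known if k < i), default=None)
            nxt = min((k for k in known if k > i), default=None)
            if prev is not None and nxt is not None:
                t = (i - prev) / (nxt - prev)
                filled[i] = filled[prev] + t * (filled[nxt] - filled[prev])
            elif prev is not None:
                filled[i] = filled[prev]   # hold last known position
            elif nxt is not None:
                filled[i] = filled[nxt]
    return filled

# Elbow x-position over five frames; frames 1-2 occluded by the body.
print(interpolate_gaps([0.0, None, None, 0.9, None]))
```

The difficulty, of course, is that during a golf swing the occluded limbs are precisely the fast-moving ones, where naive interpolation is at its weakest.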
This segues quite well into the other Kinect hot potato - whether it can handle gameplay when the player is seated. Microsoft has been quite careful in the way it has handled questions along these lines but it's clear that the skeleton recognition system will have issues with players sitting down.
When you see how the depth map works, it's obvious that it is going to be an issue. There is a default "shape" to the human form when it is standing that is easily trackable, but on the couch, it's a different matter entirely. Sitting up straight, slouching, leaning on a cushion, sitting at a desk with your legs completely obscured - all of this adds a huge number of variables to the skeletal recognition system. Just the presence of something like an armchair (depending on its size) is going to introduce close-proximity depth data that the Kinect software is going to need to process.
Our sources say that Microsoft is working on a solution for this, but that it's a work in progress: as Blitz's Andrew Oliver says, you can expect improved performance and basic upgrades in the system's capabilities as coders get to grips with the data, and as Microsoft rolls out improvements to its SDK.
In the here and now, Microsoft has confirmed unambiguously that entertainment elements of the front-end - for example, playing a movie or navigating the dashboard - can be achieved while seated. In these scenarios it doesn't have to track multiple limbs, only one hand/arm, presumably outstretched and thus easier to detect.
The level of customisation available to developers also means that the various CPU usage figures being bandied about can't be taken as gospel either. They will vary on a game-by-game basis, so it comes as no surprise to learn that we have a range of very different statements coming from very reliable sources.
This week, CVG quoted Ubisoft's Frederic Blais refuting claims that Kinect soaks up an entire core's worth of CPU power: "That's not true at all. I don't really know how much I can talk about it but it's less than one per cent [of the CPU's power], or something like that."
On the other hand, we have one of the key technical architects from Microsoft, Alex Kipman, telling New Scientist magazine that Kinect uses 10-15 per cent of the system's power.
The truth is that for most Kinect titles, Kipman's figures are closer to the money. Two threads of a single Xbox 360 core are used, but only a relatively small percentage of that core's available CPU time is consumed, and the actual amount of system resources - in terms of both processor cycles and RAM - depends entirely on the type of game being made and the Kinect capabilities the developer is using (in point of fact, a small percentage of GPU resources is also consumed).
Using the Avatar skeletal tracking system incurs load, and the chances are that tracking two players increases processor usage still further. Similarly, if the developer uses the RGB camera feed in alignment with the depth map (a process called "registration"), this too adds to the burden. Kinect has a large range of capabilities built into the SDK, voice recognition being another powerful tool. These are all modular in nature - the more of these modules the developer uses, the higher the load, and the fewer resources are available for other in-game elements.
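The modular cost model above can be sketched very simply - with the heavy caveat that every number here is our own illustrative placeholder, not an official figure from Microsoft or any developer:

```python
# Purely illustrative cost model: each enabled Kinect capability adds
# its own share of the CPU budget. None of these percentages are
# official; they exist only to show how module costs accumulate.
MODULE_COST = {           # hypothetical % of one core's time
    "skeleton_one_player": 6.0,
    "skeleton_second_player": 4.0,
    "rgb_depth_registration": 3.0,
    "voice_recognition": 2.0,
}

def kinect_cpu_load(enabled_modules):
    """Total CPU share consumed by the enabled Kinect modules."""
    return sum(MODULE_COST[m] for m in enabled_modules)

load = kinect_cpu_load(["skeleton_one_player", "rgb_depth_registration"])
print(f"{load}% of core")  # 9.0% of core
```

This also squares the conflicting quotes: a game using almost none of the modules can plausibly report near-zero overhead, while a title leaning on several of them lands nearer Kipman's 10-15 per cent.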