Darren wrote: This seems to be another nVidia thing (I'd guess), but they obviously haven't heard of this limitation (at least for the unlit buffer, and from a quick play with the benchmark maybe the lighting buffer too; the DoF is definitely not anti-aliased). I tried the in-game edge-detect AA and it didn't seem to like showing me frames, lots of them anyway (modern card, 1080p monitor, I have expectations), and worse, it didn't do much decent AA either. It also seemed to do nothing about the aliasing on the DoF areas (possibly they're getting a free blur by rendering those at double-size pixels, which makes the aliasing so much more obnoxious).
The AA in Mafia 2 seems to be an edge-detect blur filter, akin to what was used in PC GTA IV when you press P on the keyboard, so it has the effect of making the image look softer/blurrier without actually anti-aliasing anything.
Had the game used DX10 or 11 then we might have had AA, but DX9 + deferred lighting means it's yet another PC game without proper native MSAA. Disappointing, but by no means a game killer if you're playing at high resolutions anyway. Hopefully there'll be a way to force AA in the full game once newer drivers are released.
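For anyone curious what an edge-detect blur "AA" actually does, here's a minimal sketch of the general idea (my guess at the technique, not the game's actual shader): find pixels with a strong luminance gradient and blur only those, which softens edges without resolving any extra samples. The threshold value is an arbitrary assumption.

```python
import numpy as np

def edge_detect_blur(img, threshold=0.1):
    """Crude post-process 'AA': blur only pixels that sit on detected edges.

    img: float32 array (H, W, 3), values in [0, 1].
    threshold: hypothetical luminance-gradient cutoff for calling a pixel an edge.
    """
    # Luminance for edge detection (Rec. 601 weights).
    luma = img @ np.array([0.299, 0.587, 0.114], dtype=img.dtype)

    # Central-difference-ish gradients, padded so the shape is preserved.
    gx = np.abs(np.diff(luma, axis=1, prepend=luma[:, :1]))
    gy = np.abs(np.diff(luma, axis=0, prepend=luma[:1, :]))
    edges = (gx + gy) > threshold  # boolean edge mask

    # 3x3 box blur via padded neighbour averaging (no SciPy needed).
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    blurred = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= 9.0

    # Replace only the edge pixels with their blurred version.
    out = img.copy()
    out[edges] = blurred[edges]
    return out
```

Note what this can't do: a half-covered pixel never gets the true coverage-weighted colour MSAA would give it, just a smear of its neighbours, which is exactly the softer/blurrier look described above.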
Told the driver to force the issue, turned the in-game setting off (incidentally, the demo kept forgetting my video settings; not sure if they locked the settings file or something weird) and cranked up the driver-enforced AA. Still not perfect, but most of the scene is getting proper AA and the frame-rate hit is a lot lighter than with the in-game setting.
I also noticed that medium shadows don't look significantly different to high ones during the benchmark, yet shadows were the biggest change to my framerate other than PhysX (medium adds so little shrapnel as to make it rather dull and pointless, and high is too costly for me to benchmark above 30fps). Then again, I've got a C2D E8600-8700 going along with my GTX 470, so maybe Mafia II really wants at least 4 threads to do anything interesting with debris clouds. PhysX may say "GPU required" on the tin, but there's a lot of CPU bottlenecking from what I've seen in many situations. They really need to work on a next-gen driver that only offloads to the GPU the work the GPU will do significantly faster, and recode the CPU path to run at a decent speed. Ideal world: auto-detect the relative speed of CPU and GPU and move just the right amount of PhysX work to the GPU to even out the load. Ideal world v2.0: do it on the fly, dynamically, to maximise framerates.
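The "ideal world v2.0" balancer above is easy to sketch: measure how long each processor took on its share last frame and nudge the split towards whichever one finished first. Everything here is hypothetical (names, step size, the toy timing model), just an illustration of the feedback loop, not anything PhysX actually does.

```python
def rebalance(gpu_share, cpu_ms, gpu_ms, step=0.01):
    """One step of the hypothetical dynamic balancer: shift the fraction of
    physics work on the GPU towards whichever processor finished sooner.

    gpu_share: current fraction of the work sent to the GPU (0..1).
    cpu_ms / gpu_ms: measured time each processor spent on its share last frame.
    """
    if gpu_ms < cpu_ms:
        gpu_share += step   # GPU finished first: give it more work
    elif cpu_ms < gpu_ms:
        gpu_share -= step   # CPU finished first: pull work back
    return min(1.0, max(0.0, gpu_share))

def simulate(cpu_speed, gpu_speed, frames=200, total_work=100.0):
    """Run the balancer against a toy fixed workload. Since the frame is gated
    on the slower of the two, the split converges to where both finish together:
    roughly gpu_speed / (gpu_speed + cpu_speed) of the work on the GPU."""
    share = 0.5
    for _ in range(frames):
        gpu_ms = share * total_work / gpu_speed
        cpu_ms = (1.0 - share) * total_work / cpu_speed
        share = rebalance(share, cpu_ms, gpu_ms)
    return share
```

With a GPU three times faster than the CPU, the split settles near 75% GPU, which is the point where neither side sits idle waiting on the other.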