We were recently lucky enough to get our hands on a pre-release Kyro board for some testing here at EuroGamer. It wouldn't be fair to attempt to review it at this stage though, as the chip wasn't running at full speed (115MHz rather than the expected 120/125MHz), and the board itself does have a few issues that make it a little unstable. But it was enough for us to get a feel for how the card may stack up against the opposition...

PowerVR explained

PowerVR works more efficiently by not rendering any unseen pixels, whereas traditional architectures render a great deal of information which won't ever reach your monitor. These unseen pixels are referred to as 'overdraw', and avoiding them forms the basis of the Kyro chip's effective fill rate. In order to avoid rendering these useless pixels, a display list is created before the scene is rendered. This sorts everything in back-to-front order, and applies an algorithm to determine which surfaces are obscured by others. The rendering phase can then be applied to only the visible pixels. An added benefit of this approach is a reduction in the number of memory reads/writes which are needed. As a result, bandwidth requirements for the Kyro are greatly reduced, which means that much cheaper SDRAM can be used, as opposed to the more expensive DDR memory found on GeForce 2 graphics cards. The preview board came loaded with 64Mb of memory, but retail boards will probably ship with only 32Mb, as that will be more than sufficient for the Kyro.

Performance

The real question then is how does it perform? Well, having run it through a few tests on a Pentium III 600 system, the Kyro appeared to perform extremely well, and certainly provided some serious competition to the Voodoo 5 5500 and a GeForce DDR. If you want to see just how much competition the Kyro provided, take a look at the benchmark results below. As we can see from the 3D Mark 2000 scores, the Kyro is a more than capable accelerator.
It gives the Voodoo5 5500 a hard time, beating it at everything other than 1024x768. This isn't too surprising, as the Kyro driver is still rather immature and is potentially holding the chip back a little. Also bear in mind that this particular preview chip is running slower than the expected speed of the retail board. Admittedly the Kyro lags behind the GeForce DDR, but that's hardly a disgrace considering that it is virtually the fastest card that you can buy at the moment. The Quake 3 numbers also show that at low resolutions the Kyro is capable of churning out higher frame rates than the Voodoo5 5500. Once again though, at higher bit depths the Kyro's performance suffers a little. Both the GeForce SDR and DDR seem to be quite capable of beating the Kyro in the Quake 3 test, but with the Kyro3D marking VideoLogic's first attempt at a full OpenGL ICD (Installable Client Driver), it probably suffers from a greater level of immaturity than the other drivers.

Conclusion

The Kyro is an exceptional part, even in this extremely early 'preview' level state. I have always been an advocate of the PowerVR approach, but have been disappointed by the lacklustre performance of the actual shipping products in the past. From these early results it's good to see that the alternative architecture is finally proving itself, and proving itself well. Hopefully with better drivers and a higher clock speed the Kyro3D should get even faster. And if 3dfx's acquisition of GigaPixel is anything to go by, there seems to be a great deal of interest in 'tile-based renderers', which can only bode well for the PowerVR way of doing things. Maybe 2000 will be the year it finally takes off...
- 3dfx
Price - £250

Voodoo Curse

3dfx have kept us waiting for over 18 months for their latest products. Probably nobody outside of 3dfx will ever know why it took them so long, but thankfully they have now been released. The real question is whether it was worth the wait, and will it push 3dfx back on to the top of the performance pile? All of 3dfx's hard work has gone into producing the VSA-100 chip, formerly code-named Napalm. The VSA stands for Voodoo Scalable Architecture, with the 100 probably indicating that it is the first generation of the chip - I wouldn't be surprised to see VSA-150 and 200 at some point... So what does 3dfx's new chip offer? Well, it offers everything that was found on the old Voodoo 3, but also adds 32 bit colour rendering, Full Scene Anti-Aliasing, T-Buffer effects, and texture compression support through FXT1 and DXTC. Perhaps the most interesting fact about the VSA-100 chip is that it is possible to put several chips on a single board to increase performance, and the Voodoo 5 5500 features two of them. This is reminiscent of running two Voodoo 2 boards in what was termed SLI, and seeing that such a rig offered the best performance of its day, it is no surprise that 3dfx have labelled their Voodoo 5 5500 as a "64Mb Dual-Chip SLI 2D/3D Accelerator". Briefly, SLI (Scan Line Interleave) makes it possible to virtually double performance by simply adding an extra chip. The basic premise behind SLI is that one chip will draw the odd lines on the screen (1,3,5,7 etc) and the other will draw the even lines (2,4,6,8 etc). This leads to a doubling of the card's fill rate, which is directly linked to the maximum achievable frame rate, and hence in-game performance.

Beauty And The Beast

If there was a prize for the most ludicrous graphics card design, 3dfx would probably win it - the Voodoo 5 5500 is truly a monster.
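The odd/even scan-line split behind SLI is easy to sketch. The snippet below is a hypothetical illustration of the workload division, not 3dfx's actual hardware logic:

```python
# Scan Line Interleave (SLI): split a frame's scanlines between two chips.
# Chip 0 draws the odd lines (1, 3, 5, ...), chip 1 the even lines (2, 4, 6, ...),
# so each chip renders half the pixels and the card's fill rate effectively doubles.

def assign_scanlines(height):
    """Map each 1-based scanline of a frame to the chip that draws it."""
    return {line: 0 if line % 2 == 1 else 1 for line in range(1, height + 1)}

lines = assign_scanlines(480)                    # e.g. a 640x480 frame
chip0 = [l for l, c in lines.items() if c == 0]  # odd lines
chip1 = [l for l, c in lines.items() if c == 1]  # even lines

assert chip0[:4] == [1, 3, 5, 7]
assert chip1[:4] == [2, 4, 6, 8]
assert len(chip0) == len(chip1) == 240           # workload split exactly in half
```

Because each chip touches only half the scanlines, the split scales naturally: a four-chip board like the Voodoo 5 6000 simply hands each chip every fourth line.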
With its two VSA-100 chips, 64Mb of memory, two fans, and a power supply which requires a separate connector to fulfil its needs, the whole card measures 9 1/2 inches (or 24.5cm if you prefer). The last quarter of the board has a few hefty components on it that help to regulate the power supply to the board, most likely for the two fans that keep the VSA-100s cool. On the hardware side of things, the two VSA-100 chips are supported by 64Mb of SDRAM. The clock speed of the board is 166MHz, which provides a fill rate of 333Mtexels/sec per chip. In the case of the 5500 board with its dual chips, this gives a total fill rate of 667Mtexels/sec. This is quite impressive when compared to the 480Mtexels/sec of the original GeForce, but when compared to the new GeForce 2 GTS card's 1600Mtexels/sec it seems somewhat lacking. But then, that's why 3dfx are also creating a four chip Voodoo 5 6000...

Mr T Buffer

One of the major selling points of the VSA-100 is its support for full scene anti-aliasing (FSAA). 3dfx utilise a two or four sample method, which causes the scene to be rendered either two or four times (according to how you set it up), with each rendered frame being slightly 'jittered' compared to the rest. The idea is that by using a slightly offset group of images for the same frame, it is possible to lay them on top of each other to calculate the best anti-aliasing, reducing jagged lines and other rendering artifacts. There is one significant drawback to this though - it eats fill rate, and the card's performance can easily and quickly be consumed by the needs of FSAA. Without FSAA the card's fill rate is 667Mtexels/sec, but with a two sample anti-alias this is effectively halved to 333Mtexels/sec, as you are rendering two separate scenes and then combining them. Utilising four samples reduces this to a very disappointing 166Mtexels/sec.
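That trade-off is simple division: every extra FSAA sample is another full rendering of the scene, so the usable fill rate is the base figure divided by the sample count. A quick sketch using the numbers quoted above:

```python
def effective_fill_rate(base_mtexels, samples):
    """Fill rate left over after rendering the scene `samples` times for FSAA."""
    return base_mtexels // samples

VOODOO5_5500 = 667  # Mtexels/sec from two VSA-100 chips at 166MHz

assert effective_fill_rate(VOODOO5_5500, 1) == 667  # FSAA disabled
assert effective_fill_rate(VOODOO5_5500, 2) == 333  # two-sample FSAA
assert effective_fill_rate(VOODOO5_5500, 4) == 166  # four-sample FSAA
```

Four-sample FSAA leaves the 5500 with barely a third of the original GeForce's raw fill rate, which is exactly the performance hit discussed below.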
As a result of this massive performance hit, 3dfx have said that FSAA is not something that players of 'twitch' style games such as Quake 3 Arena would want. But FSAA is appropriate for games where super-high frame rates are not required but visual detail is important, as 3dfx have shown with their many shots of Homeworld. Full scene anti-aliasing is done in the T-buffer, which also allows for effects such as soft shadows, depth of field and motion blur, which 3dfx place under the broad title of "digital cinematic effects". Having seen demonstrations of the T-buffer effects one can't help but be impressed, although I'm not sure whether I actually want any of them in the games I play, especially if they're going to create a serious performance hit. And, unlike FSAA, the other T-buffer effects must be added to games by the developers. As no other card currently supports them, they are unlikely to be widely used.

Performance

Performance-wise the Voodoo 5 5500 is a little disappointing. It is certainly a great deal better than 3dfx's previous outings, but compared to today's fastest performers such as the GeForce 2 it is looking somewhat slow. Perhaps 3dfx will be able to do something about this with more efficient drivers. One thing that is impressive is 3dfx's shiny new OpenGL ICD. In the past there was some trouble with games that relied on OpenGL, such as Quake 3 Arena, and it would take a little bit of tweaking to obtain acceptable results. Thankfully the new drivers are perfect and run without problems.

Conclusion

The Voodoo 5 5500 is an interesting product. It is certainly a world better than the Voodoo 3, and 32 bit colour is a rather overdue but gladly received addition. Sadly though it is still suffering from sluggish performance (relatively speaking), and if you activate FSAA performance will only get worse. The card itself is quite a beast, but it looks tougher than it really is.
At the current prices it is hard to justify spending so much on the Voodoo 5 5500 when one could purchase a GeForce 2 GTS based card for the same price, if not less. I can certainly see the Voodoo 5 5500 finding some fans, especially amongst players of games that run better under 3dfx's old Glide API, such as Unreal Tournament, Starsiege Tribes and Quake 2. Likewise for games where fill rate isn't so important, such as 3D adventure, role-playing and strategy games, where the extra visual quality that the Voodoo 5 5500 and its full-scene anti-aliasing can offer will be welcome. My biggest hope is that 3dfx will improve their drivers and lower prices to make the Voodoo 5 5500 even more competitive, as until they do it is a little under-performing and over-priced compared to other companies' offerings.

8
Think of 3D graphics chips, and those most likely to spring into your head are things like the GeForce and Voodoo. These are some of the real heavy-weights in the performance arena. There's one name though that usually gets forgotten about by most - PowerVR. And there's a reason for that... The PowerVR technology has not been too impressive in its previous outings, and the question was, could PowerVR ever be quick enough to rival the big boys? The PowerVR by nature lends itself to being a cheap chip to produce, which makes it attractive at the lower end of the market. Realistically though, the hardcore gamers are only interested in one thing - big numbers. With the recent announcement of PowerVR's latest incarnation, the Kyro3D, can we expect to see performance more befitting the "on paper" specifications?

Overdrawn?

The PowerVR architecture doesn't work in a conventional manner, as David Harold (top PR guy at Imagination Technologies) will be sure to tell you, should you ever speak to him. Instead of wasting clock cycles in needless memory reads/writes, and eating up fill rate by drawing and texturing pixels which you won't ever see, PowerVR's architecture only draws the pixels that you can see. This common problem of rendering hidden pixels is called overdraw. Conventional graphics cards try to overcome this problem by performing a "Z-sort", which eliminates some of the redundant information. But it isn't perfect, and still leaves substantial overdraw. PowerVR chips utilise a fairly hefty depth sorting algorithm, which creates a display list of what is actually going to reach your screen at the end of the day. In this way the need for a conventional Z-Buffer is eliminated. In creating this display list the scene information must be depth sorted which, as I am sure you can imagine, is quite some task, especially when you are asking for 60 frames per second or better.
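A toy model makes the difference concrete. This sketch assumes nothing about PowerVR's actual algorithm; it simply contrasts a conventional renderer, which shades every submitted fragment, with a deferred one that depth-sorts each pixel first and shades only the front-most fragment:

```python
# Each pixel receives a list of (depth, surface) fragments; lower depth = nearer.
frame = {
    (0, 0): [(5.0, "wall"), (2.0, "crate"), (9.0, "sky")],
    (0, 1): [(9.0, "sky")],
    (1, 0): [(5.0, "wall"), (9.0, "sky")],
}

# Conventional: every fragment gets textured and shaded, visible or not (overdraw).
conventional_work = sum(len(frags) for frags in frame.values())

# Deferred: sort each pixel's fragments by depth, shade only the nearest winner.
visible = {px: min(frags)[1] for px, frags in frame.items()}
deferred_work = len(visible)

assert conventional_work == 6        # six fragments shaded in total
assert deferred_work == 3            # exactly one per pixel
assert visible[(0, 0)] == "crate"    # the nearest fragment wins
# Overdraw factor here is 6 / 3 = 2: the conventional card did twice the work.
```

The sorting cost in this model is the `min` over each pixel's fragment list; the pay-off is that shading work scales with visible pixels rather than submitted ones.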
Despite this time-consuming task, by drawing only the visible pixels described in the display list the chip renders so quickly and efficiently that it more than makes up for the sorting overhead.

Bandwidth

Another key benefit of the Kyro is the bandwidth requirement between the chip and the memory. As NVIDIA's GeForce has recently demonstrated, the ability to quickly transfer information between the chip and the memory is essential, and so memory bandwidth becomes extremely important when trying to achieve very high fill rates. The Kyro requires far less bandwidth in order to get the same job done. Whereas conventional chips perform many reads and writes between the memory and the chip in order to fully render a scene, because of its depth sorting the Kyro doesn't need to read and write as much, and as a result the requirement for memory bandwidth is greatly reduced. So what about the fill rate then? The Kyro's specs have yet to be finalised exactly, which makes it hard to say for sure what the exact fill rate will be. From what I have seen we can expect to see the chip running at between 125 and 150MHz. With two texture pipelines this translates to a base fill rate of 250-300Mtexels/sec. I stress that this is base fill rate though, as thanks to eliminating overdraw the Kyro will be able to achieve an effective fill rate that is substantially higher. With the Neon250 they were touting an effective fill rate of 250Mtexels/sec, working on the idea of a depth complexity of 2 (in other words, it only needs to render half as many pixels to achieve the same effect as a normal graphics card). With increased detail in games, and therefore the need for more rendering passes, the average overdraw is now being listed as 3, which would give the Kyro an effective fill rate of 750-900Mtexels/sec. An impressive number I'm sure you will agree.
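Those effective figures are just the base fill rate multiplied by the assumed overdraw factor. A quick check of the arithmetic, using the clock and pipeline numbers quoted above:

```python
def base_fill_rate(clock_mhz, pipelines):
    """Raw texels per second the chip can produce, in Mtexels/sec."""
    return clock_mhz * pipelines

def effective_fill_rate(base, overdraw):
    """Fill rate a conventional card would need to match a chip that skips overdraw."""
    return base * overdraw

assert base_fill_rate(125, 2) == 250        # low-end clock estimate, two pipelines
assert base_fill_rate(150, 2) == 300        # high-end clock estimate
assert effective_fill_rate(250, 3) == 750   # assumed average depth complexity of 3
assert effective_fill_rate(300, 3) == 900
```

The obvious caveat, as with the Neon250's quoted figures, is that the multiplier only holds if the average scene really does carry that much overdraw.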
Of course, we have seen this kind of optimism in the past with previous generations of the technology, but from what I have seen it does look a little more promising this time round...

Features

Kyro has some other rather appealing features too, not least of which is support for Environment Mapped Bump Mapping, or EMBM for short. This was first pioneered by Matrox with their G400 chip, and it provides a very impressive effect. Unfortunately it wasn't widely used due to the lack of hardware support, but with ATI's Radeon 256 and now the Kyro supporting this feature as well, we may now see more EMBM enhanced games. Another feature that is worth noting is support for eight layer multi-texturing. Other chips make a claim to this feature, but the Kyro actually does it. The chip also runs at 32 bit internally, which means that it should yield good 32 bit colour performance. Full Scene Anti-Aliasing is also supported, and looks good enough to rival that of 3dfx's Voodoo 5. Technically the PowerVR architecture has supported this feature for some time, but now that it is becoming a major feature to have, Kyro is showing off its true potential. On the memory front, Kyro can support up to 64Mb of SDRAM, and potentially even more. It is most likely that the majority of boards will sport the standard 32Mb though, not only to keep costs down, but also because Kyro doesn't see a huge benefit from the increased memory.

Performance

I must say that the Kyro has impressed me a great deal, and it seems to be able to offer reasonable performance at a low price. The tests we ran while at Imagination Technologies (formerly Videologic) put the Kyro up against an SDR GeForce, and it was easy to see that the Kyro was staying comfortably ahead. I'm going to refrain from posting any actual numbers, as it wouldn't be fair to either card as I had no real control over the test systems, and the Kyro's drivers were not final.
Suffice it to say, though, that for the price you are going to have to go a long way to beat the performance of the Kyro. Kyro will be one hell of a product, and while it probably won't break any speed records it will still be a real alternative to the usual NVIDIA and 3dfx products, particularly for people on a budget. Another thing to remember is that Kyro represents a whole family of chips, and there will be more versions of it in future, offering even greater performance. STMicroelectronics, who are building the chip, are also committed to a much shorter design cycle in future, which means that we could potentially have two new chips from them before the year is out! I can't help but give the Kyro a 'Caesar style' thumbs up recommendation. Obviously we are waiting for final boards and drivers, but as soon as we get them (and it will be very soon now) we will let you know. Stay tuned!
Until recently Intel had always been the dominant force in the desktop CPU market, especially amongst power users, as their chips have always been renowned for their high performance. Unfortunately these high performing chips carried a significant price tag. Intel's main rival, AMD, eventually began to churn out chips that were the equals of their Intel counterparts. With the inclusion of "3DNow!", AMD chips were finally proving themselves in the hardest of all performance arenas - games. And thanks to lower pricing, AMD's K6 chips were selling extremely well, posing a threat to Intel's market dominance.

Budget Power

Intel's response was to release the Celeron. These budget CPUs would turn out to be one of the most popular chips available, but not for the reasons that Intel had intended... Thanks to an impeccably high manufacturing quality and the lack of the off-die Level 2 cache found on Pentium II CPUs, the Celeron was incredibly easy to overclock. The Celeron normally ran with a 66MHz front side bus speed, but it was possible to push the chip to run at 100MHz instead. This provided a 50% increase in core clock speed, which in turn gave rise to some astounding performance increases. Of course, the early Celerons were limited by their lack of L2 cache, which seriously hampered performance as most of the data required by the CPU had to be brought in from the main memory all the time. At 66MHz this is a very slow procedure, and it hurt the chip's performance. Even at 100MHz there was a noticeable difference in speed between a Celeron and the equivalent Pentium II.

Enter The 300A

Eventually Intel managed to squeeze 128Kb of L2 cache on to the processor die itself though, producing the famous Celeron 300A processor. Thanks to this on-die cache, the performance of the overclocked Celerons was now equal to, and sometimes better than, that of similarly clocked Pentium II CPUs! This was all thanks to the smaller but much faster cache on the Celeron.
The Pentium II has 512Kb of cache that runs at half the speed of the CPU core. So at 300MHz, the cache on a Pentium II is running at 150MHz. On a 300MHz Celeron though the cache is running at the full 300MHz. This difference in clock speed, along with an increased associativity (a term that describes how data is handled by the CPU), made the Celeron an extremely good performer.

Coppermine

It wasn't until recently that Intel made changes to its processors to redress this balance. With the move from the old 0.25 micron technology to a new 0.18 micron process, it was now possible to pack more transistors into a smaller space. The Coppermine core was born, and used in the Pentium III E. This featured 256Kb of on-die L2 cache, which had a higher associativity and a wider data bus. Instead of the old 64-bit bus width, the new Coppermine cache could use a 256-bit transfer bus, which increases the efficiency of the caching process. The Celeron II has also been forged from this process, and as we saw in our article last week, it is just as overclockable as ever. As with the Celeron 300A, the front side bus speed of the new 566MHz Celeron II can be increased from the standard 66MHz to 100MHz, providing a 50% increase in clock speed, in this case to 850MHz. Strangely though, we have not seen the same kind of performance from the new Celeron II as we saw with the Celeron 300A. A Celeron II running at 850MHz can barely match a Pentium III 600E in some real world tests. Let's take a look at why this might be the case...

The Guts

The cores themselves are both forged on the same 0.18 micron process, and it has been said that the Celeron II core is in fact a Pentium III E core with half the cache disabled. This has been hinted at by Intel, and it would make a great deal of sense. Instead of requiring a new production line, they can just take chips from the existing Coppermine process, and modify them so that half the cache doesn't work.
This may seem like a waste of money, but it is far more efficient for Intel to be able to churn out one chip and modify it later, than it is for them to have two separate production lines. Okay, so they have disabled half the cache. But the associativity and bus width of the remaining cache is unchanged. It is therefore possible to say that in the majority of tests the Celeron II should perform at a level near that of the equivalent Pentium III E, especially in applications that aren't particularly heavy on the cache. With SiSoft's SANDRA benchmarking utility, the raw performance figures come out at the same levels when comparing the two chips. This particular test has no need to utilise the L2 cache on either chip, and so proves that, excluding the L2 cache, both processor cores are essentially the same. The use of "Quake 3 : Arena" as a benchmark has shown that the Celeron II is significantly slower than an equivalent Pentium III E though. This tends to imply that Quake 3 is potentially a more cache-happy application, and seems to favour 256Kb over 128Kb. Using 3DMark 2000 has also shown that there is some speed difference between the two chips, with the Celeron II overclocked to 850MHz performing at about the same level as a Pentium III 700E. Once again this shows that potentially 3DMark 2000 is happier with a larger cache.

Cache In Hand

If we move away from performance-orientated benchmarks towards diagnostic programs, we can see that despite this real world performance difference, there is virtually no difference between the two CPUs apart from the size of their L2 cache. Using the CacheMem benchmark, the following numbers were obtained - It is interesting to see here that there is very little difference in terms of cache bandwidth between the Pentium III E and the Celeron II. Both L1 and L2 caches are equally effective on the two chips.
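The shape of those CacheMem curves can be mimicked with a toy model: reads whose working set fits in the L2 cache run at cache speed, while anything larger falls back to main-memory speed. The bandwidth figures below are illustrative approximations, not CacheMem's actual output:

```python
def read_bandwidth(working_set_kb, l2_size_kb,
                   cache_gb_s=3.0, memory_gb_s=0.75):
    """Bandwidth seen by a read loop over a working set of the given size.

    A deliberately crude model: full cache speed while the data fits in L2,
    main-memory speed as soon as it doesn't.
    """
    return cache_gb_s if working_set_kb <= l2_size_kb else memory_gb_s

PENTIUM_III_E, CELERON_II = 256, 128  # L2 cache sizes in Kb

# Up to 128Kb the two chips look identical...
assert read_bandwidth(128, PENTIUM_III_E) == read_bandwidth(128, CELERON_II) == 3.0
# ...but at 256Kb the Celeron II falls off a cliff to main-memory speed.
assert read_bandwidth(256, PENTIUM_III_E) == 3.0
assert read_bandwidth(256, CELERON_II) == 0.75
```

The same step function explains the latency curves: both chips behave identically until the working set crosses the smaller chip's cache size.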
At 256Kb though it becomes very clear that, while the Pentium III E can still pull about 3Gb/sec from its cache, the Celeron II has run out of cache and must now go to the main memory on the motherboard. This explains why the read bandwidth of the Celeron II drops to about 750Mb/sec, and the number of clock cycles required to complete the operation increases to eight. Looking at the latency results, it is also very plain to see that there is no difference between the two chips until they reach the magic 256Kb mark. Both caches are operating with exactly the same latency until the Celeron II has to jump to using the much slower main memory, at which point the increase in latency is both obvious and expected.

Conclusion

It really looks like Intel have just halved the cache on the chip, and that's it. There doesn't seem to be anything more mysterious that explains the huge performance difference, at least nothing that Intel has done. And the larger the cache the better. We have recently seen that the "SETI@Home" client occupies 384Kb, which is too large for the Pentium III E cache, but not for the old 512Kb cache found on earlier chips. It is hardly surprising then that the chips with larger caches perform better at SETI, even at lower clock speeds. So does this explain why the Celeron II is slower? To some extent, yes. With programs like SETI highlighting the performance difference that exists due to cache size differences, it is entirely possible that some day-to-day apps and games also have certain minimum cache expectations, most of which seem to surpass the 128Kb of the Celeron II but not the 256Kb of the Pentium III E core. It certainly seems to be the case that the average program requires 256Kb for efficient operation. It is interesting to note the Quake 3 results though.
The original Celeron 300A, when overclocked to 450MHz, managed to rival the equivalent Pentium II in Quake 2, yet strangely the Celeron II at 850MHz can't even match a Pentium III 700E, which certainly highlights the difference in cache requirements for the two games. If this is any indication of things to come, how long will it be before 256Kb isn't enough, and today's Pentium III E processors become tomorrow's Celerons?
Publisher - Activision
System Requirements - Pentium 200 or equivalent with 3D accelerator, or Pentium II 266 or equivalent without 3D accelerator; 32Mb RAM; 685Mb hard drive space; DirectX 7 compatible graphics card; 4x CD-ROM

Thirty Year Mission

Star Trek has proven itself to be a winning formula when it comes to television and movies, with a legacy stretching thirty plus years proving testament to this fact. Why is it then, that despite the existence of the Star Trek 'universe' and its extensive 'history', we still have no really good Star Trek games? In the past we had point and click style adventures, based on both the original series and the Next Generation ("A Final Unity"). More recently we have had the 'virtual captain' style of games, with players sitting in the big chair and dishing out orders ("Star Fleet Academy"). There was "Klingon Honor Guard", a mediocre first person shooter using the Unreal engine. There has even been a little dabble with real time strategy in "StarFleet Command". But while these games have all been passable, none of them have been really good - and certainly not good enough to live up to the Star Trek legacy. And so we come to "Star Trek : Armada", an attempt to create a 3D real time strategy game set in the Star Trek universe just after the latest film, "Insurrection", which puts it right up to date in terms of Star Trek technology.

Phasers On Stun

The game offers a superb single player story line, which is told with the use of in-engine cut-scenes. These effectively get you from one mission to the next, and attempt to provide the fibre for the whole plot. Strangely the game is divided up between four major races - the Federation, the Klingon Empire, the Romulan Star Empire, and The Borg.
This makes for a unique system of play where each race only gets about six or seven missions, but with four races this does work out to be about 25 single player levels in total, which is about right for the average RTS game. There is also an extra episode which appears after the end of the Borg campaign, and must be completed in order to see the story through to its conclusion. Armada is well put together, and despite only having a few missions for each race, they are all engrossing. The cut scenes are well written and even quite entertaining on occasions.

Resistance Is Futile

Multiplayer is also covered, and offers the chance for up to eight players to duke it out in one of the 30 or so multiplayer maps supplied with the game. The online multiplayer is covered by Won.net, and they provide a reasonable service. I was able to easily set up an account from inside the game, and then connect to their servers with the minimum of hassle. Ping times weren't bad, certainly good enough for a real time strategy game. One thing that must be commented on is the AI, and the seeming lack of any "I" in it. The computer controlled ships in the single player campaign are remarkably predictable, and they also seem to utilise the same tactics ad infinitum, regardless of how successful these have proven to be. The classic example is one of the wormholes in the game. The computer may send through some ships, but instead of sending them through as a large battle group the AI makes the decision to send them one at a time. This gives you the chance to fortify the wormhole exit and destroy anything coming out, as you will never be outnumbered.

On Screen, Number One

Visually "Star Trek : Armada" is very nice, and it certainly makes good use of its graphics engine. The space is nicely textured, and contains its fair share of electrified nebulae and other such spatial phenomena.
The ship models are also extremely well done, and the developers have obviously spent a fair bit of time getting the models and textures just right. The highlight has to be the Romulan Warbirds, which shine appropriately in the cosmic aura. Sadly the Borg vessels are the worst in the game, but this is mainly due to the difficulty involved in creating accurate 3D models of the Borg ships as shown in Star Trek. The action takes place in 3D space, but there is a distinctly 2D grid overlaid on top of it. This is in order to help the player navigate his ships from the overhead view. Should you want to get closer to the action you can zoom in on the individual ships, or slip into the unusual "Director's Cut" mode. This focuses more on controlling the camera so that you can see the action going on, and there is virtually no control over your ships in this mode. It is not recommended for close call battles, but it is quite enjoyable, and this mode is sure to please any Trekkie that likes to see their battles.

Open A Channel

The sounds are also extremely well done, with samples from the actual shows and films. If you have been longing to hear the sound of phasers firing or vessels cloaking, then Armada won't disappoint - it is a veritable feast for the ears. Thankfully all the voices in the game are provided by members of the original cast as well. It's great to hear Patrick "Picard" Stewart's "Make it so", and Michael "Worf" Dorn's "Today IS a good day to die!". Even Denise Crosby makes a return as the Romulan Admiral Sela, and J.G. Hertzler reprises his role as Chancellor Martok. It's perhaps a personal thing, but the voices really do help to put you into the right Star Trek frame of mind, and add an extra level of depth to the game, certainly more so than if the voices had been provided by generic voice actors as in most games.
Final Analysis

Armada is an entertaining Star Trek game, and it is certainly a great deal better than most of those which have "boldly gone" before it. That said, as a real time strategy game it is a little lacking, particularly in the AI department. If you are a serious RTS player, perhaps Armada is one best left to the amateurs. But if you are a serious Trek head, then Armada will certainly entertain thanks to its stunning audio/visual details. One thing is certain though, Armada definitely does its heritage proud.

Release Date - available now

7
- Cryo
System Requirements - Pentium 166 or equivalent; 32Mb RAM; 4x CD-ROM drive; full duplex sound card; 28.8K modem

FireTeam

FireTeam is a multiplayer only title, putting the emphasis on getting people to play together in teams. Playing on your own isn't really possible, and certainly isn't a good idea if you want a good game. The game comes with four basic play modes -

Base Tag: Protect your own base stations while destroying your rivals' station(s)
Capture The Flag: Capture and hold the flags on the map. Hold more flags for more time to win
Gunball: A kind of American football with guns - get the ball into the opposition endzone to score
Team Deathmatch: Deathmatch with a twist. Kills in the first phase generate life tokens, these are then spent as extra lives in the final phase

While this isn't a huge list of game types, they are all quite enjoyable, and certainly the atmosphere generated by playing with your other team members makes the experience all the more fun.

Traffic Lights

Since Fireteam is designed for internet play and is aimed at all types of connection, Cryo have obviously had to work hard on their net code to ensure good results. When the game was shown at British computer games trade show ECTS last year, there were connection problems from the company's stand, which didn't bode too well. Thankfully they seem to have worked them all out. The game will initially require you to register your game on their website, at which point you create your player profile. This can then be tracked and logged on their game servers so that your stats are available for all the world to see. Teams also have their performances logged and available to view, which means that you can check out the quality of the opposition in advance! Once you are signed up, the client can be started. This connects you to the game server, and presents you with a window from which you can see what's going on and chat to other players.
The usual join or create game options exist, as does an indication of your ping in the form of a traffic light system. Green is good, orange implies you might experience some lag (which I didn't), and red indicates that it just isn't worth trying... Radio Transmission Once a game is agreed upon by all the respective players it can be started. The game will then run and connect to the client to determine what it should be doing. Once the game has started, a very special feature of Fireteam kicks in... full duplex real-time voice communication. Voice comms software for games has been around for a while, and products like BattleField Communicator and Roger Wilco have done a great deal to make real-time voice comms a definite possibility. In fact Shadow Factor (the company behind BattleField Communicator) have been bought out by Microsoft, who will use their technology in DirectX 8. Cryo have done a great job with their own voice software for Fireteam, and even with my 56k modem the quality was good and didn't really impact on the speed of the game. Thoughtfully Cryo have also included a headset in the box with the game, to ensure that anyone playing Fireteam can talk to their team-mates. Sound and Vision Seeing that Fireteam is designed with lower minimum system specs in mind, the level of graphical intensity is quite low. The game features an isometric view of the world, which does make life a little more interesting when you walk behind something, but in the main this viewpoint is okay. It does make it a little difficult to aim your character, which is why the mouse is used in a point and click style to target and shoot your enemies. Detail wise the levels aren't bad, and the regeneration pods look relatively cool. It is perhaps a little disappointing to find a game that won't really tax your expensive new Pentium III and GeForce rig, but the focus is more on gameplay than graphics, and on the gameplay front Fireteam is more than successful.
Sound is another slight disappointment, as there is no use of any 3D positional audio. In an isometric world it might have been useful to identify where sounds are coming from relative to the player, but unfortunately you're just going to have to have your wits about you. Conclusion Fireteam was extremely fun to play, even as a newbie, and I would certainly recommend that you check it out if you fancy some team based action without having to resort to the domain of the first person shooter. It's also worth trying if you don't have a high end PC and super fast net connection, as Fireteam isn't too demanding on the hardware front. One of the greatest things about the game though is the sense of community. When you start off your status is listed as new, and instead of taking flak from more experienced players, they actually help you! I didn't read the manual, which isn't the best idea, and so I didn't know how to join in the fun. Thanks to the other players, I was still playing in no time. Fireteam is great fun, and a definite case of gameplay over graphics. With low system requirements and only needing a 56k modem to achieve results, Cryo have certainly done a good job. Fireteam is one to check out if you fancy something different. Release Date - available now 8
: Dreamcast Developer : Capcom Publisher : Virgin Interactive Price : £34.99 Capcom are the guys behind one of the most popular beat 'em up games ever: Street Fighter. The Street Fighter series has been extremely popular and spawned many sequel games, and even a movie. In general the games have been really rather good and Street Fighter II on the SNES will go down in gaming history, while the movie just marks a blot on the career of some of the fine (?) actors that starred in it. With the advent of the super consoles, Capcom have suitably produced new variations on the original Street Fighter theme, the latest of which is Marvel Vs. Capcom on the Dreamcast. Throwing the First Punch The game itself is ripped straight out of the arcade, and it certainly recreates the feeling of playing a classic arcade style beat 'em up, fortunately without draining a constant stream of pound coins out of your wallet. In fact as soon as you load up the game, there is a distinct arcade look and feel to the menus which is both good and bad, but more on that later… Sadly I think Capcom have skimped a little with this title, and they have leant far more heavily on the Marvel side. As a result many of the 20 characters available in the game are Marvel superheroes, and there are only a few of the classic Street Fighters left. Old favourites like Ryu, Chun Li and M. Bison are in the game, but other than that it's a rather poor turnout for the Street Fighter crew. While this is a shame, it does make way for the better known superheroes and quite a few which I had never heard of. Classic characters like SpiderMan, Venom, The Hulk, War Machine and Wolverine make up some of the better known fighters, while Captain Commando, Jin and Morrigan make up some of the more obscure champions of peace. One thing that the game does add to the formula is a tag team style of play. Instead of selecting just one character, you get to pick two.
This brings with it some merits, and it can make the game a little more tactical (and possibly more difficult) to play. While it can be a little confusing knowing who to pair with whom in order to create an unstoppable duo, it is certainly a great deal of fun pairing some of the guys together. For example, who would have thought of SpiderMan fighting with his arch nemesis Venom as his partner? Of course there is no limit (well, almost no limit) to the combinations, and putting the diminutive Mega Man with the dominating Hulk does tend to yield some interesting confrontations! Switching between the two characters is fairly easy, and can be done at any time during a fight. It's probably not too surprising to learn that changing characters mid match can actually help you get the win, which is probably why the computer does it every now and again. Admittedly it does turn the game into a bit of a WWF fest, but that's not necessarily a bad thing. While there are 20 main characters to pick from, there are a whole heap of others which take the form of 'support characters'. After you've picked your fighters the computer will randomly choose your 'celebrity' support. Once again these are drawn from another pool of superheroes, most of which are b-list Marvel stars. There are a few notable support characters like Cyclops and Iceman, and it is a shame that two such awesome fighters have been relegated to a support role. Oh well… The game has a few play options, ranging from the classic arcade style to the must have 'versus' mode, where you get to actually play against your mates - should you have any that want to play against you! One worthwhile inclusion is the training mode. This allows you to try out all of the players against any other computer controlled duo. There is no time limit, and no real health restriction either, as after any damage is sustained the health bars of both the player and computer are restored to full.
You can also change characters mid fight, which allows you to experiment with which characters do better against the others. On the graphical front I felt more than a little let down by Marvel Vs. Capcom. I said earlier that the menus had a very arcade feel. Unfortunately they look rather dated and would not look out of place on a SNES cart. Also the whole game looks distinctly low res and certainly isn't pushing the Dreamcast to its limits. It is a shame, as in the face of the new breed of 3D style beat 'em ups Marvel Vs. Capcom looks extremely old due to its graphics. While it is very colourful it is a little grainy, and some of the textures could have done with a little more work. On the sound front you can expect a cacophony of arcade style audio, with some classic samples of the characters, such as Ryu's 'hadouken', adding a nice touch to the game. The rest of the samples are also very good, and as always can add a much needed bit of comic relief - especially when a character explodes into one of their awesome special moves. Music wise you can expect to be drowned in a sea of electronic music that harks back to classic Japanese arcade titles, and after a while the 'tunes' can get extremely annoying, but that is hardly a rare occurrence with console games. Conclusion Ultimately Marvel Vs. Capcom is eminently playable and thoroughly good fun. I often find myself playing it if I'm going out and just want something to occupy me for a short while before I go. It's also good against your mates, and in that respect it owes a lot to its ancestors, which always managed to make playing against your friends extremely entertaining. I have criticised the game, and I will stand by those criticisms. It is fun, it is good, but with a poor turnout from the Street Fighter gang and graphics that no-one would call cutting edge it has its flaws. After all is said and done Marvel Vs.
Capcom will undeniably find many fans amongst both hard core fight-fans and console beat 'em up newbies. What The Scores Mean - Out Now
- EpoxPrice - about £120 IronGate Ever since its release the AMD Athlon has suffered from poor motherboard support. While the original AMD IronGate chipset wasn't particularly bad, the design specifications for boards based around it were rather demanding, especially for a standard desktop PC. One of the hardest requirements was that the motherboards needed to be constructed from a six layer printed circuit board. This led to interesting problems, not to mention the added cost caused by the process. Thanks to this extra cost (which pushed board prices way beyond that of the average BX based board) and some of the initial teething problems with the IronGate chipset (namely poor AGP support and the Super Bypass feature), Athlon uptake was perhaps a little slower than it might have been. Because of these problems, many held off awaiting the arrival of the KX133 chipset from VIA. Unfortunately, due to delays and a fierce legal battle between VIA and Intel, the KX133 arrived a little later than expected. It finally appeared a few months ago, and one of the first boards to hit the shelves was Epox's 7KXA motherboard... Enter KX133 Thanks to the KX133 chipset the specifications that the board needs to meet have been slightly relaxed, so that a more standard 4 layer PCB can be used. This allows for not only more stable and less power hungry boards, but also cheaper boards. The KX133 feature set is suitably large for a new chipset, and it certainly makes Intel's ageing BX design look like a poor competitor, with the now standard Ultra ATA 66 drive interface, AGP 4x, support for asynchronous memory speeds, and four USB root hub ports. The most interesting of these is the memory speed support. As the Athlon runs on a 100MHz DDR CPU bus the rest of the system operates at the standard 100MHz. The memory is also run at this speed, but there is an option to increase the memory bus speed (and only the memory bus speed in this case) to 133MHz. 
This allows for greater memory bandwidth, and is most important when it comes to AGP 4x. To highlight this, at 100MHz and with a standard bus width of 64bits (8 bytes) the maximum bandwidth is 800Mb/sec (100*8). At 133MHz this becomes 1.06Gb/sec (133*8), which happens to match exactly the maximum transfer rate of AGP 4x. As an additional note, the board also supports VCSDRAM (Virtual Channel SDRAM), although currently this type of memory is very rare, and more expensive than standard SDRAM. The general consensus is that if you're going to spend more money on memory, it would be better spent on 133MHz capable memory than VCSDRAM. The Board The board itself has the standard complement of 3 DIMM slots, 5 PCI Slots, 1 ISA slot, 1 AGP slot, and the new (but seemingly useless for now) AMR slot. The standard set of connectors (serial, parallel etc.) can be found at the back of the board, along with three audio connectors and a joystick / midi port, which is connected to the onboard AC97 compliant codec. This is useful, as the board is capable of producing some sound out of the box. Epox have thoughtfully included a UDMA 66 capable cable, which will certainly save the average user a few pounds if they want to connect up a new UDMA 66 capable drive and achieve the best performance possible. Also included in the box is a copy of Norton AntiVirus 5 and Norton's brilliant Ghost utility. Both of these are worthy inclusions, as in this day and age no one should be without protection from viruses, while Ghost allows for easy cloning of your old drive on to a new one, ready for use with your shiny new Athlon system. The board also supports some overclocking features, which will interest some users, as they may allow them to get more out of their chip without having to instantly resort to one of the many Athlon overclocking devices now available. These features could also augment such devices.
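For those who want to check the arithmetic, the bandwidth figures above fall out of a one-line calculation (the bus width and clock speeds are the ones quoted in this review):

```python
def peak_bandwidth_mb(clock_mhz, bus_width_bits):
    # Peak bandwidth in Mb/sec: one transfer per clock,
    # moving bus_width_bits / 8 bytes per transfer.
    return clock_mhz * bus_width_bits // 8

# KX133 memory bus: 64 bits wide
print(peak_bandwidth_mb(100, 64))  # 800 Mb/sec at the standard 100MHz
print(peak_bandwidth_mb(133, 64))  # 1064 Mb/sec (~1.06Gb/sec) at 133MHz, matching AGP 4x
```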
Among the features supported by the board are voltage selection, from 1.5v to 1.8v, and selectable front side bus speeds which allow for acceleration of the Athlon chip. Conclusion The board is well specced, well priced, and should perform well with the Athlon. It should also alleviate any problems that previous upgraders have experienced with power supply issues, in that the board is less power hungry than earlier boards based on the IronGate. Despite this the Epox board isn't perfect - it might have been nice to see a fourth DIMM slot and a sixth PCI slot, and perhaps a better audio codec, as the one on the board is an unknown quantity. Even with these slight niggles though, the Epox 7KXA is a well featured board that is certainly not going to hinder the Athlon as a very viable desktop alternative to an Intel processor. The only problem that I foresee with the KX133 chipset in general is that AMD are planning to release another chipset that supports DDR SDRAM, which will help alleviate bandwidth problems even more. Even VIA are working on a chipset that supports this new memory type. It is therefore only a matter of time before KX133 based boards like this are superseded, but this is the way of things in the computer world. 8
- ElsaPrice - £180-200 DDR The GeForce 256 is the single most powerful graphics chip available for the desktop PC market today. This is a well known fact, and has been true since its release in September last year. While the performance of this chip is undisputed, there are two factions within the GeForce camp, relating to the type of memory used to partner the chip on various manufacturers' boards. While most companies are happy to produce both types of board, they do charge a premium for one, and it is because of this that people may be confused by the whole idea of there being two apparently identical boards, one of which costs a significant amount less. The difference in the memory types is that they are either standard Single Data Rate (SDR) or Double Data Rate (DDR). To explain briefly, data can only be transmitted across a bus in time with a clock pulse. Traditionally data could only be transferred on one edge of this clock pulse (the clock pulse is a wave), but with recent advancements data can now be transferred on both the rising AND falling edges of the clock pulse. This is very much how AGP 2x functions, and how the DDR front side bus of an Athlon CPU works. The Board Elsa are one of many companies that have produced both the SDR and DDR variants of GeForce board, and here we will be looking at the cheaper SDR flavour, the Erazor X. The board itself breaks with the standard tradition of following a reference design. Elsa are known for their esoteric boards, and the Erazor X doesn't disappoint. When looking at the board it wouldn't be surprising for someone to wonder where the rest of it went. Elsa's unconventional design involves taking a fairly substantial chunk out of the board, toward the 15 pin connector edge of the card. The board doesn't feature any TV-Out capabilities and, as a result, it makes sense for Elsa to have taken this chunk out - why waste board materials when it's only going to be empty space?
It is a shame that there are no output features other than a standard 15 pin monitor connector, but seeing that TV-Out is a very under-used feature it makes sense to leave it out and keep the price of the board down. One very good thing about the Elsa board is that the supplied heatsink and fan on the GeForce chip are rather nice and solid, with a large surface area from which to dissipate the copious amounts of heat generated by the chip. My only annoyance is that while the heatsink is attached using a fairly standard thermal compound, which is more than good enough for the task, the actual contact between the chip and the heatsink isn't particularly good, as there is a slight gap between parts of the chip and its heatsink. This may only have affected the board that I saw, and seeing that I experienced no problems with it in testing, the cooling should be sufficient. Drivers Elsa have once again done a sterling job with the drivers, and have based them on the latest NVIDIA reference set in order to achieve the highest performance possible. It is nice to see that they have also added a few extras to the drivers, as all too many companies just churn out the reference drivers but replace NVIDIA's branding with their own. While this isn't too much of a problem, it is nice to see that some (like Elsa) go a little further. The extra driver features come mainly in the form of the monitor set up, with an extensive control app which allows for very precise tuning of the monitor. Thoughtfully they have also included a large set of pre-defined monitor scripts, although creating your own personal one isn't particularly difficult. One very interesting feature that has popped up is something called chip guard. The idea behind this is that, should you overclock the chip or have a problem with it overheating due to a fault, chip guard will monitor this and attempt to prevent you from doing any real damage to the chip itself.
Chip guard will either forcibly stop the application, crash the machine, or apparently turn the computer off. During testing I didn't encounter any problems, and so never got to see chip guard in action, but there is certainly peace of mind in knowing that it is there. Elsa may not have come up with this idea (I believe Asus first came up with this solution for their own GeForce boards), but kudos to them for actually including such a program. I have yet to see anyone else implement such a feature... Performance The Erazor X was compared to a DDR based GeForce in order to highlight the performance differences that exist between the two. It is clear from the benchmarks that the DDR GeForce is consistently faster than its SDR cousin, especially when running at higher resolutions and bit depths. It is for these reasons that DDR memory was brought in. Now For The Science Bit... The available memory bandwidth is the main reason for the relatively poor performance of the SDR Erazor X, in comparison to the DDR variety at least - the performance is still excellent compared to any other graphics card currently on the market! It is easier to demonstrate this with a few simple calculations... Both varieties of GeForce have a 128bit memory bus, but while the memory runs at 166MHz for SDR memory, it reaches an effective 300MHz for the DDR variety. Therefore the SDR memory's bandwidth is (128/8) * 166 = 2.6Gb/sec, whereas the DDR memory's bandwidth is (128/8) * 300 = 4.8Gb/sec. Due to this rather impressive bandwidth increase, a card using DDR memory is capable of transferring more data, which the chip is all too happy to supply, and so the card doesn't lose as much of its performance at higher resolutions or bit depths. In fact, the SDR memory becomes a major bottleneck due to a simple lack of bandwidth between the chip and the memory. Do bear in mind that both chips are running at the same clock speed, and so the ONLY factor affecting the performance is the memory bandwidth.
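The same calculation can be sketched in a few lines, with DDR modelled as a 150MHz clock transferring on both edges (the 'effective 300MHz' quoted above):

```python
def peak_bandwidth_mb(effective_clock_mhz, bus_width_bits):
    # Peak bandwidth in Mb/sec: bytes per transfer times transfers per second.
    return effective_clock_mhz * bus_width_bits // 8

sdr = peak_bandwidth_mb(166, 128)      # SDR GeForce: one transfer per 166MHz clock
ddr = peak_bandwidth_mb(2 * 150, 128)  # DDR GeForce: both edges of a 150MHz clock
print(sdr, ddr)  # 2656 and 4800 Mb/sec - the 2.6Gb/sec and 4.8Gb/sec figures above
```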
Conclusion The Erazor X is a good board that performs well. It comes with a good set of drivers which are constantly updated by Elsa, and it also includes Chip Guard, which is a useful feature for anyone wanting to overclock the board. Unfortunately the performance is held back by the memory used, and so you will never see the full power of the GeForce released on to your screen. One would have thought that by using SDR memory the Erazor X would be cheaper, but when compared to other offerings the Elsa board is still quite expensive, which is a shame as performance wise it can't offer anything more than any other SDR based GeForce board. It is for these reasons that the Erazor X doesn't receive a higher score, but nonetheless this is another good product from Elsa, if a little expensive. Release Date - available now 7
- IntelPrice - about £550-£600 History Lesson In the world of desktop computing there are many battles being fought, but there is only one real war. The eternal struggle of chip makers like Intel, AMD and the late Cyrix (now part of VIA) has led to huge leaps in performance and speed in order to claim the accolade of "fastest x86 CPU". Until recently Intel were undoubtedly the chipmaker of choice for gamers, as their all important FPU ("Floating Point Unit") was more powerful than anything the competition could throw out their doors. The Pentium MMX was a powerhouse in its time, and would put a K6 or Cyrix 6x86 to shame when it came to games. Its successor, the Pentium II, widened the performance gap even more. Ultimately this high performance was the start of the downfall for Intel. With such performance, Intel's chip pricing was equally high, making their processors a fairly unrealistic proposition for anyone building a system to a budget. As a result the cheaper AMD processors became more popular, causing Intel to respond with the "budget" Celeron. A Question Of Complexity Early models of the Celeron did not include any level 2 processor cache (L2 cache), and as such their application performance was extremely poor. Games wise they were very good though, as the all important Pentium II core with its powerful FPU formed the basic design. Shortly afterwards it was announced that new Celerons would feature 128Kb of L2 cache, which sounded reasonable for a cheap chip. The special feature was that this cache would be running at the full speed of the CPU (as opposed to the half speed PII cache), and it would be on the same silicon as the CPU core. This yielded the Celeron 300A, one of the most complex chips ever attempted by Intel. With 19 million transistors in the core it represented a massive leap from the 7.5 million of the Pentium II. It also had the unique feature of being extremely overclockable.
As a result the Celeron was capable of performing as well as the more expensive Pentium II in both games and applications, with the on die cache making all the difference. More recently Intel released the Pentium III, and with it came SSE (Streaming SIMD Extensions). Unfortunately a Pentium III behaves in exactly the same way as the Pentium II, except that due to changes in the actual CPU die the chip can achieve higher clock speeds. As a result we have seen the Pentium III reach 600MHz. Copper Mine Now Intel have made another radical change to the Pentium III architecture in order to better respond to the threat of AMD's Athlon. Intel's latest incarnation is the "CopperMine" core which, like the Celeron 300A, introduces on die L2 cache to the existing Pentium III core. This time Intel have decided to add 256Kb of 'advanced transfer' cache to the chip, which makes the cache more efficient, and so helps to boost performance even further. The chip itself is built on Intel's brand new 0.18 micron production lines, which allows for higher frequencies, lower operating voltages, and fewer thermal problems. This 0.18 process is also what makes it feasible to put 256Kb of cache on the chip die. Another new design feature is the orientation of the silicon. Instead of being at the 'bottom' of the chip, and therefore not being cooled directly by the heatsink and fan, Intel have effectively flipped the chip over so that the silicon is at the topmost surface of the chip. This has been designated 'flip chip' technology and allows for better cooling of the chip, which is a major factor when running as high as 800MHz. Performance The extra clock speed of the Pentium III 800 gives it a proportionate performance increase, and the faster cache is certainly helping to increase this lead slightly, though as there isn't any non-CopperMine Pentium III 800 to compare against, it is impossible to know exactly what kind of performance boost is offered by the slightly smaller but faster L2 cache.
Conclusion The Pentium III 800 is a super chip. Intel have reached a destination that 18 months ago nobody could really have foreseen them achieving so quickly. Of course it was inevitable that we would see CPUs reaching 800MHz and beyond, but with Intel finally getting some competition in the form of the Athlon we must wonder whether, if left to its own devices, Intel would have come this far so soon... In terms of the two chips, the Athlon core is technically superior to that of the Pentium III (excluding the on die L2 cache), and it does still offer better performance on a clock for clock basis in most cases. But Intel have worked hard on creating a much more stable platform of core logic for their CPUs - I need only mention the age old but rock solid BX chipset, which has helped keep Intel at the top. The Pentium III 800 is a good chip, and it would be very easy to build a very competent computer around it. It doesn't quite offer the fastest performance on the block in every case, especially given the recent announcement of 1GHz processors from both AMD and Intel, but most of the time you wouldn't be disappointed, and with the dependable BX chipset behind it you will have a machine as stable as money can buy. 9
- 3dfxPrice - about £100 3dfx Slip 3dfx have lost their way in the 3D market place recently, and competitors such as NVIDIA have eagerly jumped in to take the crown that at one time deservedly belonged to 3dfx. Without any major new product, 3dfx continue to take second place in the 3D speed stakes. Despite this potentially crushing defeat, 3dfx have had the foresight to hedge their bets. Many chip makers have wisely utilised the resources available through AGP, and as a result have produced some very impressive products. They have, however, forgotten about the lower end of the market. Strange as it may sound, not everyone is endowed with AGP, and this AGP divide isn't just limited to older systems. A case in point is Intel's AGP-less 810 chipset - boards based on the 810 (and its associated flavours) are perfect for the most budget conscious of users, but should a user want to upgrade from the rather weak i752 graphics controller they have a somewhat limited choice. In steps 3dfx. V3 3000 PCI Despite not being 'the fastest card on the market', the Voodoo 3 3000 PCI is still a reasonable performer, and certainly a good card for lower end platforms. The V3 3000 PCI varies slightly from the original AGP version in that it is significantly larger, which is undoubtedly a result of it being converted to PCI. It is also missing the TV encoder chip, which (on the AGP version) allows you to plug the card into a TV for some big screen gaming. Ultimately many users would never use this feature, but for the sake of a couple more dollars... Other than that the board is virtually the same. The same 16Mb of SDRAM occupies the board, as does the rather substantial heatsink attached to the chip. It's curious that 3dfx didn't include a fan as well, as the board does get very hot, and after only minutes the heatsink becomes literally too hot to handle.
On the driver side, 3dfx have implemented their ubiquitous control panels, so anyone used to other 3dfx cards will feel right at home, and even those who have very little experience will have no problem tweaking the options. One thing I must note though is that installing the drivers isn't the easiest thing in the world to do, as 3dfx seem to have made their CD slightly less than user friendly. This is a shame, as they do include a very nice video on the CD which shows the user how to install the card itself. Oh well... 3dfx have also included some extra software in their bundle - EA's FIFA '99 and Epic's Unreal, both of which are good games and certainly show off some of the prowess of the Voodoo 3 chip. There is also a voucher to redeem a copy of Unreal Tournament, although this is likely to be replaced soon by the actual game, now that it has been released. Performance One question that has to be asked is whether the transition from AGP to PCI has harmed the performance of the board. Due to the way in which 3dfx have designed the Voodoo 3 chip, it doesn't use any of the advanced features of the AGP bus, such as AGP 2/4x, Side Band Addressing, or DME [That's enough - jargon Editor]. It effectively acts like a PCI 66 device anyway, so there are no real issues to deal with when creating a PCI version of the card. Due to the fall in bus clock speed there is the potential to lose some performance, but because of the way in which 3dfx chips work there isn't quite as much of a penalty as one might expect. As you can see from these results, the PCI version of the Voodoo 3 3000 is not noticeably slower than its AGP cousin. In some ways this is a good thing, but it does show the total lack of commitment to AGP in general from 3dfx... Conclusion While it is no speed demon, the Voodoo 3 3000 PCI can at the very least hold its own at the lower end, and it does represent reasonable performance for a good price.
Of course, while the user is getting a fairly cheap graphics card they are also buying one that is missing some useful features, most notably 32 bit colour rendering. 3dfx don't have the only PCI 3D graphics card available, and it may be worth investigating some of the others, such as the PCI version of VideoLogic's Neon 250 product. Overall though, the Voodoo 3 3000 PCI is a great little card for those who require a PCI graphics solution. - Creative Labs Annihilator Pro review Elsa Erazor III Pro Video review Neon 250 review 8
- MicrosoftPrice - about £35 Mouse Abuse When most of us buy PCs, or put together our own, we usually have to decide on what input devices to buy. Any home builder on a budget might have to skimp on the mouse and keyboard in order to maximise the hardware that they buy. They may well end up with a 'regular' keyboard and a fairly innocuous mouse that covers all the necessary bases by simply having buttons on it. Off the shelf PCs are more likely to come with slightly better input devices, especially mice. Ultimately though, any PC owner interested in gaming is going to notice just how bad their mouse is. Enter Microsoft. The Big M Steps In Many people may regard Microsoft as being a software house, but they also make a number of input devices. They have in fact been making mice for almost a decade now, and if you buy an off the shelf PC the chances are that you are using a rebadged Microsoft mouse. Since they have been making mice for so long, they have become rather adept at this little known art. The first mouse they made was exceptionally ergonomic (a buzzword at the time) and extremely well weighted, and it was this design that really started to make such a name for Microsoft rodents. At the time though there were very few games that would really require fast, precise mousing action, and for those that did (such as Doom) there was no real difference drawn between one mouse and another. Years passed and Microsoft redesigned the mouse, at the cost of many millions, in order to provide a shape that was yet more ergonomic, and suited to both right and left handed users. This became known as the MS Mouse v2.0. By this time gaming had progressed, and gamers were beginning to take a little more note of the quality of their mice.
After hitting on such a winning design Microsoft updated their mouse with a scrolling wheel, which not only aids scrolling in Internet Explorer and other apps, but also helps with fast weapon switching in games like Quake. This mouse became the choice for many gamers thanks to its comfortable shape, good weighting, smooth action, and precise control. The MS IntelliMouse was really making a name for itself in the gaming world... With the advent of USB, sampling rates increased, yielding greater input accuracy and making the MS IntelliMouse a must have for any gamer. Of course, other companies like Logitech have tried to keep up, and to some extent they have, with some preferring the different shape and feel that Logitech provide. But now Microsoft have jumped ahead again, with an improvement so sophisticated that it rivals the invention of the mouse itself! Advanced Mouse Technology Since the 1960s the mouse has utilised two sets of rollers connected to grooved wheels that translate movement into x and y co-ordinates. When these grooved wheels spin they cut a beam of light, registering the movement, and by this method the mouse translates the movement of the rollers into instructions to move the pointer in the desired direction. In most cases these rollers have been moved by a hard rubber mouse ball. It is here that Microsoft have made their change. With their new mice they have replaced the entire mechanism with a fully optical set up. Instead of a moving ball and internal wheels there is nothing, only a glowing red light that emanates from the bottom of the mouse, from which all the movement is detected. The added advantage of an optical mechanism is that there are no moving parts that require cleaning, which should banish the endless comments regarding dirty mice and poor play. To be fair, we have seen optical mice before, but in the past they have required a special mouse mat with precise tracking dots in order to function. 
Should you wear out or lose it you would have to fork out whatever the company wanted to charge you for a new one. Microsoft's new mouse, on the other hand, will work on more or less any surface. So How Does It Work? The new mechanism, dubbed IntelliEye by Microsoft, works by scanning the surface at a rate of 1500 images per second. By using a powerful 16 MIPS processor it can then determine not only the direction of movement, but also the rate. In theory this means that the IntelliMouse with IntelliEye should be able to track as fast as, if not faster than, any other mouse on the market, and do so more accurately too. Unfortunately there is a problem, as there always is. While yielding super accurate tracking and immense response most of the time, under certain conditions the mouse can become as useless as a doorstop. Microsoft themselves say that while the mouse will work on almost any surface, one should avoid reflective or transmissive surfaces like glass or a mirror, and also certain types of repeating patterns like wood grain. On closer inspection, it appears that while it doesn't require a special mousing surface like previous optical mice, a sensible mouse surface is still needed to achieve good operation. It's very easy to see when the mouse isn't tracking properly, as the pointer will begin to move in a random fashion or refuse to move at all. I myself went through half a dozen mouse mats before finding one that provided a good surface to track on. In Game Most importantly though, how does the IntelliEye technology fare in games? Well, in slower RTS style games, for which I used my personal favourite HomeWorld, the mouse action is smooth, which translates to fine control over both the camera and units. But how does it fare in a much faster paced environment, such as Quake 3 Arena? 
Regrettably, I must inform you that while in general it provides a high degree of accuracy, which is great for precise camping with the railgun, under moments of extreme combat where lightning fast mouse response is mandatory for survival, the IntelliEye started to show its limits. At key moments the tracking failed, resulting in random movement that proved very disorientating in a deathmatch. It is possible that this was due to the mouse surface used, but under other circumstances no glitches occurred, which does tend to imply that there is simply a problem with tracking very rapid movements. Overall IntelliEye is a great technology, and it is very nice to use. It takes a little while to get used to a mouse with no ball, but once adapted it becomes a joy to use. Graphical apps like PhotoShop benefit greatly, as the tracking is just that bit more accurate, which in turn allows for much easier graphical manipulation. Slower games will also benefit from the smoother mouse action, but when it comes to a serious hardcore deathmatch the IntelliEye can't quite touch the regular mechanical mouse. It is sad, therefore, that I cannot recommend the IntelliMouse with IntelliEye wholeheartedly, as it will not suit everyone's needs. If, however, you do not play many first person shooters, and will therefore not require such rapid mouse response, this mouse is extremely capable and certainly deserves consideration if you are planning to replace your old mouse. While it isn't an "absolutely must have" product, it is highly commended for bringing such a new approach to an old technology. 8
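The IntelliEye principle described above, photographing the surface and comparing successive frames, can be sketched in miniature. This is a toy illustration of generic image-correlation tracking, not Microsoft's actual algorithm: it simply searches for the shift that best aligns two tiny "surface" frames.

```python
# Toy illustration of optical mouse tracking: find the (dx, dy) shift
# that best aligns two successive snapshots of the surface texture.
# NOT Microsoft's IntelliEye algorithm - just basic block matching.
import random

def estimate_shift(prev, curr, max_shift=2):
    """Return the (dx, dy) shift that best aligns curr with prev."""
    h, w = len(prev), len(prev[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err, n = 0, 0
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        err += (prev[y][x] - curr[sy][sx]) ** 2
                        n += 1
            if n and err / n < best_err:
                best, best_err = (dx, dy), err / n
    return best

random.seed(0)
# A random 'surface' texture, and the same texture shifted one pixel
# to the right between frames; the tracker recovers the shift.
frame1 = [[random.randrange(16) for _ in range(8)] for _ in range(8)]
frame2 = [[frame1[y][x - 1] if x > 0 else 0 for x in range(8)] for y in range(8)]
print(estimate_shift(frame1, frame2))  # (1, 0)
```

A real sensor does this 1500 times a second in dedicated silicon; the failure modes the review describes (glass, mirrors, wood grain) are exactly the surfaces where no shift gives a clearly better match than the others.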
- Creative Labs. Price - £218 (direct from Creative Europe) In The Beginning In the beginning, 3dfx based hardware ruled the 3D acceleration scene. Many companies sold products utilising their technology, with Creative Labs being just one. But with 3dfx's acquisition of STB, Creative et al. had to turn to other chips to use in their products. Enter NVIDIA. NVIDIA's first real entry into the 3D accelerator market came with the Riva128 chip. This supported a maximum of 8Mb of RAM and performed admirably for the time. It was also one of the few chips that actually took advantage of any AGP features. It was, however, plagued by driver and performance problems, and therefore did not find itself the choice of the serious gamer. Since then NVIDIA have seriously improved their technology and drivers, and we have seen the TNT, the TNT2 (with an Ultra flavour), and now their newest chip - the GeForce 256. May The GeForce Be With You The GeForce builds on NVIDIA's previous chips by not only adding superior performance, but also a very powerful new feature... but more on that later. The chip itself contains four rendering pipelines, twice as many as the previous generation. This allows up to four textured pixels to be produced in one clock cycle, or alternatively quad textured pixels, which in theory allows for far more detailed scenes with little performance hit. This 'quad rendering engine' helps push the fill rate of the chip up to 480Mtexels/sec without requiring any increase in clock speed. In fact, due to the design the chip runs at a lower clock speed than previous chips, a measly sounding 120MHz. So what is this new feature? For those of you who have had your heads under a rock for the past month or so, NVIDIA have decided to offload a significant chunk of the 3D processing normally done by the CPU on to the graphics chip itself. 
This is commonly referred to as geometry processing, or Transform and Lighting (T&L), and is an extremely computationally expensive process. By shifting this off the CPU they have freed it up, allowing more processing power to be used on other tasks such as better physics or AI. The net result is that the GeForce 256 chip is fairly substantial in size, containing over 22 million transistors. In comparison to a P3-500, which only has 9 million, this is quite some chip. Professional Annihilation Creative have harnessed the power of this new chip in their new Annihilator Pro graphics board, and to achieve maximum performance they have made a couple of changes to their original Annihilator design. On their initial release, GeForce boards performed slower than expected, and their fill rates were severely compromised. It was discovered that the 166MHz memory was producing the bottleneck that was holding performance down. Using faster memory solves the problem, but as such memory is highly expensive to produce, another solution had to be found. Enter DDR (Double Data Rate) memory. Standard memory can only be written to and read from on one edge of the clock signal. DDR changes this to allow data to be written and read on both the rising and falling edges of the clock pulse, similar to the way in which AGP 2x functions. This has the net effect of doubling the available bandwidth. The Annihilator Pro uses this type of memory, and therefore suffers none of the performance loss experienced with SDR (Single Data Rate) memory. In order to differentiate between the memory clocks, the memory on the Pro is referred to as running at 300MHz. This is double the standard 150MHz and adequately represents (in simple terms) the benefits of DDR memory. Creative Labs have done a sterling job with the Annihilator Pro, and have complemented a well built card with drivers of equal quality. 
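The bandwidth arithmetic behind DDR is simple enough to sketch. The 128-bit bus width below is the commonly quoted figure for GeForce boards, used here purely for illustration:

```python
# Peak memory bandwidth = bus width (bytes) x clock x transfers per clock.
# DDR transfers data on both clock edges, moving twice the data of SDR at
# the same real clock - which is why 150MHz DDR gets quoted as "300MHz".

def peak_bandwidth_mb(bus_bits, clock_mhz, transfers_per_clock):
    """Peak bandwidth in MB/s (1 MB = 10^6 bytes, as marketing counts it)."""
    return (bus_bits // 8) * clock_mhz * transfers_per_clock

sdr = peak_bandwidth_mb(128, 150, 1)  # single data rate
ddr = peak_bandwidth_mb(128, 150, 2)  # double data rate

print(sdr, ddr)  # 2400 4800 - DDR doubles the available bandwidth
```

That doubling, with no change to the memory clock itself, is the whole trick: the fill rate the chip can sustain is ultimately limited by how fast texels can be fetched from memory.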
Apart from an error in the installation process (which should be corrected by the time you read this), the drivers installed easily enough, and provide a full set of useful tweaks. There are a lot of standard options that we have seen with previous NVIDIA drivers, such as settings for the image quality vs. speed trade off, but with the GeForce able to perform anisotropic filtering without suffering any performance hit, there is also an option to enable this as the default filtering method. As expected there is an overclocking option, although this is only for the memory clock speed, and not for the core clock. The default setting is 300MHz, but the slider allows for a maximum of 330MHz. The memory itself is rated at 6ns, which means that it will support a maximum real clock of 166MHz. Applying the DDR idea of doubling the speed, the memory should be able to hold up at speeds as high as 332MHz. That said, I don't recommend any overclocking activities, but should you try there is potential for a visible performance increase. Performance The performance benchmarks really speak for themselves: the GeForce is significantly faster than a TNT2 Ultra, which is its only real rival in performance terms. It is certainly interesting to see that with DDR memory the GeForce is achieving its target fill rate, which is really down to the higher bandwidth made available by the DDR SGRAM. In game the Annihilator Pro is extremely fast, and while most games may not support T&L acceleration directly yet, there is a performance increase due to the increased fill rate of the GeForce. It is even possible to turn on every feature at high bit depth and resolution and still maintain a playable framerate in games like Quake 3 Arena, although no hardcore gamer would want to play at such a reduced frame rate. It is interesting to note that the test system used is only a P2-450 based machine. 
It is surprising to see such a performance increase despite this CPU limitation. Hopefully we should be upgrading the test platform next year (fortunately only days away now), and it will be interesting to see how the Annihilator Pro performs with a "real" CPU. Conclusion It is clear to see that NVIDIA have successfully kept their crown as King of the 3D world. The GeForce 256 is an impressive step forward, and the Annihilator Pro is a great card that takes full advantage of this chip. Creative Labs have done a fine job on both the card and the drivers, which is to be expected from such a massive company. They have also updated their box design, dropping the orange (at least for the time being) and packaging the Annihilator Pro in a very sexy black box. The supplied software is reasonable, but certainly nothing special. There is a selection of NVIDIA tech demos, which show off some of the features of the card, as well as a preview of Evolva and a copy of WinDVD. I was a little disappointed that there were no more games (or even demos) included, but such is the way of bundles. Now I know what you are thinking. What games actually use hardware T&L acceleration? At the moment very few, but game developers will be taking advantage of these features, and when they do we will see the full power of the GeForce. We may have to wait a short while, but they will appear. So should you get one now? Considering that it is the latest and fastest accelerator available, it is certainly worth looking at if you are a total performance freak. Seeing that 3dfx's new cards have been delayed until about March or April, the Annihilator Pro will be on top for some time. We should also be seeing some use of hardware T&L in games by then, and with 3dfx's next generation chips not supporting T&L, the Annihilator Pro might be in for quite a run at the top of the performance chart. The Annihilator Pro is a great card, with great potential that hasn't been fully realised yet. 
I only hope that you find one under the Christmas tree, as at about £200 they aren't cheap. Release Date - available now 9
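Two of the numbers quoted in this review can be checked with back-of-the-envelope arithmetic: the 480Mtexels/sec fill rate (four pipelines at a 120MHz core clock) and the ~332MHz ceiling implied by the 6ns DDR memory rating.

```python
# Back-of-the-envelope checks on the figures quoted in the review.

# 1. Fill rate: four pipelines, each producing one textured pixel per
#    clock, at a 120MHz core clock.
pipelines, core_mhz = 4, 120
fill_rate_mtexels = pipelines * core_mhz
print(fill_rate_mtexels)  # 480 - the GeForce's quoted 480Mtexels/sec

# 2. Memory rating: a 6ns cycle time caps the real clock at 1000/6 MHz,
#    and DDR signalling doubles the effective (marketing) figure.
cycle_ns = 6
real_clock_mhz = 1000 // cycle_ns       # 166MHz real clock
ddr_effective_mhz = 2 * real_clock_mhz  # 332MHz effective
print(real_clock_mhz, ddr_effective_mhz)  # 166 332
```

Which is why the 330MHz ceiling on the driver's overclocking slider sits just inside what the 6ns chips are rated to tolerate.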
- Elsa. Price - £120 for Erazor III Pro, £160 for Erazor III Pro Video. Chip Off The Old Block? Elsa are not newcomers to the graphics card market, having been producing cards for many years now. They have, however, been rather insignificant in the more global marketplace. Until now, that is. With their recently released Savage4 and TNT2 ranges of boards, Elsa have placed themselves further into the spotlight. The Erazor III Pro is their latest card, and uses a new spin of the TNT2 chip which not only yields faster clock speeds but also much cooler running hardware. The new chip is called the TNT2 Pro, and represents a step forward in chip manufacturing: instead of being made on a 0.25 micron process it is manufactured on a 0.22 micron process. This allows companies like Elsa to produce boards that run faster than the original TNT2 chip, but carry no significant cost penalty. It is certainly nice to see this kind of evolution in the world of graphics chips, as once the die has become smaller it becomes far easier to pack more features on to the chips and run them at greater speeds, which is always good news for all concerned. The Card The card itself is a typical Elsa board - somewhat smaller than one might expect, but other than bearing the Elsa logo it isn't anything too extraordinary to look at. The 32Mb of memory is comprised of four 8Mb chips, which means that the card isn't littered with memory chips as has been the case with other manufacturers' boards. The TNT2 Pro chip itself sits under a rather unassuming looking heat sink / fan combo. At first sight it appears that the fan is actually inside the heat sink, but on closer inspection the heat sink part of the assembly is more for supporting the fan than for actually dissipating any heat. 
There are some small fins on the heat sink that I'm sure must have some effect, but bearing in mind that the chip doesn't actually generate that much heat, there should be no problems with the supplied heat sink and fan. The card we tested was an Erazor III Pro Video, and this has one other feature that makes it stand out a little from the crowd - an extra port on the back of the card. I say port rather than output, because in fact it covers both output and input. The card will accept either a composite video or S-Video input, and is capable of producing a display on all three outputs at once - two composite video outs and one S-Video out. This is certainly an impressive feature and is bound to be used by some, although it will be lost on many, as editing and recording video on a PC are not entirely simple tasks. Thankfully the supplied software is reasonably good, which is certainly a step in the right direction... What's In The Box? In traditional style, Elsa have decided to bundle a few little extras with the card. In order to promote the gaming aspect of the card they have included EA's Need For Speed 4. Although it isn't such a great title, it certainly does help show off some of the power of the TNT2 Pro chip. Corel Draw 7 is also included, although seeing that this version is somewhat dated (the current one being 9!) it may prove to be something of a token gesture. Admittedly it is still a good piece of software, and should you be in need of such a package it may well make a worthwhile contribution to the bundle. Thankfully Elsa have also included a program aimed at the video side of the card, which means that you will be able to take full advantage of the video features out of the box - Elsa's own MainActor software. It doesn't have the power of larger packages of this type, such as Adobe's Premiere, but still manages to provide a relatively efficient way of editing video on the PC. 
Ultimately though the most important thing about the card is the driver, and it is here that Elsa have excelled. It is not unusual to see manufacturers making only slight cosmetic tweaks to reference drivers, but Elsa tend to be much more thorough than just changing a few graphics here and there. The drivers install several tabs to the advanced display settings, which allow the configuring of not only the standard NVIDIA settings (such as image quality, mip-map levels etc) but also much finer details, including a very precise monitor set up function which allows truly fantastic fine tuning. I haven't seen anything quite so detailed since Matrox's original PowerDesk drivers, and it is always nice to see greater flexibility through better drivers. Thumbs up to Elsa! Conclusion The benchmark results show the Erazor III Pro competing quite evenly with a TNT2 Ultra card. This isn't too surprising when one considers that there is only about a 5% difference in core clock speed between these two chips - the TNT2 Pro runs at 143MHz, with a TNT2 Ultra clocking in at 150MHz. The results reflect this relatively small clock difference, but there is also the possibility of some CPU limitation with the test machine used. I would therefore be forced to conclude that given more CPU horsepower the TNT2 Ultra would be capable of creating a bigger performance gap over the Erazor III Pro. Despite that fact, the Erazor III Pro would still be capable of producing more than adequate results. Looking at the card in general, it certainly represents good value for money, and considering that one not only gets a superb 3D accelerator card but also the ability to input and output images the card becomes an even more attractive package. The only possible bad point to make is that with the advent of the GeForce256, the TNT2 is no longer the fastest chip on the block and so may not be quite so attractive to the dedicated gamer. 
Fortunately (depending on your perspective) GeForce cards are rather expensive at the moment, and support for T&L is rather limited, so there is still plenty of room for the venerable TNT2 on the performance tree. The Erazor III Pro is another great product from Elsa that has certainly helped place them firmly on the 3D accelerator map. Release Date - available now 8
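The "about 5%" clock gap cited in the conclusion above is easy to verify from the two quoted core clocks:

```python
# The review calls the TNT2 Pro / TNT2 Ultra core clock gap "about 5%".
tnt2_pro_mhz, tnt2_ultra_mhz = 143, 150
gap_percent = (tnt2_ultra_mhz - tnt2_pro_mhz) / tnt2_pro_mhz * 100
print(f"{gap_percent:.1f}%")  # 4.9%
```

Since both chips are otherwise identical, a gap of under 5% in clock speed is consistent with the near-identical benchmark scores, before any CPU limitation is even considered.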
Tribes Extreme (a separate product based on the original Starsiege Tribes game), which was set to include lots of new features including a single player game, has now been canned. It would appear that the game's developers (Dynamix) weren't happy with the single player aspect, and have decided to concentrate on Tribes 2. This might just give them the time to make Tribes 2 a great game... time will tell. In the meantime there is a gorgeous shot of Tribes 2 over at Tribes Players.
- Videologic. Price - £235 for Sirocco Crossfire, £250 for DigiTheatre, £150 for DigiTheatre Decoder. Introduction Videologic are currently redefining the face of multimedia. We already know of their PowerVR technology, which has met with mixed results, but many of you may be unaware of their relatively new series of speaker systems. They have attempted to cover each entry point in the market, and I think they have met its demands very successfully. For the hard core gaming elite they have the Sirocco Crossfire, and for the more movie mad users out there they have created the DigiTheatre. In this review I shall be looking at both products in their own separate contexts in order to help you decide which set would be better for you... Sirocco Crossfires First up we have the new Sirocco Crossfires. The original Videologic Sirocco speaker system comprised a 3 piece set of two satellite speakers and one subwoofer, as well as a truly monstrous amp that drives each of the speakers individually (in fact there are two amplification circuits for EACH speaker). With the arrival of 3D audio APIs such as A3D and EAX, it was clear that a two speaker set up would not be able to do them full justice. As a solution they came up with the innovative four speaker system that is the Crossfire. They use the same amp, which is no bad thing as it provides a level of power and response only seen in traditionally much higher end systems, but instead of driving each part of the speaker individually (one amp circuit for the woofer and one for the tweeter) they now drive both parts of each speaker from one amplification circuit. This may seem like a cop out, but in doing this they have avoided any extra cost and are able to provide four speaker sound with only minimal design changes. To reflect this the speakers themselves are much smaller units, yet overall the volume level is similar to the original two speaker system. 
The subwoofer used is the same as in the standard Sirocco system though. The speakers themselves, while considerably smaller, don't suffer a significantly lower quality or volume output, and still have a truly phenomenal frequency response range. Being as small as they are they don't have much of a bass response, but that is the reason for the separate subwoofer. This also helps separate the multitude of frequencies in any given sound across the speakers, allowing each one to perform at its best. The amp, and specifically its inputs, are where the real 'gaming' implications exist. The amp itself has 3 separate inputs which are selectable from a nice big knob on the front. The primary input carries four channels - front left, front right, rear left and rear right. When coupled with a soundcard that can output separate front/rear left/right signals (such as the Vortex2 based Sonic Vortex2 from Videologic), each speaker outputs only the specific audio information sent to it, thereby providing true 3D positional audio effects. With more and more games taking advantage of 3D audio technology, titles will sound increasingly realistic and authentic when heard on the Crossfire system. As a side note, there is also one other connector on the back of the amp which has a very special function, but I will come back to that later... DigiTheatre The DigiTheatre is a product aimed squarely at the DVD-Video market. The basic premise behind the DigiTheatre is a system capable of reproducing the full 5.1 Dolby Digital (also known as AC-3, which is the compression type) signal that exists on DVD-Video discs. It is unsurprising then that the system consists of five speakers (four for front/rear left and right, plus a centre channel) and a dedicated subwoofer. Also included is the 'black box' (more formally known as the DigiDecoder) that takes the AC-3 feed and converts it into the full 5.1 Dolby Digital sound stream. 
The connectivity of the system is fairly impressive, and at first the new owner may be thoroughly confused. Not to worry: Videologic supply a very well written manual that explains the entire system and how to get the best out of it. To go into a little detail, the output from the DVD decoder (whether from a dedicated S/PDIF or an optical output) is plugged into the DigiDecoder. From there the signal is processed and fed out through five RCA style jacks which connect directly into the amp housed in the subwoofer unit. It is from these five separate signals that each speaker is driven in order to recreate the Dolby Digital effect. The speakers themselves are of reasonable quality but do not compare to those found in the Sirocco speaker sets. They are good, especially for a system of this price, but they cannot fully compare to those used in the Crossfire system. The upshot is that an extra speaker can be added without significantly impacting on the price. The amp used in the subwoofer is also of lower quality than the Crossfire amp, yet will still pump out thumping bass to really make those explosions feel real. It truly is a relative scale when comparing the two products, as both are very good compared to the current market offerings, and it seems almost terrible to have to describe either of them as "not quite as good". The final component is the DigiDecoder. This little black box is based around the latest Zoran DSP, which coincidentally can be found in many higher end AC-3 decoders used in cinemas worldwide that cost many times more than both speaker packages put together. The decoder accepts co-axial style S/PDIF, TOSLINK optical and standard 2 channel RCA inputs, although the S/PDIF and TOSLINK inputs cannot be used at the same time, as the system will only accept one digital input signal. 
From these inputs the audio can be accepted as either a digital or analogue signal (depending on the connectors used), and the appropriate processing is performed. There is a range of output modes including Dolby Pro-Logic (the analogue precursor to Dolby Digital), 4.1 surround, and even 2.1 surround (where the remaining channels are virtualised into the other speakers). You can also adjust the volume and delay settings for each speaker, thereby customising the experience to your own tastes. The DigiTheatre is another impressive system, and after spending many hours watching DVD movies I have found it hard to listen to the audio tracks on anything less than the DigiTheatre. Conclusion Both speaker sets are truly magnificent and deserve a great deal of praise. There is however one interesting solution that I have kept back to surprise you... As I have stated, the Crossfire system is extremely good for listening to music and true 3D positional audio, and the DigiTheatre is suited to those who want to get the best out of their DVDs. Wouldn't it be cool then to be able to create a hybrid system that took the best parts from each? I certainly think so... as do Videologic. Do you remember that I mentioned there was another input connector on the back of the Crossfire amp? What Videologic have cleverly done is to allow a user to connect the four outputs of the DigiDecoder to the four inputs on the Crossfire amp. Now what about the fifth channel and the sub? Well, thanks to the DigiDecoder the fifth (centre) channel can be virtualised, thereby creating 4.1 surround. Okay, but that still doesn't explain the sub. The extra input on the Crossfire amp is a direct feed to the subwoofer, and can therefore provide the .1 required to create 4.1 surround. This is certainly clever, and it is this way by design rather than accident; to save you buying both sets in order to create this hybrid, they also sell the DigiDecoder as a separate unit. 
It seems that one could now have the best of both worlds, should your budget stretch to it. If not, then merely analyse what you listen to most on your computer. If you would rather enjoy playing games in true 3D positional audio and listening to music on your PC then the Crossfire is more the product for you. If you tend to be more fanatical about your DVD movies, but still want a rich audio experience from everything else, then the DigiTheatre will be more suited to you. 8 for DigiTheatre, 9 for Sirocco Crossfires
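The centre-channel "virtualisation" described above amounts to a downmix: the centre signal is folded into the front pair so nothing is lost when only four speakers are present. The sketch below uses the conventional -3dB (1/sqrt 2) downmix weight; whether the DigiDecoder uses exactly this coefficient is an assumption here, not something Videologic document.

```python
import math

# Fold a 5-channel feed down to 4.1 by mixing the centre channel into the
# front left/right at -3dB (1/sqrt(2)) - the conventional downmix weight.
# (Assumption: the DigiDecoder's actual coefficient isn't documented here.)

CENTRE_GAIN = 1 / math.sqrt(2)  # -3dB

def downmix_to_4(front_l, front_r, centre, rear_l, rear_r):
    """Return (L, R, Ls, Rs) with the centre virtualised into L and R."""
    return (front_l + CENTRE_GAIN * centre,
            front_r + CENTRE_GAIN * centre,
            rear_l,
            rear_r)

# A sample where only the centre channel (dialogue, say) carries signal:
l, r, ls, rs = downmix_to_4(0.0, 0.0, 1.0, 0.0, 0.0)
print(round(l, 3), round(r, 3))  # 0.707 0.707 - dialogue split equally
```

The -3dB weighting keeps the total acoustic power of the centre channel roughly constant when it is reproduced by two speakers instead of one, which is why dialogue still appears to come from between the front pair.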
In today's world we tend to forget the true meaning of "multimedia", and instead concentrate too heavily on the visual part and not enough on the audio part. It is true that the visual aspect of any multimedia package, whether it be a game, DVD, or whatever else, is very important, but where would any of these be without the sound? Multimedia In a bid to catch up with the fast paced world of 3D acceleration, companies such as Aureal and Creative Labs have attempted to create 3D audio systems that convey as much in terms of audio experience as the top end graphics cards are now capable of for the visuals. They have done a great job, and no one would claim that APIs ("Application Programming Interfaces") like EAX and A3D are any less convincing to the ear than graphics APIs like Glide, Direct3D and OpenGL have been to the eye. But many users concentrate far too much on their graphics chips and the monitors they connect to them, rather than striking a balance between the audio and the visual. Obviously no one would want to play graphically simple games with truly realistic positional audio, but on the flip-side of the argument, would anyone really want to play extremely complex, realistic games that utilised only simple stereo? I would assume, and I would hope this to be a fair assumption, that most users want the best of both worlds. It would also appear that developers tend to agree with this logic. With titles like Unreal Tournament and Quake 3 Arena, developers have gone not only for superior visual impact but are now also trying to give the user a supreme audio experience too. Under Pressure Unfortunately this isn't always the case, and John Carmack (top dog at Quake 3 Arena's developers, id Software) had previously stated that there would probably be no "special" use of 3D positional audio. Under pressure though, he seems to have succumbed to the wishes of the online users who wanted the inclusion of 3D audio effects. 
The latest Quake 3 demo test now incorporates A3D, but the support isn't exceptional, and in the demo it is prone to causing unpredictable crashes (at least in my experience). Unreal Tournament also suffers from problems on the audio front, which will ultimately lessen the experience. There are titles that do fully use 3D audio though, and benefit greatly as a result. Freespace 2 is one example, and if any of you have experienced its truly awesome positional audio you will know what I mean. Developers are slowly supporting both A3D and EAX, and games are benefiting greatly from their inclusion. It seems that finally 3D positional audio is being taken as seriously as 3D graphics acceleration, which has to be a good thing. This brings me nicely on to my next point... What do you listen to your wonderful 3D audio on? You will probably have at least a 15" monitor, and most off the shelf PCs now come with 17" monitors. It is also likely that the most hardened gamers will have spent a fair bit on even better quality monitors in order to do their expensive graphics cards justice. Audio Environments So what about the sound setup? The sound cards available today are relatively cheap in comparison to 3D graphics acceleration technology, but that doesn't mean your speakers deserve any less investment than the monitor you bought to show off your graphics card. It is a shame then that most speaker systems available suffer from poor output quality and are unable to accurately reproduce really good 3D audio. True, there are companies like Altec Lansing and Boston Acoustics that have gone some way towards providing relatively cheap speakers that convey a great deal of audio detail, and some manufacturers bundle these speakers with their higher end PCs. Even these are limited though, and while they provide a reasonable solution there are far better speaker packages available. One such company is Videologic. 
While best known for their PowerVR technology, they have recently moved into the field of speakers, and they have done a tremendous job of it. Their range isn't cheap, but for those who wish to go the extra mile and have an audio setup that reflects the quality of their video setup, they are certainly worth considering. VideoLogic are only one example, but they have products which are ideally suited to both gaming and DVD movies. And with a bit of foresight on their part, some of their speaker packages can even be connected together to provide the ultimate gaming and DVD solution. Now if one company can do this, then why can't more?

Conclusion

Eventually people will start to take more notice of 3D audio, and it will become as important as 3D graphics acceleration. I'm sure in time we will see more developments in affordable high quality speaker packages that will do the world of 3D audio justice, but for now there seems to be very little interest in this area. It seems that we have forgotten exactly what multimedia means, and have concentrated mainly on one aspect, creating more of a "unimedia" environment. I hope that we will soon see not so much a reversal of this, but certainly audio systems catching up and once again taking their place firmly alongside the graphical world.
Today seems to be a big day for graphics cards, what with announcements of the Voodoo4 and 5 and Abit's new SILURO range, so why not make room for one more! I have just received a press release from VideoLogic detailing the forthcoming Neon 250 PCI. Retailing at £106 UK RRP (ex. VAT), it is similarly priced to the AGP equivalent, and according to the press release offers similar performance. Being a PCI card it isn't intended to be the fastest solution on the market, but it will certainly help people who don't have an AGP slot on their motherboard (boards based on the Intel 810 chipset spring to mind).
It would appear that Abit have gone the way of many other motherboard manufacturers and have recently announced their upcoming graphics products. Launched under the new name of SILURO, Abit will be releasing boards based on NVIDIA's awesome TNT2 Ultra and GeForce256 chips. Specific product information (including pricing) has yet to be announced, but expect news soon.
- VideoLogic
Price - £120-130
System Requirements - P133 or equivalent, 32Mb RAM

What Is PowerVR?

Before I jump into the review, I'm just going to recap on the history of PowerVR technology. Most modern 3D graphics cards take all the triangles (polygons) in a scene and pass them through the rendering pipeline in order to texture them. In doing this they perform a depth calculation (utilising the z-buffer) that helps to reduce the number of triangles rendered by the chip, by removing those that cannot be seen. Unfortunately if a triangle is only partially visible the chip cannot simply discard it; instead it must be rendered as if it were totally visible. This process of rendering pixels which are never seen is called overdraw.

The PowerVR architecture works in a very different way, and in fact thrives in a high overdraw environment. I hear you asking "why?". Simply put, the rendering engine on the Neon 250 only draws pixels that are actually going to be seen on the screen. It divides the screen into a series of smaller squares called tiles, and then by a process of depth sorting each tile is rendered with only the pixels that are directly visible. The render engine then passes on to the next tile.

Overdrawn

This process is called deferred rendering, and thanks to the depth sorting (which is done on the fly in hardware) there is no need for a z-buffer. As no z-buffer is needed, less memory bandwidth is required, or alternatively the memory bandwidth available can be used more efficiently - which is the case here. VideoLogic have quoted a fill rate of between 200 and 500 MPixels/sec, which is not strictly true. The Neon 250 has a base fill rate of just 125 MPixels/sec - it can render one pixel per clock, and with a clock speed of 125MHz this becomes the base fill rate. But as soon as you introduce overdraw into a scene, the "effective" fill rate increases. So with an average overdraw of 2 the fill rate is effectively 250 MPixels/sec.
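To make the arithmetic explicit, here's a quick back-of-the-envelope sketch in Python. This is purely illustrative (the function name is ours, not anything from VideoLogic), using the figures quoted in the review: a 125MHz clock and one pixel rendered per clock.

```python
# Illustrative effective fill rate calculation for a deferred renderer.
# Figures are the Neon 250's as quoted in the review.
CLOCK_MHZ = 125        # chip clock speed
PIXELS_PER_CLOCK = 1   # pixels rendered per clock cycle

def effective_fill_rate(average_overdraw):
    """Effective fill rate in MPixels/sec for a given average overdraw.

    A deferred renderer skips the overdrawn pixels entirely, so avoiding
    an average overdraw of N is equivalent to an N-times higher raw rate.
    """
    base_rate = CLOCK_MHZ * PIXELS_PER_CLOCK  # 125 MPixels/sec base
    return base_rate * average_overdraw

print(effective_fill_rate(2))  # 250 MPixels/sec
print(effective_fill_rate(4))  # 500 MPixels/sec
```

In other words, the quoted 200-500 MPixels/sec range simply corresponds to scenes with an average overdraw somewhere between about 1.6 and 4.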
If overdraw averages 4 then the fill rate effectively reaches 500 MPixels/sec. This may seem like a cheat, but it isn't really, bearing in mind that any standard chip like a TNT2 has to render a lot of information that is never seen. In order to keep the frame rate up, cards like the TNT2 must increase their raw fill rate. The Neon 250, on the other hand, only draws what is visible, and with most games containing significant overdraw, it should have no problem matching other chips on the market...

What You Get For Your Money

The card is a relatively plain green board that features the PowerVR Series 2 chip and four memory chips. And that's it. Other than a heatsink there is nothing more on the board, and it looks rather unassuming when compared to the likes of a TNT2 Ultra board. Unlike a lot of other boards on the market there is also no TV-out, which is a shame, although in reality how many of us actually play games on our TV? Admittedly there is the potential for playing DVDs on the TV, but TV-outs do not usually provide a picture that can rival a dedicated decoder, whether a PC based one or a stand-alone unit. One interesting thing to note is that originally VideoLogic had planned to market the Neon 250 with only 16Mb of SDRAM, but thanks to fluctuating memory prices they decided to go with a 32Mb solution. This doesn't actually add much of a performance increase, as the memory is used more efficiently than in other designs, but at least it allows for more texture storage, which is always a plus.

Software

On the software side, VideoLogic have supplied a CD filled with not only the essential drivers but also some technology demos that show off some of the more advanced features of the architecture. One of the more impressive is the overdraw demo, which really shows PowerVR coming into its own.
Thankfully these can (in some cases) be run on other hardware, which allows you to see how the performance of your old card matches up to the Neon 250. Also on the CD is a selection of game demos that help show off the card further. These include the memorable Incoming, the not-quite-so-good-but-visually-impressive Klingon Honour Guard, the oh-so-tedious Thief (well, in my opinion anyway) [EDITOR - take that man out and shoot him!] and the great-for-a-laugh Rollcage. As I have said there are others, and these will all work very nicely on the Neon 250, which means that should you own the full versions of any of them you can expect some very nice fluid graphics. All in all the bundle is pretty good, and the supplied demos are all fairly recent (which is a bit of a change), which helps make the whole deal more attractive.

Quality Control

No doubt everyone will be more interested in how the card performs in forthcoming titles like Quake III Arena and Unreal Tournament. To this end I have included some shots of these games running on the Neon 250 to show that it looks every bit as good as a TNT2 based card. You can find these scattered throughout the review, with a few more on the last page. The picture I have painted so far seems to be a very rosy one, and certainly puts the Neon 250 in a good light. It is a good card, but I wouldn't be doing my job if I didn't report on any issues that affect it. "What issues?" I hear you cry. Due to its tile based nature, some graphical glitches do occur. Some titles throw up problems for the Neon 250, and one of the more important is Unreal Tournament. In the screenshots (which were taken at 800x600x32) there are no noticeable effects, but when the resolution is pushed to 1024x768 there are a few graphical glitches which, while they do not affect the gameplay in any way, certainly detract from the overall experience.
Conclusion

That said, the guys at VideoLogic are constantly working on new drivers, and these have come a long way in both quality and performance. They are also working with game developers to ensure that patches will be created for existing titles, and that new titles will have no problems. At this price point the Neon 250 is a very competent card (as can be seen from the benchmarks below), and it certainly deserves consideration if you are intending to purchase a new graphics card without spending the earth.

Benchmarks

Tests were done on the following system -

P2 450
Abit BX6 rev2.0 motherboard
128Mb RAM
9.1Gb Seagate Cheetah 10,000rpm Ultra2 HDD
All drivers - latest downloadable versions

Eye Candy
It has come to my attention that 3dfx will be showing off their 'new' product - codenamed Napalm - at this year's Comdex. From the press release it seems that only a 'select' few will be able to get in to see it. Is 3dfx running scared of NVIDIA, allowing only pro-3dfx guys in to see it? Who knows. Hopefully there will be plenty of sauce on Napalm after the event, which will show us just what 3dfx has in store for us! For more info take a look at the press release here
Flanker 2.0 is the sequel to the popular "Su-27 Flanker", and it certainly looks like it will be an impressive title. The basic premise of the game is that you fly for the Russian air force, taking part in a series of missions against any and all of Russia's enemies, including the USA. There is also a mission editor which will allow for user generated missions and campaigns, which is always a plus, providing it is easy enough to use without a degree in AI...

Graphic Violence

Since the original game, the computing power available on desktops has increased dramatically, as has the power of 3D acceleration. With such improvements in technology it is only fair to assume that Flanker 2.0 has been designed to take advantage of them, and take advantage it will. Graphically the game looks top notch, with all the aircraft and vehicles constructed from very high polygon count models. The texture maps used are also very detailed, and will certainly add a lot of realism to the game. Another impressive graphical feature is the landscape. The hills and valleys roll gently by, and the tops of mountains can be seen poking through the clouds, all of which furthers the realistic look and feel of the game. This may not be a ground breaking feature, but the implementation is certainly very impressive. And as expected, the cockpit is (presumably) an exact copy of the real Su-27's - it certainly looks the part, with all the indicators labelled in Russian!

Better Than The Real Thing?

So far the flight model seems to be fairly impressive, and no doubt accurate... though since I've never flown a real Su-27 I can't really comment. ;) With a good joystick and throttle combination it should be an armchair pilot's delight. When playing a game of this genre there is a rather indefinable feeling about emerging from a loop, pulling a high G turn, and dropping in behind your enemy before releasing a salvo of missiles and watching him explode in a lethal ball of fire.
Very few flight combat games can give you a buzz from that kind of manoeuvre - Flanker 2.0 is one of them. Look out for this one when it hits the shelves soon!