
Geek Culture / is AGEIA PhysX value for money?

Author
Message
warship45
20
Years of Service
User Offline
Joined: 24th Jul 2004
Location: uk
Posted: 2nd Nov 2006 16:51
I am thinking about getting a PhysX card. Are they really as good as they make them out to be?

dbpro plugins
www.0z0.co.uk
El Goorf
18
Years of Service
User Offline
Joined: 17th Sep 2006
Location: Uni: Manchester, Home: Dunstable
Posted: 4th Nov 2006 01:27
Well, as with any hardware, the price will go down over time. Talking as an economist, I'd say technically no, they aren't.

Ageia are a monopoly at the moment, so they can charge any price they want. If you want value for money, you need to wait until competitors enter the PPU market, driving prices down and improving quality.
Codelike
18
Years of Service
User Offline
Joined: 13th Dec 2005
Location: DBP - Scouseland
Posted: 7th Nov 2006 00:29 Edited at: 7th Nov 2006 00:38
I'm holding back on the PhysX at the moment. I'm not sure how it's going to hold up against nVidia's G80 GPU-built-in 'Quantum' physics engine (not really anything to do with quantum physics at all, as it happens!). M$ has decided to behave themselves again with their WPA component-swapping malarkey, so I suppose we'd be best seeing how everything shapes up for Vista/DX10 (January 30th 2007 consumer release definitely a go, now) before making any rash decisions!

Personally speaking, there's little use to me for Physics hardware at the moment apart from the possible handful of games. Nevertheless, it's something I'm definitely after, hardware-wise, for the future. I'm not likely to get into DB Physics code until I can kick myself back into gear on the vanilla stuff first! Too much 'Guild Wars'...

Anyone know if nVidia's physics coding is likely to be similar to, or interchangeable with, Ageia's?

I have an XP3000+, 1.5gb DDR333, a 6600GT and I'm programming 3k text-based exe's?!
Kenjar
19
Years of Service
User Offline
Joined: 17th Jun 2005
Location: TGC
Posted: 7th Nov 2006 00:42
No it isn't. No first-gen tech is ever worth it.

I lay upon my bed one bright clear night, and gazed upon the distant stars far above, then I thought... where the hell is my roof?
Miguel Melo
19
Years of Service
User Offline
Joined: 8th Aug 2005
Location:
Posted: 7th Nov 2006 01:29 Edited at: 7th Nov 2006 01:33
... Just look at Creative's Game Blaster.

[edit] Actually, I think I meant "3D Video Blaster" or something. Anyway, that first 3D-accelerated video card from circa 1995.

I have vague plans for World Domination
Steve J
18
Years of Service
User Offline
Joined: 22nd Apr 2006
Location: Vancouver, Washington
Posted: 8th Nov 2006 02:52
Look at the first Sound Card! It pwned! Proved you wrong=P

http://phoenixophelia.com

Steve J, less, and less Controversial!
warship45
20
Years of Service
User Offline
Joined: 24th Jul 2004
Location: uk
Posted: 8th Nov 2006 15:59
ok thanks

dbpro plugins
www.0z0.co.uk
David R
21
Years of Service
User Offline
Joined: 9th Sep 2003
Location: 3.14
Posted: 8th Nov 2006 20:31
Also, I did not realise this, but it would appear that any app using DarkPhysics requires the Ageia PhysX 'framework' to be installed in order to run (whether using a PPU or not)

A definite off-putting aspect

Codelike
18
Years of Service
User Offline
Joined: 13th Dec 2005
Location: DBP - Scouseland
Posted: 9th Nov 2006 01:41 Edited at: 9th Nov 2006 01:47
Article:
http://theinquirer.net/default.aspx?article=31385

The other interesting thing is that Intel Core 2 Quads (& higher) will be able to use one (or more) of the cores as a dedicated hardware PPU. Presumably the situation with AMD will be identical for the end user. Software-wise, Microsoft's planning a 'Direct Physics' and ODE/OPAL is also available. Although they're along the right lines, Ageia could likely be a dead end in some regards as their API is not likely to be the one that ends up on most physics-enabled desktops in the next 2-5 years.

For software, I'd put 'Direct Physics' as the choice most likely to count. Hardware, either utilise multicore or nVidia GPU+PPU.

I have an XP3000+, 1.5gb DDR333, a 6600GT and I'm programming 3k text-based exe's?!
Kenjar
19
Years of Service
User Offline
Joined: 17th Jun 2005
Location: TGC
Posted: 9th Nov 2006 02:18
And of course both nVidia and ATI are working on their own integrated PPUs. The next-gen nVidia cards are supposed to have dual cores. I've never noticed anyone say that both would be dedicated GPUs; it's entirely possible that one will take the role of PPU.

I lay upon my bed one bright clear night, and gazed upon the distant stars far above, then I thought... where the hell is my roof?
warship45
20
Years of Service
User Offline
Joined: 24th Jul 2004
Location: uk
Posted: 12th Nov 2006 12:02
Also, with the PPU on board the graphics card you will get better performance, as the signals have less distance to travel, so they get there faster and all in all you get faster processing power. That is why dual-core processors are used more than a system with, say, two processors at each end of the board.

dbpro plugins
www.0z0.co.uk
Raven
19
Years of Service
User Offline
Joined: 23rd Mar 2005
Location: Hertfordshire, England
Posted: 12th Nov 2006 15:26
The Inquirer is like a tabloid for the computer industry.

Let's pick apart what they've said, shall we...

Quote: "The other interesting thing is that Intel Core 2 Quads (& higher) will be able to use one (or more) of the cores as a dedicated hardware PPU."


A CPU isn't capable of the same sort of calculation speed as a dedicated PPU or GPU on specific operations.

An Intel Core 2 Duo E6700 is capable of ~650 MFLOPS on physics operations, compared to ~1.2 GFLOPS for the Ageia PPU. Why do you think that when you're doing nothing but physics operations you can achieve 10,000 objects at once at ~30fps on the CPU, but ~90fps on the PPU? So sure, letting a single core take on the work of a dedicated RISC processor is obviously going to be a good solution.
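A quick back-of-envelope check of those figures (a sketch; all the numbers below are the claims from this post, not measurements) shows that raw FLOPs alone only predict about 55fps on the PPU, so the claimed ~90fps would have to come from the PPU's specialised operations doing more useful work per FLOP:

```python
# Back-of-envelope check of the figures above. The ~650 MFLOPS,
# ~1.2 GFLOPS, 10,000-object and ~30fps numbers are the poster's
# claims, used purely for illustration.
CPU_FLOPS = 650e6    # claimed Core 2 Duo E6700 physics throughput
PPU_FLOPS = 1.2e9    # claimed Ageia PPU physics throughput
OBJECTS = 10_000
CPU_FPS = 30         # claimed frame rate on the CPU

# FLOPs per object per frame, assuming the CPU run is compute-bound:
flops_per_object = CPU_FLOPS / (OBJECTS * CPU_FPS)

# Frame rate that the PPU's raw FLOPs alone would sustain:
ppu_fps = PPU_FLOPS / (OBJECTS * flops_per_object)
print(round(ppu_fps))  # 55
```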

Quote: "Presumably the situation with AMD will be identical for the end user."


Yeah, perhaps whoever wrote the article should read up on the differences between AMD and Intel processors.

Quote: "Software-wise, Microsoft's planning a 'Direct Physics' and ODE/OPAL is also available."


True, Microsoft have been thinking of adding a physics API to their DirectX package; however, this isn't for a while. They are also rewriting all of the packages from scratch, to provide better cross-platform support.

Quote: "Although they're along the right lines, Ageia could likely be a dead end in some regards as their API is not likely to be the one that ends up on most physics-enabled desktops in the next 2-5 years."


Ageia bought out Meqon, who have provided stable game physics for the past 5 years that I know of. Although they haven't been the API of choice for PC games, sorry to say, but the Playstation 3 has the Ageia PPU in it. To make cross-platform games easier, it is without a doubt a fact that Ageia will only grow in terms of how many games are developed for it.

PhysX will also support Effects Physics (GPU physics through Shader 3/4), just as HavokFX currently does, within the next few releases.
Performance enhancements from the GPU over the PPU are minimal when we are talking about the 7800, X1900, or 8800.
But consider this:

Ageia PPU £180
7800 GPU £250-300
X1900 GPU £250-300

Add to this that you will NEED a 2x PCI-E 16x board plus a PSU capable of running 2x GPUs @ 240W just for those cards!!

Compare that to the PPU, which requires a PCI v2.1 slot plus 60W.

Quote: "For software, I'd put 'Direct Physics' as the choice most likely to count. Hardware, either utilise multicore or nVidia GPU+PPU"


NVIDIA and ATI when they talk about Multi-Core mean it in the same sense that Sony do when they talk about the Cell processor.

Single Processing Core, Multiple ALU/VMX Units.

This is completely different to what Intel, AMD and IBM mean when they talk about multi-core, which is a REAL additional processing core.

Quote: "And of course both nVidia and ATI are working on their own integrated PPUs. The next-gen nVidia cards are supposed to have dual cores. I've never noticed anyone say that both would be dedicated GPUs; it's entirely possible that one will take the role of PPU."


They're not adding integrated PPUs; as part of the Shader Model 4.0 specification there is more dynamic branching. This allows SM4 shaders to run like C programs rather than requiring a specific ASM design.

NVIDIA G80 cards have dual GPUs: one for fixed-function and one for unified shader processing.

Intel Core 2 Duo E6400, 512MB DDR2 667MHz, ATi Radeon X1900 XT 256MB PCI-E, Windows Vista Business / XP Professional SP2
David R
21
Years of Service
User Offline
Joined: 9th Sep 2003
Location: 3.14
Posted: 12th Nov 2006 16:13 Edited at: 12th Nov 2006 16:17
Quote: "A CPU isn't capable of the same sort of calculation speed as a dedicated PPU or GPU with specific operations.
"


Yes, but surely if you have a completely free CPU, it can do the exact same job as a PPU [at the same speed, as] a giant great dedicated calculator. That's effectively all the PPU is.

The reason a CPU isn't as fast as a PPU in general cases is that it is doing other things besides the calculations; but if it is solely dedicated, it could definitely equal or even exceed the capabilities of a measly PPU

EDIT:
Quote: "They're not adding integrated PPUs,"


Just out of interest, how the hell do you know? I mean, yeah, such power is a required piece of the Shader Model 4-based design for a GPU, but this doesn't rule out the possibility that they could be developing a PPU-enabled GPU - what you've said is pure assumption and speculation.

Kenjar
19
Years of Service
User Offline
Joined: 17th Jun 2005
Location: TGC
Posted: 12th Nov 2006 16:43
All becoming moot now: nVidia has just released its 8-series cards, which do indeed have physics built directly into the card. All in all, I'd say the shelf life of the separate PPU card is coming to an end. Hopefully Ageia will team up with a GPU company and make its living that way; if not, it's either moving out of PCs and into consoles like PowerVR did, or it's going to die a slow death. Either way, I can't see people spending £200 on a dedicated PPU card any more. When the 8-series cards come down to around the £200 mark, as they should towards the end of 2007 and early 2008, the humble PPU will have pretty much had it, unless they find a way to make the Ageia PPU card complement the nVidia or ATI engine substitutes.

I lay upon my bed one bright clear night, and gazed upon the distant stars far above, then I thought... where the hell is my roof?
Codelike
18
Years of Service
User Offline
Joined: 13th Dec 2005
Location: DBP - Scouseland
Posted: 12th Nov 2006 17:53 Edited at: 12th Nov 2006 22:08
Quote: "read up on the differences between AMD and Intel processors"


There aren't any, particularly (apart from speed differences, and that can go for chips from the same manufacturer as well), to the 'end user' (i.e. people who haven't got a clue how it does it, just that it does do it!).

Quote: "the Playstation3 has the Ageia PPU in it"


PS3 = not a PC!
I use a PC & want to know which version of PC physics will be the choice between 2008-2011.

Multicore vs. Cell - point taken, as is RISC vs. an Intel (or AMD) core. I wasn't suggesting a core was as capable of as many FLOPS as a dedicated GPU/PPU RISC chip, speed for speed. Just that it'll be another option for the end user.

nVidia physics hardware? Seeing as they've developed their Quantum software, it'd make sense for them to integrate GPU+PPU (P+GPU?) on the same low-end 'graphics card', in some form, as they'd put their entire package on one board for the end user.

Quote: "What NVIDIA is announcing today is an engineering partnership with Havok to develop physics simulation that runs on NVIDIA GPUs and is amplified with SLI"
This suggests that the independent PPU will be redundant, as the GPU will be doing physics.

I have an XP3000+, 1.5gb DDR333, a 6600GT and I'm programming 3k text-based exe's?!
Kenjar
19
Years of Service
User Offline
Joined: 17th Jun 2005
Location: TGC
Posted: 12th Nov 2006 18:22
I hope TGC rewrite DarkPhysics to take advantage of the series 8 physics, to be honest. With luck we'll get away from that god-awful licence agreement that gives Ageia the right to add extortionate fees to any game they see fit. While it is highly unlikely from a PR point of view for them to do so, I certainly don't want to base my game on a technology company with rules like that. DBP has a longer history of operation with nVidia, so I'm keeping my fingers crossed. Apart from anything else, it's more likely that ATI and nVidia will end up, in the end, using cards with a centralised system so that physics will work on either nVidia or ATI PPUs, rather than one single relatively unknown developer's.

I lay upon my bed one bright clear night, and gazed upon the distant stars far above, then I thought... where the hell is my roof?
Codelike
18
Years of Service
User Offline
Joined: 13th Dec 2005
Location: DBP - Scouseland
Posted: 12th Nov 2006 19:40 Edited at: 12th Nov 2006 19:46
Quote: "it's more likely that ATI and nVidia will end up, in the end, using cards with a centralised system so that physics will work on either"


'Direct Physics' (DX10b/DX11?) or ODE/'OpenGL Physics' - when they're fully up & running!

Note that SLi/Crossfire, also, still have to be incorporated into DirectX.

I have an XP3000+, 1.5gb DDR333, a 6600GT and I'm programming 3k text-based exe's?!
Raven
19
Years of Service
User Offline
Joined: 23rd Mar 2005
Location: Hertfordshire, England
Posted: 12th Nov 2006 21:28
Quote: "There aren't any, particularly (apart from speed differences, and that can go for chips from the same manufacturer as well), to the 'end user' (i.e. people who haven't got a clue how it does it, just that it does do it!)."


Actually, yes... there is a BIG difference.
Two processors achieving the same score on a Whetstone FLOPs (FLoating-point OPerationS) test doesn't mean that you can run the same code for rotating a cube in DirectX and expect the exact same performance on the exact same GPU.

There are some very fundamental differences in the processors and how they do things, which make a very large difference in what they can achieve in a real-time situation.

Quote: "PS3 = not a PC!
I use a PC & want to know which version of PC physics will be the choice between 2008-2011."


No, the Playstation 3 isn't a PC (although technically, with the software provided, Sony will probably get it classed as one in Europe)... but given that the Playstation 3 has an Ageia PPU built in to the system, games WILL support the PhysX API in order to take full advantage and allow graphics and AI to be processed independently of the physics.

When games are then translated between systems, PhysX will again be used so that the engine doesn't have to be recoded on several platforms. On x86-compatible and PPC-compatible platforms such as Windows, MacOSX and AmigaOS4, you can expect to see greater performance enhancements from having a PPU installed.

Now take into account that while HavokFX can use Shader 3, which all the new consoles use, developers would have to seriously sacrifice graphics in order to have physics... and honestly, do you believe many developers will do that?

The only platforms HavokFX makes sense on are SLi or CrossFire systems with 1 or more GPUs to spare. Costs aside, there is still only a minority of gamers who have such systems... more to the point, there are even fewer that have Quantum Effects, which can be programmed without ANY physics API; however, these are done using a new technology, which will no doubt be different to ATI's... so they will not be used in a mainstream fashion for a good while yet.

So basically, if the Playstation 3 becomes even as popular as the Xbox 360 is currently, PhysX will become one of the most used physics APIs... and although, yes, Xbox 360 and Windows games using DirectX will possibly use the physics API Microsoft are rumoured to be working on, that is only likely with titles that are exclusive to the DirectX-capable platforms.

Quote: "Multicore vs. Cell - point taken, as is RISC vs. an Intel (or AMD) core. I wasn't suggesting a core was as capable of as many FLOPS as a dedicated GPU/PPU RISC chip, speed for speed. Just that it'll be another option for the end user."


There's nothing to say it can't achieve the same FLOPs speed.
For example, here are two made up chips.

R-x86 CISC 10MHz / 8 VMX Registers / 4-Cycle Per Clock (CPU)
R-PZX RISC 7.5MHz / 32 VMX Registers / 1-Cycle Per Clock (PPU)
(Both will have 2D Floating Point Vector Units to make this easier to explain through example)

Note: every CPU op takes 4 cycles; every PPU op takes 1.
CPU @ 10MHz = 30 Ops per Register per Loop (8 registers) = 240 Ops/Second
PPU @ 7.5MHz = 7.5 Ops per Register per Loop (32 registers) = 240 Ops/Second

So we've established that they have the exact same performance.
(before anyone quotes this all and picks faults, remember this is overly simplified to make a point)

Now both have Floating-Point Units, with a Basic 4 Operations.
Add, Multiply, Subtract and Divide.

However, because the PPU is specialised for physics operations it also has Dot Product, Cross Product, Square Root, and Normalise Operations as well. This is because it's specialised, and not designed for general purpose tasks.

So while they can both do 240 operations per second, in order for the CPU to do what the PPU does - say a Dot Product, for example - this has to be built from the existing operations.

REG[0] = REG[1][1] * REG[2][1] + REG[1][2] * REG[2][2]

So while the PPU only takes 1 Loop to achieve this, the CPU takes 3 Loops. So let's assume that your compiler automatically transfers FP operations entirely to the PPU when present, this code:



Would take 4:240 Ops on the PPU, while on the CPU it would take 8:240 Ops. While both are well within the operation limit they have per second, so you wouldn't see any performance difference, you'll notice it takes the CPU twice as many resources to do the same code. This means it has less headroom, so it can't do as many operations as the PPU can, despite technically having the same FLOP performance.

Obviously this subject is actually far more complex than my example above, but in essence the same principle lies behind the performance differences, and it is why specialised hardware will ALWAYS perform better than generalised hardware in a given area.
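The loop-counting argument above can be sketched as a toy cost model in Python. The two instruction sets are the made-up chips from this post, not real hardware; the point is only that a missing fused dot-product instruction triples the issue cost:

```python
# Toy cost model for the example above: issue slots needed to run a
# program on a generalised FPU (add/sub/mul/div only) versus a
# specialised unit with fused physics ops. Both chip designs are the
# post's fiction, used purely to illustrate the loop counting.
GENERAL_OPS = {"add", "sub", "mul", "div"}
SPECIAL_OPS = GENERAL_OPS | {"dot", "cross", "sqrt", "normalise"}

def cost(program, supported):
    """Count ops, expanding a 2D 'dot' into mul, mul, add where missing."""
    slots = 0
    for op in program:
        if op in supported:
            slots += 1
        elif op == "dot":              # a1*b1 + a2*b2
            slots += cost(["mul", "mul", "add"], supported)
        else:
            raise ValueError(f"unsupported op: {op}")
    return slots

print(cost(["dot"], SPECIAL_OPS))      # 1 loop on the specialised PPU
print(cost(["dot"], GENERAL_OPS))      # 3 loops on the general CPU
```

With the same 240 ops/second budget on both chips, the general chip burns through its budget three times as fast on this workload, which is the headroom difference described above.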

I'm not going to say whether I honestly believe the PPU is worth it, nor will I say 'yes, this is how the industry will go'... because no one truly knows what technology will be used in years to come, or to what degree.

What I will say is that PhysX and the PPU are not just fads that will come and go. Thanks to the Playstation 3, the technology is only likely to grow; but only if the Playstation 3 itself performs well, and provided that Sony don't force too many games to be exclusive to their console.

Also, something else to remember is that the graphics processor will always focus on graphics first and other FP operations second, just as the central processor will always focus on providing a broader processing ability without specialising in anything.

It's the combination of how all of these technologies work together that honestly will make the difference. Remember that PhysX doesn't offload extra work to the PPU; it runs ALL of its floating-point operations on it, freeing up the CPU entirely.
The same goes for GPUs and shaders; however, this doesn't mean they will accelerate ANY FP ops that are done outside of these APIs.

Hence it's so important to have chips that complement each other rather than bottlenecking. This is why, no matter how powerful RAM, GPU, CPU, or PPU get, unless they can all perform together, one aspect will always mean the others can't perform to the best of their abilities.

Have you ever noticed that when you upgrade your processor, your graphics card also seems to improve in performance? This is because at lower resolutions your processor is actually forcing your graphics card to slow down to its level to keep up; this is also why you experience a jittery frame rate in a number of games.

Personally, I'll be happy when each part of the industry sits down and designs its hardware around the others to provide the best-performing solutions, rather than just trying to improve its own despite the lacking abilities of the rest. At the moment it honestly just segregates each aspect, forcing them more and more to be run separately in order to get the best out of each piece of hardware.

Intel Core 2 Duo E6400, 512MB DDR2 667MHz, ATi Radeon X1900 XT 256MB PCI-E, Windows Vista Business / XP Professional SP2
Kenjar
19
Years of Service
User Offline
Joined: 17th Jun 2005
Location: TGC
Posted: 12th Nov 2006 21:43 Edited at: 12th Nov 2006 21:44
What's all this talk about Playstations for? I mean, I assume the vast majority of us are DarkBASIC users, and certainly PC users or they'd not be posting here. Ageia is going to be squeezed out of the PC industry for 2 reasons.

First, its hardware is very expensive. It costs more than most graphics cards.

Secondly, with physics officially being on the books for the next-gen GPUs, why would anyone go out and buy a PPU board at all? It moves physics processing off the CPU entirely and onto the graphics board.

So, seeing that Ageia is in the Playstation 3, which will make them a ton of money, there's little point in them sticking it out in the PC industry unless they team up with another big name, and let's face it, the only other big name for gamers (which the PPU is aimed at) is ATI. So it's pretty much bye bye Ageia as far as us PC programmers are concerned.

I lay upon my bed one bright clear night, and gazed upon the distant stars far above, then I thought... where the hell is my roof?
Codelike
18
Years of Service
User Offline
Joined: 13th Dec 2005
Location: DBP - Scouseland
Posted: 12th Nov 2006 21:55 Edited at: 12th Nov 2006 22:00
Quote: "Actually, yes... there is a BIG difference."


Between AMD & Intel CPUs, yes, however, my original point was...

Quote: "Intel Core 2 Quads (& higher) will be able to use one (or more) of the cores as a dedicated hardware PPU. Presumably the situation with AMD will be identical for the end user. Software-wise, Microsoft's planning a 'Direct Physics' and ODE/OPAL is also available."


In other words, any version, for instance, of Direct Physics that Microsoft introduces will be designed to work the same on both Intel & AMD platforms.

I have an XP3000+, 1.5gb DDR333, a 6600GT and I'm programming 3k text-based exe's?!
Raven
19
Years of Service
User Offline
Joined: 23rd Mar 2005
Location: Hertfordshire, England
Posted: 12th Nov 2006 22:08 Edited at: 12th Nov 2006 22:10
Quote: "What's all this talk about Playstations for? I mean, I assume the vast majority of us are DarkBASIC users, and certainly PC users or they'd not be posting here. Ageia is going to be squeezed out of the PC industry for 2 reasons."


I'm just saying: don't count out how the industry will sway based on the console market. Havok was popular due to the IPs that used it, i.e. Half-Life 2, Unreal Tournament 2003/4.

This generation, Ageia have Sony backing their hardware, which is possibly the biggest name of them all right now.

Quote: "First, its hardware is very expensive. It costs more than most graphics cards."


Ageia BFG PPU - £180 (from Watford.co.uk)
SLi GeForce 7800 GT (using HavokFX) - £240 each (from Watford.co.uk)

Performance-wise, you will REQUIRE a GeForce 7800 or Radeon X1900, or better, in order to achieve the same performance as the PPU. More to the point, you require 2 cards, not one, so that the performance hit doesn't affect the graphics you can do.

Sure, running physics on my GPU would be fine; only, using the HavokFX-enabled Half-Life 2 patch (available in the HavokFX SDK), my X1600 PRO has to run at the same resolution as my 6200 Ultra, otherwise its FPS drops like a brick. More to the point, the physics itself becomes far more glitchy, meaning it's actually quicker and better running it on two cards... so for test 2 I nicked back the X1600 PRO I gave my brother (given both have CrossFire), ran it again, and sure, I saw an improvement and could once again run the game how I wanted; but the physics was still not quite up to the same standard as my CPU was achieving.

Quote: "Secondly, with physics officially being on the books for the next-gen GPUs, why would anyone go out and buy a PPU board at all? It moves physics processing off the CPU entirely and onto the graphics board."


The next generation of GeForce is already here, with the Radeon soon to follow. Again, I wouldn't expect miracles from these cards.

Sure, the 8800GTX is capable of 2x the performance of the 7800GTX; however, something you have to remember is...
a) The 8800GTX is £600 entry-level price (for the cheapest one)
b) It only enhances games using HavokFX (very few use this)
c) If you used it for Physics and Graphics you can expect a graphics performance drop of 60% and above. You NEED a secondary GPU in order to perform the tasks, because the Unified Shader Core actually means EVERYTHING uses the exact same Unit to run and it provides priority to the task that requires the most processing power.

In short, until the physics is a separate unit on the GPU, it's more a way of providing physics within your effects than a way of getting better performance than dedicated hardware.

So things like fire, smoke and water will see a definite advantage, as will destroyable meshes. It won't, however, improve the number of objects that can interact with physics in a given scene.

Also NVIDIA are keeping quiet for the moment about what the Quantum technology actually adds.

[edit]
Quote: "In other words, any version, for instance, of Direct Physics that Microsoft introduces will be designed to work the same on both Intel & AMD platforms."


Work the same, yes. Perform the same, no.
At the end of the day, no one apart from developers cares about it working the same; end users only care about the performance.

Intel Core 2 Duo E6400, 512MB DDR2 667MHz, ATi Radeon X1900 XT 256MB PCI-E, Windows Vista Business / XP Professional SP2
Codelike
18
Years of Service
User Offline
Joined: 13th Dec 2005
Location: DBP - Scouseland
Posted: 12th Nov 2006 22:22 Edited at: 12th Nov 2006 22:37
Quote: "8800GTX is £600 entry-level price"


Dabs has the price as £440.

So if the 8800GTX does do some form of physics, then you'd be getting the non-physics part of the card for an effective £260 (i.e. nVidia's £440 minus Ageia's £180) - a pretty good deal for an all-in-one package at the gaming top end, physics aside. Even better when software takes advantage of the physics & the hardware price comes down.

Quote: "end-users only care about the performance"


Yup, & non-standard equipment will detract from that overall performance - not speed or power, but breadth & ease of use on PCs. Hence the reason we're looking for what will be 'standard'.

I have an XP3000+, 1.5gb DDR333, a 6600GT and I'm programming 3k text-based exe's?!
Kenjar
19
Years of Service
User Offline
Joined: 17th Jun 2005
Location: TGC
Posted: 12th Nov 2006 23:41
Quote: "a) 8800GTX is £600 entry-level price (for the cheapest one)"


The entry-level price for the 8800 series is actually £319.88 for the GTS, which still boasts 640MB of GDDR3 RAM and, of course, all the next-gen bits and bobs. Until I find out what replaces graphics pipelines (shader processing units), I'll not comment on the speed difference. Again, this is if the card is bought from DABS.com.

Anyway, assuming you grab an Ageia card from DABS.COM for £144.88 and a new 7950 for £184.99, you are still spending £329.87. So you might as well get an 8800GTS for £319.88 and sit pretty for a couple of years, knowing your card won't go out of fashion for that period, it's DirectX 10 compatible, and it has physics built right in.
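As a quick check of the bundle arithmetic above (a sketch; prices are the poster's November 2006 DABS figures, used purely for illustration):

```python
# Comparing the two upgrade paths discussed above. Prices are the
# poster's November 2006 DABS.com figures, not current data.
ageia_ppu = 144.88        # Ageia PhysX card
geforce_7950 = 184.99     # GeForce 7950 (graphics only)
geforce_8800gts = 319.88  # 8800GTS (graphics + physics, DX10)

ppu_route = ageia_ppu + geforce_7950
print(f"PPU + 7950: £{ppu_route:.2f}")  # PPU + 7950: £329.87
print(ppu_route > geforce_8800gts)      # True: the all-in-one card is cheaper
```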

Quote: "
c) If you used it for Physics and Graphics you can expect a graphics performance drop of 60% and above. You NEED a secondary GPU in order to perform the tasks, because the Unified Shader Core actually means EVERYTHING uses the exact same Unit to run and it provides priority to the task that requires the most processing power."


Well, the graphics card has several stages per GPU cycle, doesn't it? It's got texturing processors, shader processors and raster processors - all different functions, each with its own hardware processing layer. I assume physics is merely included as a layer, but I won't pretend to understand the technology. All I can say is that from the water-effects tech demo I've seen, and the smoke-particle demo (which had a cloud of smoke bouncing around inside a box, complete with wisps, and mixed a full 3D object controlled by the mouse pointer into a cloud of expanding smoke, again with full collision detection inside the box), I am sure that nothing Ageia have demonstrated comes even close. So there are obviously benefits to having physics on board the graphics card. Also, I read an article a while ago saying that nVidia cards did have dual-core technology. I don't know if this is true or not; there's not enough published information that I've been able to find. But if it is, while there are overheads, it's still entirely possible that their method is faster than Ageia's.

All in all, nVidia's integrated physics has managed to excite me far more than Ageia's did. I believed from the get-go that Ageia's technology was a short-term thing and would not last. I've said on several occasions TGC was unwise to put their faith in the technology. They'd have been better off developing ODE and waiting to see what the next generation of physics would be. There were a lot of people who put their faith in PowerVR as well; they went the console route when bigger PC companies pushed them out. That's exactly what ATI and nVidia are doing right this moment.

I lay upon my bed one bright clear night, and gazed upon the distant stars far above, then I thought... where the hell is my roof?
