Quote: "Lol no offence dude, But what year are you living in? Im just about to buy a 24Inch TfT, With a resolution of 1900 x 1200."
That's just not possible; the maximum physical resolution of a 24" TFT is 1600x1200 (square) or 1600x1000 (widescreen)... well, technically it's a little more, but the extra pixels are usually used as an overscan buffer zone.
To physically display 1900x1200 you would need at least a 28" TFT display. It isn't TFT displays that have changed, because they haven't in over 8 years; what has changed is the wider use of built-in video signal processors (or embedded GPUs) that allow the displays themselves to scale a given input signal to the panel.
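Whether a resolution physically fits on a panel of a given size comes down to pixel pitch, and that part is pure geometry. Here's a quick sketch of the calculation (the outputs are just the formula applied, not quoted panel specs):

```cpp
#include <cmath>
#include <cstdio>

// Pixel pitch (mm) a panel would need to fit w x h pixels into a given
// diagonal. Pure geometry: diagonal length in pixels -> millimetres per pixel.
double requiredPitchMm(int w, int h, double diagonalInches) {
    double diagPx = std::sqrt(double(w) * w + double(h) * h);
    return diagonalInches * 25.4 / diagPx;   // 25.4 mm per inch
}

int main() {
    std::printf("1900x1200 on 24\": %.3f mm/px\n", requiredPitchMm(1900, 1200, 24)); // ~0.271
    std::printf("1900x1200 on 28\": %.3f mm/px\n", requiredPitchMm(1900, 1200, 28)); // ~0.316
    std::printf("1600x1200 on 24\": %.3f mm/px\n", requiredPitchMm(1600, 1200, 24)); // ~0.305
}
```

The finer the pitch a factory can actually produce, the smaller the panel a given resolution fits on; that manufacturing limit is what the size argument hinges on.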
In fact, you may have noticed they've been improving their performance quite a bit over the last few years, given that response times have dropped from 8ms to 2ms in performance displays.
Unlike CRT, TFT can't physically be improved without making the technology smaller, something that is much more difficult than you'd imagine. Businesses get around that by simply making larger displays capable of better picture quality, because they're actually physically displaying a signal rather than needing to process it first.
Have you never wondered why your monitor has a "Smoothness" option?
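That option exists because the scaler has to interpolate whenever it's fed a non-native resolution. A toy sketch of the idea, blending sharp nearest-neighbour sampling against smooth linear filtering on a single scanline (the parameter names are mine, not any vendor's):

```cpp
#include <algorithm>
#include <vector>

// Toy scanline resampler: what a monitor's scaler does, in miniature.
// smooth = 0 gives blocky nearest-neighbour, 1 gives linear filtering;
// an OSD "Smoothness" slider tunes exactly this kind of trade-off.
std::vector<float> scaleLine(const std::vector<float>& src, int dstW, float smooth) {
    std::vector<float> dst(dstW);
    float srcW = (float)src.size();
    for (int x = 0; x < dstW; ++x) {
        float pos = std::max(0.0f, (x + 0.5f) * srcW / dstW - 0.5f);
        int i0 = (int)pos;
        int i1 = std::min((int)src.size() - 1, i0 + 1);
        float t = pos - i0;
        float linear  = src[i0] * (1 - t) + src[i1] * t;  // smooth but soft
        float nearest = (t < 0.5f) ? src[i0] : src[i1];   // sharp but blocky
        dst[x] = nearest * (1.0f - smooth) + linear * smooth;
    }
    return dst;
}
```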
Quote: "SLI vs Crossfire- As I understand it, Nvidia SLI currently outperforms ATI Crossfire solutions. In addition, using more than 2+ cards while utilizing SLI is still going to get you more of a performance boost than Crossfire, so you really have nothing to gain from using a Crossfire solution yet."
NVIDIA SLi doesn't outperform ATI CrossFireX, not by a long shot.
SLi vs CrossFireX X2 (plus CrossFireX X4)
ATi has a commanding lead when it comes to multi-GPU performance.
Quote: "Nvidia PhysX- As you know, each Nvidia card in the 200 series (and a few cards below that) all have an included PhysX processor to boost your games dramatically. This is something that ATI cards lack, and won't be getting anytime soon"
Is a PhysX-enabled GPU better for gaming?
Well, it all depends on what sort of game you're playing, doesn't it?
If you can afford an SLi NVIDIA setup and play quite a few PhysX-based games, then yes... it is an incredible piece of technology for physics solutions.
If, however, you want to do it on a single NVIDIA GPU, sooner or later you'll have to choose between graphics and physics.
The real issue is that very few PC games actually rely on PhysX; in fact, only console games using Unreal Engine 3 heavily rely upon it. So the question is: will it ever really be needed?
Quote: "PhysX Scalability- One of the amazing things with PhysX in respect to Nvidia is scalability. When some people think of multiple PhysX processors, they think it's 2X as powerful as one. While this is somewhat true, there's really more to it than that. The PhysX physics engine is built to be dybamically scalable, this means that if you are playing a game with a single PhysX enabled GPU, imagine an explosion that blows a building up into 1000 pieces. We can "oooh and awww" at that thinking it's the best we get, until we utilise a second PhysX enabled GPU. With more PhysX processing ability, you will see that the same explosion that you jut saw using the single GPU, now scales up in quality, possibly giving you 5000 pieces of exploding rubble. This scaling effect is something that really sets PhysX apart from other engines, the more you give it, the more it gives you!"
Yeah, this is all "ooh and ahh", but realistically GRAW and GRAW 2 didn't actually look that much more impressive with PhysX enabled.
In fact, I've yet to see a game where having PhysX-capable hardware made me feel it really changed the game dynamics or visual quality beyond what was already going on.
In fact, only CellFactor and the NVIDIA tech demos actually got me excited about PhysX. Since then, I've seen regular hardware utilised to achieve similar results. That makes me think PhysX is deliberately inhibited when running in software mode in order to make the hardware version look far more appealing.
Don't get me wrong, I'm sure you can do more with hardware physics, especially on a dedicated GPU... but the point is that beyond NVIDIA's own tech demos, whose effects can be achieved via shaders, I've seen nothing as impressive.
In fact, if anything it keeps showing me time and time again that Havok is a far better physics engine, because it is a damn sight quicker without hardware acceleration; plus the GPU physics it offers works regardless of the GPU you use, and can be linked directly to shaders without having to pass anything back to the CPU-side application before outputting to shaders, which you have to do with CUDA.
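For anyone who hasn't written this sort of thing, the naive round trip I'm complaining about looks roughly like this on the host side (a sketch assuming the CUDA runtime API and an OpenGL context with a loader like GLEW; everything beyond the API calls is hypothetical):

```cpp
#include <cuda_runtime.h>
#include <GL/glew.h>
#include <vector>

// Toy physics step run on the GPU via CUDA.
__global__ void stepParticles(float4* pos, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) pos[i].y -= 9.8f * dt * dt;   // placeholder "physics"
}

// Per-frame: compute on the GPU, drag the results back to the CPU,
// then re-upload them so the shaders can finally render them.
void frame(float4* d_pos, GLuint vbo, int n, float dt) {
    stepParticles<<<(n + 255) / 256, 256>>>(d_pos, n, dt);

    std::vector<float4> h_pos(n);                       // the round trip:
    cudaMemcpy(h_pos.data(), d_pos, n * sizeof(float4), // device -> host...
               cudaMemcpyDeviceToHost);

    glBindBuffer(GL_ARRAY_BUFFER, vbo);                 // ...host -> device again
    glBufferData(GL_ARRAY_BUFFER, n * sizeof(float4), h_pos.data(), GL_STREAM_DRAW);
}
```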
Quote: "Nvidia CUDA- CUDA is something that I see being a big proponent of GPU sales in the future. the ability to write programs to directly utilise the GPU is something that has always been inevitable. This is something we are going to see in the future, it's a great way to use your GPU to the fullest all the time!
If you are interested, here is the link for the GTX 295: (from Evga) http://www.evga.com/articles/00446/
It looks like it's currently going for around $509.99, but keep in mind that this GPU was just released very recently (just got the email notification this afternoon) so expect lower prices in the futore as well as some cheap overclocked editions from Evga. For reference, when the GTX 280 was released the superclocked edition averaged for around around $450, and now the superclocked editons are selling for $420. thats a good $30 for something that was pre superclocked. I suspect that the GTX 295 superclocked editions (if there will be any) will eventually sell for around $490, which in my oppinion is DEFINITELY worth the money."
CUDA looks awesome from a developer's perspective, but it's ultimately pointless. Not being funny, but it's just one more damn thing to learn that, frankly, we don't need to.
Yes, it's impressive, but until ATi adopts it, or some standard is reached between ATi and NVIDIA for a common GPU ASM, there is no point in it for real-world game development. Sorry, but there just isn't.
CUDA is definitely a step in the right direction, but as far as game development is concerned, in a market rife with piracy, supporting it means either twice the work to make two different versions of the same program for different hardware... or not supporting ATi hardware at all.
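Concretely, the "twice the work" ends up looking like this: every accelerated effect needs a second, vendor-neutral implementation written and maintained alongside it (a sketch; all names are hypothetical):

```cpp
#include <cuda_runtime.h>

// CUDA path: only runs on NVIDIA hardware.
__global__ void updateParticlesCuda(float* y, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] -= 9.8f * dt;
}

// CPU fallback: the exact same logic, written and maintained a second time.
void updateParticlesCpu(float* y, int n, float dt) {
    for (int i = 0; i < n; ++i) y[i] -= 9.8f * dt;
}

// Dispatch; note 'y' must already live in the right memory space
// (device memory for the CUDA path, host memory for the fallback).
void updateParticles(float* y, int n, float dt, bool hasCuda) {
    if (hasCuda)
        updateParticlesCuda<<<(n + 255) / 256, 256>>>(y, n, dt);
    else
        updateParticlesCpu(y, n, dt);
}
```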
It'd be business suicide to support it right now.