http://www.youtube.com/watch?v=GOpBlYx2H1o&feature=player_embedded#at=100
I can't remember how to do the YouTube embed thing...
maybe this works... {youtube}GOpBlYx2H1o{/youtube}
I came across this while looking into whether or not I can run 4 x GTX 590 cards as an 8-GPU ("octo") SLI setup... I have only just started researching it, as I figure I will be able to purchase an adequate system within 6 months' time.
The specifications I am considering depend on when the 600 series comes out, but I want to use dual-GPU cards, so the 590 is perfect. I know I can couple two of these together even in my current rig, but I want to buy the P6T7 WS SuperComputer:
http://www.asus.com/Motherboards/Intel_Socket_1366/P6T7_WS_SuperComputer/
which can seat four dual-slot cards. However, as these cards are dual-GPU, that essentially makes it 2x2x2x2 SLI, and I am concerned about whether it will work or not.
Another solution I had in my head: instead of a single-computer configuration, why not go for true gigabit networking with server-class network cards to guarantee full bandwidth, and put four single-GPU cards in maybe 2 to 8 systems networked as a render farm? That would essentially give me 4 x 8 x 512 cores, which in simple maths = 16,384 CUDA cores. The question is which configuration is cheaper, both in initial purchase and in power consumption over multiple systems and multiple Core i7s (rough sketch below)... this is going to be an interesting build, and remember I have roughly 6 months of planning for this...
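To put rough numbers on that purchase-vs-power question, here is a minimal back-of-the-envelope sketch in Python. The core counts and board TDPs are the published specs for the GTX 590 and GTX 580; the ~300 W per-machine overhead for CPU, RAM, and drives is only an assumption, so treat the totals as ballpark figures.

```python
# Rough comparison: one big SLI box vs. a render farm of single-GPU cards.
# Core counts and board TDPs are published specs; the per-machine overhead
# (CPU, RAM, drives, PSU losses) is an assumed ~300 W ballpark figure.

GTX590_CORES, GTX590_TDP = 1024, 365   # dual-GPU card (2 x 512 cores)
GTX580_CORES, GTX580_TDP = 512, 244    # single-GPU card
SYSTEM_OVERHEAD_W = 300                # assumed per-machine overhead

# Option A: one machine with 4 x GTX 590
a_cores = 4 * GTX590_CORES
a_power = 4 * GTX590_TDP + SYSTEM_OVERHEAD_W

# Option B: render farm of 8 machines, each with 4 x GTX 580
b_cores = 8 * 4 * GTX580_CORES
b_power = 8 * (4 * GTX580_TDP + SYSTEM_OVERHEAD_W)

print(f"Single box : {a_cores:6d} CUDA cores, ~{a_power} W peak")
print(f"Render farm: {b_cores:6d} CUDA cores, ~{b_power} W peak")
```

On those assumed numbers, the farm gives roughly four times the cores for roughly six times the peak power draw, which is exactly the trade-off I want opinions on.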
Oh, and I currently use Roxio for my video edits, which makes use of my GPU's CUDA cores; at the moment that is just a single 216-core GTX 260 Extreme+, which does... OK. However, I would also like to make use of the massive processing capabilities that CS5.5 will bring, coupled with the capabilities of render farming. The beauty of it is that I only need to buy one licence of Premiere CS5.5, which is cheap, and install the render farm applets on each machine, so I will just need multiple 64-bit Windows 7 licences (cheap, not a problem), heaps of DDR3 RAM, and roughly 4x2TB or 4x3TB RAID hard drives in each machine to cope with the data flow (rough numbers below). However, I am open to suggestions... I am even considering going the full nine yards with SAS hard drives and ECC memory...
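On the drive side, here is a similarly rough sketch of why I am thinking RAID in every box. The 8-bit 4:2:2 uncompressed frame size and the ~120 MB/s single-drive figure are assumptions, and real intermediate codecs will be much lighter than uncompressed.

```python
# Rough data-rate check: uncompressed 1080p vs. single-drive throughput.
# Assumes 8-bit 4:2:2 (2 bytes/pixel) at 25 fps and ~120 MB/s sustained
# from a single 7200 rpm SATA drive; both figures are only estimates.

width, height = 1920, 1080
bytes_per_pixel = 2                     # 8-bit 4:2:2
fps = 25

stream_mb_s = width * height * bytes_per_pixel * fps / 1e6
single_drive_mb_s = 120                 # assumed sustained transfer rate
raid0_mb_s = 4 * single_drive_mb_s      # ideal 4-drive stripe, no overhead

print(f"Uncompressed 1080p stream : ~{stream_mb_s:.0f} MB/s")
print(f"Single 7200 rpm drive     : ~{single_drive_mb_s} MB/s")
print(f"4-drive RAID 0 (ideal)    : ~{raid0_mb_s} MB/s")
```

So a single drive barely covers one uncompressed stream, which is why I am leaning towards a 4-drive array per machine.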
Before I forget to mention: this is solely for video editing in full HD, in some cases beyond 1080p, and possibly some live streaming.
I hope some interesting suggestions come out of this. Aside from being a video rendering farm, I will make the processing available to research: when the systems are not in use I might run Folding@Home and SETI@home on them, and I could also lease the render time out to local universities for students doing animation, short film productions, etc. The possibilities are really endless...
I also found this while looking up that SETI stuff
http://www.nvidia.co.uk/object/gpuventures_uk.html
I may be able to participate in this and get my GPUs direct from Nvidia...
Fire away, guys, and please do not post silly comments like "OMG that's going to cost XXX amount of money". Cost is not a concern here, just excellent planning and detailed system design.
Also, I want to avoid getting servers, as they're expensive and not so flexible. I know I could just get a full blade server with hundreds of CPUs, but they would be outpowered by just one CUDA system, so... let's get thinking.
Please, please, please avoid the silly finance comments; however, I would appreciate consideration of the cost of system components vs. value and quality.
Thanks in advance. Please feel free to ask me for more details.