I was wondering if anyone knows the best way to achieve this (or indeed, any way). I have an object shader that outputs both color and a normal/depth texture, but how do I pass both to a full screen shader?
i.e. the full screen shader performs edge detection on discontinuous depth and normal, which I then want to overlay onto the colored render. (Trying to amalgamate them both wound up performing edge detection on the color as well, which, while surreal, wasn't really what I was after.)
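For context, the kind of thing I'm after in the full screen pass is roughly this (a simplified, untested HLSL-style sketch; the sampler names and edge weights are placeholders, and getting the two textures bound at all is the part I'm stuck on):

```hlsl
// Full screen pass: edge detect on the normal/depth target only,
// then darken the color target wherever an edge is found.
sampler ColorMap       : register(s0);  // color render target (placeholder binding)
sampler NormalDepthMap : register(s1);  // normal/depth render target (placeholder binding)
float2  TexelSize;                      // 1 / render target resolution

float4 EdgePS(float2 uv : TEXCOORD0) : COLOR0
{
    float4 centre = tex2D(NormalDepthMap, uv);
    float4 up     = tex2D(NormalDepthMap, uv + float2(0, -TexelSize.y));
    float4 right  = tex2D(NormalDepthMap, uv + float2(TexelSize.x, 0));

    // Discontinuity in normal (xyz) or depth (w) marks an edge.
    float normalDiff = length(centre.xyz - up.xyz) + length(centre.xyz - right.xyz);
    float depthDiff  = abs(centre.w - up.w) + abs(centre.w - right.w);
    float edge       = saturate(normalDiff * 4.0f + depthDiff * 10.0f);

    // Overlay: darken the ordinary color where there's an edge,
    // rather than edge-detecting the color itself.
    float4 color = tex2D(ColorMap, uv);
    return float4(color.rgb * (1.0f - edge), color.a);
}
```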
Is it better to get the object shader to export a structure (i.e. two sets of float4), and if so, how would I go about convincing the screen shader to read them?
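By a structure I mean something along these lines (again just a sketch, assuming HLSL-style multiple render targets via COLOR0/COLOR1; the names are made up):

```hlsl
// Output from the object pass: one struct, two render targets.
// Assumes two render targets are bound when this pass runs.
struct PSOutput
{
    float4 Color       : COLOR0;   // ordinary shaded color
    float4 NormalDepth : COLOR1;   // xyz = normal, w = depth
};

PSOutput ObjectPS(float3 normal  : TEXCOORD0,
                  float  depth   : TEXCOORD1,
                  float4 diffuse : COLOR0)
{
    PSOutput output;
    output.Color       = diffuse;
    output.NormalDepth = float4(normalize(normal) * 0.5f + 0.5f, depth);
    return output;
}
```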
The other way I am trying is to split the shader and set up two cameras, one to render the color image and one to render the edge detection, then merge them afterwards in yet another full screen shader (but (a) that seems like a convoluted way of going about it, and (b) I haven't made it work yet!).
Any assistance or references would be awesome.
Edit: Yeah, just realised I mistyped "deferred" when I was doing a forum search; I've now found what I was after... Thanks.