
Geek Culture / Modern shader programming

TheComet
Joined: 18th Oct 2007
Location: I`m under ur bridge eating ur goatz.
Posted: 10th Jun 2013 22:38
Everyone who has used DBP or anything more advanced will probably be aware of the two classic shader types: the vertex shader and the fragment shader (or "pixel" shader, as Microsoft likes to call it).

RenderMonkey is a great tool for developing vertex and fragment shaders, but it's almost three generations behind the newest technology now, and there doesn't seem to be a replacement tool for it. AMD gave up on it in 2008.

The graphics pipeline has changed a lot over the years, and there are now many more shader stages. Here is a list of them, in the order they are executed (a couple of minimal shader sketches follow the list):

*Vertex shader - Takes a single vertex and is able to adjust it, transform it into the proper space, and manipulate vertex properties (normal, binormal, tangent, UV coordinates, etc.). When tessellation is active, its outputs become the control points of a patch.

*Tessellation stages - Direct3D 11 (and OpenGL 4) added hardware tessellation, which can subdivide patches of geometry into much finer geometry. It sits between the vertex and geometry shader stages and consists of the following three sub-stages:

Hull shader - The first of the three tessellation stages. Transforms a set of input control points (from the vertex shader) into a set of output control points; the output can differ from the input in both number and content, depending on the transform. A hull shader also outputs patch constant information, such as tessellation factors, for the tessellator and the domain shader.

Tessellator - The second of the three tessellation stages. The tessellator is fixed-function: there is no shader to write and no state to set. It receives all of its configuration from the hull shader (which runs once per patch) and subdivides each patch into a set of points. Hull-shader outputs such as control points and patch constants are passed around the tessellator, straight through to the domain-shader stage.

Domain shader - The third of the three tessellation stages. A domain shader is invoked once for each point generated by the fixed-function tessellator; its inputs are the UV[W] coordinates of that point on the patch, plus all of the hull shader's outputs (control points and patch constants). From these it evaluates the surface and outputs a vertex, defined in whatever way is desired.

*Geometry shader - Runs after tessellation (or straight after the vertex shader when tessellation is disabled). It operates on whole primitives and can generate additional geometry from them, or discard them entirely.

*Rasterization - The rasterization stage converts vector information (composed of shapes or primitives) into a raster image (composed of pixels). During rasterization, each primitive is converted into pixels, while interpolating per-vertex values across each primitive.

*Fragment shader - The last programmable stage of the graphics pipeline. The fragment shader runs once per pixel covered by a primitive, and is used to calculate the final colour of each pixel on the screen.
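
To make the two classic stages concrete, here is a minimal GLSL vertex/fragment pair. This is a sketch of my own for illustration; the attribute and uniform names (aPosition, uModelViewProjection, uDiffuse and so on) are made up, not from any particular engine:

// Minimal vertex shader (GLSL 3.30)
#version 330 core

layout(location = 0) in vec3 aPosition;  // per-vertex position
layout(location = 1) in vec2 aUV;        // per-vertex UV coordinates

uniform mat4 uModelViewProjection;       // assumed to be set by the host program

out vec2 vUV;

void main()
{
    vUV = aUV;                                                 // handed to the rasterizer for interpolation
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0); // transform into clip space
}

// Minimal fragment shader (GLSL 3.30)
#version 330 core

in vec2 vUV;                 // interpolated across the primitive by the rasterizer
uniform sampler2D uDiffuse;  // a texture bound by the host program
out vec4 fragColour;

void main()
{
    fragColour = texture(uDiffuse, vUV);  // final colour of this pixel
}

And for the tessellation stages: OpenGL's name for the hull shader is the "tessellation control shader". A minimal sketch that just passes the control points through and sets constant tessellation factors for a triangle patch (again my own illustration, not tied to any tool):

// Minimal tessellation control ("hull") shader (GLSL 4.00)
#version 400 core

layout(vertices = 3) out;  // the output patch has 3 control points

void main()
{
    // pass each control point through unchanged
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;

    // patch constant data: tell the fixed-function tessellator
    // how finely to subdivide this patch
    if (gl_InvocationID == 0)
    {
        gl_TessLevelInner[0] = 4.0;
        gl_TessLevelOuter[0] = 4.0;
        gl_TessLevelOuter[1] = 4.0;
        gl_TessLevelOuter[2] = 4.0;
    }
}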

Further, there is now something called GPGPU ("general-purpose computing on graphics processing units"), which allows arbitrary computations to be run on the graphics card in parallel. DirectX 11 exposes this as the compute shader. This will be game changing (literally), considering how much more powerful than the CPU the GPU can be when used correctly.
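
OpenGL gained the same capability in 4.3 with its own compute shaders. A minimal sketch that doubles every float in a buffer, just to give a feel for it (my own illustration; the binding point and work-group size are arbitrary choices):

// Minimal compute shader (GLSL 4.30)
#version 430 core

layout(local_size_x = 64) in;             // 64 invocations per work group
layout(std430, binding = 0) buffer Data   // arbitrary binding point
{
    float values[];
};

void main()
{
    uint i = gl_GlobalInvocationID.x;     // one invocation per array element
    values[i] *= 2.0;                     // all elements processed in parallel
}

The host then kicks it off with glDispatchCompute(elementCount / 64, 1, 1), assuming the element count is a multiple of 64.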

-------------------------------------------------------------------

The point of this post was to inform everyone about this, and possibly start a discussion centered around the future of shaders and parallel processing in general.

I, for one, feel that the artistic value of the GPU is decreasing. It's being utilized for things it was never designed for (general processing), and this is beginning to dominate everything (even Java is now moving to the GPU. God help us, now they can garbage collect in parallel).

The CPU is crying tears of boredom now, because all it can do is pass information between RAM and the GPU, and perform I/O operations. The CPU has become the GPU's bitch. There is no need for more powerful CPUs.

What I also see is that there are no tools or any helpful tutorials for this new technology. Currently, you have to interface with OpenGL 3.0+ or DirectX 11 directly in order to do all of this cool stuff, and use Notepad to write your shaders.

Ogre3D is planning on implementing the above in Ogre3D 2.0, which is scheduled to be released about a year from now. http://www.ogre3d.org/forums/viewtopic.php?f=13&t=77133

Those are my thoughts on that. Discuss.

TheComet


Yesterday is History, Tomorrow is a Mystery, but Today is a Gift. That is why it is called "present".
mr Handy
Joined: 7th Sep 2007
Location: out of TGC
Posted: 10th Jun 2013 23:07
Quote: "There is no need for more powerful CPUs."

That's a very false statement! You still need a high-end CPU in addition to a high-end GPU to run some games at ultra HQ.

«Just because you’re unique, doesn’t mean you’re useful»
«If you contributed to the reason for locking, you may now find yourself on moderation, or in extreme cases in the grave»
The Zoq2
Joined: 4th Nov 2009
Location: Linköping, Sweden
Posted: 10th Jun 2013 23:55
Quote: "What I also see is that there are no tools or any helpful tutorials for this new technology. Currently, you have to interface with OpenGL 3.0+ or DirectX11 directly in order to do all of this cool stuff, and use notepad to write your shaders."


I agree with that. I have recently gotten into shaders myself, and the only thing I have found is some web apps which can only do fragment shaders. If you are interested anyway, look here: shadertoy.com
TheComet
Joined: 18th Oct 2007
Location: I`m under ur bridge eating ur goatz.
Posted: 11th Jun 2013 00:16
Another good resource is http://glsl.heroku.com/

That, and the link you posted, use WebGL, which is based on OpenGL ES 2.0. There's a difference between that and desktop OpenGL 2.0.
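
To give one concrete example of the difference: GLSL ES requires fragment shaders to declare a default float precision, while old desktop GLSL doesn't accept precision statements at all, so a shader written for one will often not even compile on the other. A minimal WebGL-style fragment shader (my own sketch):

// GLSL ES 1.00 (WebGL) fragment shader
precision mediump float;  // mandatory in GLSL ES; desktop GLSL 1.10/1.20 rejects this line

varying vec2 vUV;

void main()
{
    gl_FragColor = vec4(vUV, 0.0, 1.0);  // visualise the UVs as colour
}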

There aren't any tools for writing geometry shaders anywhere though.

TheComet


Yesterday is History, Tomorrow is a Mystery, but Today is a Gift. That is why it is called "present".
Dark Java Dude 64
Community Leader
Joined: 21st Sep 2010
Location: Neither here nor there nor anywhere
Posted: 11th Jun 2013 05:14
Quote: "The CPU has become the GPUs bitch. There is no need for more powerful CPUs."
Sadly, it's becoming true! To be honest, at this rate the CPU is possibly going to end up being integrated into the GPU instead of vice versa.
Jimpo
Joined: 9th Apr 2005
Location:
Posted: 11th Jun 2013 07:09
Quote: "Currently, you have to interface with OpenGL 3.0+ or DirectX11 directly in order to do all of this cool stuff, and use notepad to write your shaders."

You can run GPGPU kernels without dealing with OpenGL or DirectX by using OpenCL or CUDA. Though maybe OpenCL needs OpenGL? I haven't worked with it.

Quote: "The CPU is crying tears of boredom now, because all it can do is pass information between RAM and the GPU, and perform I/O operations. The CPU has become the GPUs bitch. There is no need for more powerful CPUs."

It's interesting to see modern CPUs becoming more and more parallel and modern GPUs supporting more and more general-purpose computing. It's almost as if they are becoming one. I think it's anyone's guess where our technology is heading, and I'll be very interested to see where computers end up!

But GPUs aren't the supercomputers people make them out to be. There are many problems out there that don't run well in parallel. And for many that do, the problem can be computed on the CPU in less time than it takes to transfer the data onto the GPU. When I wrote my first GPGPU kernel, the algorithm ran 1000 times faster than the CPU version, and I was amazed at the performance gain. That was, until I timed how long it took to transfer the data from the CPU to the GPU in the first place.

The GPU is also picky with what it wants, and unfortunately, it is far too easy to write GPU code that performs worse than CPU code, even when the potential is there.

I'll stop here because I can probably keep writing about this for a good hour!
