
DarkBASIC Professional Discussion / Shader Tutorials 101 - Ground Zero

TheComet
16
Years of Service
User Offline
Joined: 18th Oct 2007
Location: I`m under ur bridge eating ur goatz.
Posted: 20th May 2014 22:03 Edited at: 20th May 2014 22:25
(Note @ moderators: Originally I was going to host these externally, but decided it would be better to embed them into TGC forums. I hope the two threads aren't too much of a hassle, if so, you may delete this one and I can append it to the other thread instead.)



TheComet's Shader Tutorial 101

Ground Zero - Understanding The Graphics Pipeline


Synopsis

You will learn the following in this chapter.

* The basics of what a graphics card is, what it does, and how it does it.
* What parts of it we can programmatically manipulate through shader programs.
* When it makes sense to use the GPU and when it makes sense to use the CPU.



What is a graphics card?

A Graphics Processing Unit (GPU) is a specialised electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display.

Unlike a CPU, the architecture of a GPU is highly parallelised. A single GPU contains thousands of cores, with the ability to reach a total processing power of multiple Tflop/s (10^12 floating point operations per second).



You may ask yourself: Why hasn't the CPU been replaced by a GPU yet? On paper, the GPU looks vastly more powerful. The answer is quite simple: some mathematical problems can be solved efficiently in parallel, while others cannot. The CPU is designed to solve sequential problems, while the GPU is designed to solve parallel problems.

For example, think about the code you wrote in your latest project. Each command you typed needs to be processed sequentially, one after another. It wouldn't make sense to try and texture an object that hasn't been loaded yet. It wouldn't make sense to try and render an object when you haven't even opened the window yet. It wouldn't make sense for your enemy to search for a path and at the same time try to follow it, because the path data hasn't finished calculating yet. The path needs to exist before the enemy can follow it.

There are many things in a program that have to happen in a particular order, and there's really no way around this.



There are some things that can be parallelised, though. Let's say your game has thousands of bullets that need to be simulated at the same time, because you're writing the next MMOFPS. The bullets don't have to know about each other; all they have to do is travel a certain distance every loop. This makes it possible to update the position of each bullet in parallel, because they are all completely independent of each other.



Of course, you'd still have to check for collisions after all of the bullets have been updated. Again, it doesn't make sense to check for collisions while you're still updating the bullets, because some of them would have been updated while others haven't been yet. Looking at it from a higher level, you begin to notice that programs are really just a mixture of parallel problems that have to happen in sequence.



With that said, it still wouldn't make sense to calculate the bullets on a GPU. It costs a lot of time to communicate with the GPU, so even though the GPU could potentially simulate a thousand bullets instantaneously, the time it takes to upload all of the data to the GPU, let it do its calculations, and download the results again would be longer than simply doing it directly on the CPU.

We need millions of tasks before it makes sense to use the GPU. That's where drawing objects comes in, because rendering graphics is highly parallelisable.



A Journey Of A 3D Object To The Screen

There are a number of sequential operations required to get an object from memory to the screen. Here, we will examine how exactly this works.

I want you to meet, for lack of a better name, Bob. He is the cutest cube ever created, and was just loaded into memory using the following code.




Well, at least that's what he should look like, but the VRAM isn't concerned with that. All it cares about are the vertices and their attributes, along with maybe a texture lying around somewhere in video memory (if at all).



In VRAM, the object is nothing more than 36 vertices (on an unoptimised object consisting of independent triangles). They aren't even connected with each other; all they have are certain attributes. One such attribute is the position, which is stored as 3 floating point values and tells us where the vertex is located in object space. Other attributes are the vertex normal, diffuse colour, and UV coordinates. These, however, are all optional, and are defined using the object's Flexible Vertex Format (FVF).



You can try this right now if you like. DBP provides you with some tools to access and even edit these vertices after loading an object. The following is an example demonstrating just that. This example is located in the folder 01-fvf-format in the examples.


When running the above program, you will notice that the Flexible Vertex Format (FVF) has a value of 274. This number tells us what attributes are being used, and basically means that each vertex has the attributes position, normal, and UV coordinate. I won't go into more detail on FVF since it's not important yet, but you can check out this link if you want to know more.
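For the curious, 274 is simply the Direct3D FVF flags added together: D3DFVF_XYZ (0x002) + D3DFVF_NORMAL (0x010) + D3DFVF_TEX1 (0x100) = 0x112 in hexadecimal = 274 in decimal, i.e. "position + normal + one set of UV coordinates".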

Since the position attribute requires 3 floats, the normal attribute requires 3 floats, and the UV coordinates require 2 floats, and each float consists of 4 bytes, the total memory size of a single vertex amounts to 32 bytes. This will be the value of vertSize.

Lastly, vertCount tells us the total number of vertices composing the object, which, as mentioned earlier, will be 36 on an unoptimised cube.

When it's time for an object to be drawn, all of its vertices and their attributes are passed to the vertex shader. At this point, Bob is still just a bunch of points located in object space. Getting him to the screen requires the following steps:
* Vertex shader takes all vertices of the object
--> Vertex shader transforms all vertices into world space
--> Vertex shader transforms vertices from world space into view space
--> Vertex shader transforms vertices from view space into projection space
* Rasterizer takes vertices from vertex shader and fills in all of the surfaces. A list of pixels is generated.
* Pixel shader takes list of pixels from rasterizer
--> Pixel shader samples from textures, and applies them to the pixels
--> Pixel shader outputs all pixels to a render target (in this case, the screen)
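To make the vertex shader steps in the list above concrete, here is a minimal HLSL sketch of the three transforms. The matrix and function names are purely illustrative (tutorial 02 below uses a single combined world view projection matrix instead of three separate ones):

    float4x4 matWorld;        // supplied by the engine: object space -> world space
    float4x4 matView;         // supplied by the engine: world space  -> view space
    float4x4 matProjection;   // supplied by the engine: view space   -> projection space

    float4 transformToProjectionSpace(float4 positionObject)
    {
        float4 positionWorld      = mul(positionObject, matWorld);      // place the object in the world
        float4 positionView       = mul(positionWorld,  matView);       // make it relative to the camera
        float4 positionProjection = mul(positionView,   matProjection); // apply the camera's projection
        return positionProjection;
    }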

Here are some illustrations for each of the steps mentioned above. Beginning with the vertices:



The first thing that happens is that the vertex shader transforms all of the vertices into world space. This effectively places Bob into the 3D world at the position, rotation, and scale the programmer specified, which is determined by the DBP commands position object, rotate object, and scale object. Those three commands generate what's known as the world matrix, which is also uploaded so the GPU knows how to transform Bob into world space.



In other words, the vertices in VRAM never change. Even when you position the object, rotate the object, etc., you aren't actually moving the vertices. You're only telling the GPU how the object was transformed. And if you think about it, that's a good thing, because if you were to actually change the vertices in VRAM, the model would distort a little more every time you reposition it, since floating point datatypes only have finite precision.

Next, the GPU will transform all vertices into view space. This effectively places Bob relative to where the camera is located and how it is pointing, which is determined by the DBP commands position camera and rotate camera. Those commands generate what's known as the view matrix, which is also uploaded so the GPU knows how to transform Bob into view space.



When in this space, the position 0, 0, 0 is exactly where the camera is, because the object now uses a coordinate system relative to how the camera is positioned and rotated.

The GPU now does another transformation on all of Bob's vertices, placing him into the camera's projection space. This has the effect of scaling Bob according to how far away or how close he is to the camera (in the case of a perspective projection).



You can think of projection space as being the computer screen, but with a depth buffer.

Here's the entire process of the vertex shader, again:



At this point, the vertex shader has done its job. It outputs the new positions of all of the vertices, and the GPU will do some clipping, discarding any primitives that fall completely outside of the camera's view frustum. This is an optimisation so the pixel shader doesn't have to do as much work.

Then the GPU rasterises the vertices. Here the vertices are finally connected together to form actual shapes, and the correct resulting pixel values are determined.

This is accomplished by sampling the 3D surfaces with a grid (a raster), where the grid has the exact dimensions of the render target:









Now Bob consists of a bunch of pixels, but their colour isn't defined yet. These pixels, just like vertices, have attributes. To list the most important ones: each pixel has a colour and a UV coordinate.

These pixels are passed to the pixel shader.

The pixel shader will go through every pixel and try to determine the final colour. This can include sampling from a texture by using the UV coordinates, or simply generating a colour on the fly.

The pixel shader outputs the pixels to a render target, which is a buffer located in video memory. After that, the render target can be directly output to the screen, or can be used again in another render pass.



And thus, Bob has made it to the screen!

As a DBP programmer, you have the ability to write your own vertex shader programs, which changes how vertices are transformed, and you have the ability to write your own pixel shader programs, which changes how pixels gain their final colour.

3D games push hundreds of thousands of vertices and millions of pixels through this pipeline every single frame. A CPU just would not be able to handle it.



Summary

* The GPU is optimised to solve parallel mathematical problems (such as vertex transformations and pixel calculation).
* We can change the way an object is drawn through vertex and pixel shader programs.



Links

Proceed to the next tutorial: 02 - Writing Your First Shader
Proceed to the previous tutorial here: Master Post

TheComet

Your mod has been erased by a signature
TheComet
16
Years of Service
User Offline
Joined: 18th Oct 2007
Location: I`m under ur bridge eating ur goatz.
Posted: 20th May 2014 22:05 Edited at: 20th May 2014 22:14
TheComet's Shader Tutorial

Ground Zero - Writing Your First Shader


Synopsis

You will learn the following in this chapter.

* The basic syntax of HLSL.
* The structure of a shader program.
* Writing a minimal shader, which will draw an object in a single colour.



Getting used to the syntax

HLSL is an abbreviation for "High Level Shader Language", developed by Microsoft for DirectX. It has what I like to call a "simple C-like syntax". There is no support for pointers or anything fancy, making it a very simple language to pick up.

Just like DBP has its fundamental data types, so does HLSL.

There are scalar datatypes (note: Only the most important ones are listed)


But there are also fancier types, such as vectors. After all, the GPU is designed for 3D math (note: Only the most important ones are listed):


Note that you can basically hang a number between 1 and 4 onto the end of a scalar datatype and it becomes a vector datatype, e.g. there's also an "int3" or a "double2".

Of course, there are also matrices:


The last important datatype is the struct, which allows us to group together primitive datatypes to form a custom type:


You can think of a struct as the equivalent of DBP's "User Defined Types" (UDT):


In HLSL, vector components can be initialised with curly brackets, and can be accessed with dot notation:


Sometimes, you might want to assign a float3 to a float4. This can be done by using a constructor:


One last special and very handy feature of HLSL syntax is the ability to access multiple components at once via dot notation:


Note that it's also possible to write the components in any order, e.g. test1.xz, test1.yz, or even test1.zx. This will produce a float2 datatype containing the two values in the order specified.
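To tie the syntax notes above together, here is a small sketch (the variable names are made up purely for illustration):

    struct Vertex                                 // struct: groups primitive types, much like a DBP UDT
    {
        float3 position;
        float2 uv;
    };

    void syntaxExamples()
    {
        float  scalar   = 1.0;                    // scalar datatype
        float3 position = { 1.0, 2.0, 3.0 };      // vector initialised with curly brackets
        float4 extended = float4(position, 1.0);  // constructor: a float4 built from a float3 plus one value
        float  height   = position.y;             // a single component via dot notation
        float2 swizzled = position.zx;            // several components in any order ("swizzling") gives a float2
    }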



The structure of a shader

So let's examine the bare minimum required to write a functioning shader. For this, example files have been included. If you haven't downloaded them yet, I urge you to do so here.

Go into the folder 02-simple-shader, open the DBPro project and compile and run the program. You should get something like the following:



Go ahead and open the file simple-shader.fx with a text editor. I prefer using Notepad++, with the HLSL syntax highlighting plugin.
At the very top of your shader are various shader constants. Some of these are user-defined, and can be set through DBP by using the commands set effect constant float or set effect constant vector. This gives you a nice way of controlling parameters from within DBP. Other shader constants gain their values from what's known as semantics.

In Tutorial 01, we discussed how the vertex shader transformed Bob into world space, then into view space, then into projection space by using matrices. Where do these matrices come from?

Fortunately, they are actually generated automatically by DBP, and there are a bunch of pre-defined semantics for accessing them, one of them being the following:

Make sure to try and implement the stuff below in your PLAYGROUND folder on your own, so you really understand how it works.
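The declaration in question is a single global matrix tagged with the semantic, something along these lines:

    float4x4 matWorldViewProjection : WORLDVIEWPROJECTION;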



As the name implies, the world, view, and projection matrices have all been multiplied together to form a single matrix, unsurprisingly called the world view projection matrix. If you multiply a vertex by this matrix, you transform it from object space directly into projection space, skipping the intermediate spaces.

By writing the code above, the variable matWorldViewProjection will automatically be assigned the world view projection matrix, because WORLDVIEWPROJECTION is the semantic for said matrix.

For a complete list of semantics, you can look at the official MSDN documentation here. But don't worry, 95% of the time the world view projection matrix is the only matrix you will ever need.



The next thing we need is to consider the data going in and out of the vertex and pixel shader programs.

We know the vertex shader "does things" with vertices. For instance, it can transform vertex positions into different 3D spaces, like it did with Bob.

In our case, all we really need are the position attributes of each vertex as input:
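A minimal version of this input struct, matching the description below, looks like this:

    struct VS_INPUT
    {
        float4 position : POSITION0;   // filled with the vertex position attribute
    };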


As you can see, we're using the pre-defined semantic POSITION0, which automatically reads the position attribute from the current vertex being processed and assigns it to the variable position located in our struct VS_INPUT.

From our vertex shader, we'll want to output the new position of the vertex after transforming it. Again, we'll make a struct for handling that:
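Again matching the description below, the output struct can be as simple as:

    struct VS_OUTPUT
    {
        float4 position : POSITION0;   // the transformed vertex position handed on to the rasteriser
    };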


As you can see, we're again using the pre-defined semantic POSITION0, which this time marks the transformed position the vertex shader outputs for the rasteriser.

You'll notice the semantics have a trailing "0". A vertex can carry several attributes of the same type, each under its own index. The position of the vertex is written to index 0. Theoretically it is possible to have a vertex with multiple positions, but that's rarely useful, so we only read from index 0.

Next, the input of the pixel shader program. Since this is the simplest of shaders, there is nothing to input, so we'll just leave it blank:


Last but not least, we need to define the output values of the pixel shader program. In almost all cases, the only thing you'll ever want to output is the final colour.
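A sketch of that output struct (the member name follows the convention used in this tutorial):

    struct PS_OUTPUT
    {
        float4 colour : COLOR;   // the final colour written to the render target
    };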


Again, note the use of the COLOR semantic, which assigns the output colour attribute of the render target to the variable colour.

Now it's time to write the vertex shader program. Here it is.
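In line with the structs above, the vertex shader program boils down to a handful of lines (a sketch; the file shipped with the examples is the reference):

    VS_OUTPUT vs_main(VS_INPUT input)
    {
        VS_OUTPUT output;
        // object space -> projection space in a single multiplication
        output.position = mul(input.position, matWorldViewProjection);
        return output;
    }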


This little section of code is where all of our vertex manipulation happens. In our case, we transform all vertices into projection space, as discussed in Tutorial 01, by multiplying each vertex by the world view projection matrix.

Very important to understand: The vertex shader is executed once for every vertex of the object. This means that if your object has 36 vertices, vs_main is called 36 times, and every time it's called, the variable input.position contains the position of the active vertex. You may have guessed it: Yes, all 36 instances of vs_main are executed in parallel, one on each core of the GPU. Since the GPU has thousands of cores, even an object with tens of thousands of vertices will only take a fraction of a microsecond to compute.

Next up, we need a pixel shader. Here it is.
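A sketch of the pixel shader (again, the example file may differ slightly; here it simply takes no input at all):

    PS_OUTPUT ps_main()
    {
        PS_OUTPUT output;
        output.colour = float4(0.0, 1.0, 0.0, 1.0);   // opaque green for every pixel
        return output;
    }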


This little section of code is where all of our pixel manipulation happens. In our case, we simply set every pixel to have the colour green.

Very important to understand: The pixel shader is executed once for every pixel. This means that ps_main will be called once for every pixel on the screen that is part of that object. If you had a 1920x1080 display, and the object were close enough to the camera to cover it entirely, ps_main would be called 1920x1080=2073600 times. You may have also guessed this one: Yes, all 2073600 instances of ps_main are executed in parallel, one on each core of the GPU.

Obviously, the GPU may not have 2073600 cores, in which case the invocations are simply queued up so that as many of them as possible are running at any one time. The order in which this happens is undefined.

The very last thing to do is to tell DBP how to compile and execute the vertex and pixel shader programs. This is done by defining a technique and a number of passes.
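Such a technique block looks roughly like this (the technique and pass names are arbitrary):

    technique SimpleShader
    {
        pass Pass0
        {
            VertexShader = compile vs_1_1 vs_main();
            PixelShader  = compile ps_1_1 ps_main();
        }
    }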


Here, you are looking at a technique containing a single pass, which compiles the vertex and pixel shader programs above using shader model 1.1. Basically, the lower the shader model version, the more hardware you'll be able to support, but the less shader features you'll be able to use.

DBP supports up to shader model 3.0.



Summary

* A vertex shader is used to transform an object from object space into another space (most commonly projection space), and can also be used to manipulate vertex attributes.
* A pixel shader is used to manipulate the colour of an object's surface at a per-pixel basis.
* A semantic can be used to access important shader constants such as vertex attributes or transformation matrices, and assign them to variables.
* Vertex and pixel shader programs are executed in parallel: A vertex shader program for every vertex, and a pixel shader program for every pixel on the screen.

Here is the entire shader from above:
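Assembled from the sketches above, the whole thing fits comfortably on one page (a reconstruction; the shipped simple-shader.fx is the authoritative version):

    float4x4 matWorldViewProjection : WORLDVIEWPROJECTION;

    struct VS_INPUT  { float4 position : POSITION0; };
    struct VS_OUTPUT { float4 position : POSITION0; };
    struct PS_OUTPUT { float4 colour   : COLOR;     };

    VS_OUTPUT vs_main(VS_INPUT input)
    {
        VS_OUTPUT output;
        output.position = mul(input.position, matWorldViewProjection);   // object space -> projection space
        return output;
    }

    PS_OUTPUT ps_main()
    {
        PS_OUTPUT output;
        output.colour = float4(0.0, 1.0, 0.0, 1.0);   // every pixel becomes opaque green
        return output;
    }

    technique SimpleShader
    {
        pass Pass0
        {
            VertexShader = compile vs_1_1 vs_main();
            PixelShader  = compile ps_1_1 ps_main();
        }
    }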




Links

Proceed to the next tutorial: 03 - Vertex Shader Coordinate System
Proceed to the previous tutorial here: 01 - Understanding The Graphics Pipeline

TheComet

Your mod has been erased by a signature
TheComet
16
Years of Service
User Offline
Joined: 18th Oct 2007
Location: I`m under ur bridge eating ur goatz.
Posted: 20th May 2014 22:07 Edited at: 20th May 2014 22:15
TheComet's Shader Tutorial

Ground Zero - Vertex Shader Coordinate System


Synopsis

In tutorial 02 you wrote your very first shader from scratch, and have a basic understanding of how it works. You will learn the following in this chapter.

* Why coordinates use 4-dimensional vectors and not 3-dimensional vectors



Coordinate system in the vertex shader

You will have noticed that the POSITION0 semantic was assigned to a variable of type float4. Why? We're working with 3D coordinates, so why are positions 4-dimensional?

Shaders actually don't use Cartesian coordinates (x, y, z), because they fall short for a number of reasons. For example, what happens when you project an object using a perspective projection matrix, but the object is exactly 90 degrees to the left of the camera? The resulting vertex positions would be projected to infinity. As we know, though, computers can't handle infinite numbers, and you'd get unpredictable behaviour (such as the object drawing over places it shouldn't).

To solve this problem, mathematicians came up with an ingenious, alternate coordinate system for handling infinite numbers with finite components. This coordinate system is what's known as the homogeneous coordinate system, which has one extra component w:


Now don't get scared, it's really quite trivial to understand. In order to convert a homogeneous coordinate back to a Cartesian coordinate, all you need to do is divide its x, y, and z components by its w component:
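In HLSL terms, the conversion is a single division (a small helper function for illustration):

    float3 toCartesian(float4 homogeneous)
    {
        // (x, y, z, w) in homogeneous coordinates corresponds to (x/w, y/w, z/w) in Cartesian coordinates
        return homogeneous.xyz / homogeneous.w;
    }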


In fact, to make things even easier, the POSITION0 attribute of the vertex always has its w component set to 1.0. And we all know that dividing anything by 1.0 won't change the value, so you can effectively ignore the w component and pretend that homogeneous.xyz is Cartesian. Pretty neat, huh?

But what's the point then? Isn't that the same as Cartesian?

Not exactly, because this makes it possible to define points in infinity. For example:
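Something like this (an illustrative value):

    float4 pointAtInfinity = float4(1.0, 0.0, 0.0, 0.0);   // note the w component: 0.0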



Oh oh, we've set w to 0.0. If you look back a bit on how to convert to Cartesian, you'll notice that we're dividing x, y, and z by 0. Believe it or not, this is actually a valid coordinate. It defines a point located infinitely away, and we're doing that without using infinite numbers. This is how the GPU handles correct projections without causing undefined behaviour.

One experiment you can do to see this for yourself is to add the following to your vertex shader. The example 03-homogeneous-coordinates demonstrates this behaviour. If you haven't downloaded the examples, you can do so here.
Make sure to try and implement the stuff below in your PLAYGROUND folder on your own!
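A sketch of what the modified vertex shader does, based on the description below (the shipped example file is the reference): it overrides the incoming w of 1.0 with 0.5 before transforming.

    VS_OUTPUT vs_main(VS_INPUT input)
    {
        VS_OUTPUT output;
        input.position.w = 0.5;   // was 1.0: the homogeneous point (x, y, z, 0.5) is the Cartesian point (2x, 2y, 2z)
        output.position = mul(input.position, matWorldViewProjection);
        return output;
    }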


So now, instead of dividing each component by 1.0, the GPU divides by 0.5. This will cause your object to scale to twice its original size:





Summary

* Vertex shaders use the homogeneous coordinate system, which allows the GPU to define and handle points located in infinity.



Links

Proceed to the next tutorial: 04 - Vertex Normals
Proceed to the previous tutorial here: 02 - Writing Your First Shader

TheComet

Your mod has been erased by a signature
TheComet
16
Years of Service
User Offline
Joined: 18th Oct 2007
Location: I`m under ur bridge eating ur goatz.
Posted: 20th May 2014 22:07 Edited at: 20th May 2014 22:16
TheComet's Shader Tutorial

Ground Zero - Vertex Normals


Synopsis

* What normals are.
* How to have fun with normals.



What are normals?

You may have heard of these "normals" here and there. A surface normal is a unit vector (that is, a vector with a length of exactly 1) that is perpendicular to the surface.



The vector tells us the orientation of the surface by pointing directly away from it.

Since vertex shaders process vertices and not surfaces, each vertex is given a pre-calculated normal based on the average of the surface normals surrounding it. This new normal is called the vertex normal, and can be accessed via the semantic NORMAL0.



Normals allow you to do some cool lighting effects, but that's something for a later tutorial. For now, just know that they exist.



How can I have fun with normals?

Open the example 04-making-models-fat. If you haven't downloaded the examples, please do so here.
Make sure to try and implement the stuff below in your PLAYGROUND folder on your own!

One cool thing you can do with vertex normals is use them to evenly change the surface area of an object. That is, you can make your character fatter/thinner.

Try changing your vertex shader to the following.

Add this to the very top of your shader, underneath where the projection matrices are declared:
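That is, a single user-defined constant (the default value here is an assumption; in DBP you would drive it with set effect constant float):

    float fatness = 0.0;   // how far each vertex gets pushed along its normal, in object units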


We can gain access to the object's normals through the NORMAL0 semantic by adding this to the vertex shader input struct:
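In other words, one extra member next to the position:

    struct VS_INPUT
    {
        float4 position : POSITION0;
        float3 normal   : NORMAL0;    // the vertex normal, pointing away from the surface
    };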


And now change your vertex shader to the following:
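A sketch of that vertex shader, matching the explanation further down:

    VS_OUTPUT vs_main(VS_INPUT input)
    {
        VS_OUTPUT output;
        float4 displaced = input.position;
        displaced.xyz += input.normal * fatness;   // move the vertex "away" from the surface by fatness units
        output.position = mul(displaced, matWorldViewProjection);
        return output;
    }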


Then, in DBP, simply load and apply the shader to a more complex object:


The results can be quite funny. The following shows the same model with different fatness factors:



As you can see, manipulating vertices with shaders is extremely easy and fast.

If you don't understand how this works, let me give you some help. input.position is the location of the current vertex. We also have access to a directional vector, the vertex normal, which tells us which direction is "away" from the object's surface (perpendicular to it). If we multiply fatness with input.normal, all we do is change the length of the normal vector. By adding input.position and input.normal*fatness together, we're moving the vertex "away" from the object by exactly the distance fatness specifies.

By doing this to every vertex, the skin of the model can be shrunk or grown.



Summary

* A vertex normal is a directional unit vector, which is the average of the normals of all surfaces it connects. In layman's terms: "It points away from the object's surface".
* A vertex normal is a unit vector: this means it has a length of exactly 1.0.



Links

Proceed to the next tutorial: 05 - UV Coordinates
Proceed to the previous tutorial here: 03 - Vertex Shader Coordinate System

TheComet

Your mod has been erased by a signature
TheComet
16
Years of Service
User Offline
Joined: 18th Oct 2007
Location: I`m under ur bridge eating ur goatz.
Posted: 20th May 2014 22:08 Edited at: 20th May 2014 22:17
TheComet's Shader Tutorials

Ground Zero - UV Coordinates


Synopsis

You will learn:

* What UV coordinates are.
* How they are passed to the pixel shader.
* How colours are encoded and why they are also 4-dimensional.



What are UV coordinates?

In tutorial 01 we discussed vertex attributes. Another attribute a vertex can have is what's known as a UV coordinate.

When texturing an object, we have to somehow remember how the texture was "wrapped" onto the object. This is done by saving where a vertex was located on a texture as an attribute of the vertex itself.

In DBP, you usually work with pixel coordinates. If you load a 256x256 image, you'd have to use the coordinates 128,128 to draw to the very middle of the image.

GPUs don't do this because textures can have varying resolutions. Instead, the GPU defines the top left corner to be at 0.0, 0.0, and the bottom right corner to be at 1.0, 1.0. If you wanted to draw in the very middle of the image, you'd have to do it at 0.5, 0.5, which is exactly half of 1.0, 1.0.



In order to texture a 3D object, it needs to be "unwrapped" so it becomes 2-dimensional. The following is an example with a sphere:



This makes it easy to slap an image onto it.

So when rendering an object, every vertex knows where it was located on the texture, and stores this in its UV coordinate attribute.

You can access a vertex's UV coordinate with the semantic TEXCOORD0.

Notice the "0" in "TEXCOORD0". You can apply more than one texture to the same object, and every texture can be mapped differently to the object. The second texture could be accessed through TEXOORD1, and so on.



Let's see some shader code!

Go ahead and open the folder 05-uv-coordinates-and-rainbows. If you haven't downloaded the examples yet, do so here.
Make sure to try and implement the stuff below in your PLAYGROUND folder on your own!

Texture coordinates are a little special. They are an attribute of vertices, but they aren't used by the vertex shader. The pixel shader is what needs them. However, they still need to be extracted by the vertex shader and passed on to the pixel shader, because only the vertex shader has access to vertices.

In order to do this, we modify the vertex shader input and output structs to include the new semantics:
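Both structs gain a TEXCOORD0 member (a sketch; a float2 holds the u and v values):

    struct VS_INPUT
    {
        float4 position : POSITION0;
        float2 uv       : TEXCOORD0;   // read from the vertex's UV attribute
    };

    struct VS_OUTPUT
    {
        float4 position : POSITION0;
        float2 uv       : TEXCOORD0;   // handed to the rasteriser, and from there to the pixel shader
    };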




Additionally, the pixel shader input struct also needs to read the information from the vertex shader:
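So the pixel shader input struct, previously empty, now carries the interpolated UV coordinate:

    struct PS_INPUT
    {
        float2 uv : TEXCOORD0;   // interpolated across the surface by the rasteriser
    };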


With these structs, the data flows as follows:
vertex UV attribute -> vertex shader -> rasteriser -> pixel shader

Now, modify the vertex shader to read the texture coordinates from the vertices and output them for the pixel shader. This is as simple as copying the values from input to output:
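A sketch of the modified vertex shader:

    VS_OUTPUT vs_main(VS_INPUT input)
    {
        VS_OUTPUT output;
        output.position = mul(input.position, matWorldViewProjection);
        output.uv = input.uv;   // simply copy the UV coordinate through
        return output;
    }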


Awesome!

Now the pixel shader has to make use of the new input values it can receive. Right now, let's just set the colour of the object according to the UV coordinates:
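For example, something like this (it matches the colours discussed below: u becomes red, v becomes green, blue stays fixed at 1):

    PS_OUTPUT ps_main(PS_INPUT input)
    {
        PS_OUTPUT output;
        output.colour = float4(input.uv.x, input.uv.y, 1.0, 1.0);   // u -> red, v -> green, blue = 1, fully opaque
        return output;
    }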


Run the code and you should get something like this:





Colours on the graphics card

So why did that happen? Let's examine. What do we know?

1) UV coordinates will always be between 0.0 and 1.0.

Good. And now you can probably guess that the colours on the GPU are also defined between 0.0 and 1.0 instead of 0 and 255 *gasp*. What a surprise!

That's right. For the GPU, a value of 1, 1, 1 is completely white, while 0.5, 0.5, 0.5 is grey, 1, 0, 0 is red, etc.

In the example, we map the x,y of the UV coordinates directly to the r and g colour values.

Since the top-left corner of the cube is located at UV coordinates 0,0, this will tell the GPU to colour it (0,0,1), so blue. The bottom-right corner is located at UV coordinates 1,1, which tells the GPU to colour it (1,1,1), making it white. We can see that for all of the pixels in between the vertices, it interpolates the UV coordinates, resulting in nice, smooth gradients.

This proves that all values the pixel shader reads in from the vertex shader are smoothly interpolated between one another.

One small detail is that colours are also 4-dimensional. The last value defines the alpha channel, where 0 is totally transparent and 1 is totally opaque.

And just like with coordinates, each component can also be accessed via dot notation:

coordinate.xyzw
colour.rgba


Where r is red, g is green, b is blue, and a is alpha.



Summary

* The vertex shader needs to pass texture coordinates to the pixel shader.
* During rasterisation the UV coordinates are linearly interpolated between vertices for each pixel.
* UV coordinates are always between 0.0 and 1.0.
* Colours on the GPU are handled as four floating point values, each between 0.0 and 1.0.



Links

Proceed to the next tutorial: 06 - Sampling a Texture
Proceed to the previous tutorial here: 04 - Vertex Normals

TheComet

Your mod has been erased by a signature
TheComet
16
Years of Service
User Offline
Joined: 18th Oct 2007
Location: I`m under ur bridge eating ur goatz.
Posted: 20th May 2014 22:09 Edited at: 20th May 2014 22:17
TheComet's Shader Tutorial

Ground Zero - Sampling a Texture


Synopsis

Up until now, we've only generated pretty rainbows out of our objects. Here you will learn the following.

* How to declare texture types.
* What a sampler is.
* How to use samplers.



Declaring texture types

This tutorial is demonstrated in the example 06-ambient-shader. If you haven't downloaded the examples, you can do so here.
Make sure to try and implement the stuff below in your PLAYGROUND folder on your own!

In order to make use of a texture, two things need to be done.

1) The shader needs to know it exists.
2) A sampler needs to be set up so pixel information can be read from the texture.

In DBP, if you use multiple texture stages, the order in which you declare your textures is the order in which the stages are used.

To declare a texture, we simply have to use the texture datatype:
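A declaration following the naming convention described below (the ResourceName annotation is what the next paragraph refers to):

    texture texDiffuse
    <
        string ResourceName = "";   // empty string: use the texture DBP applied to the object
    >;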


An empty string for the resource name tells the shader compiler to use the resource name of the default texture.

A convention I like to follow is to prefix everything with what it is. If you don't do this, things can get pretty confusing and messy further down the road. You may have already noticed that the world view projection matrix has the prefix "mat" for "matrix". Here I use "tex" for "texture".

So again, if in DBP you were to texture your object with the following:


The following applies to declarations of textures in shaders:
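For instance, if the DBP code applied two textures to the object (stages 0 and 1), the shader would declare them in the same order (the second texture here is purely hypothetical):

    texture texDiffuse;   // declared first  -> bound to texture stage 0
    texture texDetail;    // declared second -> bound to texture stage 1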




How to set up a sampler

Now that the shader knows about the textures, we have to set up a sampler so we can read pixel information from them.

A texture has a limited number of pixels, so what happens when a UV coordinate tries to get information from "in between" the pixels in the texture? This is where samplers come in.

A sampler effectively gives a texture infinite resolution by interpolating between the texture's pixels whenever information "in between" them is requested. The resulting value is an average of the surrounding pixels.

There are different types of samplers, and different ways to configure a sampler. We'll be using a 2D sampler (because our texture is 2-dimensional), and the default sampling function uses linear interpolation.

Here's how to declare the sampler:
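A typical declaration, referencing the texture from above and using linear filtering (the exact filter states are an assumption):

    sampler2D sampDiffuse = sampler_state
    {
        Texture   = <texDiffuse>;   // the texture this sampler reads from
        MinFilter = Linear;
        MagFilter = Linear;
        MipFilter = Linear;
    };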


NOTE: It's possible to have multiple samplers sampling from the same texture. Usually you'll want to have one sampler for every texture declared.



Using the sampler

The sampler is used in the pixel shader with the command tex2D, and requires the UV coordinates we calculated in tutorial 05.

Try modifying your pixel shader to look like the following:
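Something along these lines (matching the explanation that follows):

    PS_OUTPUT ps_main(PS_INPUT input)
    {
        PS_OUTPUT output;
        float4 diffuse = tex2D(sampDiffuse, input.uv);   // sample the texture at this pixel's UV coordinate
        output.colour = diffuse;                         // write the sampled colour straight to the render target
        return output;
    }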


As you can see, the tex2D command passes the UV coordinate to our sampler sampDiffuse. This causes it to look at the texture it's referencing and sample a colour value from it, at the exact location the UV coordinates specify. The result is saved in diffuse.

After that, the value is simply directly written to the screen through the output struct. You should now get something like the following:



Important: Sampling textures is an expensive process, and should be done sparingly.



Summary

* Textures need to be declared in the shader in the exact order they were applied to the DBP object.
* In order to use the texture, a sampler needs to be set up to reference the texture.
* Samplers interpolate the pixels of a texture, making it possible to access "in between" pixels of the texture.
* Samplers are expensive to use.

Congratulations! You have successfully completed the ground zero tutorial series, and have written one of the most basic shaders, which does what's known as "ambient shading".

The next series will introduce you to some fundamental lighting techniques to make your objects look a lot sweeter. Please do continue!

Links

Proceed to the next tutorial here (WIP, not yet released).
Proceed to the previous tutorial here: 05 - UV Coordinates

TheComet

Your mod has been erased by a signature
Chris Tate
DBPro Master
15
Years of Service
User Offline
Joined: 29th Aug 2008
Location: London, England
Posted: 21st May 2014 22:15
Wow thanks for this, I have learned a thing or two today.

I am sure a lot more people will be implementing shaders with their games from now on; there is no excuse anymore with a tutorial thread like this... no excuse!

Ortu
DBPro Master
16
Years of Service
User Offline
Joined: 21st Nov 2007
Location: Austin, TX
Posted: 30th May 2014 08:13
Awesome write up man, this is what the learning to write shaders sticky should have been. Really looking forward to the next one.

Green Gandalf
VIP Member
19
Years of Service
User Offline
Joined: 3rd Jan 2005
Playing: Malevolence:Sword of Ahkranox, Skyrim, Civ6.
Posted: 30th May 2014 15:06
Quote: " this is what the learning to write shaders sticky should have been"


When that thread was started, the original poster and various participants (including myself) were all learning about shaders, which is why it never had a clear tutorial structure. So it could never have been like the present one, which is a welcome addition to our shader resources. In fact this thread probably ought to become a sticky, and the old Learning to write shaders thread could be listed on the Old Stickies thread.



TheComet
16
Years of Service
User Offline
Joined: 18th Oct 2007
Location: I`m under ur bridge eating ur goatz.
Posted: 30th May 2014 15:18
Thanks for the feedback!

There's still a lot to improve on, but I'm fairly happy with the way these turned out.

The next series is about half way done as of now.

Quote: "this is what the learning to write shaders sticky should have been"


Seconding what GG said.

The original Learning Shaders thread may not have contained a tutorial structure, but excellent help was provided by people with the latest knowledge and expertise about shaders. It helped a great deal in advancing everyone's general understanding of the dark art. It might be a good idea to revive that thread and delegate all discussions relating to shaders to it?

If these tutorials are to be stickied, the master post should be the thread to do so with.

Your dungeon has been arrested by a signature image because it tried to be a mod
MrValentine
AGK Backer
13
Years of Service
User Offline
Joined: 5th Dec 2010
Playing: FFVII
Posted: 30th May 2014 20:33
Some interesting and clear information here, will look at it closer soon...



Ortu
DBPro Master
16
Years of Service
User Offline
Joined: 21st Nov 2007
Location: Austin, TX
Posted: 31st May 2014 21:15
Quote: "
When that thread was started the original poster and various participants (including myself) were all learning about shaders which is why it never had a clear tutorial structure. So it could never have been like the present one which is a welcome addition to our shader resources. In fact this thread probably ought to be become a sticky and the old Learning to write shaders thread could be listed on the Old Stickies thread."


It is a great resource, agreed, and I don't mean to devalue or dismiss what it has to offer, but it always felt like intermediate level help and discussion for intermediate level shader users with only limited info for absolute beginners in shaders looking to get started.

I guess I've just always felt the thread content was a bit higher level than the title would seem to suggest. So really I'm just nit-picking the title's wording, and am grateful that there is now more entry-level info available.

don't mind me

Chris Tate
DBPro Master
15
Years of Service
User Offline
Joined: 29th Aug 2008
Location: London, England
Posted: 24th Aug 2014 22:01
Please forgive me for being bold, I am bumping someone else's thread to stop it from getting locked; because I like this thread.

Libervurto
17
Years of Service
User Offline
Joined: 30th Jun 2006
Location: On Toast
Posted: 25th Aug 2014 15:03
TL;DR

Bookmarked this for later. Thanks for the bump Chris.

Formerly OBese87.
Ashingda 27
16
Years of Service
User Offline
Joined: 15th Feb 2008
Location:
Posted: 10th Nov 2014 03:09
Very well done! I followed it from start to end and learned a lot

Chris Tate
DBPro Master
15
Years of Service
User Offline
Joined: 29th Aug 2008
Location: London, England
Posted: 10th Nov 2014 15:06
How's it going Ashingda 27? Long time since I've seen a post from you on these forums.

Adrian
20
Years of Service
User Offline
Joined: 11th Nov 2003
Location: My Living Room
Posted: 6th Jan 2015 22:29
Very useful, thanks for posting
