Sunday, May 29, 2011

Adding textures

Hello, reader! For this week I have prepared a slightly more interesting tutorial than last week's. I will try to explain how to add textures to an object (yes, textures, plural) and give some more insight into attributes and shaders.

Anyone lost a 1d6 die?

 Enough with the chit-chat, let's get started! As before, you can download the code from here.

    function tutorial2() {
        if (!shaderProgram) {
            gl.clearColor(0.5, 0.5, 0.5, 1.0);
            // Pick whichever vendor-prefixed requestAnimationFrame the browser offers
            requestAnimationFrame = window.mozRequestAnimationFrame;
            if (!requestAnimationFrame)
                requestAnimationFrame = window.webkitRequestAnimationFrame;
            if (!requestAnimationFrame) {
                alert('Ooops');
                return;
            }
            // ... shader setup and the new initTextures() call follow
            // in the full source ...
        }
    }
The entry code didn't change at all - we just added a new function, initTextures(), which loads some images and creates WebGLTexture objects. These objects are what WebGL uses to represent textures in JavaScript (it won't work with plain images). The initTextures() function is actually quite simple - it loads six images for the six cube faces, and after each of them gets loaded it creates a WebGLTexture from it.

    function initTextures() {
        var image1 = new Image();
        image1.onload = function () {
            frontTexture = createTextureFromImage(
                image1, 0, shaderProgram.front);
        };
        image1.src = someImageSource;

        // ... images 2 to 5 are loaded the same way ...

        var image6 = new Image();
        image6.onload = function () {
            leftTexture = createTextureFromImage(
                image6, 5, shaderProgram.left);
        };
        image6.src = someImageSource;
    }

This is the place where half of the magic happens. The first line just creates a texture object (not much to explain here). After we have created the texture, we need to assign and bind an image to it - and that's what the next few lines do. First of all, we tell WebGL that the texture we are working on is the newly created one (WebGL operates on one texture at a time). After that we tell it to flip our image vertically - the reason for this is purely convention: images are defined with the y-axis pointing downwards (thank you, Paint and Photoshop), while texture coordinates are defined with the y-axis pointing upwards. That is the only reason why we call this function - otherwise the images would be upside down (set the value to false and try it out).

The next function, gl.texImage2D, is the kicker though. Simply speaking, it does the following: upload our image into the texture object on the graphics card. The parameters are:

  • the texture target we're using (can also be a gl.TEXTURE_CUBE_MAP face)
  • the level of detail (0, the base level for mip-mapping)
  • the format in which we want it to be stored on the graphics card - twice, once as the internal format and once as the format of the source data
  • the type of each channel of the image - unsigned byte for RGBA
  • the image we want to load

    After this there is still one thing to do - specify how our image is going to be mapped onto the texture object. This is necessary because the object we are texturing does not have to match the image's size (or proportions). Regarding that, I want you to remember one thing - WebGL works great with images whose width and height are powers of two (POT) - it knows how to scale them (there are different parameters, and I will write a how-to for that soon). For the other case (non-POT), we simply clamp the texture to its edge (gl.CLAMP_TO_EDGE) without thinking. After that we simply decrease a "textures left to load" counter and tidy things up by binding null as the current texture. That's it!

    function createTextureFromImage(image, offset, uniform) {
        var texture = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_2D, texture);
        gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA,
                      gl.UNSIGNED_BYTE, image);
        if (isPowerOfTwo(image.width) && isPowerOfTwo(image.height)) {
            gl.texParameteri(gl.TEXTURE_2D,
                             gl.TEXTURE_MAG_FILTER, gl.NEAREST);
            gl.texParameteri(gl.TEXTURE_2D,
                             gl.TEXTURE_MIN_FILTER, gl.NEAREST);
        } else {
            // non-POT images: no mip-mapping, just filter and clamp
            gl.texParameteri(gl.TEXTURE_2D,
                             gl.TEXTURE_MIN_FILTER, gl.LINEAR);
            gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S,
                             gl.CLAMP_TO_EDGE);
            gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T,
                             gl.CLAMP_TO_EDGE);
        }
        // decrease the "textures left to load" counter here
        gl.bindTexture(gl.TEXTURE_2D, null);
        return texture;
    }
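The helper isPowerOfTwo used above is not shown in the snippet; a minimal version (name taken from the call site, implementation assumed) could use the classic bit trick:

```javascript
// Returns true when n is a positive power of two (1, 2, 4, 8, ...).
// A power of two has exactly one bit set, so n & (n - 1) clears it to 0.
function isPowerOfTwo(n) {
    return n > 0 && (n & (n - 1)) === 0;
}
```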

    Now that we have created and assigned values to our texture objects, we want to use them on our cube, don't we? Well, to do this we need to modify our shaders a bit. Let's first take a look at the vertex shader. We defined two new attributes here - aTextureCoord and aFace. The first one specifies the texture coordinate per cube vertex - e.g. vertex (0,0,0) shall have the texture coordinate (0,0), while vertex (0,1,0) shall have the coordinate (0,1) (the exact numbers you can see in the source code, but don't break your head on them!). So, the lower left corner of the texture gets mapped to the lower left vertex of the cube, the upper left corner to the upper left vertex of the cube (both on the front face - z = 0).
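To make the mapping concrete, the front face's texture coordinates might look like this (the exact values and variable name are just an illustration, not the tutorial's actual buffer data):

```javascript
// Hypothetical texture coordinates for the front face (z = 0):
// each (s, t) pair maps one cube corner to one corner of the image.
var frontFaceTextureCoords = [
    0.0, 0.0,  // lower-left vertex  -> lower-left of the image
    1.0, 0.0,  // lower-right vertex -> lower-right
    1.0, 1.0,  // upper-right vertex -> upper-right
    0.0, 1.0   // upper-left vertex  -> upper-left
];
```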

    The second attribute tells us which face this vertex belongs to (0-5). This is important for our fragment shader, so we can use different textures for different faces - front face vertices have the face 0, back face vertices 1, and so on.
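Filling the aFace buffer is mechanical: every vertex of a face gets that face's index. A sketch (variable names assumed; the real values live in the downloadable source):

```javascript
// Hypothetical build of the per-vertex face indices: the cube has
// 6 faces with 4 vertices each, and all 4 vertices of a face share
// that face's index (0 = front, 1 = back, ..., 5 = left).
var faceIndices = [];
for (var face = 0; face < 6; face++) {
    for (var vertex = 0; vertex < 4; vertex++) {
        faceIndices.push(face);
    }
}
// faceIndices would then be uploaded with
// gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(faceIndices), gl.STATIC_DRAW);
```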

    Vertex shader:

    attribute vec3 aVertexPosition;
    attribute vec2 aTextureCoord;
    attribute float aFace;

    uniform mat4 uMVMatrix;
    uniform mat4 uPMatrix;

    varying vec2 vTextureCoord;
    varying float vFace;

    void main(void) {
        gl_Position = uPMatrix * uMVMatrix *
                      vec4(aVertexPosition, 1.0);
        vTextureCoord = aTextureCoord;
        vFace = aFace;
    }

    Following up with the fragment shader - here we only check the value of the face attribute (passed on as a varying, remember that?). Depending on the value (0-5), we use the corresponding texture. But wait, there are no texture uniforms, just some sampler stuff?! Well, shaders call textures samplers, that's all :) The function texture2D(sampler, vTextureCoord) gets the correct pixel value from the sampler at coordinate vTextureCoord, and we assign it to gl_FragColor.

    Fragment shader:

    precision mediump float;

    uniform sampler2D front;
    uniform sampler2D back;
    uniform sampler2D top;
    uniform sampler2D bottom;
    uniform sampler2D right;
    uniform sampler2D left;

    varying vec2 vTextureCoord;
    varying float vFace;

    void main(void) {
        if (vFace < 0.1)
            gl_FragColor = texture2D(front, vTextureCoord);
        else if (vFace < 1.1)
            gl_FragColor = texture2D(back, vTextureCoord);
        else if (vFace < 2.1)
            gl_FragColor = texture2D(top, vTextureCoord);
        else if (vFace < 3.1)
            gl_FragColor = texture2D(bottom, vTextureCoord);
        else if (vFace < 4.1)
            gl_FragColor = texture2D(right, vTextureCoord);
        else
            gl_FragColor = texture2D(left, vTextureCoord);
    }

    Since our shader code has changed, we need to update our createShaderProgram() function also. We add these lines:

    shaderProgram.textureLookUpAttribute =
        gl.getAttribLocation(shaderProgram, "aFace");
    shaderProgram.left = gl.getUniformLocation(shaderProgram, "left");
    // ... and likewise for the other five samplers ...

    We get the location of the aFace attribute (so we can enable it and feed it data), and the uniform locations for each sampler in our fragment shader. Makes sense, doesn't it? The last thing left to do is to bind the WebGLTexture objects to our shader program. That looks like this (we do it six times, once per sampler):

    gl.activeTexture(gl.TEXTURE0 + face);
    gl.bindTexture(gl.TEXTURE_2D, faceTexture);
    gl.uniform1i(shaderProgram.front, face);

    Before I explain what these calls do, I first need to tell you a bit more about textures in WebGL. You can create as many WebGLTexture objects as you wish, but at any given moment you can only use a limited number of texture units - typically 32, which is also the limit on the samplers you can have in your shader code. In respect to that, the call gl.activeTexture(gl.TEXTURE0 + face) says: 'I want to work with texture unit TEXTURE0 + offset (0-31) now!' - you cannot work with more than one at a time. After that we call the bind function, which attaches faceTexture to the currently active texture unit. The last thing to do is to point our sampler uniform at the currently active texture unit (face). And that's it!
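Doing this for all six faces is naturally a loop. The sketch below is illustrative: the arrays textures and samplerLocations are made-up names, and the stub gl object just records the calls so the snippet runs outside a browser (in real code gl is the WebGL context and textures holds the six WebGLTexture objects in face order):

```javascript
// Stand-in "gl" so the sketch runs anywhere; it records each call.
var calls = [];
var gl = {
    TEXTURE0: 33984,   // WebGL's actual enum value for gl.TEXTURE0
    TEXTURE_2D: 3553,
    activeTexture: function (unit)     { calls.push(["active", unit]); },
    bindTexture:   function (t, tex)   { calls.push(["bind", tex]); },
    uniform1i:     function (loc, val) { calls.push(["uniform", loc, val]); }
};

// Hypothetical face-ordered data (front, back, top, bottom, right, left).
var textures = ["front", "back", "top", "bottom", "right", "left"];
var samplerLocations = [0, 1, 2, 3, 4, 5];

for (var face = 0; face < 6; face++) {
    gl.activeTexture(gl.TEXTURE0 + face);            // select texture unit `face`
    gl.bindTexture(gl.TEXTURE_2D, textures[face]);   // attach the face's texture to it
    gl.uniform1i(samplerLocations[face], face);      // point the sampler at that unit
}
```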

    One P.S. though - I have not provided you with the number values of the buffers for the attributes. I consider that just a waste of space and an unnecessary distraction. In 99.9% of cases you will not specify those values by hand, nor even know them - they will be generated by some 3ds Max or Maya object exporter, and the only thing left for you to do is to load them into buffers. In the source code they are specified by hand; take a look at them if you wish. But as said - you will probably never see them again :)


    Many thanks to for providing the base for this tutorial!