Component System using Objective-C Message Forwarding

Ever since I started playing around with Unity3D, I have wanted to build one of my games with a component system. And since I use Objective-C most of the time, why not use it for this?

First let me elaborate on my intention. I want a component system where I have a GameObject that can have one or more components plugged in. To illustrate the main problems with the traditional Actor-based model, let's assume we're developing a platformer game. We have these three Actor types:

1. Ninja: A ninja is an actor that the user can control with external inputs.
2. Background: The background is something like a static image.
3. HiddenBonus: A hidden bonus is an invisible actor that the ninja can hit.

Let’s take a look at all the components inside these actors.

Ninja = RenderComponent + PhysicsComponent
Background = RenderComponent
HiddenBonus = PhysicsComponent

Here the RenderComponent is responsible for just drawing some content on the screen, while the PhysicsComponent is responsible for just doing physics updates, like collision detection.

So, in a nutshell, from our game loop we need to call things like:

- (void)loopUpdate:(int)dt
{
    [ninja update:dt];
    [bonus update:dt];
}

- (void)loopRender
{
    [background render];
    [ninja render];
}

Now, in the traditional Actor model, if we have an Actor class with both RenderComponent and PhysicsComponent like:

@interface Actor : NSObject

- (void)render;
- (void)update:(int)dt;

@end

Then it would inevitably add a PhysicsComponent to the Background actor and a RenderComponent to the HiddenBonus actor.

In Objective-C we can in fact design the component system using message forwarding.
(Don't forget to check out this older post on message forwarding.)

At the core, we can have a GameObject class like

@interface GameObject : NSObject

- (void)render;
- (void)update:(int)dt;

@end

But this class doesn't implement any of these methods; instead it forwards them to the relevant Component classes. We can write our Component classes as

@interface PhysicsComponent : NSObject

- (void)update:(int)dt;

@end

@interface RenderComponent : NSObject

- (void)render;

@end

Our GameObject class gets reduced to

@interface GameObject : NSObject

+ (GameObject *)emptyObject;

- (void)enablePhysicsComponent;
- (void)enableRenderComponent;

@end

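Under the hood, the enable methods just create and hold on to the component instances that the forwarding machinery below will look up. A minimal sketch of the implementation, not the exact code from the repo (memory management is omitted for brevity):

@implementation GameObject {
    RenderComponent *renderComponent;   /* nil until enabled */
    PhysicsComponent *physicsComponent; /* nil until enabled */
}

+ (GameObject *)emptyObject
{
    return [[GameObject alloc] init];
}

- (void)enablePhysicsComponent
{
    physicsComponent = [[PhysicsComponent alloc] init];
}

- (void)enableRenderComponent
{
    renderComponent = [[RenderComponent alloc] init];
}

@end
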
Our game loop initializes our GameObjects as

- (void)loadGameObjects
{
    ninja = [GameObject emptyObject];
    [ninja enablePhysicsComponent];
    [ninja enableRenderComponent];

    bonus = [GameObject emptyObject];
    [bonus enablePhysicsComponent];

    background = [GameObject emptyObject];
    [background enableRenderComponent];
}

And updates and renders them as

- (void)loopUpdate:(int)dt
{
    [ninja update:dt];
    [bonus update:dt];
}

- (void)loopRender
{
    [background render];
    [ninja render];
}

If you compile at this stage, you will get errors like:

No visible @interface for 'GameObject' declares the selector 'update:'
No visible @interface for 'GameObject' declares the selector 'render'

In earlier days, such code used to compile just like that. Later it became a warning, which was good for most of the developers who migrated to Objective-C from other compile-time bound languages like C++. Objective-C is mainly a runtime bound language, but very few developers appreciate this fact, and they complained a lot about why the compiler doesn't catch their errors early on, or more precisely, why it doesn't work somewhat like C++. And I understand that feeling, because I too love C++. But when I'm coding in Objective-C, I use it like Objective-C. They are two completely different languages based on completely different ideologies.

Moving forward in time, ARC came along, and it made this kind of reliance on Objective-C runtime behavior much harder: the 'missing' method declaration is now an error. For this experiment, let's disable ARC entirely and see how it goes.

OK, good enough. With ARC disabled, the errors are back to warnings, and we can at least move forward with our experiment.

Let's add the message forwarding machinery to our GameObject. Whenever a message is passed to an object in Objective-C and the message is not implemented by the receiver, then before throwing an exception the Objective-C runtime offers us an opportunity to forward the message to some other delegate object. The way this is done is quite simple; we just need to implement the following methods.

The first thing we need to handle is the selector availability test.

- (BOOL)respondsToSelector:(SEL)aSelector
{
    if ( [super respondsToSelector:aSelector] ) {
        return YES;
    } else if(renderComponent && [renderComponent respondsToSelector:aSelector]) {
        return YES;
    }  else if(physicsComponent && [physicsComponent respondsToSelector:aSelector]) {
        return YES;
    }
    return NO;
}

Next, we need to handle the method selector signature availability test.

- (NSMethodSignature*)methodSignatureForSelector:(SEL)selector
{
    NSMethodSignature* signature = [super methodSignatureForSelector:selector];
    if (signature) {
        return signature;
    }

    if (renderComponent) {
        signature = [renderComponent methodSignatureForSelector:selector];
        if (signature) {
            return signature;
        }
    }

    if (physicsComponent) {
        signature = [physicsComponent methodSignatureForSelector:selector];
        if (signature) {
            return signature;
        }
    }

    return nil;
}

After all the tests have been cleared, it's time to implement the actual message forwarding.

- (void)forwardInvocation:(NSInvocation *)anInvocation
{
    if ([renderComponent respondsToSelector: [anInvocation selector]]) {
        [anInvocation invokeWithTarget:renderComponent];
    } else if ([physicsComponent respondsToSelector: [anInvocation selector]]) {
        [anInvocation invokeWithTarget:physicsComponent];
    } else {
        [super forwardInvocation:anInvocation];
    }
}

If you run the code at this point, you should see update and render getting invoked for the corresponding component objects. So yay, our component system prototype is working!

Now let's try to make it more concrete. Let's work on the RenderComponent first. For testing, in each render call we just draw a cube on the screen.

The best part about the component system is that every component focuses on just one thing. For example, the RenderComponent focuses only on rendering, so we only need to add methods that are required for rendering.

For our rapid prototyping, let's pass the matrix information to the RenderComponent in the form of a string.

@interface RenderComponent : NSObject {
    GLKMatrix3 _normalMatrix;
    GLKMatrix4 _modelViewProjectionMatrix;
}

- (void)setNormalMatrix:(NSString *)nMat;
- (void)setModelViewProjectionMatrix:(NSString *)mvpMat;

- (void)render;

@end

@implementation RenderComponent

- (void)setNormalMatrix:(NSString *)nMat;
{
    // remove unnecessary characters from the string
    nMat = [nMat stringByReplacingOccurrencesOfString:@"{" withString:@""];
    nMat = [nMat stringByReplacingOccurrencesOfString:@"}" withString:@""];
    nMat = [nMat stringByReplacingOccurrencesOfString:@" " withString:@""];

    // build an NSArray with string components and convert them to an array of floats
    NSArray* array = [nMat componentsSeparatedByString:@","];
    float data[9];
    for(int i = 0; i < 9; ++i){
        data[i] = [[array objectAtIndex:i] floatValue];
    }

    _normalMatrix = GLKMatrix3MakeWithArray(data);
}

- (void)setModelViewProjectionMatrix:(NSString *)mvpMat;
{
    // remove unnecessary characters from the string
    mvpMat = [mvpMat stringByReplacingOccurrencesOfString:@"{" withString:@""];
    mvpMat = [mvpMat stringByReplacingOccurrencesOfString:@"}" withString:@""];
    mvpMat = [mvpMat stringByReplacingOccurrencesOfString:@" " withString:@""];

    // build an NSArray with string components and convert them to an array of floats
    NSArray* array = [mvpMat componentsSeparatedByString:@","];
    float data[16];
    for(int i = 0; i < 16; ++i){
        data[i] = [[array objectAtIndex:i] floatValue];
    }

    _modelViewProjectionMatrix = GLKMatrix4MakeWithArray(data);
}

- (void)render;
{
    glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, 0, _modelViewProjectionMatrix.m);
    glUniformMatrix3fv(uniforms[UNIFORM_NORMAL_MATRIX], 1, 0, _normalMatrix.m);

    glDrawArrays(GL_TRIANGLES, 0, 36);
}

@end

The conversion from NSString to GLKMatrix is done by using this code.

Now using our message forwarding magic, in our main loop we just need to update the GameObjects with rendering enabled like so:

- (void)loopUpdate:(int)dt
{

    float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
    GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f);

    GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f);
    baseModelViewMatrix = GLKMatrix4Rotate(baseModelViewMatrix, _rotation, 0.0f, 1.0f, 0.0f);

    // Compute the model view matrix for the object rendered with GLKit
    GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -1.5f);
    modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f);
    modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix);

    [ninja setNormalMatrix: NSStringFromGLKMatrix3(
     GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL))];
    [ninja setModelViewProjectionMatrix: NSStringFromGLKMatrix4(GLKMatrix4Multiply(projectionMatrix, modelViewMatrix))];

    modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 1.5f);
    modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f);
    modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix);

    [background setNormalMatrix: NSStringFromGLKMatrix3(
                                                   GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL))];
    [background setModelViewProjectionMatrix: NSStringFromGLKMatrix4(GLKMatrix4Multiply(projectionMatrix, modelViewMatrix))];

    [ninja update:dt];
    [bonus update:dt];
}

As you can tell, message forwarding has made it very convenient for us to add our custom code in components and call it directly, even though the GameObject class doesn't directly implement those methods.

You can extend this idea to add more components and rapidly build up functionality, while keeping the code logically separated by component.

The accompanying code is available at https://github.com/chunkyguy/OjbCComponentSystem. Since this was just a rapid prototype and we have disabled ARC, the code could have memory leaks. Don't use the code directly; use it just for reference purposes.


C++: Typesafe programming

Let's say you have a function that needs an angle in degrees as a parameter.

    void ApplyRotation(const float angle)
    {
        std::cout << "ApplyRotation: " << angle << std::endl;
    }

And another function that returns an angle in radians.

    float GetRotation()
    {
        return 0.45f;
    }

To fill out the missing piece, you write a radians-to-degrees function.

    float RadiansToDeg(const float angle)
    {
        return angle * 180.0f / M_PI;
    }

Then some place later, you use the functions like

    float angleRadians = GetRotation();
    float angleDegrees = RadiansToDeg(angleRadians);
    ApplyRotation(angleDegrees);

This is bad. The user of this code, who might be the person next to you, or yourself
10 weeks later, doesn't know what angle means in the functions ApplyRotation or GetRotation.
Is it an angle in radians or an angle in degrees?

Yes, you can add a comment on top of each function about where the angle is in degrees and
where it is in radians. But that doesn't actually stop the user from passing a value in
whatever format they like.

The main problem with this piece of code is that it uses a float as a parameter, which is
an implementation detail and doesn't convey any other information. In C++ we can do better.

Let's create new types.

    struct Degrees {
        explicit Degrees(float v) :
        value(v)
        {}
        
        float value;
    };
    
    struct Radians {
        explicit Radians(float v) :
        value(v)
        {}
        
        float value;
    };

And update the functions as

    Radians GetRotation()
    {
        return Radians(0.45f);
    }
    
    void ApplyRotation(const Degrees angle)
    {
        std::cout << "ApplyRotation: " << angle.value << std::endl;
    }
    
    Degrees RadiansToDeg(const Radians angle)
    {
        return Degrees(angle.value * 180.0f / M_PI);
    }

Now if we call it with the following, it just works.

    Radians angleRadians = GetRotation();
    Degrees angleDegrees = RadiansToDeg(angleRadians);
    ApplyRotation(angleDegrees);

Notice that if we don't mark the constructors as explicit, the compiler will implicitly
convert the value from float to the corresponding types, which isn't what we want.

This is already starting to look good. The code is self-documenting, and the user will have
a difficult time if they try to use it differently than intended, as most of the error checking
is done by the compiler.

Another benefit is that we can now have a member function that does the conversion, like

    struct Radians {
        explicit Radians(float v) :
        value(v)
        {}

        Degrees ToDegrees() const
        {
            return Degrees(value * 180.0f / M_PI);
        }

        float value;
    };

So, your calling code reduces to

    Radians angle = GetRotation();
    ApplyRotation(angle.ToDegrees());

As you can see, we no longer need variable names to carry the type information; the
type tells about itself.

As a final note, we can switch from passing by value to passing by const reference.

    void ApplyRotation(const Degrees &angle)
    {
        std::cout << "ApplyRotation: " << angle.value << std::endl;
    }

This does two things. Firstly, no more copies when data gets passed around, and secondly, we
can pass a temporary value, as guaranteed by the C++ standard, like so:

    ApplyRotation(GetRotation().ToDegrees());
    ApplyRotation(Degrees(45.0f));

Getting started Cocos2d with Swift

I understand that everybody is super excited about the Swift programming language, right? Apple has provided Playgrounds for getting acquainted with Swift, but there is an even better way: making a game.

If you're already familiar with Cocos2d and want to make games using Cocos2d and Swift, then walk through these simple steps to get started with the Cocos2d project template you already have in your Xcode.

1. Create a new Cocos2d project: This is nothing new for you guys.

2. Give it a name: Why am I even bothering with these steps?

3. Add frameworks: I'm not sure if this is a temporary thing, but for some reason I was getting all sorts of linking errors for missing frameworks and libraries. These are all the iOS frameworks Cocos2d uses internally. Click on the photo to enlarge and see which frameworks you actually need to add.

Make sure you tidy them up in a group, because the Xcode beta will add them at the top level of your nice structure. Again, I guess this is also a temporary thing, as Xcode 5 already knows how to put them inside a nice directory.

4. Remove all your .h/.m files: Next remove all the .h/.m files that Cocos2d creates for you. They should be AppDelegate.h/m, HelloWorldScene.h/m, IntroScene.h/m and main.m.

Yes, you heard that right. We don't need main.m anymore, as Swift has no main function as an entry point. Don't remove any Cocos2d source code. A good rule of thumb: if it's inside the Classes directory, it's probably your code.

5. Download the repository: https://github.com/chunkyguy/Cocos2dSwift

6. Add Swift code: Look for Main.swift, HelloWorldScene.swift and IntroScene.swift in the Classes directory and add them to your project with the usual drag and drop. While adding them, you should get a dialog box from Xcode: 'Would you like to configure an Objective-C bridging header?'

Say 'Yes'. Xcode is smart enough to guess that you're adding Swift files to a project that already has Objective-C and C code.

The bridging header makes all your Objective-C and C code visible inside your Swift code. Well, in this case it’s actually the Cocos2d source code.

7. Locate the bridging header: It should have a name such as 'YourAwesomeGame-Bridging-Header.h'. In it, add the two lines we need to bring all the Cocos2d code into our Swift code (see the sketch after these steps).

In case you have some other custom Objective-C code you would like your Swift code to see, import it here.

8. That's it, folks! Compile and run.
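
The bridging header contents from step 7 are just the Cocos2d umbrella headers, something along these lines (check the bridging header in the repository above for the exact imports your Cocos2d version needs):

// YourAwesomeGame-Bridging-Header.h
// Expose the Cocos2d Objective-C headers to Swift.
#import "cocos2d.h"
#import "cocos2d-ui.h"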

There are some trivial code changes that you can read about in the repository's README file.

Have fun!


Experiment 14: Object Picking

This article is part of the Experiments with OpenGL series

The source code is available at https://github.com/chunkyguy/EWOGL

Every game at some point requires the user to interact with the 3D world, probably in the form of a mouse click firing bullets at targets, or a touch drag on a mobile device rotating the view on the screen. This is called object picking, because we are picking objects in our 3D space using the mouse or touch screen, starting from device space.

The way I like to picture this problem is that we have a point in the device coordinate system and we would like to trace it back to some relevant 3D coordinate system, like world space or perhaps model space.

In the fixed-function pipeline days there was a gluUnProject function which helped with this ray tracing, and many developers still like to use it. Even GLKit provides a similar GLKMathUnproject function. Basically, this function takes a modelview matrix, a projection matrix, a viewport (the device coordinates) and a 3D vector in device coordinates, and it returns the vector transformed to object space.

With modern OpenGL, we don't need to use that function. First of all, it requires us to provide all the matrices separately, and secondly, it returns the vector in model space, but our needs could be different: maybe we need it in some other space, or we don't need the vector at all, just a bool flag to indicate whether the object was picked or not.

At the core, we just need two things: a way to transform our touch point from device space to clip space, and the inverse of the model-view-projection matrix to transform it from clip space to the object's model space.

Let's assume we have a scene with two rotating cubes, and we need to handle touch events on the screen and test if the touch point collides with any of the objects in the scene. If a collision does happen, we just change the color of that cube.

The first problem is to convert the touch point from the iPhone's coordinate system to OpenGL's coordinate system, which is easy, as it just means we need to flip the y-coordinate:


  /* calculate window size */
  GLKVector2 winSize = GLKVector2Make(CGRectGetWidth(self.view.bounds), CGRectGetHeight(self.view.bounds));

  /* touch point in window space */
  GLKVector2 point = GLKVector2Make(touch.x, winSize.y-touch.y);

Next, we need to transform the point to the normalized device coordinate space, or to the range [-1, 1]:


  /* touch point in viewport space */
  GLKVector2 pointNDC = GLKVector2SubtractScalar(GLKVector2MultiplyScalar(GLKVector2Divide(point, winSize), 2.0f), 1.0f);

The next question we need to tackle is how to calculate a 3D vector from a device-space point, since the device is a 2D space. We need to remember that after the projection transformations are applied, the depth of the scene is reduced to the range [-1, 1].
So, we can use this fact and calculate the 3D positions at both depth locations.


  /* touch point in 3D for both near and far planes */
  GLKVector4 win[2];
  win[0] = GLKVector4Make(pointNDC.x, pointNDC.y, -1.0f, 1.0f);
  win[1] = GLKVector4Make(pointNDC.x, pointNDC.y, 1.0f, 1.0f);

Then, we need to calculate the inverse of the model-view-projection matrix


  bool success = false;
  GLKMatrix4 invMVP = GLKMatrix4Invert(mvp, &success);

Now, we have all the things required to trace our ray


  /* ray at near and far plane in the object space */
  GLKVector4 ray[2];
  ray[0] = GLKMatrix4MultiplyVector4(invMVP, win[0]);
  ray[1] = GLKMatrix4MultiplyVector4(invMVP, win[1]);

Remember that the values could be in the homogeneous coordinate system; to convert them back to the human-imaginable cartesian coordinate system we need to divide by the w component.


  /* convert rays from homogeneous coordsys to cartesian coordsys */
  ray[0] = GLKVector4DivideScalar(ray[0], ray[0].w);
  ray[1] = GLKVector4DivideScalar(ray[1], ray[1].w);

We don't need the start and end of the ray; rather, we need the ray in the form


 R = o + dt

where o is the ray origin, d is the ray direction, and t is the parameter.

We calculate the ray direction as


  /* direction of the ray */
  GLKVector4 rayDir = GLKVector4Normalize(GLKVector4Subtract(ray[1], ray[0]));

Why is it normalized? We will take a look at that later.

Now we have the ray in the object space. We can simply do a sphere intersection test. We must already know the radius of the bounding sphere; if not, we can easily calculate it. For a cube, I'm assuming the radius to be equal to half the edge of any side.

For detailed information regarding a ray-sphere intersection test I recommend this article, but I’ll go through the minimum that we need to accomplish our goal.

Let the points on the surface of a sphere of radius r be given by


x^2 + y^2 + z^2 = r^2
P^2 - r^2 = 0; where P = {x, y, z}

All points on the sphere where the ray hits should obey


(o + dt)^2 - r^2 = 0
o^2 + (dt)^2 + 2odt - r^2 = 0
f(t) = (d^2)t^2 + (2od)t + (o^2 - r^2) = 0

This is a quadratic equation of the form


 ax^2 + bx + c = 0

And it makes sense, because every ray can hit the sphere at a maximum of two points: first when it goes into the sphere and second when it leaves the sphere.

The discriminant of the quadratic equation is calculated as


disc = b^2 - 4ac

If the discriminant is < 0, it means no roots, or the ray misses the sphere.
If the discriminant is equal to 0, it means one root, or the ray started from inside the sphere or just touches it.
If the discriminant is > 0, it means we have two roots, or the perfect case of the ray passing through the sphere.

In our equation the values of a, b, c are


a = d^2 = 1; as d is a normalized vector and dot(d, d) = 1
b = 2od
c = o^2 - r^2

So, if we have a ray with normalized direction, we can simply test whether it hits our sphere in model space with


- (BOOL)hitTestSphere:(float)radius
        withRayOrigin:(GLKVector3)rayOrigin
         rayDirection:(GLKVector3)rayDir
{
  float b = GLKVector3DotProduct(rayOrigin, rayDir) * 2.0f;
  float c = GLKVector3DotProduct(rayOrigin, rayOrigin) - radius*radius;
  printf("b^2-4ac = %f x %f = %f\n",b*b, 4.0f*c, b*b - 4*c);
  return b*b >= 4.0f*c;
}
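
Putting it together, the touch handler can feed the ray computed above into this test. A minimal sketch; the 0.5 bounding radius is illustrative:

  /* ray origin = intersection with the near plane, in object space */
  GLKVector3 rayOrigin = GLKVector3Make(ray[0].x, ray[0].y, ray[0].z);
  GLKVector3 rayDirection = GLKVector3Make(rayDir.x, rayDir.y, rayDir.z);

  if ([self hitTestSphere:0.5f withRayOrigin:rayOrigin rayDirection:rayDirection]) {
    /* the touch picked this cube; change its color */
  }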

Hope this helps you in understanding the object picking concept. Don't forget to check out the full code from the repository linked at the top of this article.


Experiment 13: Shadow mapping

This article is part of the Experiments with OpenGL series

The source code is available at https://github.com/chunkyguy/EWOGL

Shadows add great detail to a 3D scene. For instance, without a shadow it's really hard to judge how high an object is above the floor, or how far away the light source is from the object.

Shadow mapping is one of the tricks to create shadows for our geometry.

The basic idea is to render the scene twice. First from the light's point of view, saving the result in a texture. Next, render from the desired point of view and compare the result with the saved result. If the part of geometry being rendered falls inside the shadow region, don't apply lighting to it.

During the setup routine, we need 2 framebuffers, one for each rendering pass. The first framebuffer needs to have a depth renderbuffer and a texture attached to it. The second framebuffer is your usual framebuffer with color and depth.
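
Roughly, the depth-texture side of that setup looks something like this on ES3; a minimal sketch, not the exact code from the repo (the shadow map size and filtering are illustrative):

  GLuint shadowFBO, shadowTex;
  const GLsizei kShadowMapSize = 1024; /* illustrative size */

  /* depth texture that the first pass will render into */
  glGenTextures(1, &shadowTex);
  glBindTexture(GL_TEXTURE_2D, shadowTex);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, kShadowMapSize, kShadowMapSize,
               0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  /* let the sampler do the depth comparison (see GL_TEXTURE_COMPARE_FUNC below) */
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);

  /* framebuffer for the first pass, with only a depth attachment */
  glGenFramebuffers(1, &shadowFBO);
  glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);
  glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowTex, 0);
  NSAssert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE,
           @"shadow framebuffer incomplete");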

The first pass is very simple. Switch the point of view to the light's view and just draw the scene as quickly as possible to the framebuffer. You just need to pass the vertex positions and the model-view-projection matrix that takes those positions from object space to clip space. The result is what some like to call the shadow map.

The shadow map is just a 1-channel texture where a blacker color means the surface is closer to the light, while a whiter color means it is farther from the light. A totally white color means that part is not visible to the light at all.

With black and white I’m assuming the depth range is from 0.0 to 1.0, which is the default range I guess.

The second pass is also not very difficult. We just need a matrix that will take our vertex positions from object space to clip space from the light's view. Let's call it the shadow matrix.

Once we have the shadow matrix, we just need to compare the z stored in the depth map with the z of our vertex's position.

Creating the shadow matrix is also not very difficult. I calculate it like this:


    GLKMatrix4 basisMat = GLKMatrix4Make(0.5f, 0.0f, 0.0f, 0.0f,
                                         0.0f, 0.5f, 0.0f, 0.0f,
                                         0.0f, 0.0f, 0.5f, 0.0f,
                                         0.5f, 0.5f, 0.5f, 1.0f);
    GLKMatrix4 shadowMat = GLKMatrix4Multiply(basisMat,                  
                            GLKMatrix4Multiply(renderer->shadowProjection,  
                            GLKMatrix4Multiply(renderer->shadowView ,mMat)));

The basis matrix is just another way to move the range from [-1, 1] to [0, 1], or from the depth coordinate system to the texture coordinate system.

This is how it can be calculated:


GLKMatrix4 genShadowBasisMatrix()
{
  GLKMatrix4 m = GLKMatrix4Identity;
  
  /* Step 2: scale by 1/2 
   * [0, 2] -> [0, 1]
   */
  m = GLKMatrix4Scale(m, 0.5f, 0.5f, 0.5f);

  /* Step 1: Translate +1 
   * [-1, 1] -> [0, 2]
   */
  m = GLKMatrix4Translate(m, 1.0f, 1.0f, 1.0f);

  return m;
}

The first run was pretty cool on the simulator, but not so good on the device.

That is something I learned from my last experiment: try it out on the actual device as soon as possible.

I tried playing around with many configurations, like using 32 bits of depth precision, using bilinear filtering on the depth map texture, changing the depth range from 1.0 to 0.0, and many others I don't even remember. I should say, changing the depth range from the default [0, 1] to [1, 0] made the biggest difference.

If you're working on something where the shadows need to be projected far away from the object, like on a wall behind it, try this in the first pass, while creating the depth map:


glDepthRangef(1.0f, 0.0f);

And, in the second pass don’t forget to switch back to


glDepthRangef(0.0f, 1.0f);

Of course, you also need to update your shader code, so that a result of 1.0 now means the fail case while 0.0 means the pass case. Or you could even change the GL_TEXTURE_COMPARE_FUNC to GL_GREATER.

But for the majority of cases, where the shadow stays attached to the geometry, the experiment's code should work fine.

Another bit of improvement that can be added is PCF (Percentage Closer Filtering). It helps with the anti-aliasing at the shadow edges.

The way PCF works is that we read the results from the surrounding texels and average them.
Here's an example of the improvement you can expect.

Another shadow mapping method that I would like to try in the future is random sampling, where instead of fixed offsets we pick random samples around the fragment. I've heard this creates a soft shadow effect.

Shadow mapping is a field of many hits and trials. There doesn't seem to be one solution that fits all. There are so many parameters that can be configured in so many ways; for example, you can try culling front faces, or setting a polygon offset to avoid z-fighting. In the end, you just need to find the solution that fits your current needs.

The code right now only works with OpenGL ES 3, as I was using many things available only in GLSL 3.00. But since the trick is quite simple, I will in the future upgrade the code to work with OpenGL ES 2 as well. I see no reason why it shouldn't work.


Experiment 12: Gaussian Blur

This article is part of the Experiments with OpenGL series

The source code is available at https://github.com/chunkyguy/EWOGL

This experiment was all about learning differences between the simulator and the actual device.

I started out trying to apply a blur using the famous Gaussian filter.

I used the 1-dimensional equation for filtering and rendered the result in multiple passes.

In the original experiment, I had two framebuffers: the first was attached to a texture and the second was attached to the screen output.

Now, in the first pass I rendered the scene to the off-screen framebuffer. In pass 2, I rendered the output from the off-screen framebuffer's attached texture with blurring applied along the x axis. In pass 3, I again used the off-screen framebuffer's attached texture as input and rendered to the on-screen framebuffer with blurring applied along the y axis.

This worked perfectly as expected on the simulator, but failed on the actual device.

The first thing I noticed was that GLSL on the device doesn't seem to work well with uniform arrays.

I had this declaration


uniform lowp float uf5k_Weight[5];

For some weird reason, it compiled without any problem but just didn't work on the device. So, I simply unrolled it to:


uniform lowp float uf5k_Weight0;
uniform lowp float uf5k_Weight1;
uniform lowp float uf5k_Weight2;
uniform lowp float uf5k_Weight3;
uniform lowp float uf5k_Weight4;

And that resolved the problem of data flow between the app and the shader.
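
On the app side that just means looking up and setting five scalar uniforms instead of one array. A minimal sketch of what that might look like; the sigma, the weight computation and the program handle are illustrative, only the uniform names match the unrolled declaration above:

/* 5-tap 1D Gaussian weights, normalized so the mirrored taps sum to 1 */
float weight[5];
float sigma = 3.0f, sum = 0.0f;
for (int i = 0; i < 5; ++i) {
    weight[i] = expf(-(i * i) / (2.0f * sigma * sigma));
    sum += (i == 0) ? weight[i] : 2.0f * weight[i]; /* taps are mirrored around the center */
}

/* upload to the unrolled uniforms */
NSArray *names = @[@"uf5k_Weight0", @"uf5k_Weight1", @"uf5k_Weight2",
                   @"uf5k_Weight3", @"uf5k_Weight4"];
for (int i = 0; i < 5; ++i) {
    GLint loc = glGetUniformLocation(program, [names[i] UTF8String]);
    glUniform1f(loc, weight[i] / sum);
}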

The next problem I faced was with my 3-pass algorithm. The 3rd pass never seemed to work. After trying many things out, I decided to do it in the ugliest way possible.

Now, instead of 2 framebuffers, we have 3: one on-screen and two off-screen with attached textures. In the first pass we render the scene to the first off-screen framebuffer. In pass 2, we render the result with blurring in one direction to the second off-screen framebuffer. And in the final pass 3, we render the result to the on-screen framebuffer with blurring applied in the other direction.

I know this is not the best way to do it, and probably in the future I'll optimize it. But for now this works just fine for me.

One thing that I might try in the future is to overload the fragment shader to process the 2D blurring in a single pass.

This is the result I get on my 4th generation iPod Touch.


Experiment 11: Multisampling

This article is part of the Experiments with OpenGL series

The source code is available at https://github.com/chunkyguy/EWOGL

Multisampling is an anti-aliasing technique where the pixel information is calculated more than once, each time adjusting the sample position within the pixel by some offset.

With OpenGL, it is just a few lines of code. The basic theory is that we use two framebuffers. We draw our geometry to the first framebuffer, which is multi-sampled, and then the resolved color information is written out to a second framebuffer. The second framebuffer is then written out to the screen.
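
To give an idea of the ES3 flavor, the core of it looks roughly like this; a minimal sketch, where the framebuffer handles and sizes are assumed to be created elsewhere (the ES2 path in the repo uses the Apple multisample extension calls instead):

/* setup: multi-sampled color storage for the first framebuffer */
glBindFramebuffer(GL_FRAMEBUFFER, msaaFramebuffer);
glBindRenderbuffer(GL_RENDERBUFFER, msaaColorbuffer);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, msaaColorbuffer);

/* per frame: draw into msaaFramebuffer, then resolve into the second framebuffer */
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFramebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFramebuffer);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);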

It's very quick to implement and the output is very pleasing.

There is a bit of difference in the function calls between an ES2 and an ES3 context. That is one reason why I picked C++ for this experiment. There are two RenderingEngine classes, one dedicated to the ES2 way and one to the ES3 way of implementing MSAA.


Experiment 10: Edge Detection

This article is part of the Experiments with OpenGL series

The source code is available at https://github.com/chunkyguy/EWOGL

With the last experiment's success, we can now dive into more interesting shading effects. We can render to textures and then use them for post-processing steps. To begin with, here's edge detection.

The edge detection algorithm I've used is based on the Sobel operator, which takes into consideration all the neighboring pixel information and judges whether the current pixel crosses a certain level to be declared an edge. If yes, we color it black, otherwise white.
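
For intuition, this is what the Sobel operator boils down to, written here as a plain C check over a grayscale buffer; the experiment does the same thing per fragment in the shader, so treat this only as an illustration of the math, and the threshold is arbitrary:

/* 3x3 Sobel kernels for horizontal and vertical gradients */
static const float kSobelX[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };
static const float kSobelY[3][3] = { {-1, -2, -1}, { 0, 0, 0}, { 1, 2, 1} };

/* returns 1 if the pixel at (x, y) of a w-wide grayscale image is an edge;
 * assumes (x, y) is not on the image border */
int IsEdge(const float *gray, int w, int x, int y, float threshold)
{
    float gx = 0.0f, gy = 0.0f;
    for (int j = -1; j <= 1; ++j) {
        for (int i = -1; i <= 1; ++i) {
            float sample = gray[(y + j) * w + (x + i)];
            gx += kSobelX[j + 1][i + 1] * sample;
            gy += kSobelY[j + 1][i + 1] * sample;
        }
    }
    /* gradient magnitude above the threshold means we call it an edge (color it black) */
    return (gx * gx + gy * gy) > threshold * threshold;
}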

Since this is a post-processing effect, we need two framebuffers: one that binds the rendering output to the screen, while the other binds it to a temporary texture.

First we just draw our geometry to the texture. Next, we apply the post-processing effect and draw the result to a quad on the screen.

Here’s my output


Experiment 9: Render to texture

This article is part of the Experiments with OpenGL series

The source code is available at https://github.com/chunkyguy/EWOGL

Rendering to texture is a fairly common operation when dealing with intermediate or advanced effects in OpenGL.

The operation is fairly straightforward. We just need to bind a texture to the framebuffer, and all the rendering output goes to the texture.

Another possibility is to have a new offscreen framebuffer, and bind the texture to that framebuffer.
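
The offscreen setup is just a handful of calls: create a texture, create a framebuffer, and attach the texture as the color attachment. A minimal sketch (the width, height and filtering here are illustrative):

GLuint offscreenFBO, colorTex;

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &offscreenFBO);
glBindFramebuffer(GL_FRAMEBUFFER, offscreenFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
/* anything drawn while offscreenFBO is bound now ends up in colorTex */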

In this experiment I have two framebuffers, one with the clear color set to red and the other with blue. I'm rendering a triangle to the offscreen framebuffer and then rendering a quad with that texture on screen. Finally, I'm also rendering the triangle directly to the on-screen framebuffer.

As always, check out the code from the repository. I'm hoping to use render to texture soon to achieve a few post-processing effects.


Experiment 8: Cubemaps

This article is part of the Experiments with OpenGL series

The source code is available at https://github.com/chunkyguy/EWOGL

A cubemap is another feature OpenGL provides us. Basically, it consists of 6 POT (power-of-two) images, where each image represents a face of a cube. We can use this texture to map onto our geometry.
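
Uploading a cubemap is just six glTexImage2D calls, one per face target. A minimal sketch, assuming faceData[] already holds the six decoded POT images of size faceSize:

GLuint cubemap;
glGenTextures(1, &cubemap);
glBindTexture(GL_TEXTURE_CUBE_MAP, cubemap);
for (int face = 0; face < 6; ++face) {
    /* the face targets are consecutive enums starting at +X */
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_RGBA,
                 faceSize, faceSize, 0, GL_RGBA, GL_UNSIGNED_BYTE, faceData[face]);
}
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);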

A few of the basic effects where cubemaps can be used are:

1. Skybox

Skybox refers to mapping the texture in a way that the entire environment can be rendered on the screen. To implement a skybox you just need to pass in the position vectors and sample the texture from the interpolated position vectors per fragment.


In action

2. Reflection

Another important effect that can be easily achieved with cubemaps is reflection of the environment on the geometry. The way it works is that we calculate a reflection vector per position and use it to sample the texture. One of the easiest ways to calculate the reflection vector is to take the eye-to-position vector and reflect it around the surface normal at that position. Think of it as what the eye would see at any given position.
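
In GLKit terms that reflection boils down to R = I - 2(N.I)N, something like the following sketch; position, eyePosition and the normal N are assumptions here, and the experiment actually does the equivalent in the shader with GLSL's reflect:

/* incident vector: from the eye to the surface position */
GLKVector3 I = GLKVector3Normalize(GLKVector3Subtract(position, eyePosition));
/* reflect it around the surface normal N (assumed normalized) */
GLKVector3 R = GLKVector3Subtract(I, GLKVector3MultiplyScalar(N, 2.0f * GLKVector3DotProduct(N, I)));
/* R is then used to sample the cubemap */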


In action

3. Refraction

With cubemaps, implementing refraction is also quite simple. Just like reflection, we need to find a refracted ray. We can use Snell's law.

Let's say we have two environments A and B, with refractive indices nA and nB respectively. Then the ratio of the angle of incidence (tA) and the angle of refraction (tB) can be evaluated as:

sin(tA)/sin(tB) = nB/nA

All this calculation is already done in the GLSL function refract, and it returns us the refracted vector.


In action

The only downside of using cubemaps is that we are in fact just reading data out of a texture, so any geometry in the scene is not considered. Say we have more than one teapot in the scene, and each teapot is moving around. Then we won't get the image of the other teapot in the reflection or refraction.

One way to fix this is to create the cubemap dynamically, by rendering to a texture 6 times and using the generated textures to create a cubemap. As expected, this method is usually avoided for real-time rendering.
