Getting started with Cocos2d and Swift

Everybody is super excited about the Swift programming language, right? Apple has provided the Playground for getting acquainted with Swift, but there is an even better way – by making a game.

If you're already familiar with Cocos2d and want to make games using Cocos2d and Swift, then walk through these simple steps to get started with the Cocos2d project template you already have in your Xcode.

1. Create a new Cocos2d project: This is nothing new for you guys.

2. Give it a name: Why am I even bothering with these steps?

3. Add Frameworks: I'm not sure if this is a temporary thing, but for some reason I was getting all sorts of linking errors for missing frameworks and libraries. These are all the iOS frameworks that Cocos2d uses internally. Click on the photo to enlarge it and see which frameworks you actually need to add.

Make sure you tidy them up into a group, because the Xcode beta will add them at the top level of your nice structure. Again, I guess this is also a temporary thing, as Xcode 5 already knows how to put them inside a nice directory.

4. Remove all your .h/.m files: Next, remove all the .h/.m files that Cocos2d creates for you. They should be AppDelegate.h/.m, HelloWorldScene.h/.m, IntroScene.h/.m and main.m.

Yes, you heard it right. We don't need main.m anymore, as Swift has no main function as an entry point. Don't remove any Cocos2d source code, though. A good rule of thumb: if it's inside the Classes directory, it's probably your code.

5. Download the repository: https://github.com/chunkyguy/Cocos2dSwift

6. Add Swift code: Look for Main.swift, HelloWorldScene.swift and IntroScene.swift in the Classes directory and add them to your project with the usual drag and drop. While you're adding them, you should get a dialog box from Xcode: 'Would you like to configure an Objective-C bridging header?'

Say 'Yes'. Xcode is smart enough to guess that you're adding Swift files to a project that already has Objective-C and C files.

The bridging header makes all your Objective-C and C code visible to your Swift code. In this case, that's the Cocos2d source code.

7. Locate the bridging header: It should have a name such as 'YourAwesomeGame-Bridging-Header.h'. In it, add the two lines needed to bring the Cocos2d code into our Swift code (shown below).

In case you have some other custom Objective-C code you would like your Swift code to see, import it here.
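
For reference, with the stock Cocos2d template the bridging header would contain something like this (a sketch; the exact header names depend on your Cocos2d version):

  /* YourAwesomeGame-Bridging-Header.h */
  #import "cocos2d.h"
  #import "cocos2d-ui.h"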

8. That's it, folks! Compile and run.

There are some trivial code changes that you can read about in the repository's README file.

Have fun!


Experiment 14: Object Picking

This article is part of the Experiments with OpenGL series

The source code is available at https://github.com/chunkyguy/EWOGL

Every game at some point requires the user to interact with the 3D world, perhaps in the form of a mouse click firing bullets at targets, or a touch drag on a mobile device rotating the view on the screen. This is called object picking, because we are picking objects in our 3D space using a mouse or touch screen, starting from device space.

The way I like to picture this problem is that we have a point in the device coordinate system and we would like to ray trace it back to some relevant 3D coordinate system, like world space or the model space.

In the fixed-function pipeline days there was a gluUnProject function which helped with this ray tracing, and most developers still like to use it. Even GLKit provides a similar GLKMathUnproject function. Basically, this function takes a modelview matrix, a projection matrix, a viewport (the device coordinates) and a 3D vector in device coordinates, and it returns the vector transformed to object space.

With modern OpenGL, we don't need to use that function. First of all, it requires us to provide all the matrices separately, and secondly, it returns the vector in model space, but our needs could be different: maybe we need it in some other space, or we don't need the vector at all, just a bool flag to indicate whether the object was picked or not.

At the core, we just need two things: a way to transform our touch point from device space to clip space, and the inverse of the model-view-projection matrix to transform it from clip space to the object's model space.

Let's assume we have a scene with two rotating cubes, and we need to handle touch events on the screen and test whether the touch point collides with any of the objects in the scene. If a collision does happen, we just change the color of that cube.

The first problem is to convert the touch point from the iPhone's coordinate system to OpenGL's coordinate system, which is easy as it just means we need to flip the y-coordinate:


  /* calculate window size */
  GLKVector2 winSize = GLKVector2Make(CGRectGetWidth(self.view.bounds), CGRectGetHeight(self.view.bounds));

  /* touch point in window space */
  GLKVector2 point = GLKVector2Make(touch.x, winSize.y-touch.y);

Next, we need to transform the point to normalized device coordinates, i.e. to the range [-1, 1]:


  /* touch point in viewport space */
  GLKVector2 pointNDC = GLKVector2SubtractScalar(GLKVector2MultiplyScalar(GLKVector2Divide(point, winSize), 2.0f), 1.0f);

The next question we need to tackle is how to calculate a 3D vector from device space, as the device is a 2D space. We need to remember that after the projection transformation is applied, the depth of the scene is reduced to the range [-1, 1].
So, we can use this fact and calculate the 3D positions at both depth extremes, the near and the far plane.


  /* touch point in 3D for both near and far planes */
  GLKVector4 win[2];
  win[0] = GLKVector4Make(pointNDC.x, pointNDC.y, -1.0f, 1.0f);
  win[1] = GLKVector4Make(pointNDC.x, pointNDC.y, 1.0f, 1.0f);

Then, we need to calculate the inverse of the model-view-projection matrix


  bool success = false; /* set to false if mvp is not invertible */
  GLKMatrix4 invMVP = GLKMatrix4Invert(mvp, &success);

Now, we have all the things required to trace our ray


  /* ray at near and far plane in the object space */
  GLKVector4 ray[2];
  ray[0] = GLKMatrix4MultiplyVector4(invMVP, win[0]);
  ray[1] = GLKMatrix4MultiplyVector4(invMVP, win[1]);

Remember that the values are in the homogeneous coordinate system; to convert them back to the human-imaginable cartesian coordinate system we need to divide by the w component.


  /* convert rays from homogeneous to cartesian coordinates */
  ray[0] = GLKVector4DivideScalar(ray[0], ray[0].w);
  ray[1] = GLKVector4DivideScalar(ray[1], ray[1].w);

We don't need the start and end of the ray; we need the ray in the form


 R = o + dt

where o is the ray origin, d is the ray direction and t is the parameter.

We calculate the ray direction as


  /* direction of the ray */
  GLKVector4 rayDir = GLKVector4Normalize(GLKVector4Subtract(ray[1], ray[0]));

Why is it normalized? We will take a look at that later.

Now that we have the ray in object space, we can simply do a sphere intersection test. We must already know the radius of the bounding sphere; if not, we can easily calculate it. For a cube, I'm assuming the radius to be equal to half the edge of any side.

For detailed information regarding a ray-sphere intersection test I recommend this article, but I’ll go through the minimum that we need to accomplish our goal.

Let the points on the surface of a sphere of radius r be given by


x^2 + y^2 + z^2 = r^2
P^2 - r^2 = 0; where P = {x, y, z}

All points on the sphere where the ray hits should obey


(o + dt)^2 - r^2 = 0
o^2 + (dt)^2 + 2odt - r^2 = 0
f(t) = (d^2)t^2 + (2od)t + (o^2 - r^2) = 0

This is a quadratic equation of the form


 ax^2 + bx + c = 0

And it makes sense, because every ray can hit the sphere at a maximum of two points: first when it enters the sphere and second when it leaves it.

The discriminant of the quadratic equation is calculated as


det = b^2 - 4ac

If the discriminant is < 0, there are no real roots and the ray misses the sphere.
If the discriminant is 0, there is one root and the ray just touches (is tangent to) the sphere.
If the discriminant is > 0, we have two roots: the usual case of the ray passing through the sphere.

In our equation the values of a, b, c are


a = d^2 = 1; as d is a normalized vector and dot(d, d) = 1
b = 2od
c = o^2 - r^2

So, if we have a ray with a normalized direction, we can simply test whether it hits our sphere in model space with:


- (BOOL)hitTestSphere:(float)radius
        withRayOrigin:(GLKVector3)rayOrigin
         rayDirection:(GLKVector3)rayDir
{
  float b = GLKVector3DotProduct(rayOrigin, rayDir) * 2.0f;
  float c = GLKVector3DotProduct(rayOrigin, rayOrigin) - radius*radius;
  printf("b^2-4ac = %f - %f = %f\n", b*b, 4.0f*c, b*b - 4.0f*c);
  return b*b >= 4.0f*c;
}
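
Putting it all together, a minimal usage sketch could look like this (it assumes the ray and rayDir values computed above, and a hypothetical cube whose bounding-sphere radius is 0.5):

  /* feed the traced ray into the hit test; 0.5f is an assumed radius */
  GLKVector3 rayOrigin = GLKVector3Make(ray[0].x, ray[0].y, ray[0].z);
  GLKVector3 rayDirection = GLKVector3Make(rayDir.x, rayDir.y, rayDir.z);
  if ([self hitTestSphere:0.5f withRayOrigin:rayOrigin rayDirection:rayDirection]) {
    /* the touch picked this cube, e.g. change its color */
  }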

Hope this helps you in understanding the object picking concept. Don't forget to check out the full code from the repository linked at the top of this article.


Experiment 13: Shadow mapping

This article is part of the Experiments with OpenGL series

The source code is available at https://github.com/chunkyguy/EWOGL

Shadows add great detail to a 3D scene. For instance, without a shadow it's really hard to judge how high an object really is above the floor, or how far away the light source is from the object.

Shadow mapping is one of the tricks for creating shadows for our geometry.

The basic idea is to render the scene twice. First, render from the light's point of view and save the result in a texture. Next, render from the desired point of view and compare the result with the saved one. If the part of the geometry being rendered falls in the shadow region, don't apply lighting to it.

During the setup routine we need two framebuffers, one for each rendering pass. The first framebuffer needs to have a depth renderbuffer and a texture attached to it. The second framebuffer is your usual framebuffer with color and depth.
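
As a rough sketch, the first framebuffer could be set up in ES3 with a depth texture attached, so the second pass can sample it with hardware depth comparison. The names and sizes here are illustrative, and the repository's exact setup may differ:

  /* shadow-map framebuffer: a depth texture attached to an offscreen FBO */
  GLuint shadowTexture, shadowFramebuffer;
  glGenTextures(1, &shadowTexture);
  glBindTexture(GL_TEXTURE_2D, shadowTexture);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, kShadowWidth, kShadowHeight,
               0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
  /* let the shader compare against the stored depth */
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);

  glGenFramebuffers(1, &shadowFramebuffer);
  glBindFramebuffer(GL_FRAMEBUFFER, shadowFramebuffer);
  glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowTexture, 0);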

The first pass is very simple. Switch the point of view to the light's view and just draw the scene as quickly as possible to the framebuffer. You just need to pass the vertex positions and the model-view-projection matrix that takes those positions from object space to clip space. The result is what some like to call the shadow map.

The shadow map is just a 1-channel texture where the blacker the color, the closer that part is to the light, and the whiter the color, the farther it is from the light. A totally white color means that part is not visible to the light at all.

With black and white I’m assuming the depth range is from 0.0 to 1.0, which is the default range I guess.

The second pass is also not very difficult. We just need a matrix that will take our vertex positions from object space to clip space as seen from the light's view. Let's call it the shadow matrix.

Once we have the shadow matrix, we just need to compare the z from the stored depth map with the z of our vertex's position.

Creating the shadow matrix is also not very difficult. I calculate it like this:


    GLKMatrix4 basisMat = GLKMatrix4Make(0.5f, 0.0f, 0.0f, 0.0f,
                                         0.0f, 0.5f, 0.0f, 0.0f,
                                         0.0f, 0.0f, 0.5f, 0.0f,
                                         0.5f, 0.5f, 0.5f, 1.0f);
    GLKMatrix4 shadowMat = GLKMatrix4Multiply(basisMat,
                            GLKMatrix4Multiply(renderer->shadowProjection,
                            GLKMatrix4Multiply(renderer->shadowView ,mMat)));

The basis matrix is just another way to move the range from [-1, 1] to [0, 1]. Or from depth coordinate system to texture coordinate system.

This is how it can be calculated:


GLKMatrix4 genShadowBasisMatrix()
{
  GLKMatrix4 m = GLKMatrix4Identity;

  /* Step 2: scale by 1/2
   * [0, 2] -> [0, 1]
   */
  m = GLKMatrix4Scale(m, 0.5f, 0.5f, 0.5f);

  /* Step 1: Translate +1
   * [-1, 1] -> [0, 2]
   */
  m = GLKMatrix4Translate(m, 1.0f, 1.0f, 1.0f);

  return m;
}

The first run was pretty cool on the simulator, but not so good on the device.

That is something I learned from my last experiment: try things out on the actual device as soon as possible.

I tried playing around with many configurations, like using 32 bits of depth precision, using bilinear filtering with the depth map texture, changing the depth range from 1.0 to 0.0, and many others I don't even remember. I should say, changing the depth range from the default [0, 1] to [1, 0] worked the best.

If you're working on something where the shadow needs to be projected far away from the object, like on a wall behind it, try this in the first pass, while creating the depth map:


glDepthRangef(1.0f, 0.0f);

And, in the second pass don’t forget to switch back to


glDepthRangef(0.0f, 1.0f);

Of course, you also need to update your shader code, so that 1.0 now means the fail case while 0.0 means the pass case. Or you could simply change GL_TEXTURE_COMPARE_FUNC to GL_GREATER.

But for the majority of cases, where the shadow stays attached to the geometry, the experiment's code should work fine.

Another improvement that can be added is PCF (Percentage Closer Filtering). It helps with anti-aliasing at the shadow edges.

The way PCF works is that we read the comparison result from the surrounding texels and average them.
Here's an example of the improvement you can expect.

Another shadow mapping method that I would like to try in the future is random sampling, where instead of fixed offsets we pick random samples around the fragment. I've heard this creates a soft shadow effect.

Shadow mapping is a field of much trial and error. There doesn't seem to be one solution that fits all; there are so many parameters that can be configured in so many ways. For example, you can try culling front faces, or setting a polygon offset to avoid z-fighting. In the end, you just need to find the solution that fits your current needs.

The code right now only works with OpenGL ES 3, as I was using many things available only in GLSL 3.00. But since the trick is fairly trivial, I will upgrade the code in the future to work with OpenGL ES 2 as well. I see no reason why it shouldn't work.


Experiment 12: Gaussian Blur

This article is part of the Experiments with OpenGL series

The source code is available at https://github.com/chunkyguy/EWOGL

This experiment was all about learning differences between the simulator and the actual device.

I started out with trying to apply the blur filter using the famous Gaussian filter.

I used the 1-dimensional equation for filtering and rendered the result in multiple passes.

In the original experiment, I had two framebuffers: the first was attached to a texture and the second to the screen output.

In the first pass I rendered the scene to the off-screen framebuffer. In pass 2, I rendered the output from the off-screen framebuffer's attached texture with blurring applied along the x axis. In pass 3, I again used the off-screen framebuffer's attached texture as input and rendered to the on-screen framebuffer with blurring applied along the y axis.

This worked perfectly, as expected, on the simulator, but failed on the actual device.

The first thing I noticed was that GLSL on the device doesn't seem to work well with uniform arrays.

I had this declaration


uniform lowp float uf5k_Weight[5];

For some weird reason, it compiled without any problem but just didn't work in the real world. So, I simply unrolled it to:


uniform lowp float uf5k_Weight0;
uniform lowp float uf5k_Weight1;
uniform lowp float uf5k_Weight2;
uniform lowp float uf5k_Weight3;
uniform lowp float uf5k_Weight4;

And that resolved the problem of data flow between the app and the shader.
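
For reference, the five weights themselves can be computed on the CPU and pushed to the unrolled uniforms roughly like this (a sketch: the sigma value and the normalization are my assumptions, and program must be the currently bound GLSL program):

  /* 5 Gaussian weights for a separable blur (center tap + 4 offsets,
     each side tap used twice), normalized so they sum to 1 */
  float weights[5];
  float sigma = 2.0f; /* assumed blur spread */
  float sum = 0.0f;
  for (int i = 0; i < 5; ++i) {
    weights[i] = expf(-(float)(i * i) / (2.0f * sigma * sigma));
    sum += (i == 0) ? weights[i] : 2.0f * weights[i];
  }
  const char *names[5] = {
    "uf5k_Weight0", "uf5k_Weight1", "uf5k_Weight2", "uf5k_Weight3", "uf5k_Weight4"
  };
  for (int i = 0; i < 5; ++i) {
    glUniform1f(glGetUniformLocation(program, names[i]), weights[i] / sum);
  }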

The next problem I faced was with my 3-pass algorithm: the third pass never seemed to work. After trying many things out, I decided to do it in the ugliest way possible.

Now, instead of 2 framebuffers, we have 3: one on-screen and two off-screen with attached textures. In the first pass we render the scene to the first off-screen framebuffer. In pass 2, we render the result with blurring in one direction to the second off-screen framebuffer. And in the final pass 3, we render the result to the on-screen framebuffer with blurring applied in the other direction.

I know this is not the best way to do it, and I'll probably optimize it in the future. But for now this works fine for me.

One thing that I might try in the future is to overload the fragment shader to process the 2D blurring in a single pass.

This is the result I get on my 4th generation iPod Touch.


Experiment 11: Multisampling

This article is part of the Experiments with OpenGL series

The source code is available at https://github.com/chunkyguy/EWOGL

Multisampling is an anti-aliasing algorithm where the pixel information is calculated more than once, each time adjusting the pixel's center by some factor.

With OpenGL, it is just a few lines of code. The basic theory is that we use two framebuffers. We draw our geometry to the first framebuffer, which is multisampled, and then the resolved color information is written out to a second framebuffer. The second framebuffer is then written out to the screen.

It's very quick to implement and the output is very pleasing.

There is a bit of a difference in the function calls between the ES2 and ES3 contexts. That is one reason why I picked C++ for this experiment: there are two RenderingEngine classes, one dedicated to the ES2 way of implementing MSAA and one to the ES3 way.
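
For the ES3 path, the gist is roughly the following sketch (names are illustrative, and the framebuffer/renderbuffer objects are assumed to be generated already): render into a multisampled framebuffer, then resolve it into the second framebuffer with a blit.

  /* setup: a 4x multisampled color renderbuffer on the first framebuffer */
  glBindRenderbuffer(GL_RENDERBUFFER, msaaColorRenderbuffer);
  glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, width, height);
  glBindFramebuffer(GL_FRAMEBUFFER, msaaFramebuffer);
  glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                            GL_RENDERBUFFER, msaaColorRenderbuffer);

  /* per frame: draw the scene into msaaFramebuffer, then resolve */
  glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFramebuffer);
  glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFramebuffer);
  glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                    GL_COLOR_BUFFER_BIT, GL_NEAREST);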


Experiment 10: Edge Detection

This article is part of the Experiments with OpenGL series

The source code is available at https://github.com/chunkyguy/EWOGL

With the last experiment's success, we can now dive into more interesting shading effects. We can now render to textures and then use them to achieve some post-processing steps. To begin with, here's edge detection.

The edge detection algorithm I've used is based on the Sobel operator, which takes all the neighboring pixels into consideration and judges whether the current pixel crosses a certain threshold to be declared an edge. If yes, we color it black; otherwise we color it white.
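
For reference, the two 3x3 Sobel kernels look like this (shown here as plain arrays; the actual convolution happens in the fragment shader over the 3x3 neighborhood, and the gradient magnitude is compared against a threshold):

  /* Sobel kernels: Gx picks up horizontal gradients, Gy vertical ones */
  const float Gx[3][3] = {
    {-1.0f, 0.0f, 1.0f},
    {-2.0f, 0.0f, 2.0f},
    {-1.0f, 0.0f, 1.0f}
  };
  const float Gy[3][3] = {
    {-1.0f, -2.0f, -1.0f},
    { 0.0f,  0.0f,  0.0f},
    { 1.0f,  2.0f,  1.0f}
  };
  /* edge if sqrt(sx*sx + sy*sy) > threshold, where sx and sy are the two convolution sums */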

Since this is a post-processing effect, we need two framebuffers: one that renders to the screen, while the other renders to a temporary texture.

First we just draw our geometry to the texture. Next, we apply the post processing effect and draw the result to a quad on the screen.

Here’s my output


Experiment 9: Render to texture

This article is part of the Experiments with OpenGL series

The source code is available at https://github.com/chunkyguy/EWOGL

Rendering to a texture is a fairly common operation when dealing with intermediate or advanced effects in OpenGL.

The operation is fairly straightforward. We just need to attach a texture to the framebuffer, and all the rendering output goes to the texture.

Another possibility is to have a new offscreen framebuffer, and bind the texture to that framebuffer.
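
A minimal sketch of that offscreen setup (with illustrative names and an RGBA color texture; not necessarily the exact code in the repository):

  /* an offscreen framebuffer whose color attachment is a texture */
  GLuint offscreenTexture, offscreenFramebuffer;
  glGenTextures(1, &offscreenTexture);
  glBindTexture(GL_TEXTURE_2D, offscreenTexture);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
               GL_RGBA, GL_UNSIGNED_BYTE, NULL);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

  glGenFramebuffers(1, &offscreenFramebuffer);
  glBindFramebuffer(GL_FRAMEBUFFER, offscreenFramebuffer);
  glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                         GL_TEXTURE_2D, offscreenTexture, 0);
  if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle the incomplete framebuffer */
  }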

In this experiment I have two framebuffers, one with the clear color set to red and the other to blue. I render a triangle to the offscreen framebuffer and then render a textured quad on screen. Finally, I also render the triangle directly to the on-screen framebuffer.

As always, check out the code from the repository. I'm hoping to use render to texture soon to achieve a few post-processing effects.


Experiment 8: Cubemaps

This article is part of the Experiments with OpenGL series

The source code is available at https://github.com/chunkyguy/EWOGL

Cubemaps are another extension provided to us by OpenGL. Basically, a cubemap consists of 6 POT (power-of-two) images, where each image represents a face of a cube. We can use this texture to map onto our geometry.
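
Uploading the 6 faces is straightforward. A sketch, assuming you already have the decoded RGBA pixels for each face in faceData and each face is a square of faceSize pixels:

  /* upload 6 square face images into one cubemap texture */
  GLuint cubemap;
  glGenTextures(1, &cubemap);
  glBindTexture(GL_TEXTURE_CUBE_MAP, cubemap);
  for (int face = 0; face < 6; ++face) {
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_RGBA,
                 faceSize, faceSize, 0, GL_RGBA, GL_UNSIGNED_BYTE, faceData[face]);
  }
  glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);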

A few of the basic effects where cubemaps can be used are:

1. Skybox

A skybox refers to mapping the texture in a way that the entire environment can be rendered on the screen. To implement a skybox you just need to pass in the position vectors and sample the texture from the interpolated position vector per fragment.


In action

2. Reflection

Another important effect that can be easily achieved with cubemaps is reflection of the environment on the geometry. The way it works is that we calculate the reflection vector per position and use it to sample the texture. One of the easiest ways to calculate the reflection vector is to find the eye-to-position vector and reflect it around the surface normal at that position. Think of it as what the eye would see at any given position.


In action

3. Refraction

With cubemaps, implementing refraction is also quite simple. Just like reflection, we need to find a refracted ray. We can use Snell's law.

Let's say we have two media A and B, with refractive indices nA and nB respectively. Then the angle of incidence (tA) and the angle of refraction (tB) are related by:

sin(tA)/sin(tB) = nB/nA

All this calculation is already done by the GLSL function refract, which returns the refracted vector.


In action

The only downside of using cubemaps is that we are in fact just reading data out of a texture, so any other geometry in the scene is not considered. Say we have more than one teapot in the scene, and each teapot is moving around; then we won't get the image of the other teapot in the reflection or refraction.

One way to fix this is to create the cubemap dynamically, by rendering the scene to a texture 6 times and using the generated textures to build the cubemap. As expected, this method is usually avoided for real-time rendering.


Experiment 7: Lights

This article is part of the Experiments with OpenGL series

The source code is available at https://github.com/chunkyguy/EWOGL

So far in these experiments we've been using some form of shading all along; now is the time to talk about it.

For all these experiments I used the teapot model available in OBJ format from http://groups.csail.mit.edu/graphics/classes/6.837/F03/models/teapot.obj.

Without light or shading any object would appear to be in plain 2 dimensions; lights are the factor that makes us perceive an object in 3 dimensions.

1. Diffuse Shading.

This is one of the most basic shading equations. It just depends on the direction of the light and the angle it makes with the surface normal of the object.

I tried it both per vertex and per fragment. The results were almost identical because the mesh is a high-poly model.

2. ADS shading (Phong shading)

Another popular shading equation is Phong shading, which takes into account the location of the eye or the viewer.

Basically, it just adds the specular highlights, or glossiness. The way it works is that it calculates the angle between the reflection vector and the eye vector for each point. The reflection vector is the light vector (from the light to the point on the object) reflected about the surface normal.

There is a considerable difference between the per-vertex and per-fragment versions.

Another extension to Phong shading is Blinn shading, which exploits the fact that we don't need to calculate the reflection vector at all, as all we're considering is the angle between the reflection vector and the eye vector. We can just pick the half vector between the eye and light vectors and calculate the angle between the half vector and the normal.
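
To make the math concrete, here are the diffuse and Blinn specular factors written out with GLKit vector math. In the experiment this lives in the shaders; this CPU-side sketch just restates the formulas, shininess is an assumed material parameter, and N, L and E are the normalized surface normal, light and eye vectors.

  /* diffuse factor = max(dot(N, L), 0) */
  float diffuseFactor = fmaxf(GLKVector3DotProduct(N, L), 0.0f);

  /* Blinn specular factor = pow(max(dot(N, H), 0), shininess), with H = normalize(L + E) */
  GLKVector3 H = GLKVector3Normalize(GLKVector3Add(L, E));
  float specularFactor = powf(fmaxf(GLKVector3DotProduct(N, H), 0.0f), shininess);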

3. Toon shading.

Toon shading is an interesting style that restricts the color shades to a limited number of levels. This is easily achieved by quantizing the color values down to a limited range. One way to achieve this is to quantize the diffuse factor, which comes from the angle between the light vector and the surface normal at each point on the 3D surface, as sketched below.

It only looks good when done on a per-fragment basis.
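
A sketch of that quantization (levels is an assumed parameter, e.g. 4.0, and diffuseFactor is the usual max(dot(N, L), 0)):

  /* snap the diffuse factor to a fixed number of shade levels */
  float toonFactor = floorf(diffuseFactor * levels) / levels;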

4. Double sided shading.

Another important improvement we can add to the shading is double-sided shading. So far, we've only been calculating color for the front-facing side. That looks good for closed surfaces, but for 3D objects with hollow spaces, like the mouth of our teapot, we need to calculate the back color as well.

Calculating the back color is very easy: just invert the normal in the shader and pass two color values to the fragment shader, where the facing of the triangle can be checked with the built-in variable gl_FrontFacing.

5. Flat shading.

Flat shading was popular in the days when the GPU had very limited processing power. Back in those days, the color of a triangle was not interpolated, so each triangle had a single color.

It's surprisingly hard to emulate flat shading behavior on modern GPUs using shaders. I think it is impossible to do flat shading in GLSL 100. We need the flat qualifier to pass a value from the vertex shader to the fragment shader without interpolation, and the flat keyword is only available from OpenGL ES 3.0 + GLSL 300 onwards. So you need the latest hardware to do flat shading.

I was able to do this on an iPhone 5S, and the result was wonderful.


Experiment 6: Normal Mapping

This article is part of the Experiments with OpenGL series

The source code is available at https://github.com/chunkyguy/EWOGL

Normal mapping is one of the oldest tricks in the book of graphics programming. With the programmable pipeline, it has become even easier. In this experiment I deal with the basics of normal mapping.

At the core, we store all the normal information in texture form. The coordinate system of the normal texture is known as 'tangent space'.

Here’s a list of things we need to do in order to achieve normal mapping:

1. Read the texture RGB values. They will be in the range [0.0, 1.0]; scale them to [-1.0, 1.0].

2. Calculate the per-pixel tangent and binormal vectors. One trick to calculate these is to use the position vectors and texture coordinates (see the sketch after this list).

For each triangle ABC, we pick any two edges, say AB and AC. Let's call these edges E1 and E2.

E1 = B - A
E2 = C - A

If Ta, Tb and Tc are the texture coordinates at A, B and C, then

T1 = Tb - Ta
T2 = Tc - Ta

Let's say the tangent and binormal vectors are P and Q.

E1 = P * T1.x + Q * T1.y
E2 = P * T2.x + Q * T2.y

Solving for P and Q we get,

P = (E1 * T2.y - E2 * T1.y) / R
Q = (E2 * T1.x - E1 * T2.x) / R

where,

R = T1.x * T2.y - T1.y * T2.x

3. Calculate the normal as the cross product of the tangent and binormal vectors calculated above.

4. Create a TBN matrix for each fragment on the screen. TBN stands for Tangent, Binormal, Normal, which are just the axes of the tangent space coordinate system. Since it needs to be calculated per pixel, we do this in the fragment shader.
The TBN matrix helps transform vectors from tangent space to object space.

5. Once the normal is in object space, apply the usual lighting equations. The only thing to consider is that the light vector should be in object space as well. This task can be done on the CPU and the result passed to the shader as a uniform.
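
Here is the per-triangle computation from steps 2 and 3 written out with GLKit math (a sketch with illustrative names, not the repository's exact code):

  /* tangent (P), binormal (Q) and normal (N) for triangle ABC
     with texture coordinates Ta, Tb, Tc */
  GLKVector3 E1 = GLKVector3Subtract(B, A);
  GLKVector3 E2 = GLKVector3Subtract(C, A);
  GLKVector2 T1 = GLKVector2Subtract(Tb, Ta);
  GLKVector2 T2 = GLKVector2Subtract(Tc, Ta);

  float R = T1.x * T2.y - T1.y * T2.x;

  GLKVector3 P = GLKVector3DivideScalar(
      GLKVector3Subtract(GLKVector3MultiplyScalar(E1, T2.y),
                         GLKVector3MultiplyScalar(E2, T1.y)), R);
  GLKVector3 Q = GLKVector3DivideScalar(
      GLKVector3Subtract(GLKVector3MultiplyScalar(E2, T1.x),
                         GLKVector3MultiplyScalar(E1, T2.x)), R);
  GLKVector3 N = GLKVector3CrossProduct(P, Q);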

Here’s a normal map I applied to the cube.

You can observe that the cube appears to be bumpy.
