Simple environment map

Just want to share a simple technique I used in my Christmas experiment this year. I was trying to create some image-based lighting, and I often found myself in a situation where I just needed a background. However, I don’t have enough Photoshop skill to make a custom one. I’ve always wanted to put more colours into my projects, and I like having a beautiful gradient background instead of just a plain colour. So I discovered this trick: I want the colours in my work to look natural, so why not grab them from nature itself?

It’s simple: search images for ‘sky gradient’ and you’ll get tons of beautiful colour gradients:


Some of them have clouds, but you can just apply a massive blur and they’ll look smooth.


After getting this, there’s a really easy way to do image-based lighting that doesn’t require a cube map. I found an amazing article here:

and this super useful shader:

vec2 envMapEquirect(vec3 wcNormal, float flipEnvMap) {
  //I assume the envMap texture has been flipped the WebGL way (pixel 0,0 is at the bottom)
  //therefore we flip wcNormal.y, as acos(1) = 0
  float phi = acos(-wcNormal.y);
  float theta = atan(flipEnvMap * wcNormal.x, wcNormal.z) + PI;
  return vec2(theta / TwoPI, phi / PI);
}

vec2 envMapEquirect(vec3 wcNormal) {
  //-1.0 for a left-handed coordinate system oriented texture (the usual case)
  return envMapEquirect(wcNormal, -1.0);
}

With this you only need the normal to get the reflected colour from an image. Combine it with the gradient image we got and you can produce a very natural-looking environment lighting.
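For reference, the same mapping is easy to port to plain JavaScript to sanity-check UV values on the CPU. This is my own port of the shader above; `PI` and `TwoPI` become `Math` constants:

```javascript
// CPU-side port of the equirectangular lookup above.
// Maps a world-space normal to [0,1]x[0,1] UV coordinates.
const PI = Math.PI;

function envMapEquirect([x, y, z], flipEnvMap = -1.0) {
  const phi = Math.acos(-y);
  const theta = Math.atan2(flipEnvMap * x, z) + PI;
  return [theta / (2 * PI), phi / PI];
}
```

For example, a normal pointing straight down the z axis (0, 0, 1) lands in the middle of the texture.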

Ultimately you’ll probably want to go for a cube map + PBR, but I think this could be useful for smaller projects.

And lastly here is the link to my Christmas experiment this year:


and the source code is here:


the SoundCloud loader I was using is from here:
Really glad to be part of it again, and merry Christmas everyone!

Codevember and ray marching



Still can’t believe that I made it, but really glad I did. I decided to do this because I felt I never pushed myself hard enough, and I wanted to challenge myself. It was easier at the beginning, when you have a lot of ideas from the past. Then as time goes on you start to run out of ideas, and that’s where the panic starts. I want to say thank you to all my friends who provided me with ideas and inspiration. During this month every day went like this: finish the experiment of the day just before going to bed, then start thinking about what to do the next day. It was really intense, but it helped me a lot. In order to create work quickly I need to gather tools first, and save more tools while building them. The more tools you have, the quicker you can build.


Ray marching

A great part of my Codevember experiments are ray marching. I really like it. It was a huge mystery to me and seemed super complicated. I was lucky to come across this live coding tutorial just before Codevember started.

I’m so glad that my French hasn’t completely gone, so I was still able to understand most of it. It’s a really wonderful tutorial that guides you step by step through building your first ray marching experiment. Once you’ve finished it you’ll be able to start understanding the code on shadertoy.com better. And I need to mention this amazing blog post by iq. It has all the basic tools you need. With these you are already able to create some amazing stuff.

I really like ray marching. It’s really simple: everything happens in one fragment shader. Every effect you need is just one function call, e.g. AO, shadows, lighting (diffuse, specular), spherical reflections, etc. For me it feels much simpler and easier to deal with. Besides, there are already tons of tools on Shadertoy that you can use. All you need is to figure out what arguments to pass in, and most of the time they are really simple.
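The core loop itself is tiny. Here is a sketch of the sphere-tracing idea in plain JavaScript rather than GLSL, just to show the mechanics; the SDF, step count and far clip here are my own illustrative choices, not from any particular Shadertoy example:

```javascript
// Minimal sphere-tracing loop: march a ray forward by the distance
// returned from a signed distance function (SDF) until we hit the
// surface or give up.
function sphereSDF(p, radius = 1.0) {
  // distance from point p to the surface of a sphere at the origin
  return Math.hypot(p[0], p[1], p[2]) - radius;
}

function rayMarch(origin, dir, sdf, maxSteps = 64, epsilon = 1e-4) {
  let t = 0;
  for (let i = 0; i < maxSteps; i++) {
    const p = [origin[0] + dir[0] * t, origin[1] + dir[1] * t, origin[2] + dir[2] * t];
    const d = sdf(p);
    if (d < epsilon) return t; // hit: distance travelled along the ray
    t += d;                    // safe step: we can't overshoot the surface
    if (t > 100) break;        // far clip
  }
  return -1;                   // miss
}
```

The nice property is that `d` is always a safe step size, which is why the loop converges so quickly on simple scenes.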

Here are some other interesting videos related to ray marching:

And some useful links:



My latest project has finally gone live. I spent some effort working on the liquid/fluid look of the bubble and found some interesting techniques. I’m really thankful to the people who created these techniques and were willing to share them with everyone.

Here is a demo link to the bubble; you can click on the bubble to launch a wave as well.




Animating the bubble and its normals in the shader

The first task was to animate the bubble and get the right normals. The way we did it was to put everything in the vertex shader and then calculate the normals from the vertex positions. This approach made it really easy for us when we decided to add the ripples to the bubble: we only need to calculate the position offset caused by the ripple and add it to the vertex position, and the normal is updated along with it.
In order to do so, instead of putting the position of the vertex into the positions buffer, I put the rotation around x, the rotation around y and the size of the bubble. To get the position of the vertex you can use this function:

vec3 getPosition(vec3 values) {
  // values = (segment index x, segment index y, bubble size)
  // numSeg is the number of segments, passed in as a uniform
  float rx = values.y / numSeg * PI - PI;
  float ry = values.x / numSeg * PI * 2.0;

  vec3 pos = vec3(0.0);
  pos.y = cos(rx) * values.z;
  float r = sin(rx) * values.z;
  pos.x = cos(ry) * r;
  pos.z = sin(ry) * r;
  return pos;
}
Then I use this position to get the 3D noise (I’m using this noise function) and the ripple height.
In the end, the final position of the vertex is the original position (sphere) + noise + ripple.
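Here is the same mapping as a quick JavaScript port (my own, for checking vertex positions on the CPU; `numSeg` is passed in explicitly instead of as a uniform):

```javascript
// CPU-side port of getPosition() above: recover a sphere vertex from
// (segment index x, segment index y, bubble size).
function getPosition([vx, vy, vz], numSeg) {
  const rx = (vy / numSeg) * Math.PI - Math.PI;
  const ry = (vx / numSeg) * Math.PI * 2.0;
  const pos = [0, 0, 0];
  pos[1] = Math.cos(rx) * vz;  // y
  const r = Math.sin(rx) * vz; // radius of the current ring
  pos[0] = Math.cos(ry) * r;   // x
  pos[2] = Math.sin(ry) * r;   // z
  return pos;
}
```

For example, the last ring (`vy = numSeg`) collapses to the top pole of the sphere.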

Because we are using rotation X and rotation Y to get the vertex position, we can get a neighbour’s position just by offsetting rotation X and rotation Y. With the positions of the neighbours, we can calculate a simple normal using the cross product. The shader code looks like this:

vec3 currPos = getFinalPosition(position); // getPosition() + noise + ripple
vec3 rightPos = getFinalPosition(position + vec3(1.0, 0.0, 0.0));
vec3 bottomPos = getFinalPosition(position + vec3(0.0, 1.0, 0.0));
vec3 vRight = rightPos - currPos;
vec3 vBottom = bottomPos - currPos;
vec3 normal = normalize(cross(vBottom, vRight));
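The same cross-product trick can be sketched on the CPU (these helper names are my own, not the project code):

```javascript
// Approximate a surface normal from two edge vectors to neighbouring
// vertices, exactly like the shader snippet above.
function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
function cross(a, b) {
  return [a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0]];
}
function normalize(v) {
  const l = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / l, v[1] / l, v[2] / l];
}
function faceNormal(curr, right, bottom) {
  return normalize(cross(sub(bottom, curr), sub(right, curr)));
}
```

The argument order of the cross product decides which side the normal faces, so it's worth checking it against a known case.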

This way you get an animated bubble and a normal for it.



Distortion with background image

The second task was to distort the background behind the bubble. I started with the refract function, but that requires a cube map and we only had an image. So I started looking around for a simpler way to create the refraction effect, and then I found this article:
In short, you can create a refraction effect just by using normal.xy as a displacement map.

vec2 newUV = uv + normal.xy * distortionRate;
gl_FragColor = texture2D(texture, newUV);

With this you can achieve a good simulated refraction effect with just a background image instead of a cube map.




At the beginning of the project we started with traditional diffuse and specular lighting. It works, but the bubble lacked one important feature: reflection. I went back to search for possible solutions and then found this amazing article:
Using this effect adds a lot to the bubble and gives it a very strong glassy/fluid look, which is exactly what the client was after.



Small Details

We also added 2 small details to the bubble:

  1. Distorted a bit more toward the edge of the bubble.
  2. Darker on the edge of the bubble.

These 2 work in the same way, and I needed a value that changes from the centre of the bubble to the edge. A quick way to get it is the dot product of the normal and the vector (0.0, 0.0, 1.0). Once you have this you can add it to the distortionRate and get a different distortion between the centre and the edge.
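As a sketch, that edge value could look like this in JavaScript (the power curve is my own addition to show how the falloff might be sharpened; it is not necessarily what the project used):

```javascript
// Edge factor: 0 where the surface faces the camera (0, 0, 1),
// rising to 1 at the silhouette. Useful for both the extra edge
// distortion and the edge darkening described above.
function edgeFactor(normal, power = 2.0) {
  // dot(normal, (0, 0, 1)) is just the z component of the normal
  const facing = Math.max(0, normal[2]);
  return Math.pow(1.0 - facing, power);
}
```

You would then scale the distortion rate (or darken the colour) by this factor per fragment.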



It was really fun and a good learning process to go through all these steps to create the final look. I believe there are more ways to achieve it, but we’re really happy with this one. I’m also trying out and learning cube maps now. Next time I might try recreating this fresnel effect with a cube map. And a dynamic cube map could be an interesting effect to add on top of this.

One of my Codevember experiments is based on this technique. I only removed the noise animation and replaced the lighting map with a much simpler one (just a glow on the edge).



and also the case study of the whole project on Stink Digital’s site:

Reasons to be Creative 2015

I wasn’t planning to write this, but somehow I feel I should write something about it. So here it is, 2 months after the festival.

Reasons to be Creative

This year’s RTBC felt really different to me. Not only because it was the first time for me to give a full-session talk, but also because of meeting all the amazing people. I felt like a “guest” at RTBC for the last 2 years, but this time I felt at home. During these 3 days I felt really relaxed (apart from my talk) and enjoyed all the talks. Most of all, I met a lot of friends and made a lot of new ones as well. It’s a weird feeling to finally meet some Twitter friends in real life; it makes things much more real than just messages showing up on Twitter. You guys are totally awesome and I really enjoyed having conversations with you.

I’ve always enjoyed the talks at RTBC. It just feels different to me. I like the mixture of dev talks and designer talks; they are all equally inspiring to me. I always feel motivated after the 3 days, with tons of ideas that I want to make. This year was no exception, and there was one special talk: the one by Stacey Mulcahy. She is an amazing speaker and builds amazing stuff. The thing that touched me the most was the Young Game Makers. It’s such a wonderful idea, and it got me thinking about doing something for kids as well.

After becoming a father, I kept thinking: what can I do for my kids? I have some skills; what can I do with them? That’s what made me start building all these small experiments. I wish for my children to see me as a maker or a creator, rather than someone just sitting in front of the computer hitting the keyboard all day. I want them to understand that the computer is just a tool to help you build and create, and that we should focus more on the things we build and the idea/story behind them. They might be too young to get the idea, but I’ll keep doing this. Not only because I want to give them this idea, but also because I enjoy all these moments of building, testing and finally getting the kids to play with the result. When I saw the Young Game Makers project, I was really excited. I saw a possibility that I might be able to bring my work and experiments to more kids and help them build things. I still don’t have much idea of how to make it happen, but at least now I have a goal.


On Stage

Having my first full session was such a wonderful experience. To be honest, I feel it’s actually much easier than the elevator pitch 😀 Having a full hour gives me more room to make mistakes. However, I was still nervous to death. I rehearsed like crazy from the night before until the very morning. I actually felt less nervous once I started talking. I have to thank all my friends who gave me advice; you were totally right about everything. Nobody understands the thing you are talking about better than you. Once I started talking, it felt just like working through my process again. The other useful piece of advice I got is that you can never get rid of your nervousness, so just accept it and don’t try to fight it. I found this very useful, and it actually helped me relax before the talk. I know I was still quite nervous on stage; that’s why I finished about 5 minutes early. But that was good as well, since I could take some Q&A. And I have to say I was really, really flattered that one of the questions was about the Kuafu project I started last year. I’m so glad people still remember it, and also embarrassed that I haven’t worked on it for a long time. But now I’ve made it my project for next year. I’ll bring it to a more presentable state.




The project : Dark forest

I got the idea for this project just after John offered me the chance to speak. I had a quite clear goal when I started: I knew I wanted to learn the flocking behaviour, I knew I wanted to make some small installations in my backyard, and I knew I wanted to test projection on the grass. In the end I made it, which is very important to me: I set a goal and achieved it. The result was something I didn’t expect at first. I didn’t know about the synchronised flashing behaviour, and I didn’t expect that I could find a way to simulate it. I expected the projection on the grass to look better than it actually did. I have to admit it was all these unexpected successes and failures that made the best memories for me. Looking back at it after 2 months, I see lots of room for improvement, but I still enjoy this project very much. And now I really like setting a goal, working my way toward it and documenting the process. If you are interested in this project, I’ve put everything here:


I want to say thank you to all the people who helped me on this project. It meant a lot to me!

So, a bit of random stuff, but I’m glad I made it to RTBC this year and met all the amazing people. I’m glad I made the project, and now I can move on to the next one!

Dark Forest – part 2

Here is part 2 of this project. I started to explore different materials to project on. My first try was on a wall in my backyard. It does look slightly better at a big scale, but it’s not very interesting. So I moved on to the grass. It creates some very different visuals, and I like how it makes the grass shine. However, I don’t have the right equipment to hang the projector high enough to cover a larger area, which is a bit of a shame because I think it would look much better. Also, it’s not projecting from the top; it’s at an angle, which sometimes makes the particles look like short lines instead of dots. This depends on where you stand as well.

Just as I was trying to move the projector, I accidentally projected the particles onto the trees, and that was something really interesting. It looks very similar to the fireflies I saw in Taiwan. The leaves give it a really different view and serve as randomness in the system. I really like the result. Here is a short video of the experiments I’ve made:


I got another related idea when I tested the projection on the grass: I wanted to make an interactive version for my kids to play with. The idea is simple: the fireflies will gather where you stand. I started a new branch in my code, kept the particles but removed all the trees and made the camera stay at the front. Then I connected it to a Kinect so I could capture the position of my kids. Here I tried OpenCV with the Kinect for the first time. The performance and accuracy are amazing. I was using the findContours method and it returned a very impressive result:


The next step was to remap the position into the flocking system and create an attractor force to pull the particles closer to that point. I had great fun building this, not only because I was playing with OpenCV and the Kinect, but also because my kids’ reaction to it was just wonderful. During the weekend they kept asking me if they could play with the fireflies again that night. And after I made it, my daughter just started dancing with the particles. It’s one of the best memories of my life. Here is a short video of that night:

I’ve made another test projecting on my chalkboard as well:


Now I’m working on finishing the project. I’ve started adding the terrain, trees and the background. Here are some WIP screenshots:






I am really excited about it and glad to see things finally coming together. I’ll keep working on it and hope to see you at Reasons to be Creative!


Dark Forest – part 1

Hi, here is part one of this project, which is also going to be part of my talk at Reasons to be Creative this year.


I got this idea from this beautiful photo:


I fell in love with it right away and wanted to do something with it. The first idea I had was a flocking experiment; I’ve always enjoyed flocking and have wanted to do it myself for a long time. The picture gives me the feeling of fireflies swarming in the forest, so I decided to create a flocking system of fireflies flying among the trees.


I started building my first experiment with the particle stream I made a while ago, adding some cylinders as placeholders for trees that the particles fly around. Here’s what I got:


Then I started wondering how it would look if I projected it on my chalkboard wall. I also thought it would be interesting if the trees were actually drawn on the wall instead of being rendered in 3D, just to give it a slightly different feeling. I render the trees with the background colour, so when the particles go behind them they are blocked and show the colour of the background. I wasn’t 100% sure this would create the surrounding feeling I wanted, but I gave it a shot. Surprisingly, it works quite well.

I was really happy with the result and decided to take it to the next step.


A couple of months later I came across a video talking about the synchronising behaviour of fireflies. I was really surprised and also excited about it. I thought it would be very fun to try to reproduce this behaviour in my project. I started searching for videos, but there aren’t too many, until I found this one:

The way they synchronise together is just unbelievable. I went back online and searched for ways to recreate this synchronisation. They were not too hard to find. I tried several; they work but are not very satisfying:


The first one doesn’t really sync completely; they kind of form into groups. The second one synchronises too perfectly, which is obviously not the case in the real world. So I read more articles about firefly synchronisation and finally found this approach: imagine each firefly keeps a circular period. Each firefly checks its neighbours within a distance. If it senses its flashing cycle has fallen behind its neighbours’, it speeds up; otherwise it slows down. Just 2 simple rules. This video demonstrates how it works:


This time I was really satisfied with the result. Of course, there are some tricks to make it less uniform, such as: if the period difference between a firefly and its neighbour is smaller than a certain value, stop adjusting its speed. This makes sure they won’t end up perfectly synchronised. The other reason I love this solution is that it’s very similar to how flocking works: you don’t need to know the overall speed, you just need to focus on your neighbours and adjust yourself. It’s also perfect for my system because it can be implemented the same way as the flocking behaviour. Here is the result:
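For illustration, here is a minimal JavaScript sketch of that rule. Note this is my own simplification of the approach described above: I nudge each firefly’s phase directly instead of its period/speed, and the dead zone plays the “stop adjusting when close enough” role that keeps the sync imperfect:

```javascript
// One simulation step: each firefly advances its phase, then nudges it
// toward the mean phase of the others (its "neighbours" in this tiny
// sketch) unless the difference is already inside the dead zone.
function stepFlies(flies, dt = 0.016, adjust = 0.05, deadZone = 0.01) {
  // compute all nudges first so the update is simultaneous
  const nudges = flies.map(f => {
    const others = flies.filter(o => o !== f);
    const mean = others.reduce((s, o) => s + o.phase, 0) / others.length;
    const diff = mean - f.phase;
    return Math.abs(diff) > deadZone ? diff * adjust : 0;
  });
  flies.forEach((f, i) => { f.phase += f.speed * dt + nudges[i]; });
}
```

Run it for a few hundred frames and the phase spread collapses down to roughly the dead-zone width, but never to zero.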


With this I am ready for the next step: projection testing in my backyard. I want to bring it out of the screen and see if it works better projected on the grass.

Harpapong – Challenge of 400 pixels

A couple of months ago my friend Owen approached me with this project. It is based on his great Harpa Pong work from last year. The basic idea is that they turned the facades of the Harpa concert hall in Reykjavík into a huge canvas by putting an LED light in each window. Last year they created a pong game on this enormous canvas that users could play with their phones. This year, during Sónar Reykjavík 2015, they wanted to put some audio visualisations of the music from the main stage on it, and Owen asked me if I would like to make one of the visualisations. I was really excited about the idea and said yes right away. And then came the challenge: there are only about 400 pixels per facade. So how big exactly is this canvas? About this big: just that tiny thing in the centre.

This is definitely the smallest canvas I have ever worked on. I’m used to creating visuals on a big canvas, but now suddenly we only had 400 pixels, which was a whole new challenge for me. At first I tested basic geometries such as lines and rectangles. But at the same time I was trying out some ripples for other projects, and I wondered what would happen if I put the ripples on a canvas of this size. Would they still be recognisable? This is the ripple I made:


When there’s a beat, it triggers a wave. In the fragment shader I add all the waves together and, based on the height of the map, I map it to different colours, which I get randomly from ColourLovers.
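As a rough reconstruction (my own, not the project shader), the wave summing could look like this: each beat records a wave origin and start time, and the height at a pixel is the sum of the expanding rings, which then indexes into the palette:

```javascript
// Sum the contribution of every beat-triggered wave at pixel (x, y).
// Each wave is an expanding ring; a pixel near the ring's current
// radius gets a soft crest, everything else gets nothing.
function rippleHeight(x, y, waves, time, speed = 1.0, width = 0.3) {
  let h = 0;
  for (const w of waves) {
    const dist = Math.hypot(x - w.x, y - w.y);
    const radius = (time - w.start) * speed; // ring expands over time
    const d = Math.abs(dist - radius);
    if (d < width) h += Math.cos((d / width) * Math.PI * 0.5); // soft crest
  }
  return h;
}
```

On a 400-pixel canvas you would evaluate this once per pixel per frame and pick a palette colour from the resulting height.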

I put it into the platform tool Owen built and this is what I got. It’s hard to recognise the circle, but the movement and the colour changes are really interesting.



So that’s my contribution to this project. You can check the live demo here:


The project page: harpapong.com, and a short film about the project: https://vimeo.com/122900808

Again, thanks Owen for inviting me to this project. I am really proud to be part of it, and I had a great time creating visuals and playing on such a small canvas.


Maps, portrait and Chalkboard

Just playing with maps and portraits, inspired by the amazing work of Ed Fairburn.

There’s not too much on the code side. I just created a flood fill function, so the program picks a random pixel and then fills the region around it. Although it feels more like photoshopping: combining the map image and the portrait using masks and blend modes. The code itself doesn’t alter the image at all, but I really enjoy watching the image being generated. Then I started to draw the map on my chalkboard wall and project these results onto it, which looks really good.
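A minimal flood fill along those lines could look like this (my own sketch on a 2D value grid, not the project code):

```javascript
// Iterative 4-way flood fill: starting from a seed pixel, replace every
// connected pixel that shares the seed's original value.
function floodFill(grid, sx, sy, fillValue) {
  const h = grid.length, w = grid[0].length;
  const target = grid[sy][sx];
  if (target === fillValue) return grid; // nothing to do
  const stack = [[sx, sy]];
  while (stack.length) {
    const [x, y] = stack.pop();
    if (x < 0 || y < 0 || x >= w || y >= h || grid[y][x] !== target) continue;
    grid[y][x] = fillValue;
    stack.push([x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]);
  }
  return grid;
}
```

In the actual effect the “fill” would copy pixels from the portrait through a mask rather than write a flat value, but the region-growing logic is the same.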




One of my colleagues said it would be interesting if the program could generate the city shape automatically, which reminded me of the old Substrate thing right away. I did a quick test and the result is very interesting as well. These are more like generative art to me; it still uses the portraits but can generate quite a different result each time. There are some more pictures here.



Touch table

I built this projection / touch table a while ago but never got a chance to write about it until now. I got the idea last year, at a time when I needed a working table for myself, so I thought: why don’t I just build one for both working and projection? The idea is simple: make the top of the table removable and keep the width/height ratio at 16:9, the aspect ratio of my projector.



Building the table

For the frame I used some pieces of wood left over from my IKEA shelves, and I found a big, thick piece of wood in my backyard which was perfect for the top. It took me about 2 days to build, and I didn’t have the proper tools for this; it would be much faster with the right tools. And of course the quality would be much better too 😀

Projection and Touch

When I want to project, I just remove the top and cover the table with a sheet. The touch works by putting a Kinect under the table facing straight up: when I press on the sheet, the Kinect can capture the depth difference at the press point. It’s not a complicated concept, just a lot of tweaking and calibration, e.g. finding the right distance range to detect, ignoring frames, noise reduction, etc. However, one thing matters a lot: the sheet. I was using a bed sheet; it works, but it’s not very flexible, so when you press you also pull down quite a big area, and therefore it’s not very accurate. Later I found a really flexible piece of cloth that creates a small point when you press, which is perfect for position detection.
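The press detection can be sketched like this (my own reconstruction; the depth units and thresholds are made up for illustration): compare each depth pixel with a baseline frame of the untouched sheet and keep the strongest press inside a valid range:

```javascript
// Find the touch point in a depth frame. `depth` and `baseline` are
// flat arrays of per-pixel distances from the Kinect; a press pulls the
// sheet closer to the camera, so baseline - depth grows at the press
// point. The min/max range rejects noise and the sheet's own sag.
function findTouch(depth, baseline, width, minPress = 15, maxPress = 60) {
  let best = null, bestDiff = 0;
  for (let i = 0; i < depth.length; i++) {
    const diff = baseline[i] - depth[i];
    if (diff > bestDiff && diff >= minPress && diff <= maxPress) {
      bestDiff = diff;
      best = [i % width, Math.floor(i / width)]; // pixel (x, y)
    }
  }
  return best; // [x, y] of the press, or null when nothing is pressed
}
```

The returned pixel would then be remapped into screen space and used as the virtual mouse position.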


The next step is just to use this point as a virtual mouse. Theoretically it could detect multitouch, as long as the sheet can show the different points you press, but it would also need an algorithm to find all the different points. I haven’t tried OpenCV for this yet; maybe there’s something to use.



It’s a simple and silly idea, and the table is really shaky, but I really enjoy it. I especially like the touch feeling; it’s very satisfying. Building the table itself was a lot of fun too. I really enjoy building real stuff that I can actually touch. It’s very different from code, but both are very interesting to me.