Case study – Night Eye

Here comes my annual blog post 😛 I should post more often, but I’ve been too busy/lazy.

Recently I was invited to take part in this year’s Christmas experiment. Along with my colleague Clement and my friend Bertrand, we came up with the idea of using abstract lines to recreate the shapes of the animals in the forest.

You can check out the experiment here :

Also, if you happen to have an HTC Vive, give it a try in the latest Chromium build. It’s a WebVR project as well.
In this project Clement took care of the line animation while I focused on the environment and the VR part. The following section is the case study of my part.


Initial Designs

Here are some pictures of the initial designs with different colour themes:




Reflection Matrix

The idea started with my experiments with reflections. I’ve always wanted to understand how to create a proper reflection, and I’d failed so many times. But then I found a really good tutorial on YouTube that walks through the process step by step. I highly recommend having a look if you are interested in implementing the reflection yourself. The tutorial is in Java, but it covers all the concepts and explains them clearly, plus the shader code doesn’t change (too much).
The only problem I had following this tutorial is the clipping plane, which I think WebGL doesn’t support (please correct me if I am wrong). So I ended up just using discard to do the simplest clipping. I’ve also found another really good presentation about rendering reflections in WebGL; it mentions other ways to clip, so you could have a look:



In order to get the best position and the right angle for the animals, we created a simple editor that let us place the animals and tweak the camera angles. It took a little extra time to build, but it saved us a lot of time tweaking. It’s always easier when you can visualise your settings live. After we selected the positions and camera angles in the editor, we just exported one big JSON to the project and it was done.
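The project’s actual JSON format isn’t shown here, so as a purely hypothetical sketch, the export step could look something like this (all field names are my own invention):

```javascript
// Hypothetical sketch of an editor export: serialise the placed animals and
// their camera settings into one big JSON blob. None of these field names
// come from the real project — they are for illustration only.
function exportScene(placements) {
  // placements: [{ name, position: [x, y, z], rotationY, camera: {...} }, ...]
  return JSON.stringify({ version: 1, placements }, null, 2);
}

const json = exportScene([
  { name: 'deer', position: [0, 0, -5], rotationY: 0.3,
    camera: { position: [0, 1.6, 0], lookAt: [0, 0.8, -5] } },
]);
```

The project then only needs to parse this blob at load time instead of hard-coding positions.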



In this project we wanted to try the latest WebVR API, which is really amazing! They’ve made it really simple to implement. The first step is to get the VRDisplay and set up the frame data holder:

// getVRDisplays() returns a Promise of the available displays
navigator.getVRDisplays().then((displays) => {
    vrDisplay = displays[0];
});
frameData = new VRFrameData();

Then, in the loop, you can get the data with:
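Per the WebVR 1.1 spec, that call is vrDisplay.getFrameData(frameData), which fills the holder in place each frame. A sketch of the per-frame update, assuming the vrDisplay and frameData set up above:

```javascript
// Sketch of the per-frame update (WebVR 1.1). getFrameData fills the
// frameData holder in place; afterwards frameData.pose and the per-eye
// matrices are current for this frame.
function onFrame() {
  vrDisplay.getFrameData(frameData);
  // ... render both eyes using frameData ...
  vrDisplay.requestAnimationFrame(onFrame);
}
```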



The rendering becomes really simple: WebVR now returns the view matrix and the projection matrix of both eyes to you.

setEye(mDir) {
    this._projection = this._frameData[`${mDir}ProjectionMatrix`];
    this._matrix = this._frameData[`${mDir}ViewMatrix`];
}

You can just pass them into your shader and you are ready to go. No need to set up the eye separation, no need to calculate the projection matrix. It’s as simple as that. And the code becomes really clean too: set the scissoring, set the camera, render, done.

const w2 = GL.width/2;

//	get VR data
vrDisplay.getFrameData(frameData);

//	left eye
scissor(0, 0, w2, GL.height);
camera.setEye('left');
render();

//	right eye
scissor(w2, 0, w2, GL.height);
camera.setEye('right');
render();


The next step is to present to the VR headset, which they’ve made really simple too:

vrDisplay.requestPresent([{ source: canvas }])

Then, at the end of your render call, add:

vrDisplay.submitFrame();

Then it’s on.

However, there is one more thing you need to do, but it’s a simple one: use vrDisplay.requestAnimationFrame instead of window.requestAnimationFrame in order to get the right frame rate.

The WebVR API is really awesome and easy to use. There are a couple of things to check, but I’m pretty sure you can group them all into one tool class. Here is a simple checklist for you:

  • Matrices : View matrix / Projection Matrix
  • Scissor for Stereo Rendering
  • VR frame rate
  • Present mode for VR
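Such a tool class could look roughly like this — a minimal sketch wrapping the checklist above; all class and method names here are my own, not from the project:

```javascript
// A minimal sketch of grouping the WebVR checklist into one class.
// All names are my own invention; the WebVR 1.1 calls it wraps
// (requestPresent, getFrameData, submitFrame, requestAnimationFrame)
// are from the spec.
class VRHelper {
  constructor(vrDisplay, frameData) {
    this._display = vrDisplay;
    this._frameData = frameData;
  }
  // Present mode for VR
  requestPresent(canvas) {
    return this._display.requestPresent([{ source: canvas }]);
  }
  // VR frame rate: use the display's rAF, not window's
  requestAnimationFrame(cb) {
    return this._display.requestAnimationFrame(cb);
  }
  // Refresh pose + matrices for this frame
  update() {
    this._display.getFrameData(this._frameData);
  }
  // Matrices: view / projection per eye ('left' or 'right')
  getEyeMatrices(eye) {
    return {
      projection: this._frameData[`${eye}ProjectionMatrix`],
      view: this._frameData[`${eye}ViewMatrix`],
    };
  }
  // Scissor rect for stereo rendering on a canvas of size w × h
  getScissor(eye, w, h) {
    return eye === 'left' ? [0, 0, w / 2, h] : [w / 2, 0, w / 2, h];
  }
  submitFrame() {
    this._display.submitFrame();
  }
}
```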

And don’t forget to check out the examples from — you’ll find everything you need to get started in there.


After rendering, the next step for us was to implement the controls. The interaction in our project is simple: press a button to go to the next step, and press another button to drag the snow particles with your hand. We are using the Gamepad API with WebVR. It’s really straightforward. Start with:


to get your gamepads. You might get multiple gamepads, so do a check to find the one you want. After that, the position and orientation are in gamepad.pose, and the button states are in gamepad.buttons. That’s everything you need to create the interactions.
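As a sketch, picking the controller out of navigator.getGamepads() and reading its state could look like this (gamepad.pose and gamepad.buttons are from the Gamepad API as extended for WebVR; matching on gamepad.displayId is how a pad is tied to a VRDisplay in that era of the spec):

```javascript
// Sketch: pick the VR controller from the navigator.getGamepads() array.
// The array can contain nulls; a pad belongs to a VRDisplay when its
// displayId matches (WebVR-era Gamepad Extensions).
function findVRGamepad(gamepads, displayId) {
  for (const pad of gamepads) {
    if (pad && pad.displayId === displayId && pad.pose) return pad;
  }
  return null;
}

// Each frame: position/orientation live in pad.pose, buttons in pad.buttons.
function isPressed(pad, index) {
  return !!(pad.buttons[index] && pad.buttons[index].pressed);
}
```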



It has been a lot of fun to work on this project with friends, and a good challenge too, learning and using the latest WebVR API. Again, as I mentioned, they’ve made the API so easy to use that I recommend everyone give it a try. I am really surprised by it, and also by how little time it took me to convert my old projects to WebVR. If you are interested in the code, it’s here:

So that’s it. I hope you enjoyed the read, and I wish you a merry Xmas and a happy new year!



P.S. Some behind-the-scenes from the commits 😀


A talk about my projects

Recently I was invited by an old friend to give a talk about my projects. Mostly I went through the talk I gave last year at Reasons to be Creative and added a bit more. Here is the video.

I get asked this question constantly: how can you make so many experiments? Especially as a father of 2 kids, where and how do you find the time? My answer is always the same: because of fear. The fear that if I don’t make these projects now, I won’t have any time in the future to make them. I hate to see these ideas die in my head. Having children completely changed my life, and without question it consumes a lot of my time. However, I find myself more productive than before I had the kids. Why? I think I used to believe I’d always have time to build things in the future, so I kept getting distracted by films, games, etc. But now, whenever I have some time and an idea, I grab whatever I can to make it. I’m not saying that you should have kids (or perhaps you should?! 😛), but rather that you should try to grab as much time as possible. Don’t expect to have long, uninterrupted stretches of quiet time to build your idea. Most of the time you won’t have them. Do small parts one by one; divide big tasks into small tasks.

The next question I get asked a lot is why I am making these projects. Because I want to have fun! How many times have you seen the amazing projects from and dreamed about making your own? How many times have you seen the crazy WebGL experiments on and wished you had a project like that? And how many times did you actually get that project? I think we all know the answer. I am not a patient person; I want to have fun building, experimenting, and learning things right away. I don’t count on client projects for that. Yes, it might be easier with client projects, because you’ll have a proper budget and time to do it, but you never know when one will come. I don’t want to wait; I want to have fun right now. I think everyone understands the importance of R&D, and everyone wants to do it because it is fun. And since you know it’s fun, there’s nothing stopping you from having that fun yourself.

There’s also one other reason I’m making these: my kids. They are old enough now to understand what I am doing. They live in a whole new generation and are going to use computers a lot in everyday life. Personally, I want to show them that the computer is just a tool that helps us create and form our ideas. Besides, I really enjoy making things for kids, because they give you the most straightforward feedback. If they love your stuff, they’ll just tell you and let you know you’ve done a good job, plus a big smile. I was really lucky to have the chance to set up an installation for the kids at the Geneva International School earlier this year. One of the best moments was a kid (aged 6) approaching me to tell me how much he enjoyed the installation and to thank me. During those 4 days in the school, my friend and I became superstars (A.K.A. the STARGAZERS). The smiles and feedback from these kids were truly amazing. They thanked me for bringing the installation to them, but the truth is that I thank them for giving me the motivation to build more.



These are the main points I wanted to share. Everyone has her/his own way of doing R&D, but the most important thing of all, as a friend once told me, is: have fun! You’ve got to find the joy in building these experiments/projects. Fun is the only thing that will keep you doing it. Find the stuff that inspires you, no matter where it comes from. It could be a picture, a game, a book, anything. Then you can start building your project from there and have fun!


P.S. You can find the presentation of my talk here :

Simple environment map

I just want to share a simple technique I used in my Christmas experiment this year, where I was trying to create some image-based lighting. I often find myself in a situation where I just need a background, but I don’t have enough Photoshop skill to make a custom one. I’ve always wanted to put more colours into my projects, and I like having a beautiful gradient background instead of just a plain colour. So I discovered this trick: I want the colours in my work to look natural, so why not just grab the colours from nature itself?

It’s simple: do an image search for ‘sky gradient’ and you’ll get tons of beautiful gradient colours:


Some of them have clouds, but you can just apply a massive blur and they’ll look smooth.


Once you have this, there’s a really easy way to do image-based lighting that doesn’t require a cube map. I found an amazing article here:
and this super useful shader:

vec2 envMapEquirect(vec3 wcNormal, float flipEnvMap) {
  //I assume envMap texture has been flipped the WebGL way (pixel 0,0 is at the bottom)
  //therefore we flip wcNormal.y as acos(1) = 0
  float phi = acos(-wcNormal.y);
  float theta = atan(flipEnvMap * wcNormal.x, wcNormal.z) + PI;
  return vec2(theta / TwoPI, phi / PI);
}

vec2 envMapEquirect(vec3 wcNormal) {
    //-1.0 for left handed coordinate system oriented texture (usual case)
    return envMapEquirect(wcNormal, -1.0);
}
With this you only need the normal to get the reflected colour from an image. Combine it with the gradient colour image we got, and you can produce very natural-looking environment lighting.
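To sanity-check the mapping, here is a direct JavaScript port of the shader function above: an “up” normal (0, 1, 0) lands at the top edge of the texture (v = 1), and a normal pointing at the horizon lands at v = 0.5.

```javascript
// Direct JavaScript port of the equirectangular lookup above,
// just to sanity-check the maths outside the shader.
const PI = Math.PI;
const TwoPI = 2 * PI;

function envMapEquirect([x, y, z], flipEnvMap = -1.0) {
  const phi = Math.acos(-y);
  const theta = Math.atan2(flipEnvMap * x, z) + PI;
  return [theta / TwoPI, phi / PI]; // [u, v]
}
```

For example, envMapEquirect([0, 1, 0]) gives v = 1 (top of the texture), and envMapEquirect([0, 0, 1]) gives [0.5, 0.5].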

Ultimately you’ll probably want to go for a cube map + PBR, but I think this can be useful for smaller projects.

And lastly, here is the link to my Christmas experiment this year:

and the source code is here:

the SoundCloud loader I was using is from here:
Really glad to be part of it again, and merry Christmas everyone!

Codevember and ray marching


I still can’t believe I made it, but I’m really glad I did. I decided to do this because I felt I’d never pushed myself hard enough, and I wanted to challenge myself. It was easier at the beginning, while you still have a lot of ideas from the past. Then, as time goes by, you start to run out of ideas, and that’s where the panic starts. I want to say thank you to all my friends who provided me with ideas and inspiration. Every day this month went like this: finish the experiment of the day just before going to bed, then start thinking about what to do the next day. It was really intense, but it helped me a lot. In order to create work quickly I need to gather tools first, and save more tools while building. The more tools you have, the quicker you can build.


Ray marching

A great part of my Codevember experiments are ray marching. I really like it. It used to be a huge mystery to me and seemed super complicated. I was lucky to come across this live-coding tutorial just before Codevember started.

I’m so glad my French hasn’t completely gone, so I was still able to understand most of it. It’s a really wonderful tutorial that guides you step by step through building your first ray marching experiment. Once you finish it, you’ll be better able to understand the code on . And I need to mention this amazing blog post by iq. It has all the basic tools you need. With these you are already able to create some amazing stuff.

I really like ray marching. It’s really simple: everything happens in one fragment shader. Every effect you need is just one function call, e.g. AO, shadows, lighting (diffuse, specular), spherical reflections, etc. To me it feels much simpler and easier to deal with. Besides, there are already tons of tools on Shadertoy that you can use. All you need is to figure out which arguments to pass into the function, and most of the time they are really simple.
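The core idea is small enough to sketch here in JavaScript (in a real project this loop lives in the fragment shader, like iq’s examples): step along the ray by the distance the signed distance function (SDF) reports, and stop when you are close enough to the surface.

```javascript
// Minimal sketch of ray marching a signed distance field.
// sdSphere is the classic sphere SDF centred at the origin.
function sdSphere([x, y, z], radius) {
  return Math.hypot(x, y, z) - radius;
}

function rayMarch(origin, dir, sdf, maxSteps = 64, epsilon = 1e-4) {
  let t = 0;
  for (let i = 0; i < maxSteps; i++) {
    const p = [origin[0] + dir[0] * t,
               origin[1] + dir[1] * t,
               origin[2] + dir[2] * t];
    const d = sdf(p);
    if (d < epsilon) return t; // hit: distance travelled along the ray
    t += d;                    // safe to step this far without overshooting
  }
  return -1; // miss
}
```

A camera at (0, 0, -3) looking down +z at a unit sphere hits the surface at t = 2; everything else (AO, soft shadows, lighting) is built by calling the same SDF in different ways.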

Also here are some other interesting videos related to ray marching :

also some useful links :


My latest project has finally gone live. I spent some effort working on the liquid/fluid look of the bubble and found some interesting techniques. I’m really thankful to the people who created these techniques and were willing to share them with everyone.

Here is a demo link to the bubble; you can click on the bubble to launch a wave as well.



Animating the bubble and its normals in the shader

The first task was to animate the bubble and get the right normals. The way we did it was to move everything to the vertex shader and calculate the normals based on the vertex positions. This approach made it really easy for us when we decided to add the ripples to the bubble: we only need to calculate the position offset caused by the ripple and add it to the vertex position, and the normal is updated along with it.
To do so, instead of putting the position of the vertex into the positions buffer, I put the rotation around x, the rotation around y, and the size of the bubble. To get the position of a vertex, you can use this function:

vec3 getPosition(vec3 values) {
  float rx = values.y / numSeg * PI - PI;
  float ry = values.x / numSeg * PI * 2.0;

  vec3 pos = vec3(0.0);
  pos.y = cos(rx) * values.z;
  float r = sin(rx) * values.z;
  pos.x = cos(ry) * r;
  pos.z = sin(ry) * r;
  return pos;
}
Then we use the position to get the 3D noise (I’m using this noise function) and the ripple height.
In the end, the final position of the vertex is the original position (sphere) + noise + ripple.

Because we are using rotation X and rotation Y to get the vertex position, we can get a neighbour’s position just by offsetting rotation X and rotation Y. With the positions of the neighbours, we can calculate a simple normal using the cross product. The shader code looks like this:

vec3 currPos = getFinalPosition(position); // getPosition() + noise + ripple
vec3 rightPos = getFinalPosition(position + vec3(1.0, 0.0, 0.0));
vec3 bottomPos = getFinalPosition(position + vec3(0.0, 1.0, 0.0));
vec3 vRight = rightPos - currPos;
vec3 vBottom = bottomPos - currPos;
vec3 normal = normalize(cross(vBottom, vRight));

This way you get an animated bubble and a normal to go with it.



Distortion with background image

The second task was to distort the background behind the bubble. I started with the refract function, but that requires a cube map, and we only had an image. So I started looking around for a simpler way to create the refraction effect, and found this article:
In short, you can create a refraction effect just by using normal.xy as a displacement map.

vec2 newUV = uv + normal.xy * distortionRate;
gl_FragColor = texture2D(texture, newUV);

With this you can achieve a convincing simulated refraction effect with just a background image instead of a cube map.




At the beginning of the project we started with traditional diffuse and specular lighting. It works, but the bubble lacked one important feature: reflection. I went back to search for possible solutions and found this amazing article:
Using this effect adds a lot to the bubble and gives it a very strong glassy/fluid look, which is exactly what the client was after.



Small Details

We also added 2 small details to the bubble:

  1. Distorted a bit more toward the edge of the bubble.
  2. Darker on the edge of the bubble.

These 2 work in the same way: I need a value that changes from the centre of the bubble to the edge. A quick way to get it is the dot product of the normal and the vector (0.0, 0.0, 1.0). Once you have this, you can add it to the distortionRate to get different amounts of distortion between the centre and the edge.
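Sketched out, the centre-to-edge term looks like this (the way it is combined with the distortion rate is my own illustrative choice, not the project’s exact formula):

```javascript
// dot(normal, vec3(0.0, 0.0, 1.0)) is 1 at the centre of the bubble
// (normal facing the camera) and falls to 0 at the rim.
function facing(normal) {
  return normal[2]; // the dot product reduces to the z component
}

// Hypothetical use: stronger distortion (and darker colour) toward the edge.
function edgeDistortion(normal, baseRate, edgeBoost) {
  const edge = 1.0 - facing(normal);
  return baseRate + edge * edgeBoost;
}
```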



It was really fun, and a good learning process, to go through all these steps to create the final look. I believe there are more ways to achieve it, but we are really happy with this one. I’m also experimenting with and learning cube maps now. Next time I might try to recreate this fresnel effect with a cube map, and going further, a dynamic cube map could be an interesting effect to add on top of this.

One of my Codevember experiments is based on this technique. I only removed the noise animation and replaced the lighting map with a much simpler one (just a glow on the edge).



And here is the case study of the whole project on Stink Digital’s site:

Reasons to be creative 2015

I wasn’t planning to write this, but somehow I feel I should write something about it. So here it is, 2 months after the festival.

Reasons to be creative

This year’s RTBC felt really different to me. Not only because it was the first time I gave a talk in a full session, but also because of all the amazing people I met. I felt like a “guest” at RTBC for the last 2 years, but this time I felt at home. During these 3 days I was really relaxed (apart from my talk) and enjoyed all the talks. Most of all, I met a lot of friends and made a lot of new ones as well. It’s a weird feeling to finally meet some Twitter friends in real life; it makes everything much more real than just messages showing up on Twitter. You guys are totally awesome and I really enjoyed our conversations.

I’ve always enjoyed the talks at RTBC. It just feels different to me. I like the mixture of dev talks and designer talks; they are all equally inspiring to me. I always feel motivated after the 3 days, with tons of ideas I want to make, and this year was no exception. Plus there was one special talk: the one by Stacey Mulcahy. She is an amazing speaker and builds amazing stuff. The thing that touched me most was the young game makers. It’s such a wonderful idea, and it got me thinking about doing something for kids as well. After becoming a father, I kept thinking: what can I do for my kids? I have some skills, so what can I do with them? That’s what got me started building all these small experiments. I want my children to see me as a maker or a creator, rather than someone just sitting in front of the computer hitting the keyboard all day. I want them to understand that the computer is just a tool to help you build and create, and that we should focus more on the things we build and the idea/story behind them. They might be too young to get the idea, but I’ll keep doing this. Not only because I want to give them this idea, but also because I enjoy all these moments of building, testing, and finally getting kids to play with the results. When I saw the young game makers project, I was really excited. I saw a possibility that I might be able to bring my work and experiments to more kids and help them build things. I still don’t have much of an idea of how to make it happen, but at least now I have a goal.


On Stage

It was such a wonderful experience to have my first full session. To be honest, it’s actually much easier than the elevator pitch 😀 Having a full hour gives me more room to make mistakes. However, I was still nervous to death. I rehearsed like crazy, from the night before right up to the very morning. I actually felt less nervous once I started talking. I have to thank all my friends who gave me advice; you were totally right about everything. Nobody understands the thing you are talking about better than you do. Once I started talking, it felt just like working through my process again. The other useful piece of advice I got is that you can never get rid of your nervousness, so just accept it and don’t try to fight it. I found this very useful and it actually helped me relax before the talk. I know I was still quite nervous on stage; that’s why I finished about 5 minutes early. But that was good as well, since I could take some Q&As. And I have to say I was really, really, really flattered that one of the questions was about the Kuafu project I started last year. I am so glad people still remember it, and also embarrassed that I haven’t worked on it for a long time. But now I’ve made it my project for next year. I’ll bring it to a more presentable state.




The project : Dark forest

I got the idea for this project just after John offered me the chance to speak. I had a quite clear goal when I started: I knew I wanted to learn flocking behaviour, I knew I wanted to make some small installations in my backyard, and I knew I wanted to test projection on the grass. In the end I made it, which is very important to me: I set a goal and achieved it. The result wasn’t what I expected at first. I didn’t know about the synchronised flashing behaviour, and I didn’t expect to find a way to simulate it. I expected the projection on the grass to look better, but it actually didn’t. I have to admit it was all these unexpected successes and failures that made the best memories for me. Looking back at it after 2 months, I see lots of room for improvement, but I still enjoy this project very much. And now I really like setting a goal, working my way towards it, and documenting the process. If you are interested in this project, I’ve put everything here:

I want to say thank you to all the people who helped me with this project. It meant a lot to me!

So, a bit of random stuff, but I’m glad I made it to RTBC this year and met all these amazing people. I’m glad I made the project, and now I can move on to the next one!

Dark Forest – part 2

Here is part 2 of this project. I started exploring different materials to project on. My first try was on a wall in my backyard. It does look slightly better at a big scale, but it’s not very interesting. So I moved on to the grass, which creates some very different visuals. I like how it makes the grass shine. However, I don’t have the right equipment to hang the projector high enough to cover a larger area, which is a bit of a shame because I think that would make it look much better. Also, it’s not projecting from the top; it’s at an angle, which sometimes makes the particles look like short lines instead of dots. This depends on where you stand as well.

Just as I was trying to move the projector, I accidentally projected the particles onto the trees, and that turned out to be really interesting. It looks very similar to the fireflies I saw in Taiwan. The leaves give it a really different look and serve as a source of randomness in the system. I really like the result. Here is a short video of the experiments I’ve made:


I got another related idea while testing the projection on the grass: I wanted to make an interactive version for my kids to play with. The idea is simple: the fireflies gather where you stand. I started a new branch in my code, kept the particles but removed all the trees, and made the camera stay at the front. Then I connected it to a Kinect so I could capture the position of my kids. This was my first time trying OpenCV with the Kinect. The performance and accuracy are amazing. I was using the findContours method and it returned a very impressive result:


The next step was to remap the position into the flocking system and create an attractor force to pull the particles toward that point. I had great fun building this, not only because I was playing with OpenCV and the Kinect, but also because my kids’ reaction was just wonderful. Over the weekend they kept asking me if they could play with the fireflies again that night. And after I’d made it, my daughter just started dancing with the particles. It’s one of the best memories of my life. Here is a short video of that night:

I’ve made another test projecting on my chalkboard as well:


Now I’m working on finishing the project. I’ve started adding the terrain, trees, and background. Here are some WIP screenshots:






I am really excited about it and glad to see things finally coming together. I’ll keep working on it, and I hope to see you at Reasons to be Creative!


Dark Forest – part 1

Hi, here is part one of this project, which is also going to be part of my talk at Reasons to be Creative this year.


I got the idea from this beautiful photo:


I fell in love with it right away and wanted to do something with it. The first idea I had was a flocking experiment; I’ve always enjoyed them and have wanted to build one myself for a long time. The picture gives me the feeling that the fireflies are swarming in the forest, so I decided to create a flocking system of fireflies flying among the trees.


I started building my first experiment with the particle stream I made a while ago, adding some cylinders as placeholders for trees for the particles to fly around. Here’s what I got:


Then I started wondering how it would look if I projected it on my chalkboard wall. I also thought it would be interesting if the trees were actually drawn on the wall instead of being rendered in 3D, to give it a slightly different feeling. I render the trees in the background colour, so when the particles pass behind them they are blocked, showing the colour of the background. I wasn’t 100% sure this would create the surrounding feeling I wanted, but I gave it a shot. Surprisingly, it works quite well.

I was really happy with the result and decided to take it to the next step.


A couple of months later I came across a video about the synchronising behaviour of fireflies. I was really shocked, and also excited. I thought it would be fun to try to reproduce this behaviour in my project. I started searching for videos, but there aren’t too many, until I found this one:

The way they synchronise is just unbelievable. I went back online and searched for ways to recreate this synchronisation. They are not too hard to find. I tried several; they work, but weren’t very satisfying:


The first one doesn’t really sync completely; the fireflies kind of form into groups. The second one synchronises too perfectly, which is obviously not the case in the real world. So I read more articles about firefly synchronisation and finally found this approach: imagine each firefly keeps a circular period. Each firefly checks its neighbours within a distance. If it senses its flashing cycle is falling behind its neighbours’, it speeds up; otherwise it slows down. Just 2 simple rules. This video demonstrates how it works:


This time I was really satisfied with the result. Of course, there are some tricks to make it less uniform, such as: if the period difference between a firefly and its neighbour is smaller than a certain value, stop adjusting its speed. This makes sure they won’t end up in perfect synchronisation. The other reason I love this solution is that it is very similar to how flocking works: you don’t need to know the overall speed, you just focus on your neighbours and adjust yourself. It’s perfect for my system as well, because it can be implemented the same way as the flocking behaviour. Here is the result:
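The two rules, plus the dead-zone trick, can be sketched in a few lines. This is a minimal illustrative version, not the project’s actual code; all the constants are arbitrary:

```javascript
// Minimal sketch of the firefly synchronisation rule described above.
// Each firefly keeps a phase in [0, 1). Every step it compares its phase
// with neighbours within `range`: if it is behind it speeds up, if ahead
// it slows down, and inside a small dead zone it leaves its speed alone,
// so the group never locks into a perfect, unnatural sync.
function step(flies, range, baseSpeed = 0.01, adjust = 0.002, deadZone = 0.02) {
  for (const f of flies) {
    let diffSum = 0, count = 0;
    for (const n of flies) {
      if (n === f) continue;
      const dx = n.x - f.x, dy = n.y - f.y;
      if (dx * dx + dy * dy > range * range) continue;
      // shortest signed phase difference, wrapped into [-0.5, 0.5)
      let d = n.phase - f.phase;
      d -= Math.round(d);
      diffSum += d;
      count++;
    }
    const avg = count ? diffSum / count : 0;
    f.speed = baseSpeed;
    if (Math.abs(avg) > deadZone) {
      f.speed += avg > 0 ? adjust : -adjust; // behind → speed up, ahead → slow down
    }
  }
  for (const f of flies) f.phase = (f.phase + f.speed) % 1;
}
```

Run this for a few hundred frames and the phases drift together, but only to within the dead zone, which is what keeps the flashing looking organic.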


With this I am ready for the next step: projection testing in my backyard. I want to bring it out of the screen and see if it works better projected on the grass.

Harpapong – Challenge of 400 pixels

A couple of months ago my friend Owen approached me with this project. It builds on his great Harpa Pong work from last year. The basic idea is that they turned the facades of the Harpa concert hall in Reykjavík into a huge canvas by putting an LED light in each window. Last year they created a pong game on this enormous canvas that users could play with their phones. This year, during Sónar Reykjavík 2015, they wanted to put audio visualisations of the music from the main stage on it, and Owen asked me if I would like to make one of the visualisations. I was really excited about the idea and said yes right away. And then came the challenge: there are only about 400 pixels per facade. So how big exactly is this canvas? About this big: just that tiny thing in the centre.

This is definitely the smallest canvas I have ever worked with. I’m used to creating visuals on a big canvas, but suddenly we only had 400 pixels to make visuals with, which was a whole new challenge for me. At first I was testing with basic geometry such as lines and rectangles. But at the same time I was trying out some ripples for other projects, and I wondered: what would happen if I put the ripples on a canvas of this size? Would they still be recognisable? This is the ripple I made:


When there’s a beat, it triggers a wave. In the fragment shader I add all the waves together, and based on the height of the map I map it to different colours, which I pick randomly from ColourLovers.
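The wave-summing idea can be sketched like this; the exact wave formula in the project differs, so this one (an expanding ring that decays over time) is just illustrative:

```javascript
// A sketch of summing beat-triggered waves: every beat spawns a wave, and
// the height at a pixel is the sum of all live waves. Each wave here is an
// expanding sinusoidal ring with exponential decay — an illustrative
// formula, not the project's exact one.
function waveHeight(dist, age, speed = 2.0, freq = 8.0, decay = 1.5) {
  const front = age * speed; // how far the ring has travelled
  return Math.sin((dist - front) * freq) * Math.exp(-decay * age);
}

function totalHeight(dist, waves, now) {
  let h = 0;
  for (const w of waves) h += waveHeight(dist, now - w.startTime);
  return h;
}
```

In the shader the resulting height would then be used as an index into the colour palette.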

I put it into the platform tool Owen built, and this is what I got. It’s hard to recognise the circles, but the movement and the colour changes are really interesting.



So that’s my contribution to this project. You can check out the live demo here:

The project page: and a short film about the project:

Again, thanks Owen for inviting me to this project. I am really proud to be part of it, and I had a great time creating visuals and playing on such a small canvas.


Maps, portrait and Chalkboard

Just playing with maps and portraits, inspired by the amazing work of Ed Fairburn.

There’s not too much on the code side. I just created a flood fill function, so the program picks a random pixel and fills the region around it. Although it feels more like photoshopping (combining the map image and the portrait using masks and blend modes; the code itself doesn’t alter the images at all), I really enjoy watching the image being generated. Then I drew the map on my chalkboard wall and projected the results onto it, which looks really good.
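The flood-fill step is the classic algorithm; here is a minimal sketch on a pixel grid. The real project fills the chosen region with the map/portrait blend, while this version just returns the region’s coordinates:

```javascript
// Minimal flood fill: starting from a (randomly picked) seed pixel, collect
// every 4-connected pixel whose value matches the seed's. The real project
// would then fill this region with the map/portrait blend.
function floodFill(grid, sx, sy) {
  const h = grid.length, w = grid[0].length;
  const target = grid[sy][sx];
  const region = [];
  const stack = [[sx, sy]];
  const seen = new Set([sy * w + sx]);
  while (stack.length) {
    const [x, y] = stack.pop();
    if (grid[y][x] !== target) continue;
    region.push([x, y]);
    for (const [nx, ny] of [[x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]]) {
      if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
      const key = ny * w + nx;
      if (!seen.has(key)) { seen.add(key); stack.push([nx, ny]); }
    }
  }
  return region;
}
```

Repeating this with random seeds, one region at a time, is what produces the gradual “being generated” feel described above.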




One of my colleagues said it would be interesting if the program could generate the city shape automatically, which right away reminded me of the old Substrate algorithm. I did a quick test, and the result is very interesting as well. These feel more like generative art to me: it still uses the portraits, but can produce quite a different result each time. There are some more pictures here.