
Reasons to be creative 2015

I wasn't planning to write this, but somehow I felt I should say something about it. So here it is, two months after the festival.

Reasons to be creative

I feel really different about RTBC this year. Not only because it's the first time I've given a full-session talk, but also because of all the amazing people I met. I felt like a "guest" at RTBC for the last two years, but this time I felt at home. During these three days I was really relaxed (apart from my talk) and enjoyed all the talks. Most of all, I met a lot of friends and made a lot of new ones as well. It's a strange feeling to finally meet some Twitter friends in real life; it makes everything so much more real than messages on a screen. You are all totally awesome and I really enjoyed our conversations.

I've always enjoyed the talks at RTBC. It just feels different to me: I like the mixture of dev talks and design talks, and they are all equally inspiring. I always come out of the three days motivated, with tons of ideas I want to make, and this year was no exception. And there was one special talk: the one by Stacey Mulcahy. She is an amazing speaker and builds amazing stuff. The thing that touched me most was the Young Game Makers project. It's such a wonderful idea, and it got me thinking about doing something for kids as well. Since becoming a father, I keep asking myself what I can do for my kids. I have some skills; what can I do with them? That's what got me started building all these small experiments. I want my children to see me as a maker or a creator, rather than someone who just sits in front of the computer hitting the keyboard all day. I want them to understand that the computer is just a tool to help you build and create, and that what matters is the things we build and the idea or story behind them. They might be too young to get it, but I'll keep doing this. Not only because I want to give them this idea, but also because I enjoy every moment of building, testing, and finally getting the kids to play with the result. When I saw the Young Game Makers project I was really excited. I saw a possibility that I might be able to bring my work and experiments to more kids and help them build things. I still don't have much idea how to make it happen, but at least now I have a goal.


On Stage

It was a wonderful experience to give my first full session. To be honest, it felt much easier than the elevator pitch 😀 Having a full hour gives me more room to make mistakes. I was still nervous to death, though. I rehearsed like crazy from the night before right up to the morning. I actually felt less nervous once I started talking. I have to thank all my friends who gave me advice; you were right about everything. Nobody understands the thing you're talking about better than you do. Once I started, it felt just like walking through my process again. The other useful advice I got is that you can never get rid of your nerves, so just accept them instead of fighting them. I found this very helpful, and it actually relaxed me before the talk. I know I was still quite nervous on stage; that's why I finished about five minutes early. But that turned out fine, because it left time for some Q&A. And I have to say I was really, really flattered that someone asked about the Kuafu project I started last year. I'm so glad people still remember it, and a bit embarrassed that I haven't worked on it for so long. But now I've made it my project for next year. I'll bring it to a more presentable state.




The project: Dark Forest

I got the idea for this project right after John offered me the chance to speak. I had a pretty clear goal when I started: I knew I wanted to learn flocking behaviour, I knew I wanted to build some small installations in my backyard, and I knew I wanted to test projection on the grass. In the end I made it, which matters a lot to me: I set a goal and achieved it. The result wasn't quite what I expected at first. I didn't know about the synchronised flashing behaviour of fireflies, and I didn't expect I could find a way to simulate it. I expected the projection on the grass to look better than it actually did. I have to admit it's all these unexpected successes and failures that make the best memories. Looking back at it after two months, I see lots of room for improvement, but I still enjoy this project very much. And now I really like this way of working: set a goal, work your way towards it, and document the process. If you are interested in this project, I've put everything here:


I want to say thank you to all the people who helped me on this project. It meant a lot to me!

So, a bit of random stuff, but I'm glad I made it to RTBC this year and met all these amazing people. I'm glad I made the project, and now I can move on to the next one!

Dark Forest – part 2

Here is part 2 of this project, where I started to explore different materials to project on. My first try was a wall in my backyard. It does look slightly better at a bigger scale, but it's not very interesting, so I moved on to the grass. That creates some very different visuals; I like how it makes the grass shine. However, I don't have the right equipment to hang the projector high enough to cover a larger area, which is a bit of a shame because I think it would look much better. Also, since it's not projecting straight down but at an angle, the particles sometimes look like short lines instead of dots. This depends on where you stand as well.

Just when I was moving the projector, I accidentally projected the particles onto the trees, and that turned out to be really interesting. It looks very similar to the fireflies I saw in Taiwan. The leaves give it a really different look and act as a source of randomness in the system. I really like the result. Here is a short video of the experiments I've made:


I got another related idea while testing the projection on the grass: I wanted to make an interactive version for my kids to play with. The idea is simple: the fireflies gather where you stand. I started a new branch in my code, kept the particles, removed all the trees, and made the camera stay at the front. Then I connected it to a Kinect so I could capture my kids' positions. This was my first time trying OpenCV with the Kinect, and the performance and accuracy are amazing. I was using the findContours method and it returned very impressive results:
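The tracking essentially boils down to finding the blob of "closer" pixels in the depth frame and taking its centre. Here is a minimal sketch of that idea in plain JavaScript (this is my simplification, not the actual OpenCV code; the function name, the flat depth array, and the threshold are all made up for illustration):

```javascript
// Given a depth frame as a flat array (row-major), find the centroid of all
// pixels closer than `threshold`. A Kinect reports 0 for "no reading", so we
// skip those. `width` is the frame width in pixels.
function findBlobCentroid(depth, width, threshold) {
  let sumX = 0, sumY = 0, count = 0;
  for (let i = 0; i < depth.length; i++) {
    if (depth[i] > 0 && depth[i] < threshold) {
      sumX += i % width;                 // column of this pixel
      sumY += Math.floor(i / width);     // row of this pixel
      count++;
    }
  }
  return count === 0 ? null : { x: sumX / count, y: sumY / count };
}
```

findContours gives you proper contour outlines rather than a single centroid, but for "where is the kid standing" a centroid of the near pixels is already enough.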


The next step is to remap that position into the flocking system and create an attractor force to pull the particles toward it. I had great fun building this, not only because I was playing with OpenCV and the Kinect, but because my kids' reaction to it was just wonderful. All weekend they kept asking if they could play with the fireflies again that night. And after I finished it, my daughter just started dancing with the particles. It's one of the best memories of my life. Here is a short video of that night:
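The attractor step can be sketched like this (a simplified version of the idea; the names and the distance falloff are mine, not the project's actual code):

```javascript
// Pull a particle toward a target point. The force fades linearly with
// distance and is zero outside `radius`, so faraway particles are unaffected.
function applyAttractor(particle, target, strength, radius) {
  const dx = target.x - particle.x;
  const dy = target.y - particle.y;
  const dist = Math.sqrt(dx * dx + dy * dy);
  if (dist === 0 || dist > radius) return;   // outside the influence radius
  const f = strength * (1 - dist / radius);  // stronger when closer
  particle.vx += (dx / dist) * f;            // push velocity toward target
  particle.vy += (dy / dist) * f;
}
```

In a flocking system this force just gets added on top of the usual separation/alignment/cohesion forces each frame.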

I've made another test projecting onto my chalkboard as well:


Now I'm working on finishing the project. I've started adding the terrain, trees, and background. Here are some WIP screenshots:






I'm really excited about it and glad to see things finally coming together. I'll keep working on it, and hope to see you at Reasons to be Creative!


Touch table

I built this projection / touch table a while ago, but never got a chance to write about it until now. I got the idea last year. At the time I needed a work table anyway, so I thought: why don't I build one for both working and projection? The idea is simple: make the top of the table removable, and keep the width-to-height ratio at 16:9, the aspect ratio of my projector.



Building the table

For the frame I used some pieces of wood left over from my IKEA shelves, and I found a big, thick piece of wood in my backyard that was perfect for the top. It took me about two days to build, and I didn't have the proper tools for it; with the right tools it would have been much faster, and of course the quality would have been much better too 😀

Projection and Touch

When I want to project, I just remove the top and cover the table with a sheet. The touch works like this: I put a Kinect under the table facing straight up, so when I press on the sheet, the Kinect can capture the depth difference at the press point. It's not a complicated concept, but it takes a lot of tweaking and calibration, e.g. finding the right distance range to detect, ignoring frames, noise reduction, etc. One thing that matters a lot, though, is the sheet. I started with a bed sheet; it works, but it's not very stretchy, so when you press you pull down quite a big area, which makes it inaccurate. Later I found a really stretchy piece of cloth that forms a small point when you press it, which is perfect for position detection.
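The press detection can be sketched roughly like this (my own simplification, not the actual code; the baseline frame, the noise floor, and the names are made up): compare the live depth frame against a baseline frame of the untouched sheet, and pick the pixel with the biggest difference.

```javascript
// The Kinect faces up, so pressing pushes the sheet closer to the sensor,
// i.e. the depth value DECREASES. Differences below `minDiff` are treated
// as noise and ignored.
function findPressPoint(baseline, current, width, minDiff) {
  let best = -1, bestDiff = minDiff;
  for (let i = 0; i < current.length; i++) {
    const diff = baseline[i] - current[i]; // positive when pressed
    if (diff > bestDiff) { bestDiff = diff; best = i; }
  }
  return best < 0 ? null
                  : { x: best % width, y: Math.floor(best / width) };
}
```

In practice you would also smooth the result over a few frames, which is exactly the kind of tweaking mentioned above.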


The next step is just to use this point as a virtual mouse. In theory it could detect multitouch, as long as the sheet can show the different points you press, but that also needs an algorithm to find all the separate points. I haven't tried OpenCV for this yet; maybe there's something there I could use.



It's a simple and silly idea, and the table is really shaky, but I enjoy it a lot. I especially like the touch feeling; it's very satisfying. Building the table itself was a lot of fun too. I really enjoy building real things that I can actually touch. It's very different from code, but both are very interesting to me.

Lego NXT / Processing

When I left my last job, my friends gave me a big farewell gift: a Lego NXT. It's an amazing gift. I've always loved Lego; when I was a boy it was always my favourite toy. And the NXT is even better for me now that I've become a programmer: combining programming and Lego is the sweetest dream for me.

So, to return my thanks to my friends, I tried to come up with a project using the NXT. I did have an idea a year ago and built a working prototype, but the result wasn't satisfying enough. Still, I really enjoyed the process of building robots and controlling them through code.

This time I had a simple idea: I wanted to record myself building another Lego set, but I didn't want the camera to stay still; I wanted it to move slowly from one side to the other. So I began to build this little Lego car and control it from Processing.



I found the NXTComm library and it's very useful. It lets you control your NXT via Bluetooth, and it's very easy to use. To set it up you just need to do this:

_nxt = new LegoNXT(this, Serial.list()[4]); // connect to the NXT over the Bluetooth serial port

As I recall, the only tricky part is finding the right Bluetooth port, which is what the [4] means; you might need to experiment a bit to find out which index is the one you need.

After that you are ready to go. The library is very compact, but it has all the API you need, covering both the sensors and the motors. I didn't need any sensors for this project, but I did a quick test with the ultrasonic sensor to get the distance from the sensor to an object, and the result was very responsive; it makes me want to do more with it. The motors are really simple to control as well:

_nxt.motorForwardLimit(LegoNXT.MOTOR_A, maxForce, 150); // move forward, stop after the given limit
_nxt.motorForward(LegoNXT.MOTOR_A, maxForce); // keeps moving; call _nxt.motorStop() manually to stop

So this library covers pretty much all the controls you need; the only thing left is to build the actual robot. It's so much fun to build, e.g. figuring out how to place the gears to control the speed so it won't go too fast. Some old knowledge from high school and university that I thought I'd never need turns out to be very handy now :)

I've always worked on the coding side and never got too involved with hardware. But now I'm really interested in building robots. I hope I'll have more time to dig into this field, and I'd like to learn Arduino as well!


And lastly, a short video of the footage recorded with this little robot:

夸父 Kuafu

It began about half a year ago, when I was trying to recreate a Chinese-style 3D world. At the time the idea was really simple: grab some Chinese ink-style mountain textures and try to put them together. Then a good friend of mine saw it and started chatting with me about it. He said it would be interesting to use real map data to construct the world, along with a good story. At that moment the ancient Chinese myth of Kuafu (夸父) came to my mind; it suits the idea perfectly: a giant chasing the sun across the land. We both liked the story and got really excited about the project. We started experimenting right away, and not long after, my friend came up with this amazing design:


It was so beautiful, and went far beyond what I had imagined for this story. Later on I started to work on the first working prototype: LINK

We really liked this prototype and got more ideas while playing with it, so we decided to turn it into a proper project. The first step was to create the storyboard:

After the prototype and the storyboard, we both got caught up in work, which kept us busy for a while, until one weekend. It was a Friday evening and I was on my way home. Suddenly I had the idea of recording an ink drop and using it as a texture for the mountains. Our first prototype looked good, but the mountains were flat. I wanted to make real 3D mountains but had no idea how to create textures for them. Then this idea struck me, so I did a quick test that night.

The result was better than I thought it would be; it makes a really good-looking Chinese-style mountain.


So I went back and created more textures, trying different colours and testing different papers. I really enjoy this process of creating textures. It takes time, but sitting there watching the ink flow and create all kinds of interesting and beautiful shapes is really exciting. As devs, we all know that the more randomness we throw into the code, the more alive and varied the result becomes. But nothing compares with the real thing. Every ink drop creates a different shape depending on how thick the ink is, how high you drop it from, how much water is on the paper, the flow of the water, and the tiny differences in the paper itself. It's all these things you cannot control that make it more beautiful, and that make it feel really different in a 3D render.

After these tests, the next step was to get real elevation data and try to recreate the terrain. In the beginning I was using the Google Elevation API to get the data, which works perfectly; the most amazing thing is that it even returns the elevation underneath water. However, I was afraid we would hit the API call limit quite quickly, as we were going to generate a good number of mountains, so I switched to using an elevation map:


The idea is really simple: we translate latitude and longitude into x and y on this map, then read the pixel value at that coordinate; the brighter the pixel, the higher the mountain. Each time I create a mountain, I apply a minimum height and ignore anything lower, and I also check whether there is already a mountain nearby; if not, I create a new one.
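The lookup can be sketched like this (a minimal version assuming an equirectangular map; the function name, parameters, and the flat greyscale pixel array are mine, for illustration):

```javascript
// Sample an elevation map where pixel brightness (0-255) maps to elevation.
// Assumes an equirectangular projection: lon -180..180 -> x, lat 90..-90 -> y.
function elevationAt(lat, lon, pixels, width, height, maxElevation) {
  const x = Math.floor(((lon + 180) / 360) * (width - 1));
  const y = Math.floor(((90 - lat) / 180) * (height - 1));
  return (pixels[y * width + x] / 255) * maxElevation;
}
```

The minimum-height filter from the text is then just: skip the point if elevationAt(...) falls below the cutoff.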


At the same time, my friend created these amazing style frames:


I was really excited by the stunning images he created, and we really liked this landscape layout, so we started thinking: why don't we make it an installation? We think it will look better in a long panoramic format, which will also make it feel more immersive. So we shifted our direction from an online experience to an installation.

Just at that moment, Google launched a new project called DevArt. We thought it was a really good chance to show our project to the world, so we've put it up there. And since we are now making an installation, we thought it would be great to add some sound, and even better if it were interactive. So we invited another friend on board for the sound design. He joined with lots of amazing ideas and immediately made sound design a big and interesting part of the project.

To build the installation, I've now switched to Cinder, combined with Node.js as our server and an HTML5/JS page as the controller. This week I started working on the communication part: sending data from the controller to the Node.js server and then on to the front end, across the different technologies.

So now we are working on finishing the project. It's really interesting to look back at all these tests and prototypes; it seems we have already done a lot, but the truth is there is still more ahead. I want to thank all the friends who gave us so much positive feedback after we announced this project; it's definitely good motivation to keep working on it. We will keep updating our DevArt project page, and I will keep updating my blog about this project too, even after DevArt is over.

Star Canvas – Case study

Last Thursday was the 5th anniversary of B-Reel London. We had a party that night, featuring a couple of our R&D projects, and I worked on this one, "Five", which was based on one of my old prototypes:


The idea is to show all the constellations in the sky, and we also created a drawing tool so people can create their own constellation and send it up there. We found a way to build a dome and project inside it, and we did it. It's my first installation. It was really fun and a lot of learning, and I have to thank the colleagues who made it possible. I want to write a bit more about this project; I think B-Reel will put together a beautifully edited making-of video, so in this article I will focus on the tech/dev side.

The installation

Here is a diagram of the basic structure:



Our developer Liam built the communication layer using Node.js as the backend server for this project. Apart from the projection, we have two drawing tools running on iPads that let users create their own constellations, plus a virtual keyboard on an iPad for searching for the constellation they created. We didn't want to use a real keyboard because people could mis-trigger other applications, so we created this keyboard and limited its usage to this project.

The projection is built with WebGL. At the beginning I considered using Cinder, but given the limited time, and because I had already built a working prototype in WebGL, I chose WebGL in the end. I wasn't too sure about the choice at first; I didn't know if the performance would be good enough, or if it would be stable enough to run all night with a lot of communication going on. But it turned out to work quite well. I thought I might need to restart the whole thing a couple of times during the party, but in the end I didn't have to. I'm really happy with it and feel much more secure now about using WebGL for installations. Also, I didn't use any WebGL libraries, just some tool classes I created myself, which is another big achievement for me.

The sound is another big part, done by our sound master Owen. All I did was send events, based on the hand gestures and the selected constellation, to the Node server, which re-sends them to Pure Data to generate the sound. When a constellation is selected, I calculate the number of lines in it and the max/min/average length of those lines and send that out, and Owen creates a different melody based on this information, so each constellation has its own unique melody. I would really like to bring this feature to the web version, but I need to learn from master Owen first 😀

For the drawing tool, we capture the user's strokes and then simplify them down to fewer dots to create that "constellation-like" effect. We then insert the generated constellation into the database and, at the same time, send a signal via the Node server to tell the projection to update the display.
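A naive version of the simplification looks something like this (a sketch of the idea, not the actual tool, which may use a smarter algorithm; the names and threshold are mine): keep a point only if it is far enough from the last kept point.

```javascript
// Decimate a stroke to fewer dots: drop every point closer than `minDist`
// to the previously kept point. The first point is always kept.
function simplifyStroke(points, minDist) {
  if (points.length === 0) return [];
  const kept = [points[0]];
  for (const p of points.slice(1)) {
    const last = kept[kept.length - 1];
    const dx = p.x - last.x, dy = p.y - last.y;
    if (Math.sqrt(dx * dx + dy * dy) >= minDist) kept.push(p);
  }
  return kept;
}
```

A fancier approach would be something like Ramer-Douglas-Peucker, which preserves corners better, but even this simple decimation already produces the sparse, star-like dots the effect needs.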

And the keyboard simply forwards which key was pressed to the projection; very straightforward.


The dome

We discovered an article teaching people how to build a dome using just cardboard, and best of all, it already has all the details of the triangles you need, including the plan and the sizes. Once I saw it, I couldn't help but want to build one myself. My wife and I started right away on a small-scale one to test; here are some pictures:

It's really easy to build; even the full-size dome we built isn't that complicated. The problem is how to support it: the geometry itself is flexible, so to keep it in a perfect shape we used a lot of fishing line attached to almost every corner to hold it. This was actually the most difficult part; building the dome itself was much easier.



The other challenge was how to project onto the dome. In the article they use a hemispherical mirror, which we did try, but the result was really not acceptable. The first problem is that, when reflected onto the dome, the image loses a lot of detail and becomes very pixelated. The second is that the distortion is really hard to correct. For these two reasons we gave up on the hemispherical mirror. Then I tried projecting directly onto the dome and correcting the distortion in code, but we found it actually looks better left as it is, without any correction. Maybe that's because of the nature of this project: everything is already on a sphere, so there's no need to correct further. All I needed to do was create a gradient circular mask to mask out the part outside the dome.



The constellations

This project isn't actually very 3D-heavy; the only 3D part is that everything sits on a sphere, and that's all. The stars are flat, the constellations are flat, almost everything is flat. That's the nature of this project; we don't need super complicated 3D models for it. On the other hand, we wanted to push the visuals a bit further and stack as many layers as possible; as we all know, the more layers you have, the more detailed and beautiful it gets. Here is a little demo of the project where you can see all the layers being rendered:


We started with a basic skybox, but soon after building it we felt it would be much better with an animated background, so our motion designer created one for me. At first we just put it as a flat texture on top of the skybox, but then we discovered it looks better mapped onto a sphere that moves with the skybox; this makes the animation feel like the real sky instead of just an overlay.

I've already blogged about how I make the stars and lines face the centre/screen in a previous post. That was in Stage3D, but it works the same way here.


I put a fake depth-of-field effect on the constellation names and the stars, so they get a little blurry and transparent as they approach the edge. It's a fake depth of field because I didn't use the depth buffer; it's just a simple calculation based on x and y, but it's very suitable for this project.
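The calculation is roughly this kind of thing (a sketch of the idea; I'm guessing at the exact falloff, and the names are made up): fade an element based on how far its projected x/y sits from the screen centre.

```javascript
// Return 1 at the screen centre and 0 at the edge, based only on the
// projected x/y position (no depth buffer involved). Feed this into
// blur radius and/or alpha.
function edgeFade(x, y, width, height) {
  const nx = (x / width) * 2 - 1;   // -1..1, 0 at centre
  const ny = (y / height) * 2 - 1;
  const d = Math.min(1, Math.sqrt(nx * nx + ny * ny));
  return 1 - d;
}
```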


For the appearance of the constellations, I had a little fun with the fragment shader. I wanted a simple particle-effect transition to reveal the constellation drawings. I found this useful GLSL random function on Shadertoy:

float rand(vec2 co){
    return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);
}

Creating the effect is actually quite simple: when sampling the texture, add this random number to the desired coordinate. This results in a random position, which makes it look like particles. Animating it is just a matter of tweaking the amount of this offset:

targetPos = orgPos + offset * randomPos;

So you can see it here: the bigger the offset, the more random (more particle-like) it gets, and if the offset is 0 we get the original position, which produces a clean drawing. So the animation is basically just tweening the offset from some big value back to 0. Voilà, that's how simple it is. You can add more to this random position, such as scale or rotation, for a more dramatic effect.

We also used a lot of video textures in this project, and some of them need transparency. Here is an easy way to do it using the blend mode: before you render, set the blend mode to this:

this.gl.blendFunc(this.gl.SRC_ALPHA, this.gl.ONE);

Then make sure the transparent parts are pure black, and they will disappear when rendered. This trick works not only for video but for all textures, so you can save some file size if you use it wisely.

Video textures are quite heavy to load. Some of them could be done in code, but it would be very difficult to match the detail of a rendered video. I think this is a choice that depends on the project. In our case we were building an installation, so we didn't need to care about loading, and we used a super powerful machine to run it, so file size wasn't a problem either; so I chose videos to get the best detail, and they are easier to modify. If we were building an online experience, we would need to do more testing on performance and loading. Anyway, my point is: choose the best solution for the project. I know I'd have a lot of fun playing with shaders if I built it all in code, but it would be very time-consuming and hard to change.


The navigation

This was the first time we used a Leap Motion for a real project, and it turned out to work quite well. I won't say it's going to replace the mouse, but it definitely provides an extra way to navigate. What I like about the Leap Motion is that it's really sensitive and responsive; you can create really good controls with it. However, some gestures are still very hard to use, especially since everyone does a gesture their own way. At the beginning, as you can see in my prototype video, I created a "grab" gesture to navigate. To be honest I quite liked it; it gave me the feeling of holding something. But some people found it difficult to use, and it was really hard for me to improve the gesture because people have different ways of "grabbing". It sounds a little funny, but it's what I encountered during this project. So in the end I had to remove the grab gesture and go with the full hand. If you have a Leap Motion, you can play with the link I mentioned before. We have three gestures: open full hand to move around, one finger pointing to select, and a clap, which I'll leave for you to discover 😀

There's an interesting part of the navigation: how do I select the constellation I want? Do I need to compare the distance from every constellation to the mouse position? That sounds quite heavy and needs a lot of maths. Luckily, our ancestors already solved this problem with basic astronomy: there are 88 constellations in total, and together they cover the entire sky. There is a way to determine the boundary of each constellation using right ascension and declination: the IAU constellation boundaries. These scientists have already defined the boundaries of all the constellations, and you can find a map like this (without the names, of course; I put them on to make it easier to see):


When you map it onto a sphere, it fits all the constellations perfectly. So what I did was paint each region a different colour, and when a mouse event is triggered (mouse move or click), I perform a gl.readPixels to get the pixel value at the mouse position. Because each region has a unique value, I can tell which one was selected (or rolled over). Just a couple of things to pay attention to: when doing the readPixels you don't need to read the whole render, just the one pixel under the mouse, which saves some performance. The readPixels call is still heavy, though, so skip it whenever you can (e.g. when the mouse hasn't moved, since the result will be the same as last frame). Secondly, when you export the map, make sure you export a PNG so it won't be compressed and lose the values you set.
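The picking boils down to one readPixels call plus a decode step (a sketch of the idea; the function names and the RGB-to-ID packing are my own illustration, and the decode must match however the map was painted):

```javascript
// Decode an RGB colour back into a region index. This must mirror the
// scheme used when painting the picking map.
function colorToRegionId(r, g, b) {
  return r + g * 256 + b * 65536;
}

// Read the single pixel under the mouse from the picking render.
// Assumes a WebGL context with the colour-coded map already drawn, so this
// function is not runnable on its own.
function pickRegion(gl, mouseX, mouseY, canvasHeight) {
  const px = new Uint8Array(4);
  // flip Y: readPixels origin is bottom-left, mouse coords are top-left
  gl.readPixels(mouseX, canvasHeight - mouseY, 1, 1,
                gl.RGBA, gl.UNSIGNED_BYTE, px);
  return colorToRegionId(px[0], px[1], px[2]);
}
```

Reading just 1×1 pixel, and only when the mouse actually moves, is exactly the performance trick described above.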


This is about the longest post I have ever written, but I am really proud of this project. I am still working on the last part of it with my friend; there's something we want to do with this project to bring the beauty of the constellations to everybody. We have discovered the beautiful drawings of Johannes Hevelius and we want to show them to everyone. So stay tuned for updates!

Since I started working on this project I've fallen in love with astronomy. Our sky is amazingly beautiful. Every night, if the sky is clear, I look up and try to find the constellations I know. And I realise that no matter how much I do, I cannot compete with the beauty of nature; but I can be inspired by it. There is so much inspiration to be found just by looking at the nature around you.

Some pictures of the night