Dark Forest – part 2

Here is part 2 of this project. I started exploring different materials to project on. My first try was a wall in my backyard. It does look slightly better at a big scale, but it's not very interesting. So I moved on to the grass, which creates some very different visuals. I like how it makes the grass shine. However, I don't have the right equipment to hang the projector high enough to cover a larger area, which is a bit of a shame because I think that would make it look much better. Also, it's not projecting from the top; it's at an angle, which sometimes makes the particles look like short lines instead of dots. This depends on where you stand as well.

Just when I was trying to move the projector, I accidentally projected the particles onto the trees, and that turned out to be really interesting. It looks very similar to the fireflies I saw in Taiwan. The leaves give it a really different view and serve as a source of randomness in the system. I really like the result. Here is a short video of the experiments I've made:


I got another related idea when I tested the projection on the grass: I wanted to make an interactive version for my kids to play with. The idea is simple: the fireflies gather where you stand. I started a new branch in my code, kept the particles but removed all the trees, and made the camera stay in front. Then I connected it to a Kinect so I could capture my kids' positions. Here I tried OpenCV with the Kinect for the first time. The performance and accuracy are amazing. I was using the findContours method and it returns a very impressive result:


The next step was to remap the position into the flocking system and then create an attractor force to pull the particles toward that point. I had great fun building this, not only because I was playing with OpenCV and the Kinect, but also because my kids' reaction was just wonderful. During the weekend they kept asking me if they could play with the fireflies again that night. And after I made it, my daughter just started dancing with the particles. It's one of the best memories of my life. Here is a short video of that night:
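The attractor itself can be sketched like this in plain JavaScript ( just a minimal illustration, not the actual project code; the particle and target shapes are my assumptions ):

```javascript
// Pull a particle's velocity toward a target point, scaled by strength.
// A hedged sketch only; field names (x, y, vx, vy) are hypothetical.
function applyAttractor(particle, target, strength) {
  var dx = target.x - particle.x;
  var dy = target.y - particle.y;
  var dist = Math.sqrt(dx * dx + dy * dy) || 1; // avoid dividing by zero
  // nudge the velocity along the normalised direction to the target
  particle.vx += (dx / dist) * strength;
  particle.vy += (dy / dist) * strength;
}
```

Calling this every frame with the position from the Kinect makes the swarm drift toward wherever you stand.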

I've made another test projecting on my chalkboard as well:


Now I'm working on finishing the project. I've started adding the terrain, trees and background. Here are some WIP screenshots:






I am really excited about it and glad to see things finally coming together. I'll keep working on it and hope to see you at Reasons to be Creative!


Dark Forest – part 1

Hi, here is part one of this project, which is also going to be part of my talk at Reasons to be Creative this year.


I got this idea from this beautiful photo:


I fell in love with it right away and wanted to do something with it. The first idea I had was a flocking experiment; I've always enjoyed them and had wanted to build one myself for a long time. The picture gives me the feeling that the fireflies are swarming in the forest, so I decided to create a flocking system of fireflies flying among the trees.


I started building my first experiment with the particle stream I made a while ago, adding some cylinders as placeholders for trees for the particles to fly around. Here's what I got:


And then I started wondering how it would look if I projected it on my chalkboard wall. I also thought it would be interesting if the trees were actually drawn on the wall instead of being rendered in 3D, just to give it a slightly different feeling. I render the trees in the background colour, so when the particles pass behind them they are blocked but show the colour of the background. I wasn't 100% sure this would create the surrounding feeling I wanted, but I gave it a shot. Surprisingly, it works quite well.

I was really happy with the result and decided to take it to the next step.


A couple of months later I came across a video about the synchronising behaviour of fireflies. I was really shocked and excited by it. I thought it would be a lot of fun to reproduce this behaviour in my project. I started searching for videos, but there aren't too many, until I found this one:

The way they synchronise is just unbelievable. I went back online and searched for ways of recreating this synchronisation. They are not too hard to find. I tried several; they work, but the results are not very satisfying:


The first one doesn't really sync completely; the fireflies kind of form into groups. The second one synchronises too perfectly, which is obviously not the case in the real world. So I read more articles about firefly synchronisation and finally found this approach: imagine each firefly keeps a circular period. Each firefly checks its neighbours within a certain distance. If it senses its flashing cycle is falling behind its neighbours', it speeds up; otherwise it slows down. Just two simple rules. This video demonstrates how it works:


This time I was really satisfied with the result. Of course, there are some tricks to make it less uniform, such as: if the period difference between a firefly and its neighbour is smaller than a certain value, stop adjusting its speed. This makes sure they won't end up in perfect synchronisation. The other reason I love this solution is that it's very similar to how flocking works: you don't need to know the overall speed; you just focus on your neighbours and adjust yourself. It's also perfect for my system because it can be implemented the same way as the flocking behaviour. Here is the result:
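Here is a rough JavaScript sketch of those two rules plus the threshold trick, under my own assumptions about the data ( each firefly keeps a phase from 0 to 1 and a speed; all the names are hypothetical, not the project's code ):

```javascript
// Rule 1: compare your phase with neighbours within `radius`.
// Rule 2: speed up if you are behind the neighbour average, slow down otherwise.
// Trick: if the difference is below `threshold`, leave the speed alone,
// so the swarm never reaches perfect synchronisation.
function updateFirefly(self, fireflies, radius, threshold, adjust) {
  var sum = 0, count = 0;
  fireflies.forEach(function (other) {
    if (other === self) return;
    var dx = other.x - self.x, dy = other.y - self.y;
    if (dx * dx + dy * dy < radius * radius) { sum += other.phase; count++; }
  });
  if (count > 0) {
    var diff = sum / count - self.phase;
    if (Math.abs(diff) > threshold) {
      self.speed += diff > 0 ? adjust : -adjust;
    }
  }
  self.phase = (self.phase + self.speed) % 1; // advance the flashing cycle
}
```

Just like flocking, each firefly only looks at its neighbours, so it slots into the same per-particle update loop.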


With this I was ready for the next step: projection testing in my backyard. I wanted to bring it out of the screen and see if it would work better projected on the grass.

Harpapong – Challenge of 400 pixels

A couple of months ago my friend Owen approached me with this project. It was based on his great Harpa Pong work last year. The basic idea is that they turned the facades of the Harpa concert hall in Reykjavík into a huge canvas by putting an LED light in each window. Last year they created a pong game on this enormous canvas that users could play with their phones. This year, during Sónar Reykjavík 2015, they wanted to put audio visualisations of the music from the main stage on it, and Owen asked me if I would like to make one of the visualisations. I was really excited about the idea and said yes right away. And then came the challenge: there are only about 400 pixels per facade. So how big exactly is this canvas? About this big: just that tiny thing in the centre.

This is definitely the smallest canvas I have ever worked with. I'm used to creating visuals on a big canvas, but suddenly we only had 400 pixels; that was a whole new challenge for me. At first I tested with basic geometries such as lines and rectangles. But at the same time I was trying out some ripples for other projects, and I wondered what would happen if I put the ripples on a canvas of this size. Would they still be recognisable? This is the ripple I made:


When there's a beat it triggers a wave. In the fragment shader I add all the waves together and, based on the resulting height map, I map the value to different colours, which I pick randomly from ColourLovers.
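Roughly, the idea behind that shader could look like this in plain JavaScript ( a sketch only; the ring-shaped wave function and the helper names are my assumptions, not the actual shader code ):

```javascript
// Sum the height of every active wave at a pixel. Each beat pushes a new
// wave object {x, y, start, speed} into `waves` (hypothetical layout).
function heightAt(x, y, waves, time) {
  var h = 0;
  waves.forEach(function (w) {
    var d = Math.sqrt((x - w.x) * (x - w.x) + (y - w.y) * (y - w.y));
    var r = (time - w.start) * w.speed;        // radius the wave has travelled
    h += Math.exp(-Math.pow(d - r, 2) * 0.05); // ring-shaped bump around radius r
  });
  return h;
}

// Map the summed height to one of the randomly picked ColourLovers colours.
function colorFor(h, palette) {
  var i = Math.min(palette.length - 1, Math.floor(h * palette.length));
  return palette[i];
}
```

At 400 pixels the rings themselves dissolve, but the moving colour bands from this mapping are what stayed readable.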

I put it into the platform tool Owen built and this is what I got. It's hard to recognise the circles, but the movement and the colour changes are really interesting.



So that's my contribution to this project. You can check the live demo here:


The project page: harpapong.com, and a short film about the project: https://vimeo.com/122900808

Again, thanks Owen for inviting me to this project. I am really proud to be part of it, and I had a great time creating visuals and playing on such a small canvas.


Maps, portraits and chalkboard

Just playing with maps and portraits, inspired by the amazing work of Ed Fairburn.

There's not too much on the code side. I just created a flood fill function, so the program picks a random pixel and then fills the region around it. Although it feels more like photoshopping: combining the map image and the portrait using masks and blend modes. The code itself doesn't alter the images at all. But I really enjoy watching the image being generated. Then I started to draw the map on my chalkboard wall and project these results onto it, which looks really good.
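A minimal flood fill could look like this ( the real version works on canvas image data; here I use a flat grid of values and hypothetical names just to show the idea ):

```javascript
// Stack-based flood fill over a flat grid of width w, height h.
// Replaces the connected region of oldVal around (x, y) with newVal.
function floodFill(grid, w, h, x, y, newVal) {
  var oldVal = grid[y * w + x];
  if (oldVal === newVal) return;
  var stack = [[x, y]];
  while (stack.length) {
    var p = stack.pop(), px = p[0], py = p[1];
    if (px < 0 || py < 0 || px >= w || py >= h) continue; // off canvas
    if (grid[py * w + px] !== oldVal) continue;           // different region
    grid[py * w + px] = newVal;
    // visit the 4 neighbours
    stack.push([px + 1, py], [px - 1, py], [px, py + 1], [px, py - 1]);
  }
}
```

Picking the seed pixel at random and letting regions fill one by one is what gives that slow "developing photo" feeling.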




And one of my colleagues said it would be interesting if the program could generate the city shape automatically, which reminded me of the old Substrate experiments right away. I did a quick test and the result is very interesting as well. These feel more like generative art to me: they still use the portraits but can generate quite a different result each time. There are some more pictures here.



Touch table

I built this projection / touch table a while ago but never got a chance to write about it until now. I got the idea last year, and at the time I needed a working table for myself, so I thought: why don't I just build one for both working and projection? The idea is simple: make the top of the table removable and keep the width / height ratio at 16 : 9, which is the aspect ratio of my projector.



Building the table

For the frame I used some pieces of wood left over from my IKEA shelves, and I found a big, thick piece of wood in my backyard which was perfect for the top. It took me about 2 days to build, as I don't have the proper tools for this; it would be much faster with the right tools. And of course the quality would be much better too 😀

Projection and Touch

When I want to project, I just remove the top and cover the table with a sheet. The way the touch works is that I put a Kinect under the table, facing straight up. When I press on the sheet, the Kinect can capture the depth difference at the press point. It's not a complicated concept, but there's a lot of tweaking and calibration: finding the right distance range to detect, ignoring the table frame, noise reduction, etc. However, there's one thing that matters a lot: the sheet. I was using a bed sheet; it works, but it's not very flexible, so when you press you also pull down quite a big area, and therefore it's not very accurate. Later I found a really flexible piece of cloth that creates a small point when you press it, which is perfect for position detection.
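The press detection could be sketched roughly like this ( all assumptions on my side: a flat array of depth values and a baseline frame captured when nothing touches the sheet; the names are made up ):

```javascript
// Find the pixel where the sheet moved closest to the Kinect, compared
// with the untouched baseline. Returns null if nothing exceeds the threshold.
function findPressPoint(depth, baseline, w, threshold) {
  var best = -1, bestDiff = threshold;
  for (var i = 0; i < depth.length; i++) {
    // the Kinect looks up, so a press brings the sheet closer (smaller depth)
    var diff = baseline[i] - depth[i];
    if (diff > bestDiff) { bestDiff = diff; best = i; }
  }
  if (best === -1) return null;
  return { x: best % w, y: Math.floor(best / w) };
}
```

The returned point then acts as the virtual mouse; the real version also needs the range clamping and noise filtering mentioned above.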


The next step is just to use this point as a virtual mouse. Theoretically it could detect multitouch, as long as the sheet can show the different points you press, but it would also need an algorithm to find all the distinct points. I haven't tried OpenCV for this yet; maybe there's something I could use.



It's a simple and silly idea, and the table is really shaky, but I really enjoy it. I especially like the touch feeling; it's very satisfying. Building the table itself was a lot of fun too. I really enjoy building real stuff that I can actually touch. It's very different from code, but both are very interesting to me.

Blow : My Christmas Experiment this year

I was really surprised when I got the invitation from David to create a project for the Christmas Experiments this year. I am a huge fan of them and have always wondered if I could contribute. I cannot express how excited I was when I received the email.

At the time I was working with some particles, so I came up with this idea: blow the particles ( sand ) away to reveal the image. Here is the first test:

I had a lot of fun building this; playing with particles is always my favourite, and it looks cool. However, it looked more like a Chinese painting and I didn't know how to make it feel more like the holidays. Then my friend Bert came up with this design with golden particles and a pink background, and suddenly it became very holiday-like.


In this experiment I was still using a texture to save the particle positions and performing the calculations in the shader, as in my last post. In total there are 512 x 512 particles, which is exactly the size of the image. I use a black/white image as a map: only the particles over the black part stay; the ones over the white part fly away. For the revealing, I put a centre in a random place, combined with Perlin noise to give it a more natural feeling. The last thing is the gold particles, which I just sampled from an image, and it works quite well. I think it could be more interesting with some point light effects, but I ran out of time, and it already looked quite good to me, so I didn't try it in the end.
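The reveal test could be sketched like this ( a hypothetical illustration; shouldRelease and noise2D are stand-in names, and the noise scale is made up ):

```javascript
// Decide whether a particle gets blown away: compare its distance to the
// random centre, offset by noise, against the growing reveal radius.
function shouldRelease(particle, center, radius, noise2D) {
  var dx = particle.x - center.x, dy = particle.y - center.y;
  var dist = Math.sqrt(dx * dx + dy * dy);
  // the noise offset makes the reveal front irregular instead of a clean circle
  var offset = noise2D(particle.x * 0.01, particle.y * 0.01) * 20;
  return dist + offset < radius; // inside the noisy front: blow it away
}
```

Growing `radius` every frame sweeps the noisy front across the image, which is what gives the reveal its organic edge.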


So that's it; that's how I built this experiment. It's simple, but I had a lot of fun building it. Especially after a very stressful project, I felt I needed to do something fun to release the pressure. Again, I am very thankful for being part of this and really proud to stand with all the other talented developers. I enjoy all the experiments and can't wait to see the rest!

WebGL GPU Particle stream

I once blogged about a project where I built an interactive particle stream in Cinder, but I lost the post when I moved to a new webspace. Now I've rebuilt it with WebGL and want to post it again, along with some tips I learned while building it. First things first, the live demo is here:

and the source code is available here:



Saving data in the texture

This is a quite common technique when dealing with a large particle system: save the information of each particle ( such as its position and velocity ) in a texture and perform the movement calculation on the GPU. Then, when you want to move the particles, you just need to modify this texture. The basic concept is that a pixel contains 3 colour channels, red, green and blue, so we can use these 3 channels to save x, y and z coordinates. They could be the x, y, z of a particle's position or of its velocity. The idea is simple, but it needs some work to make it function. The first problem is how to map a position to a colour: a position could be anything from negative to positive, but the range of a colour channel is only 0 to 1. To make it work, we need to set a range for the positions; the zero point will be (0.5, 0.5, 0.5), anything smaller than .5 will be negative, and anything greater than .5 positive. Here is a simple example that converts a pixel colour to a position in the range -100 to 100:

var range = 100;
position.x = ( color.r - .5 ) * range * 2.0;
position.y = ( color.g - .5 ) * range * 2.0;
position.z = ( color.b - .5 ) * range * 2.0;

And vice versa, you can save a position as a colour like this:

color.r = (position.x/range + 1.0 ) * .5;
color.g = (position.y/range + 1.0 ) * .5;
color.b = (position.z/range + 1.0 ) * .5;

So each pixel on the texture represents a set of x, y, z coordinates; that's how we save the positions of all the particles.



But how exactly can we write our data to a texture? We need to use a framebuffer. A framebuffer allows your program to render to a texture instead of rendering directly to your screen. It's a very useful tool, especially when dealing with post effects; to learn more about framebuffers you can check this post. With a framebuffer we can now save the data to a texture, but here I met the biggest problem in this experiment: precision. Because we are working in colour space, all the numbers are really small; for example, the speed of a particle could be only .01, and its acceleration will be even smaller. So when you multiply things together, sometimes the result gets too small and the pixel cannot hold the precision. This happened both in this experiment and in the Cinder project I mentioned. In WebGL, by default ( gl.UNSIGNED_BYTE ), each colour channel has 8 bits to store the data. In our case this is not enough. Luckily there's a solution: using gl.FLOAT instead of gl.UNSIGNED_BYTE, which gives each colour channel 32 bits to save the data. To use gl.FLOAT we need one extra step:

gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, this.frameBuffer.width, this.frameBuffer.height, 0, gl.RGBA, gl.FLOAT, null);

This enables WebGL to use gl.FLOAT and solves our precision problem. ( Note that in WebGL 1 floating-point textures also have to be enabled first with gl.getExtension('OES_texture_float'). ) Here is a screenshot of what the framebuffer looks like in this experiment: I save the positions of the particles on the left side of the framebuffer and their velocities on the right.



Particle movements

The next step is to calculate the movement of the particles. It's all based on this rule:

new velocity = old velocity + acceleration
new position = old position + velocity

So with our texture, on the left side ( which holds the positions ), we just need to fetch each particle's velocity and add it to the current position. Don't forget that the stored velocity range is 0 to 1, so we need to subtract vec3(.5) from it:

if(vTextureCoord.x < .5) {      //  POSITION
    vec2 coordVel       = vec2(vTextureCoord.x + .5, vTextureCoord.y);   // get the coordinate of the velocity pixel
    vec3 position       = texture2D(texture, vTextureCoord).rgb;         
    vec3 velocity       = texture2D(texture, coordVel).rgb;              
    position            += (velocity - vec3(.5) ) * velOffset;       

For the right side ( the velocity ), I want to add a random force to each particle based on where it is. I found a very useful GLSL noise function here. The shader code looks like this now:

else { // vTextureCoord.x > .5
    vec2 coordPos       = vec2(vTextureCoord.x - .5, vTextureCoord.y);   // get the coordinate of the position pixel
    vec3 position       = texture2D(texture, coordPos).rgb;
    vec3 velocity       = texture2D(texture, vTextureCoord).rgb;

    float xAcc          = snoise(position.x, position.y, time);
    float yAcc          = snoise(position.y, position.z, time);
    float zAcc          = snoise(position.z, position.x, time);

    velocity            += vec3(xAcc, yAcc, zAcc);

Here snoise is the noise function; I pass in time as well so it keeps changing constantly. This is just roughly how it looks; in practice you need to tweak the values to get a natural movement feeling. The last thing is that you need to prepare 2 framebuffers and swap them every frame, so you always read the result of the last frame and write the update to the other framebuffer:

this._vCal.render( this.fboCurrent.getTexture(), this.fboForce.getTexture() ); // Perform the calculation


var tmp = this.fboTarget;
this.fboTarget = this.fboCurrent;
this.fboCurrent = tmp;


Adding interaction

The final step is to add interaction. With the Leap Motion we can easily get the position and velocity of the hands, so we can determine a force from the position of the hand, with its strength given by the length of the hand's velocity. As for the direction, there are a couple of options. The first is to take the direction of the velocity, which is the most common. However, it can be improved by using the direction of your palm, which the Leap Motion is able to give us (hand.palmNormal). This makes it feel better when you do several movements in a row, trying to push the particles to the same place. One final touch is to check the dot product of the hand velocity and the palmNormal: if the result is smaller than zero, meaning they point in different directions, we set the strength to zero to avoid weird movements.
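That logic could be sketched like this ( a simplified illustration with assumed array-style vectors, not the actual project code ):

```javascript
// Build a force from the hand: direction from the palm normal, strength
// from the hand speed, zeroed when velocity and palm normal disagree.
function handForce(velocity, palmNormal) {
  var speed = Math.sqrt(velocity[0] * velocity[0] +
                        velocity[1] * velocity[1] +
                        velocity[2] * velocity[2]);
  var dot = velocity[0] * palmNormal[0] +
            velocity[1] * palmNormal[1] +
            velocity[2] * palmNormal[2];
  // a negative dot product means the hand moves against its palm direction
  var strength = dot < 0 ? 0 : speed;
  return { direction: palmNormal, strength: strength };
}
```

Using the palm normal for direction is the design choice that makes repeated pushes toward the same spot feel consistent.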

To apply this force to our particles, first we need to create a force texture like this :


Again we use colour to represent the force. Back in the shader, when we calculate the velocity of the particle we need to add this force as well, so the shader now looks like this:

else { // vTextureCoord.x > .5
    vec2 coordPos       = vec2(vTextureCoord.x - .5, vTextureCoord.y);   // get the coordinate of the position pixel
    vec3 position       = texture2D(texture, coordPos).rgb;
    vec3 velocity       = texture2D(texture, vTextureCoord).rgb;

    float xAcc          = snoise(position.x, position.y, time);
    float yAcc          = snoise(position.y, position.z, time);
    float zAcc          = snoise(position.z, position.x, time);
    velocity            += vec3(xAcc, yAcc, zAcc);

    // get the force pixel by the position of the particle
    vec3 forceGesture   = texture2D(textureForce, position.xy).rgb;   

    // map the force value to -.5 to .5 and add it to velocity   
    velocity            += forceGesture - vec3(.5);                      


So that's how I built this. The concept is not complicated, but there are a lot of small steps to take care of. Also, because everything happens in textures and shaders, it's hard to debug; sometimes you just get a white or black texture and it's hard to tell which step went wrong. But once you get it all working and can push a huge number of particles, the feeling is incredible. It's really good practice for learning framebuffers, shaders and particle movement. I learned a lot and had a lot of fun building it.

Here is a short video of the Samsung project I built, if you are curious how it looks in motion: https://vimeo.com/92043935


Substrate Cube

I'm still amazed by Jared Tarbell's work every time I go back to his site, even though it was created over 10 years ago. I tried to recreate his Substrate years ago in Flash, and it was so much fun to build.

Last week I went back to his Substrate again and wanted to recreate it in JavaScript. I hadn't done any generative coding for a while, and it felt so good to pick it up again. I really like the feeling of setting up some rules and just letting the code run. Every time you get an unexpected result and are amazed by it. For this Substrate experiment, the rules are simple:

1. Start a line and move forward.
2. When it hits the edge of the canvas or another line, stop.
3. If the line is longer than the minimum required length, generate 2 more lines from it.

It's just that simple, and it creates such an amazing result. Of course there are a few extra bits to make it look better, but this is the basic idea. Here is the JavaScript version I created:
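The three rules could be sketched roughly like this ( hypothetical names and a coarse pixel grid; the real version tracks sub-pixel positions and draws shadows ):

```javascript
// Rule 1 & 2: advance a line one step; stop at the canvas edge or when
// the next pixel is already occupied by another line.
function stepLine(line, grid, w, h) {
  var nx = line.x + Math.cos(line.angle);
  var ny = line.y + Math.sin(line.angle);
  var gx = Math.round(nx), gy = Math.round(ny);
  if (gx < 0 || gy < 0 || gx >= w || gy >= h || grid[gy * w + gx]) {
    line.dead = true;
    return;
  }
  grid[gy * w + gx] = 1; // claim the pixel
  line.x = nx; line.y = ny;
  line.length++;
}

// Rule 3: a finished line that is long enough spawns two new lines,
// here branching perpendicular to the parent (an assumption of mine).
function maybeSpawn(line, minLength) {
  if (!line.dead || line.length < minLength) return [];
  return [
    { x: line.x, y: line.y, angle: line.angle + Math.PI / 2, length: 0, dead: false },
    { x: line.x, y: line.y, angle: line.angle - Math.PI / 2, length: 0, dead: false }
  ];
}
```

Running many lines through these two functions every frame is enough to grow the crack-like pattern.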



A few things about this experiment. The first is the size of the canvas: I doubled it and then resized back to normal. This way you get a lot more detail and it feels less pixelated, especially the shadows. The second thing is that I draw by directly modifying the image data of the canvas. The process is to find the position, in the big image-data array, of the pixel you want to modify, change it, and call context.putImageData. A performance tip: calling putImageData every time you change a pixel is super heavy. In every frame I need to update a lot of pixels, so the better way is not to call putImageData until you've updated all the pixels you want to change, then call it just once per frame.
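The tip could be sketched like this ( setPixel is a made-up helper; the point is the single putImageData call per frame ):

```javascript
// Write one RGBA pixel straight into the ImageData array.
// Each pixel occupies 4 consecutive bytes: r, g, b, a.
function setPixel(imageData, x, y, r, g, b, a) {
  var i = (y * imageData.width + x) * 4;
  imageData.data[i]     = r;
  imageData.data[i + 1] = g;
  imageData.data[i + 2] = b;
  imageData.data[i + 3] = a;
}

// Per frame: mutate the array freely, then flush once.
// pixels.forEach(function (p) { setPixel(imageData, p.x, p.y, p.r, p.g, p.b, 255); });
// context.putImageData(imageData, 0, 0); // one call for all the pixels
```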


Put it on a cube

After I was done with this, I got the idea to put it on a cube. I imagined it would be quite interesting to watch the lines march over the edge from one side to another. So I started to create the texture like this:


It looks simple, but it's actually quite a challenge to figure out, when a line hits an edge, where it needs to reappear and what its new direction on the texture is. I'm really glad I sorted it all out eventually. Here is the result:



I’ve also put some screenshots here.



I had so much fun building this. And it reminds me of Mario Klingemann's ( Quasimondo ) talk at RTBC this year. I really like one thing he said: when you create something, most of the time you will find somebody has already done it before, and sometimes a really, really long time ago. But it doesn't matter; the important thing is the process. You will always find new inspiration from building it or solving the problems, and for me that's where new ideas begin. I feel sad when I work with people trying to find new ideas and, whenever an idea is brought to the table, someone just says: "It has been done before." I'm not a thinker; it's hard for me to just "think" of something new. I need to build something and start from there, and after trying to add new stuff to it or improving it a number of times, I might be able to find a new idea. This is how it works for me; if you just ask me to think of a new concept, I will never find it by thinking alone.

That's why I like to go back to these sites. They are old indeed, but they are timeless to me, and they are amazing. Almost every time, they give me new ideas. So if you haven't tried it, I encourage you to do it on your own. You will have a lot of fun during the process, and you will enjoy every unexpected result it brings you.

DIY Steampunk Keyboard

It is such a silly idea, but at the same time it was so much fun to build.

A few months ago I saw this ( Qwerkywriter ) on the internet. It caught my eye right away; it looks amazing and beautiful. However, there's one problem with it: it's too expensive. Don't get me wrong, I believe the quality of the final product is amazing, and I believe he spent a lot of time and effort building it. But for me it's just hard to spend 300 dollars on a keyboard. So I looked around and found that there are actually a lot of people making their own customised vintage or steampunk style keyboards. In the end I found this one; it looks great and seemed possible for me to build something similar. So I decided to build my own.


Getting the parts

The first thing to do was to get the parts. I chose to buy a mechanical keyboard because it has a better typing feel, and the sound of hitting the keys is closer to a vintage typewriter. This was not too hard to find. Then came the challenge: the keys. It took me some time to finally settle on metal buttons. I searched for typewriter keys, which already come with the letters, but they are quite expensive too; also, a modern keyboard has about 105 keys in total, while a vintage typewriter only has 35-50 keys, I think. That means buying 2 or 3 full sets and doing some customisation work as well. So I switched to searching for metal buttons, which you can find plenty of on Amazon or eBay. There are a couple of things to be careful about. The first is the size of the button: you don't want it too small, but you don't want it too big either; from my point of view, between 14 ~ 16mm is best. The second is that you want it to be flat; some buttons come with a small ring on the back. I don't have the proper tools to remove it, and bear in mind we are looking at over 100 keys, so removing it from every one of them would be a huge amount of work. In the end I found these:


These buttons are perfect for me. They do have that small thing on the back, but it's really flat so it doesn't matter. And I really like the edge on the front; it makes them look like vintage typewriter keys.


Building it

So finally we have all the things we need and can start building. What I did is really simple: I remove a key from the keyboard, cut off its 4 sides and leave only the top, then just use super glue to stick the buttons onto the keys. There are things you could do to improve this, such as minimising the surface of the key and making it thinner as well, but for me this was already good enough.


But these are only the small keys. For the bigger keys such as the space bar, shift, backspace and enter, I didn't want to put just one button on each; it would look empty and be hard to type on. So I decided just to remove the 4 sides and leave them like that, which looks quite OK to me, to be honest. However, some extra work was needed for these big keys: I had to polish the edges, because cutting them leaves a very ugly and uneven edge, and I wanted to make it smoother.

Again, I don't have the right tool for this, but I didn't want to spend money on a tool I won't use that often. So I asked myself: why not just build it myself, and have some fun with my Lego NXT! And here it is, my DIY Lego NXT polisher 😀


The button on the left is for turning it on and off; the ultrasonic sensor detects the distance from my hand to the wheels. The original idea was that the wheel would start automatically when my hands are close to the machine and stop when I move away. It does work, but I got some noise from the ultrasonic sensor ( it returns a lot of zeros ), and I also found it easier to just let it run. So in the end I disabled that part, but it was still very fun to play with these sensors. It's a simple thing that took me about 2 hours to build and get working, but it's perfect for polishing my keys. Here is a short video of how it works:


So that's it, that's my DIY steampunk keyboard. I've never felt so nerdy in my life 😀 There are still things that could be done to make it better, but I kind of enjoy the look of it now, so I'll just leave it like this. To be honest, it's not very difficult to make one. I spent the most time cutting the keys, but with proper tools it could go a lot faster. I also really enjoyed building the mini robot. I've always worked with code and haven't explored hardware much. My next goal is to learn Arduino and build some awesome robots!



Chinese calligraphy in 3D and Reasons to be creative

I've been playing with Chinese brushes for a while now. Besides the ink drops I created and used to make mountains, I've created these strokes too. My favourite part of these strokes is the gradient; I actually created them by accident. I found some amazing work by 張大千, and there are a lot of beautiful gradients in his work. I wondered how he did it, and it turned out to be not as complicated as I thought. Of course there was a lot of trial and error; I threw out a lot of failed tests before I got this. The trick is simple: put only coloured ink on your brush, and when you are about to draw, just dip some black ink on the tip of the brush, and there you go; really easy, and you get a very nice gradient. I had a lot of fun trying different proportions of coloured ink, black ink and water. In the end it was very hard for me to stop, and to pick the one I was going to use in my code, because each one of them is unique and has its own character. I think this is one of the reasons I like creating these ink textures so much. It's a similar process to doing generative art: you have a few controls, you just let it run and enjoy the result, and then you go back to tweak those controls and try to discover new controls or new settings. I've done this a lot in code, but it was the first time for me to do it outside the computer. I really enjoyed it.


Bringing it to code

After I created these strokes, I didn't really know what I was going to do with them; they look amazing, but I had no idea how to use them. Until one day I was building some prototypes for a project and doing some exploration, and one of the ideas was to create ribbons. Then suddenly this idea struck me: how about putting the textures on the ribbons? And that's how this started. The moment I put them on the ribbons, it felt like a perfect fit, and it really gives the feeling of real brushes. I showed it to my colleagues and they all loved it. So I started to make some decorations for it: adding a texture background, some ink drops, and lastly a video layer to overlay on the textures. It makes the texture move constantly; it's just a small touch, but it makes it feel different.


An online demo can be found here.


Drawing a smooth ribbon in 3D space

One problem I had while building these ribbons is that they sometimes twisted in 3D space.

The one on the left is twisted; you can see the normals ( the purple lines ) are flipped to the other side. Luckily there is a solution for this, called Parallel Transport Frames. I was building this in Cinder, and it's already part of the framework, so it's very straightforward to use; you can check Cinder's Tubular sample. Using this generates a smooth ribbon ( the image on the right ); you can see the normals are now all on the same side.


Another theme, another world

Two years ago, when I went back to Taiwan, I visited the national museum and found something really fascinating that has haunted my mind for years: this old book. These golden characters are just so beautiful to me, as is the dark blue background. Ever since then, I’ve been wanting to find a place to try out these colours.


After I built the prototype, this idea came to my mind again, so I put the golden colour on the strokes and a dark blue background to give it a test. Surprisingly, it works! I managed to keep the gradient of the strokes by turning them into greyscale and then overlaying the golden colour on top. The video layer helps a lot here as well.
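The greyscale-then-tint step can be sketched in a few lines: take the luminance of the stroke pixel, then scale the gold colour by it, so the dark parts of the ink gradient stay dark and the bright parts glow gold. This is just an illustration of the idea, not the actual shader from the project; the gold value and the `goldTint` name are made up:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using RGB = std::array<double, 3>;  // channels in [0, 1]

// Rec. 709 luminance of an RGB pixel.
double luminance(const RGB& c) {
    return 0.2126 * c[0] + 0.7152 * c[1] + 0.0722 * c[2];
}

// Desaturate the stroke pixel, then multiply the gold colour by that grey
// value: the original ink gradient survives as a brightness gradient.
RGB goldTint(const RGB& strokePixel, const RGB& gold) {
    double grey = luminance(strokePixel);
    return {gold[0] * grey, gold[1] * grey, gold[2] * grey};
}
```

The same per-pixel math works equally well in a fragment shader, where the video layer can be mixed in afterwards.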


It looked a little bit flat, so I added some random shadows around it. I projected this in my backyard for my kids and they love it:

I actually did a couple more prototypes based on this; however, they are built in C++ with Cinder. I’ve put my source code on GitHub, which you can find here. It includes two versions in Cinder: one with Leap Motion, the other for the projection table I made, with Kinect. There is also a web version built with WebGL. I didn’t have much time to go into the details of the code, so if you have any questions please send me an email. Also, the video texture I use is too big to upload, so you might need to find one yourself or remove it from the code.


Reasons to be creative

One of my resolutions this year is to give a speech on stage, so when I saw that Reasons to be Creative was looking for elevator pitchers I didn’t hesitate much before sending them my proposal. I was thinking of giving a talk about my DevArt project together with my friend, but John (Davey) of RTBC replied that this isn’t possible: the elevator pitch needs to be solo. He was very kind, though, and said that he liked both of our works, so he offered us one pitch each. We decided that my friend Bertrand would still present our Kuafu project, and I would talk about these ink experiments I made (strokes and ink drops).

It was my first time stepping on a stage, and I was scared to death. I am really glad I didn’t do this alone; Bert and I practiced a lot in our hotel room and timed ourselves, so we had a good sense of our timing. The crew from RTBC (Chris and Andy) helped a lot as well: they let us know all the details we needed to take care of and were always cheering for us. We rehearsed a couple of times, and in the end it all went well on stage. It is such a great experience that I won’t ever forget. I encourage you to do the same if you haven’t done it before; it’s scary, but it’s also a lot of fun! I am also really glad to have met all the other elevator pitchers; they are all very talented and amazing.


I was at RTBC last year, but I was working on the first day and got called back on the last day, so this year was the first time I attended the whole event. I’ve never been to another event like this, and it was totally mind-blowing. I enjoyed almost every talk I went to. Pretty much every talk implied one thing: work on your own stuff, don’t wait for things to happen. You might get a chance to use one of your old pieces someday. I think this is the biggest thing I got from these three days: keep doing the stuff you like and enjoy the process. I feel full of energy and motivation to work on my own projects after this event. Now I’m just trying to find a way to keep that motivation from being worn down by daily life.



I’m really glad I made this three-minute talk, and I really enjoyed these three days. I’ve been working by myself for a long time, and it’s really great to see that there are people who like my work. That gives me a lot of motivation to keep going. I hope I’ll get other chances to give speeches in the future, and I’ll keep working my way toward it. But the most important thing of all is to enjoy it and have fun. That I’ll keep in mind, and create more stuff.