Reasons to be creative 2015

I hadn’t planned to write this, but somehow I feel I should write something about it. So here it is, two months after the festival.

Reasons to be creative

This year’s RTBC felt really different to me, not only because it was the first time I gave a full-session talk, but also because of all the amazing people I met. I felt like a “guest” at RTBC for the last 2 years, but this time I felt at home. During these 3 days I was really relaxed (apart from my talk) and enjoyed all the talks. Most of all, I met a lot of friends and made a lot of new friends as well. It’s a strange feeling to finally meet some Twitter friends in real life; it makes everything much more real than just messages showing up on Twitter. You guys are totally awesome and I really enjoyed having conversations with you.

I’ve always enjoyed the talks at RTBC. It just feels different to me: I like the mixture of dev talks and designer talks, and they are all equally inspiring. I always feel motivated after the 3 days and come away with tons of ideas I want to make. This year was no exception. And there was one special talk: the one by Stacey Mulcahy. She is an amazing speaker and builds amazing stuff. The thing that touched me the most was the young game makers. It’s such a wonderful idea, and it got me thinking about doing something for kids as well. After becoming a father, I keep thinking about what I can do for my kids. I have some skills; what can I do with them? That’s what got me started building all these small experiments. I want my children to see me as a maker or a creator, rather than someone who just sits in front of the computer hitting the keyboard all day. I want them to understand that the computer is just a tool to help you build and create, and that we should focus more on the things we build and the idea/story behind them. They might be too young to get it, but I’ll keep doing this. Not only because I want to give them this idea, but also because I enjoy all these moments of building, testing and finally getting the kids to play with the result. When I saw the young game makers project, I was really excited. I saw a possibility that I might be able to bring my work and experiments to more kids and help them build things. I still don’t have much of an idea how to make it happen, but at least now I have a goal.


On Stage

It was such a wonderful experience to give my first full session. To be honest, I feel it’s actually much easier than the elevator pitch 😀 Having a full hour gives me more room to make mistakes. However, I was still nervous to death. I rehearsed like crazy the night before and the very morning. I actually felt less nervous once I started talking. I have to thank all my friends who gave me advice; you were totally right about everything. Nobody understands the thing you are talking about better than you do. Once I started talking, it felt just like walking through my process again. The other useful piece of advice I got is that you can never get rid of your nervousness, so just accept it and don’t try to fight it. I found this very useful, and it actually helped me relax before the talk. I know I was still quite nervous on stage; that’s why I finished about 5 minutes early. But that was good as well, since it left time for some Q&A. And I have to say I was really, really flattered that one of the questions asked about the Kuafu project I started last year. I am so glad that people still remember it, and also a bit embarrassed that I haven’t worked on it for a long time. But now I’ve made it my project for next year. I’ll bring it to a more presentable state.




The project: Dark Forest

I got the idea for this project right after John offered me the chance to speak. I had a quite clear goal when I started: I knew I wanted to learn flocking behaviour, I knew I wanted to make a small installation in my backyard, and I knew I wanted to test projection on the grass. In the end I made it, which was very important to me: I set a goal and achieved it. The result wasn’t what I expected in the first place, though. I didn’t know about the synchronised flashing behaviour, and I didn’t expect that I could find a way to simulate it. I expected the projection on the grass to look better than it actually did. I have to admit it was all these unexpected successes and failures that made the best memories for me. Looking back at it now, 2 months later, I see lots of room for improvement, but I still enjoy this project very much. And now I really like setting a goal, working my way towards it and documenting the process. If you are interested in this project, I’ve put everything here:

I want to say thank you to all the people who helped me on this project. It meant a lot to me!

So, a bit of random stuff, but I’m glad I made it to RTBC this year and met all the amazing people. I’m glad I made the project, and now I can move on to the next one!

Dark Forest – part 2

Here is part 2 of this project, where I start to explore different materials to project on. My first try was a wall in my backyard. It does look slightly better at a bigger scale, but it’s not very interesting. So I moved on to the grass, which creates some very different visuals. I like how it makes the grass shine. However, I don’t have the right equipment to hang the projector high enough to cover a larger area, which is a bit of a shame because I think that would make it look much better. Also, it’s not projecting from directly above; it’s at an angle, which sometimes makes the particles look like short lines instead of dots. This depends on where you stand as well.

Just when I was trying to move the projector, I accidentally projected the particles onto the trees. And that was something really interesting. It looks very similar to the fireflies I saw in Taiwan. The leaves give it a really different look and serve as a source of randomness in the system. I really like the result. Here is a short video of the experiments I’ve made:


I got another related idea while testing the projection on the grass: an interactive version for my kids to play with. The idea is simple: the fireflies gather where you stand. I started a new branch in my code, kept the particles but removed all the trees, and made the camera stay at the front. Then I connected it to a Kinect, which lets me capture the positions of my kids. This was my first time trying OpenCV with the Kinect; the performance and accuracy are amazing. I was using the findContours method and it returns a very impressive result:

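The actual detection code isn’t shown here, but roughly the idea looks like this minimal sketch using opencv.js (the thresholding step, the canvas name and all variable names are my assumptions, not the project code):

// Sketch: threshold the Kinect depth frame into a binary mask, find the
// contours, then take the centroid of the biggest one as the player position.
var src = cv.imread('depthCanvas');    // depth frame drawn into a canvas
var gray = new cv.Mat();
cv.cvtColor(src, gray, cv.COLOR_RGBA2GRAY);
cv.threshold(gray, gray, 120, 255, cv.THRESH_BINARY);   // keep only the "close" pixels

var contours = new cv.MatVector();
var hierarchy = new cv.Mat();
cv.findContours(gray, contours, hierarchy, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE);

var best = 0, bestArea = 0;
for (var i = 0; i < contours.size(); i++) {
    var area = cv.contourArea(contours.get(i));
    if (area > bestArea) { bestArea = area; best = i; }
}
var m = cv.moments(contours.get(best));
var playerX = m.m10 / m.m00;    // centroid of the biggest blob
var playerY = m.m01 / m.m00;
// (real code should also call .delete() on the Mats to free the memory)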

The next step was to remap that position into the flocking system and then create an attractor force to pull the particles closer to this point. I had great fun building this, not only because I was playing with OpenCV and the Kinect, but also because my kids’ reaction to it was just wonderful. During the weekend they kept asking me if they could play with the fireflies again that night. And after I made it, my daughter just started dancing with the particles. It’s one of the best memories of my life.
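The attractor itself is just a force towards the tracked point that fades with distance. A minimal sketch of the idea (the names and the falloff are my own choices, not taken from the project code):

// Pull a particle towards the tracked point; the pull fades out with distance.
function applyAttractor(particle, target, strength, radius) {
    var dx = target.x - particle.x;
    var dy = target.y - particle.y;
    var dist = Math.sqrt(dx * dx + dy * dy);
    if (dist === 0 || dist > radius) return;
    var f = strength * (1 - dist / radius) / dist;   // normalise, then fade out
    particle.vx += dx * f;
    particle.vy += dy * f;
}

Here is a short video of that night: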

I’ve made another test, projecting on my chalkboard as well:


Now I’m working on finishing the project. I’ve started adding the terrain, the trees and the background. Here are some WIP screenshots:






I am really excited about it and glad to see things finally coming together. I’ll keep working on it, and hope to see you at Reasons to be Creative!


Dark Forest – part 1

Hi, here is part one of this project, which is also going to be part of my talk at Reasons to be Creative this year.


I got this idea from this beautiful photo:


I fell in love with it right away and wanted to do something with it. The first idea I had was the flocking experiments; I’ve always enjoyed them and had wanted to build one myself for a long time. The picture gives me the feeling that the fireflies are swarming in the forest, so I decided to create a flocking system of fireflies flying among the trees.


I started building my first experiment from the particle stream I made a while ago, adding some cylinders as placeholders for the trees that the particles fly around. Here’s what I got:


And then I started wondering how it would look if I projected it onto my chalkboard wall. I also thought it would be interesting if the trees were actually drawn on the wall instead of being rendered in 3D, just to give it a slightly different feeling. So I render the trees in the background colour: when the particles pass behind them they are occluded, but what shows is the colour of the background. I wasn’t 100% sure this would create the surrounding feeling I wanted, but I gave it a shot. Surprisingly, it works quite well.
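In three.js terms the trick is tiny; this is my sketch of it, not the original code (the renderer, scene and sizes are assumptions):

// Occluder trick: paint the tree geometry in the clear colour. It still writes
// to the depth buffer, so particles behind it are hidden, and on the wall the
// chalk drawing shows through where the "invisible" cylinder is.
var BACKGROUND = 0x000000;
renderer.setClearColor(BACKGROUND);

var occluder = new THREE.Mesh(
    new THREE.CylinderGeometry(0.5, 0.5, 10, 16),
    new THREE.MeshBasicMaterial({ color: BACKGROUND })   // same colour as the background
);
scene.add(occluder);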

I was really happy with the result and decided to take it to the next step.


A couple of months later I came across a video about the synchronising behaviour of fireflies. I was really amazed and excited by it, and thought it would be great fun to try to reproduce this behaviour in my project. I started searching for videos, but there aren’t too many; then I found this one:

The way they synchronise together is just unbelievable. I went back online and tried to find ways of recreating this synchronisation. They are not too hard to find. I tried several; they work, but the results are not very satisfying:


The first one doesn’t really sync completely; the fireflies kind of form into groups. The second one synchronises too perfectly, which is obviously not what happens in the real world. So I read more articles about firefly synchronisation and finally found this approach: imagine each firefly keeps a circular period. Each firefly checks its neighbours within a certain distance; if it senses its flashing cycle is falling behind its neighbours’, it speeds up, otherwise it slows down. Just 2 simple rules. This video demonstrates how it works:


This time I was really satisfied with the result. Of course there are some tricks to make it less uniform, such as: if the period difference between a firefly and its neighbour is smaller than a certain value, stop adjusting its speed. This makes sure they won’t end up in perfect synchronisation. The other reason I love this solution is that it’s very similar to how flocking works: you don’t need to know the overall state, you just need to focus on your neighbours and adjust yourself. That also makes it perfect for my system, because it can be implemented the same way as the flocking behaviour.
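A minimal sketch of those rules in JavaScript (the constants and names are mine, not from the project; each firefly carries x, y, phase and speed):

// Each firefly runs a phase from 0..1 and flashes when the phase wraps around.
// It compares its phase with its neighbours and nudges its speed to catch up or
// slow down. Inside a small dead zone it stops adjusting, so the group never
// reaches perfect synchronisation.
var NEIGHBOUR_DIST = 50;     // how far a firefly can "see"
var ADJUST = 0.0005;         // how hard it corrects its speed
var DEAD_ZONE = 0.02;        // phase differences below this are ignored

function updateFirefly(firefly, fireflies) {
    for (var i = 0; i < fireflies.length; i++) {
        var other = fireflies[i];
        if (other === firefly) continue;
        var dx = other.x - firefly.x;
        var dy = other.y - firefly.y;
        if (dx * dx + dy * dy > NEIGHBOUR_DIST * NEIGHBOUR_DIST) continue;

        var diff = other.phase - firefly.phase;
        if (diff > 0.5) diff -= 1;                    // compare on the circle
        if (diff < -0.5) diff += 1;
        if (Math.abs(diff) < DEAD_ZONE) continue;     // close enough, leave it

        firefly.speed += diff > 0 ? ADJUST : -ADJUST; // behind: speed up, ahead: slow down
    }
    firefly.phase = (firefly.phase + firefly.speed) % 1;
    firefly.flashing = firefly.phase < 0.1;           // flash at the start of each cycle
}

Here is the result: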


With this I was ready for the next step: projection testing in my backyard. I want to bring it out of the screen and see if it works better projected on the grass.

Harpapong – Challenge of 400 pixels

A couple of months ago my friend Owen approached me with this project. It was based on his great Harpa Pong work from last year. The basic idea is that they turned the facades of the Harpa concert hall in Reykjavík into a huge canvas by putting an LED light in each window. Last year they created a pong game on this enormous canvas that users could play with their phones. This year, during Sónar Reykjavík 2015, they wanted to put audio visualisations of the music from the main stage on it, and Owen asked me if I would like to make one of the visualisations. I was really excited about the idea and said yes right away, and then came the challenge: there are only about 400 pixels per facade. So how big exactly is this canvas? About this big: just that tiny thing in the centre.

This is definitely the smallest canvas I have ever worked on. I’m used to creating visuals on a big canvas, but suddenly we only had 400 pixels, which was a whole new challenge to me. At first I tested with basic geometries such as lines and rectangles. But at the same time I was trying out some ripples for other projects, and I wondered what would happen if I put the ripples on a canvas of this size. Would they still be recognisable? This is the ripple I made:


When there’s a beat, it triggers a wave. In the fragment shader I add all the waves together and, based on the height of this map, I map the result to different colours, which I pick randomly from COLOURlovers.
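To make that concrete, here is a CPU sketch of the same idea (my own simplification of the shader logic; the constants are arbitrary):

// Each beat spawns a ripple; the summed wave height at a pixel picks a colour.
var ripples = [];

function onBeat(time) {
    ripples.push({ x: Math.random(), y: Math.random(), born: time });
}

// Sum one gaussian ring per ripple; each ring expands and fades with age.
function heightAt(x, y, time) {
    var h = 0;
    for (var i = 0; i < ripples.length; i++) {
        var r = ripples[i];
        var age = time - r.born;
        var d = Math.sqrt((x - r.x) * (x - r.x) + (y - r.y) * (y - r.y));
        var ring = Math.exp(-Math.pow((d - age * 0.3) * 20, 2));   // expanding ring
        h += ring * Math.exp(-age * 2);                            // fade out over time
    }
    return h;
}

// Map the summed height into a palette (e.g. random colours from COLOURlovers).
function colourAt(x, y, time, palette) {
    var h = Math.min(heightAt(x, y, time), 1);
    return palette[Math.floor(h * (palette.length - 1))];
}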

I put it into the platform tool Owen built, and this is what I got. It’s hard to recognise the circles, but the movement and the changing colours are really interesting.



So that’s my contribution to the project. You can check out the live demo here:

The project page: and a short film about the project:

Again, thanks Owen for inviting me to this project. I am really proud to be part of it, and I had a great time creating visuals and playing on such a small canvas.


Maps, portraits and chalkboard

Just playing with maps and portraits, inspired by the amazing work of Ed Fairburn.

There’s not too much on the code side. I just created a flood fill function, so the program picks a random pixel and then fills the region around it. It actually feels more like photoshopping: combining the map image and the portrait using masks and blend modes; the code itself doesn’t alter the images at all. But I really enjoy watching the image being generated. Then I started to draw the map on my chalkboard wall and project these results onto it, which looks really good.
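For reference, here is a minimal sketch of the flood fill mentioned above, working on canvas ImageData (the tolerance and names are my assumptions):

// Stack-based flood fill: starting from a random pixel, collect every connected
// pixel whose colour is close enough to the seed colour.
function floodFill(imageData, startX, startY, tolerance) {
    var width = imageData.width, height = imageData.height, data = imageData.data;
    var idx = function (x, y) { return (y * width + x) * 4; };
    var s = idx(startX, startY);
    var seed = [data[s], data[s + 1], data[s + 2]];
    var visited = new Uint8Array(width * height);
    var region = [];
    var stack = [[startX, startY]];

    while (stack.length) {
        var p = stack.pop();
        var x = p[0], y = p[1];
        if (x < 0 || y < 0 || x >= width || y >= height) continue;
        if (visited[y * width + x]) continue;
        visited[y * width + x] = 1;

        var i = idx(x, y);
        var diff = Math.abs(data[i] - seed[0]) + Math.abs(data[i + 1] - seed[1]) + Math.abs(data[i + 2] - seed[2]);
        if (diff > tolerance) continue;

        region.push([x, y]);   // this pixel belongs to the region to reveal
        stack.push([x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]);
    }
    return region;
}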




And one of my colleagues said it would be interesting if the program could generate the city shapes automatically, which reminded me of the old Substrate right away. I did a quick test and the result is very interesting as well. These feel more like generative art to me: they still use the portraits, but each run can generate quite a different result. There are some more pictures here.



Touch table

I built this projection / touch table a while ago, but never got a chance to write about it until now. I got the idea last year: at the time I needed a working table for myself, so I thought, why don’t I just build one for both working and projection? The idea is simple: make the top of the table removable and keep the width/height ratio at 16:9, which is the aspect ratio of my projector.



Building the table

For the frame I used some pieces of wood left over from my IKEA shelves, and found a big, thick piece of wood in my backyard which was perfect for the top. It took me about 2 days to build since I don’t have proper tools for this; it would be much faster with the right tools. And of course the quality would be much better too 😀

Projection and Touch

When I want to project, I just remove the top and cover the frame with a sheet. The way the touch works is that I put a Kinect under the table, facing straight up; when I press on the sheet, the Kinect captures the depth difference at the press point. It’s not a complicated concept, but it needs a lot of tweaking and calibration, e.g. finding the right distance range to detect, masking out the table frame, noise reduction, etc. However, one thing matters a lot: the sheet. I was using a bed sheet; it works, but it’s not very stretchy, so when you press you also pull down quite a big area, and the result isn’t very accurate. Later I found a really stretchy piece of cloth where a press creates a small point, which is perfect for position detection.
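The detection itself can be sketched like this (the depth values, ranges and names here are my assumptions; the real numbers depend on the table and need calibration):

// Find the press point: the pixel closest to the Kinect inside the expected
// window. SHEET_DEPTH is the resting distance of the sheet; a press brings
// part of the sheet closer to the camera underneath.
var SHEET_DEPTH = 1200;   // mm, calibrated per setup
var MIN_PRESS = 10;       // smaller dips than this are treated as noise
var MAX_PRESS = 60;       // bigger dips are rejected (e.g. a hand under the table)

function findPressPoint(depth, width, height) {
    var best = null, bestD = Infinity;
    for (var y = 0; y < height; y++) {
        for (var x = 0; x < width; x++) {
            var d = depth[y * width + x];
            if (d === 0) continue;                   // 0 = invalid reading
            var press = SHEET_DEPTH - d;             // how far the sheet moved down
            if (press < MIN_PRESS || press > MAX_PRESS) continue;
            if (d < bestD) { bestD = d; best = { x: x, y: y }; }
        }
    }
    return best;   // null when nothing is pressed
}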


The next step is just to use this point as a virtual mouse. Theoretically it could detect multitouch, as long as the sheet shows the different points you press, but it would also need an algorithm to find all the distinct points. I haven’t tried OpenCV for this yet; maybe there’s something there to use.



It’s a simple and silly idea, and the table is really shaky, but I really enjoy it. I especially like the touch feeling; it’s very satisfying. Building the table itself was a lot of fun too. I really enjoy building real stuff that I can actually touch; it’s very different from code, but both are very interesting to me.

Blow: My Christmas Experiment this year
I was really surprised when I got the invitation from David to create a project for the Christmas Experiments this year. I am a huge fan of them and have always wondered if I could make my own contribution. I cannot express how excited I was when I received the email.

At the time I was working with some particles, so I came up with this idea: blow the particles (sand) away to reveal the image. Here is the first test:
I had a lot of fun building this; playing with particles is always my favourite, and it looks cool. However, it looked more like a Chinese painting and I didn’t know how to make it feel more like the holidays. Then my friend Bert came up with this design with golden particles and a pink background, and suddenly it became very holiday-like.


In this experiment I was still using a texture to save the particle positions and performing the calculations in the shader, as in my last post. In total there are 512 x 512 particles, which is exactly the size of the image. I use a black/white image as a map: only the particles on the black parts stay, and the ones on the white parts fly away. For the reveal I put a centre at a random position and combined it with Perlin noise to give it a more natural feeling. The last thing is the gold of the particles, which I simply sampled from an image, and it works quite well. I think it could be more interesting with some point light effects, but I ran out of time, and it already looked quite good to me, so I didn’t try that in the end.
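Roughly, each particle decides whether to fly away based on its distance to the reveal centre plus a noise offset. A sketch of that per-particle test (my own reconstruction; noise3 stands in for any Perlin noise implementation):

// A particle starts flying once the reveal radius, distorted by Perlin noise,
// has grown past it, and only if the map is white at its position
// (black pixels stay and form the image).
function shouldFly(particle, centre, revealRadius, mapValue, time) {
    var dx = particle.x - centre.x;
    var dy = particle.y - centre.y;
    var dist = Math.sqrt(dx * dx + dy * dy);
    var wobble = noise3(particle.x * 0.01, particle.y * 0.01, time) * 60;   // ragged edge
    return mapValue > 0.5 && dist < revealRadius + wobble;
}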


So that’s it; that’s how I built this experiment. It’s simple, but I had a lot of fun building it. Especially after a very stressful project, I felt I needed to do something fun to release the pressure. Again, I am very thankful to be part of this and really proud to stand with all the other talented developers. I enjoy all the experiments and can’t wait to see the rest!

WebGL GPU Particle stream

I once blogged about a project where I built an interactive particle stream in Cinder, but I lost the post when I moved to a new webspace. Now I have rebuilt it with WebGL and want to post it again, along with some tips I learned while building it. First things first, the live demo is here:

and the source code is available here:


Saving data in the texture

This is a quite common technique when dealing with a large particle system: save the information of each particle in a texture (such as its position and velocity) and perform the movement calculations on the GPU. Then, when you want to move the particles, you just need to modify this texture. The basic concept is that a pixel contains 3 colour channels, Red, Green and Blue, so we can use these 3 channels to save x, y and z coordinates. It could be the x, y, z of a particle’s position, or the x, y, z of its velocity. The idea is simple, but it needs some work to make it function. The first problem is how to map a position to a colour: a position could be anything from negative to positive, but the range of a colour channel is only from 0 to 1. To make it work, we need to set a range for the positions, with the zero point at (0.5, 0.5, 0.5): anything smaller than .5 is negative, and positive if greater than .5. A simple example that converts a pixel colour to a position in the range -100 to 100:

var range = 100;
position.x = ( color.r - .5 ) * range * 2.0;
position.y = ( color.g - .5 ) * range * 2.0;
position.z = ( color.b - .5 ) * range * 2.0;

And vice versa, you can save a position as a colour like this:

color.r = (position.x/range + 1.0 ) * .5;
color.g = (position.y/range + 1.0 ) * .5;
color.b = (position.z/range + 1.0 ) * .5;

So each pixel of the texture represents a set of x, y, z coordinates; that’s how we save the positions of all the particles.



But how exactly do we write our data to a texture? We need to use a framebuffer. A framebuffer allows your program to render to a texture instead of rendering directly to the screen. It’s a very useful tool, especially when dealing with post effects; to learn more about framebuffers you can check this post. With a framebuffer we can now save the data to a texture, but here I met the biggest problem in this experiment: precision. Because we are working in colour space, all the numbers are really small. For example, the speed of a particle could be only .01, and its acceleration even smaller. So when you multiply things together, sometimes the result gets too small and the pixel cannot hold the precision. This happened both in this experiment and in the Cinder project I mentioned. In WebGL, by default (gl.UNSIGNED_BYTE), each colour channel has 8 bits to store its data. In our case this is not enough. Luckily there’s a solution: using gl.FLOAT instead of gl.UNSIGNED_BYTE, which gives each colour channel 32 bits. To use gl.FLOAT, we need to do one extra step:

gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, this.frameBuffer.width, this.frameBuffer.height, 0, gl.RGBA, gl.FLOAT, null);
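One detail that isn’t in the snippet above: in WebGL 1, floating-point textures are an extension, so it has to be enabled first, and not every device supports it:

var ext = gl.getExtension('OES_texture_float');   // required before creating gl.FLOAT textures
if (!ext) { console.warn('Float textures are not supported on this device'); }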

With that in place, WebGL can use gl.FLOAT and our precision problem is solved. Here is a screenshot of what the framebuffer looks like in this experiment: I save the positions of the particles on the left side of the framebuffer, and their velocities on the right.



Particle movements

The next step is to calculate the movement of the particles. It is all based on these rules:

new velocity = old velocity + acceleration
new position = old position + velocity

So with our texture, on the left side (the positions of the particles), we just need to fetch each particle’s velocity and add it to its current position. Don’t forget that the stored velocity is in the range 0 to 1, so we need to subtract vec3(.5) from it:

if(vTextureCoord.x < .5) {      //  POSITION
    vec2 coordVel       = vec2(vTextureCoord.x + .5, vTextureCoord.y);   // get the coordinate of the velocity pixel
    vec3 position       = texture2D(texture, vTextureCoord).rgb;         // current position
    vec3 velocity       = texture2D(texture, coordVel).rgb;              // current velocity
    position            += (velocity - vec3(.5)) * velOffset;            // move by the re-centered velocity
    gl_FragColor        = vec4(position, 1.0);                           // write the new position back
}

For the right side (the velocities), I want to add a random force to each particle based on where it is. I found a very useful GLSL noise function here. The shader code for this side now looks like this:

else { // vTextureCoord.x > .5 :  VELOCITY
    vec2 coordPos       = vec2(vTextureCoord.x - .5, vTextureCoord.y);   // get the coordinate of the position pixel
    vec3 position       = texture2D(texture, coordPos).rgb;
    vec3 velocity       = texture2D(texture, vTextureCoord).rgb;

    float xAcc          = snoise(position.x, position.y, time);
    float yAcc          = snoise(position.y, position.z, time);
    float zAcc          = snoise(position.z, position.x, time);

    velocity            += vec3(xAcc, yAcc, zAcc);

    gl_FragColor        = vec4(velocity, 1.0);                           // write the new velocity back
}
Here snoise is the noise function, and I pass in time as well so it keeps changing constantly. This is just roughly how it looks; in real life you need to tweak the values to get a natural movement feeling. The last thing is that you need to prepare 2 framebuffers and swap them every frame, so you always read the result of the last frame and write the update to the other framebuffer.

this._vCal.render( this.fboCurrent.getTexture(), this.fboForce.getTexture() ); // Perform the calculation


// swap the framebuffers, so the next frame reads what we just wrote
var tmp = this.fboTarget;
this.fboTarget = this.fboCurrent;
this.fboCurrent = tmp;


Adding interaction

The final step is to add interaction. With the Leap Motion we can easily get the position and velocity of the hands, so we can place a force at the position of the hand, with its strength determined by the length of the hand’s velocity. As for the direction, there are a couple of options. The first is to take the direction of the velocity, which is the most common. However, it can be improved by using the direction of your palm, which the Leap Motion is able to give us (hand.palmNormal). This makes it feel better when you do several movements in a row, trying to push the particles to the same place. One final touch is to check the dot product of the hand velocity and this palmNormal: if the result is smaller than zero, meaning they point in different directions, we set the strength to zero to avoid weird movements.
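As a sketch, the whole force computation with the Leap JS API could look like this (hand.palmPosition, hand.palmVelocity and hand.palmNormal are real Leap properties; the scale factor and names are mine):

// Build the gesture force from a Leap Motion hand (e.g. frame.hands[0]).
function gestureForce(hand) {
    var v = hand.palmVelocity;                        // [x, y, z] in mm/s
    var n = hand.palmNormal;                          // unit vector of the palm
    var speed = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);

    // If the palm faces away from the movement direction, kill the force.
    var dot = (v[0] * n[0] + v[1] * n[1] + v[2] * n[2]) / (speed || 1);
    var strength = dot < 0 ? 0 : speed * 0.001;       // the scale factor is a guess

    return {
        position: hand.palmPosition,                  // where the force is applied
        direction: n,                                 // push along the palm normal
        strength: strength
    };
}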

To apply this force to our particles, we first need to create a force texture like this:


Again, we use colour to represent the force. Back in the shader, when we calculate the velocity of a particle, we need to add this force as well. The shader now looks like this:

else { // vTextureCoord.x > .5 :  VELOCITY
    vec2 coordPos       = vec2(vTextureCoord.x - .5, vTextureCoord.y);   // get the coordinate of the position pixel
    vec3 position       = texture2D(texture, coordPos).rgb;
    vec3 velocity       = texture2D(texture, vTextureCoord).rgb;

    float xAcc          = snoise(position.x, position.y, time);
    float yAcc          = snoise(position.y, position.z, time);
    float zAcc          = snoise(position.z, position.x, time);
    velocity            += vec3(xAcc, yAcc, zAcc);

    // get the force pixel by the position of the particle
    vec3 forceGesture   = texture2D(textureForce, position.xy).rgb;

    // map the force value to -.5 to .5 and add it to velocity
    velocity            += forceGesture - vec3(.5);

    gl_FragColor        = vec4(velocity, 1.0);                           // write the new velocity back
}


So that’s how I built this. The concept is not complicated, but there are a lot of small steps to take care of. Also, because everything happens in textures and shaders, it’s hard to debug; sometimes you just get a white or black texture and it’s hard to tell which step went wrong. But once you get it all working and can push a huge number of particles, that feeling is incredible. It’s really good practice for learning framebuffers, shaders and particle movement; I learned a lot and had a lot of fun building it.

Here is a short video of the Samsung project I built, if you are curious how it looks in motion:


Substrate Cube

I am still amazed by Jared Tarbell’s work every time I go back to his site, even though it was created 10 years ago. I tried to recreate his Substrate in Flash years ago, and it was so much fun to build.

Last week I went back to his Substrate again and wanted to recreate it in JavaScript. I hadn’t done any generative coding for a while, and it felt so good to pick it up again. I really like the feeling of setting up some rules and just letting the code run; every time, you get an unexpected result and are amazed by it. For this Substrate experiment, the rules are simple:

1. Start a line and moving forward.
2. When hit the edge of the canvas or another line, stop.
3. If this line is longer than minimum length required, generate 2 more lines from this line.

It’s just this simple, and it creates such an amazing result. Of course there are a few extra bits to make it look better, but this is the basic idea. So here is the JavaScript version I created:

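For reference, here is a minimal sketch of just those three rules (my own compact reconstruction, not the code behind the demo):

// Lines crawl across the canvas, stop when they hit the edge or another line,
// and lines that grew long enough spawn two perpendicular children.
var W = 600, H = 600, MIN_LENGTH = 20;
var occupied = new Uint8Array(W * H);   // marks pixels already drawn
var lines = [{ x: W / 2, y: H / 2, angle: Math.random() * Math.PI * 2, length: 0 }];

function step(ctx) {
    var next = [];
    for (var i = 0; i < lines.length; i++) {
        var l = lines[i];
        var nx = l.x + Math.cos(l.angle);
        var ny = l.y + Math.sin(l.angle);
        var hit = nx < 0 || ny < 0 || nx >= W || ny >= H || occupied[(ny | 0) * W + (nx | 0)];
        if (hit) {
            // rule 3: a long enough line spawns 2 new lines from random points on it
            if (l.length > MIN_LENGTH) {
                [Math.PI / 2, -Math.PI / 2].forEach(function (turn) {
                    var t = Math.random();
                    next.push({
                        x: l.x - Math.cos(l.angle) * l.length * t,
                        y: l.y - Math.sin(l.angle) * l.length * t,
                        angle: l.angle + turn,
                        length: 0
                    });
                });
            }
            continue;   // rule 2: this line stops
        }
        occupied[(ny | 0) * W + (nx | 0)] = 1;
        ctx.fillRect(nx, ny, 1, 1);   // rule 1: keep moving forward
        l.x = nx; l.y = ny; l.length++;
        next.push(l);
    }
    lines = next;
}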

A few things about this experiment. The first is the size of the canvas: I doubled it and then scaled it back down to normal. This way you get a lot more detail and it feels less pixelated, especially the shadows. The second thing is that I draw by directly modifying the image data of the canvas: find the position of the pixel you want to modify in the big image data array, change it, and call context.putImageData. A performance tip: calling putImageData every time you change a pixel is super heavy, and in every frame I need to update a lot of pixels. The better way is not to call putImageData until you’ve updated all the pixels you want to change, then call it just once per frame.
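In code, the batching looks roughly like this (a sketch; the names are mine):

// Mutate the ImageData array freely during the frame, then upload it once.
var imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);

function setPixel(x, y, r, g, b) {
    var i = (y * canvas.width + x) * 4;
    imageData.data[i] = r;
    imageData.data[i + 1] = g;
    imageData.data[i + 2] = b;
    imageData.data[i + 3] = 255;
}

function frame() {
    // ... call setPixel() as many times as needed ...
    ctx.putImageData(imageData, 0, 0);   // one single upload per frame
    requestAnimationFrame(frame);
}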


Put it on a cube

After I was done with this, I got the idea to put it on a cube. I imagined it would be quite interesting to watch the lines march over an edge from one side to another. So I started by creating the texture like this:


It looks simple, but it was actually quite a challenge to work out, when a line hits an edge, where it needs to reappear and what its new direction on the texture should be. I’m really glad I sorted it all out eventually. Here is the result:


I’ve also put some screenshots here.



I had so much fun building this. And it reminds me of Mario Klingemann’s (Quasimondo) talk at RTBC this year. I really liked one thing he said: when you create something, most of the time you will find somebody has already done it before, sometimes a really, really long time ago. But it doesn’t matter; the important thing is the process. You will always find new inspiration from building it or solving the problems, and for me that’s where new ideas begin. I feel sad when I work with people trying to find new ideas and, when an idea is brought to the table, someone just says: “It has been done before.” I’m not a thinker; it’s hard for me to just “think” of something new. I need to build something and start from there, and after trying to add new stuff or improve it a number of times, I might be able to find a new idea. This is how it works for me; if you just ask me to think of a new concept, I will never be able to find it by thinking alone.

That’s why I like to go back to these sites. They are old indeed, but to me they are timeless and amazing, and almost every time they give me some new ideas. So if you haven’t tried it, I encourage you to build it on your own. You will have a lot of fun during the process, and you will enjoy every unexpected result it brings you.

DIY Steampunk Keyboard

It is such a silly idea, but at the same time it was so much fun to build.

A few months ago I saw this (the Qwerkywriter) on the internet. It caught my eye right away; it looks amazing and beautiful. However, there’s one problem with it: it’s too expensive. Don’t get me wrong, I believe the quality of the final product is amazing, and I believe he spent a lot of time and effort building it. But for me it’s just hard to spend 300 dollars on a keyboard. So I looked around and found that there are actually a lot of people making their own customised vintage or steampunk style keyboards. In the end I found this one; it looks great and seemed possible for me to build something similar. So I decided to build my own.


Getting the parts

The first thing to do is to get the parts. I chose to buy a mechanical keyboard because it has a better typing feel, and the sound of hitting the keys is closer to a vintage typewriter. This was not too hard to find. Then came the challenge: the keys. It took me some time to finally settle on metal buttons. I searched for typewriter keys, which already come with the letters, but they are quite expensive too. Also, a modern keyboard has about 105 keys in total, while a vintage typewriter only has 35-50, which means you would need to buy 2 or 3 full sets and do some customisation work on top. So I switched to searching for metal buttons, which you can find plenty of on Amazon or eBay. There are a couple of things to be careful about. The first is the size of the button: you don’t want it too small, but you don’t want it too big either; in my opinion, between 14 and 16 mm is best. The second thing is that you want it to be flat. Some buttons come with a small ring on the back, and I don’t have the proper tool to remove it; bear in mind we are looking at over 100 keys, so removing it from every one of them would be a huge amount of work. In the end I found these:


These buttons are perfect for me. They do have that small thing on the back, but it’s really flat, so it doesn’t matter. And I really like the rim on the front; it makes them look like vintage typewriter keys.


Building it

So finally we have everything we need and can start building. What I did is really simple: I removed each keycap from the keyboard, cut off its 4 sides leaving only the top, and then used super glue to attach the buttons onto the keys. There are things you could do to improve this, such as minimising the surface of the key and making it thinner as well, but for me this is already good enough.


But these are only the small keys. For the bigger keys such as the space bar, shift, backspace and enter, I didn’t want to put just a single button on each; it would look empty and be hard to type on. So I decided to simply remove the 4 sides and leave them like that, which looks quite OK to me, to be honest. However, some extra work was needed for these big keys: I had to polish the edges, because cutting them leaves a very ugly and uneven edge, and I wanted to make them smoother.

Again, I don’t have the right tool for this, but I also didn’t want to spend money on a tool I won’t use that often. So I asked myself: why not just build one myself, and have some fun with my Lego NXT! And here it is, my DIY Lego NXT polisher 😀


The button on the left is for turning it on and off, and the ultrasonic sensor detects the distance from my hand to the wheels. The original idea was that the wheels would start automatically when my hands came close to the machine and stop when I moved away. It does work, but I get some noise from the ultrasonic sensor (it returns a lot of zeros), and I also found it easier to just let it run. So in the end I disabled that part, but it was still very fun to play with these sensors. It’s a simple thing that took me about 2 hours to build and get working, but it’s perfect for polishing my keys. Here is a short video of how it works:


So that’s it, that’s my DIY steampunk keyboard. I’ve never felt so nerdy in my life 😀 There are still things that could be done to make it better, but I kind of enjoy the look of it now, so I’ll just leave it like this for the moment. To be honest, it’s not very difficult to make one. I spent most of the time cutting the keys; proper tools would save a lot of time. I also really enjoyed building the mini robot. I’ve always been working with code and haven’t explored hardware that much. My next goal is to learn Arduino and build some awesome robots!