
Case study – Night Eye

Here comes my annual blog post 😛 I should post more often, but I got too busy/lazy.

Recently I was invited to take part in this year's Christmas Experiments. Along with my colleague Clement and my friend Bertrand, we came up with the idea of using abstract lines to recreate the shapes of forest animals.

You can check out the experiment here:
http://christmasexperiments.com/2016/01/night-eye/

Also, if you happen to have an HTC Vive, give it a try in the latest Chromium build; it's also a WebVR project.
In this project Clement took care of the line animation while I focused on the environment and the VR part. The following is a case study of my part.

 

Initial Designs

Here are some pictures of the initial designs with different colour themes:

nighteye1

nighteye2

 

Reflection Matrix

The idea started with my experiments with reflections. I've always wanted to understand how to create a proper reflection and have failed so many times. But then I found a really good tutorial on YouTube that walks through the process step by step. I highly recommend having a look if you are interested in implementing reflections yourself. The tutorial is in Java, but it covers all the concepts and explains them clearly, plus the shader doesn't change (too much).

https://www.youtube.com/playlist?list=PLRIWtICgwaX23jiqVByUs0bqhnalNTNZh
The only problem I had following this tutorial is the clipping plane, which I think WebGL doesn't support (please correct me if I am wrong). So I ended up just using discard to do the simplest possible clipping. I also found another really good presentation about rendering reflections in WebGL; it mentions other ways to clip, so have a look:
https://29a.ch/slides/2012/webglwater/
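For reference, the discard-based clipping I mentioned is just a one-liner in the fragment shader. A minimal sketch (GLSL), assuming a uClipY uniform holding the height of the reflection plane and a vWorldPosition varying from the vertex shader:

precision mediump float;
uniform float uClipY;
varying vec3 vWorldPosition;

void main() {
    // in the reflection pass, throw away every fragment below the plane
    if (vWorldPosition.y < uClipY) discard;
    gl_FragColor = vec4(1.0);   // ...normal shading would go here
}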

 

Editor


In order to get the best positions and the right angles for the animals, we created a simple editor to place the animals and tweak the camera angles. It took a little extra time to build, but it saved us a lot of time tweaking; it's always easier when you can visualise your settings live. After we selected the positions and camera angles in the editor, we just exported a big JSON file to the project and it was done.

 

WebVR

In this project we wanted to try the latest WebVR API, which is really amazing! They make it really simple to implement. The first step is to get the VRDisplay and set up the frame data holder (note that getVRDisplays() returns a promise of an array of displays):

navigator.getVRDisplays().then((displays) => {
    vrDisplay = displays[0];    // getVRDisplays() resolves with an array of displays
});
frameData = new VRFrameData();

Then in the render loop you can get the current frame data with:

vrDisplay.getFrameData(frameData);

Rendering

The rendering becomes really simple too: WebVR now returns the view matrix and the projection matrix for each eye.

setEye(mDir) {
    this._projection = this._frameData[`${mDir}ProjectionMatrix`];
    this._matrix = this._frameData[`${mDir}ViewMatrix`];
}

You can just pass them into your shader and you are ready to go. No need to set up the eye separation, no need to calculate the projection matrix; it's as simple as that. The code becomes really clean too: set the scissor, set the camera, render, done.

GL.enable(GL.SCISSOR_TEST);
const w2 = GL.width/2;

//	get VR data
this.cameraVive.updateCamera(frameData);

//	left eye
this.cameraVive.setEye('left');
scissor(0, 0, w2, GL.height);
GL.setMatrices(this.cameraVive);
this._renderScene();


//	right eye
this.cameraVive.setEye('right');
scissor(w2, 0, w2, GL.height);
GL.setMatrices(this.cameraVive);
this._renderScene();


GL.disable(GL.SCISSOR_TEST);

The next step is to present to the VR headset, which they make really simple too (note that requestPresent usually needs to be triggered by a user gesture, such as a click):

vrDisplay.requestPresent([{ source: canvas }])

Then at the end of your render call, add:

vrDisplay.submitFrame();

Then it’s on.

However, there is one more thing to do, but it's a simple one: you'll need to use vrDisplay.requestAnimationFrame instead of window.requestAnimationFrame in order to get the right frame rate.
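Putting these pieces together, the render loop ends up looking roughly like this (a minimal sketch, assuming a render() that draws both eyes with the scissor setup shown above):

function loop() {
    vrDisplay.requestAnimationFrame(loop);  // runs at the headset's refresh rate, e.g. 90fps
    vrDisplay.getFrameData(frameData);      // refresh the view/projection matrices
    render();                               // scissor + render left eye, then right eye
    vrDisplay.submitFrame();                // hand the finished frame to the headset
}
vrDisplay.requestAnimationFrame(loop);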

The WebVR API is really awesome and easy to use. There are a couple of things to check, but I'm pretty sure you can group them all into one tool class. Here is a simple checklist for you:

  • Matrices: view matrix / projection matrix
  • Scissor for stereo rendering
  • VR frame rate
  • Present mode for VR

And don't forget to check out the examples from https://webvr.info/; you'll find everything you need to get started there.

Controls


After rendering, the next step for us was to implement the controls. The interaction in our project is simple: press a button to go to the next step, and press another button to drag the snow particles with your hand. We are using the Gamepad API with WebVR, which is really straightforward. Start with:

navigator.getGamepads();

This returns your gamepads. You might get multiple gamepads, so do a check and grab the one you want. After that, the position and orientation are in gamepad.pose, and the button states are in gamepad.buttons. That's everything you need to create the interactions.
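A small sketch of reading one controller (the variable names are mine, not the project's):

const gamepads = navigator.getGamepads();
for (let i = 0; i < gamepads.length; i++) {
    const gamepad = gamepads[i];
    if (!gamepad || !gamepad.pose) continue;      // skip non-VR gamepads
    const position = gamepad.pose.position;       // Float32Array [x, y, z]
    const orientation = gamepad.pose.orientation; // quaternion [x, y, z, w]
    const pressed = gamepad.buttons.some((b) => b.pressed);
    // use position / orientation / pressed to drive the interaction
}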

 

Summary

It has been a lot of fun to work on this project with friends, and a good challenge too, learning and using the latest WebVR API. Again, as I mentioned, they've made the API so easy to use that I recommend everyone give it a try. I am really surprised by it, and by how little time it took me to convert my old projects to WebVR. If you are interested in the code, it's here: https://github.com/yiwenl/Christmas_Experiment_2016/

So that's it. I hope you enjoyed the read, and I wish you a merry Xmas and a happy new year!

nighteye4

 

P.S. A bit of behind-the-scenes from the commits 😀

commits

Codevember and ray marching

yiwenl.github.io/Codevember/

 

I still can't believe that I made it, but I'm really glad I did. I decided to do this because I felt I had never pushed myself hard enough, and I wanted to challenge myself. It was easier at the beginning, when you have a lot of ideas from the past. Then as time goes by you start to run out of ideas, and that's where the panic starts. I want to say thank you to all my friends who provided me with ideas and inspiration. Every day in this month went like this: finish the experiment of the day just before going to bed, then start thinking about what to do the next day. It was really intense, but it helped me a lot. In order to create work quickly I need to gather tools first, and save more tools while building them. The more tools you have, the quicker you can build.

 

Ray marching

A great part of my Codevember experiments are ray marching. I really like it. It used to be a huge mystery to me and seemed super complicated. I was lucky to come across this live-coding tutorial just before Codevember started.

I'm so glad that my French hasn't completely gone, so I was still able to understand most of it. It's a really wonderful tutorial that guides you step by step to build your first ray-marching experiment. Once you've finished it, you'll start to understand the code on shadertoy.com much better. I also need to mention this amazing blog post by iq. It has all the basic tools you need; with it you are already able to create some amazing stuff.

I really like ray marching. It's really simple: everything happens in one fragment shader. Every effect you need is just one function call, e.g. AO, shadows, lighting (diffuse, specular), spherical reflections, etc. For me it feels much simpler and easier to deal with. Besides, there are already tons of tools on Shadertoy that you can use; all you need to do is figure out what arguments to pass into each function, and most of the time they are really simple.
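If you want a sense of how little code it takes, here is a minimal ray-marching fragment shader, just a sphere with cheap depth shading (a sketch of the technique, not code from my experiments):

precision mediump float;
uniform vec2 uResolution;

// the scene is a signed distance function; here, a single unit sphere
float map(vec3 p) {
    return length(p) - 1.0;
}

void main() {
    vec2 uv = (gl_FragCoord.xy * 2.0 - uResolution) / uResolution.y;
    vec3 ro = vec3(0.0, 0.0, -3.0);         // ray origin (camera)
    vec3 rd = normalize(vec3(uv, 1.5));     // ray direction
    float t = 0.0;
    vec3 colour = vec3(0.0);                // background
    for (int i = 0; i < 64; i++) {          // march: step forward by the distance to the scene
        float d = map(ro + rd * t);
        if (d < 0.001) {                    // close enough: we hit the surface
            colour = vec3(1.0 - t * 0.25);  // cheap depth-based shading
            break;
        }
        t += d;
        if (t > 10.0) break;                // flew past everything
    }
    gl_FragColor = vec4(colour, 1.0);
}

Swap map() for any of iq's distance functions and you already have a little scene.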

Also, here are some other interesting videos related to ray marching, and some useful links:
http://mercury.sexy/hg_sdf/

http://barradeau.com/blog/?p=575

Reasons to be creative 2015

I wasn't planning to write this, but somehow I felt I should write something about it. So here it is, two months after the festival.

Reasons to be creative

This year's RTBC felt really different to me. Not only because it was the first time I gave a full-session talk, but also because of meeting all the amazing people. I felt like a "guest" for the last two years at RTBC, but this time I felt at home. During these three days I felt really relaxed (apart from my talk) and enjoyed all the talks. Most of all, I met a lot of friends and made a lot of new friends as well. It's a strange feeling to finally meet some Twitter friends in real life; it makes things much more real than just messages showing up on Twitter. You guys are totally awesome and I really enjoyed having conversations with you.

I've always enjoyed the talks at RTBC. It just feels different to me; I like the mixture of dev talks and designer talks, and they are all equally inspiring. I always feel motivated after the three days, with tons of ideas I want to make. This year was no exception, and there was one special talk: the one by Stacey Mulcahy. She is an amazing speaker and builds amazing stuff. The thing that touched me the most was the Young Game Makers. It's such a wonderful idea, and it got me thinking about doing something for kids as well.

After becoming a father, I keep thinking about what I can do for my kids. I have some skills; what can I do with them? That's what got me started building all these small experiments. I wish for my children to see me as a maker or a creator, rather than someone just sitting in front of the computer hitting the keyboard all day. I want them to understand that the computer is just a tool to help you build and create, and that we should focus more on the things we build and the idea/story behind them. They might be too young to get it, but I'll keep doing this. Not only do I want to give them this idea, I also enjoy all these moments of building, testing and finally getting the kids to play with the result. When I saw the Young Game Makers project, I was really excited. I saw a possibility that I might be able to bring my work and experiments to more kids and help them build things. I still don't have much idea how to make it happen, but at least now I have a goal.

 

On Stage

It was such a wonderful experience to have my first full session. To be honest, it actually felt much easier than the elevator pitch 😀 Having a full hour gives you more room to make mistakes. I was still nervous to death, though. I rehearsed like crazy from the night before right up to the morning itself, and I actually felt less nervous once I started talking. I have to thank all my friends who gave me advice; you were totally right about everything. Nobody understands the thing you are talking about better than you, and once I started talking it felt just like walking through my process again. The other useful piece of advice I got is that you can never get rid of your nerves, so just accept them instead of fighting them. I found this very useful, and it actually helped me relax before the talk. I know I was still quite nervous on stage, which is why I finished about five minutes early. But that turned out fine too, since it left time for some Q&A. And I have to say I was really, really flattered that one of the questions was about the Kuafu project I started last year. I am so glad that people still remember it, and also embarrassed that I haven't worked on it for a long time. But now I've made it my project for next year: I'll get it to a more presentable state.

001

002

 

The project: Dark forest

I got the idea for this project just after John offered me the chance to speak. I had a quite clear goal when I started: I knew I wanted to learn flocking behaviour, I knew I wanted to make some small installations in my backyard, and I knew I wanted to test projection on the grass. In the end I made it, which is very important to me: I set a goal and achieved it. The result wasn't what I expected at first, though. I didn't know about the synchronised flashing behaviour, and I didn't expect I'd find a way to simulate it. I expected the projection on the grass would look better than it actually does. I have to admit it was all these unexpected successes and failures that made the best memories for me. Looking back at it after two months, I see lots of room for improvement, but I still enjoy this project very much. I now really like setting a goal, working my way towards it, and documenting the process. If you are interested in this project, I've put everything here:

darkforest.bongiovi.tw

I want to say thank you to all the people who helped me on this project. It meant a lot to me!

So, a bit of random stuff, but I'm glad I made it to RTBC this year and met all the amazing people. I'm glad I made the project, and now I can move on to the next one!

Dark Forest – part 2

Here is part 2 of this project, where I started to explore projecting on different materials. My first try was a wall in my backyard. It does look slightly better at a bigger scale, but it's not very interesting. So I moved on to the grass, which creates some very different visuals. I like how it makes the grass shine. However, I don't have the right equipment to hang the projector high enough to cover a larger area, which is a bit of a shame, because I think a larger area would look much better. Also, it's not projecting straight down from the top; the angle sometimes makes the particles look like short lines instead of dots, depending on where you stand as well.

Just as I was trying to move the projector, I accidentally projected the particles onto the trees, and that turned out to be really interesting. It looks very similar to the fireflies I saw in Taiwan. The leaves give it a really different look and serve as a source of randomness in the system. I really like the result. Here is a short video of the experiments I've made:

 

I got another related idea while testing the projection on the grass: to make an interactive version for my kids to play with. The idea is simple: the fireflies gather where you stand. I started a new branch in my code, kept the particles but removed all the trees, and made the camera stay at the front. Then I connected it to a Kinect so I could capture my kids' positions. Here I tried OpenCV with the Kinect for the first time; the performance and accuracy are amazing. I was using the findContours method and it returns very impressive results:

Screenshot 2015-08-12 2.30.17 pm

The next step was to remap the position into the flocking system and then create an attractor force to pull the particles toward that point (a rough sketch of the force is below). I had great fun building this, not only because I was playing with OpenCV and the Kinect, but because my kids' reaction to it was just wonderful. Over the weekend they kept asking if they could play with the fireflies again that night. And after I made it, my daughter just started dancing with the particles; it's one of the best memories of my life. Here is a short video of that night:
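As for the attractor force mentioned above, the idea is roughly this (a hypothetical helper, assuming simple particle objects with pos and vel vectors):

function applyAttractor(particle, target, strength) {
    const dx = target.x - particle.pos.x;
    const dy = target.y - particle.pos.y;
    const dz = target.z - particle.pos.z;
    const dist = Math.sqrt(dx * dx + dy * dy + dz * dz) + 0.0001;
    // normalise the pull and scale it, so the particle drifts toward the target
    particle.vel.x += (dx / dist) * strength;
    particle.vel.y += (dy / dist) * strength;
    particle.vel.z += (dz / dist) * strength;
}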

I also made another test projecting on my chalkboard:

 

Now I'm working on finishing the project. I've started adding the terrain, the trees and the background. Here are some WIP screenshots:

04

05

06

07

08

I am really excited about it and glad to see things finally coming together. I'll keep working on it, and I hope to see you at Reasons to be Creative!

 

Harpapong – Challenge of 400 pixels

A couple of months ago my friend Owen approached me with this project. It builds on his great Harpa Pong work from last year: they turned the facades of the Harpa concert hall in Reykjavík into a huge canvas by putting an LED light in each window. Last year they created a pong game on this enormous canvas that users could play with their phones. This year, during Sónar Reykjavík 2015, they wanted to put audio visualisations of the music from the main stage on it, and Owen asked me if I would like to make one of the visualisations. I was really excited about the idea and said yes right away. And then came the challenge: there are only about 400 pixels per facade. So how big exactly is this canvas? About this big: just that tiny thing in the centre.
02

This is definitely the smallest canvas I have ever worked on. I'm used to creating visuals on a big canvas, but suddenly we had only 400 pixels to make visuals with, which was a whole new challenge for me. At first I tested basic geometries such as lines and rectangles. At the same time I happened to be trying out some ripples for another project, and I wondered: what would happen if I put the ripples on a canvas of this size? Would they still be recognisable? This is the ripple I made:

03

When there's a beat, it triggers a wave. In the fragment shader I add all the waves together, and based on the height of the resulting map I map it to different colours, which I pick randomly from ColourLovers.
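A rough sketch of that fragment shader (GLSL); the uniform names and the two-colour mapping are my assumptions, not the production code:

precision mediump float;
uniform vec2 uResolution;
uniform float uTime;
uniform vec3 uWaves[8];     // xy: wave origin in UV space, z: the time of the beat
uniform vec3 uColorLow;     // palette colours, e.g. picked from ColourLovers
uniform vec3 uColorHigh;

void main() {
    vec2 uv = gl_FragCoord.xy / uResolution;
    float height = 0.0;
    for (int i = 0; i < 8; i++) {
        float d = distance(uv, uWaves[i].xy);
        float age = uTime - uWaves[i].z;
        // an expanding ring per beat, fading with distance and time
        height += sin(d * 40.0 - age * 6.0) * exp(-3.0 * d) * exp(-age);
    }
    vec3 colour = mix(uColorLow, uColorHigh, smoothstep(-1.0, 1.0, height));
    gl_FragColor = vec4(colour, 1.0);
}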

I put it into the platform tool Owen built, and this is what I got. It's hard to recognise the circles, but the movement and the colour changes are really interesting.

04

06

 

So that's my contribution to this project. You can check out the live demo here:

http://www.bongiovi.tw/projects/harpa/

The project page: harpapong.com, and a short film about the project: https://vimeo.com/122900808

Again, thanks Owen for inviting me to this project. I am really proud to be part of it, and I had a great time creating visuals and playing on such a small canvas.

 

Blow: My Christmas Experiment this year

christmasexperiments.com/experiments/8
I was really surprised when I got the invitation from David to create a project for this year's Christmas Experiments. I am a huge fan of them and had always wondered if I could contribute. I cannot express how excited I was when I received the email.

At the time I was working with some particles, so I came up with this idea: blow the particles (sand) away to reveal an image. Here is the first test:
xmas_xperiment_0

I had a lot of fun building this; playing with particles is always my favourite, and it looks cool. However, it looked more like a Chinese painting and I didn't know how to make it feel more like the holidays. Then my friend Bert came up with a design with golden particles and a pink background, and suddenly it became very holiday-like.

xmas

In this experiment I was still using a texture to save the particle positions and performing the calculations in the shader, as in my last post. In total there are 512 x 512 particles, which is exactly the size of the image. I use a black/white image as a map: only the black parts stay, and the white parts fly away. For the reveal, I place a centre at a random location and combine it with Perlin noise to give it a more natural feeling. The last thing is the gold of the particles, which I just sampled from an image, and it works quite well. I think it could be more interesting with a point-light effect, but I ran out of time and it already looked quite good to me, so I didn't try it in the end.
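A hedged sketch of that reveal logic inside the particle-update shader; uMap, uCenter and uProgress are my names, and snoise stands in for a Perlin/simplex noise function:

// black parts of the map stay, white parts are released
float mask   = texture2D(uMap, uv).r;
float noise  = snoise(uv * 5.0) * 0.2;             // breaks up the circular edge
float reveal = step(distance(uv, uCenter) + noise, uProgress);
if (mask > 0.5 && reveal > 0.5) {
    // this particle is released: add the wind force to its velocity here
}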

xmas1

So that's it; that's how I built this experiment. It's simple, but I had a lot of fun building it, especially after a very stressful project when I needed to do something fun to release the pressure. Again, I am very thankful to be part of this, and really proud to stand alongside all the other talented developers. I enjoyed all the experiments and can't wait to see the rest!

WebGL GPU Particle stream

I once blogged about a project where I built an interactive particle stream in Cinder, but I lost the post when I moved to a new webspace. Now I've rebuilt it with WebGL and want to post it again, along with some tips I learned while building it. First things first, the live demo is here:
http://www.bongiovi.tw/projects/particleStream

and the source code is available here:

https://github.com/yiwenl/WebGL_Particle_Stream

 

Saving data in the texture

This is a quite common technique for dealing with a large particle system: save the particle information (such as position and velocity) in a texture and perform the movement calculations on the GPU. Then, when you want to move the particles, you just need to modify this texture. The basic concept is that a pixel has three colour channels (red, green and blue), so we can use these three channels to store x, y and z coordinates: the x, y, z of a particle's position, or the x, y, z of its velocity. The idea is simple, but it needs some work to get right. The first problem is how to map a position to a colour: a position can range from negative to positive, but a colour channel only goes from 0 to 1. To make it work we need to pick a range for the positions, with the zero point at (0.5, 0.5, 0.5): anything smaller than .5 is negative, anything greater than .5 is positive. A simple example that converts a pixel colour to a position in the range -100 to 100:

var range = 100;
position.x = ( color.r - .5 ) * range * 2.0;
position.y = ( color.g - .5 ) * range * 2.0;
position.z = ( color.b - .5 ) * range * 2.0;

And vice versa, you can save a position as a colour like this:

color.r = (position.x/range + 1.0 ) * .5;
color.g = (position.y/range + 1.0 ) * .5;
color.b = (position.z/range + 1.0 ) * .5;

So each pixel of the texture represents a set of x, y, z coordinates; that's how we save the positions of all the particles.

 

Framebuffer

But how exactly can we write our data to a texture? We need a framebuffer. A framebuffer allows your program to render to a texture instead of rendering directly to the screen; it's a very useful tool, especially for post effects. To learn more about framebuffers you can check this post. With a framebuffer we can now save the data to a texture, but here I met the biggest problem of this experiment: precision. Because we are working in colour space, all the numbers are really small; for example the speed of a particle could be only .01, and its acceleration will be even smaller. So when you multiply things together, sometimes the result gets too small and the pixel cannot hold the precision. This happened both in this experiment and in the Cinder project I mentioned. In WebGL, by default (gl.UNSIGNED_BYTE), each colour channel has 8 bits to store its data. In our case that's not enough; luckily there's a solution: use gl.FLOAT instead of gl.UNSIGNED_BYTE, which gives each colour channel 32 bits. To use gl.FLOAT we need one extra step:

gl.getExtension("OES_texture_float"); // enables float textures; returns null if unsupported
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, this.frameBuffer.width, this.frameBuffer.height, 0, gl.RGBA, gl.FLOAT, null);

This enables gl.FLOAT in WebGL and solves our precision problem. Here is a screenshot of what the framebuffer looks like in this experiment: I save the particle positions on the left side of the framebuffer, and the particle velocities on the right.

textureMap

 

Particle movements

The next step is to calculate the movement of the particles, all based on this rule:

new velocity = old velocity + acceleration
new position = old position + velocity

So on the left side of our texture, which holds the particle positions, we just need to fetch each particle's velocity and add it to the current position. Don't forget that the stored velocity is in the 0-1 range, so we need to subtract vec3(.5) from it:

if(vTextureCoord.x < .5) {      //  POSITION
    vec2 coordVel       = vec2(vTextureCoord.x + .5, vTextureCoord.y);   // coordinate of the velocity pixel
    vec3 position       = texture2D(texture, vTextureCoord).rgb;
    vec3 velocity       = texture2D(texture, coordVel).rgb;
    position            += (velocity - vec3(.5)) * velOffset;            // map velocity back to a signed range
    gl_FragColor        = vec4(position, 1.0);                           // write the new position back
}

For the right side (the velocity), I want to add a random force to each particle based on where it is. I found a very useful GLSL noise function here. So the shader code looks like this now:

else { // vTextureCoord.x > .5  :  VELOCITY
    vec2 coordPos       = vec2(vTextureCoord.x - .5, vTextureCoord.y);   // coordinate of the position pixel
    vec3 position       = texture2D(texture, coordPos).rgb;
    vec3 velocity       = texture2D(texture, vTextureCoord).rgb;

    float xAcc          = snoise(position.x, position.y, time);
    float yAcc          = snoise(position.y, position.z, time);
    float zAcc          = snoise(position.z, position.x, time);

    velocity            += vec3(xAcc, yAcc, zAcc);
    gl_FragColor        = vec4(velocity, 1.0);                           // write the new velocity back
}

Here snoise is the noise function, and I pass in time as well so the force keeps changing constantly. This is just roughly how it looks; in real life you need to tweak the values to get a natural-feeling movement. The last thing is that you need two framebuffers and to swap them every frame, so you can always read the result of the last frame and write the update into the other framebuffer:

this.fboTarget.bind();
this._vCal.render( this.fboCurrent.getTexture(), this.fboForce.getTexture() ); // Perform the calculation
this.fboTarget.unbind();

...

var tmp = this.fboTarget;
this.fboTarget = this.fboCurrent;
this.fboCurrent = tmp;

 

Adding interaction

The final step is to add interaction. With the Leap Motion we can easily get the position and velocity of the hands, so we can determine a force from the position of the hand, with its strength given by the length of the hand's velocity. For the direction there are a couple of options. The first is to take the direction of the velocity, which is the most common. However, it can be improved by using the direction of your palm, which the Leap Motion also gives us (hand.palmNormal). This feels better when you make several movements in a row, trying to push the particles to the same place. One final touch is to check the dot product of the hand velocity and the palm normal: if it is smaller than zero, they point in different directions, and we set the strength to zero to avoid weird movements.
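A hedged sketch of that logic with the Leap Motion JS API's hand object:

const v = hand.palmVelocity;                   // [x, y, z] in mm/s
const n = hand.palmNormal;                     // unit vector pointing out of the palm
let strength = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
const dot = v[0] * n[0] + v[1] * n[1] + v[2] * n[2];
if (dot < 0.0) strength = 0.0;                 // palm and motion disagree: drop the force
const force = [n[0] * strength, n[1] * strength, n[2] * strength];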

To apply this force to our particles, we first need to create a force texture like this:

gestureForce2

Again we use colour to represent the force. Back in the shader, when we calculate the velocity of a particle we need to add this force as well, so the shader now looks like this:

else { // vTextureCoord.x > .5  :  VELOCITY
    vec2 coordPos       = vec2(vTextureCoord.x - .5, vTextureCoord.y);   // coordinate of the position pixel
    vec3 position       = texture2D(texture, coordPos).rgb;
    vec3 velocity       = texture2D(texture, vTextureCoord).rgb;

    float xAcc          = snoise(position.x, position.y, time);
    float yAcc          = snoise(position.y, position.z, time);
    float zAcc          = snoise(position.z, position.x, time);

    velocity            += vec3(xAcc, yAcc, zAcc);

    // get the force pixel by the position of the particle
    vec3 forceGesture   = texture2D(textureForce, position.xy).rgb;

    // map the force value to -.5 to .5 and add it to the velocity
    velocity            += forceGesture - vec3(.5);
    gl_FragColor        = vec4(velocity, 1.0);                           // write the new velocity back
}

Summary

So that's how I built this. The concept is not complicated, but there are a lot of small steps to take care of. Also, because everything happens in textures and shaders, it is hard to debug: sometimes you just get a white or black texture and it's hard to tell which step went wrong. But once you get it all working and can push a huge number of particles, that feeling is incredible. It's a really good exercise for learning framebuffers, shaders and particle movement; I learned a lot and had a lot of fun building it.

Here is a short video of the Samsung project I built, if you are curious how it looks in motion: https://vimeo.com/92043935

 

Substrate Cube

I am still amazed by Jared Tarbell's work every time I go back to his site, even though it was created ten years ago. I tried to recreate his Substrate in Flash years ago, and it was so much fun to build.

Last week I went back to his Substrate again and wanted to recreate it in JavaScript. I hadn't done any generative coding for a while, and it felt so good to pick it up again. I really like the feeling of setting up some rules and just letting the code run; every time you get an unexpected result and are amazed by it. For this substrate experiment, the rules are simple:

1. Start a line and move it forward.
2. When it hits the edge of the canvas or another line, stop.
3. If the line is longer than the required minimum length, spawn 2 more lines from it.
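In code, a toy version of these rules might look like this (hypothetical names, not the original implementation):

const W = 800, H = 600, MIN_LENGTH = 20;
const grid = new Uint8Array(W * H);              // records which pixels are taken
const lines = [{ x: W / 2, y: H / 2, angle: 0, length: 0, dead: false }];

function spawnFrom(parent, angle) {
    lines.push({ x: parent.x, y: parent.y, angle: angle, length: 0, dead: false });
}

function step(line) {
    line.x += Math.cos(line.angle);              // rule 1: move forward
    line.y += Math.sin(line.angle);
    const ix = line.x | 0, iy = line.y | 0;
    const hit = ix < 0 || iy < 0 || ix >= W || iy >= H || grid[iy * W + ix];
    if (hit) {                                   // rule 2: stop on an edge or another line
        line.dead = true;
        if (line.length > MIN_LENGTH) {          // rule 3: spawn 2 perpendicular lines
            spawnFrom(line, line.angle + Math.PI / 2);   // (from the endpoint, for simplicity)
            spawnFrom(line, line.angle - Math.PI / 2);
        }
    } else {
        grid[iy * W + ix] = 1;
        line.length++;
    }
}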

It's just that simple, and it creates such an amazing result. Of course there are a few extra bits to make it look better, but this is the basic idea. Here is the JavaScript version I created:

substrate_0014_Screen Shot 2014-09-13 at 16.16.39

http://www.bongiovi.tw/experiments/substrate/

A few things about this experiment. First, the size of the canvas: I doubled it and then scaled it back down to normal, which gets you a lot of detail and makes things feel less pixelated, especially the shadows. Second, I draw by directly modifying the image data of the canvas: find the position of the pixel you want to modify in the big image data array, change it, and call context.putImageData. A performance tip: calling putImageData every time you change a pixel is super heavy, and every frame I need to update a lot of pixels. The better way is to hold off on putImageData until you've updated all the pixels you want to change, then call it just once per frame.
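A minimal sketch of that batching, assuming a 2D canvas context ctx:

const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
const data = imageData.data;                     // flat RGBA byte array, row-major

function setPixel(x, y, r, g, b) {
    const i = (y * canvas.width + x) * 4;
    data[i] = r; data[i + 1] = g; data[i + 2] = b; data[i + 3] = 255;
}

// ...call setPixel as many times as you need for this frame, then flush once:
ctx.putImageData(imageData, 0, 0);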

 

Put it on a cube

After I was done with this, I got the idea to put it on a cube. I imagined it would be quite interesting to watch the lines march over the edges from one side to another. So I started by creating a texture like this:

substrate02_texture

It looks simple, but it was actually quite a challenge to figure out, when a line hits an edge, where it needs to reappear and what its new direction on the texture should be. I'm really glad I sorted it all out eventually. Here is the result:

substrate_0004_Screen Shot 2014-09-15 at 11.06.10

http://www.bongiovi.tw/experiments/substrate3d/

I’ve also put some screenshots here.

 

Summary

I had so much fun building this. It reminds me of Mario Klingemann's (Quasimondo's) talk at RTBC this year. I really liked one thing he said: when you create something, most of the time you will find somebody has already done it before, sometimes a really, really long time ago. But it doesn't matter; the important thing is the process, because you will always find new inspiration while building it or solving the problems. For me, that's where new ideas begin. I feel sad when I work with people trying to find new ideas and, whenever an idea is brought to the table, someone just says: "It has been done before." I'm not a thinker; it's hard for me to just "think up" something new. I need to build something and start from there, and after trying to add new things or improve it a number of times, I might find a new idea. This is how it works for me; if you ask me to think up a new concept, I will never find it by thinking alone.

That's why I like to go back to these sites. They are old indeed, but to me they are timeless and amazing; almost every time they give me new ideas. So if you haven't tried it, I encourage you to do it on your own. You will have a lot of fun during the process, and you will enjoy every unexpected result it brings you.

Chinese calligraphy in 3D and Reasons to be creative

I've been playing with Chinese brushes for a while now; besides the ink drops I created and used to create mountains, I've created these strokes too.

03_stroke

My favourite part of these strokes is the gradient, which I actually created by accident. I found some amazing work by 張大千 (Zhang Daqian), with a lot of beautiful gradients in it. I wondered how to do it, and it turns out to be less complicated than I thought. Of course there were a lot of trials and errors; I threw out a lot of failed tests before I got this. The trick is simple: load your brush with only coloured ink, and when you are about to draw, just dip some black ink on the tip of the brush. There you go, really easy, and you get a very nice gradient. I had a lot of fun trying different proportions of coloured ink, black ink and water. In the end it was very hard to stop, and to pick the one to use in my code, because each one is unique and has its own character. I think this is one of the reasons I like creating these ink textures so much. It's a similar process to doing generative art: you have a few controls, you just let it run and enjoy the result, then you go back to tweak the controls again and try to discover new controls or new settings. I've done this a lot in code, but it was the first time I did it outside the computer, and I really enjoyed it.

 

Bringing it to code

After I created these strokes, I didn't really know what I was going to do with them. They look amazing, but I had no idea how to use them. Until one day I was building prototypes for a project and doing some exploration, and one of the ideas was to create ribbons. Then it struck me: how about putting the textures on the ribbons? That's how this started. The moment I put them on the ribbons, they felt like a perfect fit, and it really gives the feeling of real brushes. I showed it to my colleagues and they all loved it. So I started adding some decoration: a texture background, some ink drops, and lastly a video layer overlaid on the textures, which makes the texture move constantly. It's just a small touch, but it makes it feel different.

Calligraphy_01

An online demo can be found here.

 

Drawing a smooth ribbon in 3D space

One problem I had while building these ribbons is that they sometimes twist in 3D space.

Calligraphy_05

The one on the left is twisted: you can see the normals (the purple lines) are flipped to the other side. Luckily there is a solution for this called parallel transport frames. I was building this in Cinder and it's already part of the framework, so it's very straightforward to use; you can check the Tubular sample in Cinder. Using this generates a smooth ribbon (the image on the right), where you can see the normals all stay on the same side.
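The idea behind parallel transport frames is to carry the previous frame's normal along the curve, rotating it only by the minimal rotation between consecutive tangents. A rough JavaScript sketch, assuming gl-matrix (the Cinder version does this for you):

import { vec3, quat } from "gl-matrix";

function parallelTransportNormals(points) {
    // arbitrary starting normal (project it perpendicular to the first tangent in practice)
    const normals = [vec3.fromValues(0, 1, 0)];
    for (let i = 1; i < points.length - 1; i++) {
        const t0 = vec3.normalize(vec3.create(), vec3.sub(vec3.create(), points[i], points[i - 1]));
        const t1 = vec3.normalize(vec3.create(), vec3.sub(vec3.create(), points[i + 1], points[i]));
        // minimal rotation that maps the previous tangent onto the next one
        const q = quat.rotationTo(quat.create(), t0, t1);
        normals.push(vec3.transformQuat(vec3.create(), normals[i - 1], q));
    }
    return normals;                              // one normal per segment, no flipping
}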

 

Another theme, another world

Two years ago, when I went back to Taiwan, I visited the national museum and found something really fascinating that has haunted my mind for years: this old book. The golden characters are just so beautiful to me, and so is the dark blue background. Ever since, I have always wanted to find a place to try out these colours.

Calligraphy_02

After I built the prototype, this idea came to my mind again, so I put a golden colour on the strokes with a dark blue background to give it a test. Surprisingly, it works! I kept the gradient of the strokes by turning them into greyscale and then overlaying the golden colour on top. The video layer helps a lot here as well.

Calligraphy_03

It looked a little flat, so I added some random shadows around it. I put this up in my backyard with a projector for my kids, and they love it:

I actually made a couple more prototypes based on this, but they are built in C++ with Cinder. I've put my source code on GitHub, which you can find here. It includes two versions in Cinder (one with the Leap Motion, the other for the projection table I made, with a Kinect) and a web version built with WebGL. I didn't have much time to document the code in detail, so if you have any questions please send me an email. Also, the video texture I use is too big to upload, so you may need to find one yourself or remove it from the code.

 

Reasons to be creative

One of my resolutions this year was to give a speech on stage, so when I saw that Reasons to be Creative was looking for elevator pitchers, I didn't hesitate long before sending in my proposal. I was thinking of giving a talk about my DevArt project together with my friend, but John (Davey) of RTBC replied that this wasn't possible: the elevator pitch has to be solo. He was very kind, though, and said he liked both of our works and offered us one pitch each. So we decided that my friend Bertrand would still present our Kuafu project, and I would talk about these ink experiments I made (the strokes and ink drops).

It was my first time stepping on a stage, and I was scared to death. I am really glad I didn't do it alone; Bert and I practiced a lot in our hotel room and timed ourselves, so we had a good sense of our timing. The crew from RTBC (Chris and Andy) helped a lot as well; they let us know all the details we needed to take care of and were always cheering for us. We rehearsed a couple of times, and in the end it all went well on stage. It is such a great experience that I won't ever forget. I encourage you to do the same if you haven't done it before; it's scary, but it's also a lot of fun! I am really glad I got to meet all the other elevator pitchers too; they are all very talented and amazing.

Calligraphy_04

I was at RTBC last year, but I was working on the first day and got called back on the last day, so this year was the first time I attended the whole event. I've never been to another event like this, and it was totally mind-blowing; I enjoyed almost every talk I went to. Pretty much every talk implied one thing: work on your own stuff, don't wait for things to happen; you might get a chance to use one of your old experiments someday. I think this is the biggest thing I took from these three days: keep doing the stuff you like and enjoy the process. I feel full of energy and motivation to work on my own stuff after this event. Now I'm just trying to find a way to keep that motivation from being destroyed by daily life.

 

Summary

I am really glad I gave this three-minute talk, and I really enjoyed these three days. I've been working by myself for a long time, and it's really great to see that there are people who like my work; that gives me a lot of motivation to keep going. I hope I'll get more chances to give talks in the future, and I'll keep working my way towards it. But the most important thing of all is to enjoy it and have fun. That I'll keep in mind as I create more stuff.