
Case study – Night Eye

Here comes my annual blog post 😛 I should write more posts, but I keep getting too busy/lazy.

Recently I was invited to take part in this year's Christmas Experiments. Along with my colleague Clement and my friend Bertrand, we came up with the idea of using abstract lines to recreate the shapes of animals in the forest.

You can check out the experiment here:
http://christmasexperiments.com/2016/01/night-eye/

Also, if you happen to have an HTC Vive, give it a try in the latest Chromium build; it's also a WebVR project.
In this project Clement took care of the line animation while I focused on the environment and the VR part. The following section is a case study of my part.

 

Initial Designs

Here are some pictures of the initial designs with different colour themes:

nighteye1

nighteye2

 

Reflection Matrix

The idea started with my experiments with reflections. I've always wanted to understand how to create a proper reflection, and I failed so many times. But then I found a really good tutorial on YouTube that walks through the process step by step. I highly recommend having a look if you are interested in implementing reflections yourself. The tutorial is in Java, but it covers all the concepts and explains them clearly, plus the shaders don't change (too much).

https://www.youtube.com/playlist?list=PLRIWtICgwaX23jiqVByUs0bqhnalNTNZh
The only problem I had following this tutorial is the clipping plane, which I don't think WebGL supports (please correct me if I am wrong), so I ended up just using discard to do the simplest clipping. I also found another really good presentation about rendering reflections in WebGL; it mentions other ways to clip, so you could have a look:
https://29a.ch/slides/2012/webglwater/
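For reference, here is a minimal sketch of that discard trick; the uWaterLevel uniform and the world-space varying are assumed names, not the project's actual code:

// fragment shader of the reflection pass
// discard everything below the reflection plane, the simplest form of clipping
uniform float uWaterLevel;
varying vec3 vWorldPosition;

void main(void) {
    if (vWorldPosition.y < uWaterLevel) {
        discard;
    }
    gl_FragColor = vec4(1.0);   // your actual shading goes here
}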

 

Editor


In order to get the best positions and the right angles for the animals, we created a simple editor to place the animals and tweak the camera angles. It took a little extra time to build, but it saved us a lot of time tweaking; it's always easier when you can visualise your settings live. After we had selected the positions and camera angles in the editor, we just exported a big JSON file to the project and it was done.
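The exact JSON structure below is hypothetical, just to give an idea of what the editor produced, roughly one entry per animal scene:

{
    "fox": {
        "position": [2.5, 0.0, -4.2],
        "rotationY": 1.57,
        "camera": {
            "position": [0.0, 1.6, 3.0],
            "target": [2.5, 1.0, -4.2]
        }
    }
}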

 

WebVR

In this project we wanted to try the latest WebVR API, which is really amazing! They make it really simple to implement. The first step is to get the VRDisplay and set up the frame data holder:

navigator.getVRDisplays().then(function(displays) {
    vrDisplay = displays[0];    // getVRDisplays() returns a promise of an array of displays
});
frameData = new VRFrameData();

Then, in the render loop, you can get the data with:

vrDisplay.getFrameData(frameData);

Rendering

The rendering becomes really simple too: WebVR returns the view matrix and the projection matrix of both eyes to you.

setEye(mDir) {
    this._projection = this._frameData[`${mDir}ProjectionMatrix`];
    this._matrix = this._frameData[`${mDir}ViewMatrix`];
}

You can just pass them into your shader and you are ready to go. No need to set up the eye separation, no need to calculate the projection matrices; it's as simple as that. And the code becomes really clean too: set the scissoring, set the camera, render, done.

GL.enable(GL.SCISSOR_TEST);
const w2 = GL.width/2;

//	get VR data
this.cameraVive.updateCamera(frameData);

//	left eye
this.cameraVive.setEye('left');
scissor(0, 0, w2, GL.height);
GL.setMatrices(this.cameraVive);
this._renderScene();


//	right eye
this.cameraVive.setEye('right');
scissor(w2, 0, w2, GL.height);
GL.setMatrices(this.cameraVive);
this._renderScene();


GL.disable(GL.SCISSOR_TEST);

The next step is to present to the VR headset, which they have made really simple too:

vrDisplay.requestPresent([{ source: canvas }])

Then at the end of your render call, add:

vrDisplay.submitFrame();

Then it’s on.

However, there is one more thing to do, though a simple one: you'll need to use vrDisplay.requestAnimationFrame instead of window.requestAnimationFrame in order to get the right frame rate.
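Putting the pieces together, here is a minimal sketch of the whole flow; note that requestPresent has to be called from a user gesture such as a click, and the render function is an assumption for the example:

// present mode has to be requested from a user gesture, e.g. a click
canvas.addEventListener('click', function() {
    vrDisplay.requestPresent([{ source: canvas }]);
});

function loop() {
    vrDisplay.requestAnimationFrame(loop);  // the display's rAF runs at the headset's frame rate
    vrDisplay.getFrameData(frameData);      // refresh the per-eye matrices
    render();                               // the stereo render with the scissor setup above
    vrDisplay.submitFrame();                // push the rendered frame to the headset
}
vrDisplay.requestAnimationFrame(loop);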

The WebVR API is really awesome and easy to use. There are a couple of things to check, but I'm pretty sure you can group them all into one tool class. Here is a simple checklist for you:

  • Matrices : View matrix / Projection Matrix
  • Scissor for Stereo Rendering
  • VR frame rate
  • Present mode for VR

And don't forget to check out the examples from https://webvr.info/, where you'll find everything you need to get started.

Controls


After rendering, the next step for us was to implement the controls. The interaction in our project is simple: press a button to go to the next step, and press another button to drag the snow particles with your hand. We are using the Gamepad API together with WebVR, which is really straightforward. Start with:

navigator.getGamepads();

to get your gamepads. You might get multiple gamepads, so do a check to find the one you want. After that, the position and orientation are in gamepad.pose, and the button states are in gamepad.buttons. That's everything you need to create the interactions.
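As a minimal sketch, that check could look like this; filtering on gamepad.pose is an assumption that works for VR controllers such as the Vive wands:

const gamepads = navigator.getGamepads();
for (let i = 0; i < gamepads.length; i++) {
    const gamepad = gamepads[i];
    // VR controllers expose a pose, regular gamepads don't
    if (gamepad && gamepad.pose) {
        const position    = gamepad.pose.position;      // Float32Array [x, y, z]
        const orientation = gamepad.pose.orientation;   // quaternion [x, y, z, w]
        const pressed     = gamepad.buttons[0].pressed; // first button state
    }
}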

 

Summary

It has been a lot of fun to work on this project with friends, and a good challenge too, learning and using the latest WebVR API. Again, like I mentioned, they've made the API so easy to use that I recommend everyone give it a try. I am really surprised by it, and also by how little time it took me to convert my old projects to WebVR. If you are interested in the code, it's here: https://github.com/yiwenl/Christmas_Experiment_2016/

So that's it. I hope you enjoyed the read, and I wish you a merry Xmas and a happy new year!

nighteye4

 

P.S. Some behind the scenes for the commits 😀

commits

Blow – My Christmas Experiment this year

christmasexperiments.com/experiments/8
I was really surprised when I got the invitation from David to create a project for this year's Christmas Experiments. I am a huge fan of them and have always wondered if I could make my own contribution. I cannot express how excited I was when I received the email.

At that time I was working with some particles, so I came up with this idea: blow the particles (sand) away to reveal the image. Here is the first test:
xmas_xperiment_0

I had a lot of fun building this; playing with particles is always my favourite, and it looks cool. However, it looked more like a Chinese painting, and I didn't know how to make it feel more like the holidays. Then my friend Bert came up with this design with golden particles and a pink background, and suddenly it became very holiday-like.

xmas

In this experiment I was still using a texture to save the particle positions and performing the calculations in the shader, as in my last post. In total there are 512 x 512 particles, which is exactly the size of the image. I use a black/white image as a map: only the black part will stay, and the white part will fly away. For the reveal, I put a center in a random place and combined it with Perlin noise to give it a more natural feeling. The last thing is the gold particles, which I just took from an image, and it works quite well. I think it could be more interesting with some point-light effect, but I ran out of time, and it already looked quite good to me, so I didn't try it in the end.
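The actual shader is more elaborate, but the reveal logic was roughly like this sketch; uCenter, uTime and textureMap are assumed names, and snoise is a noise function like the one in my particle stream post:

uniform sampler2D texture;      // particle position texture
uniform sampler2D textureMap;   // black/white map: black stays, white flies away
uniform vec2 uCenter;           // the random reveal center
uniform float uTime;            // drives the growing reveal front
varying vec2 vTextureCoord;

void main(void) {
    vec3 position  = texture2D(texture, vTextureCoord).rgb;
    float mapValue = texture2D(textureMap, vTextureCoord).r;

    // the reveal front grows from the center, distorted by noise for a natural edge
    float front = uTime + snoise(vTextureCoord.x, vTextureCoord.y, uTime) * .2;

    if (mapValue > .5 && distance(vTextureCoord, uCenter) < front) {
        position.x += .01;      // the white parts drift away; the real force was more elaborate
    }
    gl_FragColor = vec4(position, 1.0);
}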

xmas1

So that's it, that's how I built this experiment. It's simple, but I had a lot of fun building it; especially after a very stressful project, I felt I needed to do something fun to release the pressure. Again, I am very thankful to be part of this and really proud to stand alongside all the other talented developers. I enjoy all the experiments and can't wait to see the rest!

WebGL GPU Particle stream

I once blogged about a project where I built an interactive particle stream in Cinder, but I lost the post when I moved to a new webspace. Now I have rebuilt it with WebGL and want to post it again, along with some tips I learned while building it. First things first, the live demo is here:
http://www.bongiovi.tw/projects/particleStream

And the source code is available here:

https://github.com/yiwenl/WebGL_Particle_Stream

 

Saving data in the texture

This is a quite common technique when dealing with a large particle system: save the information of the particles in a texture (such as the particle positions and particle velocities) and perform the movement calculations on the GPU. Then, when you want to move the particles, you just need to modify this texture. The basic concept is that a pixel contains 3 color channels, red, green and blue, so we can use these 3 channels to save x, y and z coordinates. It could be the x, y, z of a particle's position or the x, y, z of a particle's velocity.

The idea is simple, but it needs some work to make it function. The first problem is how to map a position to a color: a position could be anything from negative to positive, but the range of a color channel is only from 0 to 1. To make it work we need to set a range for the positions, with the zero point at (0.5, 0.5, 0.5): anything smaller than .5 is negative, anything greater than .5 is positive. Here is a simple example that converts a pixel color to a position in the range from -100 to 100:

var range = 100;
position.x = ( color.r - .5 ) * range * 2.0;
position.y = ( color.g - .5 ) * range * 2.0;
position.z = ( color.b - .5 ) * range * 2.0;

And vice versa, you can save a position to a color like this:

color.r = (position.x/range + 1.0 ) * .5;
color.g = (position.y/range + 1.0 ) * .5;
color.b = (position.z/range + 1.0 ) * .5;

So each pixel on the texture represents a set of x, y, z coordinates; that's how we save the positions of all the particles.

 

Framebuffer

But how exactly can we write our data to a texture? We need to use a framebuffer. A framebuffer allows your program to render to a texture instead of rendering directly to the screen. It's a very useful tool, especially when dealing with post effects; to learn more about framebuffers you can check this post.

With a framebuffer we can now save the data to a texture, but here I met the biggest problem of this experiment: precision. Because we are working in color space, all the numbers are really small; for example, the speed of a particle could be only .01, and its acceleration even smaller. So when you multiply things together, sometimes the result gets too small and the pixel cannot hold the precision. This happened both in this experiment and in the Cinder project I mentioned. In WebGL, by default (gl.UNSIGNED_BYTE), each color channel has 8 bits to store its data, which in our case is not enough. Luckily there's a solution: using gl.FLOAT instead of gl.UNSIGNED_BYTE, which gives each color channel 32 bits to store its data. In order to use gl.FLOAT we need one extra step:

gl.getExtension("OES_texture_float");   // enable the float texture extension first; getExtension returns null if unsupported
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, this.frameBuffer.width, this.frameBuffer.height, 0, gl.RGBA, gl.FLOAT, null);

This enables gl.FLOAT in WebGL and solves our precision problem. Here is a screenshot of what the framebuffer looks like in this experiment: I save the positions of the particles on the left side of the framebuffer, and the velocities of the particles on the right.

textureMap

 

Particle movements

The next step is to calculate the movement of the particles. It is all based on this rule:

new velocity = old velocity + acceleration
new position = old position + velocity

So, for the left side of our texture, which holds the particle positions, we just need to fetch each particle's velocity and add it to its current position. Don't forget that the velocity is stored in the 0-1 range, so we need to subtract vec3(.5) from it:

if(vTextureCoord.x < .5) {      //  POSITION
    vec2 coordVel       = vec2(vTextureCoord.x + .5, vTextureCoord.y);   // get the coordinate of the velocity pixel
    vec3 position       = texture2D(texture, vTextureCoord).rgb;         
    vec3 velocity       = texture2D(texture, coordVel).rgb;              
    position            += (velocity - vec3(.5) ) * velOffset;       
}

For the right side (which is the velocity), I want to add a random force to each particle based on where it is. I found a very useful GLSL noise function here. So the shader code now looks like this:

else { // vTextureCoord.x > .5
    vec2 coordPos       = vec2(vTextureCoord.x - .5, vTextureCoord.y);   // get the coordinate of the position pixel
    vec3 position       = texture2D(texture, coordPos).rgb;
    vec3 velocity       = texture2D(texture, vTextureCoord).rgb;

    float xAcc          = snoise(position.x, position.y, time);
    float yAcc          = snoise(position.y, position.z, time);
    float zAcc          = snoise(position.z, position.x, time);

    velocity            += vec3(xAcc, yAcc, zAcc);
}

where snoise is the noise function; I pass in time as well so it keeps changing constantly. This is just roughly how it looks; in real life you need to tweak the values to get a natural feeling of movement. The last thing is that you need to prepare 2 framebuffers and swap them every frame, so you can always read the result of the last frame and write the update to the other framebuffer:

this.fboTarget.bind();
this._vCal.render( this.fboCurrent.getTexture(), this.fboForce.getTexture() ); // Perform the calculation
this.fboTarget.unbind();

...

// swap the two framebuffers, so this frame's output becomes next frame's input
var tmp = this.fboTarget;
this.fboTarget = this.fboCurrent;
this.fboCurrent = tmp;

 

Adding interaction

The final step is to add interaction. With the Leap Motion we can easily get the position and velocity of the hands, so we can easily define a force: its position comes from the position of the hand, and its strength is determined by the length of the hand's velocity. As for the direction, there are a couple of options. The first is to take the direction of the velocity, which is the most common one. However, it can be improved by using the direction of your palm, which the Leap Motion is able to give us (hand.palmNormal). This makes it feel better when you do several movements in a row, trying to push the particles to the same place. One final touch is to check the dot product of the hand velocity and the palmNormal: if it is smaller than zero, meaning they point in different directions, we set the strength to zero to avoid weird movements.
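Here is a rough sketch of that force calculation, using the hand object from the Leap Motion JS API (palmPosition, palmVelocity and palmNormal are real fields; the returned structure is just an illustration):

function getHandForce(hand) {
    const velocity = hand.palmVelocity;  // [x, y, z] in mm/s
    const normal   = hand.palmNormal;    // unit vector pointing out of the palm
    let strength = Math.sqrt(velocity[0] * velocity[0]
                 + velocity[1] * velocity[1]
                 + velocity[2] * velocity[2]);

    // if the palm faces away from the movement, cancel the force to avoid weird movements
    const dot = velocity[0] * normal[0] + velocity[1] * normal[1] + velocity[2] * normal[2];
    if (dot < 0) strength = 0;

    return {
        position: hand.palmPosition,     // where the force is applied
        direction: normal,               // push along the palm normal
        strength: strength
    };
}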

To apply this force to our particles, first we need to create a force texture like this:

gestureForce2

Again we use color to represent the force. Back in the shader, when we calculate the velocity of a particle we need to add this force as well, so the shader now looks like this:

else { // vTextureCoord.x > .5
    vec2 coordPos       = vec2(vTextureCoord.x - .5, vTextureCoord.y);   // get the coordinate of the position pixel
    vec3 position       = texture2D(texture, coordPos).rgb;
    vec3 velocity       = texture2D(texture, vTextureCoord).rgb;

    float xAcc          = snoise(position.x, position.y, time);
    float yAcc          = snoise(position.y, position.z, time);
    float zAcc          = snoise(position.z, position.x, time);
    
    velocity            += vec3(xAcc, yAcc, zAcc);

    // get the force pixel by the position of the particle
    vec3 forceGesture   = texture2D(textureForce, position.xy).rgb;   

    // map the force value to -.5 to .5 and add it to velocity   
    velocity            += forceGesture - vec3(.5);                      
}

Summary

So that's how I built it. The concept is not complicated, but there are a lot of small steps to take care of. Also, because everything happens in textures and shaders, it's hard to debug; sometimes you just get a white or black texture, and it's hard to tell which step went wrong. But once you get it all working and can push a huge amount of particles, that feeling is incredible. It's really good practice for learning framebuffers, shaders and particle movement; I learned a lot and had a lot of fun building it.

Here is a short video of the Samsung project I built, if you are curious how it looks in motion: https://vimeo.com/92043935

 

WebGL – Depth texture

It took me a while, but I finally got the depth texture working. Basically I just followed the instructions from this article and it worked. However, I encountered some problems while trying to make it work:

After trying the technique in the article I mentioned, the first result I got was a totally white screen. I thought I had made an error and nothing was being rendered, but after further research I found somebody who had the same situation: it turned out the depth texture was actually working, it's just that the values were very close to the maximum, so it looked all white. So I made a simple test, changing the actual white (1.0, 1.0, 1.0 in the shader) to red; on the left is the all-white screen, on the right is the version with the actual white dyed red, where you can see the shape of the mountains. So the code was actually working; I just needed a way to display it correctly. I went back to search some more and found this way to show the depth texture:

depthTexutre0

void main(void) {
	float n = 1.0;                                        // near plane
	float f = 2000.0;                                     // far plane
	float z = texture2D(uSampler0, vTextureCoord.st).x;   // non-linear depth value
	float grey = (2.0 * n) / (f + n - z*(f-n));           // linearise the depth for display
	vec4 color = vec4(grey, grey, grey, 1.0);
	gl_FragColor = color;
}

n is the nearest depth and f is the farthest depth. After applying this method, the depth texture looks much more normal:

depthtexure0

But when I switched back to my render, there was something wrong:
depthtexure1
You can see the depth is rendered weirdly; it looks more like layers. Another thing I discovered is that the depth values I got were extremely close to 1 ( > .995), and my depth texture looked layered too, not smooth like the one in the article. So I went back to search again, and found out that I didn't have the right setup for zNear and zFar: a too-small zNear or a too-big zFar results in a very wide range of depths, which limits the precision of the depth buffer. At that point my zNear and zFar were .1/10000. So I scaled everything down and brought the camera closer so everything looks the same size, but zNear and zFar became 1/2000. This time everything works: the depth texture looks perfect, and the layer effect above is gone as well.

depthtexure2

So I finally got it working. I put up a little depth-of-field effect using the depth texture here; there is a checkbox so you can see what the depth texture looks like. It was really a long way to finally get there, mainly because there isn't much material on this topic; you will find more articles if you search for opengl depth texture instead of webgl depth texture. Here is a very good link that tells you what you want to know about depth textures. Again, all the code you need is in this article; enjoy the depth texture, and now you can create some more advanced effects.

Chinese style mountains

A little experiment done last weekend. I got the idea on my way home Friday night and couldn't wait to build it.

The idea is simple: record an ink-drop video and use it as a texture to create a mountain. In the end I just used static images instead of video because I hadn't had time to edit it, but these images already work really well, better than I originally imagined. And I enjoy creating things from the real world; the beauty of it is that it's different every time and you can't predict what's going to happen. All you can do is put some water on the paper, let the ink drop, and just wait and watch the interesting shapes form themselves. You could definitely create these textures in Photoshop, but recording the real thing makes it feel more “alive” to me, with more surprises as well.

The code part is simple: the shape of the mountain is created using a sine curve plus some Perlin noise. Here is the demo link:

http://www.bongiovi.tw/experiments/webgl/mountains/
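The ridge generation is roughly like this sketch; the noise function is assumed to come from a Perlin noise library:

// mountain silhouette: one big sine curve for the overall shape, noise for the detail
const points = [];
const NUM_POINTS = 200;
for (let i = 0; i < NUM_POINTS; i++) {
    const x = i / NUM_POINTS;
    const y = Math.sin(x * Math.PI) * 100   // base arch of the mountain
            + noise(x * 3.0) * 40           // large variations
            + noise(x * 12.0) * 10;         // small details
    points.push([x * 1000, y]);
}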

Star Canvas – Case study

Last Thursday was the 5th anniversary of B-Reel London, and we had a party that night along with a couple of our R&D projects. I worked on this one, “Five”, which was based on one of my old prototypes:

http://blog.bongiovi.tw/leap-motion-constellations/

The idea is to show all the constellations in the sky, and we also created a drawing tool so people can make their own constellation and send it to the sky. We found a way to build a dome and project inside it, and we did it. It's my first installation, really fun and a lot of learning, and I have to say thanks to my colleagues who made this possible. I want to write a little bit more about this project; I think B-Reel will put together a beautifully edited making-of video, so in this article I will focus on the tech/dev part.

The installation

Here is a diagram of the basic structure:

su_structure

 

Our developer Liam built the communication using node.js as the backend server for this project. Apart from the projection, we have 2 drawing tools running on iPads that allow users to create their own constellations, plus a virtual keyboard on an iPad for people to search for the constellations they created. We didn't want to use a real keyboard because people could mis-trigger other applications, so we created this keyboard and limited its usage to this project.

The projection is built using WebGL. At the beginning I considered using Cinder for this project; however, due to the limited time, and because I had already built a working prototype in WebGL, I chose WebGL in the end. I wasn't too sure when I chose it: I didn't know if the performance would be good enough, or if it would be stable enough to run all night with a lot of communication going on. But it turned out to work quite well; I thought I might need to restart the whole thing a couple of times during the party, but in the end I didn't have to. I'm really happy with it and feel more confident now about using WebGL for installations. Also, I'm not using any WebGL libraries, just some tool classes I created, which is another big achievement for me as well.

The sound is another big part, done by our sound master Owen. All I do is send events based on the hand gestures and the selected constellation to the node server, which re-sends them to Pure Data to generate the sound effects. When a constellation is selected, I calculate the number of lines in the constellation and the max/min/avg distance of these lines and send them out, and Owen creates a different melody based on this information, so each constellation has its own unique melody. I would really like to bring this feature to the web version, but I need to learn from master Owen first 😀

For the drawing tool, we capture the user's strokes and then simplify them to fewer dots to create the “constellation-like” effect; we then insert the generated constellation into the database and at the same time send a signal via the node server to tell the projection to update the display.
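The simplification can be as simple as a distance threshold; this sketch is an illustration of the idea, not the exact implementation we used:

// keep a stroke point only if it is far enough from the last kept point
function simplifyStroke(points, minDistance) {
    const simplified = [points[0]];
    for (let i = 1; i < points.length; i++) {
        const last = simplified[simplified.length - 1];
        const dx = points[i].x - last.x;
        const dy = points[i].y - last.y;
        if (Math.sqrt(dx * dx + dy * dy) > minDistance) {
            simplified.push(points[i]);
        }
    }
    return simplified;
}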

And the keyboard simply transmits which key was pressed to the projection, very straightforward.

 

The dome

We discovered this article teaching people how to build a dome using just cardboard, and the best part is that it already has all the details of the triangles you need, including the plan and sizes. Once I saw it, I couldn't help but want to build one myself. My wife and I started right away on a small-scale one to test; here are some pictures:

It's really easy to build; even the actual dome we built is not that complicated. The problem, however, is how to support it: the geometry itself is flexible, so in order to keep it in a perfect shape, we used a lot of fishing line attached to almost every corner to hold the shape. This was actually the most difficult part; building the dome itself was much easier.

IMG_5262

 

The other challenge was how to project onto this dome. In the link they use a hemispherical mirror, which we did try; however, the result was really not acceptable. The first problem is that, when reflected onto the dome, the image loses a lot of detail and becomes very pixelated. The second is that the distortion is really hard to correct. For these 2 reasons we gave up on the hemispherical mirror. Then I tried projecting directly onto the dome and correcting the distortion in code, but we found it actually looked better left as it was, without any correction. Maybe this is because of the nature of this project: everything is already on a sphere, so there is no need to correct it further. All I needed to do was create a gradient circle mask to mask out the part outside the dome.
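The mask itself is just a radial falloff in a fragment shader, something like this sketch:

// gradient circle mask: fade out everything outside the dome area
uniform sampler2D uSampler;
varying vec2 vTextureCoord;

void main(void) {
    float dist = distance(vTextureCoord, vec2(.5));   // distance from the center of the screen
    float mask = 1.0 - smoothstep(.4, .5, dist);      // soft edge between radius .4 and .5
    gl_FragColor = texture2D(uSampler, vTextureCoord) * mask;
}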

IMG_3505

 

The constellations

This project is not actually super 3D-heavy; the only 3D part is that everything sits on a sphere, and that's all. The stars are flat, the constellations are flat, almost everything is flat. That's the nature of this project; we don't need super complicated 3D models. On the other hand, we wanted to push the visuals a bit further, with as many layers as possible; as we all know, the more layers you have, the more detailed and beautiful it gets. So here is a little demo of the project where you can see all the layers being rendered:

http://www.bongiovi.tw/experiments/webgl/su/

We started with a basic skybox, but soon after we built it, we thought it would feel much better with an animated background, so our motion designer created one for me. At the beginning we just put it as a plane texture on top of the skybox, but then we discovered it looks better mapped onto a sphere that moves with the skybox; this gives the animation the feeling of a real sky instead of just an overlay.

I've already blogged about how I make the stars and lines face the center/screen in my previous post; it was in Stage3D, but it works the same way.

su_screenshot

I put a fake depth-of-field effect on the constellation names and the stars, so they get a little blurry/transparent as they get closer to the edge. It's a fake depth of field because I didn't use the depth buffer; it's just a simple calculation based on x and y, but it is very suitable for this project.
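In a vertex shader, that kind of calculation could look like this sketch (the attribute and uniform names are illustrative, not the project's actual code); the fragment shader then multiplies the alpha by vFade:

attribute vec3 aVertexPosition;
uniform mat4 uProjectionMatrix;
uniform mat4 uViewMatrix;
varying float vFade;

void main(void) {
    gl_Position = uProjectionMatrix * uViewMatrix * vec4(aVertexPosition, 1.0);
    vec2 screenPos = gl_Position.xy / gl_Position.w;        // position in normalised device coordinates
    vFade = 1.0 - smoothstep(.3, 1.0, length(screenPos));   // 1 at the center, fading towards the edge
}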

constellationParticle

For the appearance of the constellations, I had a little fun with the fragment shader. I wanted to make a simple particle-effect transition to reveal the constellation drawings. I found this useful GLSL random function on Shadertoy:

float rand(vec2 co){
    return fract(sin(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);
}

Creating this effect is actually quite simple: when getting the texture coordinate, add this random number to the desired coordinate; this results in a random position, so it looks like particles. And creating the animation is just a matter of tweaking the amount of this offset:

targetPos = orgPos + offset * randomPos;

So you can see here: the bigger the offset, the more random (more particle-like) it gets, and if the offset is 0 we get the original position, which produces a clean drawing. So the animation is just tweening offset from some big value back to 0. Voilà, that's how simple it is. You can add more to this random position, such as scale or rotation, to give a more dramatic effect.
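Combined into a fragment shader, the idea looks roughly like this; uOffset and uDrawing are assumed names, not the project's actual code:

uniform sampler2D uDrawing;   // the constellation drawing
uniform float uOffset;        // tweened from a big value back to 0
varying vec2 vTextureCoord;

float rand(vec2 co){
    return fract(sin(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);
}

void main(void) {
    // push the lookup coordinate to a random position, scaled by the offset
    vec2 randomPos = vec2(rand(vTextureCoord), rand(vTextureCoord * 2.0)) - vec2(.5);
    vec2 coord = vTextureCoord + randomPos * uOffset;
    gl_FragColor = texture2D(uDrawing, coord);
}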

In this project we also used a lot of video textures, some of which need transparency. Here is an easy way to do it using the blend mode; before you render, set the blend mode to this:

this.gl.blendFunc(this.gl.SRC_ALPHA, this.gl.ONE);

and make sure the transparent part is pure black; it will then be removed when rendered. This trick works not only for videos but for all textures, so you can save some file size if you use it wisely.

Video textures are quite heavy to load. Some of them could be done in code, but it would be very difficult to reach the same level of detail as a rendered video. I think it's a choice that depends on the project: in our case we are building an installation, so we don't need to care about loading, and we use a super powerful machine to run it, so file size is not a problem. In this case I chose videos, for the best detail and easier modification. However, if we were building an online experience, we would need to do more tests on performance and loading. Anyway, my point is: choose the best solution for the project. I know I'd have a lot of fun playing with shaders if I were building it all in code, but it would be very time-consuming and hard to change.

 

The navigation

This is the first time we have used the Leap Motion for a real project, and it turned out to work quite well. I won't say it's going to replace the mouse, but it can definitely provide an extra way to navigate. The part I like about the Leap Motion is that it's really sensitive and responsive; you can create really good controls with it. However, some gestures are still very hard to use, especially since everyone does them their own way. At the beginning, as you can see in my prototype video, I created a “grab” gesture to navigate. To be honest, I quite like it, it gives me the feeling of holding something; however, some people found it difficult to use, and it's really hard to improve the gesture because people have different ways of “grabbing”. It sounds a little funny, but that's what I encountered during this project. So in the end I removed the grab gesture and went for the full hand. If you have a Leap Motion, you can have a play with the link I mentioned before. We have 3 gestures: full hand open to move around, 1 finger pointing to select, and a clap, which I'll leave for you to discover 😀

There is an interesting part to the navigation: how do I select the constellation I want? Do I need to compare the distance of every constellation to my mouse position? That sounds quite heavy and needs a lot of math. Luckily our ancestors have already solved this problem with basic astronomy: there are 88 constellations in total, and together they take up all the space in the sky. There's a way to determine the boundary of each constellation using right ascension and declination: the IAU constellation boundaries. So basically these scientists have already created the boundaries of all the constellations, and you can find a map like this (of course without the names; I put the names on to make it easier to see):

boundaries_map1

When you map it onto a sphere, you can see it fits perfectly with all the constellations. So what I did is paint each region with a different color; then, when a mouse event is triggered (mouse move or click), I perform a gl.readPixels to get the pixel value at the mouse position, and because each region has a unique value, I know which one was selected (or rolled over). Just a couple of things to pay attention to: when you do the readPixels, you don't need to read the whole render, you just need the 1 pixel under your mouse, which saves some performance. However, the readPixels call is still heavy, so make sure to skip it whenever you can (e.g. when the mouse has not moved, since the result is the same as the last frame). Secondly, when you export this map, make sure you export a PNG so it won't get compressed and lose the values you set.
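A minimal sketch of that lookup, assuming the color-coded map has been rendered and a constellationsByColor table was built when painting the regions:

function pickConstellation(gl, mouseX, mouseY) {
    // read only the single pixel under the mouse, not the whole render
    const pixel = new Uint8Array(4);
    gl.readPixels(mouseX, gl.drawingBufferHeight - mouseY, 1, 1,
                  gl.RGBA, gl.UNSIGNED_BYTE, pixel);
    const key = pixel[0] + '-' + pixel[1] + '-' + pixel[2];
    return constellationsByColor[key];  // one unique color per constellation region
}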

Summary

This is about the longest post I have ever written, but I am really proud of this project. I am still working on the last part of it with my friend; there is something more we want to do, and we would like to bring the beauty of the constellations to everybody. We have discovered the beautiful drawings of Johannes Hevelius and we want to show them to everyone. So stay tuned for updates!

Since I started working on this project, I have fallen in love with astronomy. Our sky is amazingly beautiful; every night, if the sky is clear, I look up and try to find the constellations I know. And I realise that no matter how much I do, I cannot compete with the beauty of nature, but I can get inspired by it. There is a lot of inspiration to be found just by looking at the nature around you.

Some pictures of the night

Texture in vertex shader

LINK

It's such a shame that I only recently discovered that you can read texture data in the vertex shader. The texture2D method is not fragment-shader-only, and this opens up another world to me, like this particle video example that I have rebuilt a thousand times and which is now extremely easy to build. Because I can just update the video texture and calculate the new positions from the pixels, I don't need to re-upload the vertex positions every frame. The resources saved allow me to do more, e.g. using a texture particle and creating a simple DOF effect.

Here is what the vertex shader looks like:

vec4 colorVideo = texture2D(uSampler2, uvRef);
vec3 pos = aVertexPosition;
pos.y += colorVideo.y * offset;

When uploading the vertices, you need to pass an extra UV coordinate so you know where to pick the color from the video texture; that's the uvRef in this example. Then calculate the offset you want and add it to the position of the particle. That's it, hope you enjoy it!
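For context, a complete (still minimal) version of that vertex shader might look like this; the matrix uniforms and point size are assumptions:

attribute vec3 aVertexPosition;
attribute vec2 uvRef;           // extra UV: where to sample the video texture
uniform sampler2D uSampler2;    // the video texture
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
uniform float offset;           // displacement strength

void main(void) {
    vec4 colorVideo = texture2D(uSampler2, uvRef);   // texture2D works in vertex shaders too
    vec3 pos = aVertexPosition;
    pos.y += colorVideo.y * offset;                  // displace by the green channel
    gl_Position = uPMatrix * uMVMatrix * vec4(pos, 1.0);
    gl_PointSize = 2.0;
}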

Flaming Flowers – WebGL + Projection

I built this experiment a while ago, and I've always wanted to project it in my back yard to see how it feels. This weekend I finally borrowed a projector from a friend (thanks Henry!) and made it happen.

It went better than I expected; now I'm really considering buying a projector of my own. It's amazing how different it feels at another scale and in another environment. Another amazing thing is that the moment I projected it on my wall, my kids immediately started to touch it and try to interact with it. I was a little afraid of how the kids would react to the projection, but now I know it's going to be fine. So the next step will be adding interaction using the Kinect; I hope I can get it done before this summer ends, or they'll only be able to play indoors.

The demo is here.

The main difference between this build and the last one is that I put everything in WebGL instead of creating the animation in canvas and making a texture from it. The idea is still the same: use a video texture as a background, then apply a mask on top of it.

texture_mask

That's it; it's not super complicated, but it required more work preparing the assets and creating the simple IK bones. I'll move on to the next step, making it interactive, and hopefully I'll have enough time to finish it this summer 😀

P.S. I recorded a small video of the projection. Unfortunately my camera is not powerful enough, so the quality is not the best, but if you are still interested in how the projection looks, the video is here.


Leap Motion + Constellations

Another test with the Leap Motion. I find the grab gesture quite intuitive to use, especially since it fits navigating the constellations. I would like to experiment more with this prototype, to let users select the constellation they want, but I think it will be difficult to find a good, precise gesture for that.

And the prototype is here (without the Leap Motion library, so you can play with your mouse).