Samsung Design Week Installation

I’ve been working on an installation project with Samsung for Milan design week for the past two months. It’s the first time I’ve worked so intensively with particle systems. The idea is simple: pushing particles to reveal and construct an image. This is a dream project for any developer, playing with particles for two whole months. It’s also the first time I’ve used Cinder to build a project.

 

Converting to Cinder

The reason I chose Cinder is that it’s easy to set up: pretty much every tool you need to work with OpenGL is already there. The framebuffer object is especially useful, and being able to render to multiple targets really saved my life. As for the learning process, it actually didn’t take me long to convert my code from WebGL/JS to Cinder. There are just a few small things to watch out for, which I’ll point out later in this post. The basic knowledge of 3D never changes: vertex buffers, index buffers, textures and shaders are all reusable from a WebGL/JS project. The only thing I had to figure out was how to upload and bind them correctly, and that is really easy with the existing Cinder classes such as VboMesh, GlslProg and Texture. So the transition from WebGL/JS to Cinder was quite painless for me, and I now really enjoy working in Cinder for the performance I gain and for the wider OpenGL support.

 

The particle system 

In this project we built a particle system. Like all other particle systems it’s based on basic physics: the position of a particle is determined by its velocity, and its velocity is determined by its acceleration. You can find really good tutorials in the book “The Nature of Code” by Daniel Shiffman ( link ) or in the Cinder tutorial ( link ) by Robert Hodgin.

Basically, the particle movement follows these rules:

new position = old position + velocity;
new velocity = old velocity + acceleration;

In order to move the particles you can simply change their positions, or change their velocities, but the best option is to change their acceleration, which gives the most natural movement. What the acceleration does might not be very clear, so here is another term that makes it easier to understand: force. When you want to move an object you apply a force to it, and the same goes for our particles. We want to move them with our gestures, so we apply the force generated by the gesture to the particles. For the gesture force, the Leap Motion already gives us the velocity of the palm as a vector; we just multiply this vector by the right amount of force and we get the acceleration we want. There are a couple more forces in this project: the first is a constant wind blowing from left to right, the second is a noise force that acts as turbulence. Combining these three forces, we can create the particle behaviour we want.
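Here is a minimal CPU-side sketch of that force model. It is only an illustration: noise2D stands in for whatever Perlin/simplex noise implementation you use, palmVelocity for the vector reported by the Leap Motion, and all the constants are made up.

// Hypothetical sketch of the force model described above.
// noise2D() and palmVelocity are stand-ins; the constants are illustrative only.

var WIND = { x: 0.02, y: 0.0 };          // constant wind, left to right
var TURBULENCE_STRENGTH = 0.01;
var GESTURE_STRENGTH = 0.001;

function updateParticle(p, palmVelocity, time) {
  // accumulate all the forces into the acceleration for this frame
  var ax = WIND.x;
  var ay = WIND.y;

  // turbulence: sample noise at the particle position so nearby particles
  // get similar, smoothly varying forces
  ax += noise2D(p.x * 0.5, time) * TURBULENCE_STRENGTH;
  ay += noise2D(p.y * 0.5, time + 100.0) * TURBULENCE_STRENGTH;

  // gesture force: scale the palm velocity reported by the Leap Motion
  ax += palmVelocity.x * GESTURE_STRENGTH;
  ay += palmVelocity.y * GESTURE_STRENGTH;

  // velocity += acceleration, position += velocity
  p.vx += ax;
  p.vy += ay;
  p.x += p.vx;
  p.y += p.vy;
}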

 

The first prototype: The basics

There are different ways to build this particle system. The most straightforward is to create a particle class with position, velocity and acceleration attributes. That is what I did in my first attempt; the advantage of this method is that it’s easy to build and debug, so I could come up with a quick prototype, find out what was achievable and start building visuals around it.

The particle movement looked nice, which showed that the way I generated the wind and turbulence was working. The biggest problem with this version, however, was that there weren’t enough particles: at most we could put about 5,000 particles in the particle stream, which isn’t enough to construct the images. We needed more particles.

 

The second prototype: Offloading the calculation to the GPU

The heaviest part of the particle system is calculating the particles’ positions and velocities: every frame you have to go through all the particles and do the same calculation for each of them. This sounds a lot like something else: the fragment shader, which every frame goes through all the pixels and calculates the desired color. So in order to put more particles in our system, we can offload this calculation to the GPU using a fragment shader. But how? In 3D space a particle’s position needs three values: x, y and z. So we take one color channel per axis: red for x, green for y and blue for z. This way we can store the position of a particle as a color, and each pixel of a texture represents one particle. Every frame we loop through all the pixels and update the colors, which is the same as looping through all the particles and updating their positions.

One thing to bear in mind: color values in a shader go from 0 to 1, while a particle position goes from -1 to 1, which we then multiply by whatever range we want. Taking x (the red channel) as an example: a red value of 0 puts the particle at the left border of the range, 0.5 at the centre of the screen and 1 at the right border. The same rule applies not only to position, but to velocity and acceleration as well.
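To make this concrete, here is a hedged sketch of how an initial position texture could be filled on the CPU, one pixel per particle. The texture size and the [-1, 1] range are just assumptions for illustration; the shader that reads it back simply does the inverse mapping (pos = color * 2.0 - 1.0).

// Sketch: pack particle positions into an RGBA texture, one pixel per particle.
var SIZE = 256;                              // 256 x 256 = 65,536 particles
var data = new Float32Array(SIZE * SIZE * 4);

for (var i = 0; i < SIZE * SIZE; i++) {
  var x = Math.random() * 2.0 - 1.0;         // random position in [-1, 1]
  var y = Math.random() * 2.0 - 1.0;
  var z = Math.random() * 2.0 - 1.0;

  data[i * 4 + 0] = x * 0.5 + 0.5;           // red   = x remapped to [0, 1]
  data[i * 4 + 1] = y * 0.5 + 0.5;           // green = y
  data[i * 4 + 2] = z * 0.5 + 0.5;           // blue  = z
  data[i * 4 + 3] = 1.0;                     // alpha, unused here
}

// upload `data` as a floating point texture and use it as the first
// "position" frame of the simulation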

The benefit of doing this is that we can push the particle count very high: on my MacBook Pro I can run 1024×1024 = 1,048,576 particles at 60 fps without any optimisation. On a more powerful machine like an iMac or a Mac Pro you can push it to 4096×4096 particles, which is insane and much more than we need.

 

The final prototype: Working entirely on the GPU

The second prototype demonstrated that it’s possible to offload the calculation of the particle movement to the GPU, and that we can get a really good number of particles. The next step was to build the actual particle system on top of it. The idea is simple: I create two textures, one for velocity and one for position. Each frame I first update the velocity texture with Perlin noise as turbulence plus the wind force, then I update the position texture by adding the new velocity to it. This fits our model of “velocity += acceleration, position += velocity”. However, it didn’t work as I expected. The problem is that the position range is only 0 to 1, the velocity is much smaller (around .001) and the acceleration is smaller still, so when everything is multiplied together the values simply disappear because they are too small. This bugged me for a couple of days until I found this:

gl::Fbo::Format format;
format.setColorInternalFormat( GL_RGBA32F_ARB );

I needed to set the color precision to a higher value (in this case 32-bit float) instead of the default 8 bits. After adding this new precision, all my particle movement worked.
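For reference, the per-frame update described above boils down to two ping-pong passes. This is only a rough sketch with hypothetical helpers (drawPass, the FBO pairs and the uniform names are all my own assumptions), not the actual code of the installation:

// Sketch of the per-frame update order; every name here is a stand-in.
// velocityFbo / positionFbo are ping-pong pairs of float framebuffers, and
// drawPass(shader, target, inputs) means "render a full-screen quad into
// `target` with `shader`, sampling the `inputs` textures".

function updateParticles(time) {
  // 1. velocity += wind + turbulence (+ gesture force)
  drawPass(velocityShader, velocityFbo.write, {
    uVelocity: velocityFbo.read.texture,
    uPosition: positionFbo.read.texture,   // the noise is sampled by position
    uTime: time
  });
  velocityFbo.swap();

  // 2. position += velocity
  drawPass(positionShader, positionFbo.write, {
    uPosition: positionFbo.read.texture,
    uVelocity: velocityFbo.read.texture
  });
  positionFbo.swap();

  // 3. draw the particles, reading their positions from
  //    positionFbo.read.texture inside the vertex shader
}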

So now I could recreate the movement I had built on the CPU, and this time push it to 256×256 = 65,536 particles (it can go higher, but that makes the particle stream feel too full, so we settled on this value). At this point we had a healthy particle system that we could start building the project on.

 

The actual build

The first task at hand was how to apply the gesture force to the particles. To do this we create another texture just to record the gesture force, like this:

So when calculating the movement, I simply add this force to the other two forces (wind and turbulence).

Another thing I tried was combining the position texture and the velocity texture into one; in the video above, the left side is the position and the right side is the velocity. This saves me a framebuffer and one render call; performance-wise I don’t know yet whether it’s better or worse, it needs more testing. The final shader code is quite complicated once it combines all the forces with the logic that makes the particles move to their target positions from the image. In total I used six textures to store different pieces of information, and output to two different render targets in order to track the completion rate of an image.

 

Summary

I’ve known about this method of offloading the calculation to the GPU for a long time, and there are a lot of examples on the internet; just search for “GPU particles” and you will get plenty. The basic idea of the technique is really easy to understand, but it wasn’t until I actually started building it that I realised how many details need to be taken care of. Overall, as I said, the concept is not complicated; you will spend more effort on converting colors and values into vectors in your head, and on finding the right uv coordinate to get the right color. It’s hard to debug, since you can’t get much information out of a shader, but once you get it right it gives you a great reward: good performance with a huge number of particles.

I am really glad I got the chance to work on this project. It is part of a big installation, and I’ll share the B-Reel making-of video later when it’s done. All developers love particles, and I was really lucky to work on a particle system for two full months, which gave me a good opportunity to clear up my thoughts about particle systems and to test several different methods. I only list three prototypes in this post, but in total there were about thirty, each progressing bit by bit. I’m really happy about using Cinder as well; for a first Cinder project this was a great challenge. I learned a lot during this project, and I’m really proud to be part of this big and beautiful installation.

 

Some pictures of the project


夸父 Kuafu

It began about half a year ago, when I was trying to recreate a Chinese-style 3D world. At the time the idea was really simple: grab some Chinese ink-style mountain textures and put them together. Then a good friend of mine saw it and started chatting with me about it. He said it would be interesting to use real map data to construct the world, along with a good story. At that moment the ancient Chinese myth of Kuafu ( 夸父 ) came to my mind; it suited the idea perfectly: a giant chasing the sun across the land. We both liked the story and got really excited about the project. We started playing with some experiments right away, and not long after my friend came up with this amazing design:

Styleframe_2_50

 

It was so beautiful and went far beyond what I had imagined for this story. Later on I started to work on the first working prototype: LINK

We really liked this prototype and got more ideas while playing with it, so we decided to turn it into a project. The first step was to create the storyboard:


After this prototype and storyboard we both got caught up in work, which kept us busy for a while, until one weekend. It was a Friday evening and I was on my way home when I suddenly had the idea of recording an ink drop and using it as a texture for the mountains. Our first prototype looked good but the mountains were flat; I wanted to make real 3D mountains but had no idea how to create the textures for them. Then this idea struck me, so I did a quick test that night.

The result was better than I thought it would be; they make really good-looking Chinese-style mountains.

mountainsWithTextures

So I went back and created more textures, trying and playing with different colors and testing different papers. I really enjoy this process of creating textures. It takes time, but sitting there watching the ink flow and create all kinds of interesting and beautiful shapes is really exciting. As devs we all know that the more randomness we throw into the code, the more alive it becomes and the more variety it has, but nothing compares with the real thing. Every ink drop creates different shapes depending on how thick the ink is, how high you drop it, how much water is on the paper, the flow of the water, the tiny differences in the paper itself. It’s all these things you cannot control that make it more beautiful, and that also make it feel really different when you put it into a 3D render.


After these tests, the next step was to get real elevation data and try to recreate the terrain. In the beginning I was using the Google Elevation Service to get the data, which worked perfectly; the most amazing thing is that it even returns the elevation underneath water. However, I was afraid we would hit the API call limit quite quickly, since we are going to generate a good number of mountains, so I switched to using an elevation map:
earth_height

The idea is really simple: we translate the latitude and longitude to x and y on this map and read the pixel value at that point; the brighter the pixel, the higher the mountain. Each time I create a mountain I set a minimum height and ignore anything lower than that, and I also check whether there is already a mountain nearby; if not, I create a new one.
heightCombine
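Here is a rough sketch of that lookup using a canvas. It assumes an equirectangular elevation map; the variable names and the threshold are made up for illustration, and the nearby-mountain check is left out.

// Sketch: read elevation from an equirectangular height map via a canvas.
// mapImage is an already loaded Image; MIN_HEIGHT is illustrative only.

var canvas = document.createElement('canvas');
canvas.width = mapImage.width;
canvas.height = mapImage.height;
var ctx = canvas.getContext('2d');
ctx.drawImage(mapImage, 0, 0);

function getElevation(lat, lon) {
  // equirectangular projection: lon -180..180 -> x, lat 90..-90 -> y
  var x = Math.floor((lon + 180) / 360 * canvas.width);
  var y = Math.floor((90 - lat) / 180 * canvas.height);
  var pixel = ctx.getImageData(x, y, 1, 1).data;   // [r, g, b, a]
  return pixel[0] / 255;                           // brightness 0..1 used as height
}

var MIN_HEIGHT = 0.35;
function shouldCreateMountain(lat, lon) {
  return getElevation(lat, lon) > MIN_HEIGHT;      // plus a "no mountain nearby" check
}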

And at the same time my friend created these amazing style frames:
Styleframe_3 Styleframe_4

I was really excited by these stunning images he created, and we really liked this landscape layout, so we started thinking: why don’t we make it an installation? We think it will look better in a long panoramic format, and it will also feel more immersive. So we changed our direction from an online experience to an installation.

Just at this moment Google launched a new project called DevArt, and we thought it was a really good chance to show our project to the world, so we have now put it on there. Also, since we are making it an installation, we thought it would be great to have some sound in it, and even better if the sound is interactive. So we invited another friend on board to work on the sound design. He joined with lots of amazing ideas for the sound and right away made sound design a big and interesting part of the project.

In order to make it an installation I have switched to Cinder, combined with Node.js as our server and an HTML5/JS page as the controller. This week I started to work on the communication part: sending data from the controller to the Node.js server and then on to the front end, across these different technologies.
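As a minimal sketch of that relay, assuming WebSockets via the ws package (the port, the message shape and the role names are my own assumptions, not the project’s actual protocol):

// Sketch of the controller -> Node.js -> front end relay using the ws package.
var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({ port: 8080 });

var clients = { controller: null, frontend: null };

wss.on('connection', function (socket) {
  socket.on('message', function (raw) {
    var msg = JSON.parse(raw);

    // each client announces its role when it connects
    if (msg.type === 'register') {
      clients[msg.role] = socket;               // 'controller' or 'frontend'
      return;
    }

    // anything coming from the controller gets forwarded to the projection
    if (socket === clients.controller && clients.frontend) {
      clients.frontend.send(raw);
    }
  });
});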

So now we are working towards finishing the project. It’s really interesting to look back at all these tests and prototypes; it feels like we have already done a lot, but the truth is there is still more ahead. I want to thank my friends who gave us so much positive feedback after we announced this project; it’s definitely good motivation for us to keep working on it. We will keep updating our DevArt project page, and I will keep updating my blog on this project too, even after DevArt is over.

WebGL – Depth texture

It took me a while but I finally got the depth texture working. Basically I just followed the instructions from this article and it worked. However, I ran into some problems while getting it to work, which I’ll go through below.
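For reference, the basic WebGL 1 setup boils down to enabling the depth texture extension and attaching a depth texture to a framebuffer. The sketch below is my own summary, not the article’s exact code; gl is assumed to be an existing WebGL context and size a texture dimension.

// Sketch of a WebGL 1 depth texture setup (the article's code may differ).

var ext = gl.getExtension('WEBGL_depth_texture');
if (!ext) { throw new Error('WEBGL_depth_texture not supported'); }

// color texture
var colorTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, colorTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, size, size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

// depth texture
var depthTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, depthTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT, size, size, 0, gl.DEPTH_COMPONENT, gl.UNSIGNED_SHORT, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

// attach both to a framebuffer; after rendering the scene into it,
// depthTex can be sampled like any other texture in a shader
var fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, colorTex, 0);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, depthTex, 0);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);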

 

After trying the technique from the article I mentioned, the first result I got was a totally white screen. I thought I had made an error and nothing was getting rendered, but after further research I found somebody who had the same situation: it turned out the depth texture was actually working, it’s just that the values were very close to the maximum, so everything looked white. So I made a simple test, changing actual white (1.0, 1.0, 1.0 in the shader) to red, and this is the result (on the left the all-white screen, on the right the one where actual white is dyed red, so you can see the shape of the mountains):
depthTexutre0
So the code was actually working; I just needed to find a way to display it correctly. I went back to do more searching and found this way to show the depth texture:

// linearise the depth value so it shows up as a visible grey gradient
uniform sampler2D uSampler0;   // the depth texture
varying vec2 vTextureCoord;

void main(void) {
	float n = 1.0;      // near plane
	float f = 2000.0;   // far plane
	float z = texture2D(uSampler0, vTextureCoord.st).x;
	float grey = (2.0 * n) / (f + n - z * (f - n));
	gl_FragColor = vec4(grey, grey, grey, 1.0);
}

n is the near plane distance and f is the far plane distance; after applying this method the depth texture looks much more normal:

depthtexure0

But when I switched back to my render, something was wrong:
depthtexure1
You can see the depth is rendered weirdly, looking more like layers, and another thing I discovered is that the depth values I got were extremely close to 1 (> .995). The depth texture I got looked layered too, not smooth like the one in the article. So I had to go back and search again, and I found out it was because I didn’t have the right setup for zNear and zFar: a zNear that is too small, or a zFar that is too big, results in a very wide range of depth values, which limits the precision of the depth buffer. At that point I was using .1/10000 for zNear/zFar. So I scaled everything down and brought the camera closer so it looked the same size, but zNear and zFar became 1/2000. This time everything worked: the depth texture looks perfect, and the layering effect above is gone as well.

depthtexure2

So I finally got it working. I put up a little depth-of-field effect using the depth texture here; there is a checkbox so you can see what the depth texture looks like. It was a really long road to get there, mainly because there isn’t much material on this topic; you will find more articles if you search for OpenGL depth texture instead of WebGL depth texture. Here is a very good link that tells you the things you want to know about depth textures. Again, all the code you need is in this article; enjoy the depth texture, and now you can create some more advanced effects.

Chinese style mountains

A little experiment done last weekend. I got the idea on my way back home on Friday night and couldn’t wait to build it.

The idea is simple: record an ink drop on video, and use it as a texture to create a mountain. In the end I just used static images instead of the video because I haven’t had time to edit it, but these images already work really well, better than I originally imagined. I enjoy creating things from the real world; the beauty of it is that it’s different every time and you can’t predict what’s going to happen. All you can do is put some water on the paper, let the ink drop, and wait and watch the interesting shapes form themselves. You could definitely create these textures in Photoshop, but recording the real thing makes it feel more “alive” to me, and it holds more surprises as well.


 

For the code part it’s simple: the shape of the mountain is created using a sine curve plus some Perlin noise (a rough sketch of this is below the demo link). Here is the demo link:

http://www.bongiovi.tw/experiments/webgl/mountains/
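Roughly, the ridge profile could look like the sketch below. noise1D stands in for any 1D Perlin/simplex noise function, and all the numbers are just illustrative, not the values I actually used.

// Sketch of a mountain ridge profile: a sine curve plus Perlin noise.
// noise1D() is a stand-in and the magic numbers are illustrative only.

function mountainHeight(x, seed) {               // x goes from 0 to 1 across the ridge
  var base = Math.sin(x * Math.PI);              // 0 at both ends, peak in the middle
  var detail = noise1D(x * 4.0 + seed) * 0.35;   // low-frequency bumps
  var rough = noise1D(x * 16.0 + seed) * 0.1;    // high-frequency roughness
  return Math.max(0.0, base + detail + rough);
}

// sample the profile into vertex heights for one mountain
function buildRidge(numPoints, seed) {
  var heights = [];
  for (var i = 0; i < numPoints; i++) {
    heights.push(mountainHeight(i / (numPoints - 1), seed));
  }
  return heights;
}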

 

And here is a short video of the ink drop:

Star Canvas – Case study

Last Thursday was the 5th anniversary of B-Reel London, and we had a party that night along with a couple of our R&D projects. I was working on this one, “Five”, which is based on one of my old prototypes:

http://blog.bongiovi.tw/leap-motion-constellations/

The idea is to show all the constellations in the sky, and we also created a drawing tool so people can create their own constellation and send it up to the sky. We found a way to build a dome and project inside it, and we did it. It’s my first installation, really fun and a lot of learning, and I need to say thanks to my colleagues who made this possible. I want to write a little bit more about this project; I think B-Reel will put together a beautifully edited making-of video, so in this article I will focus on the tech/dev part.

 

The installation

Here is a diagram of the basic structure:

su_structure

Our developer Liam built the communication using Node.js as the backend server for this project. Apart from the projection, we have two drawing tools running on iPads that allow users to create their own constellations, and also a virtual keyboard on an iPad for people to search for the constellation they created. We didn’t want to use a real keyboard because people could mis-trigger other applications, so we created this keyboard and limited its use to this project.

The projection is built using WebGL. At the beginning I was considering using Cinder to build this project, but due to the limited time, and because I had already built a working prototype in WebGL, I chose WebGL in the end. I wasn’t too sure about it when I made the choice; I didn’t know if the performance would be good enough, or whether it would be stable enough to run all night with a lot of communication going on. But it turned out to work quite well; I thought I might need to restart the whole thing a couple of times during the party, but in the end I didn’t have to. I’m really happy with it and feel more secure now about using WebGL for installations. I’m also not using any WebGL libraries, just some tool classes I created myself, which is another big achievement for me.

The sound is another big part, done by our sound master Owen. All I did was send events, based on the hand gesture and the selected constellation, to the Node server, which re-sends them to Pure Data to generate the sound effects. When a constellation is selected, I calculate the number of lines in it and the max/min/average length of those lines and send them out, and Owen creates a different melody based on this information, so each constellation has its own unique melody. I would really like to bring this feature to the web version, but I need to learn from master Owen first :)

For the drawing tool we capture the user’s strokes and then simplify them down to fewer dots to create the “constellation-like” effect (one simple way of doing this is sketched below). We then insert the generated constellation into the database and at the same time send a signal via the Node server to tell the projection to update its display.
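A simple way to do that kind of simplification is to keep only points that are far enough from the previously kept point. This is just one possible approach, not necessarily the exact one we shipped; the minimum distance is an assumption.

// Sketch: simplify a stroke down to a handful of "star" points by keeping
// only points that are at least minDist away from the last kept point.
function simplifyStroke(points, minDist) {
  if (points.length === 0) return [];
  var kept = [points[0]];
  for (var i = 1; i < points.length; i++) {
    var last = kept[kept.length - 1];
    var dx = points[i].x - last.x;
    var dy = points[i].y - last.y;
    if (Math.sqrt(dx * dx + dy * dy) >= minDist) {
      kept.push(points[i]);
    }
  }
  return kept;
}

// e.g. var stars = simplifyStroke(strokePoints, 40);   // 40px between stars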

And the keyboard simply transfers the event of which key was pressed to the projection, very straightforward.

 

The dome

We discovered this article teaching people how to build a dome using just cardboard, and the best part is that it already has all the details of the triangles you need, including the plan and sizes. Once I saw it I couldn’t help but want to build one myself. My wife and I started right away on a small-scale one to test; here are some pictures:


It’s really easy to build, and even the actual dome we built is not that complicated either. The problem, however, is how to support it: the geometry itself is flexible, so in order to keep it in a perfect shape we used a lot of fishing line attached to almost every corner to hold the shape. This was actually the most difficult part; building the dome itself was much easier.

IMG_5262

 

The other challenge was how to project onto this dome. In the link they use a hemispherical mirror, which we did try, but the result was really not acceptable. The first problem is that when the image is reflected onto the dome it loses a lot of detail and becomes very pixelated. The second is that the distortion is really hard to correct. For these two reasons we gave up on the hemispherical mirror. Then I tried projecting directly onto the dome and correcting the distortion in code, but we found it actually looked better left as it was, without any correction. Maybe that’s because of the nature of this project: everything is already on a sphere, so there is no need to correct it further. All I needed to do was create a gradient circle mask to mask out the part outside the dome.

IMG_3505

 

The constellations

This project is not actually very 3D-heavy; the only 3D part is that everything sits on a sphere, and that’s all. The stars are flat, the constellations are flat, almost everything is flat. That’s the nature of the project; we don’t need super complicated 3D models for this. On the other hand, we wanted to push the visual side a little bit more, with as many layers as possible; as we all know, the more layers you have, the more detailed and beautiful it gets. Here is a little demo of the project where you can see all the layers being rendered:

http://www.bongiovi.tw/experiments/webgl/su/

We started from a basic skybox, but soon after we built it we felt it would be much better with an animated background, so our motion designer created an animated background for me. At the beginning we just put it as a flat texture on top of the skybox, but then we discovered it looks better mapped onto a sphere that moves with the skybox; this gives the animation the feeling of a real sky instead of just an overlay.

I’ve already blogged about how I make the stars and lines face the centre/screen in a previous post; that was in Stage3D, but it works the same way.

su_screenshot

I put a fake depth-of-field effect on the constellation names and the stars, so they get a little blurry and transparent as they get closer to the edge. It’s a fake depth of field because I didn’t use the depth buffer; it’s just a simple calculation based on x and y, but it’s very suitable for this project.
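The calculation can be as simple as measuring how far the point is from the centre of the screen. The curve and numbers below are my own illustration, not the exact formula used in the project:

// Sketch of a fake depth-of-field factor based only on screen x/y.
// screenX / screenY are assumed to be in clip space (-1..1).
function fakeDofAmount(screenX, screenY) {
  var dist = Math.sqrt(screenX * screenX + screenY * screenY); // distance from screen centre
  var t = Math.min(1.0, Math.max(0.0, (dist - 0.5) / 0.5));    // 0 in the centre, 1 at the edge
  return t * t;                                                // ramp up the effect near the edge
}

// usage: alpha = 1.0 - fakeDofAmount(x, y) * 0.8;  blur = fakeDofAmount(x, y) * maxBlur;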

 

constellationParticle

For the appearing of the constellations I had a little fun with the fragment shader. I wanted to make a simple particle-style transition to reveal the constellation drawings. I found this useful GLSL random function on Shadertoy:

float rand(vec2 co){
    return fract(sin(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);
}

Creating this effect is actually quite simple: when reading the texture, add this random number to the desired texture coordinate; this samples from a random position, which makes it look like particles. And animating it is just a matter of tweaking the amount of this offset:

targetPos = orgPos + offset * randomPos;

So as you can see: the bigger the offset, the more random (more particle-like) it gets, and if the offset is 0 we get the original position, which gives a clean drawing. So basically the animation is just tweening the offset from some big value back to 0. Voilà, that’s how simple it is. You can add more to this random position, such as scale or rotation, to get a more dramatic effect.
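Put together, the fragment shader for this transition could look roughly like the sketch below, written as a GLSL string the way I would embed it in JS. The uniform and varying names are my assumptions:

// Sketch of the transition fragment shader, embedded as a GLSL string.
// uSampler is the constellation drawing, uOffset is tweened from a big value down to 0.
var transitionFragmentShader = [
  'precision mediump float;',
  'uniform sampler2D uSampler;',
  'uniform float uOffset;',
  'varying vec2 vTextureCoord;',
  '',
  'float rand(vec2 co){',
  '    return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);',
  '}',
  '',
  'void main(void) {',
  '    // a different pseudo-random direction for every pixel',
  '    vec2 randomPos = vec2(rand(vTextureCoord), rand(vTextureCoord.yx)) - 0.5;',
  '    // offset = 0 gives the clean drawing, bigger offsets scatter it into particles',
  '    vec2 uv = vTextureCoord + randomPos * uOffset;',
  '    gl_FragColor = texture2D(uSampler, uv);',
  '}'
].join('\n');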

We also used a lot of video textures in this project, and some of them need transparency. Here is an easy way to do it with the blend mode: before you render, set the blend mode to this:

// additive blending: pure black contributes nothing, so it ends up transparent
this.gl.blendFunc(this.gl.SRC_ALPHA, this.gl.ONE);

and make sure the transparent part is pure black; it will then be removed when rendered. This trick is not only for video but for all textures, so you can save some file size if you use it wisely.

Video textures are quite heavy to load. Some of them could be done in code, but it would be very difficult to get the same detail as a rendered video. I think this is a choice based on the project: in our case we are building an installation, so we don’t need to care about loading, and we run it on a super powerful machine, so the file size is not a problem. In this case I choose videos to get the best detail, and they are also easier to modify. If we were building an online experience, we would need to do more tests on performance and loading. Anyway, my point is: choose the best solution for the project. I know I would have a lot of fun playing with shaders if I were building it in code, but it would be very time-consuming and hard to change.

 

The navigation

This is the first time we used the Leap Motion for a real project, and it turned out to work quite well. I won’t say it’s going to replace the mouse, but it definitely provides an extra way to navigate. The part I like about the Leap Motion is that it’s really sensitive and responsive; you can create really good controls with it. However, some gestures are still very hard to use, especially because everyone has their own way of making a gesture. At the beginning, as you can see in my prototype video, I created a “grab” gesture to navigate. To be honest I quite like it, it gives me the feeling of holding something, but some people found it difficult to use, and it’s really hard to improve the gesture because people have different ways of “grabbing”. It sounds a little funny, but that’s what I ran into during this project. So in the end I had to remove the grab gesture and go with the full open hand. If you have a Leap Motion you can have a play with the link I mentioned before. We have three gestures: full open hand to move around, one finger pointing to select, and a clap, which I’ll leave for you to discover :)

There is an interesting part of the navigation: how do I select the constellation I want? Do I need to compare the distance from every constellation to my mouse position? That sounds quite heavy and needs a lot of math. Luckily our ancestors have already solved this problem. Basic astronomy: there are 88 constellations in total and together they cover the whole sky. There is a way to determine the boundary of each constellation using right ascension and declination, the IAU constellation boundaries. So these scientists have already created the boundaries of all the constellations, and you can find a map like this (without the names of course, I put them on just to make it easier to read):

boundaries_map

When you map it onto a sphere you can see it fits perfectly with all the constellations. So what I did is paint each region with a different color, and when a mouse event is triggered (mouse move or click) I perform a gl.readPixels to get the pixel value at the mouse position; because each region has a unique value, I know which one was just selected (or rolled over). Just a couple of things to pay attention to: when you do the readPixels you don’t need to read the whole render, you just need the one pixel under your mouse, which saves some performance. The readPixels call is still heavy though, so make sure to skip it whenever you can (e.g. when the mouse hasn’t moved, the result is the same as last frame). Secondly, when you export this map, make sure you export a PNG so it doesn’t get compressed and lose the values you set.
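A sketch of that picking flow is below. drawPickingPass and highlightConstellation are hypothetical helpers, and encoding the region id in the red channel is just my assumption for illustration:

// Sketch of color picking with gl.readPixels; drawPickingPass() and
// highlightConstellation() are hypothetical helpers.

var pixel = new Uint8Array(4);
var lastMouse = { x: -1, y: -1 };
var lastRegion = -1;

function pickConstellation(mouseX, mouseY) {
  // render the boundary map, each region painted with a unique flat color
  drawPickingPass();

  // read only the single pixel under the mouse (note: y is flipped in GL)
  gl.readPixels(mouseX, gl.drawingBufferHeight - mouseY, 1, 1,
                gl.RGBA, gl.UNSIGNED_BYTE, pixel);

  return pixel[0];   // in this sketch the red channel encodes the region id
}

function onMouseMove(e) {
  // skip the expensive readPixels call if the mouse hasn't actually moved
  if (e.clientX === lastMouse.x && e.clientY === lastMouse.y) return;
  lastMouse.x = e.clientX;
  lastMouse.y = e.clientY;

  var region = pickConstellation(e.clientX, e.clientY);
  if (region !== lastRegion) {
    lastRegion = region;
    highlightConstellation(region);
  }
}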

 

Summary

This is about the longest post I have ever written, but I am really proud of this project. I am still working on the last part of it with my friend; there is something more we want to do with it, and we would like to bring the beauty of the constellations to everybody. We have discovered the beautiful drawings of Johannes Hevelius and we want to show them to everybody. So stay tuned for updates!

Since I started working on this project I have fallen in love with astronomy. Our sky is amazingly beautiful; every night, if the sky is clear, I look up and try to find the constellations I know. And I realise that no matter how much I do, I cannot compete with the beauty of nature, but I can get inspired by it. There is a lot of inspiration to be found just by looking at the nature around you.

 

Some pictures of the night


The future of programming by Bret Victor

I just realised I now have more than 100 posts on my blog, bravo to myself. The most difficult part was at the beginning, when I had to force myself to post something; now it gets easier, and it feels wrong if I haven’t posted anything for a long time.

Recently I came across this video, and it is really brilliant and very inspiring. So I’d like to share it with you, and keep it here for myself as well. Enjoy :)

 

P.S. If you enjoy this video, you should check out his website as well: http://worrydream.com/. There’s more stuff there.

Some prototypes with leap motion

I just got my Leap Motion this week, and it’s really an amazing device; I have a lot of new ideas now. Here are two prototypes I built before, now with some Leap control/interaction added. Have a play!

 

Particle Sphere : http://www.bongiovi.tw/experiments/leap/sphere/

Just a simple control using the Leap Motion: grab and drag to navigate.

 

Flaming Flowers : http://www.bongiovi.tw/experiments/leap/flamingFlower/

Swipe left or right to create the wind, and swipe up to clear and make new flowers.

 

Better way to navigate

While building the sphere prototype, I tried to improve the way I control 3D objects. What I used to do is move the camera and make it point at the centre using the lookAt method. That has been working quite well, but it has some disadvantages, the biggest one being the limit on the camera angle. This is due to the lookAt method: you need to pass in an “up” vector, which is normally fine, but there is a problem when you rotate around the x-axis. When you go past 90 or -90 degrees you need to invert the up vector, or you will see your camera suddenly flip by 180 degrees. The other issue is that when you rotate the camera around x, the behaviour changes depending on the angle; at 90 or -90 degrees it feels more like rotating around the z-axis. It’s quite hard to explain in words, so have a look at this link: http://www.bongiovi.tw/1hour/worldshpere/ (ps I locked the rotation on X so you can’t go past 90 or -90 degrees).

This flaw was normally fine, since I didn’t need to rotate the camera much, until recently: there is a project where I need to be able to do a full rotation on all axes, so I had to improve this. At the beginning I thought I had hit the notorious gimbal lock, so I tried using a quaternion to solve the problem, but the problem was still there. Then I realised it was the lookAt method, as I mentioned before. So I tried another way: don’t move the camera, move the object. In this case I don’t need to worry about the camera any more, since it always stays in the same position; I just need to apply a rotation matrix to the entire scene and it should do the job.

 

Quaternion and rotation

After I started to build this matrix for the entire scene, I really did hit gimbal lock trying to rotate only around x, y and z, so I changed to a quaternion. It’s the first time I have used quaternions and it’s hard to pick up all the math behind them. In the end I still haven’t fully understood how they work, but I did find out how to use them: say you want to rotate by a certain angle around an axis, the quaternion for this is:

[sin(angle/2) * axis.x, sin(angle/2) * axis.y, sin(angle/2) * axis.z, cos(angle/2) ]

Pretty easy to remember and super useful. So I began to build the interaction: detect the mouse drag, calculate the distance dragged on x and y, and apply a rotation on x and on y. However this still didn’t feel right. In the end I discovered that I don’t actually need to do the rotation twice, and I shouldn’t: I need to find the right axis to rotate around, and the rotation angle is the distance travelled. So what is the rotation axis we actually need? It turns out to be quite simple: a mouse drag gives us a vector like (xDistance, yDistance, 0), and the axis we need to rotate around is the vector perpendicular to both this dragging vector and the z-axis. How do we get that? The cross product. The actual code looks like this:

var v = vec3.create([this.diffX, this.diffY, 0]);  // the mouse dragging vector
var axis = vec3.create();
vec3.cross(v, this._zAxis, axis);   // the cross product gives the axis perpendicular to the drag and the z axis
vec3.normalize(axis);               // normalize
var angle = vec3.length(v) * this._offset;  // the rotation angle we need, scaled from the drag distance

// build the quaternion for this rotation (note the half angle, as in the formula above)
var quat = quat4.create([Math.sin(angle / 2) * axis[0], Math.sin(angle / 2) * axis[1], Math.sin(angle / 2) * axis[2], Math.cos(angle / 2)]);
quat4.multiply(tempRotation, quat); // multiply with the previous rotation to accumulate it

So that’s it. I think it’s a little bit hard to picture the problem I had using only words, so have a play with the prototypes and you will know what I am talking about (or not? :P). The solution is not that complicated, but it took me a while to understand how all these things work, so I just want to share it and hopefully save some time for somebody having the same problem.

Texture in vertex shader

LINK

It’s such a shame that I only recently discovered that you can read texture data in the vertex shader. The texture2D method is not fragment-shader only, and this opens up another world to me. Take this particle video example that I have rebuilt a thousand times: now it’s extremely easy to build, because I can just update the video texture and calculate the new position from the pixels, so I don’t need to re-upload the vertex positions every frame. The resources saved allow me to do more, e.g. using a texture for the particles and creating a simple depth-of-field effect.

Here is how the vertex shader looks:

vec4 colorVideo = texture2D(uSampler2, uvRef); // sample the video texture inside the vertex shader
vec3 pos = aVertexPosition;
pos.y += colorVideo.y * offset;                // displace the particle by the pixel's green channel

When uploading the vertices you need to pass an extra uv coordinate so you know where to pick the color from the video texture (that is uvRef in this example); then calculate the offset you want and add it to the position of the particle. That’s it, hope you enjoy it!
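For completeness, a fuller version of that vertex shader might look roughly like the sketch below, written as a GLSL string. Everything apart from uSampler2, uvRef and offset (the matrix names, the point size) is my own assumption:

// Sketch of a complete version of the vertex shader above, as a GLSL string.
var videoParticleVertexShader = [
  'attribute vec3 aVertexPosition;',
  'attribute vec2 uvRef;              // where to sample the video for this particle',
  'uniform sampler2D uSampler2;       // the video texture',
  'uniform float offset;              // displacement strength',
  'uniform mat4 uMVMatrix;',
  'uniform mat4 uPMatrix;',
  '',
  'void main(void) {',
  '    vec4 colorVideo = texture2D(uSampler2, uvRef);',
  '    vec3 pos = aVertexPosition;',
  '    pos.y += colorVideo.y * offset;   // brighter pixels push the particle higher',
  '    gl_Position = uPMatrix * uMVMatrix * vec4(pos, 1.0);',
  '    gl_PointSize = 2.0;',
  '}'
].join('\n');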