Adidas "All for this" case study – part 1

Recently I have been working on a project for Adidas called "All for this"; for more details of the project you can check out our case study here.

But here I'm going to share something extra, the part that didn't go ahead in the end. At the beginning of the project we were really ambitious: we were thinking of doing the motion capture ourselves using a Microsoft Kinect. It's such an interesting idea and a good challenge that I spent two days building this simple prototype.

The first step is to get the data from the Kinect; I used Processing to do this. The data consists of several frames, and each frame holds the positions of all the points in 3D. After getting all the information I put it into a big JSON string and passed it to JavaScript. At this point the point cloud looks like this:

[point cloud screenshot]

and you can check it in action here.
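To give a concrete idea of what gets passed to JavaScript: the export is essentially one big array of frames, each frame holding the 3D positions of the points. The layout below is my assumption for illustration, not the exact export format:

// Assumed shape of the exported capture: one entry per frame, each frame
// an array of [x, y, z] point positions from the Kinect.
var frames = JSON.parse(jsonString);
// frames = [ [ [x, y, z], [x, y, z], ... ],   // frame 0
//            [ [x, y, z], [x, y, z], ... ],   // frame 1
//            ... ];
console.log(frames.length + " frames, " + frames[0].length + " points in the first frame");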

And once you have this information it's all up to you how to use it. In this case I used the points as emitters for the particles, just randomly selecting a few of them to emit particles every frame. You can check the result here.
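The emitting part is nothing fancy. Roughly, on each render tick I pick a handful of random points from the current frame and spawn particles there; a small sketch with made-up helper names (createParticle and EMIT_PER_FRAME are placeholders, not the actual code):

// Each frame: pick a few random capture points and spawn particles at them.
var points = frames[currentFrame];
for(var i = 0; i < EMIT_PER_FRAME; i++) {
    var p = points[Math.floor(Math.random() * points.length)];
    particles.push(createParticle(p[0], p[1], p[2]));
}
currentFrame = (currentFrame + 1) % frames.length;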

Unfortunately, due to the limited budget and time, in the end we went with a motion capture company instead of doing it on our own, but it was still an interesting experience for me. Doing the capture feels like a mini installation: you need to set up the environment and adjust your code, reducing the noise and so on, and these lessons are very precious to me. So this is part one of the case study; the second part will be about the canvas. As you can see these prototypes are built with WebGL, but for the project we were not allowed to use it, so we came up with another idea. Stay tuned for part 2!

Blossom

http://www.bongiovi.tw/experiments/webgl/blossom/

I was trying to create my own OBJ loader and came up with this idea; the tree is loaded from an OBJ file. It's not complicated to create an OBJ loader, but it took me a while to figure out how the format works. One small thing about the OBJ file structure is that the vertex indices start from 1 instead of 0; it took me a while to catch this, but after that it's pretty easy. However, I don't recommend writing an OBJ loader yourself, because there's already one in three.js:
http://mrdoob.github.com/three.js/examples/webgl_loader_obj.html

http://mrdoob.github.com/three.js/examples/js/loaders/OBJLoader.js
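That said, if you do want to write your own, the indexing is really the only tricky part. Here is a toy sketch (not my actual loader) just to show the off-by-one fix on the face indices:

// Toy OBJ parsing: "v x y z" lines are vertices, "f a b c" lines are triangles.
var vertices = [], indices = [];
objText.split("\n").forEach(function(line) {
    var parts = line.trim().split(/\s+/);
    if(parts[0] === "v") {
        vertices.push(parseFloat(parts[1]), parseFloat(parts[2]), parseFloat(parts[3]));
    } else if(parts[0] === "f") {
        // OBJ face indices start from 1, so subtract 1 before filling the index buffer.
        indices.push(parseInt(parts[1], 10) - 1,
                     parseInt(parts[2], 10) - 1,
                     parseInt(parts[3], 10) - 1);
    }
});

(This assumes triangulated faces with positions only; real OBJ files can also carry texture coordinates and normals in the face entries.)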

The other thing I tried to do in this experiment is to put the rotation of the particles in the shader. I haven't done any performance tests yet, so I don't know whether it's better to do it on the JavaScript side or on the shader side; I just discovered this approach the other day and wanted to try it. In the shader the code looks like this:

vec3 rotateX(vec3 pos, float alpha) {
    mat4 trans= mat4(   1.0, 0.0, 0.0, 0.0, 
                        0.0, cos(alpha), -sin(alpha), 0.0, 
                        0.0, sin(alpha), cos(alpha), 0.0, 
                        0.0, 0.0, 0.0, 1.0);
    return vec3(trans * vec4(pos, 1.0));
}

vec3 rotateY(vec3 pos, float alpha) {
    mat4 trans= mat4(   cos(alpha), 0.0, sin(alpha), 0.0, 
                        0.0, 1.0, 0.0, 0.0, 
                        -sin(alpha), 0.0, cos(alpha), 0.0, 
                        0.0, 0.0, 0.0, 1.0);
    return vec3(trans * vec4(pos, 1.0));
}

vec3 rotateZ(vec3 pos, float alpha) {
    mat4 trans= mat4(   cos(alpha), -sin(alpha), 0.0, 0.0, 
                        sin(alpha), cos(alpha), 0.0, 0.0, 
                        0.0, 0.0, 1.0, 0.0, 
                        0.0, 0.0, 0.0, 1.0);
    return vec3(trans * vec4(pos, 1.0));
}

This is really useful when you want to rotate a vector in a shader. It would probably be better to have a single function that rotates a vector around any axis instead of just the x, y and z axes, but for me these three are enough for this experiment. One thing to keep in mind is that the GLSL mat4 constructor is column-major, so written like this the matrices are actually the transposes of the usual row-major rotation matrices (in other words they rotate by -alpha); for an experiment like this the direction doesn't really matter.
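For reference, if you do need an arbitrary axis, the rotation can be written directly from the axis-angle (Rodrigues) formula without building a matrix at all. This is just a sketch, not something I used in the experiment:

vec3 rotateAxis(vec3 pos, vec3 axis, float alpha) {
    // axis is assumed to be normalized
    float c = cos(alpha);
    float s = sin(alpha);
    return pos * c + cross(axis, pos) * s + axis * dot(axis, pos) * (1.0 - c);
}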

I am too tired to write up the details of how I implemented this; drop me a mail if you want to understand how it works. In fact all of this is preparation for a personal project; once I finish it there will be a proper making-of with all the details.

Bubble Man

Just a little toy: I captured the motion using the Kinect and exported it to JavaScript, then recreated the motion using particles based on the captured data. Rotate it to hear different music.

Another thing I tried to do is the sound: I wanted the sound to change when the user views it from different angles, and the transition had to be smooth. To be honest it just changes the volume of each sound (in this case there are three sounds) based on the position of the camera; the code looks like this:

if(angle > (210 - range) && angle < 210) {
	theta = (angle - (210 - range)) / range * PI2;
	this.gains[0].gain.value = Math.cos(theta);
	this.gains[1].gain.value = Math.cos(PI2 - theta);
	this.gains[2].gain.value = 0;
} else if(angle > (330 - range) && angle < 330) {
	theta = (angle - (330 - range)) / range * PI2;
	this.gains[0].gain.value = 0;
	this.gains[1].gain.value = Math.cos(theta);
	this.gains[2].gain.value = Math.cos(PI2 - theta);
} else if(angle > (90 - range) && angle < 90) {
	theta = (angle - (90 - range)) / range * PI2;
	this.gains[0].gain.value = Math.cos(PI2 - theta);
	this.gains[1].gain.value = 0;
	this.gains[2].gain.value = Math.cos(theta);
}
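The angle here is simply the camera's horizontal angle around the model, in degrees. The snippet above doesn't show how it's computed; one way to derive it (an assumption on my part, not necessarily what the demo does) is from the camera position:

// Camera angle around the Y axis, mapped to 0–360 degrees.
var angle = Math.atan2(camera.position.z, camera.position.x) * 180 / Math.PI;
if(angle < 0) angle += 360;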

Really no big deal, but it's fun to play with.

For using Web Audio, here is a very good article:
http://www.html5rocks.com/en/tutorials/webaudio/games/

Polygon world

http://www.bongiovi.tw/experiments/webgl/polygon/

Click on the map and choose 2 locations to reveal the terrain.

I use the Google Maps elevation service to get the altitude. The amazing thing about this service is that you can even get the altitude below sea level; try it yourself, it's really interesting to see the terrain down there.
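For reference, this is roughly how the elevation service can be queried between the two clicked locations with the Google Maps JavaScript API; buildTerrain is a made-up placeholder for whatever builds the mesh:

// Sample elevations along the path between the two chosen locations.
var elevator = new google.maps.ElevationService();
elevator.getElevationAlongPath({
    path: [locationA, locationB],   // two google.maps.LatLng objects
    samples: 256                    // how many points to sample along the path
}, function(results, status) {
    if(status === google.maps.ElevationStatus.OK) {
        // results[i].elevation is in meters and can be negative below sea level.
        var heights = results.map(function(r) { return r.elevation; });
        buildTerrain(heights);
    }
});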

For the 3D part I didn't use three.js; I tried to start from scratch, meaning setting up the vertices and normals one by one and writing the shader code myself. It definitely takes longer, but it helps me understand how WebGL works. I've built a little 3D framework to help me with this project; compared to three.js it's really nothing, more like a tool that handles the basic tasks for me, and I will share it later on GitHub. And again, I only started JavaScript 6 months ago, so forgive me if the code is a little bit rubbish 😀
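By "from scratch" I mean the usual raw WebGL bookkeeping. A generic sketch of the kind of calls involved (not the actual framework code; aPosition and aNormal stand for the shader attribute locations):

// Upload positions and normals, point the shader attributes at them, then draw.
var positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);
gl.vertexAttribPointer(aPosition, 3, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(aPosition);

var normalBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, normalBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(normals), gl.STATIC_DRAW);
gl.vertexAttribPointer(aNormal, 3, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(aNormal);

gl.drawArrays(gl.TRIANGLES, 0, positions.length / 3);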

I've always been interested in this kind of "low-poly" style; you should check out this guy's work, it's amazing.


I am trying to get as close as I can to this, and there's still a long way to go, but one thing I learned is that you can't rely on only one light to create this kind of feeling. In this experiment I put in three lights: the first one is a white light on top, and then one yellow light and one blue light on the sides. It's a small trick I learned from motion designers. I really enjoy watching them do compositing; they put in tons of layers in order to get the final result. Obviously we can't do that in real time, but the more elements (lights, textures, post-processing, etc.) you put in, the better the result you will get.
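To illustrate the three-light idea, a simple Lambert version in the fragment shader could look something like this (an illustration only, not the shader from the experiment; normal and baseColor are assumed to be provided):

// White key light from above plus warm and cool fills from the sides.
vec3 lightTop  = normalize(vec3(0.0, 1.0, 0.2));
vec3 lightWarm = normalize(vec3(1.0, 0.2, 0.0));
vec3 lightCool = normalize(vec3(-1.0, 0.2, 0.0));

vec3 lighting = vec3(0.0);
lighting += vec3(1.0, 1.0, 1.0)       * max(dot(normal, lightTop),  0.0);
lighting += vec3(1.0, 0.9, 0.5) * 0.4 * max(dot(normal, lightWarm), 0.0);
lighting += vec3(0.5, 0.7, 1.0) * 0.4 * max(dot(normal, lightCool), 0.0);

gl_FragColor = vec4(baseColor * lighting, 1.0);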

Light smoke

UPDATE:

http://www.bongiovi.tw/experiments/webgl/smokeVideo/

I tried to put this effect on a video: first I removed the darker parts of the video to get the "light map", then did exactly the same thing. The result is quite interesting; move left and right to play. Kind of a psychedelic feel 😀

http://www.bongiovi.tw/experiments/webgl/lightsmoke/

Just trying to redo the smoke effect with JS/WebGL. I tried once before but the result wasn't that good; then I found out that I have to set the blend function to

this.gl.blendFunc(this.gl.ONE, this.gl.ONE);

to get the best result.

 

Actually it's quite easy to create this effect; all you need is a background, a light map and a noise texture. On each frame I just keep applying a displacement effect to the light map, so it gets more and more distorted, and at the same time we dim the alpha a little bit each frame, which gives the fade-out effect. That's it, really simple; just don't forget to add the light map back in every frame, or the light will fade out very quickly.
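Written out as pseudo-JavaScript, the frame loop is roughly this; applyDisplacement, dimAlpha, drawLightMap and composite are just names for the corresponding passes, not real functions from the demo:

// Each frame: distort the accumulated light map a bit more, fade it slightly,
// stamp the fresh light map back on, then draw it over the background.
function renderFrame() {
    applyDisplacement(lightBuffer, noiseTexture);
    dimAlpha(lightBuffer, 0.98);
    drawLightMap(lightBuffer);
    composite(background, lightBuffer);
    requestAnimationFrame(renderFrame);
}
renderFrame();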

Video Texture and cross hatching in WebGL

I was watching "Castle in the Sky" with my little daughter the other day, and I was fascinated by its opening title. I strongly recommend you watch it; it's amazingly beautiful.

[screenshot from the "Castle in the Sky" opening title]

 

So I was wondering if it is possible to create this kind of pencil-sketch style with a shader. It turns out to be easier than I thought; you can see an excellent post about it here.

I tried to apply this shader to a video, since I think that's always more interesting than a static image, and it worked. In fact it's really easy to use a video as a texture; just make sure both the width and the height of your video are powers of 2 (256, 512, 1024 ...).
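Uploading a video frame is the same texImage2D call as for an image, just done every frame with the video element as the source; a minimal sketch:

// Upload the current frame of the <video> element into the texture.
gl.bindTexture(gl.TEXTURE_2D, videoTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, videoElement);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);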

Also, I'm using the little classes I wrote, GLTexture and GLTextureFilter, to apply this effect. I am a huge fan of Pixel Bender, which is basically fragment shaders, and I always enjoy playing with pixels, so I decided to create this little tool to help me build effects faster; you can find it here on my GitHub. I'm just starting out with JavaScript, so please let me know if I made some stupid mistakes 😀 For the usage there's a simple example on GitHub. Hope you enjoy it 😀

Flaming Flowers Preview with Javascript / WebGL

UPDATE 01/07 :

I finally found the cause of the problem of the same shader giving different results in Processing and WebGL: I hadn't set the blend function right. Once I got it right the results are identical; you can check the link above. Besides that, I now use a video as a texture instead of an image sequence, and the mask which was done with globalCompositeOperation is now replaced by another fragment shader, which reduces the loading a lot.


Just playing with a mask on top of a video; surprisingly the effect is really beautiful, so I went back to add more details to it. I started in JavaScript with a simple mask using the globalCompositeOperation trick, then I switched to Processing to add more effects using GLSL shaders, and then I went back to JavaScript trying to add the same effects in WebGL. It was an interesting experiment and I learned a lot from it. I'm still working on the Processing version to make it more complete, and will have another post coming soon once I finish it. However, I want to blog about the JavaScript version first and give a little preview 😛

 

Global Composite Operation

The first problem I had was how to mask a video. I found some articles but they weren't working for me; the main reason is that the flower graphics come from images, and they have alpha gradients inside. It is very important for me to keep these alpha gradients, as it looks much better visually. So in the end I used an image sequence instead of a video, and used context2d.globalCompositeOperation to mask it.

this.ctx.clearRect(0, 0, W, H);
this.ctx.globalCompositeOperation = "source-over";

//  Draw Flowers and Particles
this.ctx.globalCompositeOperation = "source-in";
this.ctx.drawImage(this.bgs[this.currentFrame], 0, 0, W, H);  // Draw the image sequence of the background video

In fact it's very easy to create this effect; the only thing to be careful about is not to forget to reset the globalCompositeOperation every time you render:

this.ctx.globalCompositeOperation = "source-over";

or it will keep rendering in "source-in" and you won't be able to see anything.

 

Drawing Images with different transparency on the same canvas

When trying to create the particle effect I ran into this problem: each of my particles has its own transparency, and I'm using an image for my particles. The way to draw an image with transparency on a canvas is to use globalAlpha, but globalAlpha is a property of the whole context, so it applies to everything drawn after it, not just one single particle. Luckily I found the solution: context2d.save() / context2d.restore(). I was very surprised when I realised this; I thought these two methods were only for transformations, but apparently they do more than that:

var pp = this.particles[i];
this.ctx.save();
this.ctx.globalAlpha = pp.alpha;
this.ctx.drawImage(flamming.imgParticle, pp.position.x, pp.position.y, pp.scale, pp.scale);
this.ctx.restore();

So before setting the globalAlpha, call context.save(), and call context.restore() after you're done drawing; simple as that!

 

GLTexture / GLTextureFilter

When I was building this in Processing, I was using the GLGraphics library, which is very easy to use. GLTexture and GLTextureFilter are two of the classes inside it; the GLTextureFilter takes its source from a GLSL fragment shader, so all you need to do is write your fragment shader and then call:

filter.apply(inputTextures, outputTexture);

and you have the result in the outputTexture. It's very easy to use, especially for compositing: you won't lose track when swapping textures and shader programs, and both the textures and the programs are reusable. So when I went back to JavaScript, the first thing I did was try to build something similar, which saves me a lot of time when trying out fragment shaders and different compositing. I wanted to create an effect that is only sharp in the center and gets blurrier away from the center; here is the workflow:

1. Render the flowers (the original render).

2. Take the original render and apply a horizontal blur to get the first blur version.

3. Take this first blur version and apply a vertical blur to get the full blur version.

4. Combine the original render and the full blur version, mixing them based on each pixel's distance to the center.

So I need to create three different filters: a horizontal blur filter, a vertical blur filter and finally a mix filter. In the code it looks like this:

//	Init
this.glTexture = new GLTexture(this.gl, this.canvas);
this.outputHBlur = new GLTexture(this.gl, null, 1024, 1024);
this.outputBlur = new GLTexture(this.gl, null, 1024, 1024);
this.outputEdgeBlur = new GLTexture(this.gl, null, 1024, 1024);

this.glFilterHBlur = new GLTextureFilter(this.gl, "shader-vs", "shader-fs-hblur");
this.glFilterHBlur.setParameter("h", "float", 1/1024);
this.glFilterVBlur = new GLTextureFilter(this.gl, "shader-vs", "shader-fs-vblur");
this.glFilterVBlur.setParameter("v", "float", 1/1024);
this.glFilterEdgeBlur = new GLTextureFilter(this.gl, "shader-vs", "shader-fs-edgeblur");

//	Render
this.glTexture.updateTexture(this.canvas);
this.glFilterHBlur.apply([this.glTexture], this.outputHBlur);
this.glFilterVBlur.apply([this.outputHBlur], this.outputBlur);
this.glFilterEdgeBlur.apply([this.outputBlur, this.glTexture], this.outputEdgeBlur);
renderImage(this.gl, this.outputEdgeBlur);

It's really simple and easy. However, these two classes are not done yet; there is still some work to do. I'll put them on my GitHub when I finish these tasks, and will keep working on more features.

Another thing I love about WebGL is that it uses GLSL for shaders, the same as GLGraphics; this means I can basically use the same shader program for both, which is amazing! Compare that to Flash's AGAL, which is more difficult to program and can only be used in Flash; that's a pity.

 

Same Shader, Different result

Although I am using the same fragment shader for both the Processing and the JavaScript version, the results are quite different:

[comparison screenshot: the Processing/GLGraphics blur next to the JavaScript/WebGL blur]

As you can see, on the left is the result in Processing/GLGraphics, which is much closer to the idea of a blur; on the right is the result in JavaScript. It is blurred, yes, but I lose a lot of the alpha as well. I'm still wondering why and trying to find the answer.

 

Anyway, that's about it for right now. Hope you enjoy it, and stay tuned for the bigger version in Processing!