
Adidas “All for this” case study – part 2

DEMO

Continuing from part 1: as I mentioned, we couldn’t use WebGL in this project, but we still wanted to create a particle visualisation with a 3D feel, so I started running some tests on canvas, which was a whole new world to me. Surprisingly, the performance is really good, even on iOS devices. The basic idea is to create one huge canvas containing everything, then clear it and redraw all the particles on every frame. “Clear everything” and “redraw everything” sounds quite scary, but it actually works quite well. That settled the technology we were going to use; the next question was how to build a fake 3D illusion.
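As a rough illustration, here is a minimal sketch of that clear-and-redraw loop; the canvas id and the particle fields (x, y, size, alpha) are my own assumptions, not the production code:

```javascript
var canvas = document.getElementById('stage'); // assumed canvas element
var ctx = canvas.getContext('2d');
var particles = []; // filled elsewhere with {x, y, size, alpha}

function render() {
  // wipe the whole canvas, then draw every particle back
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  for (var i = 0; i < particles.length; i++) {
    var p = particles[i];
    ctx.globalAlpha = p.alpha;
    ctx.fillRect(p.x - p.size / 2, p.y - p.size / 2, p.size, p.size);
  }
  requestAnimationFrame(render);
}
requestAnimationFrame(render);
```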

The power of matrices

At the beginning I was just trying to create an imaginary camera, so all I need to move is the camera instead of all the particle positions, which saves a lot of work, so I created a camera matrix for that. Then I needed to create the perspective projection so it would feel really close to what you get in real 3D. At this point I realised that what I was trying to do is really similar to what I did with WebGL: prepare the model, view and perspective matrices, then pass them to the shader. So I took the matrices I used in my WebGL projects, and it works! OK, not 100%: the camera matrix works perfectly, but I still didn’t get the perspective. Then I discovered this is the only extra work you need to do yourself: the scaling. It’s actually not that complicated, because you already have the correctly calculated position from those two matrices; all you need to do is decide how much to scale by the depth, then call context.scale() and voilà!
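A minimal sketch of the idea; the applyMatrix helper and the focalLength constant are assumptions for illustration, not the actual project code:

```javascript
// viewMatrix / applyMatrix stand in for the standard 4x4 camera maths
// described above; they are placeholders, not the project's real code
var p = applyMatrix(viewMatrix, particle);   // -> {x, y, z} in camera space

// fake the perspective: the deeper the point, the smaller it gets
var focalLength = 300;                       // assumed constant
var scale = focalLength / (focalLength + p.z);
var cx = canvas.width / 2, cy = canvas.height / 2;

ctx.save();
ctx.translate(cx + p.x * scale, cy + p.y * scale);
ctx.scale(scale, scale);                     // the context.scale() call from the text
ctx.drawImage(particleImage, -particleImage.width / 2, -particleImage.height / 2);
ctx.restore();
```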

There is still a lot of work to do to make this a complete 3D-like framework, but at that point it’s probably easier to just use Three.js. For our case this was already enough, so we stopped here.

Tips for optimising performance

The first one really depends on the project: in our case we are using additive blending, so we don’t need depth sorting, which saves us a lot of work. The second one is: don’t render the things you can’t see. Skip the particles with 0 opacity or a too-small size (< 2px) and you will get a huge performance boost. And the last one is a tricky one: be careful with the size of your particle texture. During development there was once a huge drop in performance and I didn’t know why; after spending hours trying to find the problem, it turned out to be because we shrank the particle image from 96 to 64. When I scaled it back to 96 the fps went right back up to where it was before. It might be that I was scaling it up too much in code, so when I put the image size back it worked.
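The first two tips look roughly like this in canvas code; this is a sketch, and the particle fields and drawParticle helper are assumed names:

```javascript
ctx.globalCompositeOperation = 'lighter'; // additive blending: no depth sorting needed

for (var i = 0; i < particles.length; i++) {
  var p = particles[i];
  // skip what you can't see anyway: invisible or tiny particles
  if (p.alpha <= 0 || p.size < 2) continue;
  drawParticle(ctx, p); // hypothetical draw helper
}
```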

The performance of canvas is really surprising, especially when you see this fake 3D stuff working on the iPad and you can interact with your finger. I’m not a big fan of the iPad because you can’t run WebGL on it, but I have to say it feels quite nice to interact with the 3D through touch; it feels more intuitive. I’m really hoping for the day we can run WebGL on mobile / tablet devices.

Leap Motion + Constellations

Another test, this time with the Leap Motion. I find the grab gesture quite intuitive to use, and it fits navigating the constellations especially well. I would like to experiment more with this prototype, to let the user select the constellation they want, but I think it will be difficult to find a good and precise gesture for that.

And the prototype is here (without the Leap Motion library, you can play with your mouse).
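The prototype’s own code isn’t shown here, but as a rough sketch with the leapjs library, a closed fist (no tracked fingers) can stand in for the grab gesture; rotateView is a hypothetical camera callback:

```javascript
var controller = Leap.loop(function (frame) {
  var previous = controller.frame(1); // the frame before this one
  if (frame.hands.length === 0 || !previous.valid) return;

  var hand = frame.hands[0];
  // treat a closed fist (no visible fingers) as "grabbing"
  if (hand.fingers.length === 0) {
    var t = frame.translation(previous); // hand movement since last frame
    rotateView(t[0], t[1]);              // hypothetical camera callback
  }
});
```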

Adidas “All for this” case study – part 1

Recently I have been working on a project for Adidas called “All for this”; for more details of the project you can check out our case study here.

But now I’m going to share some things that didn’t make it into the final project. At the beginning of the project we were really ambitious: we were thinking of doing the motion capture ourselves using a Microsoft Kinect. It’s such an interesting idea and a good challenge that I spent 2 days building this simple prototype.

The first step is to get the data from the Kinect; I was using Processing to do this. The data consists of several frames, and each frame holds the positions of all the points in 3D. After getting all the information I put it in a big JSON string, then pass it to JavaScript. At this point the point cloud looks like this:

[Image: pointCloud1]

And you can check out the actual capture in action here.
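To make the hand-off concrete, the exported data might look something like this on the JavaScript side; the exact field names are my assumption, since the post only says each frame holds the 3D positions of all the points:

```javascript
// one frame = an array of [x, y, z] points from the Kinect,
// exported from Processing as one big JSON string
var capture = JSON.parse(jsonString);
// e.g. capture.frames[0] -> [[x, y, z], [x, y, z], ...]

var frameIndex = 0;
function nextFrame() {
  var points = capture.frames[frameIndex];
  frameIndex = (frameIndex + 1) % capture.frames.length;
  return points;
}
```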

And once you have this information, it’s all up to you how to use it. In this case I use the points as emitters for the particles, just randomly selecting some points to emit a particle from on every frame. You can check the result here.
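A minimal sketch of that emit step, assuming the frame format above and a hypothetical spawnParticle helper:

```javascript
function emitFromFrame(points, count) {
  for (var i = 0; i < count; i++) {
    // pick a random captured point and spawn a particle there
    var p = points[Math.floor(Math.random() * points.length)];
    spawnParticle(p[0], p[1], p[2]);
  }
}

// every animation frame: advance the capture and emit a few particles
emitFromFrame(nextFrame(), 20);
```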

Unfortunately, due to the limited budget and time, in the end we went with a motion capture company instead of doing it on our own, but it was still an interesting experience for me. Doing the capture feels like a mini installation: you need to set up the environment and adjust your code, such as reducing the noise, etc., and these lessons are very precious to me. So this is part one of the case study; the second part will be about canvas. As you see, these prototypes are built using WebGL, but for the project we were not allowed to use it, so we came up with another idea. Stay tuned for part 2!