Photoshop curve and gradient map with shaders and textures

Recently I’ve been working on a project in which I need to recreate some Photoshop effects in code. Most of them are quite easy to do, or it’s easy to find sample shader code for them, such as hue/saturation/brightness and contrast. However, two effects still need a bit more work: Curve and Gradient Map.


The way curves and gradient maps work is to map your current color value to another one, so the question is how to save this mapping information and then pass it into our code. It seems quite complicated at first glance: we would need to save a lot of data, such as all three RGB channels plus an overall one for the curve. However, there is an easy way to do it: use a texture. All we need is a base gradient texture from black to white like this:

baseTexture

and apply your curve or gradient map to this texture to get the mapping texture you need.

export02

The next step is to do the mapping in the fragment shader. For the curve:

uniform sampler2D   texture;
uniform sampler2D   textureCurve;

void main(void) {
    
    vec2 texCoord   = gl_TexCoord[0].st;
    vec4 color      = texture2D(texture, texCoord);
    
    // use each channel of the original color as the lookup coordinate into the curve texture
    color.r         = texture2D(textureCurve, vec2(color.r, 0.0)).r;
    color.g         = texture2D(textureCurve, vec2(color.g, 0.0)).g;
    color.b         = texture2D(textureCurve, vec2(color.b, 0.0)).b;
    
    gl_FragColor    = color;
}

and for the gradient map :

uniform sampler2D   texture;
uniform sampler2D   textureMap;

#define sq3         1.73205080757

void main(void) {
    
    vec2 texCoord   = gl_TexCoord[0].st;
    vec4 color      = texture2D(texture, texCoord);
    // overall brightness: length of the RGB vector, divided by sqrt(3) to keep it in [0, 1]
    float lenColor  = length(color.rgb) / sq3;
    color.rgb       = texture2D(textureMap, vec2(lenColor, 0.0)).rgb;
    
    gl_FragColor    = color;
}

This is really nice and easy: we don’t need to find a way to save the mapping data as a long list of numbers, a simple texture does the job, and it’s much easier to export too.
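If you want to generate the base gradient at runtime (or just see how the lookup texture gets uploaded), a tiny canvas does the trick. This is only a sketch, assuming an existing WebGL context gl; the mapping image you export from Photoshop can be uploaded the same way:

function createBaseGradientTexture(gl) {
    // draw a 256x1 black-to-white gradient on a canvas
    var canvas = document.createElement('canvas');
    canvas.width = 256;
    canvas.height = 1;
    var ctx = canvas.getContext('2d');
    var gradient = ctx.createLinearGradient(0, 0, 256, 0);
    gradient.addColorStop(0, '#000000');
    gradient.addColorStop(1, '#ffffff');
    ctx.fillStyle = gradient;
    ctx.fillRect(0, 0, 256, 1);

    // upload it as a WebGL texture (same code works for the exported mapping texture)
    var texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, canvas);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    return texture;
}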

Chalkboard wall

About a month ago, my wife had this idea of turning one of our walls into a chalkboard wall, so the kids can draw on it (instead of on our furniture....). We did it and it looks amazing; now we even spend more time drawing on it than the kids do. We use a projector to help us draw, as we are not good enough yet to draw without any help. It’s cheating, I know, but it looks way better :D More and more ideas have come to us since we built this chalkboard wall, all kinds of different styles we want to try; we also want to try projection mapping with our drawings. Lots of fun stuff to do.

I really recommend this to anyone who wants to decorate their house: it’s really easy to make a chalkboard wall. You just need to clean the wall and put the paint on, that’s it, and you have your own creative corner. And because it’s black (or dark), it actually makes your space feel larger. Search for chalkboard walls on the internet and you will find a lot of interesting stuff; it’s an easy way to make your space look different and fun!

Lego NXT / Processing

When I left my last job, my friends gave me a big farewell gift: a Lego NXT. It’s an amazing gift; I’ve always loved Lego, and when I was a boy it was the toy I loved the most. The NXT is even more awesome for me now that I’ve become a programmer: combining programming and Lego is the sweetest dream for me.

So, to return my thanks to my friends, I tried to come up with a project using the Lego NXT. I did have an idea a year ago and built a working prototype, but the result wasn’t satisfying enough. However, I really enjoyed the process of building robots and controlling them through code.

This time I had a simple idea: I wanted to record myself building another Lego set, but I didn’t want the camera to stay still. I wanted it to move slowly from one side to the other. So I began to build this little Lego car and control it with Processing.

 

NXTComm

I found the NXTComm library and it’s very useful. It allows you to control your Lego NXT via Bluetooth, and it’s very easy to use. To set it up you just need to do this:

_nxt = new LegoNXT(this, Serial.list()[4]);

As I recall, the only tricky thing is finding the right Bluetooth port, which is what the [4] means; you might need to test a little to find out which one is the one you need.

After this you are ready to go. The library is very compact and has all the API you need, covering both the sensors and the motors. I didn’t need any sensors for this project, but I did a quick test with the ultrasonic sensor to get the distance from the sensor to an object; the result was very responsive, which makes me want to do more with it. The motors are really simple to control as well:

_nxt.motorForwardLimit(LegoNXT.MOTOR_A, maxForce, 150); // moving with limit
_nxt.motorForward(LegoNXT.MOTOR_A, maxForce); // just keep moving, need to call _nxt.motorStop manually.

So this library covers pretty much all the controls you need; the only thing left is to build the actual robot. It’s so much fun to build, e.g. figuring out how to place the gears to control the speed so it won’t go too fast. Some old knowledge from high school/university that I thought I’d never need becomes very handy now :)

I’ve always worked on the coding side and never got too involved with the hardware. But now I’m really interested in building robots; I hope I’ll have more time to dig into this field, and I’d like to learn Arduino as well!

 

And last, a short video of the recording result using this little robot:

夸父 Kuafu

It began sometime about half a year ago, when I was trying to recreate a Chinese-style 3D world. At the time the idea was really simple: just grab some Chinese ink-style mountain textures and try to put them together. Then a good friend of mine saw it and started chatting with me about it. He said it would be interesting to use real map data to construct the world, along with a good story. At that moment the ancient Chinese myth of Kuafu (夸父) came to my mind; it suits this idea perfectly, a giant chasing the sun across the land. We both liked the story and got really excited about the project, so we started playing with some experiments right away, and not long after my friend came up with this amazing design:

Styleframe_2_50

It was so beautiful and went beyond what I had imagined for this story. Later on I started to work on the first working prototype: LINK


We really liked this prototype and got more ideas while playing with it, so we decided to turn it into a project. The first step was to create the storyboard:

After this prototype and storyboard, we both got caught up with work that kept us busy for a while, until one weekend. It was a Friday evening and I was on my way home. Suddenly I had this idea: record an ink drop and use it as a texture for a mountain. Our first prototype looked good but the mountains were flat; I wanted to make real 3D mountains but had no idea how to create the textures for them. Then this idea struck me, so I did a quick test that night.

The result was better than I thought it would be; they make really good-looking Chinese-style mountains.

mountainsWithTextures

So I went back and created more textures, trying and playing with different colors and also testing different papers. I really enjoy this process of creating textures. It takes time, but sitting there watching the ink flow and create all kinds of interesting and beautiful shapes is really exciting. As devs we all know that the more randomness we throw into the code, the more alive it becomes and the more variety it has. But nothing compares with the real thing. Every ink drop creates a different shape based on how thick the ink is, how high you drop it from, how much water is on the paper, the flow of the water, the tiny differences in the paper itself; it’s all these things you cannot control that make it more beautiful. It also makes it feel really different when you put it in a 3D render.

After these tests, the next step was to get real elevation data and try to recreate the terrain. In the beginning I was using the Google Elevation Service to get the data, which works perfectly, and the most amazing thing is that it even returns the elevation underneath water. However, I was afraid we would hit the API call limit quite quickly, as we are going to generate a good number of mountains, so I switched to using an elevation map:

earth_height

The idea is really simple: translate the latitude and longitude to x and y on this map and read the pixel value at that coordinate; the brighter the pixel is, the higher the mountain will be. Each time I create a mountain I set a minimum height and ignore anything lower than that, and I also check whether there is already a mountain nearby; if not, I create a new one.
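Here is a rough sketch of that lookup, assuming an equirectangular map and that heightMapData is the ImageData of the elevation image (the names and exact mapping are my assumptions, not the project’s code):

// read a height value from an equirectangular elevation map
function getElevation(heightMapData, latitude, longitude, maxHeight) {
    // longitude -> x, latitude -> y
    var x = Math.min(heightMapData.width - 1, Math.floor((longitude + 180) / 360 * heightMapData.width));
    var y = Math.min(heightMapData.height - 1, Math.floor((90 - latitude) / 180 * heightMapData.height));
    var index = (y * heightMapData.width + x) * 4;
    var brightness = heightMapData.data[index] / 255;   // red channel, 0..1; the map is greyscale
    return brightness * maxHeight;                      // brighter pixel = higher mountain
}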

heightCombine

And at the same time my friend created these amazing style frames:
Styleframe_4

Styleframe_3

I was really excited by these stunning images he created, and we really liked this landscape layout, so we started thinking: why don’t we make it an installation? We think it will look better in a long panoramic format, and that will also make it feel more immersive. So we tweaked our direction from an online experience to an installation.

Just at this moment, Google launched a new project called DevArt; we think it’s a really good chance for us to show our project to the world, so we’ve put it up there. Also, since we are now making it an installation, we think it will be great to have some sound in it, and even better if it’s interactive. So we invited another friend on board to work on the sound design. He joined with lots of amazing ideas for the sound and right away made sound design a big and interesting part of the project.

To make the installation, I have now switched to Cinder, combined with Node.js as our server and an HTML5/JS page as the controller. This week I started to work on the communication part: sending data from the controller to the Node.js server and then on to the front end, across the different technologies.

So now we are working towards finishing the project, and it’s really interesting to look back at all these tests and prototypes. It seems we have already done a lot, but the truth is there is still more ahead. I want to thank my friends who gave us so much positive feedback after we announced this project; it’s definitely good motivation to keep working on it. We will keep updating our DevArt project page, and I will keep updating my blog on this project too, even after DevArt is over.

WebGL – Depth texture

It took me a while, but I finally got the depth texture working. Basically I just followed the instructions from this article and it worked. However, I encountered some problems when trying to make this work:

After trying the technique from the article, the first result I got was a totally white screen. I thought I had made an error and nothing was being rendered, but after further research I found someone describing the same situation: the depth texture was actually working, it’s just that the values were very close to the maximum, so everything looked white. So I made a simple test, changing the actual white (1.0, 1.0, 1.0 in the shader) to red, and this is the result I got (on the left is the all-white screen, on the right the one where I dyed the actual white red; you can see the shape of the mountains). So the code was actually working; I just needed to find a way to display it correctly. I went back to do more searching and found this way to show the depth texture:

depthTexutre0

uniform sampler2D uSampler0;   // the depth texture
varying vec2 vTextureCoord;

void main(void) {
	float n = 1.0;      // near plane
	float f = 2000.0;   // far plane
	float z = texture2D(uSampler0, vTextureCoord.st).x;
	// linearize the non-linear depth value so it can be displayed as grey
	float grey = (2.0 * n) / (f + n - z * (f - n));
	vec4 color = vec4(grey, grey, grey, 1.0);
	gl_FragColor = color;
}

n is the near plane distance and f is the far plane distance; after applying this method the depth texture looks much more normal:

depthtexure0

But when I switched back to my render something was wrong:
depthtexure1
You can see the depth is rendered weirdly; it looks more like layers. Another thing I discovered is that the depth values I got were extremely close to 1 (> .995), and the depth texture I got looked layered too, not smooth like the depth texture in the article. So I had to go back and search again, and I found out it was because I didn’t have the right setup for zNear and zFar. A zNear that is too small or a zFar that is too big results in a very wide depth range, and this limits the precision of the depth buffer. At that point the zNear and zFar I was using were .1/10000. So I scaled everything down and also brought the camera closer so it looks the same size, but zNear and zFar became 1/2000. And this time everything worked: the depth texture looks perfect, and the layer effect above is gone as well.

depthtexure2

So I finally got it working. I put up a little depth-of-field effect using the depth texture here; there is a checkbox so you can see what the depth texture looks like. It was really a long road to finally get it, mainly because there isn’t much material on this topic; you will find more articles if you search for OpenGL depth texture instead of WebGL depth texture. Here is a very good link that tells you what you want to know about depth textures. Again, all the code you need is in this article; enjoy the depth texture, and now you can create some more advanced effects.
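For reference, the setup relies on the WEBGL_depth_texture extension. The sketch below is the standard WebGL 1 pattern rather than the exact code from the article, with placeholder names:

// create a framebuffer whose depth attachment is a readable texture
function createDepthTarget(gl, width, height) {
    var ext = gl.getExtension('WEBGL_depth_texture');
    if (!ext) throw new Error('WEBGL_depth_texture not supported');

    // the texture that will receive the depth values
    var depthTexture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, depthTexture);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT, width, height, 0,
                  gl.DEPTH_COMPONENT, gl.UNSIGNED_SHORT, null);

    // attach it as the depth attachment (a color attachment is still needed for the scene),
    // render the scene into this framebuffer, then sample depthTexture in a later pass
    var framebuffer = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, depthTexture, 0);
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);

    return { framebuffer: framebuffer, texture: depthTexture };
}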

Chinese style mountains

A little experiment done last weekend: I got the idea on my way back home Friday night and couldn’t wait to build it.

The idea is simple: record an ink drop on video and use it as a texture to create a mountain. In the end I just ended up using static images instead of video because I haven’t had time to edit it. But these images already work really well, better than I originally imagined. And I enjoy creating things from the real world; the beauty of it is that it’s different every time and you can’t predict what’s going to happen. All you can do is put some water on the paper, let the ink drop, and wait and watch the interesting shapes form themselves. You can definitely create these textures in Photoshop, but recording the real thing makes it feel more “alive” to me, and it has more surprises as well.

For the code part it’s simple: the shape of the mountain is created using a sine curve plus some Perlin noise. Here is the demo link:

http://www.bongiovi.tw/experiments/webgl/mountains/
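The height function behind it is roughly something like this (a sketch only; noise() stands in for whatever 1D Perlin noise helper you have, and the constants are made up):

// mountain silhouette: a smooth sine base plus rougher Perlin detail
function mountainHeight(x, baseHeight, amplitude, noiseScale) {
    var s = Math.sin(x) * 0.5 + 0.5;      // sine base, remapped to 0..1
    var n = noise(x * noiseScale);        // assumed 1D Perlin noise helper, returns 0..1
    return baseHeight + amplitude * (s * 0.7 + n * 0.3);
}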

Star Canvas – Case study

Last Thursday was the 5th anniversary of B-Reel London, and we had a party that night along with a couple of our R&D projects. I worked on this one, “Five”, which is based on one of my old prototypes:

http://blog.bongiovi.tw/leap-motion-constellations/

The idea is to show all the constellations in the sky, and we also created a drawing tool so people can create their own constellation and send it to the sky. We found a way to build a dome and project inside it, and we did it. It’s my first installation, really fun and a lot of learning, and I need to say thanks to my colleagues who made this possible. I want to write a little bit more about this project; I think B-Reel will put together a beautifully edited making-of video, so in this article I will focus on the tech/dev part.

The installation

Here is a diagram of the basic structure:

su_structure

 

Our developer Liam built the communication layer using Node.js as the backend server for this project. Apart from the projection, we have two drawing tools running on iPads that allow users to create their own constellations, plus a virtual keyboard on an iPad for people to search for the constellation they created. We didn’t want to use a real keyboard because people could mis-trigger other applications, so we created this keyboard and limited its usage to this project.

The projection is built with WebGL. At the beginning I was considering using Cinder for this project; however, due to the limited time, and because I had already built a working prototype in WebGL, I chose WebGL in the end. I wasn’t too sure about it when I chose it: I didn’t know if the performance would be good enough, or whether it would be stable enough to run all night with a lot of communication going on, but it turned out to work quite well. I thought I might need to restart the whole thing a couple of times during the party, but in the end I didn’t have to. I’m really happy with it and feel more confident now about using WebGL for installations. Also, I’m not using any WebGL libraries, just some tool classes I created, which is another big achievement for me.

The sound is another big part, done by our sound master Owen. All I do is send out events, based on the hand gesture and the constellation selected, to the Node server, where they get re-sent to Pure Data to generate the sound. When a constellation is selected, I calculate the number of lines in the constellation and the max/min/average length of those lines and send them out, and Owen creates a different melody based on this information, so each constellation has its own unique melody. I would really like to bring this feature to the web version, but I need to learn from master Owen first :D
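Just to illustrate the kind of data that gets sent (hypothetical names; the actual project code differs), the stats are nothing more than a count plus min/max/average of the line lengths:

// send simple stats about the selected constellation to the Node server
// `lines` is assumed to be an array of line lengths, `socket` a Socket.IO connection
function sendConstellationStats(socket, lines) {
    if (lines.length === 0) return;
    var sum = lines.reduce(function (a, b) { return a + b; }, 0);
    socket.emit('constellationSelected', {
        count: lines.length,
        max: Math.max.apply(null, lines),
        min: Math.min.apply(null, lines),
        avg: sum / lines.length
    });
}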

For the drawing tool we capture the user’s strokes and simplify them down to fewer dots to create the “constellation-like” effect, then we insert the generated constellation into the database and at the same time send a signal via the Node server to tell the projection to update the display.
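One simple way to do that kind of simplification (a sketch, not necessarily what our drawing tool does) is to keep a point only when it is far enough from the last point kept:

// reduce a stroke (array of {x, y} points) to fewer, well spaced dots
function simplifyStroke(points, minDistance) {
    if (points.length === 0) return [];
    var result = [points[0]];
    for (var i = 1; i < points.length; i++) {
        var last = result[result.length - 1];
        var dx = points[i].x - last.x;
        var dy = points[i].y - last.y;
        if (Math.sqrt(dx * dx + dy * dy) >= minDistance) {
            result.push(points[i]);
        }
    }
    return result;
}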

And the keyboard simply transfers the event of which key was pressed to the projection, very straightforward.

 

The dome

We discovered this article teaching people how to build a dome using just cardboard, and the best part is that it already has all the details of the triangles you need, including the plan and sizes. Once I saw it I couldn’t help but want to build one myself. My wife and I started right away on a small-scale one to test; here are some pictures:

It’s really easy to build, and even the actual dome we built is not that complicated; the problem is how to support it. The geometry itself is flexible, so in order to keep it in a perfect shape we used a lot of fishing line attached to almost every corner to hold the shape. This was actually the most difficult part; building the dome itself was much easier.

IMG_5262

 

The other challenge is how to project onto this dome. In the link they use a hemispherical mirror, which we did try; however, the result was really not acceptable. The first problem is that when reflected onto the dome, the image loses a lot of detail and becomes very pixelated. The second is that the distortion is really hard to correct. For these two reasons we gave up on the hemispherical mirror. Then I tried projecting directly onto the dome and correcting the distortion in code, but we found it actually looks better left as it is, without any correction. Maybe this is because of the nature of this project: everything is already on a sphere, so there is no need to correct it further. All I needed to do was create a gradient circle mask to mask out the part outside the dome.

IMG_3505

 

The constellations

This project is not actually very 3D-heavy; the only 3D part is that everything sits on a sphere, and that’s all. The stars are flat, the constellations are flat, almost everything is flat. That is the nature of this project; we don’t need super complicated 3D models for it. On the other hand, we wanted to push the visuals a bit more and add as many layers as possible: as we all know, the more layers you have, the more detailed and beautiful it gets. So here is a little demo of the project where you can see all the layers being rendered:

http://www.bongiovi.tw/experiments/webgl/su/

We started from a basic skybox; however, soon after we built it we thought it would feel much better with an animated background, so our motion designer created one for me. At the beginning we just put it as a plane texture on top of the skybox, but then we discovered it looks better mapped onto a sphere that moves with the skybox; this gives the animation the feeling of a real sky instead of just an overlay.

I’ve already blogged about how I make the stars and lines face the center/screen in a previous post; it was in Stage3D but it works the same way.

su_screenshot

I’ve put a fake depth-of-field effect on the constellation names and the stars, so they get a little bit blurry/transparent as they get closer to the edge. It’s a fake depth of field because I didn’t use the depth buffer; it’s just a simple calculation based on their x and y, but it’s very suitable for this project.
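The calculation is nothing fancy, roughly a falloff based on how far the point is from the screen centre; something like this simplified sketch (the names and exact formula are made up, not the project’s code):

// fake depth of field: the closer to the screen edge, the stronger the blur / fade
function edgeFadeAmount(x, y, screenWidth, screenHeight) {
    var nx = (x / screenWidth) * 2 - 1;    // normalize to -1..1
    var ny = (y / screenHeight) * 2 - 1;
    return Math.min(1, Math.sqrt(nx * nx + ny * ny));   // 0 at the centre, 1 at the edges
}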

constellationParticle

For the appearance of the constellations, I had a little fun with the fragment shader. I wanted a simple particle-effect transition to reveal the constellation drawings. I found this useful GLSL random function on Shadertoy:

float rand(vec2 co){
    return fract(sin(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);
}

Creating this effect is actually quite simple: when getting the texture coordinate, add this random number to the desired coordinate; this gives you a random position, so the result looks like particles. To create the animation, you just tweak the amount of this offset:

targetPos = orgPos + offset * randomPos;

So you can see here: the bigger the offset, the more random (more particle-like) it gets, and if the offset is 0 you get the original position, which gives a clean drawing. So the animation is just tweening the offset from some big value back to 0. Voilà, that’s how simple it is. You can add more to this random position, such as scale or rotation, for a more dramatic effect.

We also used a lot of video textures in this project, and some of them needed transparency. Here is an easy way to do it with the blend mode: before you render the texture, set the blend mode to this:

this.gl.blendFunc(this.gl.SRC_ALPHA, this.gl.ONE);

and make sure the transparent part is pure black; it will then be removed when rendered. This trick is not only for video but for all textures, so you can save some file size if you use it wisely.
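In practice it’s just a matter of enabling blending around the draw call; a minimal usage sketch (drawVideoQuad is a placeholder for whatever renders the quad):

gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE);   // additive: pure black source pixels add nothing, so they disappear
drawVideoQuad();                       // placeholder for the actual draw call
gl.disable(gl.BLEND);                  // or restore whatever blend state the next pass needs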

Video textures are quite heavy to load. Some of them could be done in code, but it would be very difficult to get the same detail as a rendered video. I think it’s a choice to make per project: in our case we are building an installation, so we don’t need to care about loading, and we use a super powerful machine to run it, so the file size is not a problem. In this case I choose videos to get the best detail, and they’re also easier to modify. However, if we were building an online experience, we would need to do more tests on performance and loading. Anyway, my point is: choose the best solution for the project. I know I’d have a lot of fun playing with shaders if I were building it in code, but it would be very time-consuming and hard to change.

 

The navigation

This is the first time we’ve used a Leap Motion for a real project, and it turned out to work quite well. I won’t say it’s going to replace the mouse, but it definitely provides an extra way to navigate. The part I like about the Leap Motion is that it’s really sensitive and responsive; you can create really good controls with it. However, some gestures are still very hard to use, especially since everyone does a gesture their own way. At the beginning, as you can see in my prototype video, I created a “grab” gesture to navigate. To be honest I quite like it, it gives me the feeling of holding something; however, some people found it difficult to use, and it’s really hard for me to improve the gesture because people have different ways of “grabbing”. It sounds a little funny, but it’s what I encountered during this project. So in the end I had to remove the grab gesture and go with the full hand. If you have a Leap Motion you can have a play with the link I mentioned before. We have three gestures: full open hand to move around, one finger pointing to select, and a clap, which I’ll leave for you to discover :D

There is an interesting part of the navigation: how do I select the constellation I want? Do I need to compare the distance from every constellation to my mouse position? That sounds quite heavy and needs a lot of math. Luckily our ancestors have already solved this problem with basic astronomy: there are 88 constellations in total, and together they take up the whole sky. There’s a way to determine the boundary of each constellation using right ascension and declination, which gives the IAU constellation boundaries. So these scientists have already created the boundaries of all the constellations, and you can find a map like this (of course without the names; I put the names on just to make it easier to read):

boundaries_map1

When you map it onto a sphere you can see it fits perfectly with all the constellations. So what I did is paint each region with a different color, and then when a mouse event is triggered (mouse move or click), I perform a gl.readPixels to get the pixel value at the mouse position; because each region has a unique value, I know which one I just selected (or rolled over). Just a couple of things to pay attention to: when doing the readPixels you don’t need to read the whole render, you just need the one pixel under your mouse, which saves some performance. However, the readPixels call is still heavy, so make sure to skip it whenever you can (e.g. when the mouse hasn’t moved, since the result is the same as the last frame). Secondly, when you export this map, make sure you export a PNG so it doesn’t get compressed and lose the values you set.
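The picking itself boils down to a one-pixel readPixels on the boundary-map render; roughly like this (a sketch with assumed names, not the project’s exact code):

// read the single pixel under the mouse from the boundary-map framebuffer
function pickConstellation(gl, boundaryFramebuffer, mouseX, mouseY, canvasHeight) {
    gl.bindFramebuffer(gl.FRAMEBUFFER, boundaryFramebuffer);
    var pixel = new Uint8Array(4);
    // WebGL's y axis starts at the bottom, so flip the mouse y
    gl.readPixels(mouseX, canvasHeight - mouseY, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    // every region was painted with a unique color, so RGB identifies the constellation
    return (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];
}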

Summary

This is about the longest post I have ever written, but I am really proud of this project. I am still working on the last part of it with my friend; there is something we want to do with this project, and we would like to bring the beauty of the constellations to everybody. We have discovered the beautiful drawings of Johannes Hevelius and we want to show them to everybody. So stay tuned for updates!

Since I started working on this project I’ve fallen in love with astronomy. Our sky is amazingly beautiful; every night, if the sky is clear, I look up and try to find the constellations I know. And I realise that no matter how much I do, I cannot compete with the beauty of nature, but I can be inspired by it. There is a lot of inspiration to be found just by looking at the nature around you.

Some pictures of the night