2011/06/27

Mosaic

Yesterday I mentioned using images as a source for the points and orientations, and I managed to produce something.

First, the original image (a South Park version of myself, before I lost some weight):



And here are the various mosaics:







I used various effects to control the orientation in each cell. I also tried a random point selection for all but the first output.

Which one do you prefer?

Naturally, I'm still using CUDA to produce the output. I didn't use any data structure to optimize the search, so it could be computed a bit faster.

More textures

In my research, I'm still working on my textures. Here are some results I produced today that I really like.








Those effects might be implemented in a future version of my live-wallpaper; I haven't decided yet. For the first two images, I put the orientations of the points right in the plane. I also tried a new function to compute the distance, one that allows some pixels to have a distance of zero when they are perfectly aligned with another point.

Next, I will try to use images as a source of points and orientations, to see if this method can create a mosaic from an image. Creating mosaics with the normal Voronoi diagram is well known in the literature, but this method of oriented points creates cells that are not convex polygons, which might allow us to do something very unique.

Let me know what you think of these images.

2011/06/26

Live-Wallpaper on the Market (feedback)

In three days, the number of downloads is already over 100, which is good news.

I was hoping for more ratings than that, though. But I checked other apps, and some with 250,000+ downloads only have around 1,000-2,000 ratings. So less than 1% of users take the time to rate.

Anyway, that's no reason to stop. I will post an updated version very soon because I found a better way to distribute the orientations, and I might try a new way to distribute the points too.

2011/06/23

Live-Wallpaper available on the Market

I finally managed to put a first version of my live-wallpaper on the Market.

You can find it right here:

https://market.android.com/details?id=com.blogspot.widgg_research&feature=search_result


Post a comment on this post to tell me what you think about it. It's a first version, and there's a lot of work left to create the professional version and also to improve this one.

For major problems, I will do my best to put a new version of the application on the Market as fast as possible.

The live-wallpaper is demanding on the fragment shader that I created in OpenGL ES 2.0, so it's very important to control the execution of the application based on this. Fewer points and a lower FPS (frames per second) will give better results.

I'm also interested in knowing the performance on your device, particularly if it runs Honeycomb. So I'd like you to post your device, Android version, and the number of points and FPS you used in your settings.

Here are some screenshots:









2011/06/21

Yin Yang Texture

Here's a post on textures.

Previously, we were using a method to control images with points and normals. Alone, a point and a normal split the plane in two: a positive part and a negative part. Combined with other points and normals, we can actually control curved shapes. To visualize it, we assign white to the positive part and black to the other.

The textures proposed here are made by first splitting the plane into two parts. The two parts must be symmetrical, just like the Yin Yang symbol.

We then distribute points and normals inside one of the parts. We then reflect each point into the second part, but not its normal.
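To make the construction a bit more concrete, here's a minimal C++ sketch. The sign test against a single point and normal comes straight from the description above; the "reflection" is assumed here to be a 180-degree rotation about the centre of the unit square (which maps one half of a Yin Yang style split onto the other), and the Sample type is only for illustration, so the real construction may differ.

#include <vector>

struct Sample
{
  float px, py;   // point position in the unit square
  float nx, ny;   // normal direction
};

// A single point and normal split the plane in two; the positive side is white,
// the negative side black. Combining several samples is what creates the curved
// shapes, and that combination is not shown here.
bool positive_side(float x, float y, const Sample& s)
{
  return (x - s.px) * s.nx + (y - s.py) * s.ny >= 0.0f;
}

// Take the samples placed in one part and add a mirrored copy in the other part.
std::vector<Sample> mirror_samples(const std::vector<Sample>& half)
{
  std::vector<Sample> all = half;
  for (std::size_t i = 0; i < half.size(); ++i)
  {
    Sample m = half[i];
    m.px = 1.0f - m.px;   // the point is moved into the second part...
    m.py = 1.0f - m.py;
    // ...but the normal (m.nx, m.ny) is kept unchanged, as described above.
    all.push_back(m);
  }
  return all;
}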

The method creates contrast and a perfect symmetry. Here are some results:




I won't describe the method used to render those images because it's very complex. The process is also very slow: unlike the live-wallpaper I'm working on, or the previous images I showed, we cannot compute these in real time.

Unfortunately, we haven't implemented this with CUDA. With CUDA it still wouldn't be real-time, but it would be much faster, especially the per-pixel rendering part. Let me know what you think about these Yin Yang images.

2011/06/20

Live Wallpaper (list of functions)

I started implementing the settings and it's going well. Here, I will present a partial list of the functions used to compute the color of a particular pixel.

But first, the weird Voronoi diagrams presented here are not so different from the original one. The cells and edges still have the same meaning: when there's an edge, the points on that edge are equidistant from two sites, even if the method used to compute the distance is not the Euclidean one.

The functions that I present here all use a value R, the ratio of d1 over d2, where d1 is the distance to the closest site and d2 the distance to the second closest site. So R = 0 means the pixel is directly on a site, and R = 1 means it is on an edge.

First, for the functions with an exponent, there will be three models:

  1. R^n
  2. 1 - R^n
  3. (1-R)^n
where n is a value in {1/2, 1, 2, 10}.

The other possible functions are:
  1. sin(Pi * R)
  2. 1 - sin(Pi * R)
  3. 1 + log(R)
  4. -log(R)
In grayscale, a function value of 0 or less means black and a value of 1 or above means white.
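To make the list concrete, here's a small C++ sketch of that grayscale mapping. The function names, the switch layout and the way the choice and the exponent n are passed in are mine for illustration, not the app's actual code:

#include <cmath>
#include <algorithm>

const float PI = 3.14159265f;

// R = d1 / d2, with d1 the distance to the closest site and d2 to the second closest.
float apply_function(float R, int choice, float n)
{
  switch (choice)
  {
    case 0:  return std::pow(R, n);                 // R^n
    case 1:  return 1.0f - std::pow(R, n);          // 1 - R^n
    case 2:  return std::pow(1.0f - R, n);          // (1 - R)^n
    case 3:  return std::sin(PI * R);               // sin(Pi * R)
    case 4:  return 1.0f - std::sin(PI * R);        // 1 - sin(Pi * R)
    case 5:  return 1.0f + std::log(R);             // 1 + log(R), very dark near a site
    default: return -std::log(R);                   // -log(R), very bright near a site
  }
}

// A value of 0 or less maps to black, a value of 1 or above maps to white.
float gray_value(float R, int choice, float n)
{
  return std::min(1.0f, std::max(0.0f, apply_function(R, choice, n)));
}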

In color, each site has its own color assigned, and that color influences the color in its cell. The colors are randomly assigned to the points.

I hope to post a first version by the end of the week.

2011/06/16

Live Wallpaper (some progress)

So, I made a lot of progress developing my first Live Wallpaper for Android. For now, I call this app "Weird Voronoi" because it uses the concept of the Voronoi diagram with some tweaks to create various visual effects.

I had some difficulty converting the fragment shader used on my machine directly into one that can be used on Android with OpenGL ES 2.0. There are some restrictions in the language that require a certain amount of adaptation, but in the end the result is mostly the same.

Another problem in development: the available emulators cannot run shaders, so I have to debug entirely on my Galaxy S. This means that I know it works on my phone, but I don't know about any other. I suspect that any phone more recent than the Galaxy S won't have any problem running it.

For now, I have done most of the tests I wanted to do in the project, to see what the limits are. And compared to a desktop, it's very limited: on my phone, running more than 4 points can be tricky, while on the desktop a hundred points is not really a problem. So the objective was also to create some nice effects with this limited number of points, and I think I managed to do that.

Here's a first image of what the Live Wallpaper will look like:

I took this image from an app I'm developing on my desktop, but the visual effect is mostly the same.

Now that I know the limitations and the possibilities, the main thing left before I can put a first version on the Android Market is the settings to control the Live Wallpaper.

For now, the common options would be to choose the number of points. Even if 4 is the limit on my phone, some benchmarks show that other phones might be able to handle 16 points without a problem. Users will be able to choose whether the points move, and at which speed, and the same for the spirals (or whether they want a spiral at all).

A variety of functions will be available to choose the distribution of color, plus the possibility to choose between a colored version and a grayscale version.

When the app is available on the Market, I will give more details about the features.

Meanwhile, I might post more images of other functions to show other visual effects.

2011/06/14

Live Wallpaper

Playing with textures can be very nice. But when your objective is to find ways to generate them, you need various methods to place points and set other parameters, and you notice that if some points or parameters change a bit, the texture looks almost the same, with only a small difference.

Therefore, by changing those parameters a little bit at a time, we can create animations. So instead of a static texture, we have a texture evolving with time. And if it's done properly, this animation won't be the equivalent of an animated GIF that repeats itself forever: each new image is unique, and it can take a long while for the animation to loop back on itself.

Live Wallpapers are a feature available since Android 2.0. They let you have a wallpaper with some sort of animation, and sometimes interaction when you press on the screen or move your phone (if it has an accelerometer).

Here are some static images produced earlier that could be converted into an animation for the live wallpaper:






Right now, there's no official release date. I'm taking my time to develop it properly, to be sure it won't drain the battery and will be smooth enough.

With the limited power of a smartphone, the number of points used to control the texture has to be much lower. In the previous images, there are around 64 points, more or less... sometimes much more. But on a phone, 4 to 16 points might be the maximum. What is important is to have enough options so users can create the live wallpaper they want.

More information about this soon!

2011/06/07

Red and Cyan Stereoscopy (P.S.)

Just a special note: the method presented in the previous post can also work on OpenGL ES. So if you're developing on Android or iOS, you can create stereoscopic scenes that can be viewed on smartphones and tablets.

On Android, here are two apps (not necessarily using OpenGL) that let you play with stereoscopy:

2011/06/04

Red and Cyan Stereoscopy

Here's a small entry for any of you who have a pair of 3D glasses (the ones with a red and a cyan lens) and are interested in creating a 3D environment with OpenGL.


I'm not giving a lot of detail; for this post, I assume that you're familiar with OpenGL, at least the basics. If you're not, you can follow the tutorials on NeHe Productions. Here, I'm more interested in the general strategy, because there's more than one way to do this. In this example, I present an approach that makes everything you see come out of the screen; the screen itself would be the farthest object in the scene.

Consider that you have a function called draw_scene() where all the geometry and computation is done, except the frame setup and teardown, like clearing the buffers or calling glFlush().

So here's the approach to render your scene:

Step 1:

glColorMask is a function that controls which color channels you want to modify. Calling it with the four parameters set to true allows writing to the red, green, blue and alpha values of each affected pixel.

glColorMask(true, true, true, true);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); 

If you use the stencil buffer, or any other buffer, add it as a parameter to glClear.

Step 2:

Now, we want to draw what the left eye (red lens) will see. Imagine that the variables position, look_at and up are structures such as:
struct coordinate
{
  float x, y, z;
};
describing two locations in space (position and look_at) and a vector (up).

Here we use gluLookAt to create the matrix that sets up the point of view on the scene, but other functions or methods exist to create the initial viewpoint.
glColorMask(true, false, false, true);
glLoadIdentity();
gluLookAt(position.x, position.y, position.z, look_at.x, look_at.y, look_at.z, up.x, up.y, up.z);
draw_scene();
At this point, half of the drawing is done.

Step 3:

For the right eye, we first need to shift the point of view a little to simulate the distance between our eyes. Let the variable eye_distance be that distance. The piece of code is very similar to the code in the previous step.
glColorMask(false, false, true, true);
glLoadIdentity();
glTranslatef(-eye_distance, 0.0f, 0.0f);
gluLookAt(position.x, position.y, position.z, look_at.x, look_at.y, look_at.z, up.x, up.y, up.z);
draw_scene();

First, we switch the mask to accept only the blue channel. But why the "-eye_distance"? Remember that I want everything to come out of the screen. This means that the object as seen by the right eye needs to be to the left of the copy of that object as seen by the left eye.

To understand this phenomenon, put a finger in front of you and close your right eye, then switch to your left eye. You can see that your finger seems to move to the left, and the closer your finger is, the larger the shift. It's this effect that allows your eyes, when they focus on the same object, to determine its distance.

For another demonstration, take the example at the bottom. Look at the image first with your glasses and notice how much closer it appears than the screen. Then click on the image to get the larger version: the object looks even closer. This is because the distance between the red and blue versions is larger, and your eyes make you think the object is closer.

Step 4:

Compile (debug if necessary) and enjoy your work.

Bonus step:


Let's say you have a background that is supposed to be very far away, like the sky. Your eyes shouldn't see any difference there, because it should sit right at the screen. So, to save time, you should draw it before the two eye passes, with both the red and blue channels activated. This way, you draw this part only once.
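Here's a quick sketch of that idea, assuming a hypothetical draw_background() that renders the distant backdrop; it is drawn once, right after the clear of step 1 and before the two per-eye passes:

glColorMask(true, true, true, true);   // both lenses: write red, green, blue and alpha
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
gluLookAt(position.x, position.y, position.z, look_at.x, look_at.y, look_at.z, up.x, up.y, up.z);
draw_background();                     // hypothetical: draws the far-away part only once
// ...then continue with steps 2 and 3 exactly as above.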

Here's the kind of image you should get:

2011/06/02

Brother's blog

My brother started a blog a few weeks ago about his work. He's currently working in the independent video game industry and draws for fun.

On his blog, he presents his side projects and some tutorials on his drawing and creation processes.

Caron's Carton

2011/06/01

The Background

The image I used as the background, at least the one used when I created this post, is a Voronoi diagram. To draw this diagram, we don't use any algorithm to explicitly find the vertices and edges of the diagram.

Our procedure consists first of splitting the unit square into YxY cells, where each cell contains between 1 and X points. To render the image, a square of ZxZ pixels, we first convert the coordinates of the pixel into the unit square, then find which cell the pixel is in.

To draw the diagram, we need the two points closest to that pixel. To find them, we first compare the pixel with the points in its cell, then we also explore the surrounding cells to be sure we find the exact two points. When a cell is on the border of the square, we use the cell on the other side, which creates a texture that can be tiled.

Let d1 and d2 be the distances from the pixel to the closest and second closest points respectively. Then we can determine the gray value of that pixel as (d1/d2) * 255. Other methods can also be used to determine the color. The ratio d1/d2 has a value of 0 when the pixel is right on the closest point and a value of 1 when the pixel is equidistant from the two points. Therefore, in this case, the white pixels form the segments of the Voronoi diagram. The following image was created with this approach.
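As a rough illustration of the procedure, here's a plain C++ sketch of the per-pixel computation. The actual implementation described in this post runs on CUDA with one thread per pixel; the Grid layout and the names below are mine for the example.

#include <cmath>
#include <vector>
#include <algorithm>

struct Point { float x, y; };   // a site in the unit square

// grid[cy][cx] holds the 1 to X points that fall in cell (cx, cy) of the Y x Y grid.
typedef std::vector<std::vector<std::vector<Point> > > Grid;

// px, py: pixel position already converted into the unit square.
unsigned char pixel_gray(float px, float py, const Grid& grid)
{
  const int Y  = (int)grid.size();
  const int cx = std::min(Y - 1, (int)(px * Y));
  const int cy = std::min(Y - 1, (int)(py * Y));

  float d1 = 1e9f, d2 = 1e9f;   // closest and second closest distances
  for (int dy = -1; dy <= 1; ++dy)
  {
    for (int dx = -1; dx <= 1; ++dx)
    {
      // Wrap around the borders so the resulting texture can be tiled.
      int nx = cx + dx, ny = cy + dy;
      float ox = 0.0f, oy = 0.0f;   // offset applied to points in wrapped cells
      if (nx < 0) { nx += Y; ox = -1.0f; } else if (nx >= Y) { nx -= Y; ox = 1.0f; }
      if (ny < 0) { ny += Y; oy = -1.0f; } else if (ny >= Y) { ny -= Y; oy = 1.0f; }

      for (std::size_t i = 0; i < grid[ny][nx].size(); ++i)
      {
        const float ddx = grid[ny][nx][i].x + ox - px;
        const float ddy = grid[ny][nx][i].y + oy - py;
        const float d   = std::sqrt(ddx * ddx + ddy * ddy);
        if (d < d1)      { d2 = d1; d1 = d; }
        else if (d < d2) { d2 = d; }
      }
    }
  }
  const float ratio = (d2 > 0.0f) ? d1 / d2 : 0.0f;   // 0 on a site, 1 on an edge
  return (unsigned char)std::min(255.0f, ratio * 255.0f);
}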



For the image in the background, we use an interpolation to determine the color of the pixel. We also use d3, the distance from the pixel to the third closest point.

For the interpolation, we solve the following linear system:


| 0 0 1 | | x |   | 1 |
| 1 1 1 | | y | = | 0 |
| a b 0 | | z |   | 0 |


where a and b are two different constants.

Then, the color of the pixel is determined by 255 * ((d1/d2) * x + (d1/d3) * y + z). Naturally, if the value is above 255 or below 0, we clamp it to 255 or 0.
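Solving that small system by hand (assuming a and b are different, so a - b is not zero) gives z = 1, x = b/(a-b) and y = -a/(a-b). Here's a minimal C++ sketch of the resulting pixel value with the clamping described above; the function name and parameters are mine for the example:

// d1, d2, d3: distances to the three closest sites; a, b: the two constants (a != b).
float interpolated_gray(float d1, float d2, float d3, float a, float b)
{
  const float x = b / (a - b);
  const float y = -a / (a - b);
  const float z = 1.0f;
  const float v = 255.0f * ((d1 / d2) * x + (d1 / d3) * y + z);
  if (v < 0.0f)   return 0.0f;     // below 0 is treated as 0
  if (v > 255.0f) return 255.0f;   // above 255 is treated as 255
  return v;
}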

To program this procedure, we used CUDA. With CUDA, each pixel gets its own thread, and even with an older version of CUDA (I only have CUDA 1.1 on my GeForce 9800 GTX+) the computation is done in less than a second for a 1024x1024 image.

The workload associated with a pixel is determined by the value X, the maximum number of points in a cell. To find the two (or three) closest points to a pixel, the maximum number of points to check is 9X (the pixel's cell plus its eight neighbours). The memory required for the diagram is proportional to Y*Y*X.


REFERENCE


Steven Worley. 1996. A cellular texture basis function. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (SIGGRAPH '96). ACM, New York, NY, USA, 291-294. DOI=10.1145/237170.237267 http://doi.acm.org/10.1145/237170.237267

First Post

I will mainly use this blog to show results from my research in computer science. My current research is in procedural texturing. What is that? First, a texture is what you see in 3D movies or games: you see a brick wall, and someone had to draw it. This texture is then applied to geometric surfaces, like a rectangle or, more usually, a triangle. So, if I want a 3D box made of wood, I first create a cube with 6 squares as the geometric surfaces, and on each of the squares I apply a wood texture.

But if I use the same texture on the 6 squares, my box doesn't look natural. Do I really want to create 6 different textures by myself? It can take some time.

The "procedural" part is an automated way to create the texture. For example, someone created an algorithm to create wood texture. Then, I can use this algorithm to create 6 textures for my box. 6 different textures created in few seconds.

The main objective of procedural texturing is to propose new methods to create a texture by simply specifying some parameters. For the wood texture, as a parameter, I might want wood that looks more like maple or fir.

Sometimes, I will simply post some visual examples. In other posts, I will take more time to explain the procedure used to create the outputs. I might also post some complete or partial source code.

This blog will also be used for side projects and other research results that are not necessarily in procedural texturing.

I welcome any comments on my work, particularly if you have a suggestion or another approach to achieve a particular goal.

Widgg