Friday, December 30, 2011

Google AI Challenge Ants 2011 - Post Mortem

After 3 weeks of hard work and sleepless nights, I took the 127th rank in the Google AI Challenge Ants 2011 and 1st in my country, India. I thought the experience was worth sharing on this blog. During the competition, I had some health issues that kept me away from the computer for a week; of course, my health got worse after the competition started. However, I managed to recover with a week-long vacation with family.

Initial Idea:
Initially, when I first learnt about the competition, I thought it would be a simple problem to solve. The first thought that came to my mind was grouping ants together to fight enemies, because the aim of the competition sounded more like just keeping your ants alive. So I thought I could build groups of ants, each making a + sign with just 4 ants.

But eventually, when I started working on the game, I learnt that grouping affected exploring and food gathering a lot, and decided to concentrate on food gathering and exploration first.

Exploration and Food Gathering:
I tried a different way of using scent maps for food gathering and exploration. In total, I fill 3 different scent maps depending on priority. The first map is for food and exploration. This map carries a scent until it finds the first ant, and then the scent stops spreading in all directions. This ensures a one-ant-per-target policy. This way of spreading is unique to the food and exploration map; I tried this idea to avoid many ants moving towards the nearest target, which is the usual problem with scent maps. In the map, I set the source cell to the maximum integer. Food and unseen cells are my initial source seeds, which makes it easy to explore as fast as you can. But there is a problem with this technique: the ants will not re-explore visited areas to search for new food. So I started seeding the cells that had not been visible for the past 6 turns. Why 6!?!? Coz adding the digits of 42 gives 6. In simple words, I do not have an answer as to why I chose 6, but it worked out better than other numbers.
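Here is a rough sketch of the one-ant-per-target spreading, simplified from the real bot (the names, the decay-by-one scheme and the values are illustrative, not my actual code):

```python
from collections import deque

def build_food_map(width, height, sources, my_ants, water, scent_init=1 << 20):
    """BFS scent map: scent spreads outward from each source cell and,
    on the food/exploration map, a wavefront stops at the first ant it
    reaches, so at most one ant is pulled toward each target."""
    scent = [[0] * width for _ in range(height)]
    queue = deque()
    for (r, c) in sources:                  # food cells + long-unseen cells
        scent[r][c] = scent_init
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        if (r, c) in my_ants:               # first ant found: stop this wavefront
            continue
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = (r + dr) % height, (c + dc) % width   # maps wrap around
            if (nr, nc) not in water and scent[nr][nc] == 0:
                scent[nr][nc] = scent[r][c] - 1            # decay per step
                queue.append((nr, nc))
    return scent
```

Each ant then simply steps onto the neighbouring cell with the highest scent, so the whole colony needs only one map fill per turn instead of one path search per ant.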

To decide on the number I could have tried a simple genetic algorithm, but I didn't have enough time to set up local game servers for that. I'll keep such tasks for the next contest. However, I did try pitting identical bots with different numbers (4 to 12) against each other to settle on this number.

Finally, I could see my bot exploring and gathering food as fast as any top bot, irrespective of the type of map. The scent maps helped a lot, especially after choosing a slow programming language like Python. At least 5 times, I considered porting my code to a different language during this contest.

Best Example for Exploration and Food Gathering: Game Link

Combat and escape:
After settling on decent exploring and gathering code, I decided to find the most efficient way of fighting other ants. I first considered the random sampling method explained by user a1k0n in the forum. However, it didn't impress me a lot. So I decided to try my own simple battle resolution method using 2 radii calculations. However, after implementing my method I learnt that I might have to calculate 3 radii for battles to resolve efficiently. My code timed out even with 2 radii, so continuing down this path might have been disastrous with the language I chose. So I decided to take another route, suggested by Memetix, in which all battle resolutions were pre-calculated. Now, my ants do not die in 1 vs 1 fights.
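For reference, the battle rule that those pre-calculated tables encode is simple: each ant counts the enemies within attack range, and it dies if any of those enemies is itself focused on by no more ants than that. A minimal Python sketch of the rule (this is my reading of the game's combat resolution, not Memetix's actual code; ants are `(row, col, owner)` tuples):

```python
def resolve_combat(ants, attack_radius2=5):
    """Ants-style combat: an ant dies if some enemy in its attack range
    has at most as many enemies focused on it as this ant does."""
    def in_range(a, b):
        dr, dc = a[0] - b[0], a[1] - b[1]
        return dr * dr + dc * dc <= attack_radius2

    # for every ant, the list of enemy ants within attack range
    enemies = {a: [b for b in ants if b[2] != a[2] and in_range(a, b)]
               for a in ants}
    dead = [a for a in ants
            if any(len(enemies[b]) <= len(enemies[a]) for b in enemies[a])]
    return dead
```

Under this rule, two lone ants that walk into range kill each other, while two ants attacking one lone enemy both survive — which is exactly the trade a combat module tries to engineer (or to deliberately accept near the hill, as described later).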

Best Example for Combat: Game Link

3 Scent Maps:
Like I said before, I used 3 scent maps ordered by priority. All ants first check the 1st map, called the hill scent map, which is seeded with just enemy hills, if any are found. The scent travels to a distance of 6, the reason again being (42 => 4 + 2 = 6). The next priority map is the food map, whose scent travels until it finds the nearest ant. The third map is just a worst-case map, filled with enemy hills, food, and unseen cells. This map makes sure no ant is left idle, in any case.
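The per-ant move selection then just walks the maps in priority order; a tiny sketch of the idea (the `neighbours` helper and map layout are made up for illustration):

```python
def choose_move(ant, maps, neighbours):
    """Pick a move using the maps in priority order: an ant follows the
    first map that has any scent around it; only if the hill map and the
    food map are both silent does it fall back to the catch-all map."""
    for scent in maps:                     # [hill_map, food_map, fallback_map]
        best = max(neighbours(ant), key=lambda cell: scent[cell[0]][cell[1]])
        if scent[best[0]][best[1]] > 0:
            return best
    return ant                             # no scent anywhere: stay put
```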


With decent exploration, food gathering and combat resolution code, it was easy to reach the top 500. But my aim was to get into the top 100 list, which would need a lot more heuristic improvements to my bot. So I considered trying different strategies to boost performance. One very important improvement I made was blocking the path for enemies entering my territory. All I did to achieve this was seed enemy ants into the map and make all ants, including allies, block scents; so until the opening in the maze is closed, the enemy ants keep generating scent. This scent goes into the 3rd map, and an ant follows it only if it has no scent in the 1st and 2nd maps.

Overall, the performance was much better at this point. However, my bot performed very poorly in close-neighbour multi-hill mazes. So I decided to tune my bot to make some sacrifices within a certain radius of my hill. The radius I chose was 14 (coz 42 is divisible by 14). This simple change in my combat code, which sacrifices ants in 1v1 combats, saved my hills from being discovered very early. So my ants had enough time to breathe and collect food.

Best Game for hill sacrifice: Game Link

Also, my bot sacrifices some of its hills to concentrate on exploring and food gathering. I didn't do this intentionally; it was a bug in another strategy, but it was worth keeping. However, some final change made the performance worse on other maps, which I didn't care much about, as I had no time left to think about tuning further. I left my bot as it was and sat back to watch it play.

During this competition, I learnt a lot of new topics that will help in the future. Some of the topics were very surprising to me; I had felt they were meant for scientific experiments and not for gaming. However, this contest changed my mind about such topics.

  • A* Shortest Path
  • BFS Shortest Path
  • Collaborative Diffusion
  • Scent/Distance Maps
  • Heuristic Decision Making
  • Random Sampling
  • MiniMax
  • Genetic Algorithm
  • Genetic Programming

and a lot more. I hope to explore those new interesting topics in my next contest. Lastly, I would like to thank the contest organizers, TCP server hosts, tool designers and co-participants.

Saturday, October 29, 2011

Hacking iPhone 4 into a Digital Microscope

I was always eager to zoom in on things to see if I could find microbes. This time I took a step forward and opened up one of my old DVD drives to remove the lens inside. I successfully removed the lenses from 2 old optical disk drives, and both had different magnifying lenses. With one of the lenses I could see the RGB component LEDs of every pixel of my iPhone 4 retina display.


I then fixed it behind my iPhone 4 camera to shoot some pictures. See the following video that we compiled during the weekend.

I am planning to increase the magnification of the setup by adding 2 lenses in front of the camera to see if I can zoom in to the microbe level. My first target is to shoot a moving Amoeba.

Saturday, October 15, 2011

3d Scanner update 1.2

After some responses in the Mac App Store reviews, I am planning to add some more features to the app. The next version will have a texture mapping feature that supports most file formats. Meanwhile, I am also thinking about adding support for keyframe animations, if possible, in this version. If it is too hard to export keyframe animations, I'll keep them for the next big update.

I just uploaded a new version to the App Store. v1.2

What's New in version 1.2:
1) Two different modes of rendering. You can now view the whole scene in 2D or use 3D rendering to rotate and zoom.
2) Fixed the reversed normals in the exported mesh. The exported mesh ended up with reversed normals in the previous version.
3) Laplacian mesh smoothing algorithm. The exported mesh can be smoothed according to the selected slider threshold. A smoothing threshold of 4 is recommended for best results in facial capturing; keep the slider in the middle for threshold 4.
4) Optimized exported file size. Removed unreferenced and duplicate vertices.
5) Reduced memory usage and CPU cycles to 60% of the previous version.
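For the curious, basic Laplacian smoothing just moves every vertex part-way toward the centroid of its connected neighbours on each pass. A minimal sketch of the idea (not the app's actual code; the blend factor and iteration count are illustrative):

```python
def laplacian_smooth(vertices, neighbours, iterations=4, factor=0.5):
    """Basic Laplacian mesh smoothing: each pass moves every vertex
    part-way toward the centroid of its neighbours.
    `neighbours[i]` lists the vertex indices sharing an edge with i."""
    verts = [list(v) for v in vertices]
    for _ in range(iterations):
        new = []
        for i, v in enumerate(verts):
            if not neighbours[i]:
                new.append(v)              # isolated vertex: leave it alone
                continue
            n = len(neighbours[i])
            cx = sum(verts[j][0] for j in neighbours[i]) / n
            cy = sum(verts[j][1] for j in neighbours[i]) / n
            cz = sum(verts[j][2] for j in neighbours[i]) / n
            new.append([v[0] + factor * (cx - v[0]),
                        v[1] + factor * (cy - v[1]),
                        v[2] + factor * (cz - v[2])])
        verts = new
    return verts
```

More iterations (or a larger factor) mean a smoother but also a shrunken mesh, which is why a mid-range threshold tends to work best for faces.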

Friday, October 14, 2011

The Beautiful Moon and The Gorgeous Jupiter

October 14th 2011, 5:30 AM: Something woke me up, and thanks to our new house, the two windows in my bedroom usually show the Moon over the balcony if I'm early, and the Sun through the other window, right into my face. Today, something surprised me, killing the rest of my deep early-morning sleep. It was not just the Moon but someone who accompanied him. I took my phone to click some pictures, and the battery was at 5%. I had to quickly click as many pictures as I could so at least a few would come out well. Thanks to my iPhone 4, it stood steady for more than 45 mins on that 5% battery. I had to search the internet to find out who the new guy in the sky was. After a quick search, I found him to be Jupiter. I'm not sure if I can see them together again. See some of the pictures below and feel free to use them if you wish. Nature is free, and I don't believe in copy protection or putting my name on the pictures.

And yes, now you know I have another hobby. I wish I had a good telescope.

Thursday, October 6, 2011

3D Scanner App for Mac OSX

I just managed to develop an application for Mac OSX that scans the whole 3D scene, or just the foreground, and stores the 3D mesh in a PLY file. The PLY file has vertex coloring corresponding to the scene color. And to tell you something more about the format I chose: it is the usual file format for 3D scanners. Below is a screenshot of the app.

I am also planning to add facial expression capturing, 3d video capturing, 3d mesh deforming format and a lot more. It all depends on the response I see from the appstore. I'll also update the status of the app here.

Update: This app just got released in the Mac App Store (3D Scanner)

Update: I just submitted an update for the app. Now you can do a lot better using this app. The change log is listed below.
1) New interface with live preview of the actual 3D model going to be exported. Now you can rotate or zoom the 3D model before you export.
2) Shortcuts for improving productivity. Space bar - capture 3D snapshots, H - toggle hiding the background, R - reset the background removal process and orientation.
3) Improved algorithm for exact color matching in the 3D world.

Tuesday, September 27, 2011

Kinect + Box2D

For the past few days I've been trying to arrange a marriage between my Kinect algorithms and Box2D, and it finally came up very well. As they both didn't really love each other, I had to forcefully arrange an Indian-style marriage between them. Box2D accepts only shapes, and only basic shapes with fewer than 8 sides; it is impossible to draw a human shape within those restrictions. So I had to build an object in the shape of a human, filled with basic shapes, and I had to do it live for every frame. The interaction looked really wonderful and people really enjoyed it.
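One simple way to fill a silhouette with Box2D-friendly basic shapes, as a rough illustration of the idea (not necessarily my exact method): slice the binary blob mask into horizontal runs and turn each run into a small axis-aligned rectangle, which easily satisfies Box2D's polygon vertex limit.

```python
def mask_to_boxes(mask, cell=1.0):
    """Approximate a binary silhouette mask with axis-aligned rectangles,
    one per horizontal run of filled cells. Each rectangle is returned as
    (min_x, min_y, width, height) in world units, ready to become a
    4-sided Box2D fixture."""
    boxes = []
    for y, row in enumerate(mask):
        x = 0
        while x < len(row):
            if row[x]:
                start = x
                while x < len(row) and row[x]:
                    x += 1
                boxes.append((start * cell, y * cell, (x - start) * cell, cell))
            else:
                x += 1
    return boxes
```

Rebuilding the fixtures from scratch every frame is crude but keeps the physics body in sync with the live depth image.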

Friday, August 5, 2011

Update on Kinect and other projects

I've been busy for a while on another dream project, a Near Space Satellite, which is soon to be over after a few years of hard work. I'll keep updating about my Kinect project after finishing this one. I've been a bit slow on the Kinect project ever since MS released the Kinect SDK, which renders such work less useful. :)

Thursday, March 31, 2011

Clean Background removal with blob detection

Yes, I'm a little ahead on the background removal. I have now removed the background noise that escaped from my first-layer background removal algorithm. I've added a second layer of background removal that removes smaller moving particles without hassle. Now the blob detection is more meaningful. I am also working on an advanced blob detection to solve some problems I am facing with the current one.

I am also wondering how I can detect overlapping blobs that lie at the same distance from the Kinect. Of course, it's easy to separate blobs that are at different distances, but when 2 users shake hands, I wonder if I can still detect them as different blobs.

Wednesday, March 30, 2011

Blob Detection in Kinect

After background removal, I've now completed the blob detection algorithm. After reading a bunch of blob detection papers, I ended up with my own (modified) method that is much faster and uses less CPU. The original code from the link was buggy and was only meant for multi-touch detection. I've modified the idea to work with the Kinect's depth data. However, I still have a problem where overlapping users/objects are tracked as a single object in realtime.
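The core of any such blob detector is a connected-component pass over the foreground mask. A minimal flood-fill version in Python (illustrative only; the real code runs on raw depth frames and is tuned for speed):

```python
from collections import deque

def find_blobs(mask, min_size=50):
    """Label 4-connected foreground regions in a binary mask with a BFS
    flood fill, and drop blobs smaller than `min_size` pixels (the small
    ones are usually leftover depth noise from background removal)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                queue, blob = deque([(sy, sx)]), []
                seen[sy][sx] = True
                while queue:
                    y, x = queue.popleft()
                    blob.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(blob) >= min_size:
                    blobs.append(blob)
    return blobs
```

Filtering by blob size is what makes it easy to throw away the small noise blobs that escape the background removal pass.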

See this image that shows the output. After blob detection, I've removed the smaller blobs created by the noise that escaped the background removal code; it was much easier for me to filter the small ones out. In this image, only the large objects (humans) are tracked.

My next step would be to detect the nodes of each user. I might use some clustering algorithms to detect nodes in the user's body. I have no clue if there is any straightforward algorithm for this.

Tuesday, March 29, 2011

Background Removal for Kinect Depth data

I am wondering if I'm at least 10% towards completion of the skeletonization code. However, here is the best background detection and removal I could do. This code is not based on any standard background removal algorithm. I tried to implement more than 5 standard algorithms for background detection, but they didn't work well for the Kinect's output; they were all meant more for RGB data. So I started working on my own background detection algorithm, and here is the output.
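As a rough illustration of how a depth-based background model can work (my actual code differs; the blend rate and threshold here are made-up numbers): keep a slowly updated per-pixel depth estimate, and call a pixel foreground only when it is clearly nearer than the learned background.

```python
def update_background(depth, bg, alpha=0.02, threshold=80):
    """Self-learning depth background model: update a per-pixel depth
    estimate in place and return a foreground mask. Needs no empty-room
    capture: the model seeds itself from the first valid samples."""
    fg = [[0] * len(depth[0]) for _ in depth]
    for y, row in enumerate(depth):
        for x, d in enumerate(row):
            if d == 0:                       # 0 = no reading from the Kinect
                continue
            if bg[y][x] == 0:
                bg[y][x] = d                 # first valid sample seeds the model
            elif d < bg[y][x] - threshold:
                fg[y][x] = 1                 # clearly nearer: a moving object
            else:
                # slowly blend static depth changes into the background
                bg[y][x] += alpha * (d - bg[y][x])
    return fg
```

Because the blend is slow, a person standing still long enough eventually fades into the background, which is the usual trade-off with self-learning models.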

Yes, there are still some small portions that escape my background removal code. I've decided to deal with them in blob detection, as it sounds much easier to remove smaller blobs there. I am now working on a fast blob detection algorithm.

I am yet to test this background removal code in realtime conditions, like larger rooms with many moving objects.

Tuesday, March 15, 2011

Skeleton detection code for Kinect

As I promised earlier, I am now working on skeleton detection code that can work independently on any device. It will mostly be in C and C++ so it can be ported to any platform. I have decided to split the code into different modules.
1) Background removal: I have an idea for a background removal technique that self-learns at run time and does not require capturing an empty background. This technique is simple, and I've started working on implementing it now. Will soon post some videos on this.
2) Blob detection: I am planning to adapt some algorithm for blob detection. I've not decided which one yet, but I am going to start from an edge detection algorithm.
3) Skeleton tracking: I am still clueless about this part right now. But I am sure I'll find some decent way to at least detect the hands and head. This is still a dream right now.

The whole objective of this project is to make a Skeleton detection program that uses Kinect's output and does not require any calibration posture like in OpenNI. Will post the progress here soon.

Sunday, February 13, 2011

Kinect UIRT Opensource

I just successfully uploaded my code to the github repository.

The repository includes openFrameworks with the limited addons required to just run the example I have written. The source also has the code to use it as a Kinect mouse. All you have to do is call a function and send the right-hand location to the mouse function declared at the bottom.

Saturday, February 12, 2011

Controlling TV and Set-top box with Kinect

I finally made it work. I detected the skeleton using OpenNI and detected some gestures using my own code. I also connected a USB-UIRT device to my Mac Mini, in addition to the Kinect, to send signals to my TV and set-top box to change channels and volume. Check out my video... Now I can change channels and volume without using the remote control.

If you want to set up the same thing, you can contact me by email. I'll send you the code for both the UIRT and the Kinect. :)

Friday, January 14, 2011

Skeleton tracking on Mac OSX

Finally, I successfully integrated the OpenNI and NITE betas on my Mac OSX and tracked a skeleton. Before I tried OpenNI, I was trying to build my own algorithm to detect just the fingers. Though it worked, I was not satisfied with the side effects it had. So I ended up thinking about skeleton tracking.

Saturday, January 8, 2011

Kinect libraries integrated with Cocoa in XCode

I just integrated OpenKinect into my Xcode project and can read the meshes into my OpenGL scene.

As a next step, I am going to use my own algorithm to find fingers. :) A great challenge ahead, I guess...

My first step in Kinect and Mac Mini

I just successfully compiled OpenKinect and tested my new Kinect. As I don't have an XBox with me, I had to wait until I could make this code work on Mac OSX.

Here is the screenshot of the OpenKinect sample code.

Friday, January 7, 2011

How much does it take to start a blog

I had done enough blogging before. But every time I started, I had a specific topic as the blog name, and once the topic was no longer interesting, I was out of blogging. This time, I planned to start a personal blog so that I can just post random ideas that come to my mind.

Starting a blog is nowadays just a few clicks away...