Monday, 31 March 2014

Compiling OpenCV-3.0 with Matlab Support

A big uppercase HELLO to everyone! I am back after a long time (yet again) and am going to write a tutorial. What I have managed to achieve here is awesome for us computer vision researchers. Yes, you heard that right: exciting stuff.

I have been using OpenCV for quite some time now. As good as it is for real-time computer vision applications, it can also be time consuming when it comes to exploring and implementing new research designs. Matlab, on the other hand, has always been a flexible and quick way to reach my research goals. The only problem with Matlab, though, is that it is not real-time, and worse, if you plan to implement the code in OpenCV for a real-time application, you have to write the algorithms all over again, since using Matlab toolboxes is quite different from using the same methods in OpenCV.

Now comes the fun part: what if you could access OpenCV function calls from within Matlab code? What if your code could be transferred easily from Matlab to C++? This is all possible now that the OpenCV 3.0 dev branch includes Matlab mex wrappers, which really is a big step in the right direction. So let's start compiling the code.

Tuesday, 18 February 2014

Creating "Mood lights" animation with OpenCV

The other day I went on a typical London walk near the Thames and, as always, loved the lights, reflections and the view. It was amazing! One thing I really liked was the RGB mood lights on the bridge, which cycled through every possible color in a way that made the whole experience magical! Here is a glimpse from my Instagram.



Since there was a sequence of colors involved, I thought I would at least try to replicate these mood lights using OpenCV. It turns out it's not very difficult to make this animation at all. I wrote an algorithm for doing this using some clever tricks that kept it simple and interesting. Here is a gif showing how cool the animation looks when you execute the code.


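One simple way to get a smooth color sweep like this (not necessarily the exact trick used in the post) is to step the hue channel in HSV space and convert each frame to BGR for display. Here is a minimal sketch along those lines:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat hsv(300, 400, CV_8UC3);   // the "light" we will animate
    cv::Mat bgr;

    // OpenCV stores hue in the range 0-179 for 8-bit images, so stepping
    // the hue and wrapping around sweeps the full color wheel.
    for (int hue = 0; ; hue = (hue + 1) % 180)
    {
        hsv.setTo(cv::Scalar(hue, 255, 255));        // full saturation and brightness
        cv::cvtColor(hsv, bgr, cv::COLOR_HSV2BGR);   // convert to a displayable BGR frame
        cv::imshow("Mood light", bgr);
        if (cv::waitKey(30) == 27)                   // ~33 fps, ESC to quit
            break;
    }
    return 0;
}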

Saturday, 15 February 2014

Interview experience with Google

This may sound unbelievable, but it all started a few months back in late 2013, when I saw a really cool movie called "The Internship". Of course I had been thinking about applying for a software engineering intern position at Google for a long time, but it was this movie that made me get up from my couch of epic laziness and start preparing for the experience that was to follow! Unlike the movie, my story doesn't end with me getting a position at all, but it certainly was an interesting path and a learning experience. So after watching the movie, I started preparing my application and submitted it within a few days. Pretty standard details were required for the submission at this point.

Wednesday, 5 February 2014

Algorithm to check Sudoku puzzle!

I have got a couple of interviews this week, which I love preparing for, as it is a good way to refresh my C++ and, at the same time, I get to implement some pretty interesting algorithms.

As most of us would, I have been searching for the past few days for frequently asked interview questions and have been trying to solve most of the algorithm design questions myself. This post is about an interview question asked by Google interviewers for an internship position. The question asks you to check whether a Sudoku solution is correct or not.


Puzzle picture taken from: www.puzzles411.com
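The check itself boils down to verifying that every row, every column and every 3x3 box contains each digit from 1 to 9 exactly once. As a rough sketch of one possible solution (not necessarily the one discussed in the interview), in C++ this can be done with one pass per unit:

#include <array>
#include <iostream>

// Returns true if every row, column and 3x3 box of the 9x9 grid
// contains each of the digits 1..9 exactly once.
bool isValidSudoku(const std::array<std::array<int, 9>, 9>& g)
{
    for (int i = 0; i < 9; ++i)
    {
        std::array<bool, 10> row{}, col{}, box{};   // "seen" flags for digits 1..9
        for (int j = 0; j < 9; ++j)
        {
            int r = g[i][j];                                     // j-th cell of row i
            int c = g[j][i];                                     // j-th cell of column i
            int b = g[3 * (i / 3) + j / 3][3 * (i % 3) + j % 3]; // j-th cell of box i

            if (r < 1 || r > 9 || c < 1 || c > 9 || b < 1 || b > 9)
                return false;                                    // not a digit 1..9
            if (row[r] || col[c] || box[b])
                return false;                                    // duplicate found
            row[r] = col[c] = box[b] = true;
        }
    }
    return true;
}

int main()
{
    // A known-valid solution built by shifting 1..9, used here only as a quick test.
    std::array<std::array<int, 9>, 9> grid = {{
        {1,2,3,4,5,6,7,8,9}, {4,5,6,7,8,9,1,2,3}, {7,8,9,1,2,3,4,5,6},
        {2,3,4,5,6,7,8,9,1}, {5,6,7,8,9,1,2,3,4}, {8,9,1,2,3,4,5,6,7},
        {3,4,5,6,7,8,9,1,2}, {6,7,8,9,1,2,3,4,5}, {9,1,2,3,4,5,6,7,8}
    }};
    std::cout << (isValidSudoku(grid) ? "Valid" : "Invalid") << std::endl;
    return 0;
}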

Thursday, 16 January 2014

Reading a Kinect Depth Image in OpenCV

While reading a captured Kinect depth image, it is important to read it in a way that retains the original data. A regular cv::imread call can significantly modify the data stored in a Kinect depth image. This is because a regular cv::imread call uses the default flag, which assumes:
  1. The input image is color (three channels, BGR in OpenCV)
  2. The depth (number of bits per pixel) of the input image is UCHAR (CV_8U), i.e. 8 bits/pixel
This is obviously not true for a Kinect depth image, which is a special type of grayscale image. A Kinect depth image contains only one channel (like any other grayscale image); however, its depth is actually UINT16, i.e. unsigned short (CV_16U), instead of UCHAR. The extra depth is needed because a Kinect depth image encodes a much larger range of values, which requires more bits per pixel to store (i.e. 16 bits/pixel). Now that we know what makes it different, let's see how it can be read inside OpenCV code.

To read a depth image, use the cv::imread function with the CV_LOAD_IMAGE_UNCHANGED flag. This leaves the data untouched, reading it in its original single-channel, 16-bit state.
e.g. cv::imread("DepthInput.png", CV_LOAD_IMAGE_UNCHANGED);
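As a slightly fuller sketch (assuming the depth frame was saved as a 16-bit PNG called DepthInput.png; newer OpenCV versions spell the flag cv::IMREAD_UNCHANGED), you can read the image, verify its type and access individual depth values like this:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Read the file exactly as stored, skipping the default 8-bit / 3-channel conversion.
    cv::Mat depth = cv::imread("DepthInput.png", CV_LOAD_IMAGE_UNCHANGED);

    if (depth.empty() || depth.type() != CV_16UC1)
    {
        std::cerr << "Expected a single-channel 16-bit depth image." << std::endl;
        return 1;
    }

    // Depth values are 16-bit unsigned integers, so access them as unsigned short.
    unsigned short centre = depth.at<unsigned short>(depth.rows / 2, depth.cols / 2);
    std::cout << "Raw depth value at the image centre: " << centre << std::endl;
    return 0;
}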

I will be updating this post later to include details on how to capture and store a data stream from Kinect using OpenCV.