Color Detection & Object Tracking


Object detection and segmentation are among the most important and challenging fundamental tasks of computer vision. They are a critical part of many applications such as image search and scene understanding. However, they are still open problems due to the variety and complexity of object classes and backgrounds.

The easiest way to detect and segment an object from an image is a color based method. The object and the background should have a significant color difference in order to successfully segment objects using color based methods.


Simple Example of Detecting a Red Object


In this example, I am going to process a video with a red object and create a binary video by thresholding the red color. (The red area of the video is assigned '1' and every other area is assigned '0' in the binary image, so you will see a white patch wherever the red object is in the original video.)

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"

using namespace cv;
using namespace std;

 int main( int argc, char** argv )
 {
    VideoCapture cap(0); //capture the video from web cam

    if ( !cap.isOpened() )  // if not success, exit program
    {
         cout << "Cannot open the web cam" << endl;
         return -1;
    }

    namedWindow("Control", CV_WINDOW_AUTOSIZE); //create a window called "Control"

int iLowH = 0;
int iHighH = 179;

int iLowS = 0; 
int iHighS = 255;

int iLowV = 0;
int iHighV = 255;

//Create trackbars in "Control" window
cvCreateTrackbar("LowH", "Control", &iLowH, 179); //Hue (0 - 179)
cvCreateTrackbar("HighH", "Control", &iHighH, 179);

cvCreateTrackbar("LowS", "Control", &iLowS, 255); //Saturation (0 - 255)
cvCreateTrackbar("HighS", "Control", &iHighS, 255);

cvCreateTrackbar("LowV", "Control", &iLowV, 255); //Value (0 - 255)
cvCreateTrackbar("HighV", "Control", &iHighV, 255);

    while (true)
    {
        Mat imgOriginal;

        bool bSuccess = cap.read(imgOriginal); // read a new frame from video

         if (!bSuccess) //if not success, break loop
        {
             cout << "Cannot read a frame from video stream" << endl;
             break;
        }

        Mat imgHSV;

        cvtColor(imgOriginal, imgHSV, COLOR_BGR2HSV); //Convert the captured frame from BGR to HSV

        Mat imgThresholded;

        inRange(imgHSV, Scalar(iLowH, iLowS, iLowV), Scalar(iHighH, iHighS, iHighV), imgThresholded); //Threshold the image

        //morphological opening (remove small objects from the foreground)
        erode(imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)) );
        dilate( imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)) );

        //morphological closing (fill small holes in the foreground)
        dilate( imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)) );
        erode(imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)) );

        imshow("Thresholded Image", imgThresholded); //show the thresholded image
        imshow("Original", imgOriginal); //show the original image

        if (waitKey(30) == 27) //wait for 'esc' key press for 30ms. If 'esc' key is pressed, break loop
        {
            cout << "esc key is pressed by user" << endl;
            break;
        }
    }

   return 0;

}
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

You can download this OpenCV Visual C++ project from here.






Explanation


OpenCV usually captures images and videos in 8-bit, unsigned integer, BGR format. In other words, a captured image can be considered as 3 matrices: BLUE, GREEN and RED (hence the name BGR), with integer values ranging from 0 to 255.

The following image shows how a color image is represented using 3 matrices.


How a BGR image is formed using 3 matrices which represent the blue, green and red planes
In the above image, each small box represents a pixel of the image. In real images, these pixels are so small that the human eye cannot differentiate them.

One might think that the BGR color space is well suited for color based segmentation, but the HSV color space is usually the most suitable color space for color based image segmentation. So, in the above application, I have converted the color space of the original video frames from BGR to HSV.

The HSV color space also consists of 3 matrices: HUE, SATURATION and VALUE. In OpenCV, the value ranges for HUE, SATURATION and VALUE are 0-179, 0-255 and 0-255 respectively. HUE represents the color, SATURATION represents the amount to which that color is mixed with white and VALUE represents the amount to which that color is mixed with black.
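If you are unsure what HSV values a given color corresponds to, a quick way to check is to convert a 1x1 BGR image with cvtColor and print the result. This is a minimal sketch; the sample BGR value (a strong red) is just an illustration and not part of the original application.

////////////////////////////////////////////////////////////////
#include <iostream>
#include "opencv2/imgproc/imgproc.hpp"

using namespace cv;
using namespace std;

int main()
{
    // One sample BGR color (B=40, G=30, R=200) - roughly a strong red; change it to probe other colors
    Mat bgr(1, 1, CV_8UC3, Scalar(40, 30, 200));

    Mat hsv;
    cvtColor(bgr, hsv, COLOR_BGR2HSV); // same conversion used in the program above

    Vec3b p = hsv.at<Vec3b>(0, 0);
    cout << "H = " << (int)p[0]    // 0 - 179 in OpenCV
         << ", S = " << (int)p[1]  // 0 - 255
         << ", V = " << (int)p[2]  // 0 - 255
         << endl;
    return 0;
}
////////////////////////////////////////////////////////////////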
 
In the above application, I have considered that the red object has HUE, SATURATION and VALUE in between 170-179, 150-255 and 60-255 respectively. Here the HUE is unique for the specific color distribution of that object, but SATURATION and VALUE may vary according to the lighting conditions of the environment.

Hue values of basic colors
    • Orange  0-22
    • Yellow 22- 38
    • Green 38-75
    • Blue 75-130
    • Violet 130-160
    • Red 160-179
These are approximate values. You have to find the exact range of HUE values according to the color of your object. I found that the range of 170-179 is perfect for the hue values of my object. The SATURATION and VALUE depend on the lighting conditions of the environment as well as the surface of the object.

How to find the exact range of HUE, SATURATION and VALUE for an object is discussed later in this post.

After thresholding the image, you'll see small isolated white patches here and there. They may be caused by noise in the image, or by small objects which happen to have the same color as our main object. These unnecessary small white patches can be eliminated by applying morphological opening. Morphological opening can be achieved by an erosion, followed by a dilation with the same structuring element.

The thresholded image may also have small holes in the main object here and there, again usually because of noise in the image. These unnecessary small holes can be eliminated by applying morphological closing. Morphological closing can be achieved by a dilation, followed by an erosion with the same structuring element.
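As a side note, the same opening and closing can be written with OpenCV's combined morphology function. This is a sketch of a drop-in replacement for the four erode/dilate calls inside the loop above, assuming imgThresholded already holds the binary image produced by inRange:

////////////////////////////////////////////////////////////////
// Equivalent to the erode-then-dilate (opening) and dilate-then-erode (closing)
// calls above, using the same 5x5 elliptical structuring element
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(5, 5));
morphologyEx(imgThresholded, imgThresholded, MORPH_OPEN, kernel);  // remove small white patches
morphologyEx(imgThresholded, imgThresholded, MORPH_CLOSE, kernel); // fill small holes
////////////////////////////////////////////////////////////////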

Now let's discuss new OpenCV methods in the above application.
  • void inRange(InputArray src, InputArray lowerb, InputArray upperb, OutputArray dst);
Checks whether each element of 'src' lies between 'lowerb' and 'upperb'. If so, the respective location of 'dst' is assigned '255', otherwise '0'. (Pixels with value 255 are shown as white whereas pixels with value 0 are shown as black.) A small self-contained example follows the argument list below.

Arguments -
    • InputArray src - Source image
    • InputArray lowerb - Inclusive lower boundary (if lowerb=Scalar(x, y, z), a pixel whose HUE, SATURATION or VALUE is lower than x, y or z respectively becomes a black pixel in the dst image)
    • InputArray upperb - Inclusive upper boundary (if upperb=Scalar(x, y, z), a pixel whose HUE, SATURATION or VALUE is greater than x, y or z respectively becomes a black pixel in the dst image)
    • OutputArray dst -  Destination image (should have the same size as the src image and should be 8-bit unsigned integer, CV_8U)
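Here is the small example mentioned above: a sketch (with made-up HSV values, not taken from the original post) showing that inRange keeps only the pixel whose three channels all fall inside the bounds.

////////////////////////////////////////////////////////////////
#include <iostream>
#include "opencv2/imgproc/imgproc.hpp"

using namespace cv;
using namespace std;

int main()
{
    // Two hand-made HSV pixels: the first lies inside the red range used later, the second does not
    Mat hsv(1, 2, CV_8UC3);
    hsv.at<Vec3b>(0, 0) = Vec3b(175, 200, 150); // H=175, S=200, V=150
    hsv.at<Vec3b>(0, 1) = Vec3b(60, 200, 150);  // H=60 (green), S=200, V=150

    Mat mask;
    inRange(hsv, Scalar(170, 150, 60), Scalar(179, 255, 255), mask);

    cout << (int)mask.at<uchar>(0, 0) << endl; // 255 : all three channels within the bounds
    cout << (int)mask.at<uchar>(0, 1) << endl; // 0   : hue is outside 170-179
    return 0;
}
////////////////////////////////////////////////////////////////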

  • void erode( InputArray src, OutputArray dst, InputArray kernel, Point anchor=Point(-1,-1), int iterations=1, int borderType=BORDER_CONSTANT, const Scalar& borderValue=morphologyDefaultBorderValue() )
This function erodes the source image and stores the result in the destination image. In-place processing is supported (which means you can use the same variable for the source and destination image). If the source image is multi-channel, all channels are processed independently and the results are stored in the destination image as separate channels.

Arguments -
    • InputArray src - Source image
    • OutputArray dst - Destination image (should have the same size and type as the source image)
    • InputArray kernel - Structuring element which is used to erode the source image
    • Point anchor - Position of the anchor within the kernel. If it is Point(-1, -1), the center of the kernel is taken as the position of anchor
    • int iterations - Number of times erosion is applied
    • int borderType - Pixel extrapolation method in a boundary condition
    • const Scalar& borderValue - Value of the pixels in a boundary condition if borderType = BORDER_CONSTANT


  • void dilate( InputArray src, OutputArray dst, InputArray kernel, Point anchor=Point(-1,-1), int iterations=1, int borderType=BORDER_CONSTANT, const Scalar& borderValue=morphologyDefaultBorderValue() );
This function dilates the source image and stores the result in the destination image. In-place processing is supported (which means you can use the same variable for the source and destination image). If the source image is multi-channel, all channels are processed independently and the results are stored in the destination image as separate channels. A small demonstration of erosion and dilation follows the argument list below.

    • InputArray src - Source image
    • OutputArray dst - Destination image (should have the same size and the type as the source image)
    • InputArray kernel - Structuring element which is used to dilate the source image
    • Point anchor - Position of the anchor within the kernel. If it is Point(-1, -1), the center of the kernel is taken as the position of anchor
    • int iterations - Number of times dilation is applied
    • int borderType - Pixel extrapolation method in a boundary condition
    • const Scalar& borderValue - Value of the pixels in a boundary condition if borderType = BORDER_CONSTANT
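Here is the small demonstration mentioned above: a sketch with a hand-built 7x7 mask (an illustration, not part of the original application) showing that an erosion followed by a dilation, i.e. morphological opening, removes an isolated white pixel while restoring the larger blob.

////////////////////////////////////////////////////////////////
#include <iostream>
#include "opencv2/imgproc/imgproc.hpp"

using namespace cv;
using namespace std;

int main()
{
    // 7x7 mask: a 3x3 white blob (the "object") plus one isolated white pixel (the "noise")
    Mat mask = Mat::zeros(7, 7, CV_8UC1);
    mask(Rect(1, 1, 3, 3)) = Scalar(255);
    mask.at<uchar>(5, 5) = 255;

    Mat kernel = getStructuringElement(MORPH_RECT, Size(3, 3));

    Mat opened;
    erode(mask, opened, kernel);    // the blob shrinks to its center pixel, the lone pixel disappears
    dilate(opened, opened, kernel); // the blob grows back to 3x3, the noise pixel stays gone

    cout << "before:" << endl << mask << endl;
    cout << "after opening:" << endl << opened << endl;
    return 0;
}
////////////////////////////////////////////////////////////////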

  • void cvtColor( InputArray src, OutputArray dst, int code, int dstCn=0 )
This function converts a source image from one color space to another. In-place processing is supported (which means you can use the same variable for the source and destination image).
    • InputArray src - Source image
    • OutputArray dst - Destination image (should have the same size and the depth as the source image)
    • int code - Color space conversion code (e.g - COLOR_BGR2HSV, COLOR_RGB2HSV, COLOR_BGR2GRAY, COLOR_BGR2YCrCb, COLOR_BGR2BGRA, etc)
    • int dstCn - Number of channels in the destination image. If it is 0, number of channels is derived automatically from the source image and the color conversion code.

All other OpenCV methods in the above application have been discussed in earlier OpenCV tutorials.




Simple Example of Tracking Red objects


In the previous example, I showed you how to detect a color object. In the following example, I'll show you how to track a color object. There are 3 steps involved in this task.

  1. Detect the object
  2. Find the exact position (x, y coordinates) of the object
  3. Draw a line along the trajectory of the object
Here is how it is done with OpenCV / C++.

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"

using namespace cv;
using namespace std;

 int main( int argc, char** argv )
 {
    VideoCapture cap(0); //capture the video from webcam

    if ( !cap.isOpened() )  // if not success, exit program
    {
         cout << "Cannot open the web cam" << endl;
         return -1;
    }

    namedWindow("Control", CV_WINDOW_AUTOSIZE); //create a window called "Control"

int iLowH = 170;
int iHighH = 179;

int iLowS = 150; 
int iHighS = 255;

int iLowV = 60;
int iHighV = 255;

//Create trackbars in "Control" window
createTrackbar("LowH", "Control", &iLowH, 179); //Hue (0 - 179)
createTrackbar("HighH", "Control", &iHighH, 179);

createTrackbar("LowS", "Control", &iLowS, 255); //Saturation (0 - 255)
createTrackbar("HighS", "Control", &iHighS, 255);

createTrackbar("LowV", "Control", &iLowV, 255);//Value (0 - 255)
createTrackbar("HighV", "Control", &iHighV, 255);

int iLastX = -1; 
int iLastY = -1;

//Capture a temporary image from the camera
Mat imgTmp;
cap.read(imgTmp); 

//Create a black image with the size as the camera output
Mat imgLines = Mat::zeros( imgTmp.size(), CV_8UC3 );;


    while (true)
    {
        Mat imgOriginal;

        bool bSuccess = cap.read(imgOriginal); // read a new frame from video



         if (!bSuccess) //if not success, break loop
        {
             cout << "Cannot read a frame from video stream" << endl;
             break;
        }

        Mat imgHSV;

        cvtColor(imgOriginal, imgHSV, COLOR_BGR2HSV); //Convert the captured frame from BGR to HSV

        Mat imgThresholded;

        inRange(imgHSV, Scalar(iLowH, iLowS, iLowV), Scalar(iHighH, iHighS, iHighV), imgThresholded); //Threshold the image

        //morphological opening (removes small objects from the foreground)
        erode(imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)) );
        dilate( imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)) );

        //morphological closing (removes small holes from the foreground)
        dilate( imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)) );
        erode(imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)) );

        //Calculate the moments of the thresholded image
        Moments oMoments = moments(imgThresholded);

        double dM01 = oMoments.m01;
        double dM10 = oMoments.m10;
        double dArea = oMoments.m00;

        // if the area <= 10000, I consider that there is no object in the image,
        // and any non-zero area is just noise
        if (dArea > 10000)
        {
            //calculate the position of the ball
            int posX = dM10 / dArea;
            int posY = dM01 / dArea;

            if (iLastX >= 0 && iLastY >= 0 && posX >= 0 && posY >= 0)
            {
                //Draw a red line from the previous point to the current point
                line(imgLines, Point(posX, posY), Point(iLastX, iLastY), Scalar(0,0,255), 2);
            }

            iLastX = posX;
            iLastY = posY;
        }

        imshow("Thresholded Image", imgThresholded); //show the thresholded image

        imgOriginal = imgOriginal + imgLines;
        imshow("Original", imgOriginal); //show the original image

        if (waitKey(30) == 27) //wait for 'esc' key press for 30ms. If 'esc' key is pressed, break loop
        {
            cout << "esc key is pressed by user" << endl;
            break;
        }
    }

   return 0;
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// 

You can download this OpenCV Visual C++ project from here.

Object tracking using the color based method with OpenCV



Explanation


In this application, I use moments to calculate the position of the center of the object. We have to calculate the 1st order spatial moments about the x-axis and y-axis, and the 0th order moment of the binary image.

The 0th order moment of the binary image is proportional to the white area of the image: since the thresholded pixels here are 0 or 255 rather than 0 or 1, m00 is 255 times the number of white pixels.



  • X coordinate of the center of the object  =  1st order spatial moment about the x-axis (m10) / 0th order moment (m00)
  • Y coordinate of the center of the object  =  1st order spatial moment about the y-axis (m01) / 0th order moment (m00)
If there are 2 or more objects in the image, we cannot use this method. The noise in the binary image should also be at a minimum level to get accurate results.

In the above application, I considered that if dArea (the 0th order moment of the thresholded image) is less than or equal to 10000, there is no object in the frame, because my object is expected to produce a much larger value than that; anything smaller is treated as noise.
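To see the formulas in action, here is a small sketch (a hand-made mask, not part of the original application) that computes the centroid of a white square from its moments. Note that it passes binaryImage=true so that m00 counts pixels directly, whereas the application above calls moments() on a 0/255 image without that flag.

////////////////////////////////////////////////////////////////
#include <iostream>
#include "opencv2/imgproc/imgproc.hpp"

using namespace cv;
using namespace std;

int main()
{
    // 100x100 black image with a 20x20 white square; its center is at (39.5, 59.5)
    Mat mask = Mat::zeros(100, 100, CV_8UC1);
    mask(Rect(30, 50, 20, 20)) = Scalar(255); // x = 30..49, y = 50..69

    Moments m = moments(mask, true); // binaryImage=true: non-zero pixels count as 1

    double posX = m.m10 / m.m00; // 1st order moment about x-axis / 0th order moment
    double posY = m.m01 / m.m00; // 1st order moment about y-axis / 0th order moment

    cout << "white area (pixels): " << m.m00 << endl;             // 400
    cout << "centroid: (" << posX << ", " << posY << ")" << endl; // (39.5, 59.5)
    return 0;
}
////////////////////////////////////////////////////////////////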


Now, let's discuss new OpenCV methods that can be found in the above application.



  • Moments moments( InputArray array, bool binaryImage=false )
This OpenCV function calculates all of the moments (spatial, central and normalized central) up to the third order and returns a Moments object with the results.
    • InputArray array - Single channel image
    • bool binaryImage - If this is true, all non zero pixels are considered as ones when calculating moments.

  • void line(Mat& img, Point pt1, Point pt2, const Scalar& color, int thickness=1, int lineType=8, int shift=0)
This function draws a line between two points on a given image.
    • Mat& img - Image on which you want to draw the line
    • Point pt1 - First point of the line segment
    • Point pt2 - Other point of the line segment
    • const Scalar& color - Color of the line (values of Blue, Green and Red colors respectively)
    • int thickness - Thickness of the line in pixels

  • static MatExpr zeros(Size size, int type)
This function returns a black image (all pixel values zero) with a given size and type.
    • Size size - Size of the required image ( Size(No of columns, No of rows) )
    • int type - Type of the image (e.g - CV_8UC1, CV_32FC4, CV_8UC3, etc)





How to Find Exact Range for 'Hue', 'Saturation' and 'Value' for a Given Object



How to adjust ranges of H, S, V to detect the object with minimum noise with OpenCV



Finding the optimum HUE, SATURATION and VALUE ranges for an object is a 4-step process.


  1. Place the track bars in a separate window so that the ranges for HUE, SATURATION and VALUE can be adjusted, and set the initial ranges for HUE, SATURATION and VALUE to 0-179, 0-255 and 0-255 respectively. With these full ranges you will see a completely white image in the thresholded-image window.
  2. First, adjust the 'LowH' and 'HighH' track bars so that the gap between 'LowH' and 'HighH' is minimized. Be careful that the white area in the thresholded-image window that represents the object is not affected while you minimize the gap.
  3. Repeat step 2 for the 'LowS' and 'HighS' trackbars
  4. Repeat step 2 for the 'LowV' and 'HighV' trackbars


Now you can find the optimum HUE, SATURATION and VALUE ranges for the object. It is 163-179, 126-217 and 68-127 in my case, as you can see in the picture below.


H, S and V are properly adjusted to detect the object with lesser noise with OpenCV
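If you want to record the ranges you settle on, one option (a sketch, assuming the trackbar variables iLowH ... iHighV from the tracking program above) is to extend the existing waitKey check so that pressing 's' prints the current values:

////////////////////////////////////////////////////////////////
// Replaces the existing waitKey(30) check inside the while loop
int key = waitKey(30);
if (key == 's') // press 's' to print the currently selected ranges
{
    cout << "H: " << iLowH << "-" << iHighH
         << "  S: " << iLowS << "-" << iHighS
         << "  V: " << iLowV << "-" << iHighV << endl;
}
else if (key == 27) // 'esc' key: exit the loop
{
    cout << "esc key is pressed by user" << endl;
    break;
}
////////////////////////////////////////////////////////////////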








Next Tutorial : Object Detection & Shape Recognition using Contours

Previous Tutorial : Rotate Image & Video

