Vehicle Detection with OpenCV using Contours + Background Subtraction


In this video we will explore how you can perform tasks like vehicle detection using a simple yet effective approach of background-foreground subtraction. You will learn to use background-foreground subtraction along with contour detection in OpenCV, and how to tune different parameters to achieve better results.


Contour Detection 101: Contour Analysis (Pt:3)


So far in our Contour Detection 101 series, we have made significant progress unpacking many of the techniques and tools that you will need to build useful vision applications. In Part 1, we learned the basics of detecting and drawing contours, and in Part 2 we learned to perform some contour manipulations.

Now, in the third part of this series, we will be learning about analyzing contours. This is really important, because contour analysis lets you actually recognize the object being detected and differentiate one contour from another. We will also explore how you can identify different properties of contours to retrieve useful information. Once you start analyzing the contours, you can do all sorts of cool things with them. The application below, which I made, uses contour analysis to detect the shapes being drawn!

You can build this too! In fact, I have an entire course that will help you master contours for building computer vision applications, where you learn by building all sorts of cool vision applications!

This post will be the third part of the Contour Detection 101 series. All 4 posts in the series are titled as:

  1. Contour Detection 101: The Basics  
  2. Contour Detection 101: Contour Manipulation
  3. Contour Detection 101: Contour Analysis (This Post) 
  4. Vehicle Detection with OpenCV using Contours + Background Subtraction

So if you haven’t seen any of the previous posts, make sure you check them out, since in this part we are going to build upon what we have learned before, and it will be helpful to have the basics straightened out if you are new to contour detection.

Alright, now we can get started with the Code.


Import the Libraries

Let’s start by importing the required libraries.
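
The notebook’s import cell isn’t reproduced here, so below is a minimal sketch of the imports the rest of the code relies on: OpenCV for the vision functions, NumPy for array math, and Matplotlib for display.

import cv2
import numpy as np
import matplotlib.pyplot as plt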

Read an Image

Next, let’s read an image containing a bunch of shapes.
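
A minimal sketch of reading and displaying the image; 'media/shapes.png' is a placeholder path, so substitute the image that ships with the code download.

# Placeholder path; use the sample image bundled with this post's code.
image1 = cv2.imread('media/shapes.png')

plt.figure(figsize=[10, 10])
plt.imshow(image1[:, :, ::-1])  # convert BGR to RGB for Matplotlib
plt.axis('off')
plt.show()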

Detect and draw Contours

Next, we will detect and draw external contours on the image using the cv2.findContours() and cv2.drawContours() functions, which we have discussed thoroughly in the previous posts.
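
A sketch of that step, assuming the shapes appear as light objects on a dark background (otherwise, swap in cv2.THRESH_BINARY_INV or cv2.bitwise_not() as discussed in Part 1):

# Pre-process into a single-channel binary image, then grab external contours.
gray = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)

# Draw all detected contours (index -1) in green with a thickness of 3.
image1_copy = image1.copy()
cv2.drawContours(image1_copy, contours, -1, (0, 255, 0), 3)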

The result is a list of detected contours that can now be further analyzed for their properties. These properties are going to prove to be really useful when we build a vision application using contours. We will use them to provide us with valuable information about an object in the image and distinguish it from the other objects.

Below we will look at how you can retrieve some of these properties.

Image Moments

Image moments are weighted averages of the image pixels’ intensities. They help calculate features like the center of mass of an object, the area of an object, etc. Finding image moments is a simple process in OpenCV: the function cv2.moments() returns a dictionary of the various moment properties to use.

Function Syntax:

retval = cv.moments(array)

Parameters:

  • array – Single-channel, 8-bit or floating-point 2D array

Returns:

  • retval – A python dictionary containing different moments properties
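
For example, computing the moments of the first detected contour (cv2.moments() accepts a contour’s point set just as well as an image array):

M = cv2.moments(contours[0])
print(M)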

{'m00': 28977.5, 'm10': 4850112.666666666, 'm01': 15004570.666666666, 'm20': 878549048.4166666, 'm11': 2511467783.458333, 'm02': 7836261882.75, 'm30': 169397190630.30002, 'm21': 454938259986.68335, 'm12': 1311672140996.85, 'm03': 4126888029899.3003, 'mu20': 66760837.58548939, 'mu11': 75901.88486719131, 'mu02': 66884231.43453884, 'mu30': 1727390.3746643066, 'mu21': -487196.02967071533, 'mu12': -1770390.7230567932, 'mu03': 495214.8310546875, 'nu20': 0.07950600793808808, 'nu11': 9.03921532296414e-05, 'nu02': 0.07965295864597088, 'nu30': 1.2084764986041665e-05, 'nu21': -3.408407043976586e-06, 'nu12': -1.238559397771768e-05, 'nu03': 3.4645063088656135e-06}

The values returned represent different kinds of image moments, including raw moments, central moments, scale/rotation invariant moments, and so on.

For more information on image moments and how they are calculated, you can read this Wikipedia article. Below we will discuss how some of the image moments can be used to analyze the contours detected.

Find the center of a contour

Let’s start by finding the centroid of the object in the image using the contour’s image moments. The X and Y coordinates of the centroid are given by two ratios of the raw image moments: Cx = m10/m00 and Cy = m01/m00.
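
A sketch of that calculation, reusing the moments dictionary M computed above:

Cx = int(M['m10'] / M['m00'])
Cy = int(M['m01'] / M['m00'])
print('Centroid: ({},{})'.format(Cx, Cy))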

Centroid: (167,517)

Let’s repeat the process for the rest of the contours detected and draw a circle using cv2.circle() to indicate the centroids on the image.
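
A sketch of that loop; the check on m00 avoids division errors on degenerate contours:

image1_copy = image1.copy()

for contour in contours:
    M = cv2.moments(contour)
    if M['m00'] != 0:  # skip degenerate contours with zero area
        Cx, Cy = int(M['m10'] / M['m00']), int(M['m01'] / M['m00'])
        cv2.circle(image1_copy, (Cx, Cy), 10, (0, 0, 255), -1)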

Finding Contour Area

We are already familiar with one way of finding the area of a contour from the last post, using the function cv2.contourArea().
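
Applied to the first contour:

area = cv2.contourArea(contours[0])
print('Area:', area)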

Area: 28977.5

Additionally, you can also find the area using the m00 moment of the contour which contains the contour’s area.
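
Reusing the moments dictionary from above:

M = cv2.moments(contours[0])
print('Area:', M['m00'])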

Area: 28977.5

As you can see, both of the methods give the same result.

Contour Properties

When building an application using contours, information about the properties of a contour is vital. These properties are often invariant to one or more transformations such as translation, scaling, and rotation. Below, we will have a look at some of these properties.

Let’s start by detecting the external contours of an image.

Now, using a custom transform() function from the transformation.py module, which you will find included with the code for this post, we can conveniently apply and display different transformations to an image.

Function Syntax:

transformations.transform(translate=True, scale=False, rotate=False, path='media/sword.jpg', display=True)

By default, only translation is applied but you may scale and rotate the image as well.

Applied Translation of x: 44, y: 30
Applied rotation of angle: 80
Image resized to: 95.0%

Aspect ratio

Aspect ratio is the ratio of width to height of the bounding rectangle of an object. It can be calculated as AR=width/height. This value is always invariant to translation.
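
A sketch of the calculation; here contour is assumed to hold the external contour detected on the sword image from the previous step:

# Width and height come from the straight bounding rectangle of the contour.
x, y, w, h = cv2.boundingRect(contour)
aspect_ratio = w / h
print('Aspect ratio initially', aspect_ratio)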

Aspect ratio initially 0.9442231075697212
Applied Translation of x: -45 , y: -49
Aspect ratio After Modification 0.9442231075697212

Extent 

Another useful property is the extent of a contour which is the ratio of contour area to its bounding rectangle area. Extent is invariant to Translation & Scaling.

To find the extent we start by calculating the contour area for the selected contour using the function cv2.contourArea(). Next, the bounding rectangle is found using cv2.boundingRect(). The area of the bounding rectangle is calculated using rect_area = width × height. Finally, the extent is then calculated as extent = contour_area / rect_area.
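
Putting those steps together (again assuming contour holds the sword contour):

contour_area = cv2.contourArea(contour)
x, y, w, h = cv2.boundingRect(contour)
rect_area = w * h
extent = contour_area / rect_area
print('Extent initially', extent)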

Extent initially 0.2404054667406324
Applied Translation of x: 38 , y: 44
Image resized to: 117.0%
Extent After Modification 0.24218788234718347

Equivalent Diameter 

Equivalent Diameter is the diameter of the circle whose area is the same as the contour area. It is invariant to Translation & Rotation. The equivalent diameter can be calculated by first getting the area of the contour with cv2.contourArea(); the area of a circle is given by area = π × d²/4, where d is the diameter of the circle.

So to find the diameter we just have to make d the subject in the above equation, giving us: d = √(4 × area/π).
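
In code (NumPy supplies the square root and pi):

area = cv2.contourArea(contour)
equi_diameter = np.sqrt(4 * area / np.pi)
print('Equi diameter initially', equi_diameter)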

Equi diameter initially 134.93924087995146
Applied Translation of x: -39 , y: 38
Applied rotation of angle: 38
Equi diameter After Modification 135.06184863765444

Orientation 

Orientation is simply the angle at which an object is rotated.

Applied Translation of x: 48 , y: -37
Applied rotation of angle: 176

Now let’s take a look at the elliptical angle of the sword contour above, obtained by fitting an ellipse to it.
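
A sketch using cv2.fitEllipse(), which needs a contour of at least five points and returns the same (center, axes, angle) layout as cv2.minAreaRect():

(x, y), (MA, ma), angle = cv2.fitEllipse(contour)
print('Elliptical Angle is', angle)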

Elliptical Angle is 46.882904052734375

The method below also gives the angle of the contour, by fitting a rotated rectangle instead of an ellipse.
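
A sketch using cv2.minAreaRect():

(x, y), (w, h), angle = cv2.minAreaRect(contour)
print('RotatedRect Angle is', angle)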

RotatedRect Angle is 45.0

Note: Don’t be confused by the fact that all three angles show different results; they are each calculated differently. For example, fitting an ellipse gives the angle that the ellipse makes, while the rotated rectangle gives the angle that the rectangle makes. For triggering decisions based on the calculated angle, you would first need to find out what angle the respective method reports at the given orientations of the object.

Hu moments

Hu moments are a set of 7 numbers calculated using the central moments. What makes these 7 moments special is the fact that out of these 7 moments, the first 6 of the Hu moments are invariant to translation, scaling, rotation and reflection. The 7th Hu moment is also invariant to these transformations, except that it changes its sign in case of reflection. Below we will calculate the Hu moments for the sword contour, using the moments of the contour.

You can read this paper if you want to know more about hu-moments and how they are calculated.
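
A sketch of the calculation; cv2.HuMoments() takes the moments dictionary returned by cv2.moments() and returns the seven values as a 7x1 array:

M = cv2.moments(contour)
hu_moments = cv2.HuMoments(M)
print(hu_moments)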

[[5.69251998e-01]
[2.88541572e-01]
[1.37780830e-04]
[1.28680955e-06]
[2.45025329e-12]
[3.54895392e-07]
[1.69581763e-11]]

As you can see, the different Hu-moments have varying ranges (e.g. compare Hu-moments 1 and 7), so to make them more comparable with each other, we will transform them to log-scale and bring them all into the same range.
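
A common way to do this (a sketch, not necessarily the notebook’s exact code) is a signed log transform:

# Compress each Hu-moment to log-scale while preserving its sign.
hu_moments_log = -np.sign(hu_moments) * np.log10(np.abs(hu_moments))
print(hu_moments_log)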

Next up let’s apply transformations to the image and find the Hu-moments again.

Applied Translation of x: -31 , y: 48
Applied rotation of angle: 122
Image resized to: 87.0%

The difference is minimal because of the invariance of Hu-moments to the applied transformations.

Summary

In this post, we saw how useful contour detection can be when you analyze the detected contour for its properties, enabling you to build applications capable of detecting and identifying objects in an image.

We learned how image moments can provide us with useful information about a contour, such as the center of a contour or the area of a contour.

We also learned how to calculate different contour properties invariant to different transformations such as rotation, translation, and scaling. 

Lastly, we also explored seven unique image moments called Hu-moments which are really helpful for object detection using contours since they are invariant to translation, scaling, rotation, and reflection at once.

This concludes the third part of the series. In the next and final part of the series, we will be building a Vehicle Detection Application using many of the techniques we have learned in this series.

You can reach out to me personally for a 1 on 1 consultation session in AI/computer vision regarding your project. Our talented team of vision engineers will help you every step of the way. Get on a call with me directly here.

Ready to seriously dive into State of the Art AI & Computer Vision?
Then Sign up for these premium Courses by Bleed AI

Contour Detection 101: Contour Manipulation (Pt:2)


In the previous post of this series, we introduced you to contour detection, a popular computer vision technique that you must have in your bag of tricks when dealing with computer vision problems. We discussed how we can detect contours for an object in an image, the pre-processing steps involved to prepare the image for contour detection, and we also went over many of the useful retrieval modes that you can use when detecting contours.

But detecting and drawing contours alone won’t cut it if you want to build powerful computer vision applications. Once detected, the contours need to be processed to make them useful for building cool applications such as the one below.

This is why, in this second part of the series, we will learn how to perform different contour manipulations to extract useful information from the detected contours. Using this information, you can build applications such as the one above and many others, once you learn how to effectively use and manipulate contours. In fact, I have an entire course that will help you master contours for building computer vision applications.

This post will be the second part of our new Contour Detection 101 series. All 4 posts in the series are titled as:

  1. Contour Detection 101: The Basics  
  2. Contour Detection 101: Contour Manipulation (This Post)
  3. Contour Detection 101: Contour Analysis 
  4. Vehicle Detection with OpenCV using Contours + Background Subtraction

So if you haven’t seen the previous post and you are new to contour detection, make sure you do check it out.

In part two of the series, the contour manipulation techniques we are going to learn will enable us to perform some important tasks such as extracting the largest contour in an image, sorting contours in terms of size, extracting bounding box regions of a targeted contour, etc. These techniques also form the building blocks for building robust classical vision applications.

So without further ado, let’s get started.

Import the Libraries

Let’s start by importing the required libraries.

Read an Image
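
As before, we read a sample image to work with; the path below is a placeholder for the image bundled with this post’s code download.

image2 = cv2.imread('media/sample.jpg')  # hypothetical path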

Detect and draw Contours

Next, we will detect and draw contours on the image as before using the cv2.findContours() and cv2.drawContours() functions.

Extracting the Largest Contour in the image

When building vision applications using contours, often we are interested in retrieving the contour of the largest object in the image, discarding the others as noise. This can be done using Python’s built-in max() function with the contours list. The max() function takes in the contours list along with a key parameter, which is a single-argument function used to customize the sort order; it is applied to each item in the iterable. So for retrieving the largest contour we use the key cv2.contourArea, a function that returns the area of a contour. It is applied to each contour in the list, and the max() function then returns the largest contour based on its area.
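
A sketch, assuming contours holds the list returned by cv2.findContours() above:

# max() applies cv2.contourArea to each contour and keeps the largest one.
largest_contour = max(contours, key=cv2.contourArea)

image2_copy = image2.copy()
cv2.drawContours(image2_copy, [largest_contour], -1, (0, 255, 0), 3)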

Sorting Contours in terms of size

Extracting the largest contour worked out well, but sometimes we are interested in more than one contour from a sorted list of contours. In such cases, another built-in Python function, sorted(), can be used instead. sorted() also takes the optional key parameter, which we use as before to return the area of each contour. The contours are then sorted by area and the resulting list is returned. We also specify reverse=True, i.e. descending order of area size.
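
A sketch of sorting and, for example, drawing the three largest contours:

sorted_contours = sorted(contours, key=cv2.contourArea, reverse=True)

image2_copy = image2.copy()
for contour in sorted_contours[:3]:
    cv2.drawContours(image2_copy, [contour], -1, (0, 255, 0), 3)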

Drawing a rectangle around the contour

Bounding rectangles are often used to highlight the regions of interest in the image. If this region of interest is a detected contour of a shape in the image, we can enclose it within a rectangle. There are two types of bounding rectangles that you may draw:

  • Straight Bounding Rectangle
  • Rotated Rectangle

For both types of bounding rectangles, their vertices are calculated using the coordinates in the contour list.

Straight Bounding Rectangle

A straight bounding rectangle is simply an upright rectangle that does not take into account the rotation of the object. Its vertices can be calculated using the cv2.boundingRect() function, which calculates the vertices of the minimal upright rectangle possible using the extreme coordinates in the contour list. The coordinates can then be drawn as a rectangle using the cv2.rectangle() function.

Function Syntax:

x, y, w, h = cv2.boundingRect(array)

Parameters:

  • array – It is the input gray-scale image or 2D point set from which the bounding rectangle is to be created

Returns:

  • x – It is X-coordinate of the top-left corner
  • y – It is Y-coordinate of the top-left corner
  • w – It is the width of the rectangle
  • h – It is the height of the rectangle
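
A sketch of drawing a straight bounding rectangle around the largest contour found earlier:

x, y, w, h = cv2.boundingRect(largest_contour)
image2_copy = image2.copy()
cv2.rectangle(image2_copy, (x, y), (x + w, y + h), (0, 255, 0), 3)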

However, the bounding rectangle created is not a minimum-area bounding rectangle, often causing it to overlap with other objects in the image. But what if the rectangle were rotated so that its area is minimized to tightly fit the shape?

Rotated Rectangle

As the name suggests, the rectangle drawn is rotated at an angle so that the area it occupies to enclose the contour is minimal. The function you can use to calculate the vertices of the rotated rectangle is cv2.minAreaRect(), which uses the contour point set to calculate the coordinates of the center of the rectangle, its width and height, and its angle of rotation.

Function Syntax:

retval = cv2.minAreaRect(points)

Parameters:

  • points – The input 2D point set (e.g. a contour) from which the minimum-area bounding rectangle is to be created

Returns:

  • retval – A tuple consisting of:
    • x, y coordinates of the center of the rectangle
    • width, height of the rectangle
    • angle of rotation of the rectangle

Since the cv2.rectangle() function can only draw upright rectangles, we cannot use it to draw the rotated rectangle. Instead, we can calculate the 4 vertices of the rectangle using the cv2.boxPoints() function, which takes the rect tuple as input. Once we have the vertices, the rectangle can simply be drawn using the cv2.drawContours() function.
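
A sketch of the full flow, reusing largest_contour from earlier:

rect = cv2.minAreaRect(largest_contour)
print('rect:', rect)

box = cv2.boxPoints(rect)       # the 4 corner points, as floats
box = box.astype(np.intp)       # integer pixel coordinates for drawing
image2_copy = image2.copy()
cv2.drawContours(image2_copy, [box], 0, (0, 255, 0), 3)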

rect: ((514.5311279296875, 560.094482421875), (126.98052215576172, 291.55517578125), 53.310279846191406)

The four corner points that form the contour for the rotated rectangle:

array([[359, 596],
[593, 422],
[669, 523],
[435, 698]])

This is the minimum area bounding rectangle possible. Let’s see how it compares to the straight bounding box.

Drawing Convex Hull

A convex hull is another useful way to represent a contour on the image. The function cv2.convexHull() checks a curve for convexity defects and corrects them. A convex curve is a curve that bulges outward; where the curve bulges inward instead (is concave), it is said to have a convexity defect.

Finding the convex hull of a contour can be very useful when working with shape matching techniques. This is because the convex hull of a contour highlights its prominent features while discarding smaller variations in the shape.

Function Syntax:

hull = cv2.convexHull(points, hull, clockwise, returnPoints)

Parameters:

  • points – Input 2D point set. This is a single contour.
  • clockwise – Orientation flag. If it is true, the output convex hull is oriented clockwise. Otherwise, it is oriented counter-clockwise. The assumed coordinate system has its X axis pointing to the right, and its Y axis pointing upwards.
  • returnPoints – Operation flag. In case of a matrix, when the flag is true, the function returns convex hull points. Otherwise, it returns indices of the convex hull points. By default it is True.

Returns:

  • hull – Output convex hull. It is either an integer vector of indices or vector of points. In the first case, the hull elements are 0-based indices of the convex hull points in the original array (since the set of convex hull points is a subset of the original point set). In the second case, hull elements are the convex hull points themselves.
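
A sketch of finding and drawing the hull of the largest contour (only the points parameter is needed for typical use):

hull = cv2.convexHull(largest_contour)
image2_copy = image2.copy()
cv2.drawContours(image2_copy, [hull], -1, (0, 255, 0), 3)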

Summary

In today’s post, we familiarized ourselves with many of the contour manipulation techniques. These manipulations help make contour detection an even more useful tool, applicable to a wider variety of problems.

We saw how you can filter out contours based on their size using the built-in max() and sorted() functions, allowing you to extract the largest contour or a list of contours sorted by size.

Then we learned how you can highlight contour detections with bounding rectangles of two types, the straight bounding rectangle and the rotated one.

Lastly, we learned about convex hulls and how you can find them for detected contours. A very handy technique that can prove to be useful for object detection with shape matching.

This sums up the second post in the series. In the next post, we’re going to learn various analytical techniques to analyze contours in order to classify and distinguish between different Objects.

So if you found this lesson useful, let me know in the comments and you can also support me and the Bleed AI team on patreon here.

If you need 1 on 1 coaching in AI/computer vision regarding your project or your career, then you can reach out to me personally here.

Contour Detection 101 : The Basics (Pt:1)


Let’s say you have been tasked to design a system capable of detecting vehicles on a road such as the one below,

Surely, this seems like a job for a Deep Neural Network, right?

This means you have to get a lot of data, label them, then train a model, tune its performance and serve it on a system capable of running it in real-time.

This sounds like a lot of work and it begs the question: Is deep learning the only way to do so?

What if I tell you that you can solve this problem and many others like this using a much simpler approach, and with this approach, you will not have to spend days collecting data or training deep learning models. And the approach is also lightweight so it will run on almost any machine! 

Alright, what exactly are we talking about here? 

This simpler approach relies on a popular Computer Vision technique called Contour Detection. A handy technique that can save the day when dealing with vision problems such as the one above. Although not as generalizable as Deep Neural Networks, contour detection can prove robust under controlled circumstances, requiring minimum time investment and effort to build vision applications.

Just to give you a glimpse of what you can build with contour detection, have a look at some of the applications that I made using contour detection.

And there are many other applications that you can build too, once you learn what contour detection is and how to use it. In fact, I have an entire course that will help you master contours for building computer vision applications.

Since contours is a lengthy topic, I’ll be breaking it down into a 4-part series. This is the first part, where we discuss the basics of contour detection in OpenCV, how to use it, and go over the various preprocessing techniques required for contour detection.

All 4 posts in the series are titled as:

  1. Contour Detection 101: The Basics  (This Post)
  2. Contour Detection 101: Contour Manipulation
  3. Contour Detection 101: Contour Analysis 
  4. Vehicle Detection with OpenCV using Contours + Background Subtraction

So without further ado, let’s start.


What is a Contour?

A contour can be simply defined as a curve that joins a set of points enclosing an area having the same color or intensity. This area of uniform color or intensity forms the object that we are trying to detect, and the curve enclosing this area is the contour representing the shape of the object. So essentially, contour detection works similarly to edge-detection but with the restriction that the edges detected must form a closed path.

Still confused? Just have a look at the GIF below. You can see four shapes, which form at the pixel level when certain pixels share the same color, distinct from the background color. Contour detection identifies each of the border pixels that are distinct from the background, forming an enclosed, continuous path of pixels that makes up the line representing the contour.

Now, let’s see how this works in OpenCV.

Import the Libraries

Let’s start by importing the required libraries.

Read an Image

Next, we will read an image below containing a bunch of shapes for which we will find the contours.
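
A minimal sketch; the path is a placeholder, and the flag 0 loads the image as a single-channel gray-scale image:

image = cv2.imread('media/shapes.png', 0)  # hypothetical path; 0 loads gray-scale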

OpenCV saves us the trouble of writing lengthy algorithms for contour detection and provides a handy function findContours() that analyzes the topological structure of the binary image by border following, a contour detection technique developed in 1985.

The findContours() function takes a binary image as input. The foreground is assumed to be white, and the background is assumed to be black. If that is not the case, then you can invert the image pixels using the cv2.bitwise_not() function.

Function Syntax:

contours, hierarchy = cv2.findContours(image, mode, method, contours, hierarchy, offset)

Parameters:

  • image – It is the input image (8-bit single-channel). Non-zero pixels are treated as 1’s. Zero pixels remain 0’s, so the image is treated as binary. You can use compare, inRange, threshold, adaptiveThreshold, Canny, and others to create a binary image out of a grayscale or color one.
  • mode – It is the contour retrieval mode, ( RETR_EXTERNAL, RETR_LIST, RETR_CCOMP, RETR_TREE )
  • method – It is the contour approximation method. ( CHAIN_APPROX_NONE, CHAIN_APPROX_SIMPLE, CHAIN_APPROX_TC89_L1, etc )
  • offset – It is the optional offset by which every contour point is shifted. This is useful if the contours are extracted from the image ROI, and then they should be analyzed in the whole image context.

Returns:

  • contours – It is the detected contours. Each contour is stored as a vector of points.
  • hierarchy – It is the optional output vector containing information about the image topology. It has been described in detail in the video above.

We will go through all the important parameters in a while. For now, let’s detect some contours in the image that we read above.

Since the image read above contains only a single channel instead of three, and even that channel is already in a binary state (black & white), it can be passed directly to the findContours() function without requiring any preprocessing.
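
A sketch of that call; image is assumed to hold the binary shapes image read above:

contours, hierarchy = cv2.findContours(image, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
print('Number of contours found = {}'.format(len(contours)))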

Number of contours found = 5

Visualizing the Contours detected

As you can see the cv2.findContours() function was able to correctly detect the 5 external shapes in the image. But how do we know that the detections were correct? Thankfully we can easily visualize the detected contours using the cv2.drawContours() function which simply draws the detected contours onto an image.

Function Syntax:

cv2.drawContours(image, contours, contourIdx, color, thickness, lineType, hierarchy, maxLevel, offset)

Parameters:

  • image – It is the image on which contours are to be drawn.
  • contours – It is point vector(s) representing the contour(s). It is usually an array of contours.
  • contourIdx – It is the parameter, indicating a contour to draw. If it is negative, all the contours are drawn.
  • color – It is the color of the contours.
  • thickness – It is the thickness of lines the contours are drawn with. If it is negative (for example, thickness=FILLED ), the contour interiors are drawn.
  • lineType – It is the type of line. You can find the possible options here.
  • hierarchy – It is the optional information about hierarchy. It is only needed if you want to draw only some of the contours (see maxLevel ).
  • maxLevel – It is the maximal level for drawn contours. If it is 0, only the specified contour is drawn. If it is 1, the function draws the contour(s) and all the nested contours. If it is 2, the function draws the contours, all the nested contours, all the nested-to-nested contours, and so on. This parameter is only taken into account when there is a hierarchy available.
  • offset – It is the optional contour shift parameter. Shift all the drawn contours by the specified offset=(dx, dy).

Let’s see how it works.
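
A sketch; the single-channel image is first converted to three channels so the contours can be drawn in color:

image_copy = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
cv2.drawContours(image_copy, contours, -1, (0, 255, 0), 3)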

This seems to have worked nicely. But again, this is a pre-processed image that is easy to work with, which will not be the case when working with a real-world problem where you will have to pre-process the image before detecting contours. So let’s have a look at some common pre-processing techniques below.

Pre-processing images For Contour Detection

As you have seen above, the cv2.findContours() function takes in as input a single-channel binary image; however, in most cases the original image will not be binary. Detecting contours in colored images requires pre-processing to produce a single-channel binary image that can then be used for contour detection. Ideally, this processed image should have the target objects in the foreground.

The two most commonly used techniques for this pre-processing are:

  • Thresholding based Pre-processing
  • Edge Based Pre-processing

Below we will see how you can accurately detect contours using these techniques.

Thresholding based Pre-processing For Contours

So to detect contours in colored images, we can perform fixed-level image thresholding to produce a binary image that can then be used for contour detection. Let’s see how this works.

First, read a sample image using the function cv2.imread() and display the image using the matplotlib library.

We will try to find the contour for the tree above, first without using any thresholding, to see what the result looks like.

The first step is to convert the 3-channel BGR image to a single-channel image using the cv2.cvtColor() function.

Contour detection on the image above will only result in a contour outlining the edges of the image. This is because the cv2.findContours() function expects the foreground to be white, and the background to be black, which is not the case above so we need to invert the colors using cv2.bitwise_not().

The inverted image with a black background and white foreground can now be used for contour detection.

The result is far from ideal. As you can see, the detected contours align poorly with the boundary of the tree in the image. This is because we only fulfilled the requirement of a single-channel image, but we did not make sure that the image was binary in color, leaving noise along the edges. This is why we need thresholding to provide us with a binary image.

Thresholding the image

We will use the function cv2.threshold() to perform thresholding. The function takes in as input the gray-scale image, applies fixed-level thresholding, and returns a binary image. In this case, all the pixel values below 100 are set to 0 (black) while the ones above are set to 255 (white). Since the image has already been inverted, cv2.THRESH_BINARY is used, but if the image is not inverted, cv2.THRESH_BINARY_INV should be used. Also, the fixed level used must be able to correctly segment the target object as foreground (white).
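
Putting the pre-processing steps together (a sketch; tree_image is a hypothetical name for the sample image read above, and the fixed level of 100 is an assumed threshold to tune):

gray = cv2.cvtColor(tree_image, cv2.COLOR_BGR2GRAY)
inverted = cv2.bitwise_not(gray)  # white foreground on a black background
_, binary = cv2.threshold(inverted, 100, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)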

The resultant image above is ideally what the cv2.findContours() function is expecting: a single-channel binary image with a black background and a white foreground.

The difference is clear. Binarizing the image helps get rid of all the noise at the edges of objects, resulting in accurate contour detection.

Edge Based Pre-processing For Contours

Thresholding works well for simple images with fewer variations in colors, however, for complex images, it’s not always easy to do background-foreground segmentation using thresholding. In these cases creating the binary image using edge detection yields better results.


Let’s read another sample image.

It’s obvious that simple fixed-level thresholding wouldn’t work for this image; you would always end up with only half of the chess pieces in the foreground at a time. So instead we will use the function cv2.Canny() to detect the edges in the image. cv2.Canny() returns a single-channel binary image, which is all we need to perform contour detection in the next step. We also make use of the cv2.GaussianBlur() function to smooth out any noise in the image.

Now, using the edges detected, we can perform contour detection.
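
A sketch of the edge-based pipeline; chess_image is a hypothetical name for the sample image read above, and the Canny thresholds of 100 and 200 are tunable assumptions:

gray = cv2.cvtColor(chess_image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # smooth out noise before edge detection
edges = cv2.Canny(blurred, 100, 200)
contours, hierarchy = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)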

In comparison, if we were to use thresholding as before, it would yield a poor result that would only manage to correctly outline half of the chess pieces in the image at a time. So for a fair comparison, we will use cv2.adaptiveThreshold() to perform adaptive thresholding, which adjusts to different color intensities in the image.
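
A sketch of that comparison, reusing the gray-scale image from the previous step (the block size of 11 and constant of 5 are assumed values to tune):

thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 11, 5)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)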

As can be seen above, using Canny edge detection results in finer contour detection.

Drawing a selected Contour

So far we have only drawn all of the detected contours in an image, but what if we want to only draw certain contours?

The contours returned by cv2.findContours() form a Python list, where the i-th element is the contour for a certain shape in the image. Therefore, if we are interested in drawing just one of the contours, we can simply index it from the contours list and draw only the selected contour.

Now let’s modify our code using a for loop to draw all of the contours separately.
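
A sketch of both ideas, reusing the shapes image and its contours from above:

image_copy = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)

# Draw only the contour at index 1.
cv2.drawContours(image_copy, contours, 1, (0, 255, 0), 3)

# Or loop over the list, drawing each contour in a random color.
for i in range(len(contours)):
    color = tuple(int(c) for c in np.random.randint(0, 256, 3))
    cv2.drawContours(image_copy, contours, i, color, 3)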

Retrieval Modes

The cv2.findContours() function not only returns the contours found in an image but also valuable information about the hierarchy of the contours in the image. This hierarchy encodes how the contours may be arranged in the image, e.g. they may be nested within another contour. Often we are more interested in some contours than others. For example, you may only want to retrieve the external contour of an object.

Using the Retrieval Modes specified, the cv2.findContours() function can determine how the contours are to be returned or arranged in a hierarchy.

For more information on Retrieval modes and contour hierarchy Read here.

Some of the important retrieval modes are:

  • cv2.RETR_EXTERNAL – retrieves only the extreme outer contours.
  • cv2.RETR_LIST – retrieves all of the contours without establishing any hierarchical relationships.
  • cv2.RETR_TREE – retrieves all of the contours and reconstructs a full hierarchy of nested contours.
  • cv2.RETR_CCOMP – retrieves all of the contours and organizes them into a two-level hierarchy. At the top level, there are external boundaries of the components. At the second level, there are boundaries of the holes. If there is another contour inside a hole of a connected component, it is still put at the top level.

Below we will have a look at how each of these modes return the contours.

cv2.RETR_LIST

cv2.RETR_LIST simply retrieves all of the contours without establishing any hierarchical relationships between them. All of the contours can be said to have no parent or child relationship with another contour.
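
A sketch of the call; binary_image is assumed to be the pre-processed binary image used in this section:

contours, hierarchy = cv2.findContours(binary_image, cv2.RETR_LIST,
                                       cv2.CHAIN_APPROX_SIMPLE)
print('Number of Contours Returned: {}'.format(len(contours)))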

Number of Contours Returned: 8

cv2.RETR_EXTERNAL

cv2.RETR_EXTERNAL retrieves only the extreme outer contours i.e the contours not having any parent contour.

Number of Contours Returned: 5

cv2.RETR_TREE

cv2.RETR_TREE retrieves all of the contours and constructs a full hierarchy of nested contours.

Number of Contours Returned: 8

cv2.RETR_CCOMP

cv2.RETR_CCOMP retrieves all of the contours and organizes them into a two-level hierarchy. At the top level, there are the external boundaries of the objects. At the second level, there are the boundaries of the holes in an object. If there is another contour inside that hole, it is still put at the top level.

To visualize the two levels, we check for the contours that do not have any parent, i.e. the fourth value in their hierarchy entry [Next, Previous, First_Child, Parent] is set to -1. These contours form the first level and are drawn in green, while all other, second-level contours are drawn in red.
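
A sketch of that check:

contours, hierarchy = cv2.findContours(binary_image, cv2.RETR_CCOMP,
                                       cv2.CHAIN_APPROX_SIMPLE)
image_copy = cv2.cvtColor(binary_image, cv2.COLOR_GRAY2BGR)

for i in range(len(contours)):
    # hierarchy has shape (1, N, 4); entry 3 is the index of the parent contour.
    if hierarchy[0][i][3] == -1:
        cv2.drawContours(image_copy, contours, i, (0, 255, 0), 3)  # first level: green
    else:
        cv2.drawContours(image_copy, contours, i, (0, 0, 255), 3)  # second level: red

print('Number of Contours Returned: {}'.format(len(contours)))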

Number of Contours Returned: 8

Summary

Alright, in the first part of the series we have unpacked a lot of important things about contour detection and how it works. We discussed what types of applications you can build using contour detection and the circumstances under which it works most effectively.

We saw how you can effectively detect contours and visualize them too using OpenCV.

The post also went over some of the most common pre-processing techniques used prior to contour detection.

  • Thresholding based Pre-processing
  • Edge Based Pre-processing

which will help you get accurate results with contour detection. But these are not the only pre-processing techniques; ultimately, it depends on what type of problem you are trying to solve, and figuring out the appropriate pre-processing is the key to contour detection. Start by asking: what properties distinguish the target object from the rest of the image? Then figure out the best way to isolate those properties.

We also went over the important Retrieval Modes that can be used to change how the hierarchy of the detected contours works or even change what type of contours are detected. Additionally, you can always use the hierarchy to only extract the contours you want, e.g. retrieve only the contours at level three.

This was the first part of the series, but there are three more to come, in which we will dive deeper into contour detection with OpenCV and also build a cool vision application using contour detection!

You can reach out to me personally for a 1 on 1 consultation session in AI/computer vision regarding your project. Our talented team of vision engineers will help you every step of the way. Get on a call with me directly here.

Ready to seriously dive into State of the Art AI & Computer Vision?
Then Sign up for these premium Courses by Bleed AI


(Video) Contour Detection 101: The Basics (Pt:1)

Watch the Full Video Here:

https://www.youtube.com/watch?v=JfaZNiEbreE

This video is a part of our upcoming Building Vision Applications with Contours and OpenCV course. In this video, I’ve covered all the basics of contours you need to know. You will learn how to detect and visualize contours, the various image pre-processing techniques required before detecting contours, and a lot more.

The course will be released in a couple of weeks on our site and will contain quizzes, assignments, and walkthroughs of high-level Jupyter notebooks which will teach you a variety of concepts.
