Vehicle detection has long been a challenging part of building intelligent traffic management systems. Such systems are critical because road networks cannot keep up with the ever-increasing number of vehicles. Today, most methods that deal with this problem use either traditional computer vision or complex deep learning models.
Popular computer vision techniques include vehicle detection using optical flow, but in this tutorial, we are going to perform vehicle detection using another traditional computer vision technique that utilizes background subtraction and contour detection to detect vehicles. This means you won’t have to spend hundreds of hours in data collection or annotation for building deep learning models, which can be tedious, to say the least. Not to mention, the computation power required to train the models.
This post is the fourth and final part of our Contour Detection 101 series. The four posts in the series are:
- Contour Detection 101: The Basics
- Contour Detection 101: Contour Manipulation
- Contour Detection 101: Contour Analysis
- Vehicle Detection with OpenCV using Contours + Background Subtraction (This Post)
So if you are new to the series and unfamiliar with contour detection, make sure you check them out!
In part 1 of the series, we learned the basics of detecting and drawing contours; in part 2, we performed some contour manipulations; and in part 3, we analyzed the detected contours for their properties to perform tasks like object detection. Combining these techniques with background subtraction will enable us to build a useful application that detects vehicles on a road. And not just that, you can use the same principles you learn in this tutorial to create other motion detection applications.
So let’s dive into how vehicle detection with background subtraction works.
Import the Libraries
Let’s first start by importing the libraries.
import cv2
import numpy as np
import matplotlib.pyplot as plt
Car Detection using Background Subtraction
Background subtraction is a simple yet effective technique to extract objects from an image/video. Consider a highway on which cars are moving, and you want to extract each car. One easy approach: take a picture of the highway with the cars on it (called the foreground image), and also save an image of the same highway without any cars (the background image). Then subtract the background image from the foreground image to get a segmented mask of the cars, and use that mask to extract them.
But in many cases you don’t have a clear background image. An example of this can be a highway that is always busy, or a walking destination that is always crowded. In those cases, you can subtract the background by other means; for example, in a video you can detect the movement of objects, so the objects that move form the foreground while the parts that remain static form the background.
Several algorithms have been invented for this purpose. OpenCV has implemented a few such algorithms which are very easy to use. Let’s see one of them.
BackgroundSubtractorMOG2 is a background/foreground segmentation algorithm based on two papers by Z. Zivkovic, “Improved adaptive Gaussian mixture model for background subtraction” (IEEE 2004) and “Efficient Adaptive Density Estimation per Image Pixel for the Task of Background Subtraction” (Elsevier BV 2006). One important feature of this algorithm is that it adapts well to varying scenes and illumination changes, which saves you from having to maintain a fixed background image. Let’s see how it works.
We create the subtractor with the function cv2.createBackgroundSubtractorMOG2(), which takes the following arguments:

- history (optional) – It is the length of the history. Its default value is 500.
- varThreshold (optional) – It is the threshold on the squared distance between the pixel and the model used to decide whether a pixel is well described by the background model. It does not affect the background update, and its default value is 16.
- detectShadows (optional) – It is a boolean that determines whether the algorithm will detect and mark shadows or not. It marks shadows in gray. Its default value is True. It decreases the speed a bit, so if you do not need this feature, set the parameter to False.

Returns:

- object – It is the MOG2 background subtractor.
# load a video
cap = cv2.VideoCapture('media/videos/vtest.avi')

# you can optionally work on the live webcam
# cap = cv2.VideoCapture(0)

# create the background subtractor object, you can choose to detect shadows or not (if True they will be shown as gray)
backgroundobject = cv2.createBackgroundSubtractorMOG2(history=2, detectShadows=True)

while True:

    # read a new frame
    ret, frame = cap.read()

    # exit the loop if the frame is not read successfully
    if not ret:
        break

    # apply the background subtractor on each frame
    fgmask = backgroundobject.apply(frame)

    # also extract the real detected foreground part of the image (optional)
    real_part = cv2.bitwise_and(frame, frame, mask=fgmask)

    # make fgmask 3-channeled so it can be stacked with the others
    fgmask_3 = cv2.cvtColor(fgmask, cv2.COLOR_GRAY2BGR)

    # stack all three frames and show the image
    stacked = np.hstack((fgmask_3, frame, real_part))
    cv2.imshow('All three', stacked)

    # exit the loop if the Esc key is pressed
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

# release the VideoCapture object and close the windows
cap.release()
cv2.destroyAllWindows()
The middle frame is the original video; on the left we have the background subtraction result (with shadows marked in gray), while on the right we have the foreground part extracted using the background subtraction mask.
Creating the Vehicle Detection Application
Alright, once we have our background subtraction method ready, we can build our final application!
Here’s the breakdown of the steps we need to perform for the complete background-subtraction-based vehicle detection.
1) Start by loading the video using the function cv2.VideoCapture() and create a background subtractor object using the function cv2.createBackgroundSubtractorMOG2().
2) Then we will read the video frame by frame in a loop and apply the background subtractor object on each frame to get the segmented foreground mask.
3) Next, we will apply thresholding on the mask using the function cv2.threshold() to get rid of shadows, and then perform erosion and dilation to improve the mask further using the functions cv2.erode() and cv2.dilate().
4) Then we will use the function cv2.findContours() to detect the contours on the mask image and convert the contour coordinates into bounding box coordinates for each car in the frame using the function cv2.boundingRect(). We will also check the area of the contour using cv2.contourArea() to make sure it is greater than a threshold for a car contour.
5) After that we will use the functions cv2.rectangle() and cv2.putText() to draw and label the bounding boxes on each frame, and extract the foreground part of the video with the help of the segmented mask using the function cv2.bitwise_and().
# load a video
video = cv2.VideoCapture('media/videos/carsvid.wmv')

# You can set a custom kernel size if you want.
kernel = None

# Initialize the background subtractor object.
backgroundObject = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:

    # Read a new frame.
    ret, frame = video.read()

    # Check if frame is not read correctly.
    if not ret:
        # Break the loop.
        break

    # Apply the background subtractor on the frame to get the segmented mask.
    fgmask = backgroundObject.apply(frame)
    # initialMask = fgmask.copy()

    # Perform thresholding to get rid of the shadows.
    _, fgmask = cv2.threshold(fgmask, 250, 255, cv2.THRESH_BINARY)
    # noisymask = fgmask.copy()

    # Apply some morphological operations to make sure you have a good mask.
    fgmask = cv2.erode(fgmask, kernel, iterations=1)
    fgmask = cv2.dilate(fgmask, kernel, iterations=2)

    # Detect contours in the frame.
    contours, _ = cv2.findContours(fgmask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Create a copy of the frame to draw bounding boxes around the detected cars.
    frameCopy = frame.copy()

    # Loop over each contour found in the frame.
    for cnt in contours:

        # Make sure the contour area is higher than some threshold, so it is a car and not some noise.
        if cv2.contourArea(cnt) > 400:

            # Retrieve the bounding box coordinates from the contour.
            x, y, width, height = cv2.boundingRect(cnt)

            # Draw a bounding box around the car.
            cv2.rectangle(frameCopy, (x, y), (x + width, y + height), (0, 0, 255), 2)

            # Write 'Car Detected' near the bounding box drawn.
            cv2.putText(frameCopy, 'Car Detected', (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.3, (0, 255, 0), 1, cv2.LINE_AA)

    # Extract the foreground from the frame using the segmented mask.
    foregroundPart = cv2.bitwise_and(frame, frame, mask=fgmask)

    # Stack the original frame, extracted foreground, and annotated frame.
    stacked = np.hstack((frame, foregroundPart, frameCopy))

    # Display the stacked image with an appropriate title.
    cv2.imshow('Original Frame, Extracted Foreground and Detected Cars', cv2.resize(stacked, None, fx=0.5, fy=0.5))
    # cv2.imshow('initial Mask', initialMask)
    # cv2.imshow('Noisy Mask', noisymask)
    # cv2.imshow('Clean Mask', fgmask)

    # Wait until a key is pressed and retrieve the ASCII code of the key pressed.
    k = cv2.waitKey(1) & 0xff

    # Check if the 'q' key is pressed.
    if k == ord('q'):
        # Break the loop.
        break

# Release the VideoCapture object.
video.release()

# Close the windows.
cv2.destroyAllWindows()
This seems to have worked out well, and without having to train any large-scale deep learning models!
Vehicle Detection is a popular computer vision problem. This post explored how traditional machine vision tools can still be utilized to build applications that can effectively deal with modern vision challenges.
We used a popular background/foreground segmentation technique called background subtraction to isolate our regions of interest from the image.
We also saw how contour detection can prove useful when dealing with vision problems, and how pre-processing and post-processing can be used to filter out the noise in the detected contours.
Although these techniques can be robust, they are not as generalizable as Deep learning models so it’s important to put more focus on deployment conditions and possible variations when building vision applications with such techniques.
This post concludes the four-part series on contour detection. If you enjoyed this post and followed the rest of the series, do let me know in the comments. You can also support me and the Bleed AI team on Patreon here.
If you need 1-on-1 coaching in AI/computer vision regarding your project or your career, then you can reach out to me personally here.
Let our team of expert engineers and managers build your next big project using Bleeding Edge AI Tools & Technologies