If you’re looking for a single stand-alone tutorial that will give you a good overall taste of the exciting field of Computer Vision with OpenCV, then this is it. This tutorial will serve as a crash course on the basics of the OpenCV library.
What is OpenCV:
OpenCV (Open Source Computer Vision) is the biggest library for Computer Vision. It contains more than 2500 optimized algorithms that can be used for face detection, action recognition, image stitching, extracting 3D models, generating point clouds, augmented reality, and a lot more.
So whether you’re planning to use Computer Vision in a deep learning project or on a Raspberry Pi, or you want to make a career in the field, at some point you will definitely cross paths with this library. So it’s better that you get started with it today.
About this Crash Course:
Since I’m a member of the official OpenCV.org course team, and this blog, Bleed AI, is all about helping you master Computer Vision, I feel I’m in a very good position to teach you about this library, and to do it in a single post.
Of course, we won’t be able to cover a whole lot since, as I said, the library contains over 2500 algorithms. Still, after going through this course you will have a grip on the fundamentals and be able to build some interesting things.
Prerequisite: To follow along with this course it’s important that you are familiar with the Python language and have Python installed on your system.
Make sure to download the Source code below to try out the code.
The easiest way to install OpenCV is with a package manager such as pip.
So you can just Open Up the command prompt and run the following command:
pip install opencv-contrib-python
By doing the above, you will install OpenCV along with its contrib package, which contains some extra algorithms. If you don’t need the extra algorithms, you can instead run the following command:
pip install opencv-python
Make sure to install only one of the above packages, not both. There are also some headless versions of OpenCV which do not contain any GUI functions; you can look at those here.
The other method to install OpenCV is to build it from source. Installing from source has its perks, but it’s much harder, and I recommend it only for people who have prior experience with OpenCV. You can look at my tutorial on installing from source here.
Note: Before you can install OpenCV, you must have the numpy library installed on your system. You can install numpy by doing:
pip install numpy
After installing OpenCV you should check your installation by opening up the command prompt or Anaconda prompt, launching the Python interpreter by typing `python`, and then importing OpenCV with: import cv2
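If the import works, you can also print the version to confirm which build you installed (a minimal check; the exact version number will vary depending on what you installed):
import cv2
# Print the installed OpenCV version to confirm the installation worked
print(cv2.__version__)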
Reading & Displaying an Image:
After installing OpenCV we will see how we can read & display an image in OpenCV. You can start by running the following cell in the jupyter notebook you downloaded from the source code section.
In OpenCV you can read an image by using the cv2.imread() function.
Note: The Square brackets i.e. [ ] are used to denote optional parameters
Params:
filename: Name of the image or path to the image.
flag: There are numerous flags but the three most important ones are: 1, 0, and -1.
If you pass in 1, the image is read in color; if 0 is passed, the image is read in grayscale (black & white); and -1 is used to read transparent images, which we will learn about in the next chapter. If no flag is passed, the image will be read in color.
# This is how you import the Opencv Library
import cv2
# We will also import numpy library as np
import numpy as np
# Read the Image
img = cv2.imread('Media/M1/two.png',0)
# Print the image
print(img)
Line 1-5: Importing the OpenCV and numpy libraries. Line 8: We are reading our image in grayscale; this function returns the image as a numpy array. Line 11: We are printing our image.
Output:
Now just by looking at the above output, you can get a lot of information about the image we used.
Take a guess on what’s written in the image.
Go ahead …I’ll wait.
If you guessed the number 2 then congrats, you’re right. In fact, there is a lot more information that you can extract from the above output. For example, I can tell that the size of the image is (21x22). I can also tell that the number 2 is written in white on a black image and that it is written in the middle of the image.
How was I able to get all that…especially considering I’m no Sherlock?
The size of the image can easily be extracted by counting the number of rows & columns. And since we are working with a single channel grayscale image, the values in the image represent intensity, meaning 0 represents black, 255 represents white, and all the numbers in between are different shades of gray.
You can look at the colormap below of a Grayscale image to understand it better.
Besides counting the rows and columns, you can just use the `shape` attribute of a numpy array to find its shape:
img.shape
Output:
(21, 22)
The values returned are (rows, columns), which you can also read as (height, width). If we were dealing with a color image then img.shape would have returned (height, width, channels).
Now it’s not ideal to print images, especially when they are 100s of pixels in width and height, so let’s see how we can show images with OpenCV.
There are generally 3 steps involved in displaying an image properly. Step 1: Show the image in a window with cv2.imshow(window_name, img), which takes the following parameters:
Window_Name: Any custom name you assign to your window
img: Your image, either in uint8 datatype or in float datatype with values in the range 0-1.
Step 2: Along with cv2.imshow() you will have to use the cv2.waitKey() function. This function is a keyboard binding function. Its argument is a time in milliseconds: the function waits for the specified number of milliseconds, and if you press any key in that time frame, the program continues. If 0 is passed, it waits indefinitely for a keystroke. The function returns the ASCII value of the key pressed; for example, if you press the ESC key it will return 27, which is the ASCII value for the ESC key. For now, we won’t be using this returned value.
Note: The default delay is 0, which means wait forever until the user presses a key.
Step 3: The last step is to destroy the window we created so the program can end. This is not required to view the image, but if you don’t destroy the window then you can’t properly end the program and it can crash. To destroy the windows you will call cv2.destroyAllWindows().
This will destroy all open image windows; there is also a function, cv2.destroyWindow(), to destroy a specific window. Now let’s see the full code in action.
# Read the image
img = cv2.imread('Media/two.png',0)
# Resize the image to 1000% in size since its too small
img = cv2.resize(img, (0,0), fx=10, fy=10)
# Display the image
cv2.imshow("Image",img)
# Wait for the user to press any key
cv2.waitKey(0)
# Destroy all windows
cv2.destroyAllWindows()
Line 5: I’m resizing the image by 1000%, or by 10 times in both the x and y directions, using the function cv2.resize(), since the original image is too small; I will discuss this function later. Line 8-14: Showing the image, waiting for a keypress, and destroying the window when a key is pressed.
Output:
Accessing & Modifying Image Pixels and ROI:
For this example, I will be reading this image, which is from one of my favorite Anime series.
You can access individual pixels of the image and modify them. Before we get into that, let’s understand how an image is represented in OpenCV. We already know it’s a numpy array, but besides that you can find out other properties of the image.
img = cv2.imread('Media/naruto.jpg',1)
print('The data type of the Image is: {}'.format(img.dtype))
print('The dimensions of the Image is: {}'.format(img.ndim))
Output:
The data type of the Image is: uint8
The dimensions of the Image is: 3
So the datatype of images you read with OpenCV is uint8, and if it’s a color image then it’s a 3-dimensional array. Let’s talk about these dimensions. The first two are the height and the width, and the third is the image channels. These are the B (blue), G (green), & R (red) channels. In OpenCV, due to historical reasons, color images are stored in BGR order instead of the common RGB format.
You can access any individual pixel value by passing its (row, column) location in the image.
print(img[300, 300])
Output:
[143 161 168]
The output above means that at location (300,300) the value of the blue channel is 143, the green channel is 161, and the red channel is 168.
Just like we can read individual image pixels, we can modify them too.
img[300,300] = 0
I’ve just made the pixel at location (300,300) black. Because I’ve only modified a single pixel the change is really hard to see. So now we will modify an ROI (Region of Interest) of the image so that we can see our changes.
Modifying a whole ROI is pretty similar to modifying a pixel, you just need to select a range instead of a single value.
# Make a copy of the original Image
img_copy = img.copy()
# Modify the ROI
img_copy[100:150,80:120] = 0
# Display image
cv2.imshow("Fixed Resizing",img_copy)
cv2.waitKey(0)
cv2.destroyAllWindows()
Line 1-2: We are making a copy of the image so we don’t end up modifying the original.
Line 4-5: We are setting all pixels in the row (y) range 100-150 and the column (x) range 80-120 equal to 0 (meaning black). This should give us a black box in the image.
Line 8-10 : Showing the image and waiting for a keypress. Destroying the image when there is a keypress.
Output:
Resizing an Image:
You can use cv2.resize() function to resize an image. You have 2 ways of resizing the image, either by passing in the new width & height values or by passing in percentages of those values using fx and fy params. We have already seen how we used the second method to resize the image to 10x its size so now I’m going to show you the first method.
# Read image in color
img = cv2.imread('Media/narutosage.jpg',1)
# Resizing image to a fix 300x300 size
resized = cv2.resize(img, (300,300))
# Display image
cv2.imshow("Fixed Resizing", resized);
cv2.waitKey(0)
cv2.destroyAllWindows()
You can see below both the original and the resized version of the image.
Result:
An obvious problem with the above approach is that it does not maintain the aspect ratio of the image, which is why the image looks distorted. A better approach is to set one dimension and shrink or expand the other dimension accordingly.
Resizing While Keeping the Aspect Ratio Constant:
So let’s resize the image while keeping the aspect ratio constant. This time we are going to resize the width to 300 and scale the height accordingly.
# Read image in color
img = cv2.imread('Media/narutosage.jpg',1)
# Get the width and height of image
height, width = img.shape[:2]
# Compute ratio for the new height taking into account the 300 px width of image
r = 300.0 / width
# Get the new height
new_height = int(height * r)
# Resize the image with 300 width and the new height.
resized = cv2.resize(img, (300, new_height))
# Display Resized Image
cv2.imshow("Aspect Ratio Resize", resized)
cv2.waitKey(0)
cv2.destroyAllWindows()
Line 4-5: We are extracting the shape of the image; [:2] means we only take the first two values, the height and the width, and ignore the channels. This ensures your code works for both color and grayscale images.
Line 7-11: We are calculating the ratio of the new width to the old width and then multiplying the height by this ratio to get the new height. The logic is this: if we resized a 600 px wide image to 300 px wide, we would get a ratio of 0.5, and if the height was 200 px then multiplying it by 0.5 gives a new height of 100 px. By using these new values we won’t get any distortion.
Result:
Geometric Transformations:
You can apply different kinds of transformations to an image. There are some complex transformations, but for this post I will only be discussing translation & rotation. Both of these are types of affine transformations, so we will be using a function called cv2.warpAffine() to apply them.
borderMode: pixel extrapolation method (see BorderTypes) by default its constant border.
borderValue: value used in case of a constant border; by default, it is 0, which means replaced values will be black.
You pass a 2×3 matrix into the warpAffine function, which performs the required transformation: the first two columns of the matrix control rotation, scale, and shear, and the last column encodes the translation (shift) of the image.
Again, we will only focus on translation and rotation in this post.
Translation:
Translation is the shifting of an object’s location, meaning the movement of the image in the x and y directions. Suppose you want the image to move tx pixels in the x-direction and ty pixels in the y-direction; then you construct the transformation matrix M = [[1, 0, tx], [0, 1, ty]] and pass it into the warpAffine function.
So you just need to change the tx and ty values for translation in the x and y directions.
# Read Image
img = cv2.imread('media/naruto.jpg')
rows, cols, channels = img.shape
# Construct the translation matrix
M = np.float32([
[1,0, 120],
[0,1, -40]
])
# Apply the warpAffine function
translated = cv2.warpAffine(img, M, (cols,rows))
# Display image
cv2.imshow("Translated Image",translated);
cv2.waitKey(0)
cv2.destroyAllWindows()
Line 6-10: We’re constructing the translation matrix so we move 120 px in the x-direction and 40 px in the negative y-direction.
Output:
Rotation
Similarly, we can also rotate an image by passing a matrix into the warpAffine function. Instead of designing a rotation matrix by hand, I’m going to use a built-in function called cv2.getRotationMatrix2D() which returns a rotation matrix according to our specifications.
center: This is the center of the rotation in the source image.
angle: The rotation angle in degrees. Positive values mean counter-clockwise rotation (the coordinate origin is assumed to be the top-left corner).
Scale: scaling factor.
# Set angle to 45 degrees.
angle = 45
# Rotate image from center of image with an angle of 45 degrees at the same scale.
rotation_matrix = cv2.getRotationMatrix2D((cols/2,rows/2), angle, 1)
# Apply the transformation
rotated = cv2.warpAffine(img, rotation_matrix, (cols,rows))
# Display image
cv2.imshow("Rotated Image",rotated);
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:
Note: If you don’t like the black pixels that appear after translation or rotation then you can use a different border filling method, look at the available borderModes here.
Drawing on Image:
Let’s take a look at some drawing operations in OpenCV. We will learn how to draw a line, a circle, and a rectangle on an image, and we will also learn to put text on the image. Since each drawing function modifies the image, we will be working on copies of the original image. We can easily make a copy of the image by doing: img.copy()
Most of the drawing functions have below parameters in common.
img : Your Input Image
color : Color of the shape for a BGR image, pass it as a tuple i.e. (255,0,0), for Grayscale image just pass a single scalar value from 0-255.
thickness : Thickness of the line, circle, etc. If -1 is passed for closed figures like circles, it will fill the shape. The default thickness is 1.
lineType : Type of line; a popular choice is cv2.LINE_AA.
Drawing a Line:
We can draw a line in OpenCV by using the function cv2.line(). We know from basic geometry that to draw a line you just need 2 points, so you’ll pass the coordinates of 2 points into this function.
pt1: First point of the line, this is a tuple of (x1,y1) point.
pt2: Second point of the line, this is a tuple of (x2,y2) point.
# Make a copy of the original image
copy = img.copy()
# Draw a line on the image with these parameters.
cv2.line(copy, (400,250),(300,30), (255,255,0), 5)
# Display image
cv2.imshow("Draw Line", copy);
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:
Drawing a Circle
We can draw a circle in OpenCV by using the function `cv2.circle()`. For drawing a circle we just need a center point and a radius.
# Make a copy of the original image
copy = img.copy()
# Draw a Circle on naruto’s face with a radius of a 100.
cv2.circle(copy, (360,200), 100, (255,100,0), 5)
# Display image
cv2.imshow("Draw Circle", copy);
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:
Drawing a Rectangle
We can also draw a rectangle in OpenCV by using the function cv2.rectangle(). You just have to pass two corners of a rectangle to draw it. It’s similar to the cv2.line function.
# Make a copy of the original image
copy = img.copy()
# Draw Rectangle around Naruto's face.
cv2.rectangle(copy, (250,100), (450,300), (0,255,0),3)
# Display image
cv2.imshow("Draw Rectangle", copy);
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:
Putting Text:
Finally, we can also write text by using the function cv2.putText(). Writing text on images is an essential tool; you will be able to see real-time stats on the image instead of just printing them. This is really handy when you’re working with videos.
origin: Top-left corner of the text (x,y) origin position.
fontFace: Font type, we will use cv2.FONT_HERSHEY_SIMPLEX.
fontScale: Font scale, how large your text will be.
# Make a copy of the original image
copy = img.copy()
# Write 'Bleed AI' on the image copy
copy = cv2.putText(copy,'Bleed-AI',(250,470),cv2.FONT_HERSHEY_SIMPLEX, 1.7, (255,255,0), 4)
# Display image
cv2.imshow("Write Text",copy);
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:
Cropping an Image:
We can also crop or slice an image, meaning we can extract any specific area of the image using its coordinates; the only condition is that it must be a rectangular area. You can segment irregular parts of images, but the image is always stored as a rectangular object in the computer, which should not be a surprise since we already know that images are matrices. Now let’s say we wanted to crop naruto’s face; then we would need four values, namely X1 (lower bound on the x-axis), X2 (upper bound on the x-axis), Y1 (lower bound on the y-axis) and Y2 (upper bound on the y-axis).
After getting these values, you will pass them in like below.
`face_roi = img[Start Y : End Y, Start X : End X]`
Let’s see the full script.
# Grab the Face ROI
face_roi = img[100:270,300:450]
# Display image
cv2.imshow("Image",face_roi);
cv2.waitKey(0)
cv2.destroyAllWindows()
Line 2: We are passing in the coordinates to crop naruto’s face. You can get these coordinates by several methods, some of which are: trial and error, hovering the mouse over the image when using the `matplotlib notebook` magic command, hovering over the image when you have installed OpenCV with QT support, or writing a mouse click callback that prints the x,y coordinates.
Result:
Note: If you’re gonna modify the cropped ROI, then it’s better to make a copy of it, otherwise modifying the cropped version would also affect the original.
You can make a copy like this:
face_roi = img[100:270,300:450].copy()
Image Smoothing/Blurring:
Smoothing or blurring an image in OpenCV is really easy. If you’re wondering why we would need to blur an image, understand that it’s very common to blur/smooth an image in vision applications because it reduces noise in the image. The noise can be present due to various factors: maybe the sensor that took the picture was corrupted or malfunctioned, or there were environmental factors like poor lighting, etc. There are different types of blurring to deal with different types of noise, and I have discussed each method in detail and even compared them inside our Computer Vision Course, but for now we will briefly look at just one method, Gaussian blurring with the cv2.GaussianBlur() function. This is the most common image smoothing technique and it gets rid of Gaussian noise. In simple words, it will work most of the time.
ksize: Gaussian kernel size. kernel width and height can differ but they both must be positive and odd.
sigmaX: Gaussian kernel standard deviation in X direction.
Again, to keep this short, I won’t be getting into the math or the parameter details of how this function works, although it’s really interesting. One thing you need to know is that by controlling the kernel size you control the level of smoothing. There are also SigmaX and SigmaY parameters that you can control.
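Here is a minimal sketch of applying a Gaussian blur (the image path and the 7x7 kernel size are just illustrative values that you should tune for your own images):
# Read the image in color
img = cv2.imread('Media/naruto.jpg', 1)
# Apply Gaussian blurring with a 7x7 kernel; sigmaX=0 lets OpenCV compute it from the kernel size
blurred = cv2.GaussianBlur(img, (7, 7), 0)
# Display the blurred image
cv2.imshow("Blurred Image", blurred)
cv2.waitKey(0)
cv2.destroyAllWindows()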
Thresholding:
There are times when we need a binary black & white mask of the image, where our target object is white and the background black, or vice versa. The easiest way to get a mask of our image is to threshold it with the cv2.threshold() function. There are different types of thresholding methods; I’ve introduced most of them in our Computer Vision course, but for now we are going to discuss the most basic and most used one. What thresholding does is check each pixel in the image against a threshold value: if the pixel value is smaller than the threshold value, it is set to 0, otherwise it is set to the maximum value (this maximum value is usually 255, i.e. white).
thresh: Threshold value. (If you use THRESH_BINARY then all values above this are set to max_value.)
max_value: Maximum value, normally this is set to be 255.
type: Thresholding type. The most common types are THRESH_BINARY & THRESH_BINARY_INV
ret: The threshold value that was used; cv2.threshold() returns it alongside the thresholded image.
Before you can threshold an image you need to convert it to grayscale. You could have loaded the image in grayscale, but since we already have a color image we can convert it to grayscale using the cv2.cvtColor() function. This function can convert an image between different color formats; for this post we are only concerned with the grayscale conversion.
# Read the image
img = cv2.imread('media/shapes.jpg')
# Convert the color image to grayscale
gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Display the grayscale image.
cv2.imshow("Grayscale Image", gray_image);
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:
Now that we have a grayscale image, we can apply our threshold.
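A minimal sketch of the thresholding call, using the 220 threshold value and 255 max value discussed below:
# Apply a binary threshold with a threshold value of 220 and a max value of 255
ret, thresholded_image = cv2.threshold(gray_image, 220, 255, cv2.THRESH_BINARY)
# Display the thresholded image
cv2.imshow("Thresholded Image", thresholded_image)
cv2.waitKey(0)
cv2.destroyAllWindows()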
Line 2: We are applying a threshold such that all pixels having an intensity above 220 are converted to 255 and all pixels below 220 become 0.
Output:
Now let’s see the result of the inverted threshold, which just reverses the results above. For this you just need to pass in cv2.THRESH_BINARY_INV instead of cv2.THRESH_BINARY.
Edge Detection:
Now we will take a look at edge detection. Why edge detection? Well, edges encode the structure of an image and carry most of its information, which is why edge detection is an integral part of many vision applications.
In OpenCV there are edge detectors such as Sobel and Laplacian filters, but the most effective is the Canny edge detector. In our Computer Vision Course I go into detail about exactly how this detector works, but for now let’s take a look at its implementation in OpenCV.
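Here is a minimal sketch of the Canny call on the grayscale image from above, using the 30 and 150 thresholds explained below:
# Detect edges with lower and upper hysteresis thresholds of 30 and 150
edges = cv2.Canny(gray_image, 30, 150)
# Display the detected edges
cv2.imshow("Edges", edges)
cv2.waitKey(0)
cv2.destroyAllWindows()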
Line 1-2: I’m detecting edges with lower and upper hysteresis values being 30 and 150. I can’t explain how these values work without going into the theory so, for now, understand that for any image you need to tune these 2 threshold values to get the correct results.
Output:
Contour Detection:
Contour detection is one of my favorite topics because with just contour detection you can do a lot, and I’ve built a number of cool applications using contours.
A contour can be defined simply as a curve joining all the continuous points along a boundary that have the same color or intensity. In simple terms, think of contours as white blobs on a black background; for example, in the output of the threshold function or the edge detection function, each shape can be considered an individual contour. So you can segment each shape, localize them, or even recognize them.
Contours are a useful tool for shape analysis, object detection, and recognition; take a look at this detection and recognition application I’ve built using contour detection.
You can detect contours with the cv2.findContours() function. image: Your source/input image in binary format; this is either a black & white image obtained from thresholding or a similar function, or the output of a Canny edge detector.
mode: Contour retrieval mode, for example cv2.RETR_EXTERNAL mode lets you extract only external contours meaning if there is a contour inside a contour then that child contour will be ignored. You can see other RetrievalModes here
method: Contour approximation method, for most cases cv2.CHAIN_APPROX_NONE works just fine.
After you detect contours you can draw them on the image by using the cv2.drawContours() function.
contours: This is a list of contours, each contour is stored as a vector.
contourIdx: Parameter indicating which contour to draw. If it is -1 then all the contours are drawn.
color: Color of the contours.
# Make a copy of the original image so it won’t be corrupted during drawing
image_copy = img.copy()
# Alternatively you can also pass in the edges output to the contours function.
# Threshold image, Remember the target object is white and background black.
gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_ , thresholded_image = cv2.threshold(gray_image, 220, 255, cv2.THRESH_BINARY_INV)
# Detect Contours
contours, _ = cv2.findContours(thresholded_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# Draw the detected contours, -1 means draw all detected contours
cv2.drawContours(image_copy , contours, -1 , (0,255,0), 3)
# Display image
cv2.imshow("Contour Detection", image_copy);
cv2.waitKey(0)
cv2.destroyAllWindows()
Line 7: Using an inverted threshold, as the shapes need to be white and the background black.
Line 10: Detecting contours on the thresholded image.
Line 13: Draw detected Contours.
Output:
You can also get the number of objects or shapes present by counting the number of contours.
print('Total Shapes present in image are: {}'.format(len(contours)))
Output:
Total Shapes present in image are: 6
Since there are 6 shapes in the above image we are seeing 6 detected contours.
Morphological Operations:
In this section, we will take a look at morphological operations. These are some of the most used preprocessing techniques for getting rid of noise in binary (black & white) masks. They need two inputs: our input image and a kernel (also called a structuring element) which decides the nature of the operation. Two very common morphological operations are Erosion and Dilation. Then there are other variants like Opening, Closing, Gradient, etc.
In this post, we will only be looking at Erosion & Dilation. These are all you need in most cases.
Erosion:
The fundamental idea of erosion is just like how it sounds: it erodes (eats away or eliminates) the boundaries of foreground objects (always try to keep the foreground in white). A kernel slides over the image, and a pixel in the original image (either 1 or 0) is kept as 1 only if all the pixels under the kernel are 1; otherwise, it is eroded (made zero).
Erosion decreases the thickness or size of the foreground object or you can simply say the white region of image decreases. It is useful for removing small white noises.
kernel: Structuring element or filter used for erosion (passed to cv2.erode()); if None is passed, a 3x3 rectangular structuring element is used. The bigger the kernel you create, the stronger the impact of erosion on the image.
iterations: Number of times erosion is applied; the larger the number, the greater the effect.
We will be using this image for erosion. Notice the white spots; with erosion we will attempt to remove this noise.
# Read the image
img = cv2.imread('media/whitenoise.png', 0)
# Make a 7x7 kernel of ones
kernel = np.ones((7,7),np.uint8)
# Apply erosion with an iteration of 2
eroded_image = cv2.erode(img, kernel, iterations = 2)
# Display Image
cv2.imshow("Eroded Image", eroded_image);
cv2.waitKey(0)
cv2.destroyAllWindows()
Line 5: Making a 7x7 kernel; the bigger the kernel, the stronger the effect.
Line 8: Applying erosion with 2 iterations; the kernel size and the number of iterations should be tuned for your own images.
Output:
As you can see the white noise is gone but there is a small problem, our object (person) has become thinner. We can easily fix this by applying dilation which is the opposite of erosion.
Dilation:
Dilation is just the opposite of erosion: it increases the white region in the image, i.e. the size of the foreground object grows. Essentially, dilation expands the boundaries of objects. Normally, in cases like noise removal, erosion is followed by dilation, because erosion removes white noise but also shrinks our object, as we have seen in our example. So now we dilate it: since the noise is gone, it won’t come back, but our object area increases.
Dilation is also useful for removing black noise or in other words black holes in our object. So it helps in joining broken parts of an object.
We will attempt to fill up holes/gaps in this image.
# Read the image
img = cv2.imread('media/blacknoise.png',0)
# Make a 7x7 kernel of ones
kernel = np.ones((7,7),np.uint8)
# Apply dilation with an iteration of 3
dilated_image = cv2.dilate(img, kernel,iterations = 3)
# Display Image
cv2.imshow("Eroded Image", dilated_image);
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:
See the black holes/gaps are gone. You will find a combination of erosion and dilation used across many image processing applications.
Working with Videos:
We have learned how to deal with images in OpenCV, now let’s work with videos. First, it should be clear that any operation you perform on images can be done on videos too, since a video is nothing but a series of images. For example, a 30 FPS video shows 30 frames (images) each second.
There are multiple ways to work with videos in OpenCV, but you first have to initialize a video capture object with cv2.VideoCapture().
Now there are 4 ways we can use the VideoCapture object depending on what you pass in as the argument:
1. Using a live camera feed: You pass in an integer, i.e. 0, 1, 2, etc., e.g. cap = cv2.VideoCapture(0); now you will be able to use your webcam’s live stream.
2. Playing a saved video on disk: You pass in the path to the video file, e.g. cap = cv2.VideoCapture(Path_To_video).
3. Live streaming from a URL using an IP camera or similar: You can stream from a URL, e.g. cap = cv2.VideoCapture(protocol://host:port/script_name?script_params|auth). Note that each video stream or IP camera feed has its own URL scheme.
4. Reading a sequence of images: You can also read sequences of images, but this is not used much.
The next step after initializing is to read the video frame by frame; we do this by using cap.read().
ret: A boolean variable which is True if the frame was read successfully and False if the next frame could not be read. This is a really important value when working with videos, since after reading the last frame of a video it will return False, meaning there is no next frame and we know we can exit the program.
frame: This will be a frame/image of our video. Every time we run cap.read() it gives us a new frame, so we will put cap.read() in a loop and show all the frames sequentially; it will look like we are playing a video, but actually we are just displaying it frame by frame.
After exiting the loop there is one last thing you must do: release the cap object you created by calling cap.release(), otherwise your camera will stay on even after the program ends. You may also want to destroy any remaining windows after the loop.
# Initialize Video capture Object.
cap = cv2.VideoCapture(0)
# Initialize a loop in which we will read video frame by frame
while(True):

    # Read frame by frame
    ret, frame = cap.read()

    # If a frame is not read correctly exit the loop, most useful when working with videos on disk
    if not ret:
        break

    # Now we can perform any image processing operations.
    # I'm just going to convert to grayscale and call it a day for this one.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Show the frame we just read
    cv2.imshow('frame',gray)

    # Wait for 1 millisecond before showing the next frame.
    # If the user presses the `q` key then exit the loop.
    if cv2.waitKey(1) == ord('q'):
        break
# Release the camera
cap.release()
# Destroy the windows you created
cv2.destroyAllWindows()
Line 2: Initializing the VideoCapture object; if you’re using a USB camera then this value can be 1, 2, etc. instead of 0.
Line 6-17: Looping and reading frame by frame from the camera, making sure the frame is not corrupted, and then converting it to grayscale.
Line 24-25: Check if the user presses the q key within the 1 millisecond wait after the imshow call; if yes, exit the loop. The ord() function converts a character to its ASCII value so we can compare it with the ASCII value returned by the waitKey() function.
Line 28: Release the camera, otherwise it will be left on after the program exits, and this will cause problems the next time you run this cell.
Face Detection with Machine Learning:
In this section we will work with a machine learning-based face detection model. The model we are going to use is a Haar cascade based face detector. It’s the oldest known face detection technique that is still used today in some capacity, although there are more effective approaches; for example, take a look at Bleedfacedetector, a Python library that I built a year back. It lets you use 4 different types of face detectors by just changing a single line of code.
This Haar Classifier has been trained on several positive (images with faces) and negative (images without faces) images. After training it has learned to recognize faces.
Before using the face detector, you must first initialize it with cv2.CascadeClassifier() and then call its detectMultiScale() method on an image. Two important parameters of detectMultiScale() are:
scaleFactor: Parameter specifying how much the image size is reduced at each pyramid scale.
minNeighbors: Parameter specifying how many neighbors each candidate rectangle should have to retain it.
I’m not going to go into the details of this classifier, so you can ignore the definitions of scaleFactor & minNeighbors and just remember that you should tune the value of scaleFactor to control the speed/accuracy tradeoff, and increase minNeighbors if you’re getting lots of false detections. There are also minSize & maxSize parameters which I’m not discussing for now.
Let’s detect all the faces in this image.
# Read the image on which we want to apply face detection
image = cv2.imread('john.jpg')
# Initialize the haar classifier with the face detector model
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
# Perform the detection, here we will use a 1.3 scale factor and 5 min neighbors
faces = face_cascade.detectMultiScale(image, 1.3, 5)
# Make a copy of the image to draw on, so the original stays unchanged
img_detection = image.copy()

# Loop through each face and draw a rectangle on the face coordinates.
for (x,y,w,h) in faces:
    cv2.rectangle(img_detection,(x,y),(x+w,y+h),(0,255,255),4)
    cv2.putText(img_detection,'Face Detected',(x,y+h+15), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255,0,25), 1, cv2.LINE_AA)

# Display images
cv2.imshow("Original",image);
cv2.imshow("Face Detected",img_detection);
cv2.waitKey(0)
cv2.destroyAllWindows()
Line 8: We are performing face detection and obtaining a list of faces.
Line 11-13: Looping through each face in the list & drawing a rectangle using its coordinates on the image. The list of faces is an array of x,y,w,h coordinates, so an object (face) is represented by 4 numbers: x,y is the top-left corner of the face and w,h is its width and height. We can easily use these coordinates to draw a rectangle on the face.
Output:
As you can see almost all faces were detected in the above image. Normally you don’t make deductions regarding a model based on a single image but if I were to make one then I’d say this model is racist or in ML terms this model is biased towards white people.
One issue with these cascades is that they will fail when the face is rotated, tilted sideways, or occluded, but no worries, you can use a stronger SSD based face detector via bleedfacedetector.
There are also other Haar Cascades besides this face detector that you can use, take a look at the list here. Not all of them are good but you should try the eye & pedestrian cascades.
Image Classification with Deep Learning:
In this section, we will learn to use an image classifier in OpenCV. We will be using OpenCV’s built-in DNN module. Recently I made a tutorial on performing Super Resolution with the DNN module. The DNN module allows you to take pre-trained neural networks from popular frameworks like TensorFlow, PyTorch, ONNX, etc. and use those models directly in OpenCV. One limitation is that the DNN module does not allow you to train neural networks. Still, it’s a powerful tool, so let’s take a look at an image classification pipeline using OpenCV.
Note: I will create a detailed post on OpenCV DNN module in a few weeks, for now I’m keeping this short.
DNN Pipeline
Generally there are 4 steps when doing deep learning with DNN module.
Read the image and the target classes.
Initialize the DNN module with an architecture and model parameters.
Perform the forward pass on the image with the module
Post process the results.
For this we are using a few files: the class labels file, the neural network model, and its configuration file. All these files can be downloaded in the source code download section of this post. We will start by reading the text file containing the 1000 ImageNet classes, and we will extract and store each class in a Python list.
# Split all the classes by a new line and store it in a variable called rows.
rows = open('synset_words.txt').read().strip().split("\n")
# Check the number of classes.
print("Number of Classes "+str(len(rows)))
Output:
Number of Classes 1000
# Show the first 5 rows
print(rows[0:5])
Output:
['n01440764 tench, Tinca tinca', 'n01443537 goldfish, Carassius auratus', 'n01484850 great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias', 'n01491361 tiger shark, Galeocerdo cuvieri', 'n01494475 hammerhead, hammerhead shark']
All these classes are in the text file named synset_words.txt. In this text file, each class is on a new line with its unique ID. Also, each class has multiple labels; for example, look at the first 3 lines in the text file:
‘n01440764 tench, Tinca tinca’
‘n01443537 goldfish, Carassius auratus’
‘n01484850 great white shark, white shark
So for each line we have the class ID followed by multiple class names; they are all valid names for that class and we’ll just use the first one. In order to do that, we’ll take everything after the first space on each line, split it by commas, keep the first name, and put it in a new list; this will be our labels list.
Here we will extract the labels (2nd element from each line) and create a labels list.
# Split by comma after the first space is found, grab the first element and store it in a new list.
CLASSES = [r[r.find(" ") + 1:].split(",")[0] for r in rows]
# Print the first 50 processed class labels
print(CLASSES[0:50])
Now we will initialize our neural network, which is a GoogLeNet model trained in the Caffe framework on the 1000 ImageNet classes. We will initialize it using cv2.dnn.readNetFromCaffe(); there are different initialization methods for different frameworks.
# This is our model weights file
weights = 'media/bvlc_googlenet.caffemodel'
# This is our model architecture file
architecture ='media/bvlc_googlenet.prototxt'
# Here we will read a pre-trained caffe model with its architecture
net = cv2.dnn.readNetFromCaffe(architecture, weights)
This is the image upon which we will run our classification.
Pre-processing the image:
Before you pass an image to the network you need to preprocess it. This means resizing the image to the size the network was trained on; for many neural networks this is 224x224. In the pre-processing step you also do other things like normalizing the image (bringing the intensity values into the range 0-1), mean subtraction, etc. These are the same steps the authors applied to the images during model training.
Fortunately, in OpenCV you have a function called cv2.dnn.blobFromImage() which most of the time takes care of all the pre-processing for you.
Scalefactor: Used to normalize the image. This value is multiplied by the image, value of 1 means no scaling is done.
Size: The size to which the image will be resized to, this depends upon each model.
mean: The mean R,G,B channel values over the whole training dataset; these are subtracted from the image’s R,G,B channels respectively, which gives the model some illumination invariance.
There are other important parameters too but I’m skipping them for now.
# Read the image
image = cv2.imread('media/fish.jpg', 1)
# Pre-process the image, with these values.
blob = cv2.dnn.blobFromImage(image, 1, (224, 224), (104, 117, 123))
Now this blob is our pre-processed image. It’s ready to be sent to the network but first you must set it as input
# Pass the blob as input through the network
net.setInput(blob)
This is the most important step: the image now goes through the entire network and you get an output. Most of the computation time is spent in this step.
# Perform the forward pass
Output = net.forward()
Now if we check the size of Output predictions, we will see that it’s 1000. So the model has returned a list of probabilities for each of the 1000 classes in ImageNet dataset. The index of the highest probability is our target class index.
# Length of the number of predictions
print("Total Number of Predictions are: {}".format(len(Output[0])))
Output:
Total Number of Predictions are: 1000
You can try printing the predictions to understand them better; below we print the first 50.
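A minimal sketch of that (the exact values you see will differ):
# Print the first 50 prediction scores
print(Output[0][:50])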
Now if we wanted to get the top prediction, i.e. the highest probability, we would just need to use np.max():
# Maximum probability
print(np.max(Output[0]))
Output:
0.9984837
See, we got a class with 99.84% probability. This is really good, it means our network is pretty sure about the name of the target class.
If we wanted to check the index of the target class we can just do np.argmax()
# Index of Class with the maximum Probability.
index = np.argmax(Output[0])
print(index)
Output:
1
Our network says the class with the highest probability is at index 1. We just have to use this index in the labels list to get the name of the actual predicted class.
print(CLASSES[index])
Output:
goldfish
So our target class is goldfish which has a probability of 99.84%
In the final step we are just going to put the above information over the image.
# Create text that says the class name and its probability.
text = "Label: {}, {:.2f}%".format(CLASSES[index], np.max(Output[0]))
# Put the text on the image
cv2.putText(image, text, (20, 20 ), cv2.FONT_HERSHEY_COMPLEX, 1, (100, 20, 255), 2)
# Display image
cv2.imshow("Classified Image",image);
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:
So this was an image classification pipeline. Similarly, there are a lot of other interesting neural nets for different tasks: object detection, image segmentation, image colorization, etc. I cover using 13-14 different neural nets with OpenCV through video walkthroughs and notebooks inside our Computer Vision Course, and also show you how to use them with Nvidia & OpenCL GPUs.
What’s Next?
If you want to go forward from here, learn more advanced things, and go into more detail on the theory and code of different algorithms, then be sure to check out our Computer Vision & Image Processing with Python Course (Urdu/Hindi). In this course I go into a lot more detail on each of the topics I’ve covered above.
If you want to start a career in Computer Vision & Artificial Intelligence then this course is for you. One of the best things about this course is that the video lectures are in Urdu/Hindi Language without any compromise on quality, so there is a personal/local touch to it.
Summary:
In this post, we covered a lot of fundamentals in OpenCV. We learned to work with images as well as videos; this should serve as a good starting point, but keep on learning. Remember to refer to the OpenCV documentation and StackOverflow if you’re stuck on something. In a few weeks, I’ll be sharing our Computer Vision Resource Guide, which will help you in your Computer Vision journey.
If you have any questions or confusion regarding this post, feel free to comment on this post and I’ll be happy to help.
You can reach out to me personally for a 1 on 1 consultation session in AI/computer vision regarding your project. Our talented team of vision engineers will help you every step of the way. Get on a call with me directly here.
A few weeks ago we learned how to do Super-Resolution using OpenCV’s DNN module; in today’s post we will perform Facial Expression Recognition, AKA Emotion Recognition, using the DNN module. Although the term emotion recognition is technically incorrect for this problem (I will explain why), for the remainder of this post I’ll be using both of these terms, since emotion recognition is short and also good for SEO, as people still search for emotion recognition while looking for facial expression recognition xD.
The post is structured in the following way:
First I will define Emotion Recognition & its importance.
Then I will discuss different approaches to tackle this problem.
Finally, we will Implement an Emotion Recognition pipeline using OpenCV’s DNN module.
Emotion Recognition Or Facial Expression Recognition
Now let me start by clarifying what I meant when I said this problem is incorrectly quoted as emotion recognition. By saying that you’re doing emotion recognition you’re implying that you’re actually finding the emotion of a person, whereas a typical AI-based emotion recognition system that you’ll find around (including the one we’re going to build) looks only at a single image of a person’s face to determine that person’s emotion. In reality, our expression may at times exhibit what we feel, but not always: people may smile for a picture, or someone may have a face that inherently looks gloomy & sad, but that doesn’t represent the person’s emotion.
So if we were to build a system that actually recognizes the emotions of a person, we would need to do more than look at a single face image. We would also consider the body language of the person across a series of frames, so the network would be a combination of an LSTM & a CNN. For a more robust system, we might also incorporate voice tone recognition, as the tone of a voice and speech patterns tell a lot about a person’s feelings.
Since today we’ll only be looking at a single face image, it’s better to call our task Facial Expression Recognition rather than Emotion Recognition.
Facial Expression Recognition Applications:
Monitoring facial expressions of several people over a period of time provides great insights if used carefully, so for this reason we can use this technology in the following applications.
1: Smart Music players that play music according to your mood:
Think about it, you come home after having a really bad day, you lie down on the bed looking really sad & gloomy and then suddenly just the right music plays to lift up your mood.
2: Student Mood Monitoring System:
A system that cleverly averages the expressions of multiple students over a period of time can get an estimate of how a particular topic or teacher is impacting students: does the topic being taught stress out the students, or is a particular session from a teacher a joyful experience for them?
3: Smart Advertisement Banners:
Think about smart advertisement banners with a camera attached: when a commercial airs, the banner checks the real-time facial expressions of people consuming that ad and informs the advertiser whether the ad had the desired effect or not. Similarly, companies can get feedback on whether customers liked their products without even asking them.
These are just some of the applications off the top of my head; if you start thinking about it you can come up with more use cases. One thing to remember is that you have to be really careful about how you use this technology. Use it as an assistive tool and do not completely rely on it. For example, don’t deploy it at an airport and start interrogating every person who triggers an angry expression on the system for a couple of frames.
Facial Expression Recognition Approaches:
So let’s talk about the ways we could go about recognizing someone’s facial expressions. We will look at some classical approaches first then move on to deep learning.
Haar Cascades based Recognition:
Perhaps the oldest method that could work is Haar Cascades. Essentially, Haar Cascades, also called the Viola-Jones classifier, is an outdated object detection technique introduced by Paul Viola and Michael Jones in 2001. It is a machine learning-based approach where a cascade is trained from a lot of positive and negative images and is then used to detect objects in images.
The most popular use of these cascades is as a face detector which is still used today, although there are better methods available.
Now instead of using face detection, we could train a cascade to detect expressions. Since you can only train a single class per cascade, you’ll need multiple cascades. A better way to go about it is to first perform face detection and then look for different features inside the face ROI, like detecting a smile with this smile detection cascade (see the sketch below). You can also train a frown detector and so on.
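A rough sketch of that idea could look like the following; the cascade file names come from OpenCV’s bundled data folder, while the image path and the tuning values (1.3, 5, 1.7, 20) are just assumptions you would adjust yourself:
import cv2

# Load the bundled face and smile cascades (cv2.data.haarcascades points to OpenCV's data folder)
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
smile_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_smile.xml')

# Read an image containing a face (hypothetical path)
image = cv2.imread('media/some_face.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces first, then look for a smile only inside each face ROI
for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
    face_roi = gray[y:y+h, x:x+w]
    smiles = smile_cascade.detectMultiScale(face_roi, 1.7, 20)
    label = 'Smiling' if len(smiles) > 0 else 'Not Smiling'
    cv2.putText(image, label, (x, y-10), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)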
Truth be told, this method is so weak that I wouldn’t even try experimenting with it in this day and age, but since people have used it in the past, I’m just putting it out there.
Fisher, Eigen & LBPH based Recognition:
OpenCV’s built-in face_recognition module has 3 different face recognition algorithms, Eigenfaces face recognizer, Fisherfaces face recognizer and Local binary patterns histograms (LBPH) Face Recognizer.
If you’re wondering why I’m mentioning face recognition algorithms in a facial expression recognition post, understand this: these algorithms can extract some really interesting features, like principal components and local histograms, which you can then feed into an ML classifier like an SVM. So in theory, you can repurpose them for emotion recognition; only this time the target classes are not the identities of people but facial expressions. This will work best if you have a few classes, ideally 2-3. I haven’t seen many people work on emotion like this, but take a look at this post in which a guy uses Fisherfaces for facial expression recognition.
Again I would mention this is not a robust approach, but would work better than the previous one.
Histogram Oriented Gradients based Recognition (HOG):
Similar to the above approach, instead of using the face recognizer module to extract features, you can extract HOG features from faces; HOG based features are really effective. After extracting HOG features you can train an SVM or any other machine learning classifier on top of them, as in the sketch below.
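A rough sketch of that pipeline, assuming scikit-image and scikit-learn are installed; the random arrays here are just stand-ins for real cropped grayscale face images and their expression labels:
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

# Dummy stand-ins: 20 random 64x64 grayscale "face crops" and binary expression labels
faces = [np.random.rand(64, 64) for _ in range(20)]
labels = [0, 1] * 10

# Extract a HOG feature vector for every face crop
features = [hog(f, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2)) for f in faces]

# Train an SVM classifier on top of the HOG features
clf = SVC(kernel='linear')
clf.fit(features, labels)

# Predict the expression of a new face crop the same way
new_face = np.random.rand(64, 64)
print(clf.predict([hog(new_face, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))]))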
Custom Features with Landmark Detection:
One of the easiest and most effective ways to create an emotion recognition system is to use a landmark detector, like the one in dlib, which allows you to detect 68 important landmarks on the face.
By using this detector you can extract facial features like eyes, eyebrows, mouth, etc. Now you can take custom measurements of these features like measuring the distance between the lip ends to detect if the person is smiling or not. Similarly, you can measure if the eyes are wide open or not, indicating surprise or shock.
There are two ways to go about it: either you send these custom measurements to an ML classifier and let it learn to predict emotions from them, or you use your own heuristics to determine when to call it happy, sad, etc. based on the measurements.
I do think the former approach is more effective than the latter, but if you’re just determining a single expression, like whether a person is smiling or not, it’s easier to use heuristics, as in the rough sketch below.
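For example, a very rough smile heuristic using dlib’s 68-point landmark model could look like this; the model file has to be downloaded separately, the image path is hypothetical, and the 0.45 threshold is just an illustrative value you would tune yourself:
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# The 68-point landmark model file must be downloaded separately from dlib's model zoo
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

image = cv2.imread('media/some_face.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for rect in detector(gray):
    landmarks = predictor(gray, rect)
    # Points 48 and 54 are the mouth corners, points 0 and 16 are the jaw ends
    mouth_width = landmarks.part(54).x - landmarks.part(48).x
    face_width = landmarks.part(16).x - landmarks.part(0).x
    # A wide mouth relative to the face is a crude indicator of a smile (threshold is illustrative)
    print('Smiling' if mouth_width / face_width > 0.45 else 'Not Smiling')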
Deep Learning based Recognizer:
It should not come as a surprise that the state of the art approach to detecting emotions is a deep learning-based one. Let me explain how you would create a simple yet effective emotion recognition system: you would train a Convolutional Neural Network (CNN) on different facial expression images (ideally thousands of images for each class/emotion), and after training show it new samples; if done right, it would perform better than all the approaches I’ve mentioned above.
Now that we have discussed different approaches, let’s move on to the coding part of the blog.
Facial Expression Recognition in OpenCV
We will be using a deep learning classifier that will be loaded with the OpenCV DNN module. The authors trained this model using the Microsoft Cognitive Toolkit (formerly CNTK) and then converted it to ONNX (Open Neural Network Exchange) format.
ONNX format allows developers to move models between different frameworks such as CNTK, Caffe2, Tensorflow, PyTorch etc.
There is also a javascript version of this model (version 1.2) with a live demo which you can check out here. In this post we will be using version 1.3 which has a better performance.
In the paper, the authors demonstrate training a deep CNN using 4 different approaches: majority voting, multi-label learning, probabilistic label drawing, and cross-entropy loss. The model that we are going to use today was trained using cross-entropy loss, which according to the author’s conclusion was one of the best performing models.
The model was trained on the FER+ dataset. FER was the standard dataset for the emotion recognition task, but in FER+ each image has been labeled by 10 crowd-sourced taggers, which provides better quality ground truth labels for still-image emotion than the original FER labels.
More information about the ONNX version of the model can be found here.
The input to our emotion recognition model is a grayscale image of 64×64 resolution. The output is the probabilities of 8 emotion classes: neutral, happiness, surprise, sadness, anger, disgust, fear, and contempt.
Here’s the architecture of the model.
Here are the steps we would need to perform:
Initialize the Dnn module.
Read the image.
Detect faces in the image.
Pre-process all the faces.
Run a forward pass on all the faces.
Get the predicted emotion scores and convert them to probabilities.
Finally get the emotion corresponding to the highest probability
Make sure you have the following Libraries Installed.
OpenCV (version 4.0 or above)
Numpy
Matplotlib
bleedfacedetector
Bleedfacedetector is my face detection library which can detect faces using 4 different algorithms. You can read more about the library here.
You can install it by doing:
pip install bleedfacedetector
Before installing bleedfacedetector make sure you have OpenCV & Dlib installed.
pip install opencv-contrib-python
To install dlib you can do:
pip install dlib OR pip install dlib==19.8.1
Directory Hierarchy
You can go ahead and download the source code from the download code section. After downloading the zip folder, unzip it and you will have the following directory structure.
You can now run the Jupyter notebook Facial Expression Recognition.ipynb and start executing each cell as follows.
Import Libraries
# Import required libraries
import cv2
import numpy as np
import matplotlib.pyplot as plt
import os
import bleedfacedetector as fd
import time
# This is the magic command to show matplotlib graphs.
%matplotlib inline
Initialize DNN Module
To use models in ONNX format, you just have to use cv2.dnn.readNetFromONNX(model) and pass the path to the model into this function.
# Set model path
model = 'Model/emotion-ferplus-8.onnx'
# Now read the model
net = cv2.dnn.readNetFromONNX(model)
Read Image
This is our image on which we are going to perform emotion recognition.
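A minimal sketch of the loading and display code (the image path is an assumption; use your own image):
# Read the input image
image = cv2.imread('Media/emotion.jpg')

# Set the figure size and display the BGR image correctly in matplotlib
plt.figure(figsize=(10,10))
plt.imshow(image[:,:,::-1]); plt.axis('off');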
Line 5-6: We’re setting the figure size and showing the image with matplotlib; [:,:,::-1] reverses the image channels so we can show OpenCV’s BGR images properly in matplotlib.
Define the available classes / labels
Now we will create a list of all 8 available emotions that we need to detect.
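A sketch of that list, following the class order given earlier (neutral, happiness, surprise, sadness, anger, disgust, fear, contempt); the exact label strings are just for display:
# List of the 8 emotion classes in the order the model outputs them
emotions = ['Neutral', 'Happy', 'Surprise', 'Sad', 'Anger', 'Disgust', 'Fear', 'Contempt']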
The next step is to detect all the faces in the image; since our target image only contains a single face, we will extract the first face we find.
img_copy = image.copy()
# Use SSD detector with 20% confidence threshold.
faces = fd.ssd_detect(img_copy, conf=0.2)
# Lets take the coordinates of the first face in the image.
x,y,w,h = faces[0]
# Define padding for face roi
padding = 3
# Extract the Face from image with padding.
padded_face = img_copy[y-padding:y+h+padding,x-padding:x+w+padding]
Line 4: We’re using an SSD based face detector with 20% filter confidence to detect faces, you can easily swap this detector with any other detector inside bleedfacedetector by just changing this line.
Line 7: We’re extracting the x,y,w,h coordinates from the first face we found in the list of faces.
Line 10-13: We're padding the face by a value of 3. This expands the face ROI boundaries, so the model looks at a slightly larger face image when predicting. I've seen this improve results in a lot of cases, although it is not required.
Padded Vs Non Padded Face
Here you can see what the final face ROI looks like when it’s padded and when it’s not padded.
# Non Padded face
face = img_copy[y:y+h, x:x+w]
# Just increasing the padding for demo purpose
padding = 20
# Get the Padded face
padded_face_demo = img_copy[y-padding:y+h+padding,x-padding:x+w+padding]
plt.figure(figsize=[10, 10])
plt.subplot(121);plt.imshow(padded_face_demo[...,::-1]);plt.title("Padded face");plt.axis('off')
plt.subplot(122);plt.imshow(face[...,::-1]);plt.title("Non Padded face");plt.axis('off');
Pre-Processing Image
Before you pass an image to a neural network you perform some image processing to get it into the right format. First we need to convert the face from BGR to grayscale, then resize the image to 64x64, which is the size our network requires. After that we'll reshape the face image into (1, 1, 64, 64); this is the final format which the network will accept.
# Convert Image into Grayscale
gray = cv2.cvtColor(padded_face,cv2.COLOR_BGR2GRAY)
# Resize into 64x64
resized_face = cv2.resize(gray, (64, 64))
# Reshape the image into required format for the model
processed_face = resized_face.reshape(1,1,64,64)
Line 2: Convert the padded face into a grayscale image.
Line 5: Resize the grayscale image to 64 x 64.
Line 8: Finally, reshape the image into the format required by our model.
Input the preprocessed Image to the Network
net.setInput(processed_face)
Forward Pass
Most of the computation takes place in this step; this is where the image goes through the whole neural network.
Output = net.forward()
Check the output
As you can see, the model outputs scores for each emotion class.
# The output are the scores for each emotion class
print('Shape of Output: {} \n'.format(Output.shape))
print(Output)
We will convert the model's scores to class probabilities between 0 and 1 by applying a softmax function to them.
# Compute softmax values for each sets of scores
expanded = np.exp(Output - np.max(Output))
probablities = expanded / expanded.sum()
# Get the final probablities
prob = np.squeeze(probablities)
print(prob)
# Get the index of the max probability, use that index to get the predicted emotion in the
# emotions list you created above.
predicted_emotion = emotions[prob.argmax()]
# Print the target Emotion
print('Predicted Emotion is: {}'.format(predicted_emotion ))
Predicted Emotion is: Surprise
Display Final Result
We already have the correct prediction from the last step, but to make it cleaner we will display the final image with the predicted emotion, and we will also draw a bounding box over the detected face.
# Write predicted emotion on image
cv2.putText(img_copy,'{}'.format(predicted_emotion),(x,y+h+75), cv2.FONT_HERSHEY_SIMPLEX, 3, (255,0,255), 7, cv2.LINE_AA)
# Draw rectangular box on detected face
cv2.rectangle(img_copy,(x,y),(x+w,y+h),(0,0,255),5)
# Display image
plt.figure(figsize=(10,10))
plt.imshow(img_copy[:,:,::-1]);plt.axis("off");
Creating Functions
Now that we have seen a step by step implementation of the network, we’ll create the 2 following python functions.
Initialization Function: This function will contain parts of the network that will be set once, like loading the model.
Main Function: This function will contain all the rest of the code, from preprocessing to postprocessing. It will also have the option to either return the image or display it with matplotlib.
Furthermore, the Main Function will be able to predict the emotions of multiple people in a single image, as we will be doing all the operations in a loop.
Initialization Function
def init_emotion(model="Model/emotion-ferplus-8.onnx"):
    # Set global variables
    global net, emotions
    # Define the emotions
    emotions = ['Neutral', 'Happy', 'Surprise', 'Sad', 'Anger', 'Disgust', 'Fear', 'Contempt']
    # Initialize the DNN module
    net = cv2.dnn.readNetFromONNX(model)
Main Function
Set returndata = True when you just want the image. I usually do this when working with videos.
def emotion(image, returndata=False):
    # Make a copy of the image
    img_copy = image.copy()
    # Detect faces in the image
    faces = fd.ssd_detect(img_copy, conf=0.2)
    # Define padding for the face ROI
    padding = 3
    # Iterate over all detected faces
    for x, y, w, h in faces:
        # Get the padded face from the image
        face = img_copy[y-padding:y+h+padding, x-padding:x+w+padding]
        # Convert the detected face from BGR to grayscale
        gray = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)
        # Resize the grayscale image to 64x64
        resized_face = cv2.resize(gray, (64, 64))
        # Reshape the image into the format required by the model
        processed_face = resized_face.reshape(1, 1, 64, 64)
        # Input the processed image
        net.setInput(processed_face)
        # Forward pass
        Output = net.forward()
        # Compute softmax values for the scores
        expanded = np.exp(Output - np.max(Output))
        probablities = expanded / expanded.sum()
        # Get the final probabilities by getting rid of any extra dimensions
        prob = np.squeeze(probablities)
        # Get the predicted emotion
        predicted_emotion = emotions[prob.argmax()]
        # Write the predicted emotion on the image
        cv2.putText(img_copy, '{}'.format(predicted_emotion), (x, y+h+(1*20)), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 255),
                    2, cv2.LINE_AA)
        # Draw a rectangular box on the detected face
        cv2.rectangle(img_copy, (x, y), (x+w, y+h), (0, 0, 255), 2)

    if returndata:
        # Return the final image if returndata is True
        return img_copy
    else:
        # Display the image
        plt.figure(figsize=(10, 10))
        plt.imshow(img_copy[:, :, ::-1]); plt.axis("off");
You can also take the main function we created above, put it inside a loop, and it will start detecting facial expressions in a video. The code below detects emotions from a video (or your webcam) in real time. Make sure to set returndata = True.
fps = 0
init_emotion()
cap = cv2.VideoCapture('media/bean_input.mp4')
# If you want to use the webcam then pass 0
# cap = cv2.VideoCapture(0)

while(True):
    start_time = time.time()
    ret, frame = cap.read()
    if not ret:
        break
    image = cv2.flip(frame, 1)
    image = emotion(image, returndata=True)
    cv2.putText(image, 'FPS: {:.2f}'.format(fps), (10, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 20, 55), 1)
    cv2.imshow("Emotion Recognition", image)
    k = cv2.waitKey(1)
    fps = (1.0 / (time.time() - start_time))
    if k == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Conclusion:
Here’s the confusion matrix of the model from the author’s paper. As you can see this model is not good at predicting Disgust, Fear & Contempt classes.
You can try running the model on different images and you'll likely agree with the matrix above: the last three classes are pretty difficult to predict. It's also hard for us humans to differentiate between that many emotions based on facial expression alone, since a lot of micro-expressions overlap between these classes, so it's understandable why the algorithm has a hard time differentiating between 8 different emotional expressions.
Improvement Suggestions:
Still, if you really want to detect some expressions that the model seems to fail on, then the best way to go about it is to train the model yourself on your own data. Ethnicity & color can make a lot of difference. Also, try removing some emotion classes so the model can focus only on those that you care about.
You can also try changing the padding value, this seems to help in some cases.
If you’re working on a live video feed then try to average the results of several frames instead of giving a new result on every new frame.
You’ll come across many Computer Vision courses out there, but nothing beats a 1 on 1 video call support from an expert in the field. Plus there is a plethora of subfields and tons of courses on AI and computer vision out there, you need someone to lay out a step-by-step learning path customized to your needs. This is where I come in, whether you need monthly support or just want to have a one-time chat with me, I’ve got you covered. Check all the coaching details and packages here
In this tutorial we first learned about the Emotion Recognition problem, why it’s important, and what are the different approaches we could take to develop such systems.
Then we learned to perform emotion recognition using OpenCV’s DNN module. After that, we went over some ways on how to improve our results.
I hope you enjoyed this tutorial. If you have any questions regarding this post then please feel free to comment below and I’ll gladly answer them.
You can reach out to me personally for a 1 on 1 consultation session in AI/computer vision regarding your project. Our talented team of vision engineers will help you every step of the way. Get on a call with me directly here.
I would recommend that you go over that tutorial before reading this one, but you can still easily follow along with this tutorial. For those of you who don't know what super-resolution is, here is an explanation.
Super Resolution can be defined as the class of Algorithms that upscales an image without losing quality, meaning you take a low-resolution image like an image of size 224×224 and upscale it to a high-resolution version like 1792×1792 (An 8x resolution) without any loss in quality. How cool is that?
Anyways that is Super resolution, so how is this different from the normal resizing you do?
When you normally resize or upscale an image you use Nearest Neighbor Interpolation. This just means you expand the pixels of the original image and then fill the gaps by copying the values of the nearest neighboring pixels.
The result is a pixelated version of the image.
There are better interpolation methods for resizing, like bilinear or bicubic interpolation, which take a weighted average of neighboring pixels instead of just copying them.
Still the results are blurry and not great.
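If you want to see the difference for yourself, here's a quick sketch comparing both kinds of interpolation with OpenCV's resize function (the image path is a placeholder):
import cv2

# Read a low-resolution image (placeholder path)
image = cv2.imread('media/low_res.jpg')
h, w = image.shape[:2]

# Upscale 4x by copying the nearest neighboring pixel (pixelated result)
nearest = cv2.resize(image, (w * 4, h * 4), interpolation=cv2.INTER_NEAREST)

# Upscale 4x with bicubic interpolation (smoother, but still blurry)
bicubic = cv2.resize(image, (w * 4, h * 4), interpolation=cv2.INTER_CUBIC)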
The super resolution methods enhance/enlarge the image without the loss of quality, Again, for more details on the theory of super resolution methods, I would recommend that you read my Super Resolution with OpenCV Tutorial.
In the above tutorial I describe several architectural improvements that happened with SR Networks over the years.
That all changes now, in this tutorial we will work with multiple models, even those that will do 8x resolution.
Today, we won't be using the DNN module. We could do that, but for the super-resolution problem OpenCV comes with a special module called dnn_superres which is designed to use 4 different powerful super-resolution networks. One of the best things about this module is that it does the required pre- and post-processing internally, so with only a few lines of code you can do super resolution.
The 4 models we are going to use are:
EDSR: Enhanced Deep Residual Network from the paper Enhanced Deep Residual Networks for Single Image Super-Resolution (CVPR 2017) by Bee Lim et al.
ESPCN: Efficient Subpixel Convolutional Network from the paper Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network (CVPR 2016) by Wenzhe Shi et al.
FSRCNN: Fast Super-Resolution Convolutional Neural Networks from the paper Accelerating the Super-Resolution Convolutional Neural Network (ECCV 2016) by Chao Dong et al.
LapSRN: Laplacian Pyramid Super-Resolution Network from the paper Deep Laplacian pyramid networks for fast and accurate super-resolution (CVPR 2017) by Wei-Sheng Lai et al.
Here are the papers for the models and some extra resources.
Make sure to download the zip folder from the download code section above. As you can see by clicking the Download models link that each model has different versions like 3x, 4x etc. This means that the model can perform 3x resolution, 4x resolution of the image, and so on. The download zip that I provide contains only a single version of each of the 4 models above.
You can feel free to test out other models by downloading them. These models should be present in your working directory if you want to use them with the dnn_superres module.
Now the inclusion of this super easy-to-use dnn_superres module is the result of the work of 2 developers, Xavier Weber and Fanny Monori. They developed this module as part of their GSOC (Google Summer of Code) project. GSOC 2019 also made NVIDIA GPU support possible.
It’s always amazing to see how a summer project for students by google brings forward some great developers making awesome contributions to the largest Computer Vision library out there.
The dnn_superres module in OpenCV was included in version 4.1.2 for C++, but the Python wrappers were added in version 4.3 about a month back, so you have to make sure that you have OpenCV version 4.3 installed. And of course, since this module is part of the contrib package, make sure you have also installed the OpenCV contrib package.
[UPDATE 7/8/2020, OPENCV 4.3 IS NOW PIP INSTALLABLE]
Note: You can't install OpenCV 4.3 by doing a pip install, as the latest version of opencv-contrib-python on pip is still 4.2.0.34.
So the pypi version of OpenCV is maintained by just one guy, Olli-Pekka Heinisuo (username: skvark), and he updates the pypi OpenCV package in his free time. Currently, he's facing a compiling issue, which is why the 4.3 version has not come out as of 7-15-2020. But from what I have read, he will be building the .whl files for the 4.3 version soon; it may be out this month. If that happens then I'll update this post.
So right now the only way you will be able to use this module is if you have installed OpenCV 4.3 from Source. If you haven’t done that then you can easily follow my installation tutorial.
I should also take this moment to highlight the fact that you should not always rely on OpenCV's pypi package. No doubt skvark has been doing a tremendous job maintaining OpenCV's pypi repo, but this issue tells you that you can't rely on a single developer's free time to update the library for production use cases; learn to install the official library from source. Still, pip install opencv-contrib-python is a huge blessing for people starting out or in the early stages of learning OpenCV, so hats off to skvark.
As you might have noticed, among the 4 models above we have already learned to use ESPCN in the previous tutorial; we will use it again, but this time with the dnn_superres module.
Super Resolution with dnn_superres Code
Directory Hierarchy
After downloading the zip folder, unzip it and you will have the following directory structure.
This is what our directory structure looks like: it has a Jupyter notebook, a media folder with images, and the models folder containing all 4 models.
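The import and setup cells aren't shown here; assuming the standard dnn_superres import (the same one the functions later in this post rely on), they look roughly like this, with the test image path being a placeholder:
# Import the required libraries
import cv2
import os
import time
import matplotlib.pyplot as plt
from cv2 import dnn_superres
%matplotlib inline

# Create an instance of the DNN super resolution implementation
sr = dnn_superres.DnnSuperResImpl_create()

# Read the image we want to upscale (placeholder path)
image = cv2.imread('media/test.jpg')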
In the next few steps, we will be using a setModel() function in which we will pass the model's name and its scale. We could set these manually, but all this information is already present in the model's pathname, so we just need to extract the model's name and scale using simple text processing.
# Define model path, if you want to use a different model then just change this path.
model_path = "models/EDSR_x4.pb"
# Extract model name, get the text between '/' and '_'
model_name = model_path.split('/')[1].split('_')[0].lower()
# Extract model scale
model_scale = int(model_path.split('/')[1].split('_')[1].split('.')[0][1])
# Display the name and scale
print("model name: "+ model_name)
print("model scale: " + str(model_scale))
model name: edsr model scale: 4
Reading the model
Finally we will read the model; this is where all the required weights of the model get loaded. This is equivalent to the DNN module's readNet function.
# Read the desired model
sr.readModel(model_path)
Setting Model Name & Scale
Here we are setting the name and scale of the model which we extracted above.
Why do we need to do that ?
Remember when I said that this module does not require us to do preprocessing or postprocessing because it does that internally? In order to initiate the correct pre- and post-processing pipelines, the module needs to know which model we will be using and what version, meaning what scale: 2x, 3x, 4x, etc.
# Set the desired model and scale to get correct pre-processing and post-processing
sr.setModel(model_name, model_scale)
Running the Network
This is where all the magic happens. In this line, a forward pass of the network is performed along with the required pre- and post-processing. We are also noting the time taken, as this tells us whether the model can run in real time.
As you can see, it takes a lot of time; in fact, EDSR is the most computationally expensive of the four models.
It should also be noted that the larger your input image's resolution, the more time this step will take.
%%time
# Upscale the image
Final_Img = sr.upsample(image)
Wall time: 45.1 s
Check the Shapes
We’re also checking the shapes of the original image and the super resolution image. As you can see the model upscaled the image by 4 times.
print('Shape of Original Image: {} , Shape of Super Resolution Image: {}'.format(image.shape, Final_Img.shape))
Shape of Original Image: (262, 347, 3) , Shape of Super Resolution Image: (1200, 1200, 3)
Comparing the Original Image & Result
Finally we will display the original image along with its super resolution version. Observe the difference in Quality.
Although you can see the improvement in quality, you can't observe the true difference with matplotlib, so it's recommended that you save the SR image to disk and then look at it.
# Save the image
cv2.imwrite("outputs/testoutput.png", Final_Img);
Creating Functions
Now that we have seen a step by step implementation of the whole pipeline, we’ll create the 2 following python functions so we can use different models on different images by just calling a function and passing some parameters.
Initialization Function: This function will contain parts of the network that will be set once, like loading the model.
Main Function: This function will contain the rest of the code. It will also have the option to either return the image or display it with matplotlib. We can also use this function to process a real-time video.
Initialization Function
def init_super(model, base_path='models'):
    # Define global variables
    global sr, model_name, model_scale
    # Create an SR object
    sr = dnn_superres.DnnSuperResImpl_create()
    # Define the model path
    model_path = os.path.join(base_path, model + ".pb")
    # Extract the model name from the model path
    model_name = model.split('_')[0].lower()
    # Extract the model scale from the model path
    model_scale = int(model.split("_")[1][1])
    # Read the desired model
    sr.readModel(model_path)
    sr.setModel(model_name, model_scale)
Main Function
Set returndata = True when you just want the image; I usually do this when working with videos. I've also added a few more optional parameters to the function.
print_shape: This variable decides if you want to print out the shape of the model’s output.
name: This is the name by which you will save the image to disk.
save_img: This variable decides if you want to save the images in disk or not.
def super_res(image, returndata=False, save_img=True, name='test.png', print_shape=True):
    # Upscale the image
    Final_Img = sr.upsample(image)
    if returndata:
        return Final_Img
    else:
        if print_shape:
            print('Shape of Original Image: {} , Shape of Super Resolution Image: {}'.format(image.shape, Final_Img.shape))
        if save_img:
            cv2.imwrite("outputs/" + name, Final_Img)
        plt.figure(figsize=[25, 25])
        plt.subplot(2, 1, 1); plt.imshow(image[:, :, ::-1], interpolation='bicubic'); plt.title("Original Image"); plt.axis("off");
        plt.subplot(2, 1, 2); plt.imshow(Final_Img[:, :, ::-1], interpolation='bicubic');
        plt.title("SR Model: {}, Scale: {}x ".format(model_name.upper(), model_scale)); plt.axis("off");
Now that we have created the initialization function and a main function, let's use all 4 models on different examples.
The function above displays the original image along with the SR Image.
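The notebook cells that produced the results below aren't reproduced here, but calling the pair of functions looks roughly like this (the model name and image path are placeholders you can swap for any of the four models and your own test images):
%%time
# Initialize one of the models, e.g. EDSR at 4x scale
init_super('EDSR_x4')
# Read a test image (placeholder path) and run super resolution on it
image = cv2.imread('media/test.jpg')
super_res(image, name='edsr_4x.png')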
Initialize Enhanced Deep Residual Network (EDSR, 4x Resolution)
Shape of Original Image: (302, 357, 3) , Shape of Super Resolution Image: (2416, 2856, 3) Wall time: 26 s
Applying Super Resolution on Video
Lastly, I'm also providing the code to run super-resolution on videos. The example video I've used isn't great, but it's the only one I tested on, primarily because I'm mostly interested in doing super resolution on images, as that's where most of my use cases lie. Feel free to test out different models on a real-time feed.
Tip: You might also want to save the High res video in disk using the VideoWriter Class.
# Set the fps counter to 0
fps = 0
# Initialize the network.
init_super("ESPCN_x4")
# Initialize the VideoCapture object with the video.
cap = cv2.VideoCapture('media/demo1.mp4')

while(True):
    # Note the starting time for the fps calculation.
    start_time = time.time()
    # Read frame by frame.
    ret, frame = cap.read()
    # Break the loop if the video ends.
    if not ret:
        break
    # Perform SR with returndata = True.
    image = super_res(frame, returndata=True)
    # Put the value of FPS on the video.
    cv2.putText(image, 'FPS: {:.2f}'.format(fps), (10, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 20, 55), 1)
    # Show the current frame.
    cv2.imshow("Super Resolution", image)
    # Wait 1 ms and calculate the fps.
    k = cv2.waitKey(1)
    fps = (1.0 / (time.time() - start_time))
    # If the user presses the `q` button then break the loop.
    if k == ord('q'):
        break

# Release the camera and destroy all the windows.
cap.release()
cv2.destroyAllWindows()
Conclusion
Here’s a chart for benchmarks using a 768×512 image with 4x resolution on an Intel i7-9700K CPU for all models.
The benchmark shows PSNR (peak signal-to-noise ratio) and SSIM (structural similarity index measure) scores; these scores measure how good the super-resolution network's output is.
The best performing model is EDSR but it has the slowest inference time, the rest of the models can work in real time.
If you thought upscaling to 8x resolution was cool, then take a guess at the scaling ability of the current state-of-the-art algorithm in super-resolution.
So believe it or not the state of the art in SR can actually do a 64x resolution…yes 64x, that wasn’t a typo.
In fact, the model that does 64x was published just last month, here’s the paper for that model, here’s the GitHub repo and here is a ready to run colab notebook to test out the code. Also here’s a video demo of it. It’s pretty rare that such good stuff is easily accessible for programmers just a month after publication so make sure to check it out.
The model is far too complex to explain in this post but the authors took a totally different approach, instead of using supervised learning they used self-supervised learning. (This seems to be on the rise).
In today’s tutorial we learned to use 4 different architectures to do Super resolution going from 3x to 8x resolution.
Since the library handles preprocessing and postprocessing, the code for all the models was almost the same and pretty short.
As I mentioned earlier, I only showed you results of a single version of each model, you should go ahead and try other versions of each model.
These models have been trained on the DIV2K, BSDS, and General100 datasets, which contain images of diverse objects, but the best results from a super-resolution model are obtained by training it for a domain-specific task. For example, if you want the SR model to perform best on pedestrians, then your dataset should consist mostly of pedestrian images. The best part about training SR networks is that you don't need to spend hours doing manual annotation; you can just resize the images and you're all set.
I would also raise a concern regarding these models: we must be careful when using SR networks. For example, consider this scenario:
You caught an image of a thief stealing your mail on your low-res front door cam; the image looks blurry and you can't make out who's in it.
Now you being a Computer Vision enthusiast thought of running a super res network to get a clearer picture. After running the network, you get a much clearer image and you can almost swear that it’s Joe from the next block.
The same Joe that you thought was a friend of yours.
The same Joe that made different poses to help you create a pedestrian dataset for that SR network you're using right now.
How could Joe do this?
Now you feel betrayed, but you also feel really smart; you solved a crime with AI, right?
You Start STORMING to Joe’s house to confront him with PROOF.
Now hold on! … like really hold on.
Don’t do that, seriously don’t do that.
Why did I go on a rant like that?
Well, to be honest, back when I initially learned about SR networks, that's almost exactly what I thought I would do: solve crimes with AI by doing just that (I know, it was a ridiculous idea). But I soon realized that SR networks only learn to hallucinate data based on what they've learned during training; they can't reconstruct a face they've never seen with 100% accuracy. It's still pretty useful, but you have to use this technology carefully.
I hope you enjoyed this tutorial, feel free to comment below and I’ll gladly reply.
This tutorial will serve as a crash course on the dlib library. Dlib is another powerful computer vision library out there. It is not as extensive as OpenCV, but still, there is a lot you can do with it.
This crash course assumes you're somewhat familiar with OpenCV; if not, I've also published a crash course on OpenCV. Make sure to download the Dlib Resource Guide above, which includes all the important links in this post.
Side Note: I missed publishing a tutorial last week as I tested positive for covid and was ill. I'm still not 100% but getting better 🙂
The Dlib library is created and maintained by Davis King. It's a C++ toolkit containing machine learning & computer vision algorithms for a number of important tasks including facial landmark detection, deep metric learning, object tracking, and more. It also has a Python API.
Note: It's worth noting that the main power of the dlib library is in numerical optimization, but today I'm only going to focus on applications; you can look at optimization examples here.
It’s a popular library that is used by people in both industry and academia in a wide range of domains including robotics, embedded devices, and other areas.
I plan to cover most of the prominent features and algorithms present in the dlib library so this blog post alone can give you the best overview of dlib and its functionality. Now, this is a big statement; if I had to explain most of dlib's features in a single place, I would probably be writing a book or making a course on it, but instead I plan to explain it all in this post.
So how am I going to accomplish that?
So here's the thing: I'm not going to write and explain the code for each algorithm in the dlib library, because I don't want to write a blog post several thousands of words long, and also because almost all of the features of the dlib library have been explained pretty well in several posts on the internet.
So if everything is out there, then why the heck am I trying to make a crash course out of it?
So here’s the real added value of this crash course:
In this post, I will connect all the best and the most important tutorials on different aspects of dlib library out there in a nice hierarchical order. This will not only serve as a golden Dlib library 101 to Mastery post for people just starting out with dlib but will also serve as a well-structured reference guide for dlib library users.
The post is split into various sections. In each section, I will briefly explain a useful algorithm or technique present in the dlib library. If that explanation intrigues you and you feel that you need to explore that particular algorithm further, then each section provides links to high-quality tutorials that go in-depth on that topic. The links will mostly be from PyImageSearch and LearnOpenCV, as these are golden sites when it comes to computer vision tutorials.
When learning some topic, ideally we prefer these two things:
A Collection of all the useful material regarding the topic presented at one place in a nice and neat hierarchical order.
Each material presented and delivered in a high-quality format preferably by an author who knows how to teach it the right way.
In this post, I’ve made sure both of these points are true, all the information is presented in a nice order and the posts that I link to will be of high quality. Other than that I will also try to include other extra resources where I feel necessary.
Installing dlib with a plain pip install dlib will only work if you have Visual Studio (i.e. you need a C++ compiler) and CMake installed, as dlib will build and compile first before installing. If you don't have these, you can use my OpenCV source installation tutorial to install these two things.
If you don't want to bother installing these, then here's what you can do: if you have a Python version greater than 3.6, create a virtual environment for Python 3.6 using Anaconda or virtualenv.
After creating a python 3.6 environment you can do:
pip install dlib==19.8.1
This will let you directly install pre-built binaries of dlib but this currently only works with python 3.6 and below.
Now that we have installed dlib, let’s start with face detection.
Why face detection ?
Well, most of the interesting use cases of dlib for computer vision involve faces, like facial landmark detection, face recognition, etc., so before we can detect facial landmarks, we need to detect faces in the image.
Dlib not only comes with a face detector, it actually comes with 2 of them. If you're a computer vision practitioner then you would most likely be familiar with the old Haar cascade based face detector. Although this face detector is quite popular, it's almost 2 decades old and not very effective when it comes to different orientations of faces.
Dlib comes with 2 face detection algorithms that are way more effective than the haar cascade based detectors.
These 2 detectors are:
HOG (histogram of oriented gradients) based detector: This detector uses HOG and support vector machines. It's slower than Haar cascades, but it's more accurate and able to handle different orientations.
CNN based detector: This is a really accurate deep learning based detector, but it's extremely slow on a CPU; you should only use this if you've compiled dlib with GPU support.
You can learn more about these detectors here. Other than that I published a library called bleedfacedetector which lets you use these 2 detectors using just a few lines of the same code, and the library also has 2 other face detectors including the haar cascade one. You can look at bleedfacedetector here.
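If you just want to quickly try dlib's built-in HOG detector, a minimal sketch looks like this (the image path is a placeholder; the CNN detector works similarly but needs its model file and ideally a GPU):
import cv2
import dlib

# Load dlib's HOG + SVM based face detector
hog_detector = dlib.get_frontal_face_detector()

# Read an image (placeholder path) and convert it to grayscale
image = cv2.imread('media/face.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# The second argument is how many times to upsample the image before detecting
faces = hog_detector(gray, 1)
for rect in faces:
    x1, y1, x2, y2 = rect.left(), rect.top(), rect.right(), rect.bottom()
    cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)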
Now that we have learned how to detect faces in images, we will now learn the most common use case of dlib library which is facial landmark detection, with this method you will be able to detect key landmarks/features of the face like eyes, lips, etc.
The detection of these features will allow you to do a lot of things like track the movement of eyes, lips to determine the facial expression of a person, control a virtual Avatar with your facial expressions, understand 3d facial pose of a person, virtual makeover, face swapping, morphing, etc.
Remember those smart Snapchat overlays which trigger based on the facial movement, like that tongue that pops out when you open your mouth, well you can also make that using facial landmarks.
So it suffices to say that facial landmark detection has a lot of interesting applications.
After reading the above tutorial, the next step is to learn to manipulate the ROI of these landmarks so you can modify or extract the individual features like the eyes, nose, lips, etc. You can learn that by reading this tutorial.
After you have gone through both of the above tutorials, you're ready to run the landmark detector in real time; if you're still confused about the exact process, then take a look at this tutorial.
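As a rough sketch of the basic pipeline those tutorials walk through (the 68 point model file is downloaded separately from dlib.net, and the image path is a placeholder):
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# Load the 68 point shape predictor model file
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

image = cv2.imread('media/face.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for rect in detector(gray, 1):
    # Predict the 68 landmarks for this face and draw each point
    shape = predictor(gray, rect)
    for i in range(68):
        point = shape.part(i)
        cv2.circle(image, (point.x, point.y), 2, (0, 255, 0), -1)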
After you’re fully comfortable working with facial landmarks that’s when the fun starts. Now you’re ready to make some exciting applications, you can start by making a blink detection system by going through the tutorial here.
The main idea for a blink detection system is really simple, you just look at 2 vertical landmark points of the eyes and take the distance between these points, if the distance is too small (below some threshold) then that means the eyes are closed.
Of course, for a robust estimate, you won't just settle for the distance between two points; rather, you will take a smart average of several distances. One smart approach is to calculate a metric called the Eye Aspect Ratio (EAR) for each eye. This metric was introduced in a paper called "Real-Time Eye Blink Detection using Facial Landmarks".
This will allow you to utilize all 6 x,y landmark points of the eyes returned by dlib, and this way you can accurately tell if there was a blink or not.
Here’s the equation to calculate the EAR.
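The equation image isn't reproduced here; for reference, the EAR as defined in that paper, with p1 to p6 being the six eye landmark points (p1 and p4 are the horizontal eye corners), is:
EAR = \frac{\lVert p_2 - p_6 \rVert + \lVert p_3 - p_5 \rVert}{2 \, \lVert p_1 - p_4 \rVert}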
The full implementation details are explained in the tutorial linked above.
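As a rough sketch of how that equation translates to code (assuming the six eye points come from dlib's 68 point predictor as NumPy arrays):
import numpy as np

def eye_aspect_ratio(eye):
    # eye: array of the six (x, y) landmark points of one eye, ordered p1..p6
    A = np.linalg.norm(eye[1] - eye[5])  # vertical distance p2 - p6
    B = np.linalg.norm(eye[2] - eye[4])  # vertical distance p3 - p5
    C = np.linalg.norm(eye[0] - eye[3])  # horizontal distance p1 - p4
    return (A + B) / (2.0 * C)

# A blink can then be flagged whenever the EAR drops below a chosen threshold, e.g. around 0.2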
You can also easily extend the above method to create a drowsiness detector that alerts drivers if they feel drowsy; this can be done by monitoring how long the eyes stay closed. It is a really simple extension of the above, has real-world applications, and could be used to save lives. Here's a tutorial that explains how to build a drowsiness detection system step by step.
Interestingly, you can take the same blink detection approach above and apply it to the lips instead of the eyes to create a smile detector. Yeah, the only things you would need to change would be the x,y point coordinates (replace eye points with lip points), the EAR equation (use trial and error or intuition to change this), and the threshold.
A few years back I created this smile camera application with only a few lines of code; it takes a picture when you smile. You can easily create that by modifying the above tutorial.
What more can you create with this ?
How about a yawn detector, or a detector that tells if the user's mouth is open or not? You can do this by slightly modifying the above approach: you will be using the same lips x,y landmark points; the only difference will be how you calculate the distance between points.
Here's a cool application I built a while back: it's the infamous Google dino game, controlled by me opening and closing my mouth.
The only drawback of the above application is that I can’t munch food while playing this game.
Taking the same concepts above you can create interesting snapchat overlay triggers.
Here’s an eye bulge and fire throw filter I created that triggers when I glare or open my mouth.
Similarly you can create lots of cool things using the facial landmarks.
Facial Alignment & Filter Orientation Correction:
Doing a bit of math with the facial landmarks will allow you to do facial alignment correction. Facial alignment allows you to correctly orient a rotated face.
Why is facial alignment important?
One of the most important use cases for facial alignment is in face recognition; many classical face recognition algorithms will perform better if the face is oriented correctly before performing inference on them.
One other useful thing concerning facial alignment is that you can actually extract the angle of the rotated face. This is pretty useful when you're working with an augmented reality filter application, as it allows you to rotate the filters according to the orientation of the face.
Here’s an application I built that does that.
Head Pose Estimation:
A problem similar to facial alignment correction is head pose estimation. With this technique, instead of determining the 2d head rotation, you will learn to extract the full 3d head pose orientation. This is particularly useful when you're working with an augmented reality application, like overlaying a 3d mask on the face; you will only be able to correctly render the 3d object on the face if you know the face's 3d orientation.
Landmark detection is not all dlib has to offer; there are other useful techniques, like a correlation tracking algorithm for object tracking, that come packed with dlib.
This tracker works well with changes in translation and scale and it works in real time.
Object Detection VS Object Tracking:
If you're just starting out in your computer vision journey and have some confusion regarding object detection vs tracking, then understand that in object detection, you try to find every instance of the target object in the whole image, and you perform this detection in each frame of the video. There can be multiple instances of the same object and you'll detect all of them with no differentiation between those object instances.
What I'm trying to say above is that a single image or frame of a video can contain multiple objects of the same class; e.g. multiple cats can be present in the same image, and the object detector will see them all as the same thing, `CAT`, with no difference between the individual cats throughout the video.
Whereas an Object Tracking algorithm will track each cat separately in each frame and will recognize each cat by a unique ID throughout the video.
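To give you an idea of how simple dlib's correlation tracker mentioned above is to use, here's a rough sketch (the video path and initial bounding box are placeholders; in practice you'd get the initial box from a detector):
import cv2
import dlib

tracker = dlib.correlation_tracker()
cap = cv2.VideoCapture('media/video.mp4')

# Start tracking whatever lies inside the initial (placeholder) bounding box
ret, frame = cap.read()
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
tracker.start_track(rgb, dlib.rectangle(100, 100, 300, 300))

while True:
    ret, frame = cap.read()
    if not ret:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # Update the tracker with the new frame and draw the tracked position
    tracker.update(rgb)
    pos = tracker.get_position()
    x1, y1, x2, y2 = int(pos.left()), int(pos.top()), int(pos.right()), int(pos.bottom())
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow('Correlation Tracking', frame)
    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()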
Here’s a series of cool facial manipulations you can do by utilizing facial landmarks and some other techniques.
Face Morphing:
What you see in the above video is called facial morphing. I’m sure you have seen such effects in other apps and movies. This effect is a lot more than a simple image pixel blending or transition.
To have a morph effect like the above, you need to do image alignment, establish pixel correspondences using facial landmark detection and more.
By understanding and utilizing facial morphing techniques you can even do morphing between dissimilar objects like a face to a lion.
Face Swapping:
After you've understood face morphing, another really interesting thing you can do is face swapping, where you take a source face and put it over a destination face, like putting Modi's face over Musharraf's above.
The techniques underlying face swapping are pretty similar to those used in face morphing, so there is not much new here.
The way this swapping is done makes the results look real and freakishly weird. See how everything from lighting to skin tone is matched.
Tip: If you want to make the above code work in real-time then you would need to replace the seamless cloning function with some other faster cloning method, the results won’t be as good but it’ll work in real-time.
Note: Although this technique gives excellent results, it should be noted that the state of the art in face swapping is achieved by deep learning based methods (deepfakes, FaceApp, etc.).
Face Averaging:
Average face of: Aiman Khan, Ayeza Khan, Mahira Khan, Mehwish Hayat, Saba Qamar & Syra Yousuf
Similar to the above methods, there's also face averaging, where you smartly average several faces together utilizing facial landmarks.
The face image you see above is the average face I created using 6 different Pakistani female celebrities.
It should not come as a surprise that dlib also has a face recognition pipeline. Not only that, but the face recognition implementation is a really robust one: it's a modified version of ResNet-34, based on the paper "Deep Residual Learning for Image Recognition" by He et al., and it has an accuracy of 99.38% on the Labeled Faces in the Wild (LFW) benchmark. The network was trained on a dataset of roughly 3 million face images.
The model was trained using deep metric learning and for each face, it learned to output a 128-dimensional vector. This vector encodes all the important information about the face. This vector is also called a face embedding.
First, you will store face embeddings of some target faces, and then you will test on new face images, meaning you will extract embeddings from the test images and compare them with the saved embeddings of the target faces.
If two vectors are similar (i.e. the euclidean distance between them is small) then it’s said to be a match. This way you can make thousands of matches pretty fast. The approach is really accurate and works in real-time.
Dlib's implementation of face recognition can be found here, but I would recommend that you use the face_recognition library to do face recognition. This library uses dlib internally and makes the code a lot simpler.
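Here's a rough sketch of what that workflow looks like with the face_recognition library (the image paths are placeholders):
import face_recognition

# Load a known face and an unknown face (placeholder paths)
known_image = face_recognition.load_image_file('known_person.jpg')
unknown_image = face_recognition.load_image_file('unknown_person.jpg')

# Each encoding is a 128-dimensional embedding of the first face found in the image
known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# Compare the embeddings; a smaller euclidean distance means more similar faces
match = face_recognition.compare_faces([known_encoding], unknown_encoding)[0]
distance = face_recognition.face_distance([known_encoding], unknown_encoding)[0]
print('Match: {}, Distance: {:.2f}'.format(match, distance))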
Consider this: you went to a museum with a number of friends, and all of them asked you to take their pictures in front of several monuments/statues, such that each of your friends had several images of themselves taken by you.
Now after the trip, all your friends ask for their pictures, but you don't want to send each of them your whole folder. So what can you do here?
Fortunately, face clustering can help you out here, this method will allow you to make clusters of images of each unique individual.
Consider another use case: You want to quickly build a face recognition dataset for 10 office people that reside in a single room. Instead of taking manual face samples of each person, you instead record a short video of everyone together in the room, you then use a face detector to extract all the faces in each frame, and then you can use a face clustering algorithm to sort all those faces into clusters/folders. Later on, you just need to name these folders and your dataset is ready.
Clustering is a useful unsupervised problem and has many more use cases. Face clustering is built on top of face recognition so once you’ve understood the recognition part this is easy.
Just like dlib's facial landmark detector, you can train your own custom landmark detector, also called a shape predictor. You aren't restricted to facial landmarks; you can go ahead and train a landmark detector for almost anything: body joints of a person, key points of a particular object, etc.
As long as you can get sufficient annotated data for the key points, you can use dlib to train a landmark detector on it.
Just like a custom landmark detector, you can train a custom object detector with dlib. Dlib uses Histogram of Oriented Gradients (HOG) features and a Support Vector Machine (SVM) classifier. Combine this with sliding windows and image pyramids and you've got yourself an object detector. The only limitation is that you can train it to detect a single object class at a time.
The Object detection approach in dlib is based on the same series of steps used in the sliding window based object detector first published by Dalal and Triggs in 2005 in the Histograms of Oriented Gradients for Human Detection.
HOG + SVM based detectors are among the strongest non deep learning based approaches for object detection. Here's a hand detector I built using this approach a few years back.
I didn't even annotate or collect training data for my hands; instead I made a sliding window application that automatically collected pictures of my hand as the window moved across the screen and I placed my hand inside the bounding box.
Afterward, I took this hand detector and created a video game car controller, so I was literally steering the video game car with my hands. To be honest, that wasn't a pleasant experience; my hand was sore afterwards. Making something cool is not hard, but it would take a whole lot of effort to make a practical VR or AR based application.
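If you want to try training a HOG + SVM detector of your own, dlib's Python API for it looks roughly like this (the XML annotation file and output names are placeholders; the annotations follow the format produced by dlib's imglab tool):
import dlib

# Training options for dlib's HOG + SVM detector
options = dlib.simple_object_detector_training_options()
options.add_left_right_image_flips = True  # simple data augmentation by mirroring
options.C = 5                              # SVM regularization parameter, tune for your data
options.num_threads = 4
options.be_verbose = True

# Train on the annotated dataset and save the detector to disk (placeholder file names)
dlib.train_simple_object_detector('training.xml', 'my_detector.svm', options)

# Load the trained detector later and run it on images
detector = dlib.simple_object_detector('my_detector.svm')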
Dlib Optimizations For Faster & Better Performance:
Here’s a bunch of techniques and tutorials that will help you get the most out of dlib’s landmark detection.
Using A Faster Landmark Detector:
Besides the 68 point landmark detector, dlib also has a 5 point landmark detector that is 10 times smaller and about 10% faster than the 68 point one. If you need more speed and the 5 landmark points visualized above are all you need, then you should opt for this detector. From what I've seen, it's also somewhat more efficient than the 68 point detector.
There are a bunch of tips and techniques that you can use to get faster detection speed. The landmark detector itself is really fast; it's the rest of the pipeline that takes up a lot of time. Some tricks you can use to increase speed are:
Skip Frames:
If you’re reading from a high fps camera then it won’t hurt to perform detection on every other frame, this will effectively double your speed.
Reduce image Size:
If you're using a HOG + sliding window based detector or a Haar cascade + sliding window based one, then the face detection speed depends upon the size of the image. So one smart thing you can do is reduce the image size before face detection and then rescale the detected coordinates for the original image later, as in the sketch below.
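Here's a rough sketch of that trick, assuming a dlib HOG detector and an OpenCV image (the image path and scale factor are just placeholders to tune):
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
frame = cv2.imread('media/face.jpg')  # placeholder; in practice a video frame

scale = 0.5  # shrink the frame to half its size before detection
small = cv2.resize(frame, None, fx=scale, fy=scale)
gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)

# Detect on the smaller image, then rescale the boxes back to the original frame
rects = detector(gray, 0)
boxes = [(int(r.left() / scale), int(r.top() / scale),
          int(r.right() / scale), int(r.bottom() / scale)) for r in rects]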
Tip: The biggest bottleneck you'll face in the landmark detection pipeline is the HOG based face detector in dlib, which is pretty slow. You can replace it with Haar cascades or the SSD based face detector for faster performance.
Summary:
Let’s wrap up, in this tutorial we went over a number of algorithms and techniques in dlib.
We started with installation, moved on to face detection and landmark prediction, and learned to build a number of applications using landmark detection. We also looked at other techniques like correlation tracking and facial recognition.
We also learned that you can train your own landmark detectors and object detectors with dlib.
At the end we learned some nice optimizations that we can do with our landmark predictor.
Final Tip: I know most of you won't be able to go over all the tutorials linked here in a single day, so I would recommend that you save and bookmark this page and tackle a single problem at a time. Only when you've understood a certain technique should you move on to the next.
It goes without saying that Dlib is a must learn tool for serious computer vision practitioners out there.
I hope you enjoyed this tutorial and found it useful. If you have any questions, feel free to ask them in the comments and I'll happily address them.
Ready to seriously dive into State of the Art AI & Computer Vision? Then Sign up for these premium Courses by Bleed AI
Wouldn’t it be cool if you could just wave a pen in the air to draw something virtually and it actually draws it on the screen? It could be even more interesting if we didn’t use any special hardware to actually achieve this, just plain simple computer vision would do, in fact, we wouldn’t even need to use machine learning or deep learning to achieve this.
Here's a demo of the application that we will build.