Generating DeepFakes from a Single Image in Minutes

By Rizwan Naeem and Taha Anwar

On August 1, 2022

In this tutorial, we will learn how to manipulate facial expressions and create a DeepFake video out of a static image using the famous First-Order Motion Model. Yes, you heard that right, we just need a single 2D image of a person to create the DeepFake video.

Excited yet? … not that much? Well, what if I told you that the whole tutorial is actually on Google Colab, so you don’t need to worry about installation or GPUs to run it; everything is already configured.

And you know what the best part is?

Utilizing the Colab notebook that you will get in this tutorial, you can generate DeepFakes in a matter of seconds; yes, seconds, not weeks, not days, not hours, but seconds.

What is a DeepFake?

The term DeepFake is a combination of two words: Deep refers to the technology responsible for generating the content, Deep Learning, and Fake refers to the falsified content. The technology generates synthetic media by either replacing existing content or synthesizing new content (which can be video or even audio).

Below you can see the results on a few sample images:

This feels like putting your own words in a person’s mouth but on a whole new level.

Also, you may have noticed in the results above that we are generating the output video on the whole frame/image, not just on the face ROI, as is normally done.

First-Order Motion Model

We will be using the aforementioned First-Order Motion Model, so let’s start by understanding what it is and how it works.

The term First-Order Motion refers to a change in luminance over space and time, and the first-order motion model utilizes this change to capture motion in the source video (also known as the driving video). 

The framework is composed of two main components: motion estimation (which predicts a dense motion field) and image generation (which predicts the resultant video). You don’t have to worry about the technical details of these modules to use this model; if you are not a computer vision practitioner, feel free to skip the paragraph below.

The Motion Extractor module uses an unsupervised keypoint detector to get the relevant keypoints from the source image and a driving video frame. A local affine transformation is calculated with respect to the frame from the driving video. A Dense Motion Network then generates an occlusion map and a dense optical flow, which are fed into the Generator Module alongside the source image. The Generator Module generates the output frame, replicating the relevant motion of the driving video’s frame onto the source image.

This approach can also be used to manipulate faces, human bodies, and even animated characters, given that the model is trained on a set of videos of similar object categories.

Now that we have gone through the prerequisite theory and implementation details of the approach we will be using, let’s dive into the code.

Download code:


• Step 1: Setup the environment
  • Step 1.1: Clone the repositories
  • Step 1.2: Install the required Modules
• Step 2: Prepare a driving video
  • Step 2.1: Record a video from the webcam
  • Step 2.2: Crop the face from the recorded video
• Step 3: Prepare a source image
  • Step 3.1: Detect the face
  • Step 3.2: Align and crop the face
• Step 4: Create the DeepFake
  • Step 4.1: Download the First-Order Motion Model
  • Step 4.2: Load the source image and the driving video (Face cropped)
  • Step 4.3: Generate the video
  • Step 4.4: Embed the manipulated face into the source image
• Step 5: Add audio (of the driving video) to the DeepFake output video
• Conclusion

Alright, let’s get started.

        Step 1: Setup the environment

        In the first step, we will set up an environment that is required to use the First-Order Motion model.

        Step 1.1: Clone the repositories

        Clone the official First-Order-Model repository.
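A minimal Colab cell for this step might look as follows (the URL is the paper authors’ official repository):

```python
# Clone the official First-Order Motion Model repository.
!git clone https://github.com/AliaksandrSiarohin/first-order-model.git
```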

        Step 1.2: Install the required Modules

        Install helper modules that are required to perform the necessary pre- and post-processing.

        Import the required libraries.
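A sketch of the setup cell; the exact package list is an assumption based on the libraries used throughout this tutorial (the cloned repo also ships its own requirements.txt):

```python
# Helper modules for pre/post-processing (assumed selection).
!pip install mediapipe imageio-ffmpeg

import cv2
import imageio
import numpy as np
import mediapipe as mp
import matplotlib.pyplot as plt
```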

        Step 2: Prepare a driving video

        In this step, we will create a driving video and will make it ready to be passed into the model.

        Step 2.1: Record a video from the webcam

        Create a function record_video() that can access the webcam utilizing JavaScript.

Remember that Colab is a web IDE that runs entirely on the cloud, which is why JavaScript is needed to access the system webcam.
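Below is a hedged sketch of what such a record_video() function can look like. It is the generic Colab/MediaRecorder pattern, not necessarily the exact code shipped with the notebook, and the file name and duration are placeholders:

```python
from IPython.display import display, Javascript
from google.colab.output import eval_js
from base64 import b64decode

RECORD_JS = """
async function recordVideo(timeMs) {
  const stream = await navigator.mediaDevices.getUserMedia({video: true, audio: true});
  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.ondataavailable = e => chunks.push(e.data);
  recorder.start();
  await new Promise(resolve => setTimeout(resolve, timeMs));
  recorder.stop();
  await new Promise(resolve => recorder.onstop = resolve);
  stream.getTracks().forEach(track => track.stop());
  // Read the recording as a base64 data URL so Python can decode it.
  const reader = new FileReader();
  reader.readAsDataURL(new Blob(chunks));
  await new Promise(resolve => reader.onloadend = resolve);
  return reader.result;
}
"""

def record_video(video_path='recorded_video.webm', duration_ms=10000):
    # Run the recorder in the browser tab and pull the bytes back into Python.
    display(Javascript(RECORD_JS))
    data_url = eval_js(f'recordVideo({duration_ms})')
    with open(video_path, 'wb') as f:
        f.write(b64decode(data_url.split(',')[1]))
```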

Now utilize the record_video() function created above to record a video. Click the recording button; the browser will then ask for permission to access the webcam and microphone (if you have not already allowed these). After allowing, the video will start recording and will be saved to disk after a few seconds. Please make sure to have a neutral facial expression at the start of the video to get the best DeepFake results.

        You can also use a pre-recorded video if you want, by skipping this step and saving that pre-recorded video at the video_path.

The video is saved, but there is an issue: the video is just a set of frames with no FPS and duration information, which can cause problems later on. So before proceeding further, resolve the issue by utilizing the FFMPEG command.
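A sketch of such a cell; the file names and the 30 FPS target are assumptions:

```python
# Re-encode the raw browser recording so the container carries proper
# FPS and duration metadata.
!ffmpeg -y -i recorded_video.webm -r 30 video.mp4
```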

        Step 2.2: Crop the face from the recorded video

        Crop the face from the video by utilizing the script provided in the First-Order-Model repository.

The script will generate an FFMPEG command that we can use to align and crop the face region of interest after resizing it to 256x256. Note that it does not print any FFMPEG command if it fails to detect the face in the video.

        Utilize the FFMPEG command generated by the script to create the desired video.
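A sketch of both cells. The crop-video.py script ships with the cloned repo (it needs the face_alignment package installed); the second command only illustrates the shape of what the script prints, with made-up crop numbers:

```python
# Ask the repo's helper script to suggest a crop command for our video.
!python first-order-model/crop-video.py --inp video.mp4

# The printed command looks roughly like this (numbers are illustrative):
!ffmpeg -i video.mp4 -ss 0.0 -t 10.0 -filter:v "crop=480:480:80:20, scale=256:256" crop.mp4
```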

        Now that the cropped face video is stored in the disk, display it to make sure that we have extracted exactly what we desired.

        Perfect! The driving video looks good. Now we can start working on a source image.

Step 3: Prepare a source image

In this step, we will make the source image ready to be passed into the model.

        Download the Image

        Download the image that we want to pass to the First-Order Motion Model utilizing the wget command.
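For example (the URL below is a placeholder; substitute the image you want to animate):

```python
# Hypothetical URL — replace with your own source image link.
!wget -O source_image.jpg "https://example.com/source_image.jpg"
```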

        Load the Image

        Read the image using the function cv2.imread() and display it utilizing the matplotlib library.
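A minimal load-and-display cell might look like this:

```python
source_image = cv2.imread('source_image.jpg')

plt.figure(figsize=[5, 5])
# OpenCV reads in BGR order; matplotlib expects RGB.
plt.imshow(cv2.cvtColor(source_image, cv2.COLOR_BGR2RGB))
plt.axis('off')
plt.show()
```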

        Note: In case you want to use a different source image, make sure to use an image of a person with neutral expressions to get the best results.

        Step 3.1: Detect the face

Similar to the driving video, we can’t pass the whole source image into the First-Order Motion Model; we have to crop the face from the image and then pass the face image into the model. For this, we will need a face detector to get the face bounding box coordinates, and we will utilize Mediapipe’s Face Detection solution.

        Initialize the Mediapipe Face Detection Model

To use Mediapipe’s Face Detection solution, initialize the face detection class mp.solutions.face_detection.FaceDetection() with the arguments explained below (a minimal initialization is sketched after the list):

        • model_selection – It is an integer index ( i.e., 0 or 1 ). When set to 0, a short-range model is selected that works best for faces within 2 meters from the camera, and when set to 1, a full-range model is selected that works best for faces within 5 meters. Its default value is 0.
• min_detection_confidence – It is the minimum detection confidence ([0.0, 1.0]) required to consider the face-detection model’s prediction successful. Its default value is 0.5 (i.e., 50%), which means that all detections with prediction confidence less than 0.5 are ignored by default.
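A minimal initialization using the defaults discussed above:

```python
mp_face_detection = mp.solutions.face_detection

# Short-range model, 50% minimum confidence.
face_detector = mp_face_detection.FaceDetection(
    model_selection=0, min_detection_confidence=0.5)
```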

        Create a function to detect face

        Create a function detect_face() that will utilize the Mediapipe’s Face Detection Solution to detect a face in an image and will return the bounding box coordinates of the detected face.

To perform the face detection, pass the image (in RGB format) into the loaded face detection model using its process() function. The output object returned will have an attribute detections that contains, for each face in the image, a bounding box and six key points.

        Note that the bounding boxes are composed of xmin and width (both normalized to [0.0, 1.0] by the image width) and ymin and height (both normalized to [0.0, 1.0] by the image height). Ignore the face key points for now as we are only interested in the bounding box coordinates.

        After performing the detection, convert the bounding box coordinates back to their original scale utilizing the image width and height. Also draw the bounding box on a copy of the source image using the function cv2.rectangle().
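Putting the pieces together, a sketch of detect_face() could look like the following; the drawing and return conventions are assumptions:

```python
def detect_face(image, display=True):
    """Detect a face and return its bounding box in pixel coordinates."""
    height, width, _ = image.shape
    results = face_detector.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not results.detections:
        return None
    # Mediapipe returns coordinates normalized to [0.0, 1.0]; rescale to pixels.
    box = results.detections[0].location_data.relative_bounding_box
    xmin, ymin = int(box.xmin * width), int(box.ymin * height)
    box_width, box_height = int(box.width * width), int(box.height * height)
    if display:
        annotated = image.copy()
        cv2.rectangle(annotated, (xmin, ymin),
                      (xmin + box_width, ymin + box_height), (0, 255, 0), 2)
        plt.imshow(cv2.cvtColor(annotated, cv2.COLOR_BGR2RGB))
        plt.axis('off'); plt.show()
    return xmin, ymin, box_width, box_height
```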

        Utilize the detect_face() function created above to detect the face in the source image and display the results.

Nice! Face detection is working perfectly.

        Step 3.2: Align and crop the face

Another very important preprocessing step is face alignment on the source image. Make sure that the face is properly aligned in the source image; otherwise, the model can generate weird/funny output results.

To align the face in the source image, first detect the 468 facial landmarks using Mediapipe’s Face Mesh solution, then extract the eye corner and nose tip landmarks to calculate the face orientation, and finally rotate the image accordingly to align the face.

        Initialize the Face Landmarks Detection Model

To use Mediapipe’s Face Mesh solution, initialize the face mesh class mp.solutions.face_mesh.FaceMesh() with the arguments explained below:

• static_image_mode – It is a boolean value. If set to False, the solution treats the input images as a video stream: it will try to detect faces in the first input images and, upon a successful detection, further localizes the face landmarks. In subsequent images, once all max_num_faces faces are detected and the corresponding face landmarks are localized, it simply tracks those landmarks without invoking another detection until it loses track of any of the faces. This reduces latency and is ideal for processing video frames. If set to True, face detection runs on every input image, which is ideal for processing a batch of static, possibly unrelated, images. Its default value is False.
        • max_num_faces – It is the maximum number of faces to detect. Its default value is 1.
• refine_landmarks – It is a boolean value. If set to True, the solution further refines the landmark coordinates around the eyes and lips, and outputs additional landmarks around the irises by applying the Attention Mesh Model. Its default value is False.
        • min_detection_confidence – It is the minimum detection confidence ([0.0, 1.0]) required to consider the face-detection model’s prediction correct. Its default value is 0.5 which means that all the detections with prediction confidence less than 50% are ignored by default.
        • min_tracking_confidence – It is the minimum tracking confidence ([0.0, 1.0]) from the landmark-tracking model for the face landmarks to be considered tracked successfully, or otherwise face detection will be invoked automatically on the next input image, so increasing its value increases the robustness, but also increases the latency. It is ignored if static_image_mode is True, where face detection simply runs on every image. Its default value is 0.5.

We will be working with images only, so we will have to set static_image_mode to True. We will also define the indexes of the eye corner and nose tip landmarks that we need to extract, as in the sketch below.
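A minimal sketch. The specific landmark indexes are a common choice on Mediapipe’s canonical face mesh topology and are an assumption, not necessarily the notebook’s exact selection:

```python
mp_face_mesh = mp.solutions.face_mesh

# Images only, single face, default confidences.
face_mesh = mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1,
                                  min_detection_confidence=0.5)

# Assumed index selection on the canonical Face Mesh topology.
LEFT_EYE_CORNERS  = [33, 133]   # outer and inner corner of the left eye
RIGHT_EYE_CORNERS = [362, 263]  # inner and outer corner of the right eye
NOSE_TIP = 1
```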

        Create a function to extract eyes and nose landmarks

Create a function extract_landmarks() that will utilize Mediapipe’s Face Mesh solution to detect the 468 facial landmarks and then extract the left and right eye corner landmarks and the nose tip landmark.

To perform the face landmarks detection, pass the image to the face landmarks detection machine learning pipeline using its process() function. But first, convert the image from BGR to RGB format using the function cv2.cvtColor(), as OpenCV reads images in BGR format while the ML pipeline expects its input images in RGB.

        The machine learning pipeline outputs an object that has an attribute multi_face_landmarks that contains the 468 3D facial landmarks for each detected face in the image. Each landmark has:

        • x – It is the landmark x-coordinate normalized to [0.0, 1.0] by the image width.
        • y – It is the landmark y-coordinate normalized to [0.0, 1.0] by the image height.
        • z – It is the landmark z-coordinate normalized to roughly the same scale as x. It represents the landmark depth with the center of the head being the origin, and the smaller the value is, the closer the landmark is to the camera.

After performing face landmarks detection on the image, convert the landmarks’ x and y coordinates back to their original scale utilizing the image width and height, and then extract the required landmarks utilizing the indexes we specified earlier. Also draw the extracted landmarks on a copy of the source image (e.g., with cv2.circle()), just for visualization purposes.
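A hedged sketch of extract_landmarks(); the return format is my own convention:

```python
def extract_landmarks(image, display=True):
    """Return the eye corner and nose tip landmarks in pixel coordinates."""
    height, width, _ = image.shape
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return None
    face = results.multi_face_landmarks[0].landmark
    def to_pixels(index):
        # x and y are normalized to [0.0, 1.0]; rescale to the image size.
        return np.array([face[index].x * width, face[index].y * height], dtype=int)
    landmarks = {'left_eye':  [to_pixels(i) for i in LEFT_EYE_CORNERS],
                 'right_eye': [to_pixels(i) for i in RIGHT_EYE_CORNERS],
                 'nose':      to_pixels(NOSE_TIP)}
    if display:
        annotated = image.copy()
        for point in landmarks['left_eye'] + landmarks['right_eye'] + [landmarks['nose']]:
            cv2.circle(annotated, (int(point[0]), int(point[1])), 3, (0, 255, 0), -1)
        plt.imshow(cv2.cvtColor(annotated, cv2.COLOR_BGR2RGB))
        plt.axis('off'); plt.show()
    return landmarks
```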

        Now we will utilize the extract_landmarks() function created above to detect and extract the eyes and nose landmarks and visualize the results.

Cool! It is accurately extracting the required landmarks.

        Create a function to calculate eyes center

        Create a function calculate_eyes_center() that will find the left and right eyes center landmarks by utilizing the eyes corner landmarks that we had extracted in the extract_landmarks() function created above.
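Since each eye’s center is just the mean of its two corner points, the function is tiny:

```python
def calculate_eyes_center(landmarks):
    """Average each eye's two corner points to get the eye centers."""
    left_center = np.mean(landmarks['left_eye'], axis=0).astype(int)
    right_center = np.mean(landmarks['right_eye'], axis=0).astype(int)
    return left_center, right_center
```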

Use the extract_landmarks() and the calculate_eyes_center() functions to calculate the central landmarks of the left and right eyes on the source image.

        Working perfectly fine!

        Create a function to rotate images

Create a function rotate_image() that will simply rotate an image in a counter-clockwise direction by a specified angle without losing any portion of the image.
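A standard way to rotate without cropping is to expand the canvas to the rotated image’s bounding box; a sketch:

```python
def rotate_image(image, angle):
    """Rotate counter-clockwise by `angle` degrees, expanding the canvas
    so no part of the image is cut off."""
    height, width = image.shape[:2]
    center = (width / 2, height / 2)
    matrix = cv2.getRotationMatrix2D(center, angle, 1.0)
    # Compute the bounding dimensions of the rotated image.
    cos, sin = abs(matrix[0, 0]), abs(matrix[0, 1])
    new_w = int(height * sin + width * cos)
    new_h = int(height * cos + width * sin)
    # Shift the rotation center to the center of the new canvas.
    matrix[0, 2] += new_w / 2 - center[0]
    matrix[1, 2] += new_h / 2 - center[1]
    return cv2.warpAffine(image, matrix, (new_w, new_h))
```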

        Utilize the rotate_image() function to rotate the source image at an angle of 45 degrees.

Rotation looks good, but rotating the image by an arbitrary angle will not do us any good.

        Create a function to find the face orientation

        Create a function calculate_face_angle() that will find the face orientation, and then we will rotate the image accordingly utilizing the function rotate_image() created above, to appropriately align the face in the source image.

To find the face angle, first get the eyes and nose landmarks using the extract_landmarks() function, then pass these landmarks to the calculate_eyes_center() function to get the eye center landmarks, and use those to calculate the midpoint of the eyes, i.e., the center of the forehead. We will also use the detect_face() function created in the previous step to get the face bounding box coordinates, and then utilize those coordinates to find the center_pred point, i.e., the midpoint of the bounding box’s top-right and top-left coordinates.

And then finally, find the distances between the nose, center_of_forehead, and center_pred landmarks, as shown in the gif above, to calculate the face angle utilizing the famous cosine law.
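A sketch of the calculation; the sign convention at the end is an assumption (flip it if your angles come out mirrored):

```python
def calculate_face_angle(image):
    """Estimate the in-plane face tilt in degrees using the law of cosines."""
    landmarks = extract_landmarks(image, display=False)
    left_center, right_center = calculate_eyes_center(landmarks)
    center_of_forehead = (left_center + right_center) / 2
    nose = landmarks['nose'].astype(float)
    xmin, ymin, box_width, _ = detect_face(image, display=False)
    # Midpoint of the bounding box's top edge.
    center_pred = np.array([xmin + box_width / 2, ymin], dtype=float)
    # Triangle sides: `a` opposes the angle we want (the one at the nose).
    a = np.linalg.norm(center_of_forehead - center_pred)
    b = np.linalg.norm(nose - center_of_forehead)
    c = np.linalg.norm(nose - center_pred)
    cos_angle = (b ** 2 + c ** 2 - a ** 2) / (2 * b * c)
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    # Assumed sign convention: negative when the forehead leans left of the box center.
    return angle if center_of_forehead[0] > center_pred[0] else -angle
```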

Utilize the calculate_face_angle() function created above to find the face angle of the source image and display it.

        Face Angle: -8.50144759667417

        Now that we have the face angle, we can move on to aligning the face in the source image.

Create a function to align the face and crop the face region

Create a function align_crop_face() that will first utilize the function calculate_face_angle() to get the face angle, then rotate the image accordingly utilizing the rotate_image() function, and finally crop the face from the image utilizing the face bounding box coordinates (after scaling) returned by the detect_face() function. In the end, it will also resize the face image to 256x256, the size required by the First-Order Motion Model.
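A sketch that ties the previous helpers together; the extra return values (the rotated image and crop coordinates) are my own convention, kept to make the later embedding step easier:

```python
def align_crop_face(image, face_scale_factor=1.2):
    """Rotate the face upright, crop the (scaled) face box, resize to 256x256."""
    angle = calculate_face_angle(image)
    rotated = rotate_image(image, angle)
    xmin, ymin, w, h = detect_face(rotated, display=False)
    # Grow the box so the whole face (hair, chin) survives the crop.
    pad_w = int(w * (face_scale_factor - 1) / 2)
    pad_h = int(h * (face_scale_factor - 1) / 2)
    x1, y1 = max(xmin - pad_w, 0), max(ymin - pad_h, 0)
    x2 = min(xmin + w + pad_w, rotated.shape[1])
    y2 = min(ymin + h + pad_h, rotated.shape[0])
    face = cv2.resize(rotated[y1:y2, x1:x2], (256, 256))
    return face, rotated, (x1, y1, x2, y2)
```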

        Use the function align_crop_face() on the source image and visualize the results.

Make sure that the whole face is present in the cropped face ROI. Increase/decrease the face_scale_factor value if you are testing this colab on a different source image: increase it if the face is being cut off, and decrease it if the face ROI contains too much background.

I must say it’s looking good! All the preprocessing steps went as intended. But a post-processing step remains, needed after generating the output from the First-Order Motion Model.

        Remember that later on, we will have to embed the manipulated face back into the source image, so a function to restore the source image’s original state after embedding the output is also required.

        Create a function to restore the original source image

So now we will create a function restore_source_image() that will undo the rotation we applied to the image and remove the black borders that appeared after the rotation.
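One simple way to do this, sketched below, is to rotate back by the negative angle and center-crop to the original dimensions, since rotate_image() keeps the content centered on the expanded canvas:

```python
def restore_source_image(rotated_image, angle, original_shape):
    """Undo the alignment rotation and trim the black borders it introduced."""
    restored = rotate_image(rotated_image, -angle)
    orig_h, orig_w = original_shape[:2]
    # After rotating forward and back, the original content sits at the center.
    cy, cx = restored.shape[0] // 2, restored.shape[1] // 2
    y1, x1 = cy - orig_h // 2, cx - orig_w // 2
    return restored[y1:y1 + orig_h, x1:x1 + orig_w]
```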

Utilize the calculate_face_angle() and rotate_image() functions to create a rotated image, and then check whether restore_source_image() can restore the image’s original state by undoing the rotation and removing the black borders.

        Step 4: Create the DeepFake

Now that the source image and the driving video are ready, in this step we will create the DeepFake video.

        Step 4.1: Download the First-Order Motion Model

Now we will download the required pre-trained network from the checkpoints hosted on Yandex Disk. There are multiple options there, but since we are only interested in face manipulation, we will only download the vox-adv-cpk.pth.tar file.
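The checkpoint link lives in the repo’s README; the URL below is a placeholder, so copy the real vox-adv-cpk.pth.tar link before running:

```python
# Placeholder URL — substitute the real checkpoint link from the repo's README.
!wget -O vox-adv-cpk.pth.tar "https://example.com/vox-adv-cpk.pth.tar"
```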

        Create a function to display the results

        Create a function display_results() that will concatenate the source image, driving video, and the generated video together and will show the results.

        Step 4.2: Load source image and driving video (Face cropped)

        Load the pre-processed source image and the driving video and then display them utilizing the display_results() function created above.

        Step 4.3: Generate the video

Now that everything is ready, utilize the demo script from the repository cloned earlier to finally generate the DeepFake video. First, load the model file that was downloaded earlier along with the configuration file available in the cloned First-Order-Model repository. Then generate the video utilizing the demo.make_animation() function and display the results utilizing the display_results() function.
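A sketch of the generation cell; the input/output file names are assumptions carried over from the earlier steps, while load_checkpoints() and make_animation() are the functions the repo’s demo.py actually provides:

```python
import sys
sys.path.append('first-order-model')

from skimage.transform import resize
import demo  # provides load_checkpoints() and make_animation()

# Config file name assumed from the repo's config/ directory layout.
generator, kp_detector = demo.load_checkpoints(
    config_path='first-order-model/config/vox-adv-256.yaml',
    checkpoint_path='vox-adv-cpk.pth.tar')

# Both inputs must be 256x256 floats in [0, 1]; drop any alpha channel.
source = resize(imageio.imread('source_face.png'), (256, 256))[..., :3]
driving = [resize(frame, (256, 256))[..., :3]
           for frame in imageio.get_reader('crop.mp4')]

# relative=True transfers the driving video's relative motion instead of
# absolute keypoint positions, which generally looks more natural.
predictions = demo.make_animation(source, driving, generator, kp_detector,
                                  relative=True)
imageio.mimsave('generated.mp4',
                [(frame * 255).astype(np.uint8) for frame in predictions], fps=30)
```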

        Step 4.4: Embed the manipulated face into the source image

        Create a function embed_face() that will simply insert the manipulated face in the generated video back to the source image.
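A sketch of embed_face() for a single frame, following the conventions of the helpers above (loop it over the generated frames to build the full video; it assumes 8-bit BGR inputs):

```python
def embed_face(generated_frame, aligned_image, crop_coords, angle, original_shape):
    """Paste one generated face frame back into the rotated source image,
    then undo the alignment rotation."""
    x1, y1, x2, y2 = crop_coords
    canvas = aligned_image.copy()
    canvas[y1:y2, x1:x2] = cv2.resize(generated_frame, (x2 - x1, y2 - y1))
    return restore_source_image(canvas, angle, original_shape)
```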

        Now let’s utilize the function embed_face() to insert the manipulated face into the source image.

        The video is now stored on the disk, so now we can display it to see what the final result looks like.

        Step 5: Add Audio (of the Driving Video) to the DeepFake Output Video

        In the last step, first copy the audio from the driving video into the generated video and then download the video on the disk.
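A sketch using ffmpeg stream mapping plus Colab’s file download helper; the file names are assumptions:

```python
# Take the video stream from the generated clip and the audio stream
# from the original driving-video recording.
!ffmpeg -y -i deepfake.mp4 -i recorded_video.webm -map 0:v -map 1:a -c:v copy -shortest deepfake_with_audio.mp4

from google.colab import files
files.download('deepfake_with_audio.mp4')
```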

        The video should have started downloading in your system.

        Bonus: Generate more examples

        Now let’s try to generate more videos with different source images.

        And here are a few more results on different sample images:

        After Johnny Depp, comes Mark Zuckerberg sponsoring Bleed AI.

        And last but not least, of course, comes someone from the Marvel Universe, yes it’s Dr. Strange himself asking you to visit Bleed AI.

        You can now share these videos that you have generated on social media. Make sure that you mention that it is a DeepFake video in the post’s caption.


One of the current limitations of this approach appears when the person moves too much in the driving video. The final results will be terrible, because we only get the face ROI video from the First-Order Motion Model and then embed that face video into the source image using image-processing techniques; we can’t move the body of the person in the source image when the face moves in the generated face ROI video. So for driving videos in which the person moves a lot, you can skip the face-embedding part, or train a First-Order Motion Model to manipulate the whole body instead of just the face. I might cover that in a future post.

        A Message on Deepfakes by Taha

These days, it’s not a difficult job to create a DeepFake video; as you can see, anyone with access to the Colab notebook (provided when you download the code) can generate DeepFakes in minutes.

Now, although these fakes are realistic, you should still be able to tell the manipulated videos from real ones fairly easily; this is because the model is particularly designed for fast inference. There are other approaches that can take hours or days to render DeepFakes, but those are very hard to distinguish from real footage.

The model I used today is not new; it has already been out for a few years. (Fun fact: we have actually been working on this blog post since the middle of last year, so yes, it got delayed by more than a year.) Anyway, the point is that DeepFake technology is evolving fast, and this leads to two things:

1) Easier accessibility: More and more high-level tools are coming out, which lowers the barrier to entry so that non-technical people can use these tools to generate DeepFakes. I’m sure you know of some mobile apps that let ordinary users generate these.

2) Better algorithms: The algorithms are getting better and better, to the point where you will have a lot of difficulty telling a DeepFake from a real video. Today, professional DeepFake creators actually export the output of a DeepFake model to a video editor and remove or correct bad frames so that people cannot easily figure out that it’s a fake. And it makes sense: if the model generates a 10-second (30 FPS) clip, not all 300 output frames are going to be perfect.

Obviously, DeepFake tech has many harmful effects; it has been used to generate fake news, spread propaganda, and create pornography. But it also has its creative use cases in the entertainment industry (check Wombo) and in the content industry; just check out the amazing work being done there and how it has helped people and companies.

One thing you might wonder: in these times, how should you equip yourself to spot DeepFakes?

Well, there are certainly some things you can do to better prepare yourself. For one, you can learn a thing or two about digital forensics and how to spot fakes from anomalies, pixel manipulations, metadata, etc.

Even as a non-tech consumer, you can do a lot to identify a fake video by fact-checking and finding its original source. For example, if you find your country’s president talking about starting a nuclear war with North Korea on some random person’s Twitter, then it’s probably fake no matter how real the scene looks. An excellent resource to learn about fact-checking is the YouTube series Navigating Digital Information by CrashCourse. Do check it out.


        Hire Us

        Let our team of expert engineers and managers build your next big project using Bleeding Edge AI Tools & Technologies


Join My Course: Computer Vision For Building Cutting Edge Applications

The only course out there that goes beyond basic AI applications and teaches you how to create next-level apps that utilize physics, deep learning, classical image processing, and hand and body gestures. Don’t miss your chance to level up and take your career to new heights.

        You’ll Learn about:

• Creating GUI interfaces for Python AI scripts.
• Creating .exe DL applications
• Using a physics library in Python & integrating it with AI
• Advanced image processing skills
• Advanced gesture recognition with Mediapipe
• Task automation with AI & CV
• Training an SVM machine learning model.
• Creating & cleaning an ML dataset from scratch.
• Training DL models & how to use CNNs & LSTMs.
• Creating 10 advanced AI/CV applications
• & More

Whether you’re a seasoned AI professional or someone just looking to start out in AI, this is the course that will teach you how to architect and build complex, real-world, thrilling AI applications.

        Ready to seriously dive into State of the Art AI & Computer Vision?
        Then Sign up for these premium Courses by Bleed AI

Designing Advanced Image Filters in OpenCV | Creating Instagram Filters – Pt 3/3

This is the last tutorial in our 3-part Creating Instagram Filters series. In this tutorial, you will learn to create 10 very interesting and cool Instagram-like filter effects on images and videos. The filters covered are the Warm Filter, Cold Filter, Gotham Filter, Grayscale Filter, Sepia Filter, Pencil Sketch Filter, Sharpening Filter, Detail Enhancing Filter, Invert Filter, and Stylization Filter.

Working With Lookup Tables & Applying Color Filters on Images & Videos | Creating Instagram Filters – Pt 2/3

This is the second tutorial in our 3-part Creating Instagram Filters series. In this tutorial, you will learn what lookup tables are, why they are preferred, and their real-life use cases, and then utilize lookup tables to create some spectacular photo effects called Color Filters, a.k.a. Tone Effects. You will then create a user interface similar to the Instagram filter selection screen using mouse events & trackbars in OpenCV.

