Table of Contents
How Do I Get Started?
You’re interested in Computer Vision, Deep Learning, and OpenCV…but you don’t know how to get started.
Follow these steps to get OpenCV configured/installed on your system, learn the fundamentals of Computer Vision, and graduate to more advanced topics, including Deep Learning, Face Recognition, Object Detection, and more!
Before you can start learning OpenCV you first need to install the OpenCV library on your system.
By far the easiest way to install OpenCV is via pip:
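As a rough sketch (the package names below are the standard PyPI ones; opencv-python omits the contrib modules), the pip install plus a quick import check looks like this:

```python
# Assumes you installed the bindings via pip, e.g.:
#   pip install opencv-contrib-python
import cv2

# confirm the bindings import cleanly and report the installed version
print(cv2.__version__)
```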
However, for the full, optimized install I would recommend compiling from source:
- How to install OpenCV 4 on Ubuntu
- Install OpenCV 4 on macOS
- Install OpenCV 4 on Raspberry Pi 4 and Raspbian Buster
Compiling from source will take longer and requires basic Unix command line and Operating System knowledge (but is worth it for the full install).
If you’re brand new to OpenCV and/or Computer Science in general, I would recommend you follow the pip install. Otherwise, you can compile from source.
If you run into any problems compiling from source you should revert to the pip install method.
Please note that I do not support Windows.
I do not recommend Windows for Computer Vision, Deep Learning, and OpenCV.
Furthermore, I have not used the Windows OS in over 10 years, so I cannot provide support for it.
If you are using Windows and want to install OpenCV, be sure to follow the official OpenCV documentation.
Once you have OpenCV installed on your Windows system all code examples included in my tutorials should work (just understand that I cannot provide support for them if you are using Windows).
If you are struggling to configure your development environment be sure to take a look at my book, Practical Python and OpenCV, which includes a pre-configured VirtualBox Virtual Machine.
All you need to do is install VirtualBox, download the VM file, import it and load the pre-configured development environment.
And best of all, this VM will work on Linux, macOS, and Windows!
Command line arguments aren’t a Computer Vision concept but they are used heavily here on PyImageSearch and elsewhere online.
If you intend on studying advanced Computer Science topics such as Computer Vision and Deep Learning then you need to understand command line arguments:
Take the time now to understand them as they are a crucial Computer Science topic that cannot, under any circumstance, be overlooked.
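To make the idea concrete, here is a minimal sketch of parsing command line arguments with Python’s built-in argparse module (the script name and --image switch are purely illustrative):

```python
# parse_args_example.py -- a hypothetical script name
import argparse

# build the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to the input image")
args = vars(ap.parse_args())

# the parsed value is now available via the dictionary key
print("You supplied the image path: {}".format(args["image"]))
```

You would then run the script as `python parse_args_example.py --image path/to/image.jpg`.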
Congrats, you are now ready to learn the fundamentals of Computer Vision and the OpenCV library!
This OpenCV Tutorial will teach you the basics of the OpenCV library, including:
- Loading an image
- Accessing individual pixels
- Array/Region of Interest (ROI) cropping
- Resizing images
- Rotating an image
- Edge detection
- Thresholding
- Drawing lines, rectangles, circles, and text on an image
- Masking and bitwise operations
- Contour and shape detection
- …and more!
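To give you a feel for the fundamentals listed above, here is a minimal sketch touching a few of them (the file name example.jpg is a placeholder):

```python
import cv2

# load an image from disk and grab its dimensions
image = cv2.imread("example.jpg")
(h, w) = image.shape[:2]

# crop the top-left quadrant (array/ROI cropping) and resize the image
roi = image[0:h // 2, 0:w // 2]
resized = cv2.resize(image, (300, 300))

# convert to grayscale and apply edge detection
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edged = cv2.Canny(gray, 30, 150)

# draw a rectangle and some text on the original image
cv2.rectangle(image, (10, 10), (100, 100), (0, 255, 0), 2)
cv2.putText(image, "OpenCV", (10, 130), cv2.FONT_HERSHEY_SIMPLEX,
    0.7, (0, 255, 0), 2)

cv2.imshow("Edged", edged)
cv2.waitKey(0)
```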
Additionally, if you want a consolidated review of the OpenCV library that will get you up to speed in less than a weekend, you should take a look at my book, Practical Python and OpenCV.
At this point you have learned the basics of OpenCV and have a solid foundation to build upon.
Take the time now to follow these guides and practice building mini-projects with OpenCV.
To start, I highly recommend you follow this guide on debugging common “NoneType” errors with OpenCV:
You’ll see these types of errors when (1) your path to an input image is incorrect, resulting in cv2.imread returning None, or (2) OpenCV cannot properly access your video stream.
Trust me, at some point in your Computer Vision/OpenCV career you’ll see this error — take the time now to read the article above to learn how to diagnose and resolve the error.
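The defensive pattern itself is simple — cv2.imread does not raise an exception on a bad path, it silently returns None, so check for it explicitly (the image path below is a placeholder):

```python
import cv2

image = cv2.imread("path/to/your/image.jpg")

# cv2.imread returns None (rather than raising an error) if the path is wrong
if image is None:
    raise ValueError("Could not load the image -- double-check the file path")
```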
The following tutorials will help you extend your OpenCV knowledge and build on the fundamentals:
- Rotate images (correctly) with OpenCV and Python
- OpenCV and Python Color Detection
- Montages with OpenCV
- Super fast color transfer between images
Contours are a very basic image processing technique — but they are also very powerful if you use them correctly.
The following tutorials will teach you the basics of contours with OpenCV:
- OpenCV center of contour
- Finding Shapes in Images using Python and OpenCV
- Finding extreme points in contours with OpenCV
- Sorting Contours using Python and OpenCV
- OpenCV shape detection
- Determining object color with OpenCV
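As a rough sketch of the basic contour workflow covered in the tutorials above (example.png is a placeholder; the imutils package, installable via pip, smooths over the differing cv2.findContours return signatures across OpenCV versions):

```python
import cv2
import imutils

# load the image, convert to grayscale, and threshold it
image = cv2.imread("example.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)[1]

# find the external contours in the thresholded image
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)

for c in cnts:
    # compute the center of each contour from its moments and draw it
    M = cv2.moments(c)
    if M["m00"] > 0:
        cX = int(M["m10"] / M["m00"])
        cY = int(M["m01"] / M["m00"])
        cv2.drawContours(image, [c], -1, (0, 255, 0), 2)
        cv2.circle(image, (cX, cY), 5, (255, 255, 255), -1)

cv2.imshow("Contours", image)
cv2.waitKey(0)
```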
From there, follow this guide to build a document scanner using OpenCV:
This tutorial extends the document scanner to create an automatic standardized test (i.e., bubble/multiple choice) scanner and grader:
Additionally, I recommend that you take these projects and extend them in some manner, enabling you to gain additional practice.
As you work through each tutorial, keep a notepad handy and jot down inspiration as it comes to you.
For example:
- How might you apply the algorithm covered in a tutorial to your particular dataset of images?
- What would you change if you wanted to filter out specific objects using contours?
Make notes to yourself and come back and try to solve these mini-projects later.
Practice makes perfect and Computer Vision/OpenCV are no different.
After working through the tutorials in Step #4 (and ideally extending them in some manner), you are now ready to apply OpenCV to more intermediate projects.
My first suggestion is to learn how to access your webcam using OpenCV.
The following tutorial will enable you to access your webcam in a threaded, efficient manner:
Again, refer to the resolving NoneType errors post if you cannot access your webcam.
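Here is a minimal sketch of threaded webcam access, assuming the imutils package (pip install imutils), whose VideoStream class polls frames in a background thread:

```python
import time
import cv2
from imutils.video import VideoStream

# start the threaded video stream and let the camera sensor warm up
vs = VideoStream(src=0).start()
time.sleep(2.0)

while True:
    frame = vs.read()
    if frame is None:
        break
    cv2.imshow("Frame", frame)
    # press "q" to quit the loop
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cv2.destroyAllWindows()
vs.stop()
```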
Next, you should learn how to write to video using OpenCV as well as capture “key events” and log them to disk as video clips:
Let’s now access a video stream and combine it with contour techniques to build a real-world project:
One of my favorite algorithms to teach computer vision is image stitching:
- OpenCV panorama stitching
- Real-time panorama and image stitching with OpenCV
- Image Stitching with OpenCV and Python
These algorithms utilize keypoint detection, local invariant descriptor extraction, and keypoint matching to build a program capable of stitching multiple images together, resulting in a panorama.
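For a taste of what that looks like in code, here is a rough sketch using OpenCV’s built-in Stitcher class, which handles the keypoint matching and blending internally (the image file names are placeholders):

```python
import cv2

# load the overlapping input images (placeholder paths)
images = [cv2.imread(p) for p in ["left.jpg", "middle.jpg", "right.jpg"]]

# create the stitcher and stitch the images into a panorama
stitcher = cv2.Stitcher_create()
(status, panorama) = stitcher.stitch(images)

if status == 0:  # a status of 0 indicates success
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("Stitching failed with status code {}".format(status))
```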
There is a dedicated Optical Character Recognition (OCR) section later in this guide, but it doesn’t hurt to gain some experience with it now:
You should also gain some experience using image gradients:
- Detecting Barcodes in Images with Python and OpenCV
- Real-time barcode detection in video with Python and OpenCV
Eventually, you’ll want to build an OpenCV project that can stream your output to a web browser — this tutorial will show you how to do exactly that:
The following guides are miscellaneous tutorials that I recommend you work through to gain experience working with various Computer Vision algorithms:
- Blur detection with OpenCV
- Simple Scene Boundary/Shot Transition Detection with OpenCV
- Seam carving with OpenCV, Python, and scikit-image
- OpenCV Saliency Detection
Again, keep a notepad handy as you work through these projects.
Practice extending them in some manner to gain additional experience.
Congratulations, you have now learned the fundamentals of Image Processing, Computer Vision, and OpenCV!
The Computer Vision field is comprised of subfields (i.e., niches), including Deep Learning, Medical Computer Vision, Face Applications, and many others.
Many of these fields overlap and intertwine as well — they are not mutually exclusive.
That said, as long as you follow this page you’ll always have the proper prerequisites for a given niche, so don’t worry!
Most readers jump immediately into Deep Learning, as it’s one of the most popular fields in Computer Science.
Where to Next?
If you need additional help learning the basics of OpenCV, I would recommend you read my book, Practical Python and OpenCV.
This book is meant to be a gentle introduction to the world of Computer Vision and Image Processing through the OpenCV library. And if you don’t know Python, don’t worry!
Since I explain every code example in the book line-by-line, thousands of PyImageSearch readers have used this book to not only learn OpenCV, but also Python at the same time!
If you’re looking for a more in-depth treatment of the Computer Vision field, I would instead recommend the PyImageSearch Gurus course.
The PyImageSearch Gurus course is similar to a college survey course in Computer Vision, but much more hands-on and practical (including well documented source code examples).
Otherwise, my personal recommendation would be to jump into the Deep Learning section — most PyImageSearch readers who are interested in Computer Vision are also interested in Deep Learning as well.
Deep Learning
Deep Learning algorithms are capable of obtaining unprecedented accuracy in Computer Vision tasks, including Image Classification, Object Detection, Segmentation, and more.
Follow these steps and you’ll have enough knowledge to start applying Deep Learning to your own projects.
Before you can apply Deep Learning to your projects, you first need to configure your Deep Learning development environment.
The following guides will help you install Keras, TensorFlow, OpenCV, and all other necessary CV and DL libraries you need to be successful when applying Deep Learning to your own projects:
- Ubuntu 18.04: Install TensorFlow and Keras for Deep Learning
- macOS: Install TensorFlow and Keras for Deep Learning
Again, I do not provide support for the Windows OS.
I do not recommend Windows for Computer Vision and Deep Learning.
Definitely consider using a Unix-based OS (i.e., Ubuntu, macOS, etc.) when building your Computer Vision and Deep Learning projects.
If you are struggling to configure your Deep Learning development environment, you can:
- Use my Pre-configured Amazon AWS deep learning AMI with Python
- Pick up a copy of my book, Deep Learning for Computer Vision with Python, which includes a VirtualBox Virtual Machine with all the DL and CV libraries you need pre-configured and pre-installed.
All you need to do is install VirtualBox, download the VM file, import it and load the pre-configured development environment.
And best of all, this VM will work on Linux, macOS, and Windows!
Provided that you have successfully configured your Deep Learning development environment, you can now move on to training your first Neural Network!
I recommend starting with this tutorial which will teach you the basics of the Keras Deep Learning library:
After that, you should read this guide on training LeNet, a classic Convolutional Neural Network that is both simple to understand and easy to implement:
Implementing LeNet by hand is often the “Hello, world!” of deep learning projects.
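As a point of reference, here is a sketch of one common LeNet-style variant in Keras (exact filter counts and activations vary between implementations, and the 28x28x1 input shape assumes MNIST-like data):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    # first CONV => RELU => POOL block
    Conv2D(20, (5, 5), padding="same", activation="relu",
        input_shape=(28, 28, 1)),
    MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),
    # second CONV => RELU => POOL block
    Conv2D(50, (5, 5), padding="same", activation="relu"),
    MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),
    # fully-connected head
    Flatten(),
    Dense(500, activation="relu"),
    Dense(10, activation="softmax"),
])

model.compile(optimizer="adam", loss="categorical_crossentropy",
    metrics=["accuracy"])
model.summary()
```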
Convolutional Neural Networks rely on a Computer Vision/Image Processing technique called convolution.
A CNN automatically learns kernels that are applied to the input images during the training process.
But what exactly are kernels and convolution? To answer that, you should read this tutorial:
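In the meantime, here is a quick bit of intuition: a rough sketch of convolving an image with a single hand-defined kernel in OpenCV. During training, a CNN learns many kernels like this automatically (the sharpening kernel and file name below are purely illustrative):

```python
import cv2
import numpy as np

image = cv2.imread("example.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# a 3x3 sharpening kernel defined by hand
sharpen = np.array([
    [0, -1, 0],
    [-1, 5, -1],
    [0, -1, 0]], dtype="float32")

# slide the kernel across the image (convolution) and display the result
output = cv2.filter2D(gray, -1, sharpen)
cv2.imshow("Sharpened", output)
cv2.waitKey(0)
```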
Now that you understand what kernels and convolution are, you should move on to this guide which will teach you how Keras utilizes convolution to build a CNN:
So far you’ve learned how to train CNNs on pre-compiled datasets — but what if you wanted to work with your own custom data?
But how are you going to train a CNN to accomplish a given task if you don’t already have a dataset of such images?
The short answer is you can’t — you need to gather your image dataset first:
- How to create a deep learning dataset using Google Images
- How to (quickly) build a deep learning image dataset
The Google Images method is fast and easy, but can also be a bit tedious at the same time.
If you are an experienced programmer you will likely prefer the Bing API method, as it’s “cleaner” and gives you more control over the process.
At this point you have used Step #4 to gather your own custom dataset.
Let’s now learn how to train a CNN on top of that data:
You’ll also want to refer to this guide which will give you additional practice training CNNs with Keras:
Along the way you should learn how to save and load your trained models, ensuring you can make predictions on images after your model has been trained:
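The mechanics are brief — here is a self-contained sketch (the tiny architecture is only a placeholder so the snippet runs on its own):

```python
import numpy as np
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense

# build and compile a trivial placeholder model
model = Sequential([
    Dense(4, activation="relu", input_shape=(8,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# serialize the (trained) model to disk
model.save("my_model.h5")

# ...then, in a later session, load it back and make predictions
loaded = load_model("my_model.h5")
print(loaded.predict(np.random.rand(2, 8)))
```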
So, you trained your own CNN from Step #5 — but your accuracy isn’t as good as you want it to be.
What now?
In order to obtain a highly accurate Deep Learning model, you need to tune your learning rate, the most important hyperparameter when training a Neural Network.
The following tutorial will teach you how to start training, stop training, reduce your learning rate, and continue training, a critical skill when training neural networks:
This guide will teach you about learning rate schedules and decay, a method that can be quickly implemented to slowly lower your learning rate when training, allowing it to descend into lower areas of the loss landscape, and ideally obtain higher accuracy:
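A standard step-based decay schedule can be wired up with Keras’ LearningRateScheduler callback; this is just a sketch, and the initial rate, drop factor, and interval are illustrative values:

```python
from tensorflow.keras.callbacks import LearningRateScheduler

def step_decay(epoch):
    init_lr = 1e-2     # initial learning rate
    factor = 0.5       # drop the rate by half...
    drop_every = 10    # ...every 10 epochs
    return init_lr * (factor ** (epoch // drop_every))

callbacks = [LearningRateScheduler(step_decay)]
# then pass the callback when training, e.g.:
# model.fit(trainX, trainY, epochs=50, callbacks=callbacks)
```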
You should also read about Cyclical Learning Rates (CLRs), a technique used to oscillate your learning rate between an upper and lower bound, enabling your model to break out of local minima:
But what if you don’t know what your initial learning rate should be?
Don’t worry, I have a simple method that will help you out:
If you haven’t already, you will run into two important terms in Deep Learning literature:
Generalization: The ability of your model to correctly classify images that are outside the training set used to train the model.
Your model is said to “generalize well” if it can correctly classify images that it has never seen before.
Generalization is absolutely critical when training a Deep Learning model.
Imagine if you were working for Tesla and needed to train a self-driving car application used to detect cars on the road.
Your model worked well on the training set…but when you evaluated it on the testing set you found that the model failed to detect the majority of cars on the road!
In such a situation we would say that your model “failed to generalize”.
To fix this problem you need to apply regularization.
Regularization: The term “regularization” is used to encompass all techniques used to (1) prevent your model from overfitting and (2) generalize well to your validation and testing sets.
Regularization techniques include:
- L2 regularization (also called weight decay)
- Updating the CNN architecture to include dropout
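As a brief sketch of what both of those look like in a Keras model definition (the 0.0005 decay strength and 50% dropout rate are common but illustrative values):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Flatten, Dense, Dropout
from tensorflow.keras.regularizers import l2

model = Sequential([
    # L2 regularization (weight decay) applied to the CONV layer's weights
    Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3),
        kernel_regularizer=l2(0.0005)),
    Flatten(),
    # randomly drop 50% of the connections into the classifier while training
    Dropout(0.5),
    Dense(10, activation="softmax"),
])
```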
You can read the following tutorial for an introduction/motivation to regularization:
Data augmentation is a type of regularization technique.
There are three types of data augmentation, including:
- Type #1: Dataset generation and expanding an existing dataset (less common)
- Type #2: In-place/on-the-fly data augmentation (most common)
- Type #3: Combining dataset generation and in-place augmentation
Unless you have a good reason not to apply data augmentation, you should always utilize data augmentation when training your own CNNs.
You can read more about data augmentation here:
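For reference, here is a sketch of in-place/on-the-fly augmentation (Type #2 above) using Keras’ ImageDataGenerator — the ranges shown are common starting points, not prescriptions:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# construct the image generator for data augmentation
aug = ImageDataGenerator(
    rotation_range=20,
    zoom_range=0.15,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.15,
    horizontal_flip=True,
    fill_mode="nearest")

# each batch drawn from the generator is randomly perturbed at training time:
# model.fit(aug.flow(trainX, trainY, batch_size=32), epochs=50)
```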
So far we’ve trained our CNNs from scratch — but is it possible to take a pre-trained model and use it to classify images it was never trained on?
Yes, it absolutely is!
Taking a pre-trained model and using it to classify data it was never trained on is called transfer learning.
There are two types of transfer learning:
Feature extraction: Here we treat our CNN as an arbitrary feature extractor.
An input image is presented to the CNN.
The image is forward-propagated to an arbitrary layer of the network.
We take those activations as our output and treat them like a feature vector.
Given feature vectors for all input images in our dataset we train an arbitrary Machine Learning model (ex., Logistic Regression or a Support Vector Machine (SVM)) on top of our extracted features.
When making a prediction, we:
- Forward-propagate the input image.
- Take the output features.
- Pass them to our ML classifier to obtain our output prediction.
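Here is a rough, self-contained sketch of that flow using VGG16 as the extractor and Logistic Regression as the downstream classifier (the random arrays stand in for a real, preprocessed dataset):

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from sklearn.linear_model import LogisticRegression

# placeholder data: 10 "images" and binary labels (swap in your own dataset)
images = np.random.rand(10, 224, 224, 3).astype("float32")
labels = np.array([0] * 5 + [1] * 5)

# load VGG16 without its FC head; global average pooling yields a 512-d vector
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

# forward-propagate the images and treat the activations as feature vectors
features = extractor.predict(images)

# train a simple ML classifier on top of the extracted features
clf = LogisticRegression(max_iter=1000)
clf.fit(features, labels)
print(clf.predict(features[:2]))
```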
You can read more about feature extraction here:
- Keras: Feature extraction on large datasets with Deep Learning
- Online/Incremental Learning with Keras and Creme
Fine-tuning: Here we modify the CNN architecture itself by performing network surgery.
Think of yourself as a “CNN Surgeon”.
We start by removing the Fully-Connected (FC) layer head from the pre-trained network.
Next, we add a brand new, randomly initialized FC layer head to the network.
Optionally, we freeze layers earlier in the CNN prior to training.
Keep in mind that CNNs are hierarchical feature learners:
- Layers earlier in the CNN can detect “structural building blocks”, including blobs, edges, corners, etc.
- Intermediate layers use these building blocks to start learning actual shapes.
- Finally, higher-level layers of the network learn abstract concepts (such as the objects themselves).
We freeze layers earlier in the network to ensure we retain our structural building blocks.
Training is then started using a very low learning rate.
Once our new FC layer head is “warmed up”, we may then optionally unfreeze our earlier layers and continue training.
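Here is a rough sketch of that surgery in Keras, using VGG16 as the pre-trained base (the head sizes, class count, and learning rate are illustrative):

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Flatten, Dense, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import SGD

# load the base network, chopping off its fully-connected head
base = VGG16(weights="imagenet", include_top=False,
    input_shape=(224, 224, 3))

# build a brand new, randomly initialized FC head
x = Flatten()(base.output)
x = Dense(256, activation="relu")(x)
x = Dropout(0.5)(x)
output = Dense(5, activation="softmax")(x)  # e.g., 5 classes
model = Model(inputs=base.input, outputs=output)

# freeze the earlier layers so their "structural building blocks" are retained
for layer in base.layers:
    layer.trainable = False

# warm up the new head with a very low learning rate
model.compile(optimizer=SGD(learning_rate=1e-4),
    loss="categorical_crossentropy", metrics=["accuracy"])
```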
You can learn more about fine-tuning here:
I’ll wrap up this section by saying that transfer learning is a critical skill for you to properly learn.
Use the above tutorials to help you get started, but for a deeper dive into my tips, suggestions, and best practices when applying Deep Learning and Transfer Learning, be sure to read my book Deep Learning for Computer Vision with Python.
Inside the text I not only explain transfer learning in detail, but also provide a number of case studies to show you how to successfully apply it to your own custom datasets.
At this point you have a good understanding of how to apply CNNs to images — but what about videos?
Can the same algorithms and techniques be applied?
Video classification is an entirely different beast — typical algorithms you may want to use here include Recurrent Neural Networks (RNNs) and Long Short-Term Memory networks (LSTMs).
However, before you start breaking out the “big guns” you should read this guide:
Inside you’ll learn how to use prediction averaging to reduce “prediction flickering” and create a CNN capable of stable video classification.
Imagine you are hired by a large clothing company (ex., Nordstrom, Neiman Marcus, etc.) and are tasked with building a CNN to classify two attributes of an input clothing image:
- Clothing Type: Shirt, dress, pants, shoes, etc.
- Color: The actual color of the item of clothing (i.e., blue, green, red, etc.).
To get started building such a model, you should refer to this tutorial:
As you’ll find out in the above guide, building a more accurate model requires you to utilize a multi-output network:
Now, let’s imagine that for your next job you are hired by a real estate company to automatically predict the price of a house based solely on input images.
You are given images of the bedroom, bathroom, living room, and house exterior.
You now need to train a CNN to predict the house price using just those images.
To accomplish that task you’ll need a multi-input network:
- Basic regression with Keras
- Keras, Convolutional Neural Networks, and Regression
- Keras: Multiple Inputs and Mixed Data
Both multi-input and multi-output networks are a bit on the “exotic” side.
You won’t need them often, but when you do, you’ll be happy you know how to use them!
The best way to improve your Deep Learning model performance is to learn via case studies.
The following case studies and tutorials will help you learn techniques that you can apply to your projects.
To start, I would recommend familiarizing yourself with common state-of-the-art architectures, including VGGNet, ResNet, Inception/GoogLeNet, Xception, and others:
If you want to learn how to implement your own custom data generators when training Keras models, refer here:
For training your Keras models with multiple GPUs, you’ll want to read this guide:
You can also use Keras for regression problems:
- Basic Regression with Keras
- Keras, Convolutional Neural Networks, and Regression
- Keras: Multiple Inputs and Mixed Data
The OpenCV library ships with a number of pre-trained models for neural style transfer, black and white image colorization, holistically-nested edge detection and others — you can learn about these models using the links below:
- Neural Style Transfer with OpenCV
- Black and white image colorization with OpenCV and Deep Learning
- Holistically-Nested Edge Detection with OpenCV
While SGD is the most popular optimizer used to train deep neural networks, others exist, including Adam, RMSprop, Adagrad, Adadelta and others.
These two tutorials cover the Rectified Adam (RAdam) optimizer, including comparing Rectified Adam to the standard Adam optimizer:
If you intend on deploying your models to production, and more specifically, behind a REST API, I’ve authored three tutorials on the topic, each building on top of each other:
- Building a simple Keras + deep learning REST API
- A scalable Keras + deep learning REST API
- Deep learning in production with Keras, Redis, Flask, and Apache
Take your time practicing and working through them — the experience you gain will be super valuable when you go off on your own!
What if you…
- Didn’t have to select and implement a Neural Network architecture?
- Didn’t have to tune your learning rate?
- Didn’t have to tune your regularization parameters?
What if you instead could treat the training process like a “black box”:
- Input your data to an API
- And let the algorithms inside automatically train the model for you!
Sound too good to be true?
In some cases it is…
…but in others it works just fine!
We call these sets of algorithms Automatic Machine Learning (AutoML) — you can read more about these algorithms here:
The point here is that AutoML algorithms aren’t going to be replacing you as a Deep Learning practitioner anytime soon.
They are super important to learn about, but they have a long way to go if they are ever going to replace you!
Where to Next?
Congratulations! If you followed the above steps then you now have enough Deep Learning knowledge to consider yourself a “practitioner”!
But where should you go from here?
If you’re interested in a deeper dive into the world of Deep Learning, I would recommend reading my book, Deep Learning for Computer Vision with Python.
Inside the book you’ll find:
- Super practical walkthroughs that present solutions to actual, real-world image classification problems, challenges, and competitions.
- Hands-on tutorials (with lots of code) that not only show you the algorithms behind deep learning for computer vision but their implementations as well.
- A no-nonsense teaching style that is guaranteed to help you master deep learning for image understanding and visual recognition.
You can learn more about the book here.
Otherwise, I would recommend reading the following sections of this guide:
- Object Detection: State-of-the-art object detectors, including Faster R-CNN, Single Shot Detectors (SSDs), YOLO, and RetinaNet all rely on Deep Learning. If you want to learn how to not only classify an input image but also locate where in the image the object is, then you’ll want to read these guides.
- Embedded and IoT Computer Vision and Computer Vision on the Raspberry Pi: If you’re interested in applying DL to resource constrained devices such as the Raspberry Pi, Google Coral, and NVIDIA Jetson Nano, these are the sections for you!
- Medical Computer Vision: Apply Computer Vision and Deep Learning to medical image analysis and learn how to classify blood cells and detect cancer.
Face Applications
Using Computer Vision we can perform a variety of facial applications, including facial recognition, building a virtual makeover system (i.e., makeup, cosmetics, eyeglasses/sunglasses, etc.), or even aiding in law enforcement to help detect, recognize, and track criminals.
Computer Vision is powering facial recognition at a massive scale — just take a second to consider that over 350 million images are uploaded to Facebook every day.
For each of those images, Facebook is running face detection (to detect the presence of faces) followed by face recognition (to actually tag people in photos).
In this section you’ll learn the basics of facial applications using Computer Vision.
Before you can build facial applications, you first need to configure your development environment.
Start by following Step #1 of the How Do I Get Started? section to install OpenCV on your system.
From there, you’ll need to install the dlib and face_recognition libraries.
The Install your face recognition libraries section of this tutorial will help you install both dlib and face_recognition.
Make sure you have installed OpenCV, dlib, and face_recognition before continuing!
In order to apply Computer Vision to facial applications you first need to detect and find faces in an input image.
Face detection is different than face recognition.
During face detection we are simply trying to locate where in the image faces are.
Our face detection algorithms do not know who is in the image, simply that a given face exists at a particular location.
Once we have our detected faces, we pass them into a facial recognition algorithm which outputs the actual identity of the person/face.
Thus, all Computer Vision and facial applications must start with face detection.
There are a number of face detectors that you can use, but my favorite is OpenCV’s Deep Learning-based face detector:
OpenCV’s face detector is accurate and able to run in real-time on modern laptops/desktops.
That said, if you’re using a resource constrained device (such as the Raspberry Pi), the Deep Learning-based face detector may be too slow for your application.
In that case, you may want to utilize Haar cascades or HOG + Linear SVM instead:
Haar cascades are very fast but prone to false-positive detections.
It can also be a pain to properly tune the parameters to the face detector.
HOG + Linear SVM is a nice balance between the Haar cascades and OpenCV’s Deep Learning-based face detector.
This detector is slower than Haar but is also more accurate.
Here’s my suggestion:
- If you need accuracy, go with OpenCV’s Deep Learning face detector.
- If you need pure speed, go with Haar cascades.
- And if you need a balance between the two, go with HOG + Linear SVM.
Finally, make sure you try all three detectors before you decide!
Gather a few example images and test out the face detectors.
Let your empirical results guide you — apply face detection using each of the algorithms, examine the results, and double-down on the algorithm that gave you the best results.
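To get you experimenting quickly, here is a minimal sketch of the Haar cascade option — OpenCV ships the pre-trained XML cascades with its Python package (the image path is a placeholder, and scaleFactor/minNeighbors usually need per-application tuning):

```python
import cv2

# load the pre-trained frontal face Haar cascade bundled with OpenCV
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("example.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detect faces in the grayscale image
rects = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
    minSize=(30, 30))

# draw a box around each detected face
for (x, y, w, h) in rects:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imshow("Faces", image)
cv2.waitKey(0)
```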
At this point you can detect the location of a face in an image.
But what if we wanted to localize various facial structures, including:
- Nose
- Eyes
- Mouth
- Jawline
Using facial landmarks we can do exactly that!
And best of all, facial landmark algorithms are capable of running in real-time!
Most of your computation is going to be spent detecting the actual face — once you have the face detected, facial landmarks are quite fast!
Start by reading the following tutorials to learn how to localize facial structures on a detected face:
Now that you have some experience with face detection and facial landmarks, let’s practice these skills and continue to hone them.
I suggest going through the following guides to help you apply Computer Vision to facial applications:
Are you ready to build your first facial recognition system?
Hold up — I get that you’re eager, but before you can build a face recognition system, you first need to gather your dataset of example images.
The following tutorials will help you create a face recognition dataset:
- How to build a custom face recognition dataset
- How to create a deep learning dataset using Google Images
You can then take the dataset you created and proceed to the next step to build your actual face recognition system.
Note: If you don’t want to build your own dataset you can proceed immediately to Step #6 — I’ve provided my own personal example datasets for the tutorials in Step #6 so you can continue to learn how to apply face recognition even if you don’t gather your own images.
At this point you have either (1) created your own face recognition dataset using the previous step or (2) elected to use my own example datasets I put together for the face recognition tutorials.
To build your first face recognition system, follow this guide:
This tutorial utilizes OpenCV, dlib, and face_recognition to create a facial recognition application.
The problem with the first method is that it relies on a modified k-Nearest Neighbor (k-NN) search to perform the actual face identification.
k-NN, while simple, can easily fail as the algorithm doesn’t “learn” any underlying patterns in the data.
To remedy the situation (and obtain probabilities associated with the face recognition), you should follow this guide:
You’ll note that this tutorial does not rely on the dlib and face_recognition libraries — instead, we use OpenCV’s FaceNet model.
A great project for you would be to:
- Replace OpenCV’s FaceNet model with the dlib and face_recognition packages.
- Extract the 128-d facial embeddings
- Train a Logistic Regression or Support Vector Machine (SVM) on the embeddings extracted by dlib/face_recognition
Take your time when implementing the above project — it will be a great learning experience for you.
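If it helps you get going, here is a rough sketch of the core of that project: extracting 128-d embeddings with face_recognition and training an SVM on top of them. The dataset paths and names are placeholders you would replace with your own images:

```python
import cv2
import face_recognition
from sklearn.svm import SVC

# placeholder dataset -- swap in the image paths/names you gathered earlier
image_paths = ["dataset/person_a/001.jpg", "dataset/person_b/001.jpg"]
names = ["person_a", "person_b"]

encodings, labels = [], []
for (path, name) in zip(image_paths, names):
    image = cv2.imread(path)
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    # detect faces, then compute a 128-d embedding for each detection
    boxes = face_recognition.face_locations(rgb, model="hog")
    for encoding in face_recognition.face_encodings(rgb, boxes):
        encodings.append(encoding)
        labels.append(name)

# train an SVM on the embeddings (Logistic Regression works similarly)
recognizer = SVC(kernel="linear", probability=True)
recognizer.fit(encodings, labels)
```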
Whenever I write about face recognition the #1 question I get asked is: “How can I improve my face recognition accuracy?”
I’m glad you asked — and in fact, I’ve already covered the topic.
Make sure you refer to the Drawbacks, limitations, and how to obtain higher face recognition accuracy section (right before the Summary) of the following tutorial:
You should also read up on face alignment as proper face alignment can improve your face recognition accuracy:
Inside that section I discuss how you can improve your face recognition accuracy.
You may have noticed that it’s possible to “trick” and “fool” your face recognition system by holding up a printed photo of a person or photo of the person on your screen.
In those situations your face recognition correctly recognizes the person, but fails to realize that it’s a fake/spoofed face!
What do you do then?
The answer is to apply liveness detection:
Liveness detection algorithms are used to detect real vs. fake/spoofed faces.
Once you have determined that the face is indeed real, then you can pass it into your face recognition system.
Where to Next?
Congrats on making it all the way through the Facial Applications section!
That was quite a lot of content to cover and you did great.
Take a second now to be proud of yourself and your accomplishments.
But what now — where should you go next?
My recommendation would be the PyImageSearch Gurus course.
The PyImageSearch Gurus course includes additional modules and lessons on face recognition.
Additionally, you’ll also find:
- An actionable, real-world course on OpenCV and computer vision (similar to a college survey course on Computer Vision but much more hands-on and practical).
- The most comprehensive computer vision education online today. The PyImageSearch Gurus course covers 13 modules broken out into 168 lessons, with over 2,161 pages of content. You won’t find a more detailed computer vision course anywhere else online, I guarantee it.
- A community of like-minded developers, researchers, and students just like you, who are eager to learn computer vision and level-up their skills.
To learn more about the PyImageSearch Gurus course, just use the link below:
Optical Character Recognition (OCR)
One of the first applications of Computer Vision was Optical Character Recognition (OCR).
OCR algorithms seek to (1) take an input image and then (2) recognize the text/characters in the image, returning a human-readable string to the user (in this case a “string” is assumed to be a variable containing the text that was recognized).
While OCR is a simple concept to comprehend (input image in, human-readable text out), it’s actually an extremely challenging problem that is far from solved.
The steps in this section will arm you with the knowledge you need to build your own OCR pipelines.
Before you can apply OCR to your own projects you first need to install OpenCV.
Follow Step #1 of the How Do I Get Started? section above to install OpenCV on your system.
Once you have OpenCV installed you can move on to Step #2.
Tesseract is an OCR engine/API that was originally developed by Hewlett-Packard in the 1980s.
The library was open-sourced in 2005 and later adopted by Google in 2006.
Tesseract supports over 100 written languages, ranging from English to Punjabi to Yiddish.
Combining OpenCV with Tesseract is by far the fastest way to get started with OCR.
First, make sure you have Tesseract installed on your system:
From there, you can create your first OCR application using OpenCV and Tesseract:
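A minimal sketch of what that first application boils down to, assuming the pytesseract bindings (pip install pytesseract) and that the Tesseract binary itself is installed on your system:

```python
import cv2
import pytesseract

# load the input image and convert it from BGR (OpenCV) to RGB ordering
image = cv2.imread("example.png")
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# OCR the image and print the recognized text
text = pytesseract.image_to_string(rgb)
print(text)
```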
It’s entirely possible to perform OCR without libraries such as Tesseract.
To accomplish this task you need to combine feature extraction along with a bit of heuristics and/or machine learning.
The following guide will give you experience recognizing digits on a 7-segment display using just OpenCV:
Take your time and practice with that tutorial — it will help you learn how to approach OCR projects.
Let’s continue our study of OCR by solving mini-projects:
- Credit card OCR with OpenCV and Python
- Bank check OCR with OpenCV and Python (Part I)
- Bank check OCR with OpenCV and Python (Part II)
- Detecting machine-readable zones in passport images
Again, follow the guides and practice with them — they will help you learn how to apply OCR to your tasks.
So far we’ve applied OCR to images that were captured under controlled environments (i.e., no major changes in lighting, viewpoint, etc.).
But what if we wanted to apply OCR to images in uncontrolled environments?
Imagine we were tasked with building a Computer Vision system for Facebook to handle OCR’ing the 350+ million new images uploaded to their platform every day.
In that case, we can make zero assumptions regarding the environment in which the images were captured.
Some images may be captured using a high quality DSLR camera, others with a standard iPhone camera, and even others with a decade old flip phone — again, we can make no assumptions regarding the quality, viewing angle, or even contents of the image.
In that case, we need to break OCR into a two stage process:
- Stage #1: Use the EAST Deep Learning-based text detector to locate where text resides in the input image.
- Stage #2: Use an OCR engine (ex., Tesseract) to take the text locations and then actually recognize the text itself.
To perform Stage #1 (Text Detection) you should follow this tutorial:
If you’ve read the Face Applications section above you’ll note that our OCR pipeline is similar to our face recognition pipeline:
- First, we detect the text in the input image (akin to detecting/locating a face in an image)
- And then we take the regions of the image that contain the text and actually recognize it (which is similar to taking the location of a face and then actually recognizing who the face belongs to).
Now that we know where in the input image text resides, we can then take those text locations and actually recognize the text.
To accomplish this task we’ll again be using Tesseract, but this time we’ll want to use Tesseract v4.
The v4 release of Tesseract contains an LSTM-based OCR engine that is far more accurate than previous releases.
You can learn how to combine Text Detection with OCR using Tesseract v4 here:
Where to Next?
Keep in mind that OCR, while widely popular, is still far from being solved.
It is likely, if not inevitable, that your OCR results will not be 100% accurate.
Commercial OCR engines anticipate results not being 100% correct as well.
These engines will sometimes apply auto-correction/spelling correction to the returned results to make them more accurate.
The pyspellchecker package would likely be a good starting point for you if you’re interested in spell checking the OCR results.
Additionally, you may want to look at the Google Vision API:
While the Google Vision API requires (1) an internet connection and (2) payment to utilize, in my opinion it’s one of the best OCR engines available to you.
OCR is undoubtedly one of the most challenging areas of Computer Vision.
If you need help building your own custom OCR systems or increasing the accuracy of your current OCR system, I would recommend joining the PyImageSearch Gurus course.
The course includes private forums where I hang out and answer questions daily.
It’s a great place to get expert advice, both from me, as well as the more advanced students in the course.
Click here to learn more about the PyImageSearch Gurus course.
Object Detection
Object detection algorithms seek to detect the location of where an object resides in an image.
These algorithms can be as simple as basic color thresholding or as advanced as training a complex deep neural network from scratch.
In the first part of this section we’ll look at some basic methods of object detection, working all the way up to Deep Learning-based object detectors including YOLO and SSDs.
Prior to working with object detection you’ll need to configure your development environment.
To start, make sure you:
- Follow Step #1 of the How Do I Get Started? section to install OpenCV.
- Install Keras and TensorFlow via Step #1 of the Deep Learning section.
Provided you have OpenCV, TensorFlow, and Keras installed, you are free to continue with the rest of this tutorial.
We’ll keep our first object detector/tracker super simple.
We’ll rely strictly on basic image processing concepts, namely color thresholding.
To apply color thresholding we define an upper and lower range in a given color space (such as RGB, HSV, L*a*b*, etc.).
Then, for an incoming image/frame, we use OpenCV’s cv2.inRange function to apply color thresholding, yielding a mask, where:
- All foreground pixels are white
- And all background pixels are black
Therefore, all pixels that fall into our upper and lower boundaries will be marked as foreground.
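Here is a sketch of that exact workflow — the lower/upper bounds below are an illustrative HSV range for a green object and will need tuning for your own use case:

```python
import cv2
import numpy as np

image = cv2.imread("example.jpg")
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# illustrative HSV bounds for a "green" object
lower = np.array([29, 86, 6])
upper = np.array([64, 255, 255])

# pixels inside the range become white (foreground), everything else black
mask = cv2.inRange(hsv, lower, upper)
masked = cv2.bitwise_and(image, image, mask=mask)

cv2.imshow("Mask", mask)
cv2.imshow("Masked", masked)
cv2.waitKey(0)
```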
Color thresholding methods, as the name suggests, are super useful when you know the color of the object you want to detect and track will be different than all other colors in the frame.
Furthermore, color thresholding algorithms are very fast, enabling them to run in super real-time, even on resource constrained devices, such as the Raspberry Pi.
Let’s go ahead and implement your first object detector now:
Then, when you’re done, you can extend it to track object movement (north, south, east, west, etc.):
Once you’ve implemented the above two guides I suggest you extend the project by attempting to track your own objects.
Again, keep in mind that this object detector is based on color, so make sure the object you want to detect has a different color than the other objects/background in the scene!
Color-based object detectors are fast and efficient, but they do nothing to understand the semantic contents of an image.
For example, how would you go about defining a color range to detect an actual person?
Would you attempt to track based on skin tone?
That would fail pretty quickly — humans have a large variety of skin tones, varying with ethnicity and exposure to the sun. Defining such a range would be impossible.
Would clothing work?
Well, maybe if you were at a soccer/football game and wanted to track players on the pitch via their jersey colors.
But for general purpose applications that wouldn’t work either — clothing comes in all shapes, sizes, colors, and designs.
I think you get my point here — trying to detect a person based on color thresholding methods alone simply isn’t going to work.
Instead, you need to use a dedicated object detection algorithm.
One of the most common object detectors is the Viola-Jones algorithm, also known as Haar cascades.
The Viola-Jones algorithm was published back in 2001 but is still used today (although Deep Learning-based object detectors obtain far better accuracy).
To try out a Haar cascade, follow this guide:
In 2005, Dalal and Triggs published the seminal paper, Histogram of Oriented Gradients for Human Detection.
This paper introduces what we call the HOG + Linear SVM Object Detector:
Let’s gain some experience applying HOG + Linear SVM to pedestrian detection:
You’ll then want to understand the parameters to OpenCV’s detectMultiScale function, including how to tune them to obtain higher accuracy:
Now that we’ve seen how HOG + Linear SVM works in practice, let’s dissect the algorithm a bit.
To start, the HOG + Linear SVM object detector uses a combination of sliding windows, HOG features, and a Support Vector Machine to localize objects in images.
Image pyramids allow us to detect objects at different scales (i.e., objects that are closer to the camera as well as objects farther away):
Sliding windows enable us to detect objects at different locations in a given scale of the pyramid:
Finally, you need to understand the concept of non-maxima suppression, a technique used in both traditional object detection as well as Deep Learning-based object detection:
When performing object detection you’ll end up locating multiple bounding boxes surrounding a single object.
This behavior is actually a good thing — it implies that your object detector is working correctly and is “activating” when it gets close to objects it was trained to detect.
The problem is that we now have multiple bounding boxes for one object.
To rectify the problem we can apply non-maxima suppression, which, as the name suggests, suppresses (i.e., ignores/deletes) weak, overlapping bounding boxes.
The term “weak” here is used to indicate bounding boxes of low confidence/probability.
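For intuition, here is a compact, self-contained sketch of the idea: keep the highest-scoring box, suppress any remaining box whose Intersection over Union (IoU) with it exceeds a threshold, and repeat (the 0.3 threshold and sample boxes are illustrative):

```python
import numpy as np

def non_max_suppression(boxes, scores, overlap_thresh=0.3):
    # boxes: (N, 4) array of (x1, y1, x2, y2); scores: (N,) confidences
    boxes = boxes.astype("float")
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    area = (x2 - x1 + 1) * (y2 - y1 + 1)
    order = np.argsort(scores)[::-1]  # highest confidence first
    keep = []

    while len(order) > 0:
        i = order[0]
        keep.append(int(i))

        # intersection of the kept box with every remaining box
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0, xx2 - xx1 + 1) * np.maximum(0, yy2 - yy1 + 1)

        # suppress boxes whose IoU with the kept box is too high
        iou = inter / (area[i] + area[order[1:]] - inter)
        order = order[1:][iou <= overlap_thresh]

    return keep

boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [100, 100, 150, 150]])
scores = np.array([0.9, 0.6, 0.8])
print(non_max_suppression(boxes, scores))  # the near-duplicate box is suppressed
```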
If you are interested in learning more about the HOG + Linear SVM object detector, including:
- How to train your own custom HOG + Linear SVM object detector
- The inner-workings of the HOG + Linear SVM detector
- …then you’ll want to refer to the PyImageSearch Gurus course – Inside the course you’ll find 30+ lessons on HOG feature extraction and the HOG + Linear SVM object detection algorithm.
For ~10 years HOG + Linear SVM (including its variants) was considered the state-of-the-art in terms of object detection.
However, Deep Learning-based object detectors, including Faster R-CNN, Single Shot Detector (SSDs), You Only Look Once (YOLO), and RetinaNet have obtained unprecedented object detection accuracy.
The OpenCV library is compatible with a number of pre-trained object detectors — let’s start by taking a look at this SSD:
In Step #5 you learned how to apply object detection to images — but what about video?
Is it possible to apply object detection to real-time video streams?
On modern laptops/desktops you’ll be able to run some (but not all) Deep Learning-based object detectors in real-time.
This tutorial will get you started:
For a deeper dive into Deep Learning-based object detection, including how to filter/remove classes that you want to ignore/not detect, refer to this tutorial:
Next, you’ll want to practice applying the YOLO object detector:
The YOLO object detector is designed to be super fast; however, it appears that the OpenCV implementation is actually far slower than the SSD counterparts.
I’m not entirely sure why that is.
Furthermore, OpenCV’s Deep Neural Network (dnn) module does not yet support NVIDIA GPUs, meaning that you cannot use your GPU to improve inference speed.
OpenCV is reportedly working on NVIDIA GPU support, but that support may not be available until 2020.
If you decide you want to train your own custom object detectors from scratch you’ll need a method to evaluate the accuracy of the model.
To do that we use two metrics: Intersection over Union (IoU) and mean Average Precision (mAP) — you can read about them here:
If you’ve followed along so far, you know that object detection produces bounding boxes that report the location and class label of each detected object in an image.
But what if you wanted to extend object detection to produce pixel-wise masks?
These masks would not only report the bounding box location of each object, but would report which individual pixels belong to the object.
These types of algorithms are covered in the Instance Segmentation and Semantic Segmentation section.
Deep Learning-based object detectors, while accurate, are extremely computationally hungry, making them incredibly challenging to apply to resource constrained devices such as the Raspberry Pi, Google Coral, and NVIDIA Jetson Nano.
If you would like to apply object detection to these devices, make sure you read the Embedded and IoT Computer Vision and Computer Vision on the Raspberry Pi sections, respectively.
Where to Next?
Congratulations, you now have a solid foundation on how object detection algorithms work!
If you’re looking to study object detection in more detail, I would recommend you:
Join the PyImageSearch Gurus course: Inside the course I cover the inner-workings of the HOG + Linear SVM algorithm, including how to train your own custom HOG + Linear SVM detector.
Take a look at Deep Learning for Computer Vision with Python: That book covers Deep Learning-based object detection in-depth, including how to (1) annotate your dataset and (2) train the following object detectors: Faster R-CNNs, Single Shot Detectors (SSDs), and RetinaNet.
If you’re interested in instance/semantic segmentation, the text covers Mask R-CNN as well.
Read through Raspberry Pi for Computer Vision
As the name suggests, this book is dedicated to developing and optimizing Computer Vision and Deep Learning algorithms on resource constrained devices, including the:
- Raspberry Pi
- Google Coral
- Intel Movidius NCS
- NVIDIA Jetson Nano
Inside you’ll learn how to train your own object detectors, optimize/convert them for the RPi, Coral, NCS, and/or Nano, and then run the detectors in real-time!
Object Tracking
Object Tracking algorithms are typically applied after an object has already been detected; therefore, I recommend you read the Object Detection section first. Once you’ve read those sets of tutorials, come back here and learn about object tracking.
Object detection algorithms tend to be accurate, but computationally expensive to run.
It may be infeasible/impossible to run a given object detector on every frame of an incoming video stream and still maintain real-time performance.
Therefore, we need an intermediary algorithm that can accept the bounding box location of an object, track it, and then automatically update itself as the object moves about the frame.
We’ll learn about these types of object tracking algorithms in this section.
Prior to working through this section you’ll need to install OpenCV on your system.
Make sure you follow Step #1 of How Do I Get Started? to configure and install OpenCV.
Additionally, I recommend reading the Object Detection section first as object detection tends to be a prerequisite to object tracking.
The first object tracker we’ll cover is a color-based tracker.
This algorithm combines both object detection and tracking into a single step, and in fact, is the simplest object tracker possible.
You can read more about color-based detection and tracking here:
Our color-based tracker was a good start, but the algorithm will fail if there is more than one object we want to track.
For example, let’s assume there are multiple objects in our video stream and we want to associate unique IDs with each of them — how might we go about doing that?
The answer is to apply a Centroid Tracking algorithm:
Using Centroid Tracking we can not only associate unique IDs with a given object, but also detect when an object is lost and/or has left the field of view.
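The heart of the algorithm fits in a few lines: match each existing object ID to the nearest new detection centroid by Euclidean distance. This is only a sketch with hard-coded centroids — a full implementation also registers newly appearing objects and deregisters ones that disappear:

```python
import numpy as np
from scipy.spatial import distance as dist

# existing tracked objects (ID -> centroid) and the detections from this frame
objects = {0: (10, 20), 1: (200, 150)}
new_centroids = np.array([(205, 148), (12, 25)])

object_ids = list(objects.keys())
prev_centroids = np.array(list(objects.values()))

# pairwise distances: rows are existing objects, columns are new detections
D = dist.cdist(prev_centroids, new_centroids)

# greedily assign each existing ID to its closest unclaimed new centroid
used_cols = set()
for row in D.min(axis=1).argsort():
    col = int(D[row].argmin())
    if col in used_cols:
        continue
    objects[object_ids[row]] = tuple(int(v) for v in new_centroids[col])
    used_cols.add(col)

print(objects)  # each ID keeps following its object across frames
```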
OpenCV comes with eight object tracking algorithms built-in to the library, including:
- BOOSTING Tracker
- MIL Tracker
- KCF Tracker
- CSRT Tracker
- MedianFlow Tracker
- TLD Tracker
- MOSSE Tracker
- GOTURN Tracker
You can learn how to use each of them in this tutorial:
The dlib library also has an implementation of correlation tracking:
When utilizing object tracking in your own applications you need to balance speed with accuracy.
My personal recommendation is to:
- Use CSRT when you need higher object tracking accuracy and can tolerate slower FPS throughput.
- Use KCF when you need faster FPS throughput but can handle slightly lower object tracking accuracy.
- Use MOSSE when you need pure speed.
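Here is a minimal sketch of spinning up one of these trackers (CSRT in this case); note that the tracker constructors require the opencv-contrib-python package, and in some OpenCV versions they live under cv2.legacy instead of the top-level cv2 namespace:

```python
import cv2

vs = cv2.VideoCapture(0)
tracker = cv2.TrackerCSRT_create()

# grab the first frame and let the user draw the initial bounding box
ok, frame = vs.read()
box = cv2.selectROI("Frame", frame, fromCenter=False)
tracker.init(frame, box)

while True:
    ok, frame = vs.read()
    if not ok:
        break

    # update the tracker and draw the new box if tracking succeeded
    success, box = tracker.update(frame)
    if success:
        (x, y, w, h) = [int(v) for v in box]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

vs.release()
cv2.destroyAllWindows()
```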
Step #4 handled single object tracking using OpenCV and dlib’s object trackers — but what about multi-object tracking?
You should start by reading about multi-object tracking with OpenCV:
Multi-object tracking is, by definition, significantly more complex in terms of the underlying programming, API calls, and computational efficiency.
Most multi-object tracking implementations instantiate a brand new Python/OpenCV class to handle object tracking, meaning that if you have N objects you want to track, you therefore have N object trackers instantiated — which quickly becomes a problem in crowded scenes.
Your CPU will choke on the load and your object tracking system will come to a grinding halt.
One way to overcome this problem is to use multiprocessing and distribute the load across multiple processes/cores, thus enabling you to reclaim some speed:
So far you’ve learned how to apply single object tracking and multi-object tracking.
Let’s put all the pieces together and build a person/footfall counter application capable of detecting, tracking, and counting the number of people that enter/exit a given area (i.e., convenience store, grocery store, etc.):
In particular, you’ll want to note how the above implementation takes a hybrid approach to object detection and tracking, where:
- The object detector is only applied every N frames.
- One object tracker is created per detected object.
- The trackers enable us to track the objects.
- Then, once we reach the N-th frame, we apply object detection, associate centroids, and then create new object trackers.
Such a hybrid implementation enables us to balance speed with accuracy.
Where to Next?
Object tracking algorithms are more of an advanced Computer Vision concept.
If you’re interested in studying Computer Vision in more detail, I would recommend the PyImageSearch Gurus course.
This course is similar to a college survey in Computer Vision, but way more practical, including hands-on coding and implementations.
Instance Segmentation and Semantic Segmentation
There are three primary types of algorithms used for image understanding:
- Image classification algorithms enable you to obtain a single label that represents the contents of an image. You can think of image classification as inputting a single image to a network and obtaining a single label as output.
- Object detection algorithms are capable of telling you not only what is in an image, but also where in the image a given object is. Object detectors thus accept a single input image and return multiple values as output. The output itself is a list of values containing (1) the class label and (2) the bounding box (x, y)-coordinates of where the particular object is in the image.
- Instance segmentation and semantic segmentation take object detection farther. Instead of returning bounding box coordinates, instance/semantic segmentation methods instead yield pixel-wise masks that tell us (1) the class label of an object, (2) the bounding box coordinates of the object, and (3) the coordinates of the pixels that belong to the object.
These segmentation algorithms are intermediate/advanced techniques, so make sure you read the Deep Learning section above to ensure you understand the fundamentals.
In order to perform instance segmentation you need to have OpenCV, TensorFlow, and Keras installed on your system.
Make sure you follow Step #1 from the How Do I Get Started? section to install OpenCV.
From there, follow Step #1 from the Deep Learning section to ensure TensorFlow and Keras are properly configured.
Now that you have your deep learning machine configured, you can learn about instance segmentation.
Follow this guide to utilize your first instance segmentation network using OpenCV:
That guide will also teach you how instance segmentation is different from object detection.
Mask R-CNN is arguably the most popular instance segmentation architecture.
Mask R-CNNs have been successfully applied to self-driving cars (vehicle, road, and pedestrian detection), medical applications (automatic tumor detection/segmentation), and much more!
This guide will show you how to use Mask R-CNN with OpenCV:
And this tutorial will teach you how to use the Keras implementation of Mask R-CNN:
When performing instance segmentation our goal is to (1) detect objects and then (2) compute pixel-wise masks for each object detected.
Semantic segmentation is a bit different — instead of labeling just the objects in an input image, semantic segmentation seeks to label every pixel in the image.
That means that if a given pixel doesn’t belong to any category/class, we label it as “background” (meaning that the pixel does not belong to any semantically interesting object).
Semantic segmentation algorithms are very popular for self-driving car applications as they can segment an input image/frame into components, including road, sidewalk, pedestrian, bicyclist, sky, building, background, etc.
To learn more about semantic segmentation algorithms, refer to this tutorial:
Where to Next?
Congratulations, you now understand how to work with instance segmentation and semantic segmentation algorithms!
However, we worked only with pre-trained segmentation networks — what if you wanted to train your own?
That is absolutely possible — and to do so, you’ll want to refer to Deep Learning for Computer Vision with Python.
Inside the book you’ll discover:
- The annotation tools I recommend (and how to use them) when labeling your own image dataset for instance/semantic segmentation.
- How to train a Mask R-CNN on your own custom dataset.
- How to take your trained Mask R-CNN and apply it to your own images.
- My best practices, tips, and suggestions when training your own Mask R-CNN.
Embedded and IoT Computer Vision
Applying Computer Vision and Deep Learning algorithms to resource constrained devices such as the Raspberry Pi, Google Coral, and NVIDIA Jetson Nano can be super challenging due to the fact that state-of-the-art CV/DL algorithms are computationally hungry — these resource constrained devices just don’t have enough CPU power and sufficient RAM to feed these hungry algorithm beasts.
But don’t worry!
You can still apply CV and DL to these devices — you just need to follow these guides first.
Before you start applying Computer Vision and Deep Learning to embedded/IoT applications you first need to choose a device.
I suggest starting with the Raspberry Pi — it’s a super cheap ($35) and easily accessible device for your initial forays into embedded/IoT Computer Vision and Deep Learning.
These guides will help you configure your Raspberry Pi:
- Install OpenCV on your RPi the “easy way” with pip
- Compile and install OpenCV 4 from source on Raspberry Pi 4 and Raspbian Buster
Another option to consider is NVIDIA’s Jetson Nano, which many call the “Raspberry Pi for Artificial Intelligence”.
At $99 it’s still reasonably affordable and packs a Maxwell GPU with 128 CUDA cores capable of 472 GFLOPS of computation.
To get started with the NVIDIA Jetson Nano, follow this guide:
You may also want to consider Google’s Coral Platform along with the Movidius NCS.
The Google Coral USB Accelerator is a particularly attractive option as it’s essentially a Deep Learning USB Stick (similar to Intel’s Movidius NCS).
Both the Movidius NCS and Google Coral USB Accelerator plug into a USB port on your embedded device (such as a Raspberry Pi or Jetson Nano).
You can then perform inference (i.e., prediction) on the USB stick, yielding faster throughput than using the CPU alone.
We’ll cover both the Movidius NCS and Google Coral USB Accelerator later in this section.
Again, I strongly recommend the Raspberry Pi as your first embedded vision platform — it’s super cheap and very easy to use.
To get started, I would recommend that you understand how to:
- Access the Raspberry Pi Camera with OpenCV and Python
- Diagnose common errors using the Raspberry Pi camera module
Next, build your first motion detector using the Raspberry Pi:
And then extend it to build an IoT home surveillance system:
If I’ve said it once, I’ve said it a hundred times — the best way to learn Computer Vision is through practical, hands-on projects.
The same is true for Embedded Vision and IoT projects as well.
To gain additional experience building embedded CV projects, follow these guides to work with video on embedded devices, including working with multiple cameras and live streaming video over a network:
- Multiple cameras with the Raspberry Pi and OpenCV
- Live video streaming over network with OpenCV and ImageZMQ
- OpenCV – Stream video to web browser/HTML page
To gain experience working with hardware, check out this pan/tilt face tracker:
There is a dedicated Face Applications section in this guide, but there’s no harm in getting experience with face applications on the RPi now:
- Raspberry Pi: Facial landmarks + drowsiness detection with OpenCV and dlib
- Raspberry Pi Face Recognition
If you’re eager to gain some initial experience using deep learning on embedded devices, start with this guide:
From there you’ll want to go through the steps in the Deep Learning section.
Finally, if you want to integrate text message notifications into the Computer Vision security system we built in the previous step, then read this tutorial:
If you followed Step #3 then you found out that running Deep Learning models on resource constrained devices such as the Raspberry Pi can be computationally prohibitive, preventing you from obtaining real-time performance.
In order to boost your Frames Per Second (FPS) throughput rate, you should consider using a coprocessor such as Intel’s Movidius NCS or Google’s Coral USB Accelerator:
- Getting started with the Intel Movidius Neural Compute Stick
- Getting started with Google Coral’s TPU USB Accelerator
Or, you may want to switch to a different board entirely! For that I would recommend NVIDIA’s Jetson Nano:
These devices/boards can substantially boost your FPS throughput!
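To give you a feel for how that offloading works in practice, here is a minimal sketch using OpenCV’s dnn module to target the Movidius NCS via OpenVINO. The model paths and input image below are hypothetical placeholders, and it assumes your OpenCV build includes Inference Engine support:

```python
# Minimal sketch: pushing OpenCV DNN inference onto the Movidius NCS.
# Assumes OpenCV was compiled with OpenVINO/Inference Engine support and
# that model.xml/model.bin are placeholder paths to a model already
# converted to OpenVINO's IR format.
import cv2

net = cv2.dnn.readNet("model.xml", "model.bin")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)  # run on the NCS instead of the CPU

image = cv2.imread("example.jpg")                   # hypothetical input image
blob = cv2.dnn.blobFromImage(image, size=(300, 300))
net.setInput(blob)
detections = net.forward()                          # inference happens on the USB stick
```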
Just as image classification can be slow on embedded devices, the same is true for object detection as well.
In fact, object detection is slower than image classification given the additional computation required.
To see how object detection on the RPi CPU can be a challenge, start by reading this guide:
To get around this limitation we can once again lean on the Movidius NCS, Google Coral, and NVIDIA Jetson Nano:
- Real-time object detection on the Raspberry Pi with the Movidius NCS
- OpenVINO, OpenCV, and Movidius NCS on the Raspberry Pi
- Object detection and image classification with Google Coral USB Accelerator
- Getting started with the NVIDIA Jetson Nano
Where to Next?
At this point you should:
- Understand how to apply basic Computer Vision algorithms to resource constrained devices.
- And more importantly, appreciate how challenging it can be to apply these algorithms given limited CPU, RAM, and power.
If you’d like a deeper understanding of this material, including how to:
- Build practical, real-world computer vision applications on the Raspberry Pi
- Create computer vision and Internet of Things (IoT) projects and applications with the RPi
- Optimize your OpenCV code and algorithms on the resource constrained Pi
- Perform Deep Learning on the Raspberry Pi (including utilizing the Movidius NCS and OpenVINO toolkit)
- Utilize the Google Coral and NVIDIA Jetson Nano to build embedded computer vision and deep learning applications
…then you should definitely take a look at my book, Raspberry Pi for Computer Vision!
This book is your one-stop shop for learning how to master Computer Vision and Deep Learning on embedded devices.
Computer Vision on the Raspberry Pi
At only $35, the Raspberry Pi (RPi) is a cheap, affordable piece of hardware that can be used by hobbyists, educators, and professionals/industry alike.
The Raspberry Pi 4 (the current model as of this writing) includes a quad-core Cortex-A72 running at 1.5GHz and either 1GB, 2GB, or 4GB of RAM (depending on which model you purchase) — all running on a computer the size of a credit card.
But don’t let its small size fool you!
The Raspberry Pi can absolutely be used for Computer Vision and Deep Learning (but you need to know how to tune your algorithms first).
Prior to working through these steps, I recommend you first work through the How Do I Get Started? section.
Not only will that section teach you how to install OpenCV on your Raspberry Pi, but it will also teach you the fundamentals of the OpenCV library.
If you find yourself struggling to get OpenCV installed on your Raspberry Pi, take a look at both:
Both of those books contain a pre-configured Raspbian .img file.
All you need to do is download the .img file, flash it to your micro-SD card, and boot your RPi.
From there you’ll have a pre-configured development environment with OpenCV and all other CV/DL libraries you need pre-installed.
This .img file can save you days of heartache trying to get OpenCV installed.
Assuming you now have OpenCV installed on your RPi, you might be wondering about development best practices — what is the best way to write code on the RPi?
Should you install a dedicated IDE, such as PyCharm, directly on the Pi itself and code there?
Should you use a lightweight code editor such as Sublime Text?
Or should you SSH/VNC into the RPi and edit the code that way?
You could potentially do all three of those, but my favorite is to use either PyCharm or Sublime Text on my laptop/desktop with an SFTP plugin:
Doing so enables me to code using my favorite IDE on my laptop/desktop.
Once I’m done editing a file, I save it, after which the file is automatically uploaded to the RPi.
It does take some additional time to configure your RPi and laptop/desktop in this manner, but once you do, it’s so worth it!
Now that your development environment is configured, you should verify that you can access your camera, whether that be a USB webcam or the Raspberry Pi camera module:
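If you want a quick sanity check before diving into those guides, the following sketch simply grabs a single frame and reports whether it was read successfully. It assumes a USB or built-in webcam at index 0:

```python
# Minimal camera sanity check, assuming a USB/built-in webcam at index 0
import cv2

cap = cv2.VideoCapture(0)        # change the index if you have multiple cameras
grabbed, frame = cap.read()      # grab a single frame from the sensor

if not grabbed or frame is None:
    print("Could not read a frame -- check your camera connection/index")
else:
    print("Frame grabbed with dimensions: {}".format(frame.shape))

cap.release()
```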
The Raspberry Pi is naturally suited for home security applications, so let’s learn how we can utilize motion detection to detect when there is an intruder in our home:
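To give you a rough idea of the approach before you read the full tutorial, here is a minimal sketch of frame-differencing motion detection. It assumes a webcam at index 0 and OpenCV 4’s two-value cv2.findContours return signature, and the thresholds are placeholder values you would tune for your own room and lighting:

```python
# Minimal sketch of frame-differencing motion detection
import cv2

cap = cv2.VideoCapture(0)
avg = None  # running "background" model

while True:
    grabbed, frame = cap.read()
    if not grabbed:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)

    # initialize the background model from the first frame
    if avg is None:
        avg = gray.copy().astype("float")
        continue

    # update the running average, then diff the current frame against it
    cv2.accumulateWeighted(gray, avg, 0.5)
    delta = cv2.absdiff(gray, cv2.convertScaleAbs(avg))
    thresh = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]
    thresh = cv2.dilate(thresh, None, iterations=2)

    # OpenCV 4.x returns (contours, hierarchy)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if any(cv2.contourArea(c) > 500 for c in contours):
        print("Motion detected!")

    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```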
If you want to use the GPIO to control additional hardware, specifically Hardware Attached on Top (HAT) boards, you should study how OpenCV and GPIO can be used together on the Raspberry Pi:
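As a taste of how OpenCV and the GPIO can work together, here is a minimal sketch that drives an LED wired to a hypothetical BCM pin 18 whenever a Haar cascade detects a face. It assumes the RPi.GPIO library is installed and that your OpenCV install ships the bundled Haar cascade files:

```python
# Minimal sketch: light an LED (hypothetical BCM pin 18) when a face is detected
import cv2
import RPi.GPIO as GPIO

LED_PIN = 18  # hypothetical wiring -- adjust to match your circuit
GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

# cv2.data.haarcascades is available in pip installs of OpenCV; adjust the
# path if you compiled OpenCV from source
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
try:
    while True:
        grabbed, frame = cap.read()
        if not grabbed:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        # drive the LED high when a face is present, low otherwise
        GPIO.output(LED_PIN, GPIO.HIGH if len(faces) > 0 else GPIO.LOW)
finally:
    cap.release()
    GPIO.cleanup()
```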
Facial applications, including face recognition, can be extremely tricky on the Raspberry Pi due to the limited computational horsepower.
Algorithms that worked well on our laptop/desktop may not translate well to the Raspberry Pi, so we need to take care to perform additional optimizations.
These tutorials will get you started applying facial applications on the RPi:
Deep Learning algorithms are notoriously computationally hungry, and given the resource constrained nature of the RPi, CPU and memory come at a premium.
To discover why Deep Learning algorithms are slow on the RPi, start by reading these tutorials:
Then, when you’re done, come back and learn how to implement a complete, end-to-end deep learning project on the RPi:
One of the benefits of using the Raspberry Pi is that it makes it so easy to work with additional hardware, especially for robotics applications.
In this tutorial you will learn how to apply face tracking using a pan/tilt servo:
In order to speed up Deep Learning model inference on the Raspberry Pi we can use a coprocessor.
Think of a coprocessor as a USB stick that contains a specialized chip used to make Deep Learning models run faster.
We plug the stick into our RPi, integrate with the coprocessor API, and then push all Deep Learning predictions to the USB stick.
One of the most popular Deep Learning coprocessors is Intel’s Movidius NCS.
Using the NCS we can obtain upwards of a 1,200% speedup in our algorithms!
To learn more about the NCS, and use it for your own embedded vision applications, read these guides:
- Getting started with the Intel Movidius Neural Compute Stick
- Real-time object detection on the Raspberry Pi with the Movidius NCS
- OpenVINO, OpenCV, and Movidius NCS on the Raspberry Pi
Additionally, my new book, Raspberry Pi for Computer Vision, includes detailed guides on how to:
- Train your own Deep Learning model on your own custom dataset
- Optimize the model using the OpenVINO Toolkit
- Deploy the optimized model to the RPi
- Enjoy faster inference on the Raspberry Pi!
Google’s Coral USB accelerator is a competitor to Intel’s Movidius NCS coprocessor.
One of the benefits of combining the Google Coral USB Accelerator with the RPi 4 is USB 3.0 support.
Using USB 3 we can obtain faster inference than the Movidius NCS.
The Google Coral USB Accelerator is also very easy to use — you can read more about it here:
- Getting started with Google Coral’s TPU USB Accelerator
- Object detection and image classification with Google Coral USB Accelerator
Where to Next?
Congrats on using the Raspberry Pi to apply Computer Vision algorithms!
If you would like to take the next step, I would suggest reading my new book, Raspberry Pi for Computer Vision.
That book will teach you how to use the RPi, Google Coral, Intel Movidius NCS, and NVIDIA Jetson Nano for embedded Computer Vision and Deep learning applications.
And just like all my tutorials, each chapter of the text includes well documented code and detailed walkthroughs, ensuring that you understand exactly what’s going on.
Medical Computer Vision
Computer Vision and Deep Learning algorithms have touched nearly every facet of Computer Science.
One area that CV and DL algorithms are making a massive impact on is the field of Medical Computer Vision.
Using Medical Computer Vision algorithms, we can now automatically analyze cell cultures, detect tumors, and even predict cancer before it even metastasizes!
Steps #2 and #3 of this section require that you have OpenCV configured and installed on your machine.
Make sure you follow Step #1 from the How Do I Get Started? section to install OpenCV.
Step #4 covers how to use Deep Learning for Medical Computer Vision.
You will need to have TensorFlow and Keras installed on your system for those guides.
You should follow Step #1 from the Deep Learning section to ensure TensorFlow and Keras are properly configured.
Our first Medical Computer Vision project uses only basic Computer Vision algorithms, thus demonstrating how even basic techniques can make a profound impact on the medical community:
Fun fact: I wrote the above tutorial in collaboration with PyImageSearch reader, Joao Paulo Folador, a PhD student from Brazil.
We then published a paper detailing the method in CLAIB 2019!
It’s just further proof that PyImageSearch tutorials can lead to publishable results!
Now that you have some experience, let’s move on to a slightly more advanced Medical Computer Vision project.
Here you will learn how to use Deep Learning to analyze root health of plants:
Our previous sections dealt with applying Deep Learning to a small medical image dataset.
But what about larger medical datasets?
Can we apply DL to those datasets as well?
You bet we can!
The following two guides will show you how to use Deep Learning to automatically classify malaria in blood cells and perform automatic breast cancer detection:
- Deep Learning and Medical Image Analysis with Keras
- Breast cancer classification with Keras and Deep Learning
Take your time working through those guides and make special note of how we compute the sensitivity and specificity of the model — two key metrics when working with medical imaging tasks that directly impact patients.
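If you want to see the computation itself, here is a minimal sketch of sensitivity and specificity derived from a confusion matrix. It assumes scikit-learn is installed, and the labels below are toy placeholders rather than real patient data:

```python
# Minimal sketch: sensitivity and specificity for a binary classifier
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 1, 0, 1, 0]   # 1 = disease present, 0 = healthy (toy data)
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / float(tp + fn)   # true positive rate: sick patients correctly flagged
specificity = tn / float(tn + fp)   # true negative rate: healthy patients correctly cleared

print("sensitivity: {:.4f}".format(sensitivity))
print("specificity: {:.4f}".format(specificity))
```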
Where to Next?
As I mention in my About page, Medical Computer Vision is a topic near and dear to my heart.
Previously, my company has consulted with the National Cancer Institute and National Institutes of Health to develop image processing and machine learning algorithms to automatically analyze breast histology images for cancer risk factors.
I’ve also developed methods to automatically recognize prescription pills in images, thereby reducing the number of injuries and deaths that happen each year due to the incorrect medication being taken.
I continue to write about Medical Computer Vision, so if you’re interested in the topic, be sure to keep an eye on the PyImageSearch blog.
Otherwise, you should take a look at my book, Deep Learning for Computer Vision with Python, which covers chapters on:
- Automatic cancer/skin lesion segmentation using Mask R-CNNs
- Prescription pill detection/localization using Mask R-CNNs
Working with Video
Most tutorials I have on the PyImageSearch blog involve working with images — but what if you wanted to work with videos instead?
If that’s you, make sure you pay attention to this section.
Prior to working with video (both on file and live video streams), you first need to install OpenCV on your system.
You should follow Step #1 of the How Do I Get Started? section to configure and install OpenCV on your machine.
Now that you have OpenCV installed, let’s learn how to access your webcam.
If you are using either a USB webcam or built-in webcam (such as the camera on your laptop), you can use OpenCV’s cv2.VideoCapture class.
The problem with this method is that it will block your main execution thread until the next frame is read from the camera sensor.
That can be a big problem as it can dramatically decrease the Frames Per Second (FPS) throughput of your system.
To resolve the issue, I have implemented a threaded VideoStream class that more efficiently reads frames from a camera:
I would also suggest reading the following tutorial which provides a direct comparison of the cv2.VideoCapture class to my VideoStream class:
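If you just want to see the basic usage pattern, here is a minimal sketch of both approaches. The threaded VideoStream class comes from my imutils package (pip install imutils); the camera index and warm-up time are assumptions you may need to adjust:

```python
# Minimal sketch: blocking cv2.VideoCapture vs. threaded VideoStream
import time
import cv2
from imutils.video import VideoStream

# Blocking approach: cv2.VideoCapture reads each frame on the main thread
cap = cv2.VideoCapture(0)
grabbed, frame = cap.read()   # blocks until the sensor returns a frame
cap.release()

# Threaded approach: frames are read in a background thread and the most
# recent one is always available via .read(); pass usePiCamera=True instead
# of src=0 if you are using the Raspberry Pi camera module
vs = VideoStream(src=0).start()
time.sleep(2.0)               # give the camera sensor a moment to warm up
frame = vs.read()
vs.stop()
```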
If you are using a Raspberry Pi camera module then you should follow this getting started guide to access the RPi camera:
- Accessing the Raspberry Pi Camera with OpenCV and Python
- Common errors using the Raspberry Pi camera module
Once you’ve confirmed you can access the RPi camera module you can use the VideoStream class which is compatible with both built-in/USB webcams and the RPi camera module:
Inevitably, there will be a time where OpenCV cannot access your camera and your script errors out, resulting in a “NoneType” error — this tutorial will help you diagnose and resolve such errors:
I’m a strong believer in learning by doing through practical, hands-on applications — and it’s hard to get more practical than face detection!
This tutorial will teach you how to apply face detection to video streams:
Building on face detection, let’s learn how to apply face applications to video streams as well:
Face detection is a special class of object detection.
Object detectors can be trained to recognize just about any type of object.
The OpenCV library enables us to use pre-trained object detectors to detect common objects we encounter in our daily lives (people, cars, trucks, dogs, cats, etc.).
The following tutorials will teach you how to apply object detection to video streams:
At this point you have a fair amount of experience applying Computer Vision and OpenCV to videos — let’s continue practicing using these tutorials:
Take your time working through them and take notes as you do so.
You should pay close attention to the tutorials that interest you and excite you the most.
Take note of them and then revisit your ideas after you finish these tutorials.
Ask yourself how you could extend them to work with your own projects.
What if you tried a different video source?
Or how might you integrate one of these video applications into a home security system?
Brainstorm these ideas and then try to implement them yourself — the best way to learn is to learn by doing!
So far we’ve looked at how to process video streams with OpenCV, provided that we have physical access to the camera.
But what if you wanted to access a network or IP camera — how might you do that?
Accessing RTSP streams with OpenCV is a big pain and not something I recommend doing.
Instead, you should use ImageZMQ to stream frames directly from a camera to a server for processing:
For this step I’ll be making the assumption that you’ve worked through the first half of the Deep Learning section.
Provided that you have, you may have noticed that applying image classification to video streams results in a sort of prediction flickering.
A “prediction flicker” occurs when an image classification model reports Label A for Frame N, but then reports Label B (i.e., a different class label) for Frame N + 1 (i.e., the next frame in the video stream), despite the frames having near-identical contents!
Prediction flickering is a natural phenomenon in video classification.
It happens due to noise in the input frames confusing the classification model.
One simple method to rectify prediction flickering is to apply prediction averaging:
Using prediction averaging you can overcome the prediction flickering problem.
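Here is a minimal sketch of the idea: keep the last N prediction vectors in a rolling queue, average them, and report the label of the averaged vector. The model, preprocess function, and labels list are hypothetical stand-ins for your own classifier:

```python
# Minimal sketch of prediction averaging over a rolling window of frames
from collections import deque
import numpy as np

N = 32                       # number of frames to average over
Q = deque(maxlen=N)          # rolling queue of prediction vectors

def classify_frame(model, frame, labels, preprocess):
    # `model` and `preprocess` are hypothetical stand-ins for your trained
    # classifier and its preprocessing step
    preds = model.predict(preprocess(frame))[0]   # per-class probabilities
    Q.append(preds)
    # average the predictions accumulated so far, then take the argmax
    avg = np.array(Q).mean(axis=0)
    return labels[int(np.argmax(avg))]
```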
Additionally, you may want to look into more advanced Deep Learning-based image/video classifiers, including Recurrent Neural Networks (RNNs) and Long Short-Term Memory Networks (LSTMs).
Where to Next?
If you’re brand new to the world of Computer Vision and Image Processing, I would recommend you read Practical Python and OpenCV.
That book will teach you the basics of Computer Vision through the OpenCV library — and best of all, you can complete that book in only a single weekend.
It’s by far the fastest way to get up and running with OpenCV.
And furthermore, the book includes complete code templates and examples for working with video files and live video streams with OpenCV.
For a more detailed review of the Computer Vision field, I would recommend the PyImageSearch Gurus course.
The PyImageSearch Gurus course is a comprehensive dive into the world of Computer Vision.
You can think of the Gurus course as similar to a college survey course on CV (but much more hands-on and practical).
Finally, you’ll note that we utilized a number of pre-trained Deep Learning image classifiers and object detectors in this section.
If you’re interested in training your own custom Deep Learning models you should look no further than Deep Learning for Computer Vision with Python.
You’ll learn how to create your own datasets, train models on top of your data, and then deploy the trained models to solve real-world projects.
It’s by far the most comprehensive, detailed, and complete Computer Vision and Deep Learning education you can find online today.
Image Search Engines
Content-based Image Retrieval (CBIR) encompasses all algorithms, techniques, and methods to build an image search engine.
An image search engine functions similarly to a text search engine (e.g., Google, Bing, etc.).
A user visits the search engine website, but instead of having a text query (e.g., “How do I learn OpenCV?”) they instead have an image as a query.
The goal of the image search engine is to accept the query image and find all visually similar images in a given dataset.
CBIR is the primary reason I started studying Computer Vision in the first place. I found the topic fascinating and am eager to share my knowledge with you.
Before you can perform CBIR or build your first image search engine, you first need to install OpenCV on your system.
Follow Step #1 of the How Do I Get Started? section above to configure OpenCV and install it on your machine.
The first image search engine you’ll build is also one of the first tutorials I wrote here on the PyImageSearch blog.
Using this tutorial you’ll learn how to search for visually similar images in a dataset using color histograms:
In Step #2 we built an image search engine that characterized the contents of an image based on color — but what if we wanted to quantify the image based on texture, shape, or some combination of all three?
How might we go about doing that?
In order to describe the contents of an image, we first need to understand the concept of image quantification:
How To Describe and Quantify an Image Using Feature Vectors
Image quantification is the process of:
- Accepting an input image
- Applying an algorithm to characterize the contents of the image based on shape, color, texture, etc.
- Returning a list of values representing the quantification of the image (we call this our feature vector)
The algorithm that performs the quantification is our image descriptor or feature descriptor.
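To make this concrete, here is a minimal sketch of an image descriptor that quantifies an image by its 3D HSV color histogram (the image path is a placeholder):

```python
# Minimal sketch of an image descriptor: a flattened 3D HSV color histogram
import cv2

def describe(image, bins=(8, 8, 8)):
    # convert to HSV and compute a 3D histogram over all three channels
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256])
    # normalize so images of different sizes are comparable, then flatten
    hist = cv2.normalize(hist, hist).flatten()
    return hist  # this is our feature vector

image = cv2.imread("example.jpg")   # hypothetical input image
features = describe(image)
print("feature vector dimensionality: {}".format(features.shape[0]))
```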
There are four key steps to building any image search engine:
- Building an Image Search Engine: Defining Your Image Descriptor (Step 1 of 4)
- Building an Image Search Engine: Indexing Your Dataset (Step 2 of 4)
- Building an Image Search Engine: Defining Your Similarity Metric (Step 3 of 4)
- Building an Image Search Engine: Searching and Ranking (Step 4 of 4)
As your CBIR system becomes more advanced you’ll start to include sub-steps between the main steps, but for now, understand that those four steps will be present in any image search engine you build.
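Here is a minimal sketch of how those four steps fit together, building on the describe() color histogram descriptor sketched above. The dataset and query paths are placeholders, and chi-squared distance is just one reasonable choice of similarity metric:

```python
# Minimal sketch of indexing, similarity, and ranking for a tiny CBIR system
# (assumes the describe() feature extractor from the previous sketch)
import glob
import cv2
import numpy as np

def chi2_distance(histA, histB, eps=1e-10):
    # smaller chi-squared distance == more similar histograms
    return 0.5 * np.sum(((histA - histB) ** 2) / (histA + histB + eps))

# Step 2: index the dataset (paths are placeholders)
index = {}
for path in glob.glob("dataset/*.jpg"):
    image = cv2.imread(path)
    index[path] = describe(image)

# Steps 3 + 4: describe the query, compute distances, and rank the results
query = describe(cv2.imread("query.jpg"))
results = sorted((chi2_distance(features, query), path)
                 for (path, features) in index.items())
for distance, path in results[:5]:
    print("{:.4f}\t{}".format(distance, path))
```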
Now that you understand the fundamentals of CBIR, let’s apply it to a mini-project:
In the above tutorial you’ll learn how to combine color with locality, leading to a more accurate image search engine.
So far we’ve learned how to build an image search engine to find visually similar images in a dataset.
But what if we wanted to find duplicate or near-duplicate images in a dataset?
Such an application is a subset of the CBIR field called image hashing:
Image hashing algorithms compute a single integer to quantify the contents of an image.
The goal of applying image hashing is to find all duplicate/near-duplicate images.
Practical use cases of image hashing include:
- De-duping a set of images you obtained by crawling the web. You may be using my Google Images scraper or my Bing API crawler to build a dataset of images to train your own custom Convolutional Neural Network. In that case, you’ll want to find all duplicate/near-duplicate images in your dataset (as these duplicates provide no additional value to the dataset itself).
- Building TinEye, a reverse image search engine. Reverse image search engines accept an input image, compute its hash, and tell you everywhere on the web that the input image appears.
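To make the idea concrete, here is a minimal sketch of a difference hash (dHash): shrink the image, compare adjacent pixel intensities, and pack the resulting bits into a single integer. The image paths are placeholders:

```python
# Minimal sketch of a difference hash (dHash) for near-duplicate detection
import cv2

def dhash(image, hash_size=8):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # resize to (hash_size + 1) x hash_size so each row yields hash_size gradients
    resized = cv2.resize(gray, (hash_size + 1, hash_size))
    diff = resized[:, 1:] > resized[:, :-1]
    # pack the boolean matrix into a single integer hash
    return sum(2 ** i for (i, v) in enumerate(diff.flatten()) if v)

hash_a = dhash(cv2.imread("image_a.jpg"))   # hypothetical image paths
hash_b = dhash(cv2.imread("image_b.jpg"))
# near-duplicate images have a small Hamming distance between their hashes
hamming = bin(hash_a ^ hash_b).count("1")
print("Hamming distance: {}".format(hamming))
```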
At this point you know how image hashing algorithms work — but how can we scale them like TinEye has?
The answer is to utilize specialized data structures, such as VP-Trees.
This tutorial will show you how to efficiently use VP-Trees to scale your image hashing search engine:
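As a rough sketch of what that looks like, the snippet below assumes the third-party vptree package (pip install vptree) plus the dhash() helper from the previous sketch; double-check that package’s documentation, as the method names here are from memory:

```python
# Minimal sketch, assuming the `vptree` package and the dhash() helper above.
# The image_paths list and query path are placeholders.
import cv2
import vptree

def hamming(a, b):
    # number of differing bits between two integer hashes
    return bin(int(a) ^ int(b)).count("1")

image_paths = ["dataset/a.jpg", "dataset/b.jpg"]       # placeholder dataset
hashes = [dhash(cv2.imread(p)) for p in image_paths]

# build the VP-Tree once, using Hamming distance as the metric
tree = vptree.VPTree(hashes, hamming)

# find every indexed hash within a Hamming distance of 10 of the query
query_hash = dhash(cv2.imread("query.jpg"))
matches = tree.get_all_in_range(query_hash, 10)
print("found {} candidate near-duplicates".format(len(matches)))
```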
Where to Next?
The techniques covered here will help you build your own basic image search engines.
The problem with these algorithms is they do not scale.
If you want to build more advanced image search engines that scale to millions of images you’ll want to look into:
- The Bag-of-Visual-Words model (BOVW)
- k-Means clustering and forming a “codebook”
- Vector quantization
- Tf-idf weighting
- Building an inverted index
The PyImageSearch Gurus course includes over 40 lessons on building image search engines, including how to scale your CBIR system to millions of images.
If you’re interested in learning more about the course, and extending your own CBIR knowledge, just use the link below:
Interviews, Case Studies, and Success Stories
You can learn Computer Vision, Deep Learning, and OpenCV — I am absolutely confident in that.
And if you’ve been following this guide, you’ve seen for yourself how far you’ve progressed.
However, we cannot spend all of our time neck deep in code and implementation — we need to come up for air, rest, and recharge our batteries.
When that happens, I suggest supplementing your technical education with a bit of light reading to open your mind to what the world of Computer Vision and Deep Learning has to offer.
After 5 years running the PyImageSearch blog I’ve seen countless readers dramatically change their lives, including changing their careers to CV/DL/AI, being awarded funding, winning Kaggle competitions, and even becoming CTOs of funded companies!
It’s truly a privilege and an honor to be taking this journey with you — thank you for letting me accompany you on it.
Below you’ll find some of my favorite interviews, case studies, and success stories.
Ever wonder what it’s like to work as a Computer Vision/Deep Learning researcher and developer?
You’re not alone.
Over the past 5 years running PyImageSearch, I have received 100s of emails and inquiries that are “outside” traditional CV, DL, and OpenCV questions.
They instead focus on something much more personal — my daily life.
To give you an idea of what it’s like to be me, I’m giving you a behind the scenes look at:
- How I spend my day.
- What it’s like balancing my role as a (1) computer vision researcher/developer and (2) a writer and owner of PyImageSearch.
- The habits and practices I’ve spent years perfecting to help me get shit done.
You can read the full post here: A day in the life of Adrian Rosebrock: computer vision researcher, developer, and entrepreneur.
Back in 2015 I was interviewed on Scott Hanselman’s legendary podcast, Hanselminutes:
Inside the podcast Scott and I discuss the types of problems Computer Vision can solve, from medical issues to gaming, retail to surveillance.
This podcast is an excellent listen if you’re brand new to the world of Computer Vision (or if you want something entertaining to listen to).
A more recent podcast (April 2019) comes from an interview on the Super Data Science Podcast, hosted by Kirill Eremenko:
In the podcast we discuss Computer Vision, Deep Learning, and what the future holds for the fields.
I highly recommend listening to this podcast, regardless if you are brand new to Computer Vision or already a seasoned expert — it’s both entertaining and educational at the same time.
Saideep Talari’s story holds a special place in my heart.
He started his career as a network tester, found his first job in Computer Vision after completing the PyImageSearch Gurus course, and then after completing Deep Learning for Computer Vision with Python is now the CTO of a tech company with over $2M in funding.
He’s also an incredibly nice person — he used his earnings to clear his family’s debts and start fresh.
Saideep is one of my favorite people I’ve ever had the privilege of knowing — there’s a lot you can learn from this interview:
Tuomo Hiippala was awarded a $30,500 research grant for his work in Computer Vision, Optical Character Recognition, and Document Understanding.
Find out how he landed the grant in the interview with him:
David Austin and his teammate, Weimin Wang, took home 1st place (and $25,000) in Kaggle’s Iceberg Classifier Challenge (Kaggle’s most competitive challenge ever).
David and Weimin used techniques from both the PyImageSearch Gurus course and Deep Learning for Computer Vision with Python to come up with their winning solution — read the full interview, including how they did it, here:
Kapil Varshney was recently hired at Esri R&D as a Data Scientist focusing on Computer Vision and Deep Learning.
Kapil’s story is really important as it shows that, no matter what your background is, you can be successful in computer vision and deep learning — you just need the right education first!
You see, Kapil is a long-time PyImageSearch reader who read Deep Learning for Computer Vision with Python (DL4CV) last year.
Soon after reading DL4CV, Kapil competed in a challenge sponsored by Esri to detect and localize objects in satellite images (including cars, swimming pools, etc.).
He finished in 3rd-place out of 53 competitors.
Esri was so impressed with Kapil’s work that after the contest they called him in for an interview.
Kapil nailed the interview and was hired full-time at Esri R&D.
His work on satellite image analysis at Esri now impacts millions of people across the world daily — and it’s truly a testament to his hard work.
You can read the full interview with Kapil here:
Where to Next?
I can’t promise you that you’ll win a Kaggle competition like David or become the CTO of a Computer Vision company like Saideep did, but I can guarantee you that the books and courses I offer here on PyImageSearch are the best resources available today to help you master computer vision and deep learning.
If you’d like to follow in their steps, you can see what books and courses I offer here:
- What books and courses do you offer?
- What do each of your books/courses cover? How are they similar and how are they different?
If you need help choosing a book/course, I suggest starting here:
And if you have any questions on my books/courses, feel free to reach out to me:
Need more Help?
I’m dedicated to helping you learn Computer Vision, Deep Learning, and OpenCV. If you need more help from me, here are a few options:
Practical Python and OpenCV
Gentle introduction to the world of computer vision and image processing through Python and the OpenCV library.
Deep Learning for Computer Vision with Python
My in-depth, deep dive into the world of Deep Learning and Computer Vision.
PyImageSearch Gurus Course
The most complete, comprehensive computer vision course online today.
Raspberry Pi for Computer Vision
Learn how to apply CV and DL to embedded devices, such as the RPi, Movidius NCS, Google Coral, and NVIDIA Jetson Nano.
Blog
I’ve authored over 350 free tutorials on the PyImageSearch.com blog.
It’s likely that I have already authored a tutorial to help you with your question or project.
Make sure you use the “Search” bar to search for keywords related to your topic. The search bar can be found at the top-right of the sidebar on every page.
FAQ
I’ve compiled answers to the most common questions I receive on my official FAQ page.
Please check the FAQ as it’s possible that your question has been addressed there.
Contact
If you are a paying customer of mine, feel free to use my contact form to ask me a question (but please, kindly limit it to one question per email).