Inside today’s tutorial, you will learn how to track multiple objects using OpenCV and Python.
This post was inspired by a question I received from PyImageSearch reader, Ariel.
Ariel writes:
Hi Adrian, thanks for last week’s blog post on object tracking.
I have a situation where I need to track multiple objects but the code last week didn’t seem to handle that use case. I’m a bit stumped on how I can modify the code to track multiple objects.
What should I do?
Great question, Ariel — as we’ll find out, tracking multiple objects is fairly similar to tracking single objects, with only one or two caveats that we need to keep in mind.
To learn how to track multiple objects with OpenCV, just keep reading!
Tracking multiple objects with OpenCV
In the remainder of this tutorial, you will utilize OpenCV and Python to track multiple objects in videos.
I will be assuming you are using OpenCV 3.2 (or greater) for this tutorial.
If you are using OpenCV 3.1 or below you should use my OpenCV install tutorials to install an updated version.
From there, let’s get started implementing OpenCV’s multi-object tracker.
Project structure
Today’s code + video files can be obtained via the “Downloads” section of this blog post. Once you’ve downloaded the zip to your computer, you can use the following 3 commands to inspect the project structure:
```shell
$ unzip multi-object-tracking.zip
$ cd multi-object-tracking
$ tree .
├── multi_object_tracking.py
└── videos
    ├── los_angeles.mp4
    ├── nascar.mp4
    ├── soccer_01.mp4
    └── soccer_02.mp4

1 directory, 5 files
```
The output of `tree` shows our project structure:

- We'll be discussing one Python script, `multi_object_tracking.py`.
- I've supplied 4 example videos for you to experiment with. Credits for these videos are given later in this blog post.
Implementing the OpenCV multi-object tracker
Let’s get started coding our multi-object tracker.
Create a new file named `multi_object_tracking.py` and insert the following code:
```python
# import the necessary packages
from imutils.video import VideoStream
import argparse
import imutils
import time
import cv2
```
To begin, we import our required packages. You’ll need OpenCV and imutils installed in your environment.
To install `imutils`, simply use pip:

```shell
$ pip install --upgrade imutils
```
From there we’ll parse command line arguments:
```python
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", type=str,
    help="path to input video file")
ap.add_argument("-t", "--tracker", type=str, default="kcf",
    help="OpenCV object tracker type")
args = vars(ap.parse_args())
```
Our two command line arguments consist of:

- `--video`: The path to our input video file.
- `--tracker`: The OpenCV object tracker to use. There are 7 trackers listed in the next code block to choose from; by default, `kcf` is used.
From there we’ll initialize our multi-object tracker:
```python
# initialize a dictionary that maps strings to their corresponding
# OpenCV object tracker implementations
OPENCV_OBJECT_TRACKERS = {
    "csrt": cv2.TrackerCSRT_create,
    "kcf": cv2.TrackerKCF_create,
    "boosting": cv2.TrackerBoosting_create,
    "mil": cv2.TrackerMIL_create,
    "tld": cv2.TrackerTLD_create,
    "medianflow": cv2.TrackerMedianFlow_create,
    "mosse": cv2.TrackerMOSSE_create
}

# initialize OpenCV's special multi-object tracker
trackers = cv2.MultiTracker_create()
```
Please refer to the previous post on OpenCV Object Trackers for the full explanation of the 7 available tracker algorithms defined on Lines 18-26.
I recommend the following three algorithms:
- KCF: Fast and accurate
- CSRT: More accurate than KCF but slower
- MOSSE: Extremely fast but not as accurate as either KCF or CSRT
On Line 29 we initialize the multi-object tracker via the `cv2.MultiTracker_create` method. The class allows us to:

- Add new object trackers to the `MultiTracker`
- Update all object trackers inside the `MultiTracker` with a single function call
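To make that add/update pattern concrete, here is a minimal pure-Python sketch that mimics the interface with a stand-in tracker. `DummyTracker` and `MiniMultiTracker` are toy illustrations written for this post, not OpenCV classes:

```python
class DummyTracker:
    """Toy stand-in: 'tracks' a box by shifting it one pixel right
    per update. A real OpenCV tracker does the hard work here."""
    def init(self, frame, box):
        self.box = box

    def update(self, frame):
        x, y, w, h = self.box
        self.box = (x + 1, y, w, h)
        return True, self.box

class MiniMultiTracker:
    """Mimics the cv2.MultiTracker add/update pattern."""
    def __init__(self):
        self.trackers = []

    def add(self, tracker, frame, box):
        tracker.init(frame, box)
        self.trackers.append(tracker)

    def update(self, frame):
        # one call updates every tracker, just like cv2.MultiTracker
        results = [t.update(frame) for t in self.trackers]
        success = all(ok for ok, _ in results)
        boxes = [b for _, b in results]
        return success, boxes

trackers = MiniMultiTracker()
trackers.add(DummyTracker(), frame=None, box=(10, 20, 30, 40))
trackers.add(DummyTracker(), frame=None, box=(50, 60, 30, 40))
success, boxes = trackers.update(frame=None)
print(success, boxes)  # True [(11, 20, 30, 40), (51, 60, 30, 40)]
```

The key point the sketch illustrates: each object gets its own tracker instance, and a single `update` call advances all of them.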
Let’s move on to initializing our video stream:
```python
# if a video path was not supplied, grab the reference to the web cam
if not args.get("video", False):
    print("[INFO] starting video stream...")
    vs = VideoStream(src=0).start()
    time.sleep(1.0)

# otherwise, grab a reference to the video file
else:
    vs = cv2.VideoCapture(args["video"])
```
Lines 32-35 handle creating a video stream object for a webcam. Otherwise, a video path must have been supplied, so we’ll create a video capture object which will read frames from a video file on disk (Lines 38 and 39).
It’s now time to loop over frames and start multi-object tracking!
```python
# loop over frames from the video stream
while True:
    # grab the current frame, then handle if we are using a
    # VideoStream or VideoCapture object
    frame = vs.read()
    frame = frame[1] if args.get("video", False) else frame

    # check to see if we have reached the end of the stream
    if frame is None:
        break

    # resize the frame (so we can process it faster)
    frame = imutils.resize(frame, width=600)
```
Lines 45-50 handle grabbing a `frame` from the stream. We'll break out of the loop if no more frames are available, which happens at the end of a video file or when there is a problem with the webcam connection.

Then we `resize` the frame to a known dimension on Line 53: a smaller video frame yields faster FPS since there is less data to process.
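As a quick sanity check on that resize step: an aspect-ratio-preserving resize only needs the target width, with the new height derived from the original ratio. A minimal sketch of that computation (my own helper written for illustration, not the `imutils` source):

```python
def resized_dims(h, w, width=600):
    # scale the height by the same ratio used to reach the target
    # width, so the frame's aspect ratio is preserved
    r = width / float(w)
    return (int(h * r), width)

# a 1280x720 frame resized to width=600 keeps its shape
print(resized_dims(720, 1280))  # (337, 600)
```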
From there, let's `update` our `trackers` and draw bounding boxes around the objects:
```python
    # grab the updated bounding box coordinates (if any) for each
    # object that is being tracked
    (success, boxes) = trackers.update(frame)

    # loop over the bounding boxes and draw them on the frame
    for box in boxes:
        (x, y, w, h) = [int(v) for v in box]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```
Here we are updating all single object trackers inside `trackers`, thereby obtaining multi-object tracking (Line 57).

For each tracked object there is an associated bounding box. The box is drawn on the frame via the `cv2.rectangle` drawing function on Lines 60-62.
Next, we'll display the `frame` as well as select our objects to track:
```python
    # show the output frame
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the 's' key is selected, we are going to "select" a bounding
    # box to track
    if key == ord("s"):
        # select the bounding box of the object we want to track (make
        # sure you press ENTER or SPACE after selecting the ROI)
        box = cv2.selectROI("Frame", frame, fromCenter=False,
            showCrosshair=True)

        # create a new object tracker for the bounding box and add it
        # to our multi-object tracker
        tracker = OPENCV_OBJECT_TRACKERS[args["tracker"]]()
        trackers.add(tracker, frame, box)
```
The `frame` is shown on the screen via Line 65, and a keypress is captured on Line 66.

If the key pressed is "s" (for "select"), we'll manually select the bounding box of an object to track using our mouse pointer via the `cv2.selectROI` function (Lines 73 and 74). If you're unhappy with your selection, press ESC to reset it; otherwise, hit SPACE or ENTER to begin tracking the object.
Once the selection has been made, we add the new `tracker` object to `trackers` (Lines 78 and 79).
Important: You'll need to press the "s" key and select each object you want to track individually.
This is of course just an example. If you were building a truly autonomous system, you would not select objects with your mouse. Instead, you would use an object detector (Haar Cascade, HOG + SVM, Faster R-CNN, MobileNet, YOLO, etc.). I’ll be demonstrating how to do this process starting next week, so stay tuned!
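For a rough idea of what a detector-seeded version might look like, here is a sketch of the loop structure. `detect_objects` and the `make_tracker` callable are hypothetical placeholders; a real pipeline would swap in an actual detector and OpenCV tracker instances:

```python
def detect_objects(frame):
    # stand-in detector; a real system would run Haar cascades,
    # HOG + SVM, YOLO, etc. and return (x, y, w, h) boxes
    return [(10, 20, 40, 40), (100, 120, 40, 40)]

def seed_trackers(frame, make_tracker):
    # one new tracker per detection, mirroring the manual
    # trackers.add(tracker, frame, box) calls in the script above
    return [make_tracker(frame, box) for box in detect_objects(frame)]

# with OpenCV, make_tracker would instantiate a tracker and register it
# with the MultiTracker; a tuple stands in so the sketch runs anywhere
seeded = seed_trackers(frame=None, make_tracker=lambda f, b: ("tracker", b))
print(len(seeded))  # 2
```

The structure is the same as the manual version; only the source of the bounding boxes changes.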
Let’s handle when the “q” key (“quit”) has been pressed (or if we’ve broken out of the loop due to reaching the end of our video file):
```python
    # if the `q` key was pressed, break from the loop
    elif key == ord("q"):
        break

# if we are using a webcam, release the pointer
if not args.get("video", False):
    vs.stop()

# otherwise, release the file pointer
else:
    vs.release()

# close all windows
cv2.destroyAllWindows()
```
When the “q” key is pressed, we break out of the loop and perform cleanup. Cleanup involves releasing pointers and closing GUI windows.
Multi-object tracking results
Head over to the "Downloads" section at the bottom of this post to grab the source code + video files.
From there, open up a terminal and execute the following command:
```shell
$ python multi_object_tracking.py --tracker csrt
```
Your webcam will be used by default since the `--video` switch was not supplied. You may also leave off the `--tracker` argument, in which case `kcf` will be used.
Would you like to use one of the four supplied video files or a video file of your own?
No problem. Just supply the `--video` command line argument along with a path to a video file. Provided OpenCV can decode the video file, you can begin tracking multiple objects:
```shell
$ python multi_object_tracking.py --video videos/soccer_01.mp4 --tracker csrt
```
You may also supply your desired tracking algorithm via the `--tracker` command line argument (as shown).
Notice how we are able to:
- Track multiple soccer players across the pitch
- Track multiple race cars in a race
- And track multiple vehicles as they are driving on a freeway
Be sure to give the code a try when you need to track multiple objects with OpenCV!
Problems and limitations
There are two limitations that we can run into when performing multiple object tracking with OpenCV.
As my results from the previous section demonstrated, the first (and biggest) issue we ran into was that the more trackers we created, the slower our pipeline ran.
Keep in mind that we need to instantiate a brand new OpenCV object tracker for each object we want to track — we cannot use the same object tracker instance to track multiple objects.
For example, suppose we have 10 objects in a video that we would like to track, implying that:

- We need to create 10 object tracker instances
- And therefore, we'll see the frames-per-second throughput of our pipeline decrease by roughly a factor of 10
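Some back-of-the-envelope arithmetic makes that cost concrete. The numbers below are illustrative assumptions, not benchmarks: if one tracker update costs about 8 ms per frame and fixed per-frame work (decode, resize, draw) costs about 5 ms, throughput falls off roughly linearly with the object count:

```python
def approx_fps(ms_per_tracker, n_trackers, overhead_ms=5.0):
    # total per-frame time = fixed overhead + one update per object
    total_ms = overhead_ms + ms_per_tracker * n_trackers
    return 1000.0 / total_ms

for n in (1, 5, 10):
    print(n, round(approx_fps(8.0, n), 1))
# 1 76.9
# 5 22.2
# 10 11.8
```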
Object trackers are not “magic” and computationally “free” to apply — each and every one of these object trackers requires computation to track an object, there is no way to get around that.
…or is there?
If you open up your system's activity monitor, you'll notice that only one processor core is being utilized when running today's script.
Is there a way to distribute each of the object trackers to a separate process, thereby allowing us to utilize all cores of the processor for faster object tracking?
The answer is yes, we can — and I’ll show you how to obtain faster multi-object tracking later in this series.
Video and Audio credits
To create the examples for this tutorial I needed to use clips from a number of different videos. A big thank you and credit to Ben Sound, Dash Cam Tours, NASCAR, and FIFATV.
Summary
In today’s tutorial, we learned how to perform multiple object tracking using OpenCV and Python.
To accomplish our multi-object tracking task, we leveraged OpenCV's `cv2.MultiTracker_create` function.
This method allows us to instantiate single object trackers (just like we did in last week’s blog post) and then add them to a class that updates the locations of objects for us.
Keep in mind, though, that the `cv2.MultiTracker` class is a convenience: while it allows us to easily add trackers and update object locations, it's not necessarily going to improve (or even maintain) the speed of our object tracking pipeline.
To obtain faster, more efficient object tracking we’ll need to leverage multiple processes and spread the computation burden across multiple cores of our processor — I’ll be showing you how to accomplish this task in a future post in this series.
To download the source code to this post, and be notified when the next object tracking tutorial is published, be sure to enter your email address in the form below!