Inside today’s tutorial, you will learn how to track multiple objects using OpenCV and Python.
This post was inspired by a question I received from PyImageSearch reader, Ariel.
Ariel writes:
Hi Adrian, thanks for last week’s blog post on object tracking.
I have a situation where I need to track multiple objects but the code last week didn’t seem to handle that use case. I’m a bit stumped on how I can modify the code to track multiple objects.
What should I do?
Great question, Ariel — as we’ll find out, tracking multiple objects is fairly similar to tracking single objects, with only one or two caveats that we need to keep in mind.
To learn how to track multiple objects with OpenCV, just keep reading!
Tracking multiple objects with OpenCV
In the remainder of this tutorial, you will utilize OpenCV and Python to track multiple objects in videos.
I will be assuming you are using OpenCV 3.2 (or greater) for this tutorial.
If you are using OpenCV 3.1 or below you should use my OpenCV install tutorials to install an updated version.
From there, let’s get started implementing OpenCV’s multi-object tracker.
Project structure
Today’s code + video files can be obtained via the “Downloads” section of this blog post. Once you’ve downloaded the zip to your computer, you can use the following 3 commands to inspect the project structure:
$ unzip multi-object-tracking.zip
$ cd multi-object-tracking
$ tree
.
├── multi_object_tracking.py
└── videos
    ├── los_angeles.mp4
    ├── nascar.mp4
    ├── soccer_01.mp4
    └── soccer_02.mp4

1 directory, 5 files
The output of tree shows our project structure.

- We'll be discussing one Python script, multi_object_tracking.py.
- I've supplied 4 example videos for you to experiment with. Credits for these videos are given later in this blog post.
Implementing the OpenCV multi-object tracker
Let’s get started coding our multi-object tracker.
Create a new file named multi_object_tracking.py and insert the following code:
# import the necessary packages
from imutils.video import VideoStream
import argparse
import imutils
import time
import cv2
To begin, we import our required packages. You’ll need OpenCV and imutils installed in your environment.
To install imutils, simply use pip:
$ pip install --upgrade imutils
From there we’ll parse command line arguments:
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", type=str,
	help="path to input video file")
ap.add_argument("-t", "--tracker", type=str, default="kcf",
	help="OpenCV object tracker type")
args = vars(ap.parse_args())
Our two command line arguments consist of:

- --video: The path to our input video file.
- --tracker: The OpenCV object tracker to use. There are 7 trackers listed in the next code block to choose from; kcf is used by default.
From there we’ll initialize our multi-object tracker:
# initialize a dictionary that maps strings to their corresponding
# OpenCV object tracker implementations
OPENCV_OBJECT_TRACKERS = {
	"csrt": cv2.TrackerCSRT_create,
	"kcf": cv2.TrackerKCF_create,
	"boosting": cv2.TrackerBoosting_create,
	"mil": cv2.TrackerMIL_create,
	"tld": cv2.TrackerTLD_create,
	"medianflow": cv2.TrackerMedianFlow_create,
	"mosse": cv2.TrackerMOSSE_create
}

# initialize OpenCV's special multi-object tracker
trackers = cv2.MultiTracker_create()
Please refer to the previous post on OpenCV Object Trackers for the full explanation of the 7 available tracker algorithms defined on Lines 18-26.
I recommend the following three algorithms:
- KCF: Fast and accurate
- CSRT: More accurate than KCF but slower
- MOSSE: Extremely fast but not as accurate as either KCF or CSRT
On Line 29 we initialize the multi-object tracker via the cv2.MultiTracker_create method. The class allows us to:

- Add new object trackers to the MultiTracker
- Update all object trackers inside the MultiTracker with a single function call
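Note that the constructor names above assume OpenCV 3.x with the contrib modules installed. In OpenCV 4.5.1 and later, these constructors (along with MultiTracker_create) were moved to the cv2.legacy namespace. A small, hedged compatibility sketch (the helper name is mine, not from the post):

```python
# Hedged compatibility sketch: in OpenCV 4.5.1+ the tracker constructors
# and MultiTracker_create live under cv2.legacy. This helper returns
# whichever namespace is available.
def resolve_tracker_namespace(cv2_module):
    # fall back to the module itself when there is no legacy submodule
    return getattr(cv2_module, "legacy", cv2_module)

# usage (assumes opencv-contrib-python is installed):
# import cv2
# ns = resolve_tracker_namespace(cv2)
# trackers = ns.MultiTracker_create()
# tracker = ns.TrackerKCF_create()
```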
Let’s move on to initializing our video stream:
# if a video path was not supplied, grab the reference to the web cam
if not args.get("video", False):
	print("[INFO] starting video stream...")
	vs = VideoStream(src=0).start()
	time.sleep(1.0)

# otherwise, grab a reference to the video file
else:
	vs = cv2.VideoCapture(args["video"])
Lines 32-35 handle creating a video stream object for a webcam. Otherwise, a video path must have been supplied, so we’ll create a video capture object which will read frames from a video file on disk (Lines 38 and 39).
It’s now time to loop over frames and start multi-object tracking!
# loop over frames from the video stream
while True:
	# grab the current frame, then handle if we are using a
	# VideoStream or VideoCapture object
	frame = vs.read()
	frame = frame[1] if args.get("video", False) else frame

	# check to see if we have reached the end of the stream
	if frame is None:
		break

	# resize the frame (so we can process it faster)
	frame = imutils.resize(frame, width=600)
Lines 45-50 handle grabbing a frame from the stream. We'll break out of the loop if no more frames are available; frames won't be available at the end of a video file or if there is a problem with the webcam connection.
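As for the ternary on the vs.read() line: imutils' VideoStream.read() returns the frame directly, while cv2.VideoCapture.read() returns a (grabbed, frame) tuple. A tiny illustrative helper (my naming, not the post's) makes that normalization explicit:

```python
# VideoStream.read() -> frame, but VideoCapture.read() -> (grabbed, frame).
# Normalize both cases down to just the frame.
def unwrap_frame(result, from_file):
    return result[1] if from_file else result

# unwrap_frame((True, "frame"), from_file=True)  -> "frame"
# unwrap_frame("frame", from_file=False)         -> "frame"
```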
Then we resize the frame to a known dimension on Line 53; a smaller video frame means less data to process, which yields a higher FPS.
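For context, imutils.resize preserves the aspect ratio: it computes the scaling ratio from the requested width and applies the same ratio to the height. A rough sketch of the computation (illustrative, not imutils' actual source):

```python
# Compute the output dimensions the way a width-only resize would:
# scale the height by the same ratio as the width so nothing is distorted.
def resized_dims(orig_w, orig_h, width=600):
    ratio = width / float(orig_w)
    return (width, int(orig_h * ratio))

# e.g. a 1920x1080 frame is resized to 600x337
```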
From there, let's update our trackers and draw bounding boxes around the objects:
	# grab the updated bounding box coordinates (if any) for each
	# object that is being tracked
	(success, boxes) = trackers.update(frame)

	# loop over the bounding boxes and draw them on the frame
	for box in boxes:
		(x, y, w, h) = [int(v) for v in box]
		cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
Here we update all of the single-object trackers inside trackers, thereby obtaining multi-object tracking (Line 57).

For each tracked object there is an associated bounding box. The box is drawn on the frame via the cv2.rectangle drawing function on Lines 60-62.
Next, we'll display the frame as well as select our objects to track:
	# show the output frame
	cv2.imshow("Frame", frame)
	key = cv2.waitKey(1) & 0xFF

	# if the 's' key is selected, we are going to "select" a bounding
	# box to track
	if key == ord("s"):
		# select the bounding box of the object we want to track (make
		# sure you press ENTER or SPACE after selecting the ROI)
		box = cv2.selectROI("Frame", frame, fromCenter=False,
			showCrosshair=True)

		# create a new object tracker for the bounding box and add it
		# to our multi-object tracker
		tracker = OPENCV_OBJECT_TRACKERS[args["tracker"]]()
		trackers.add(tracker, frame, box)
The frame is shown on the screen via Line 65. A keypress is also captured on Line 66.
If the key pressed is "s" (for "select"), we'll manually select the bounding box of the object to track using our mouse pointer via the cv2.selectROI function (Lines 73 and 74). If you're unhappy with your selection, you can press "ESCAPE" to reset it; otherwise, hit "SPACE" or "ENTER" to begin tracking the object.
Once the selection has been made, we add the tracker object to trackers (Lines 78 and 79).
Important: You'll need to press the "s" key and select each object you want to track individually.
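If selecting objects one at a time feels tedious, OpenCV also provides cv2.selectROIs (note the plural), which lets you draw several boxes in one session: confirm each box with ENTER/SPACE and press ESC when you are done. A hedged sketch of how it could replace the per-object selection above (the rows_to_boxes helper name is mine, not from the post):

```python
# cv2.selectROIs returns one (x, y, w, h) row per selection; convert
# each row to the plain int tuple that trackers.add expects.
def rows_to_boxes(rows):
    return [tuple(int(v) for v in row) for row in rows]

# usage sketch inside the 's' key handler (frame, trackers, args as above):
# rows = cv2.selectROIs("Frame", frame, fromCenter=False,
#     showCrosshair=True)
# for box in rows_to_boxes(rows):
#     tracker = OPENCV_OBJECT_TRACKERS[args["tracker"]]()
#     trackers.add(tracker, frame, box)
```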
This is of course just an example. If you were building a truly autonomous system, you would not select objects with your mouse. Instead, you would use an object detector (Haar Cascade, HOG + SVM, Faster R-CNN, MobileNet, YOLO, etc.). I’ll be demonstrating how to do this process starting next week, so stay tuned!
Let’s handle when the “q” key (“quit”) has been pressed (or if we’ve broken out of the loop due to reaching the end of our video file):
	# if the `q` key was pressed, break from the loop
	elif key == ord("q"):
		break

# if we are using a webcam, release the pointer
if not args.get("video", False):
	vs.stop()

# otherwise, release the file pointer
else:
	vs.release()

# close all windows
cv2.destroyAllWindows()
When the “q” key is pressed, we break out of the loop and perform cleanup. Cleanup involves releasing pointers and closing GUI windows.
Multi-object tracking results
Head over to the "Downloads" section at the bottom of this post to grab the source code + video files.
From there, open up a terminal and execute the following command:
$ python multi_object_tracking.py --tracker csrt
Your webcam will be used by default since the --video switch was not supplied. You may also leave off the --tracker argument, in which case kcf will be used.
Would you like to use one of the four supplied video files or a video file of your own?
No problem. Just supply the --video command line argument along with a path to a video file. Provided OpenCV can decode the video file, you can begin tracking multiple objects:
$ python multi_object_tracking.py --video videos/soccer_01.mp4 --tracker csrt
You may also supply your desired tracking algorithm via the --tracker command line argument (as shown).
Notice how we are able to:
- Track multiple soccer players across the pitch
- Track multiple race cars in a race
- And track multiple vehicles as they drive on a freeway
Be sure to give the code a try when you need to track multiple objects with OpenCV!
Problems and limitations
There are two limitations that we can run into when performing multiple object tracking with OpenCV.
As my results from the previous section demonstrated, the first (and biggest) issue we ran into was that the more trackers we created, the slower our pipeline ran.
Keep in mind that we need to instantiate a brand new OpenCV object tracker for each object we want to track — we cannot use the same object tracker instance to track multiple objects.
For example, suppose we have 10 objects in a video that we would like to track, implying that:
- We need to create 10 object tracker instances
- And therefore, we'll see the frames-per-second throughput of our pipeline decrease by roughly a factor of 10.
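The back-of-the-envelope arithmetic behind that factor-of-10 estimate can be sketched as follows (illustrative numbers only, not measurements):

```python
# If one tracker costs t seconds per frame, n trackers cost roughly
# n * t per frame, so throughput falls about n-fold (this rough model
# assumes tracking dominates the pipeline).
def approx_fps(seconds_per_tracker, num_trackers):
    return 1.0 / (seconds_per_tracker * num_trackers)

# a 5 ms-per-frame tracker: ~200 FPS with 1 object, ~20 FPS with 10
```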
Object trackers are not "magic" and are not computationally "free" to apply. Each and every one of these object trackers requires computation to track an object; there is no way to get around that.
…or is there?
If you open up your system's activity monitor while running today's script, you'll notice that only one processor core is being utilized.
Is there a way to distribute each of the object trackers to a separate process, thereby allowing us to utilize all cores of the processor for faster object tracking?
The answer is yes, we can — and I’ll show you how to obtain faster multi-object tracking later in this series.
Video and Audio credits
To create the examples for this tutorial I needed to use clips from a number of different videos. A big thank you and credit to Ben Sound, Dash Cam Tours, NASCAR, and FIFATV.
Summary
In today’s tutorial, we learned how to perform multiple object tracking using OpenCV and Python.
To accomplish our multi-object tracking task, we leveraged OpenCV's cv2.MultiTracker_create function.
This method allows us to instantiate single object trackers (just like we did in last week’s blog post) and then add them to a class that updates the locations of objects for us.
Keep in mind, though, that the cv2.MultiTracker class is a convenience wrapper. While it allows us to easily add object trackers and update object locations, it's not necessarily going to improve (or even maintain) the speed of our object tracking pipeline.
To obtain faster, more efficient object tracking we’ll need to leverage multiple processes and spread the computation burden across multiple cores of our processor — I’ll be showing you how to accomplish this task in a future post in this series.
To download the source code to this post, and be notified when the next object tracking tutorial is published, be sure to enter your email address in the form below!
Lu Mao
Thanks Adrian!
Adrian Rosebrock
I’m glad you liked it, Lu Mao! 🙂
Cagdas
Hi,
Thanks for the post, firstly. I'm currently using the KCF tracker (reason explained below) to track people (detection made by the TensorFlow Object Detection API with a model).
Since it's more accurate, I actually want to use CSRT. A low FPS rate is not a problem for me.
However, in my tests, CSRT never handles the failure case when the target is lost, while KCF does this well (that's why I'm currently using it).
Do you know any trick to improve CSRT (or any tracker that doesn't report failure well) on failure reporting?
Adrian Rosebrock
The term "accurate" here is really based on the context of your object tracking application. CSRT does tend to perform better than KCF in most general applications; however, there are situations when another tracker could perform better. If KCF is getting you better accuracy for your application then I would suggest sticking with KCF. Unfortunately without knowing more about your specific project it's pretty challenging to provide any other suggestions. I hope that helps point you in the right direction though!
Tom
Hi Cagdas,
If your objective is to maintain object IDs, I suggest you try tracking-by-detecting. I'm using the SORT algorithm to do it. You do detection on every frame (or every nth frame, as long as there's a good overlap between an object's location in consecutive nth frames), pass your detected bounding boxes to SORT, which returns 'adjusted' bounding boxes and their IDs. You can tune the level of 'adjustment' SORT does by fiddling with its parameters.
I haven’t found another satisfactory enough way to do it, but I’m hoping for Adrian to show us something better.
Best,
Tom
Adrian Rosebrock
It’s also important to note that SORT is using Kalman filters as well. It’s more than just running detection on every frame.
Austin
I think I read in the deepsort paper that the velocity components of the filters played no meaningful role in tracking performance – in your experience, what are the most effective techniques for multi-object tracking with regard only to accuracy (ignoring speed)? Do you use deep-sort or something else? Great article, thank you!
Alex
Thanks a lot Adrian! Next week be a good boy and come faster ASAP , please 😉
Alex
I apologize for the ridiculous misunderstanding; I was in a hurry and did not reread. I meant that I wish next week would come as soon as possible.
ata
hi
i like your website and info
Adrian Rosebrock
Thank you, I’m glad you liked the post 🙂
Izak
Hello, thanks for simplifying the process of learning OpenCV.
I wanted to know how I can detect tracking failure for a single object in the multi-tracker, so that I can remove it from the list of objects being tracked and perhaps re-initiate detection to add new objects?
Adrian Rosebrock
Unfortunately, I believe this is a bug in the current “cv2.MultiTracker” functionality. The “success” boolean is always “True”, not an array of boolean values, one for each tracker.
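A hedged workaround sketch (my code, not from the post): skip cv2.MultiTracker entirely and manage a plain list of single-object trackers yourself, since each individual tracker's update() does return a usable success flag:

```python
# Update every tracker yourself so you get a per-object success flag;
# trackers that report failure are dropped from the list.
def update_all(trackers, frame):
    # trackers: list of objects whose .update(frame) returns (ok, box)
    alive, boxes = [], []
    for t in trackers:
        ok, box = t.update(frame)
        if ok:
            alive.append(t)
            boxes.append(box)
    return alive, boxes

# usage sketch: trackers, boxes = update_all(trackers, frame)
```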
Tehseen Akhtar
@Izak
well, I have some logic to share. I implemented failure detection by tracking an object for some number of frames, say 30, in the forward direction, and then from the 30th frame tracking all the objects back to the first frame. Good tracks will come back to their original positions, and failures will not come back to their start points, so we can remove them from the list and keep only the remaining ones. Hope this helps. I would like Adrian to include this in the next release as well.
Regards.
Marco
hello Adrian, congratulations for your lessons, which are more and more beautiful. For more than a week I have been trying to run my Raspberry with Python + OpenCV, unfortunately without success (I also congratulate all those who have succeeded!). Unfortunately, launching any Python application leads to various errors; in this case, this is what appears when launching multi_object_tracking:
$ python multi_object_tracking.py --tracker csrt
Traceback (most recent call last):
  File "multi_object_tracking.py", line 22, in <module>
    "csrt": cv2.TrackerCSRT_create,
AttributeError: 'module' object has no attribute 'TrackerCSRT_create'
do you think you can help me solve the problem?
Thanks
Adrian Rosebrock
In order to utilize the CSRT tracker you need OpenCV 3.4 or greater. My guess is that you are not using OpenCV 3.4. If you comment out Line 19 the code should work just fine for you.
Phil
Is this solution suitable for Python / OpenCV on RPi? (Assuming we omit next week’s steps of object detection with a classifier).
Adrian Rosebrock
If you’re using a Raspberry Pi I would suggest you:
1. Use the MOSSE tracker as its the fastest
2. Use a faster object detector. The deep learning SSD used here is too slow on the Pi (~1 FPS). Haar cascades, while less accurate, will be the fastest.
lionel
Hi Marco,
Same as you on my Debian Stretch (it even seems to be a 2.4.9.1). So I will have to build an OpenCV 3.4 for it 🙁
Adrian Rosebrock
If you follow one of my tutorials on installing OpenCV it should be a breeze 🙂
Dheeraj
`pip install opencv-contrib-python`
will solve the problem
Thanks
vinod
Thanks a ton <3
Rohit Thakur
Hello Adrian, Wonderful post. I am eagerly waiting for the post showing object detection and tracking simultaneously.
Adrian Rosebrock
I’ll have such a post this coming Monday, Rohit 🙂
Trinity
Hi Adrian,
Thanks for the post.
When will you post about object detection and tracking simultaneously?!
Adrian Rosebrock
I've already posted about it. Make sure you join the PyImageSearch newsletter so you're notified when new blog posts are published 🙂
Luke
Let me guess… Python threading? https://docs.python.org/2/library/threading.html
Adrian Rosebrock
I'm not sure what you mean by "guessing"; the post covers the exact method. OpenCV provides the cv2.MultiTracker function as a convenience method to accept multiple object trackers. It does not natively cover creating multiple processes to distribute the trackers to multiple cores (I will be covering that in a future blog post).
Secondly, you wouldn't use threading for computationally heavy tasks; you would use multiprocessing there.
Marco
Hi Adrian, I actually use OpenCV 3.3, commenting on line 19 “csrt”: cv2.TrackerCSRT_create “as you suggest, I get another error:
"mosse": cv2.TrackerMOSSE_create
Do you have any other suggestions to give me?
Thank you
Adrian Rosebrock
It’s the same solution, Marco. Comment out the MOSSE line.
wildan
can i apply this tutorial for real-time ? and how to save object that have been bounded with this square bounding ?
i’m sorry my english is so bad ?
Adrian Rosebrock
Yes, the code does work in real-time. The code also saves an output video of the results. Be sure to give the code a look, it shows you how to accomplish both of your goals.
Mark C
Loved the article, the object tracked correctly.
Adrian Rosebrock
Thank you for the kind words Mark, I’m glad the post worked for you 🙂
Gal
Awesome – works pretty well with all trackers.
I get “AttributeError: module ‘cv2.cv2’ has no attribute ‘TrackerCSRT_create'” when I try to run with –tracker csrt.
I’m working inside a virtualenv, and when I “python” “import cv2” and “cv2.__version__” I get “3.4.0” so I’m pretty sure I’m using OpenCV 3.4.
Your solution of commenting out the “csrt” line works and the results are really impressive. Thanks for another awesome tutorial!
Adrian Rosebrock
You are using OpenCV 3.4, but you don’t have the opencv_contrib module installed. Take a look at the other comments on this post for the solution.
Zoe
Hi Adrian, as a postgraduate student, I have studied tracking for just a few months. Firstly, thanks a lot for your sharing, and I would appreciate it if you could tell me something about data association in each tracker to realize ID matching.
Adrian Rosebrock
You can combine the methods from this post and my previous post to obtain ID matching. An example of such a combined solution can be found in next week’s blog post on people counting.
Marco
Hi Adrian, I have to correct myself: removing the lines you suggested works, only the video is very slow and therefore so is the tracking of the object. Is there a way to speed it up? Do I have to go to OpenCV 3.4, and can you tell me the shortest way to do it? If I do, can I keep Python 2.7 or do I have to go to version 3 too?
Thank you
Adrian Rosebrock
Hey Marco — how many objects are you tracking? Are you using one of my example videos or your own? And additionally, which tracker are you using?
asıf
hi, consider the scene that has one moving object but camera is non stationary. In that case how can I do detection and tracking automatically without selecting ROI?
Adrian Rosebrock
This method will still work even if the camera is non-stationary and is moving around. Next week’s blog post will show you how to (1) automatically detect the object and (2) track the object as it moves around. Make sure you join the PyImageSearch newsletter to be notified when the post goes live!
marco
hi Adrian, I use "tracking multiple objects" but I select only one object as it is very slow; the images come from my USB web cam
Adrian Rosebrock
Hey Marco, can you clarify what “slow” in this context means? What tracker are you using? How many objects are you tracking? Are you using the input videos I supplied or your own custom videos?
Marco
Adrian, let me summarize what I do: on my Raspberry I have a USB web cam and the Pi camera. When I start "$ python multi_object_tracking.py" a video window opens, and the images I see come from the USB web cam. The video is not fluid: if, for example, my face is framed and I move to one side, the video image takes a few seconds to catch up, and the same thing happens with the tracking. I have to move very slowly so that the tracking follows me. I hope I was clear.
Thanks for your patience
Adrian Rosebrock
If you are using the Raspberry Pi you should be using the MOSSE tracker. The KCF and CSRT trackers are too computationally intensive for the Pi.
Marco
Hi Adrian, you advised me "It's the same solution, comment out the MOSSE line." to solve my problem:
"mosse": cv2.TrackerMOSSE_create
AttributeError: 'module' object has no attribute 'TrackerMOSSE_create'
I would like to understand why "MOSSE" does not work…
Adrian Rosebrock
Which version of OpenCV are you using?
Apoorva Vinod
Hey Adrian,
Just wanted to let you know that I worked on something similar a few months ago, Multiple object tracking with detection as well. I didn’t want to manually select bounding boxes so I passed the frames through an object detection model. I have YOLOv2 and MobilenetSSD for the detection part. After getting the bounding boxes from the detection I pass them into the trackers. Btw, IMO the multitracker is poorly implemented in python opencv since it doesn’t handle tracking failure well. So I opted to maintain my own dictionary of trackers and added/deleted trackers from them as and when objects were newly detected or failed to track. I experimented with the following trackers : BOOSTING, MIL, KCF, TLD, MEDIANFLOW and GOTURN. I found KCF to be decent in most use-cases like dashcam videos and such. I will try out CSRT and MOSSE soon. Let me know what you guys think, your input is highly appreciated! 🙂
Adrian Rosebrock
Thanks for sharing, Apoorva! If you would like to compare your implementation to mine, make sure you take a look at this post where I demonstrated how to combine object trackers with object detection.
Marco
OpenCV 3.3.0 and Python 2.7.13
Adrian Rosebrock
You need OpenCV 3.4 or greater for the CSRT tracker. You can either comment out Line 19 or install OpenCV 3.4 on your system.
Prasanna
Hi Marco, you need to install the opencv-contrib-python package instead of opencv-python, as the former contains all OpenCV modules while the latter is only a subset containing just the main modules.
The Tracker modules were moved to the contrib package and you need to install it separately.
Please use this below link for installation reference:
https://pypi.org/project/opencv-contrib-python/
Below link gives the actual difference in the packages:
https://docs.opencv.org/master/
Cheers!
Lionel
Hi Adrian,
Something surprises me a little bit. I can select several players. Good, so I selected 3: 2 that I know will always stay on the screen and another one who will disappear. The code starts to track them correctly, but when the 3rd one disappears I was thinking/hoping the code would stop tracking it. Instead, it just decides to track another player who has no relation to the 3rd one. Why? How can I avoid that?
Lionel
Adrian Rosebrock
Your question is the crux of all object tracking research — how to determine if an object is lost/out of view. Object tracking is an open area of research and far from solved. I would suggest reading the first post in the series for more context, but keep in mind object tracking is far from solved.
Markus
dear Adrian
i want to ask you that how to save the video of result to display to others.
Markus
Adrian Rosebrock
You can use the cv2.VideoWriter function. An example of which can be found here.
Drew
Hey Adrian,
Great writeup! Only question I had was is there any way to remove one / all of the trackers or ROI’s?
Adrian Rosebrock
Unfortunately no, not really. The cv2.MultiTracker is really just a convenience function. If you want access to the success booleans and be able to manually update or remove trackers, you'll need to manually manage each individual object tracker.
Drew
So I couldnt do anything with the success Boolean in this setup?
(success, boxes) = trackers.update(frame)
I’d be happy if I could just reset everything back to default like no ROI’s were selected and the camera had just come online.
Adrian Rosebrock
Unfortunately, no, the "success" boolean (at the time being) is not helpful for cv2.MultiTracker.
If you want to reset everything back to the default, just reinitialize the object:
trackers = cv2.MultiTracker_create()
lxc
Hello, great God.
Why do I choose only one area after I press “s”? When choosing second, the first one will disappear.
Adrian Rosebrock
The first object should still be tracked. If it “disappears” it simply means that the object tracker you used won’t be able to track the object.
Lukas
Hello Adrian,
first of all, thank you for providing these awesome resources on OpenCV and computer vision on your blog. I am a huge fan of your articles and they have already helped me out very often.
I have a question regarding the multiprocessing you were talking about. Since I am implementing my own detection and tracking framework for multiple targets, I want to run the trackers' update() method in a parallel pool. I am creating and initialising the trackers (CSRT) every 15 frames and want to pass them to a parallel pool via map() for calling their update methods. In theory, this should work, but the OpenCV trackers are not picklable, which is why they can't be sent to the pool workers.
This is why I was wondering how you have planned to do the multiprocessing implementation of your above code.
Regards,
Lukas
Adrian Rosebrock
Hey Lukas — you're absolutely right, OpenCV tracker objects are not picklable. I'm actually writing a blog post that discusses this very topic, including how to distribute the object trackers across multiple processes. Stay tuned!
Lukas
Hey Adrian,
thanks for your answer! That is great to hear and I am really looking forward to see your solution, since I broke my head about this problem and tried a lot of approaches, which became very complex and did not really work well.
Eugene
Hey.
I’d like to find out whether you wrote a post about the problem discussed above?
Adrian Rosebrock
Yes. You can find it here.
Mohammad
Hello Adrian,
Thank you for sharing your knowledge.
Is there any implementation of MHT(Multiple Hypothesis Tracking) for python? I have heard that’s much faster than opencv implementations of tracking like “CSRT” or so.
Nurettin
Hello Adrian, how can I implement HOG feature to KCF tracker?
Adrian Rosebrock
You mean detect an object via HOG + Linear SVM and then track it?
Nurettin
Yes, couldn’t do that.
By the way, I want to detect every person who comes home and capture their face one by one. To detect people, which implementation do you suggest?
Adrian Rosebrock
I would suggest using a deep learning-based object detector. The one I just linked you to includes a “person” class as well.
hayu
Hello Adrian, I have a question: how can I measure the diameter of the detected objects?
Adrian Rosebrock
I assume you want the diameter in actual measurable units (i.e., millimeters, inches, etc.)? If so, just follow this guide on computing object size.
Sean Ng
You tutorials are amazing!
The KCF tracker serves the accuracy needed for my assignment, but when there are 3 or more people in the room walking around, the FPS drops below 8. I'm looking forward to your next tutorial on using multiple processes to run the trackers.
Thank you so much for these tutorials!!
Adrian Rosebrock
Thanks Sean! The multiprocessing tutorial for object tracking will publish the end of this month (October). Keep an eye on the PyImageSearch blog for it!
Waqar Nawab
Hy Adrian, Can you please suggest me on how can I replace custom image over tracked object? Any ideas
Thanks!!!
Adrian Rosebrock
There are a few ways to do this but it really depends on how “realistic” you want the effect to look, and therefore, how complicated the algorithm will be. Could you share some more details on what you’re trying to accomplish?
Pankaj
Hi Adrian,
Thanks for the awesome tutorial.
After I run the code using the command line argument: python multi_object_tracking.py --video videos/soccer_01.mp4 --tracker csrt
The video appears for about 3-4 seconds and disappears. Also, I can't draw a bounding box on a person using the mouse. Can you please guide me on how to resolve this issue?
Thanks in advance.
Regards,
Pankaj
Adrian Rosebrock
Hey there, Pankaj, what version of OpenCV and Python are you using, and which OS? I haven't run into that specific problem before.
Lucas
Hi Adrian,
Thanks for your tutorial. I have the same situation: the video plays as if in fast-forward mode. When I play the video using the VLC player, its speed is normal. And when I run it in real-time, it also appears normal.
My system:
Ubuntu 18.04
i7 8700 / 8G RAM / 240SSD / GT 1050Ti w/4G RAM
Thanks a lot!
Regards,
Lucas
Adrian Rosebrock
Are the objects in your video successfully tracked?
Keep in mind that OpenCV doesn't care what the FPS of the video is. The goal of OpenCV is to process video frames as fast as it possibly can. The faster OpenCV can process frames, the faster it can process your video. That's a feature, not a bug.
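If you do want playback at the source speed, one hedged workaround (not from the original script) is to compute the per-frame delay yourself from the video's FPS and pass it to cv2.waitKey. A minimal sketch, with the OpenCV calls shown as comments:

```python
# Hypothetical sketch: throttle playback to the source video's FPS.
# OpenCV reads frames as fast as it can, so to watch at normal speed
# we can compute the per-frame delay ourselves.

def frame_delay_ms(fps, min_delay=1):
    """Return the cv2.waitKey() delay (in ms) for a given video FPS."""
    if fps is None or fps <= 0:
        return min_delay  # unknown FPS: don't block
    return max(min_delay, int(1000.0 / fps))

# Usage with OpenCV (assumes `vs` is a cv2.VideoCapture):
#   fps = vs.get(cv2.CAP_PROP_FPS)
#   delay = frame_delay_ms(fps)
#   ...then inside the frame loop:
#   key = cv2.waitKey(delay) & 0xFF
```

The delay is approximate since it ignores per-frame processing time, but it keeps playback close to real time.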
Ashkan
Hi Adrian,
By any chance do you have any tutorials or projects on stereo calibration and 3D tracking?
Best,
Ashkan.
Adrian Rosebrock
Sorry, I do not.
Sarwat
Thanks for your tutorial
How can I detect multiple objects in a video stream without selecting them with the mouse? I want the objects to be defined automatically so the script searches for them.
For example, in a car race, the script would detect the cars automatically and track them.
Adrian Rosebrock
Take a look at this tutorial where I provide an example of what you’re looking for.
Fanuel
Hi Adrian, I was wondering if I could track each object individually and store its real-world position or coordinates in a txt or csv file every half a second…
Kirill
Hi Adrian, when will the article be published on tracking objects using several processes and distributing the computational load across multiple cores of our processor?
Adrian Rosebrock
It’s already live.
Mark
Adrian Hi,
Can this code be run on an RPi 3 as-is?
Or are there problems that should be accounted for?
Adrian Rosebrock
Tracking multiple objects on a Raspberry Pi using these methods is going to be far too slow. You cannot easily use a Raspberry Pi with these tracking algorithms.
Anesh Muthiah
I loved your tutorial. Can you tell me how to remove the history of bounding boxes in the multi-object tracker?
Adrian Rosebrock
You need to re-instantiate the object, unfortunately.
Elena
Hello Adrian,
Thank you for your tutorial.
I want to add tracking to YOLOv3. I’m a bit puzzled on how I can modify the code to track multiple objects with YOLOv3. I want to include the class of object with its confidence level while tracking the object. What should I do?
Adrian Rosebrock
See this tutorial where I cover the exact answer to your question (only using an SSD).
Miranda
Hi Adrian, thank you for the great tutorial! In this code, we need to press the ‘s’ key each time we want to select a bounding box, so we initialize trackers in different frames. Is it possible to select multiple bounding boxes in the same frame instead? Thanks!
Miranda
I figured out how to do this. The user can substitute cv2.selectROI with cv2.selectROIs and be able to choose multiple bounding boxes at once.
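To flesh out Miranda's tip: cv2.selectROIs returns an N x 4 array of boxes (draw each box, press ENTER/SPACE to confirm, ESC when done), which can then seed one tracker per box. A hedged sketch, with the GUI-dependent OpenCV calls left as comments since they assume an OpenCV build with GUI support:

```python
# Sketch of Miranda's cv2.selectROIs tip. The helper below converts the
# N x 4 array that cv2.selectROIs returns into plain (x, y, w, h) tuples.

def boxes_to_tuples(boxes):
    """Convert an iterable of 4-element boxes to (x, y, w, h) int tuples."""
    return [tuple(int(v) for v in box) for box in boxes]

# Inside the 's'-key handler, replacing the single cv2.selectROI call
# (variable names follow the tutorial's script and are assumptions here):
#   boxes = cv2.selectROIs("Frame", frame, fromCenter=False,
#       showCrosshair=True)
#   for box in boxes_to_tuples(boxes):
#       tracker = OPENCV_OBJECT_TRACKERS[args["tracker"]]()
#       trackers.add(tracker, frame, box)
```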
Adrian Rosebrock
Thank you for sharing, Miranda! That is helpful to know 🙂
Kedar
Sir Thank You So Much For Your Best Help..!! 🙂
My question is: can we select the ROI automatically, according to the shape or color of a particular object like a ball?
Adrian Rosebrock
Yes, absolutely. I cover that very question in this tutorial.
mido
I am a beginner in Python and I can't figure out how to supply the video path. How should I pass it to ap.add_argument("-v", "--video", type=str, ...)?
Adrian Rosebrock
It’s okay if you are a beginner in Python but you should first read this tutorial on argparse and command line arguments.
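For reference, a minimal sketch of the kind of argparse interface the tutorial's script uses (pure stdlib; the exact flags and defaults here are an approximation, not the script's verbatim code):

```python
# Minimal sketch of the script's command line interface. The --video and
# --tracker flags mirror the ones used in multi_object_tracking.py.
import argparse

def build_parser():
    ap = argparse.ArgumentParser()
    ap.add_argument("-v", "--video", type=str,
        help="path to input video file")
    ap.add_argument("-t", "--tracker", type=str, default="kcf",
        help="OpenCV object tracker type")
    return ap

# At the shell you would run, e.g.:
#   python multi_object_tracking.py --video videos/soccer_01.mp4 --tracker csrt
# Here we simulate that by passing the argument list explicitly:
args = vars(build_parser().parse_args(["--video", "videos/soccer_01.mp4"]))
```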
Aditi
Hi
Can you please tell me how to save the video after tracking in this code?
Adrian Rosebrock
Sure, all you need is the “cv2.VideoWriter” function. See this tutorial for a good example.
Abhinna Pradhan
Hi Adrian, after running the code I am getting output without any tracking, i.e., the same as the input video. Please tell me what I am doing wrong.
Azad
Hi Adrian, I went through your code and found it cool. But I want it to detect any object automatically, without clicking. Please help if you can.
Adrian Rosebrock
See this tutorial where I do exactly that.
khiro
Hello Adrian, thanks.
How can I create/destroy, for example, 10 trackers?
If I track an object all the way to its target, how can I destroy that tracker and reuse it later for a new object?
Adrian Rosebrock
You can use the “del” Python statement to delete the object and have Python’s garbage collector reclaim resources.
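One way to organize that pattern (a hypothetical sketch, not code from the tutorial; the Tracker class below is a stand-in for an OpenCV tracker such as cv2.TrackerKCF_create()) is to keep the trackers in a dict keyed by ID, so individual trackers can be deleted and replaced:

```python
# Manage a pool of trackers in a dict so each one can be destroyed
# individually with `del` and replaced with a fresh instance.

class Tracker:
    """Placeholder standing in for an OpenCV tracker object."""
    def __init__(self, target):
        self.target = target

# Create 10 trackers, one per object.
trackers = {i: Tracker("object-%d" % i) for i in range(10)}

# Tracker 3 reached its target: delete it so Python's garbage
# collector can reclaim its resources...
del trackers[3]

# ...then create a brand-new tracker for a new object in the same slot.
trackers[3] = Tracker("new-object")
```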
Andy Woods
Just wondering if it would help to specify package versions somewhere
Bharath J
Hi,
Great work.
Is it possible to integrate a pre-trained model and a custom model in a single Python script?
Adrian Rosebrock
Yep! See this tutorial.
Raj
Hi, I am trying to use the KCF tracker from your code, and after initializing the bounding box it never updates in future frames. I am on OpenCV 3.3.1 running on an Ubuntu machine. Do you have any debugging tips or ideas on why it might not be working?
Adrian Rosebrock
OpenCV 3.3 is pretty old at this point. Try upgrading your OpenCV install.
Priyansh
Hey thanks for this awesome blog.
Just one question: can we use an OpenCV tracker with a model like MobileNet or YOLO?
I tried using it, but the tracker update gives wrong coordinates.
Adrian Rosebrock
Yes, you can. See this tutorial for an example.
LDag
Thanks for the tutorial!
I had to add a sleep(0.8) on line 87, in the while loop, because the video was too fast.
swa sweety
Hi Sir…
I have a question: is it possible to track the usage of all medical equipment using OpenCV, Python, and a Raspberry Pi 3, in order to prevent the spread of infectious diseases to other patients?
kadir
Hey, thanks for all of these tutorials.
I'm working through all of your detection projects, and I was wondering: how can I select and use specific detections? For example, we detect 2 (or more) different objects at the same time, like in this video, and I want to identify and follow those detections. The player who scores a goal would then get a goal value in the database. I need some advice about that, Adrian.
Adrian Rosebrock
If I’m understanding your question correctly, I think you need something like this tutorial.
Anup Swarnkar
Hi Ardrian,
Can I pass contours as input instead of manually drawing the bounding boxes? Also, how do I handle a situation where an object leaves the frame and comes back?
Adrian Rosebrock
Yes, just compute the bounding box of the contour and use that to seed the tracker.
John
Hey, first of all thank you for your post.
I am trying to use OpenCV to track multiple small balls, like ping-pong balls.
However, it usually loses tracking. Is there any method that can improve the code?
Adrian Rosebrock
I would try more advanced object detectors and trackers. Start with HOG + Linear SVM and then move on to deep learning-based detectors such as Faster R-CNN, SSDs, and RetinaNet. The deep learning-based object detectors are covered in Deep Learning for Computer Vision with Python.
Andriy Ponomarenko
Hi! I want to do a simple project on vehicle detection, tracking, and determining behavior on the road. I use the YOLO algorithm for object detection and want to use a Kalman Filter and the Hungarian method for tracking, but I have a problem with occlusion in the video. Can you give me advice on what to do and how to solve this issue? What tracking algorithm should I use?
Bipin
The trackers in your video did not change the size of the bounding box as the object's size changed. Is there a way to change the size of the bounding box as the object comes closer or moves farther away, so that the bounding box adjusts according to the size of the object?
Mohamed Abdullah
Hi Adrian,
Thank you for this great tutorial. I was using dlib facial landmarks on a real-time video, but it was slow, as you said, because applying the dlib detector to each frame takes more time than generating the dlib facial landmarks.
So I decided to detect the face in the first frame and then apply a tracker on the other frames. I tried all of these trackers, but the results were very bad: they update the bounding box, but after applying the dlib predictor on the new bounding box to predict facial landmarks, the landmarks were very inaccurate.
Do you recommend any solutions for this problem?
Adrian Rosebrock
Are you using a face detector to detect the faces? If so, what face detector are you using? A combination of:
1. A fast face detector
2. Facial landmarks
3. Simple object tracking via centroid association
should solve the problem.
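The centroid-association step can be sketched in a few lines of plain Python (a greedy nearest-neighbor match; a real implementation, like the centroid tracker covered elsewhere on PyImageSearch, would also handle objects appearing, disappearing, and ties):

```python
# Rough sketch of centroid association: each existing tracked object is
# matched to the unclaimed detection whose centroid is closest.

def match_centroids(tracked, detected):
    """Map each tracked-object ID to the index of its nearest detection.

    tracked:  dict of object ID -> (x, y) centroid from the previous frame
    detected: list of (x, y) centroids from the current frame
    """
    matches = {}
    used = set()
    for obj_id, (cx, cy) in tracked.items():
        best, best_dist = None, float("inf")
        for j, (dx, dy) in enumerate(detected):
            if j in used:
                continue  # each detection can be claimed only once
            dist = (cx - dx) ** 2 + (cy - dy) ** 2  # squared Euclidean
            if dist < best_dist:
                best, best_dist = j, dist
        if best is not None:
            matches[obj_id] = best
            used.add(best)
    return matches

# Object 0 was near (10, 10) and object 1 near (100, 100); the new
# detections arrive in a different order but are matched correctly.
matches = match_centroids({0: (10, 10), 1: (100, 100)},
                          [(98, 103), (12, 9)])
```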