This tutorial will teach you how to perform object tracking using dlib and Python. After reading today’s blog post you will be able to track objects in real-time video with dlib.
A couple months ago we discussed centroid tracking, a simple, yet effective method to (1) assign unique IDs to each object in an image and then (2) track each of the objects and associated IDs as they move around in a video stream.
The biggest downside to this object tracking algorithm is that a separate object detector has to be run on each and every input frame — in most situations, this behavior is undesirable as object detectors, including HOG + Linear SVM, Faster R-CNNs, and SSDs, can be computationally expensive to run.
An alternative approach would be to:
- Perform object detection once (or once every N frames)
- And then apply a dedicated tracking algorithm that can keep track of the object as it moves in subsequent frames without having to perform object detection
Is such a method possible?
The answer is yes, and in particular, we can use dlib’s implementation of the correlation tracking algorithm.
In the remainder of today’s blog post, you will learn how to apply dlib’s correlation tracker to track an object in real-time in a video stream.
To learn more about dlib’s correlation tracker, just keep reading.
Object tracking with dlib
We’ll start off today’s tutorial with a brief discussion of dlib’s implementation of correlation-based object tracking.
From there I will show you how to utilize dlib’s object tracker in your own applications.
Finally, we’ll wrap up today by discussing some of the limitations and drawbacks of dlib’s object tracker.
What are correlation trackers?
The dlib correlation tracker implementation is based on Danelljan et al.’s 2014 paper, Accurate Scale Estimation for Robust Visual Tracking.
Their work, in turn, builds on the popular MOSSE tracker from Bolme et al.’s 2010 work, Visual Object Tracking using Adaptive Correlation Filters. While the MOSSE tracker works well for objects that are translated, it often fails for objects that change in scale.
The work of Danelljan et al. proposed utilizing a scale pyramid to accurately estimate the scale of an object after the optimal translation was found. This breakthrough allows us to track objects that change in both (1) translation and (2) scaling throughout a video stream — and furthermore, we can perform this tracking in real-time.
For a detailed review of the algorithm, please refer to the papers linked above.
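If you want a feel for the API before we walk through the full script, here is a minimal sketch of dlib's correlation tracker on two consecutive frames (the image file names and box coordinates are placeholders, and a reasonably recent dlib build is assumed):

import dlib

# load two consecutive RGB frames (placeholder file names)
frame1 = dlib.load_rgb_image("frame_0001.jpg")
frame2 = dlib.load_rgb_image("frame_0002.jpg")

# seed the tracker with an initial bounding box on the first frame
tracker = dlib.correlation_tracker()
tracker.start_track(frame1, dlib.rectangle(100, 100, 300, 300))

# update on the next frame and read back the estimated position
tracker.update(frame2)
pos = tracker.get_position()
print(pos.left(), pos.top(), pos.right(), pos.bottom())

In today's script we will use exactly these calls (start_track, update, and get_position), just wired into an OpenCV video loop.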
Project structure
To see how this project is organized, simply use the tree command in your terminal:
$ tree .
├── input
│   ├── cat.mp4
│   └── race.mp4
├── output
│   ├── cat_output.avi
│   └── race_output.avi
├── mobilenet_ssd
│   ├── MobileNetSSD_deploy.caffemodel
│   └── MobileNetSSD_deploy.prototxt
└── track_object.py
3 directories, 7 files
We have three directories:
- input/: Contains input videos for object tracking.
- output/: Our processed videos. In the processed video, the tracked object is annotated with a box and label.
- mobilenet_ssd/: The Caffe CNN model files are contained within this directory.
Today we’ll be reviewing one Python script: track_object.py.
Implementing our dlib object tracker
Let’s go ahead and get started implementing our object tracker using dlib.
Open up track_object.py and insert the following code:
# import the necessary packages
from imutils.video import FPS
import numpy as np
import argparse
import imutils
import dlib
import cv2
Here we import our required packages. Notably, we’re using dlib, imutils, and OpenCV.
From there, let’s parse our command line arguments:
# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--prototxt", required=True,
	help="path to Caffe 'deploy' prototxt file")
ap.add_argument("-m", "--model", required=True,
	help="path to Caffe pre-trained model")
ap.add_argument("-v", "--video", required=True,
	help="path to input video file")
ap.add_argument("-l", "--label", required=True,
	help="class label we are interested in detecting + tracking")
ap.add_argument("-o", "--output", type=str,
	help="path to optional output video file")
ap.add_argument("-c", "--confidence", type=float, default=0.2,
	help="minimum probability to filter weak detections")
args = vars(ap.parse_args())
Our script has four required command line arguments:
- --prototxt: Our path to the Caffe deploy prototxt file.
- --model: The path to the Caffe pre-trained model.
- --video: The path to the input video file. Today’s script works with video files rather than your webcam (but you could easily change it to support a webcam stream; see the minimal sketch after the argument descriptions below).
- --label: A class label that we are interested in detecting and tracking. Review the next code block for the available classes that this model supports.
And two optional ones:
- --output: An optional path to an output video file if you’d like to save the results of the object tracker.
- --confidence: With a default=0.2, this is the minimum probability threshold that allows us to filter weak detections from our Caffe object detector.
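If you would prefer to use a webcam instead of a video file, one simple modification (not part of today's script) is to pass a device index to cv2.VideoCapture rather than a file path. A minimal sketch:

# hypothetical swap: read from the default webcam instead of a video file.
# cv2.VideoCapture accepts a device index (0 is usually the built-in camera).
import cv2

vs = cv2.VideoCapture(0)  # instead of cv2.VideoCapture(args["video"])
while True:
	(grabbed, frame) = vs.read()
	if not grabbed:
		break
	cv2.imshow("Frame", frame)
	if cv2.waitKey(1) & 0xFF == ord("q"):
		break
vs.release()
cv2.destroyAllWindows()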
Let’s define the classes that this model supports and load our network from disk:
# initialize the list of class labels MobileNet SSD was trained to
# detect
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
	"bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
	"dog", "horse", "motorbike", "person", "pottedplant", "sheep",
	"sofa", "train", "tvmonitor"]

# load our serialized model from disk
print("[INFO] loading model...")
net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])
We’ll be using a pre-trained MobileNet SSD to perform object detection in a single frame. From there the object location will be handed off to dlib’s correlation tracker for tracking throughout the remaining frames of the video.
The model included with the “Downloads” supports 20 object classes (plus 1 for the background class); the class labels are defined on Lines 27-30.
Note: If you’re using a different Caffe model, you’ll need to redefine this CLASSES list. Conversely, don’t modify this list if you’re using the model included with today’s download. If you’re confused about how deep learning object detectors work, be sure to refer to this getting started guide.
Prior to looping over frames, we need to load our model into memory. This is handled on Line 34, where all that is required to load a Caffe model is the path to the prototxt and model files (both available in our command line args dictionary).
Now let’s perform important initializations, notably our video stream:
# initialize the video stream, dlib correlation tracker, output video
# writer, and predicted class label
print("[INFO] starting video stream...")
vs = cv2.VideoCapture(args["video"])
tracker = None
writer = None
label = ""

# start the frames per second throughput estimator
fps = FPS().start()
Our video stream, tracker, and video writer objects are initialized on Lines 39-41. We also initialize our textual label on Line 42.
Our frames-per-second estimator is instantiated on Line 45.
Now we’re ready to begin looping over our video frames:
# loop over frames from the video file stream
while True:
	# grab the next frame from the video file
	(grabbed, frame) = vs.read()

	# check to see if we have reached the end of the video file
	if frame is None:
		break

	# resize the frame for faster processing and then convert the
	# frame from BGR to RGB ordering (dlib needs RGB ordering)
	frame = imutils.resize(frame, width=600)
	rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

	# if we are supposed to be writing a video to disk, initialize
	# the writer
	if args["output"] is not None and writer is None:
		fourcc = cv2.VideoWriter_fourcc(*"MJPG")
		writer = cv2.VideoWriter(args["output"], fourcc, 30,
			(frame.shape[1], frame.shape[0]), True)
We begin our while loop on Line 48 and proceed to grab a frame on Line 50.
Our frame is resized and the color channels are swapped on Lines 58 and 59. Resizing allows for faster processing — you can experiment with the frame dimensions to achieve higher FPS. Converting to RGB color space is required by dlib (OpenCV stores images in BGR order by default).
Optionally, at runtime, an output video path can be passed via command line arguments. So, if necessary, we’ll initialize our video writer on Lines 63-66. For more information on writing video to disk with OpenCV, see this previous post.
Next, we’ll need to detect an object for tracking (if we haven’t already):
	# if our correlation object tracker is None we first need to
	# apply an object detector to seed the tracker with something
	# to actually track
	if tracker is None:
		# grab the frame dimensions and convert the frame to a blob
		(h, w) = frame.shape[:2]
		blob = cv2.dnn.blobFromImage(frame, 0.007843, (w, h), 127.5)

		# pass the blob through the network and obtain the detections
		# and predictions
		net.setInput(blob)
		detections = net.forward()
If our tracker object is None (Line 71), we first need to detect objects in the input frame. To do so, we create a blob (Line 74) and pass it through the network (Lines 78 and 79).
Let’s handle the detections now:
		# ensure at least one detection is made
		if len(detections) > 0:
			# find the index of the detection with the largest
			# probability -- out of convenience we are only going
			# to track the first object we find with the largest
			# probability; future examples will demonstrate how to
			# detect and extract *specific* objects
			i = np.argmax(detections[0, 0, :, 2])

			# grab the probability associated with the object along
			# with its class label
			conf = detections[0, 0, i, 2]
			label = CLASSES[int(detections[0, 0, i, 1])]
If our object detector finds any objects (Line 82), we’ll grab the one with the largest probability (Line 88).
We’re only demonstrating how to use dlib to perform single object tracking in this post, so we need to find the detected object with the highest probability. Next week’s blog post will cover multi-object tracking with dlib.
From there, we’ll grab the confidence (conf) and label associated with the object (Lines 92 and 93).
Now it’s time to filter out the detections. Here we’re trying to ensure we have the right type of object — the one whose class label was passed via command line argument:
			# filter out weak detections by requiring a minimum
			# confidence
			if conf > args["confidence"] and label == args["label"]:
				# compute the (x, y)-coordinates of the bounding box
				# for the object
				box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
				(startX, startY, endX, endY) = box.astype("int")

				# construct a dlib rectangle object from the bounding
				# box coordinates and then start the dlib correlation
				# tracker
				tracker = dlib.correlation_tracker()
				rect = dlib.rectangle(startX, startY, endX, endY)
				tracker.start_track(rgb, rect)

				# draw the bounding box and text for the object
				cv2.rectangle(frame, (startX, startY), (endX, endY),
					(0, 255, 0), 2)
				cv2.putText(frame, label, (startX, startY - 15),
					cv2.FONT_HERSHEY_SIMPLEX, 0.45, (0, 255, 0), 2)
On Line 97 we check to ensure that conf exceeds the confidence threshold and that the object is actually the class type we’re looking for. When we run the script later, we’ll use “person” or “cat” as examples so you can see how we can filter results.
We determine the bounding box coordinates of our object on Lines 100 and 101.
Then we establish our dlib object tracker and provide the bounding box coordinates (Lines 106-108). Future tracking updates will be easy from here on.
A bounding box rectangle and the object class label text are drawn on the frame on Lines 111-114.
Let’s handle the case where we’ve already established a tracker:
	# otherwise, we've already performed detection so let's track
	# the object
	else:
		# update the tracker and grab the position of the tracked
		# object
		tracker.update(rgb)
		pos = tracker.get_position()

		# unpack the position object
		startX = int(pos.left())
		startY = int(pos.top())
		endX = int(pos.right())
		endY = int(pos.bottom())

		# draw the bounding box from the correlation object tracker
		cv2.rectangle(frame, (startX, startY), (endX, endY),
			(0, 255, 0), 2)
		cv2.putText(frame, label, (startX, startY - 15),
			cv2.FONT_HERSHEY_SIMPLEX, 0.45, (0, 255, 0), 2)
This else block handles the case where we’ve already locked on to an object for tracking.
Think of it like a dogfight in the movie, Top Gun. Once the enemy aircraft has been locked on by the “guidance system”, it can be tracked via updates.
This requires two main actions on our part:
- Update our tracker object (Line 121) — the heavy lifting is performed in the backend of this update method.
- Grab the position (get_position) of our object from the tracker (Line 122). This would be where a PID control loop would come in handy if, for example, a robot seeks to follow a tracked object (a rough proportional-control sketch follows this list). In our case, we’re just going to annotate the object in the frame with a bounding box and label on Lines 131-134.
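To make the PID remark a little more concrete, here is a rough proportional-control sketch (not part of today's script); the KP gain value is purely hypothetical and a real robot would need proper tuning, likely with integral and derivative terms as well:

# hypothetical proportional ("P") control sketch: steer toward the tracked box.
# KP is a made-up gain; tune it for your platform.
KP = 0.005

def steering_command(startX, endX, frame_width, kp=KP):
	# horizontal offset of the box center from the frame center
	box_center_x = (startX + endX) / 2.0
	error = box_center_x - frame_width / 2.0

	# positive -> turn right, negative -> turn left (sign convention is arbitrary)
	return kp * error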
Let’s finish out the loop:
	# check to see if we should write the frame to disk
	if writer is not None:
		writer.write(frame)

	# show the output frame
	cv2.imshow("Frame", frame)
	key = cv2.waitKey(1) & 0xFF

	# if the `q` key was pressed, break from the loop
	if key == ord("q"):
		break

	# update the FPS counter
	fps.update()
If the frame should be written to video, we do so on Lines 137 and 138.
We’ll show the frame on the screen (Line 141).
If the quit key (“q”) is pressed at any point during playback + tracking, we’ll break out of the loop (Lines 142-146).
Our fps estimator is updated on Line 149.
Finally, let’s print out FPS throughput statistics and release pointers prior to the script exiting:
# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# check to see if we need to release the video writer pointer
if writer is not None:
	writer.release()

# do a bit of cleanup
cv2.destroyAllWindows()
vs.release()
Housekeeping for our script includes:
- Our fps counter is stopped and the FPS information is displayed in the terminal (Lines 152-154).
- Then, if we were writing to an output video, we release the video writer (Lines 157 and 158).
- Lastly, we close all OpenCV windows and release the video stream (Lines 161 and 162).
Running dlib’s object tracker in real-time
To see our dlib object tracker in action, make sure you use the “Downloads” section of this blog post to download the source code.
From there, open up a terminal and execute the following command:
$ python track_object.py --prototxt mobilenet_ssd/MobileNetSSD_deploy.prototxt \
	--model mobilenet_ssd/MobileNetSSD_deploy.caffemodel --video input/race.mp4 \
	--label person --output output/race_output.avi
[INFO] loading model...
[INFO] starting video stream...
[INFO] elapsed time: 13.18
[INFO] approx. FPS: 25.80
Usain Bolt (Olympic world record holder) was detected with the highest confidence at the beginning of the video. From there, he is tracked successfully throughout his 100m race.
The full video can be found below:
Below we have a second example of object tracking with dlib:
$ python track_object.py --prototxt mobilenet_ssd/MobileNetSSD_deploy.prototxt \
	--model mobilenet_ssd/MobileNetSSD_deploy.caffemodel --video input/cat.mp4 \
	--label cat --output output/cat_output.avi
[INFO] loading model...
[INFO] starting video stream...
[INFO] elapsed time: 6.76
[INFO] approx. FPS: 24.12
The cat above was part of a BuzzFeed segment on cat owners trying to take their cats for a walk (as if they were dogs). Poor cats!
Drawbacks and potential improvements
If you watched the full output video of the demo above, you would have noticed the object tracker behaving strangely towards the end of the demo, as this GIF demonstrates.
So, what’s going on here?
Why is the tracker losing the object?
Keep in mind there is no such thing as a “perfect” object tracker — and furthermore, this object tracking algorithm does not require you to run a more expensive object detector on each and every frame of the input video.
Instead, dlib’s correlation tracker is combining both (1) prior information regarding the location of the object bounding box in the previous frame along with (2) data garnered from the current frame to infer where the new location of the object is.
There will certainly be times when the algorithm loses the object.
To remedy this situation, I recommend occasionally running your more expensive object detector to (1) validate the object is still there and (2) reseed the object tracking with the updated (and ideally correct) bounding box coordinates. August’s blog post on people counting with OpenCV accomplished exactly this, so be sure to check it out.
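As a minimal sketch of that idea (this is not the exact implementation from the people counting post), you could simply clear the tracker every N frames so that the detection branch of today's script runs again and reseeds it. The REDETECT_EVERY and totalFrames names below are ones I'm introducing here; vs and tracker are reused from today's script:

# a rough sketch of reseeding: clearing the tracker every N frames forces the
# detection branch in today's script to run again with a fresh bounding box.
REDETECT_EVERY = 30   # hypothetical interval; tune for your video
totalFrames = 0       # initialize this counter before the while loop

while True:
	(grabbed, frame) = vs.read()
	if frame is None:
		break

	# force a re-detection every REDETECT_EVERY frames
	if totalFrames % REDETECT_EVERY == 0:
		tracker = None

	# ... the existing detection / tracking logic from today's script goes here ...

	totalFrames += 1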
What about multi-object tracking?
Undoubtedly, there will be PyImageSearch readers wishing to apply this method to multi-object tracking rather than single object tracking.
Is it possible to track multiple objects using dlib’s correlation tracker?
The answer is yes, absolutely!
I’ll be covering multi-object tracking next week, so stay tuned.
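As a rough preview (not next week's implementation), one straightforward approach is to keep a list of correlation trackers, seeding one per detection and then updating each of them on every subsequent frame. The detected_boxes variable below is a hypothetical list of bounding boxes from your detector, and rgb is the RGB frame from today's script:

# rough sketch: one dlib correlation tracker per detected object
import dlib

trackers = []

# seeding: run once, right after the object detector fires
for (startX, startY, endX, endY) in detected_boxes:
	t = dlib.correlation_tracker()
	t.start_track(rgb, dlib.rectangle(int(startX), int(startY),
		int(endX), int(endY)))
	trackers.append(t)

# tracking: run on every subsequent frame
for t in trackers:
	t.update(rgb)
	pos = t.get_position()
	(startX, startY) = (int(pos.left()), int(pos.top()))
	(endX, endY) = (int(pos.right()), int(pos.bottom()))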
Video credits
To create the examples for this tutorial I needed to use clips from two different videos. A big thank you and credit to BuzzFeed Video and GERrevolt.
Summary
In today’s blog post we discussed dlib’s object tracking algorithm.
Unlike July’s tutorial on centroid tracking, dlib’s object tracking algorithm can update itself utilizing information garnered from the input RGB image — the algorithm does not require that a set of bounding boxes be computed for each and every frame in the input video stream.
As we found out, dlib’s correlation tracking algorithm is quite robust and capable of running in real-time.
However, the biggest drawback is that the correlation tracker can become “confused” and lose the object we wish to track if viewpoint changes substantially or if the object to be tracked becomes occluded.
In those scenarios we can re-run our (computationally expensive) object detector to re-determine the location of our tracked object — be sure to refer to this blog post on people counting for such an implementation.
In our next blog post we’ll be discussing multi-object tracking with dlib — to be notified when the next blog post goes live (and download the source code to today’s post), just enter your email address in the form below.
Ebrahim Rupawala
Hello Adrian! As usual, your blogs are so useful and amazing, they make our projects a lot easier. I was wondering if you can do a tutorial on Intel’s new OpenVINO toolkit, so that we could make use of OpenCV’s algorithms as well as the optimization and deployment tools provided by Intel in the OpenVINO toolkit to rapidly prototype our computer vision applications. Thank you so much!
Adrian Rosebrock
Thanks for the comment Ebrahim. And thank you for the suggestion on Intel’s Open VINO toolkit. I cannot guarantee if or when I would cover the topic but I’ll certainly give it some thought.
Gagandeep
Hii Ebrahim
I am working on a project where we are using the OpenVINO kit for a deep learning solution, but it is not able to perform license plate recognition for a specific country, so I want to add support for that case to OpenVINO. Do you have any idea about that?
david Sastre
I am really enjoying these last posts about video trackers. Although I work in C++, I am finding them really helpful. I didn’t know about this possibility in dlib. I am developing a project on a Raspberry Pi based on video tracking, and the only tracker that can run in real time here at full resolution is the MOSSE tracker. The drawback is that it doesn’t support changes in scale, and that is important for me because I am tracking from the air, from a moving UAV. You just solved this problem for me with this post, thank you very much!
By the way, I’ve seen you don’t have any post regarding opencv and gstreamer use. It is a very common combination, and a bit tricky to get working. I personally have it already working, but I am sure a post on this topic would help lot of readers.
Thank you, keep the good work!
Regards.
Adrian Rosebrock
Thanks David! You’ll be able to use dlib directly from your C++ application which is awesome, I’m glad that helps resolve the problem for you 🙂 Also, thank you for the suggestion on OpenCV and gstreamer. It is a bit tricky. I’ll see if I can cover it in a blog post but I can pretty much guarantee it’s going to be a part of our upcoming Computer Vision + Raspberry Pi book.
Anand C U
Hi Adrian. Thanks for the fantastic tutorial.
I’m currently building an automobile traffic counter with SSD-Mobilenet and the MOSSE Tracker.
The issue I’m facing is to correctly correlate between objects that were tracked on the previous frames and the new objects from the SSD-Mobilenet (I’m running detection module every 5 frames). Currently I am using Non-Maximum Suppression technique to compare the bounding boxes and to decide that a tracked object is same as the one detected in the current frame. However, this fails a lot of times and is leading to erroneous counts. Do you have any suggestions to achieve this in a better way? Thanks
Adrian Rosebrock
Hey Anand — have you read my previous tutorial on building an OpenCV people counter? That tutorial would give you the starting point for your traffic counter, including how to associate objects between frames.
Jasper H
Hi Adrian,
Thanks for yet another useful post. How does this dlib correlation tracker compare with those implemented in OpenCV? For example, you had a summary post on those back in July https://pyimagesearch.com/2018/07/30/opencv-object-tracking/
Dan Ionescu
Great post again Adrian!
I’ve read a previous post about opencv trackers and I was wondering how accurate and how fast is this tracker compared to those? I want to use it on a raspberry pi
Thanks
Chase Lewis
Hey Adrian,
I noticed the update of the tracker returns the ‘Peak to Side-Lobe Ratio’. All the documentation says it is a ‘confidence that the object is within the rect’. Any idea how this value is created / used? I imagine a robust ‘tracker’ would use this value and if it went below a certain threshold return to using the detector.
For a sample project i’m seeing it start at ~20 and drop to 9 but without more info hard to understand how to construct this value should be used.
An article about improving the ‘robustness’ and recovering from tracking getting off would be interesting.
Mohamed
Hi Adrian. Great post as usual ,Thank you.
I’m currently building vehicle tracking system ,do you thing using Extended Kalman Filter will be better than centroid and dlib tracker?
Adrian Rosebrock
The only way to know is to try 🙂 Run experiments with both and let the empirical results drive your decision making.
Val
Hello Adrian!
Thanks for the awesome books and tutorials!
I work on detecting additional facial features like forehead wrinkles and the nasolabial fold for better emotion prediction.
Can you please advise on what the approach to find these features should look like?
Adrian Rosebrock
For your project I would suggest first performing facial landmark prediction. Inside the comments of the post you’ll see my recommendations for detecting the forehead region as well.
adi
Hey Adrian,
Do you have an email you can be reached at?
Adi
Adrian Rosebrock
You can use the PyImageSearch contact form and your message will go to my inbox.
zana zakaryaie nejad
Hi
It was a really good news that dlib has implemented multi-scale MOSSE (DSST).
Is there anybody who have tested it and compared with KCF? which one is faster?
MOSSE itself is much much faster than KCF but I wanted to know about its multi-scale version.
Adrian Rosebrock
I actually provide a full review of MOSSE, KCF, and other trackers in this tutorial.
Falahgs
Thank you Adrian for the post
But I wonder why it detected and tracked only one person, the first player, within the video, especially since there are several players in the scene.
Thank you so much
Adrian Rosebrock
The focus of the tutorial is on object tracking not object detection. Object detection is only performed in the very first frame. Each object is then tracked. He was not detected in that frame and thus he was not tracked. For a full-fledged person tracking application see this tutorial on people tracking.
sophie
Hello Adrian,thanks for your great post. I want to ask if i can use the other deep network, for example YOLOv3.
Thank you so much.
Adrian Rosebrock
Yes, you can swap out the SSD for whatever object detector you would like.
Ibn Ahmad
Thanks for the nice tutorials once more. In a situation where we change our video stream to read video from a webcam, what will be the --video argument to pass in the command line?
Adrian Rosebrock
You would supply the index of your webcam which is typically “0”:
vs = VideoStream(src=0)
Gagandeep
Hii adraian
For the last 3 seconds of the video I am not able to predict or track the location of the person; the box starts moving away from the person. How can we improve the accuracy of this system?
Adrian Rosebrock
You can apply an intermediate object detector every N frames like I do in this OpenCV People Counter tutorial.
JessicaCao
Thanks for this tutorial, and it is very useful for me. But I have a question that, which dataset is used for the pretrained MobielNet?
Adrian Rosebrock
The COCO dataset was used.
Catherine
There is an error, “no module named imutils.video”, and I am looking for a solution to this problem. Please help me, thank you!
Adrian Rosebrock
You need to install the “imutils” library:
$ pip install imutils
Wajid Ali
Hey Adrian,
After running the object tracking program, it prints [INFO] loading model, [INFO] starting video stream, [INFO] elapsed time, and [INFO] FPS, but before the frame is shown the Raspberry Pi 3 restarts. I tried many times and the Raspberry Pi 3 restarts every time, so please can you give me a solution for this?
Thankyou.
Adrian Rosebrock
It sounds like the Pi is becoming overloaded and shutting itself down. I personally haven’t tested this code on the Pi but my guess is it’s either (1) overheating, (2) running out of memory, or (3) it’s a hardware-related issue.
Gaston Scazzuso
file 107:
rect = dlib.rectangle(int(startX), int(startY), int(endX), int(endY))
to avoid dlib.rectangle error !. uff it was hard to find the problem, i don´t know why i get that error, maybe because i am running in Jupyter (without avoiding args ) ?
Adrian Rosebrock
What was the dlib.rectangle error you had?
Gaurav
Hi, how are faces tracked in this correlation model?
Adrian Rosebrock
Yes, it is a correlation model.
Gaurav
@Adrian can i track the faces using this model?????
if not then which model is best for that.
Adrian Rosebrock
It’s a two step process:
1. Detect the face using a face detector
2. Apply an object tracker
See this tutorial as a starting point.
Chris
Hi Adrian –
When I use dlib, the cpu utilization on the Pi is very, very high, across all CPUs, even if I’m only tracking 1 or 2 objects.
Is that normal? If I try and track 4 or 5 objects, all the cores on the PI go to 100%.
Any advice would be appreciated.
Adrian Rosebrock
Absolutely normal. The Pi is underpowered compared to laptops/desktops and object tracking algorithms are computationally expensive. If you are tracking multiple objects the processor will be 100% utilized.
Gaurav
@Adrian How can i use the dnn model for face recognition technique…
Adrian Rosebrock
See this tutorial.
Gaurav
You implemented 2 techniques of face recognition which is the accurate one??
Adrian Rosebrock
Make sure you are reading the tutorials I’m providing you. I’m happy to help but do take some time to read them and digest them. Specifically, you need to read the “Drawbacks, limitations, and how to obtain higher face recognition accuracy” section of the face recognition tutorial I linked you to.
Eric
Hi Adrian,
Thanks for your tutorials.
How can I select the exact object I want to follow? In this case you are following the object with the highest probability but what if I want to track a different one? What would you do?
Thank you very much!!
Adrian Rosebrock
Are you manually defining the (x, y)-coordinates for the object? Or do you want the object detector to detect it?
Eric
I would like to select the object manually using the mouse.
Adrian Rosebrock
You can use the cv2.selectROI function. This post includes an example of doing such.
Eric
Thanks Adrian.
Would you use dlib for object tracking with a drone? or would you use any cv2 algorithm as csrt or tld?
Adrian Rosebrock
I would run experiments trying all of them and letting your empirical results determine which algorithm(s) you continue using.
JaneX
Hello! I want to know how to judge whether the tracked object is lost?
Adrian Rosebrock
You can monitor the confidence returned by the dlib correlation tracker. You can also use your own heuristics (such as the object is at the boundary of the frame for N consecutive frames with no update).
Rakeally
Is it possible to track a particular person who is in a dataset with a webcam?
If yes, then how?
Ifty
Hi, Dr. Adrian, I would love to integrate this with YOLO. I tried swapping out the SSD but I am getting some major errors on array indices after the detections. I was following the YOLO article to swap the SSD out. Is there a way to easily integrate the tracker with YOLO? Thanks in advance.
Adrian Rosebrock
You would need to follow the YOLO tutorial which it sounds like you are doing. Without knowing the errors it’s hard for me to pinpoint the issue. Keep at it though!
JANG
Hi Adrian! I am following up with your amazing tutorial. Thank you!
However, I am in need of your advice. My scenario is as follows: camera 1 and camera 2 is setup next to each other covering each side of the hallway. (So there is minimum space that the camera doesn’t cover). Is it possible to move the tracking target bounding box to next camera and keep on tracking the same person? Then what should be moved to the next camera? bounding box? frame? both?
Thanks in advance.
Adrian Rosebrock
Do you have an example output I could take a look it? Have you considered just stitching the frames together?
Alessandro Basso
Hi Adrian,
first of all thank you for this tutorial. I have a doubt: I’m able to run the script at 9 FPS, whilst you get 25/26 FPS. I’m using OpenCV 3.4.3. Perhaps you succeeded in using the GPU (I didn’t)?
Thanks!
Adrian Rosebrock
No, I was using my CPU. What processor are you using?
Alessandro Basso
I apologize for the delay, I missed your reply… Intel i7-6700, 8 cores.
Azad
Hello @adrian, please help me out with a query: I am working on a project (human detection and tracking), where I am done with this part, but now I want to assign unique IDs to the different people I detect and track.
Adrian Rosebrock
See this tutorial.
Fernando
Hi Adrian, thanks for the tutorials, they are great.
I’ve implemented a version of both, your centroid and dlib tracking algorithm. They both perform the same aren’t they?
They are pretty much the same tracker. The centroid just adds a little bit of work because of the distance calculations.
Which, in your opinion, has a better performance in terms of tracking precision, as in being able to track the object with more precision.
Also are you planning on doing an implementation of “deep SORT”?
Adrian Rosebrock
No, they are actually very different. Centroid tracking simply associates centroids in subsequent frames. The dlib tracker is performing “correlation tracking”, a dedicated tracking algorithm.
Fernando
Thanks for the fast response
But then, why does the centroid tracker use a full implementation of the dlib tracking and then just add the distance calculations? It seems like it either uses the dlib algorithms and then adds some work for better performance, or uses the dlib algorithms for some information and then ignores it.
Adrian Rosebrock
Dlib’s correlation tracker is just that — a tracker. It doesn’t have the ability to associate a unique ID with an object. We use centroid tracking to associate unique IDs to each tracked object.
Fernando
Hi Adrian, thanks for the tutorials, they are all great, i’m eager to buy the book.
What is the difference between tracking with centroids (like in your tutorial) and tracking with just dlib.
Thanks in advance
Adrian Rosebrock
See my reply to your previous comment.
Alexis
Hi, I’m really confused about this concept of running detector every N frames. If the detection cannot run in real-time and always lags behind the live video, how can we track in real-time even if the tracker can run ultra fast? The object will already move to somewhere else after the detection gets computed for that Nth frame right?
Adrian Rosebrock
The object tracker takes over after the detector runs. The tracker is far faster than the detector.
Gabriel
Is there a way to feed dlib a polygon shape instead of a rectangle?
Adrian Rosebrock
No, dlib is expecting a bounding box. I would suggest computing the bounding box of your polygon.
Bartolo
Thank you for this tutorial.
Adrian Rosebrock
You are welcome!