Last Updated on April 30, 2022
Table of Contents
- Intersection over Union for object detection
- What is Intersection over Union?
- Where are you getting the ground-truth examples from?
- Why do we use Intersection over Union?
- Implementing Intersection over Union in Python
- Comparing predicted detections to the ground-truth with Intersection over Union
- Alternative Intersection over Union implementations
- Summary
Intersection over Union (IoU) is used to evaluate the performance of an object detector by comparing the ground-truth bounding box to the predicted bounding box, and IoU is the topic of this tutorial.
A solid understanding of IoU is best built through practical application. Access to a well-curated dataset allows learners to engage with real-world challenges, deepening their understanding of object detection and of how IoU is used to measure accuracy.
Roboflow has free tools for each stage of the computer vision pipeline that will streamline your workflows and supercharge your productivity.
Sign up or log in to your Roboflow account to access state-of-the-art dataset libraries and revolutionize your computer vision pipeline.
You can start by choosing your own datasets or using PyImageSearch’s assorted library of useful datasets.
Bring data in any of 40+ formats to Roboflow, train using any state-of-the-art model architecture, deploy across multiple platforms (API, NVIDIA, browser, iOS, etc.), and connect to applications or third-party tools.
With a few images, you can train a working computer vision model in an afternoon. For example, bring data into Roboflow from anywhere via API, label images with the cloud-hosted image annotation tool, kick off hosted model training with one click, and deploy the model via a hosted API endpoint. This process can be executed in a code-centric way, in the cloud-based UI, or any mix of the two.
Over 250,000 developers and machine learning engineers from companies such as Cardinal Health, Walmart, USG, Rivian, Intel, and Medtronic build computer vision pipelines with Roboflow. Get started today, no credit card required.
Today’s blog post is inspired by an email I received from Jason, a student at the University of Rochester.
Jason is interested in building a custom object detector using the HOG + Linear SVM framework for his final year project. He understands the steps required to build the object detector well enough — but he isn’t sure how to evaluate the accuracy of his detector once it’s trained.
His professor mentioned that he should use the Intersection over Union (IoU) method for evaluation, but Jason’s not sure how to implement it.
I helped Jason out over email by:
- Describing what Intersection over Union is.
- Explaining why we use Intersection over Union to evaluate object detectors.
- Providing him with some example Python code from my own personal library to perform Intersection over Union on bounding boxes.
My email really helped Jason get his final year project together, and I’m sure he’s going to pass with flying colors.
With that in mind, I’ve decided to turn my response to Jason into an actual blog post in hopes that it will help you as well.
To learn how to evaluate your own custom object detectors using the Intersection over Union evaluation metric, just keep reading.
- Update July 2021: Added section on alternative Intersection over Union implementations, including IoU methods that can be used as loss functions when training deep neural network object detectors.
- Update Apr 2022: Added TOC and linked the post to a new Intersection over Union tutorial.
- Update Dec 2022: Removed link to the dataset as the dataset is no longer publicly available and refreshed the content.
Intersection over Union for object detection
In the remainder of this blog post I’ll explain what the Intersection over Union evaluation metric is and why we use it.
I’ll also provide a Python implementation of Intersection over Union that you can use when evaluating your own custom object detectors.
Finally, we’ll look at some actual results of applying the Intersection over Union evaluation metric to a set of ground-truth and predicted bounding boxes.
What is Intersection over Union?
Intersection over Union is an evaluation metric used to measure the accuracy of an object detector on a particular dataset. We often see this evaluation metric used in object detection challenges such as the popular PASCAL VOC challenge.
You’ll typically find Intersection over Union used to evaluate the performance of HOG + Linear SVM object detectors and Convolutional Neural Network detectors (R-CNN, Faster R-CNN, YOLO, etc.); however, keep in mind that the actual algorithm used to generate the predictions doesn’t matter.
Intersection over Union is simply an evaluation metric. Any algorithm that provides predicted bounding boxes as output can be evaluated using IoU.
More formally, in order to apply Intersection over Union to evaluate an (arbitrary) object detector we need:
- The ground-truth bounding boxes (i.e., the hand labeled bounding boxes from the testing set that specify where in the image our object is).
- The predicted bounding boxes from our model.
As long as we have these two sets of bounding boxes we can apply Intersection over Union.
Below I have included a visual example of a ground-truth bounding box versus a predicted bounding box:
In the figure above we can see that our object detector has detected the presence of a stop sign in an image.
The predicted bounding box is drawn in red while the ground-truth (i.e., hand labeled) bounding box is drawn in green.
Intersection over Union can therefore be computed via:
Examining this equation you can see that Intersection over Union is simply a ratio.
In the numerator we compute the area of overlap between the predicted bounding box and the ground-truth bounding box.
The denominator is the area of union, or more simply, the area encompassed by both the predicted bounding box and the ground-truth bounding box.
Dividing the area of overlap by the area of union yields our final score — the Intersection over Union.
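Written out, the equation in the figure above is simply:

IoU = Area of Overlap / Area of Union

So, for example, if the two boxes overlap over 80 pixels and together cover 100 pixels in total, the Intersection over Union is 80 / 100 = 0.8.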
Where are you getting the ground-truth examples from?
Before we get too far, you might be wondering where the ground-truth examples come from. I’ve mentioned before that these images are “hand labeled”, but what exactly does that mean?
You see, when training your own object detector (such as the HOG + Linear SVM method), you need a dataset. This dataset should be broken into (at least) two groups:
- A training set used for training your object detector.
- A testing set for evaluating your object detector.
You may also have a validation set used to tune the hyperparameters of your model.
Both the training and testing set will consist of:
- The actual images themselves.
- The bounding boxes associated with the object(s) in the image. The bounding boxes are simply the (x, y)-coordinates of the object in the image.
The bounding boxes for the training and testing sets are hand labeled, which is why we call them the “ground-truth”.
Your goal is to take the training images + bounding boxes, construct an object detector, and then evaluate its performance on the testing set.
An Intersection over Union score > 0.5 is normally considered a “good” prediction.
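For concreteness, here is one (purely illustrative) way a single hand-labeled annotation could be stored in Python. The (startX, startY, endX, endY) corner convention matches the code used later in this post, although your own annotation tool may export a different format:

# hypothetical ground-truth record: the image filename plus the hand-labeled
# top-left and bottom-right (x, y)-coordinates of the object
annotation = {"image_path": "image_0002.jpg", "gt": (39, 63, 203, 112)}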
Why do we use Intersection over Union?
If you have performed any previous machine learning in your career, specifically classification, you’ll likely be used to predicting class labels where your model outputs a single label that is either correct or incorrect.
This type of binary classification makes computing accuracy straightforward; however, for object detection it’s not so simple.
In all reality, it’s extremely unlikely that the (x, y)-coordinates of our predicted bounding box are going to exactly match the (x, y)-coordinates of the ground-truth bounding box.
Due to varying parameters of our model (image pyramid scale, sliding window size, feature extraction method, etc.), a complete and total match between predicted and ground-truth bounding boxes is simply unrealistic.
Because of this, we need to define an evaluation metric that rewards predicted bounding boxes for heavily overlapping with the ground-truth:
In the above figure I have included examples of good and bad Intersection over Union scores.
As you can see, predicted bounding boxes that heavily overlap with the ground-truth bounding boxes have higher scores than those with less overlap. This makes Intersection over Union an excellent metric for evaluating custom object detectors.
We aren’t concerned with an exact match of (x, y)-coordinates, but we do want to ensure that our predicted bounding boxes match as closely as possible — Intersection over Union is able to take this into account.
Implementing Intersection over Union in Python
Now that we understand what Intersection over Union is and why we use it to evaluate object detection models, let’s go ahead and implement it in Python.
Before we get started writing any code though, I want to provide the five example images we will be working with:
These images are part of the CALTECH-101 dataset used for both image classification and object detection.
This dataset used to be publicly available, but as of December 2022 it is no longer public.
Inside the PyImageSearch Gurus course I demonstrate how to train a custom object detector to detect the presence of cars in images like the ones above using the HOG + Linear SVM framework.
I have provided a visualization of the ground-truth bounding boxes (green) along with the predicted bounding boxes (red) from the custom object detector below:
Given these bounding boxes, our task is to define the Intersection over Union metric that can be used to evaluate how “good” (or “bad”) our predictions are.
With that said, open up a new file, name it intersection_over_union.py, and let’s get coding:
# import the necessary packages
from collections import namedtuple
import numpy as np
import cv2

# define the `Detection` object
Detection = namedtuple("Detection", ["image_path", "gt", "pred"])
We start off by importing our required Python packages. We then define a Detection object that will store three attributes:
- image_path: The path to our input image that resides on disk.
- gt: The ground-truth bounding box.
- pred: The predicted bounding box from our model.
As we’ll see later in this example, I’ve already obtained the predicted bounding boxes from our five respective images and hardcoded them into this script to keep the example short and concise.
For a complete review of the HOG + Linear SVM object detection framework, please refer to this blog post. And if you’re interested in learning more about training your own custom object detectors from scratch, be sure to check out the PyImageSearch Gurus course.
Let’s go ahead and define the bb_intersection_over_union function, which, as the name suggests, is responsible for computing the Intersection over Union between two bounding boxes:
def bb_intersection_over_union(boxA, boxB):
    # determine the (x, y)-coordinates of the intersection rectangle
    xA = max(boxA[0], boxB[0])
    yA = max(boxA[1], boxB[1])
    xB = min(boxA[2], boxB[2])
    yB = min(boxA[3], boxB[3])

    # compute the area of intersection rectangle
    interArea = max(0, xB - xA + 1) * max(0, yB - yA + 1)

    # compute the area of both the prediction and ground-truth
    # rectangles
    boxAArea = (boxA[2] - boxA[0] + 1) * (boxA[3] - boxA[1] + 1)
    boxBArea = (boxB[2] - boxB[0] + 1) * (boxB[3] - boxB[1] + 1)

    # compute the intersection over union by taking the intersection
    # area and dividing it by the sum of prediction + ground-truth
    # areas - the intersection area
    iou = interArea / float(boxAArea + boxBArea - interArea)

    # return the intersection over union value
    return iou
This method requires two parameters: boxA and boxB, which are presumed to be our ground-truth and predicted bounding boxes (the actual order in which these parameters are supplied to bb_intersection_over_union doesn’t matter).
Lines 11-14 determine the (x, y)-coordinates of the intersection rectangle which we then use to compute the area of the intersection (Line 17).
The interArea variable now represents the numerator in the Intersection over Union calculation.
To compute the denominator we first need to derive the area of both the predicted bounding box and the ground-truth bounding box (Lines 21 and 22).
The Intersection over Union can then be computed on Line 27 by dividing the intersection area by the union area of the two bounding boxes, taking care to subtract out the intersection area from the denominator (otherwise the intersection area would be doubly counted).
Finally, the Intersection over Union score is returned to the calling function on Line 30.
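As a quick sanity check, you can call the function directly on a single pair of boxes. The coordinates below are the hand-labeled and predicted boxes for image_0002.jpg (the same values hardcoded in the next code block), so the printed score should match the 0.7980 result reported later in this post:

boxA = [39, 63, 203, 112]
boxB = [54, 66, 198, 114]
print(bb_intersection_over_union(boxA, boxB))
# 0.7980...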
Now that our Intersection over Union method is finished, we need to define the ground-truth and predicted bounding box coordinates for our five example images:
# define the list of example detections
examples = [
    Detection("image_0002.jpg", [39, 63, 203, 112], [54, 66, 198, 114]),
    Detection("image_0016.jpg", [49, 75, 203, 125], [42, 78, 186, 126]),
    Detection("image_0075.jpg", [31, 69, 201, 125], [18, 63, 235, 135]),
    Detection("image_0090.jpg", [50, 72, 197, 121], [54, 72, 198, 120]),
    Detection("image_0120.jpg", [35, 51, 196, 110], [36, 60, 180, 108])]
As I mentioned above, in order to keep this example short(er) and concise, I have manually obtained the predicted bounding box coordinates from my HOG + Linear SVM detector. These predicted bounding boxes (and corresponding ground-truth bounding boxes) are then hardcoded into this script.
For more information on how I trained this exact object detector, please refer to the PyImageSearch Gurus course.
We are now ready to evaluate our predictions:
# loop over the example detections
for detection in examples:
    # load the image
    image = cv2.imread(detection.image_path)

    # draw the ground-truth bounding box along with the predicted
    # bounding box
    cv2.rectangle(image, tuple(detection.gt[:2]),
        tuple(detection.gt[2:]), (0, 255, 0), 2)
    cv2.rectangle(image, tuple(detection.pred[:2]),
        tuple(detection.pred[2:]), (0, 0, 255), 2)

    # compute the intersection over union and display it
    iou = bb_intersection_over_union(detection.gt, detection.pred)
    cv2.putText(image, "IoU: {:.4f}".format(iou), (10, 30),
        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    print("{}: {:.4f}".format(detection.image_path, iou))

    # show the output image
    cv2.imshow("Image", image)
    cv2.waitKey(0)
On Line 41 we start looping over each of our examples (which are Detection objects).
For each of them, we load the respective image from disk on Line 43 and then draw the ground-truth bounding box in green (Lines 47 and 48) followed by the predicted bounding box in red (Lines 49 and 50).
The actual Intersection over Union metric is computed on Line 53 by passing in the ground-truth and predicted bounding box.
We then draw the Intersection over Union value on the image itself and also print it to our console.
Finally, the output image is displayed to our screen on Lines 59 and 60.
Comparing predicted detections to the ground-truth with Intersection over Union
To see the Intersection over Union metric in action, make sure you have downloaded the source code + example images to this blog post by using the “Downloads” section found at the bottom of this tutorial.
After unzipping the archive, execute the following command:
$ python intersection_over_union.py
Our first example image has an Intersection over Union score of 0.7980, indicating that there is significant overlap between the two bounding boxes:
The same is true for the following image which has an Intersection over Union score of 0.7899:
Notice how the ground-truth bounding box (green) is wider than the predicted bounding box (red). This is because our object detector is defined using the HOG + Linear SVM framework which requires us to specify a fixed size sliding window (not to mention, an image pyramid scale and the HOG parameters themselves).
Ground-truth bounding boxes will naturally have a slightly different aspect ratio than the predicted bounding boxes, but that’s okay provided that the Intersection over Union score is > 0.5 — as we can see, this is still a great prediction.
The next example demonstrates a slightly “less good” prediction where our predicted bounding box is much less “tight” than the ground-truth bounding box:
The reason is that our HOG + Linear SVM detector likely couldn’t “find” the car in the lower layers of the image pyramid and instead fired near the top of the pyramid where the image is much smaller.
The following example is an extremely good detection with an Intersection over Union score of 0.9472:
Notice how the predicted bounding box nearly perfectly overlaps with the ground-truth bounding box.
Here is one final example of computing Intersection over Union:
Alternative Intersection over Union implementations
This tutorial provided a Python and NumPy implementation of IoU. However, there are other implementations of IoU that may be better for your particular application and project.
For example, if you are training a deep learning model using a popular library/framework such as TensorFlow, Keras, or PyTorch, then implementing IoU using your deep learning framework should improve the speed of the algorithm.
The following list provides my suggested alternative implementations of Intersection over Union, including implementations that can be used as loss/metric functions when training a deep neural network object detector:
- TensorFlow’s MeanIoU function, which computes the mean Intersection over Union for a sample of object detection results.
- TensorFlow’s GIoULoss loss metric, which was first introduced in Generalized Intersection over Union: A Metric and A Loss for Bounding Box Regression by Rezatofighi et al. Just as you train a neural network to minimize mean squared error, cross-entropy, etc., this method acts as a drop-in replacement loss function, potentially leading to higher object detection accuracy.
- A PyTorch implementation of IoU (which I have not tested or used), but seems to be helpful to the PyTorch community.
- We have a great Mean Average Precision (mAP) Using the COCO Evaluator tutorial that will walk you through using Intersection over Union to evaluate YOLO performance. Learn the theoretical concepts of Mean Average Precision (mAP) and evaluate the YOLOv4 detector using the gold standard COCO Evaluator.
Of course, you can always take my Python/NumPy implementation of IoU and convert it to your own library, language, etc.
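If you stay in pure NumPy, the same math also vectorizes nicely. The sketch below is my own illustration (it keeps the same “+1” pixel convention as the function earlier in this post and is not taken from any of the linked implementations); it scores many box pairs in one call:

import numpy as np

def batch_iou(boxesA, boxesB):
    # boxesA and boxesB are (N, 4) arrays of (startX, startY, endX, endY)
    # boxes, compared row-by-row (boxesA[i] against boxesB[i])
    boxesA = np.asarray(boxesA, dtype="float")
    boxesB = np.asarray(boxesB, dtype="float")

    # coordinates of the intersection rectangles
    xA = np.maximum(boxesA[:, 0], boxesB[:, 0])
    yA = np.maximum(boxesA[:, 1], boxesB[:, 1])
    xB = np.minimum(boxesA[:, 2], boxesB[:, 2])
    yB = np.minimum(boxesA[:, 3], boxesB[:, 3])

    # intersection and per-box areas (clamped at zero for non-overlapping pairs)
    inter = np.maximum(0, xB - xA + 1) * np.maximum(0, yB - yA + 1)
    areaA = (boxesA[:, 2] - boxesA[:, 0] + 1) * (boxesA[:, 3] - boxesA[:, 1] + 1)
    areaB = (boxesB[:, 2] - boxesB[:, 0] + 1) * (boxesB[:, 3] - boxesB[:, 1] + 1)

    # intersection over union for every row
    return inter / (areaA + areaB - inter)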
Happy hacking!
What's next? We recommend PyImageSearch University.
84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024
★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled
I strongly believe that if you had the right teacher you could master computer vision and deep learning.
Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science?
That’s not the case.
All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.
If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery.
Inside PyImageSearch University you'll find:
- ✓ 86 courses on essential computer vision, deep learning, and OpenCV topics
- ✓ 86 Certificates of Completion
- ✓ 115+ hours of on-demand video
- ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques
- ✓ Pre-configured Jupyter Notebooks in Google Colab
- ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!)
- ✓ Access to centralized code repos for all 540+ tutorials on PyImageSearch
- ✓ Easy one-click downloads for code, datasets, pre-trained models, etc.
- ✓ Access on mobile, laptop, desktop, etc.
Summary
In this blog post I discussed the Intersection over Union metric used to evaluate object detectors. This metric can be used to assess any object detector provided that (1) the model produces predicted (x, y)-coordinates [i.e., the bounding boxes] for the object(s) in the image and (2) you have the ground-truth bounding boxes for your dataset.
Typically, you’ll see this metric used for evaluating HOG + Linear SVM and CNN-based object detectors.
To learn more about training your own custom object detectors, please refer to this blog post on the HOG + Linear SVM framework along with the PyImageSearch Gurus course where I demonstrate how to implement custom object detectors from scratch. If you’d like to dive deeper, consider studying computer vision with our free course.
Finally, before you go, be sure to enter your email address in the form below to be notified when future PyImageSearch blog posts are published — you won’t want to miss them!
Download the Source Code and FREE 17-page Resource Guide
Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL!
wajih
I was translating a code, was wondering of IoU, and now really I OWE YOU ONE 🙂 Thanks for the explanation. Helped me finish the translation in a breeze 🙂
Adrian Rosebrock
Awesome, I’m happy I could help Wajih! 🙂
Anne
Ha, I love the play of words… I o U 1 XD
Walid Ahmed
Thanks a lot.
This was really in time for me
however, I am still clueless how to build a classifier for object detection.
I already built my classifier for object classification using CNN.
Any advice?
Adrian Rosebrock
Hey Walid — I would suggest starting by reading about the HOG + Linear SVM detector, a classic method used for object detection. I also demonstrate how to implement this system inside the PyImageSearch Gurus course. As for using a CNN for object detection, I will be covering that in my next book.
kiran
Awesome. You made it pretty simple to understand. I request you to post a similar blog on evaluation metrics for object segmentation tasks as well. Thank you.
Adrian Rosebrock
Most of my work focuses on object detection rather than pixel-wise segmentations of an image but I’ll certainly consider it for the future.
jsky
I implemented this in my car detection framework for my FYP too.
Its called the Jaccard Index, and is a standard measure for evaluating such record retrieval.
https://en.wikipedia.org/wiki/Jaccard_index
abby
thank you for the tutorial
Adrian Rosebrock
No problem, happy I can help!
Aamer
what if our hog+svm model predicts multiple bounding boxes.
In that case, will we iterate over all such predicted bounding boxes and see for the one which gets the max value for the Intersection/Union ratio ?
Adrian Rosebrock
If your detector predicts multiple bounding boxes for the same object then you should be applying non-maxima suppression. If you are predicting bounding boxes for multiple objects then you should be computing IoU for each of them.
elsa
How do I compute IoU for each of them? Thanks.
Adrian Rosebrock
You loop over the detected bounding boxes and compute IoU for each. You may decide to associate bounding boxes with ground-truth bounding boxes by computing the Euclidean distance between their respective centroids. Objects that minimize the distance should be associated together.
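A minimal sketch of that association step, assuming lists of (startX, startY, endX, endY) boxes and the bb_intersection_over_union function from this post (the greedy nearest-centroid matching here is just the simplest strategy, not the only one):

import numpy as np

def centroid(box):
    # center point of a (startX, startY, endX, endY) box
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def match_detections(gt_boxes, pred_boxes):
    # greedily pair each ground-truth box with the closest unused prediction,
    # then report the IoU for every matched pair
    matches, used = [], set()
    for gt in gt_boxes:
        (gx, gy) = centroid(gt)
        best, bestDist = None, float("inf")
        for (i, pred) in enumerate(pred_boxes):
            if i in used:
                continue
            (px, py) = centroid(pred)
            d = np.hypot(gx - px, gy - py)
            if d < bestDist:
                best, bestDist = i, d
        if best is not None:
            used.add(best)
            matches.append((gt, pred_boxes[best],
                bb_intersection_over_union(gt, pred_boxes[best])))
    return matches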
Miej
This code fails. Specifically, it gets broken when comparing two non-overlapping bounding boxes by providing a non-negative value for interArea when the boxes can be separated into diagonally opposing quadrants by a vertical and a horizontal line. This could be easily remedied with a simple catch for such cases.
Miej
that said, it’s still quite handy. thanks!
auro
Do we need to consider the case where the two boxes to not intersect at all?
Look up the MATLAB code at https://github.com/rbgirshick/voc-dpm/blob/master/utils/boxoverlap.m
Adrian Rosebrock
Good point, thank you for pointing this out Auro.
Rimphy Darmanegara
So, in case of negative result just return zero?
Thank you for the code.
Adrian Rosebrock
Correct. In case of negative (non-overlapping) objects the return value would be zero.
Anivini
How to give bounding box parameters from line 33 to 38?
Adrian Rosebrock
These bounding boxes were obtained from a custom object detector I trained inside the PyImageSearch Gurus course.
Jere
This is helpful knowledge. You have made me understand about the method IoU used in fast rcnn. Thanks you are one of the best people in explaining concepts easily
Adrian Rosebrock
Thank you for the kind words Jere 🙂
sabrine
Good evening, I worked with the HOG detector and used Matlab software. I want to calculate the rate of overlap between the ground-truth detections and my detection file. Please help me.
Adrian Rosebrock
If you have the predictions from your MATLAB detector you can either (1) write them to file and use my Python code to read them and compare them to the ground-truth or (2) implement this function in MATLAB.
sabrine
OK thanks
I will try to implement on matlab
Because I do not know how to use python
Anivini
I am confused about detection.gt[:2] and detection.gt[2:] in lines 47 to 50. What is actually specified by :2 and 2:? I surfed but couldn’t get an answer.
Adrian Rosebrock
These are called array slices.
goingmyway
Awesome post. A clear explanation of IoU.
Adrian Rosebrock
Thank you, I’m happy to hear you enjoyed it! 🙂
secret
Hey I would like to know how to compute the repeatability factor ,corresponding count and recall and precision to evaluate feature detector in python
I would like to know is there a function in python similar to the one in C++ if not then how do I proceed
Paarijaat Aditya
Very helpful post! Thanks! A small correction, line 17 should be:
interArea = max(0, xB - xA + 1) * max(0, yB - yA + 1)
This is to avoid those corner cases where the rectangles are not overlapping but the intersection area still computes to be greater than 0. This happens when both the brackets in the original line 17 are negative. For e.g. boxA is on the upper right of the picture and boxB is somewhere on the lower left of the picture, without any overlap and with significant separation between them.
Adrian Rosebrock
Thanks for sharing Paarijaat!
Evan
Hey,
I am still confused as to why you add 1 in line 17: (xB - xA + 1) * (yB - yA + 1).
Doesn’t xB-xA give the width of the intersecting area and same with yB-yA? What does adding one do?
Thanks!
Adrian Rosebrock
Adding one simply prevents the numerator from being zero.
Danny
The actual reason of adding 1 is because xB, xA both represent pixel coordinates. Suppose you have 6 pixels, the coordinates are from 0 to 5. When you try to calculate the span of these 6 pixels, it should be (5-0+1) = 6, there you have the extra 1.
Johannes
Thanks, helped me out understanding the YOLO9000 paper.
Adrian Rosebrock
Fantastic, I’m glad to hear it Johannes!
Mohammad
Hey Adrian,
I tested your code but I think it is a little buggy:
Assume that we have two below bounding boxes: (the structure of bounding boxes are (x1,y1,x2,y2), it is just a tuple in python. (x1,y1) denotes to the top-left coordinate and (x2,y2) denotes to the bottom-right coordinate.)
boxA = (142,208,158,346)
boxB = (243,203,348,279)
Based on these BBoxes, the IoU should be zero. Because there is no intersection between them. But your code give a negative value! By the way, I should say that my origin is up-left corner of the image.
So how can I solve this problem?
Adrian Rosebrock
This issue is simple to resolve:
I will update the blog post in the future to reflect this change.
Byungsoo
@adrian That doesn’t look enough to resolve the issue. `interArea = (xB - xA + 1) * (yB - yA + 1)` could be positive when two terms are both negative. It should return 0 if either `(xB - xA + 1)` or `(yB - yA + 1)` is equal or less than 0.
Filippo
Please, fix this issue, IOU should be 0 when interArea is the product of two negative terms. Not that you owe anyone anything, but this is the first result on google when searching intersection over union, it would be great not to have to scroll down to the comments to find out the code is buggy. Paarijaat Aditya solution works pretty fine and handles both corner cases of non overlapping boxes.
Thanks for the nice tutorial!
Islam Saad
Many thanks Adrian for the great topic.
I want to ask: if my dataset’s ground truth is given as a contour showing the full object outline (not a rectangular box shape), is there another method for dealing with free-shape contours in order to compute IoU?
Adrian Rosebrock
Intersection over Union assumes bounding boxes. You can either convert your contour points to a bounding box and compute IoU or you can simply count the number of pixels in your mask that overlap with the detection.
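A rough sketch of both options, where contour, det_box, gt_mask, and pred_mask are placeholder inputs you would supply yourself and bb_intersection_over_union is the function from this post:

import cv2
import numpy as np

# option 1: convert the ground-truth contour to an upright bounding box,
# then reuse the box-based IoU function
(x, y, w, h) = cv2.boundingRect(contour)
gt_box = (x, y, x + w - 1, y + h - 1)
iou = bb_intersection_over_union(gt_box, det_box)

# option 2: pixel-wise IoU between two binary masks of the same shape
# (assumes at least one mask contains foreground pixels)
gt_mask = gt_mask.astype(bool)
pred_mask = pred_mask.astype(bool)
mask_iou = np.logical_and(gt_mask, pred_mask).sum() / \
    float(np.logical_or(gt_mask, pred_mask).sum())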
Javier
I based on your simple implementation to port it to Tensorflow to create the IoU matrix of two sets of bounding boxes:
https://gist.github.com/vierja/38f93bb8c463dce5500c0adf8648d371
Thanks!
Adrian Rosebrock
Great job Javier!
MMD
Hi Adrian, I am interested in your project but I am not a developer.
I would like you to develop a project for me; the intended purpose is mainly to classify vehicles based on length, height, and axle count.
Anyone interested in this commercial project is welcome.
Thanks
Adrian Rosebrock
I personally have too much on my plate to take on any new projects right now but I would suggest you post on PyImageJobs where there are thousands of OpenCV developers who can help you build your project.
K M Ibrahim Khalilullah
Thanks for this tutorial
K M Ibrahim Khalilullah
How can I calculate precision-recall from IoU?
Walid Ahmed
Thanks
But I found something strange
box1=[265.0, 103.0, 372.0, 268.0]
box2=[12, 34, 32, 61]
has a high value of IOU based on the code although they are exclusive
how Can this issue be resolved?
Thanks
Oyesh
Very insightful. Loved your explanation.
Adrian Rosebrock
Thanks Oyesh! 🙂
hema
Hi Adrian Rosebrock,
I am trying a HOG descriptor with SVM classifier to detect objects. And after creating the annotation xml file, if i use it for training it gives me the following error: “An impossible set of object boxes was given for training. All the boxes need to have a similar aspect ratio and also not be smaller than 400 pixels in area…” How do i fix it??
Thanks in advance
Adrian Rosebrock
I assume you are using dlib to train your own custom object detector? The HOG + Linear SVM implementation in dlib requires your bounding boxes to have a similar aspect ratio (i.e., similar width and height of the ROI). It sounds like your objects do not have this or you have an invalid bounding box in your annotations file.
Mohamed Judi
Hi Adrian, thank you so much for this excellent blog. It explains IoU very well. It is a gift to find someone, not only knows his stuff but also knows how to explain it to people in simple terms.
I’m working on the Kaggle’s 2018 Data Science Bowl competition with very little knowledge of deep learning, let alone Biomedical Image Segmentation and U-Nets 🙂 However, I’m taking this challenge to learn. Winning is secondary at this stage of my career.
In your blog, you are explaining how to calculate IoU for rectangular shapes. What about irregular shapes like the masks of cells or nucleus? They are not perfectly rectangular and therefore the formula to calculate the area is not quite useful.
I would be very grateful if you could help me calculate the IoU for each threshold, as well as the IoU mean over multiple thresholds.
I know this might be too much to ask but I’m willing to discuss options offline if your time permits.
Thank you!
MJ
Adrian Rosebrock
There are two components you need to consider here (as is true with object detection): precision and recall. You first need to detect the correct object. In the case of object detection and semantic segmentation, this is your recall. For example, if you detect a “cat” but the actual label is a dog, then your recall score goes down.
To measure the precision (accuracy) of the detection/segmentation we can use IoU. Exactly how IoU is used for segmentation depends on the challenge so you should consult the Kaggle documentation and/or evaluation scripts but typically it’s the ratio of ground-truth mask and the predicted mask.
Mohamed Judi
Thank you, Adrian!
Qaisar Tanvir
How can we do this with polygons? where bounding box may be a little rotated (RBOX). i dont have the angle of rotation. So my output bounding box cannot be drawn with top-left and bottom-right. it has four points. Any help or suggestion will be highly appreciated guys
Adrian Rosebrock
Why not convert your rotated bounding box to a “normal” bounding box and then compute IoU on the non-rotated version?
Glen
Hi.
I think there will be a problem when two boxes have no intersection area at all. The computation for interArea could be potentially problematic.
I suggest changing from ‘interArea = (xB - xA + 1) * (yB - yA + 1)’
to
interArea = max(0, (xB - xA + 1)) * max(0, (yB - yA + 1))
Adrian Rosebrock
Indeed, you are right Glen. I will get the code updated.
Johannes
Hi, I am not 100% sure, but I think that your code tends to overestimate the IoU.
Given two boxes [[0,0], [10, 10]] and [[1,1], [11, 11]]. The area of intersection should be 81. However, with your function it computes 100. The same problem occurs with the box size which are obviously 100 but are computed as 121. Your code gives an IoU of 0.7, while the analytical solution should be 0.68…
Maybe you could clarify, why you are always adding +1 to the coordinates. This somehow seems to deviate from the standard. Used in many other implementations.
Adrian Rosebrock
The “+1” here is used to prevent any division by zero errors. That said, I will reinvestigate this implementation in the near future.
ptyshevs
Hi, I’ve decided to check computations of IoU by hand and it seems that “+1” in your code is responsible for the incorrect result.
Let bboxA = [0, 0, 2, 2], bboxB = [1, 1, 3, 3], then Union(bboxA, bboxB) = 7, Intersection(bboxA, bboxB) = 1, yielding IoU = 1/7 = 0.1428…
Your version will give 0.2857…
I suggest the following snippet:
interArea = abs((xB - xA) * (yB - yA))
if interArea == 0:
return 0
# compute the area of both the prediction and ground-truth
# rectangles
boxAArea = abs((boxA[2] - boxA[0]) * (boxA[3] - boxA[1]))
boxBArea = abs((boxB[2] - boxB[0]) * (boxB[3] - boxB[1]))
Johannes
Hi,
I made a github gist with the correct implementation of the bb_intersection_over_union feel free to check it out:
https://gist.github.com/meyerjo/dd3533edc97c81258898f60d8978eddc
The correction of ptyshevs is almost correct. However, it does not handle the cases in which boxes have no overlap.
Cheers,
Johannes
prb
@Johannes will this code work for one rectangle inside other?
JP
Hi,
What happens if the ground truth bounding box is much larger than the actual object? Sometimes when using the automation option in the ground truth labeler app in Matlab the bounding box will grow and shrink depending on what the object is doing.
If you have a low-level object detector then you should have a predicted bounding box that tightly encloses the object’s contour but because the ground truth box is much bigger, your IoU score will be very low. Is there any way to address this?
Adrian Rosebrock
The ground-truth bounding box is just a set of coordinates, it has absolutely no knowledge regarding the size of the actual object itself. IoU would have to operate on the ground-truth bounding boxes and assume they are correct. If they are not then you should re-label/adjust your data.
prb
@ Adrian Thanks for the great tutorial. What if there are multiple bounding boxes of different objects. I wanted to find the overlap between different bbs for other detected objects.
Adrian Rosebrock
Loop over your combinations of the bounding boxes and apply IoU to each of them.
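A minimal sketch of that loop, assuming boxes is a list of (startX, startY, endX, endY) tuples for the detected objects and bb_intersection_over_union is the function from this post:

from itertools import combinations

# compute IoU for every unordered pair of detected bounding boxes
for (boxA, boxB) in combinations(boxes, 2):
    print(boxA, boxB, bb_intersection_over_union(boxA, boxB))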
Kartik Podugu
Nice elaborate explanation.
Adrian Rosebrock
Thanks Kartik!
jeffinaustin
It’s a great explanation, but other than reviving some old math (Jaccard) why would IoU be better in any way than just the ratio of the intersection over the correct ground truth area? Has anyone showed there IS a reason to reach back for some old math, or is a method that would more tightly award correctness (with simpler code) be just as good?
Adrian Rosebrock
Jaccard and the Dice coefficient are sometimes used for measuring the quality of bounding boxes, but more typically they are used for measuring the accuracy of instance segmentation and semantic segmentation.
Aditya Singh
Hi Adrian,
What should I do, if on my test data, in some frames , for some objects the bounding boxes aren’t predicted, but they are present in the ground truth labels. The interArea would be zero, but the loss should be high in this case.
Mina
Hi Adrian,
Thank you for the great post!
I have a question. My main problem is segmentation, but I’d like to detect the object first, and then segment it. I have better results this way rather than end- to- end segmentation.
I have developed the object detection algorithm, and now I’d like to segment the detected objects. But in some cases the detected bounding box is smaller than the true (ground-truth) box. This will effect the segmentation accuracy. What do you suggest to solve this problem? shall I consider a larger box when I want to do the segmentation? Is it a good idea to for example double the size of the detected box before feeding the segmentation network?
Thank you
Han-Cheol Cho
Thank you for a great article on every aspect of IoU 🙂
Adrian Rosebrock
You are welcome!
Sahi Chachere
I am training custom object detector from yolov3, calculated anchors using darknet and got avg IoU = 69.74, I have 6 classes in my dataset and total images of 10,225, how can I improve IoU for my dataset.
Adrian Rosebrock
Take a look at Deep Learning for Computer Vision with Python where I provide my tips, suggestions, and best practices for training your own custom object detectors.
Malhar
Nice post, just wanted to point out that Figure 1 and 2 are incorrectly captioned ?
They mention “Intersection of Union” instead of “Intersection Over Union”
Adrian Rosebrock
Thanks for pointing out the typo! I’ve gone ahead and fixed them.
Krone
Hello, your blog is really good. May I translate it in Chinese and put it on my blog@csdn.
Adrian Rosebrock
Refer to my FAQ for translation requests.
Imane
Hello, thanks for your post it was very helpful to me to understand IoU. I would like to apply it to evaluate the precision of my window detector in façades, and i have many bounding boxes at each image. My question is how can i do that ? and how do you extract ground truth boxes coordinates (I use LabelImg to draw them) ? Thanks in advance, this will help me alot with my master thesis.
cww
I love this tutorials.
Adrian Rosebrock
Thanks, I’m glad you’re enjoying them.