Well. I’ll just come right out and say it. Today is my 27th birthday.
As a kid I was always super excited about my birthday. It was another year closer to being able to drive a car. Go to R rated movies. Or buy alcohol.
But now as an adult, I don’t care too much for my birthday — I suppose it’s just another reminder of the passage of time and how it can’t be stopped. And to be totally honest with you, I guess I’m a bit nervous about turning the “Big 3-0” in a few short years.
In order to rekindle some of that “little kid excitement”, I want to do something special with today’s post. Since today is both a Monday (when new PyImageSearch blog posts are published) and my birthday (two events that will not coincide again until 2020), I’ve decided to put together a really great tutorial on texture and pattern recognition in images.
In the remainder of this blog post I’ll show you how to use the Local Binary Patterns image descriptor (along with a bit of machine learning) to automatically classify and identify textures and patterns in images (such as the texture/pattern of wrapping paper, cake icing, or candles, for instance).
Read on to find out more about Local Binary Patterns and how they can be used for texture classification.
PyImageSearch Gurus
The majority of this blog post on texture and pattern recognition is based on the Local Binary Patterns lesson inside the PyImageSearch Gurus course.
While the lesson in PyImageSearch Gurus goes into a lot more detail than what this tutorial does, I still wanted to give you a taste of what PyImageSearch Gurus — my magnum opus on computer vision — has in store for you.
If you like this tutorial, there are over 29 lessons spanning 324 pages covering image descriptors (HOG, Haralick, Zernike, etc.), keypoint detectors (FAST, DoG, GFTT, etc.), and local invariant descriptors (SIFT, SURF, RootSIFT, etc.) inside the course.
At the time of this writing, the PyImageSearch Gurus course also covers an additional 166 lessons and 1,291 pages including computer vision topics such as face recognition, deep learning, automatic license plate recognition, and training your own custom object detectors, just to name a few.
If this sounds interesting to you, be sure to take a look and consider signing up for the next open enrollment!
What are Local Binary Patterns?
Local Binary Patterns, or LBPs for short, are a texture descriptor made popular by the work of Ojala et al. in their 2002 paper, Multiresolution Grayscale and Rotation Invariant Texture Classification with Local Binary Patterns (although the concept of LBPs was introduced as early as 1993).
Unlike Haralick texture features that compute a global representation of texture based on the Gray Level Co-occurrence Matrix, LBPs instead compute a local representation of texture. This local representation is constructed by comparing each pixel with its surrounding neighborhood of pixels.
The first step in constructing the LBP texture descriptor is to convert the image to grayscale. For each pixel in the grayscale image, we select a neighborhood of size r surrounding the center pixel. An LBP value is then calculated for this center pixel and stored in an output 2D array with the same width and height as the input image.
For example, let’s take a look at the original LBP descriptor which operates on a fixed 3 x 3 neighborhood of pixels just like this:
In the above figure we take the center pixel (highlighted in red) and threshold it against its neighborhood of 8 pixels. If the intensity of the center pixel is greater-than-or-equal to its neighbor, then we set the value to 1; otherwise, we set it to 0. With 8 surrounding pixels, we have a total of 2 ^ 8 = 256 possible combinations of LBP codes.
From there, we need to calculate the LBP value for the center pixel. We can start from any neighboring pixel and work our way clockwise or counter-clockwise, but our ordering must be kept consistent for all pixels in our image and all images in our dataset. Given a 3 x 3 neighborhood, we thus have 8 neighbors that we must perform a binary test on. The results of this binary test are stored in an 8-bit array, which we then convert to decimal, like this:
In this example we start at the top-right point and work our way clockwise accumulating the binary string as we go along. We can then convert this binary string to decimal, yielding a value of 23.
This value is stored in the output LBP 2D array, which we can then visualize below:
This process of thresholding, accumulating binary strings, and storing the output decimal value in the LBP array is then repeated for each pixel in the input image.
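To make the thresholding and accumulation steps concrete, here is a minimal sketch of the original 3 x 3 operator for a single pixel. The bit ordering (least significant bit first, starting at the top-right and moving clockwise) is an arbitrary but consistent choice:

# a sketch of the original 3 x 3 LBP test for a single pixel;
# assumes `gray` is a 2D NumPy array and (x, y) is not on the border
import numpy as np

def lbp_code(gray, x, y):
    center = gray[y, x]
    # 8 neighbors, starting at the top-right and moving clockwise
    neighbors = [gray[y - 1, x + 1], gray[y, x + 1], gray[y + 1, x + 1],
                 gray[y + 1, x], gray[y + 1, x - 1], gray[y, x - 1],
                 gray[y - 1, x - 1], gray[y - 1, x]]
    # set a bit to 1 when the center is >= the neighbor, as described above
    code = 0
    for bit, value in enumerate(neighbors):
        code |= (1 if center >= value else 0) << bit
    return code

# toy 3 x 3 neighborhood with a center value of 6
toy = np.array([[6, 5, 2],
                [7, 6, 1],
                [9, 8, 7]])
print(lbp_code(toy, 1, 1))  # 195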
Here is an example of computing and visualizing a full LBP 2D array:
The last step is to compute a histogram over the output LBP array. Since a 3 x 3 neighborhood has 2 ^ 8 = 256 possible patterns, our LBP 2D array thus has a minimum value of 0 and a maximum value of 255, allowing us to construct a 256-bin histogram of LBP codes as our final feature vector:
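In code, the histogram step for the original operator is a one-liner (a sketch; the random array below simply stands in for a real LBP code array):

import numpy as np

# stand-in for a real 2D array of LBP codes in [0, 255]
lbp = np.random.randint(0, 256, size=(128, 128))
(hist, _) = np.histogram(lbp.ravel(), bins=256, range=(0, 256))
print(hist.shape)  # (256,)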
A primary benefit of this original LBP implementation is that we can capture extremely fine-grained details in the image. However, being able to capture details at such a small scale is also the biggest drawback to the algorithm — we cannot capture details at varying scales, only the fixed 3 x 3 scale!
To handle this, an extension to the original LBP implementation was proposed by Ojala et al. to handle variable neighborhood sizes. To account for variable neighborhood sizes, two parameters were introduced:
- The number of points p in a circularly symmetric neighborhood to consider (thus removing the reliance on a square neighborhood).
- The radius of the circle r, which allows us to account for different scales.
Below follows a visualization of these parameters:
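In scikit-image (which we’ll use later in this post), these two parameters map directly onto the P and R arguments of local_binary_pattern. A quick sketch, assuming gray is a grayscale image:

from skimage import feature

# p = 24 points sampled along a circle of radius r = 8
lbp = feature.local_binary_pattern(gray, P=24, R=8, method="uniform")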
Lastly, it’s important that we consider the concept of LBP uniformity. An LBP is considered to be uniform if it has at most two 0-1 or 1-0 transitions. For example, the pattern 00001000 (2 transitions) and the pattern 10000000 (1 transition) are both considered to be uniform patterns since they contain at most two 0-1 and 1-0 transitions. The pattern 01010010, on the other hand, is not considered a uniform pattern since it has six 0-1 or 1-0 transitions.
The number of uniform prototypes in a Local Binary Pattern is completely dependent on the number of points p. As the value of p increases, so will the dimensionality of your resulting histogram. Please refer to the original Ojala et al. paper for the full explanation on deriving the number of patterns and uniform patterns based on this value. However, for the time being simply keep in mind that given the number of points p in the LBP there are p + 1 uniform patterns. The final dimensionality of the histogram is thus p + 2, where the added entry tabulates all patterns that are not uniform.
So why are uniform LBP patterns so interesting? Simply put: they add an extra level of rotation and grayscale invariance, hence they are commonly used when extracting LBP feature vectors from images.
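A quick way to convince yourself of the uniformity test is to count adjacent bit transitions directly, using the same (linear) counting convention as the examples above:

def transitions(pattern):
    # count adjacent 0-1 and 1-0 transitions in a bit string
    return sum(a != b for a, b in zip(pattern, pattern[1:]))

print(transitions("00001000"))  # 2 -> uniform
print(transitions("10000000"))  # 1 -> uniform
print(transitions("01010010"))  # 6 -> not uniform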
Local Binary Patterns with Python and OpenCV
Local Binary Pattern implementations can be found in both the scikit-image and mahotas packages. OpenCV also implements LBPs, but strictly in the context of face recognition — the underlying LBP extractor is not exposed for raw LBP histogram computation.
In general, I recommend using the scikit-image implementation of LBPs as they offer more control of the types of LBP histograms you want to generate. Furthermore, the scikit-image implementation also includes variants of LBPs that improve rotation and grayscale invariance.
Before we get started extracting Local Binary Patterns from images and using them for classification, we first need to create a dataset of textures. To form this dataset, earlier today I took a walk through my apartment and collected 20 photos of various textures and patterns, including an area rug:
Notice how the area rug images have a geometric design to them.
I also gathered a few examples of carpet:
Notice how the carpet has a distinct pattern with a coarse texture.
I then snapped a few photos of the keyboard sitting on my desk:
Notice how the keyboard has little texture — but it does demonstrate a repeatable pattern of white keys and silver metal spacing in between them.
Finally, I gathered a few final examples of wrapping paper (since it is my birthday after all):
The wrapping paper has a very smooth texture to it, but also demonstrates a unique pattern.
Given this dataset of area rug, carpet, keyboard, and wrapping paper, our goal is to extract Local Binary Patterns from these images and apply machine learning to automatically recognize and categorize these texture images.
Let’s go ahead and get this demonstration started by defining the directory structure for our project:
$ tree --dirsfirst -L 3
.
├── images
│   ├── testing
│   │   ├── area_rug.png
│   │   ├── carpet.png
│   │   ├── keyboard.png
│   │   └── wrapping_paper.png
│   └── training
│       ├── area_rug [4 entries]
│       ├── carpet [4 entries]
│       ├── keyboard [4 entries]
│       └── wrapping_paper [4 entries]
├── pyimagesearch
│   ├── __init__.py
│   └── localbinarypatterns.py
└── recognize.py

8 directories, 7 files
The images/ directory contains our testing/ and training/ images.

We’ll be creating a pyimagesearch module to keep our code organized. And within the pyimagesearch module we’ll create localbinarypatterns.py, which, as the name suggests, is where our Local Binary Patterns implementation will be stored.
Speaking of Local Binary Patterns, let’s go ahead and create the descriptor class now:
# import the necessary packages
from skimage import feature
import numpy as np

class LocalBinaryPatterns:
    def __init__(self, numPoints, radius):
        # store the number of points and radius
        self.numPoints = numPoints
        self.radius = radius

    def describe(self, image, eps=1e-7):
        # compute the Local Binary Pattern representation
        # of the image, and then use the LBP representation
        # to build the histogram of patterns
        lbp = feature.local_binary_pattern(image, self.numPoints,
            self.radius, method="uniform")
        (hist, _) = np.histogram(lbp.ravel(),
            bins=np.arange(0, self.numPoints + 3),
            range=(0, self.numPoints + 2))

        # normalize the histogram
        hist = hist.astype("float")
        hist /= (hist.sum() + eps)

        # return the histogram of Local Binary Patterns
        return hist
We start off by importing the feature sub-module of scikit-image, which contains the implementation of the Local Binary Patterns descriptor.
Line 5 defines our constructor for our LocalBinaryPatterns class. As mentioned in the section above, we know that LBPs require two parameters: the radius of the pattern surrounding the central pixel, along with the number of points along the outer radius. We’ll store both of these values on Lines 8 and 9.
From there, we define our describe method on Line 11, which accepts a single required argument — the image we want to extract LBPs from.

The actual LBP computation is handled on Lines 15 and 16 using our supplied radius and number of points. The uniform method indicates that we are computing the rotation and grayscale invariant form of LBPs.
However, the lbp variable returned by the local_binary_pattern function is not directly usable as a feature vector. Instead, lbp is a 2D array with the same width and height as our input image — each of the values inside lbp ranges from [0, numPoints + 2]: one value for each of the numPoints + 1 possible rotation invariant prototypes (see the discussion of uniform patterns at the top of this post for more information), along with an extra dimension for all patterns that are not uniform, yielding a total of numPoints + 2 unique possible values.
Thus, to construct the actual feature vector, we need to make a call to np.histogram which counts the number of times each of the LBP prototypes appears. The returned histogram is numPoints + 2-dimensional, an integer count for each of the prototypes. We then take this histogram and normalize it such that it sums to 1, and then return it to the calling function.
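As a quick usage sketch (assuming gray is a grayscale image, loaded as in the driver script below):

desc = LocalBinaryPatterns(24, 8)
hist = desc.describe(gray)
print(hist.shape)  # (26,), i.e. numPoints + 2 bins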
Now that our LocalBinaryPatterns descriptor is defined, let’s see how we can use it to recognize textures and patterns. Create a new file named recognize.py, and let’s get coding:
# import the necessary packages
from pyimagesearch.localbinarypatterns import LocalBinaryPatterns
from sklearn.svm import LinearSVC
from imutils import paths
import argparse
import cv2
import os

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-t", "--training", required=True,
    help="path to the training images")
ap.add_argument("-e", "--testing", required=True,
    help="path to the testing images")
args = vars(ap.parse_args())

# initialize the local binary patterns descriptor along with
# the data and label lists
desc = LocalBinaryPatterns(24, 8)
data = []
labels = []
We start off on Lines 2-7 by importing our necessary packages. Notice how we are importing the LocalBinaryPatterns descriptor from the pyimagesearch sub-module that we defined above.
From there, Lines 10-15 handle parsing our command line arguments. We’ll only need two switches here: the path to the --training data and the path to the --testing data.
In this example, we have partitioned our textures into two sets: a training set of 4 images per texture (4 textures x 4 images per texture = 16 total images), and a testing set of one image per texture (4 textures x 1 image per texture = 4 images). The training set of 16 images will be used to “teach” our classifier — and then we’ll evaluate performance on our testing set of 4 images.
On Line 19 we initialize our LocalBinaryPatterns descriptor with numPoints=24 and radius=8.
In order to store the LBP feature vectors and the label names associated with each of the texture classes, we’ll initialize two lists: data to store the feature vectors and labels to store the names of each texture (Lines 20 and 21).
Now it’s time to extract LBP features from our set of training images:
# loop over the training images
for imagePath in paths.list_images(args["training"]):
    # load the image, convert it to grayscale, and describe it
    image = cv2.imread(imagePath)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    hist = desc.describe(gray)

    # extract the label from the image path, then update the
    # label and data lists
    labels.append(imagePath.split(os.path.sep)[-2])
    data.append(hist)

# train a Linear SVM on the data
model = LinearSVC(C=100.0, random_state=42)
model.fit(data, labels)
We start looping over our training images on Line 24. For each of these images, we load them from disk, convert them to grayscale, and extract Local Binary Pattern features. The label (i.e., texture name) is then extracted from the image path and both our labels and data lists are updated, respectively.
Once we have our features and labels extracted, we can train our Linear Support Vector Machine on Lines 36 and 37 to learn the difference between the various texture classes.
Once our Linear SVM is trained, we can use it to classify subsequent texture images:
# loop over the testing images
for imagePath in paths.list_images(args["testing"]):
    # load the image, convert it to grayscale, describe it,
    # and classify it
    image = cv2.imread(imagePath)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    hist = desc.describe(gray)
    prediction = model.predict(hist.reshape(1, -1))

    # display the image and the prediction
    cv2.putText(image, prediction[0], (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
        1.0, (0, 0, 255), 3)
    cv2.imshow("Image", image)
    cv2.waitKey(0)
Just as we looped over the training images on Line 24 to gather data to train our classifier, we now loop over the testing images on Line 40 to test the performance and accuracy of our classifier.
Again, all we need to do is load our image from disk, convert it to grayscale, extract Local Binary Patterns from the grayscale image, and then pass the features onto our Linear SVM for classification (Lines 43-46).
I’d like to draw your attention to hist.reshape(1, -1) on Line 46. This reshapes our histogram from a 1D array to a 2D array, allowing for the potential of multiple feature vectors to run predictions on.
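A quick shape check makes this concrete (a sketch, with numPoints=24):

print(hist.shape)                 # (26,)  -> a single 1D feature vector
print(hist.reshape(1, -1).shape)  # (1, 26) -> one sample with 26 features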
Lines 49-52 show the output classification to our screen.
Results
Let’s go ahead and give our texture classification system a try by executing the following command:
$ python recognize.py --training images/training --testing images/testing
And here’s the first output image from our classification:
Sure enough, the image is correctly classified as “area rug”.
Let’s try another one:
Once again, our classifier correctly identifies the texture/pattern of the image.
Here’s an example of the keyboard pattern being correctly labeled:
Finally, we are able to recognize the texture and pattern of the wrapping paper as well:
While this example was quite small and simple, it was still able to demonstrate that by using Local Binary Pattern features and a bit of machine learning, we are able to correctly classify the texture and pattern of an image.
What's next? We recommend PyImageSearch University.
86 total classes • 115+ hours of on-demand code walkthrough videos • Last updated: October 2024
★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled
I strongly believe that if you had the right teacher you could master computer vision and deep learning.
Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science?
That’s not the case.
All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.
If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery.
Inside PyImageSearch University you'll find:
- ✓ 86 courses on essential computer vision, deep learning, and OpenCV topics
- ✓ 86 Certificates of Completion
- ✓ 115+ hours of on-demand video
- ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques
- ✓ Pre-configured Jupyter Notebooks in Google Colab
- ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!)
- ✓ Access to centralized code repos for all 540+ tutorials on PyImageSearch
- ✓ Easy one-click downloads for code, datasets, pre-trained models, etc.
- ✓ Access on mobile, laptop, desktop, etc.
Summary
In this blog post we learned how to extract Local Binary Patterns from images and use them (along with a bit of machine learning) to perform texture and pattern recognition.
If you enjoyed this blog post, be sure to take a look at the PyImageSearch Gurus course, where the majority of this lesson was derived from.
Inside the course you’ll find over 166+ lessons covering 1,291 pages of computer vision topics such as:
- Face recognition.
- Deep learning.
- Automatic license plate recognition.
- Training your own custom object detectors.
- Building image search engines.
- …and much more!
If this sounds interesting to you, be sure to take a look and consider signing up for the next open enrollment!
See you next week!
Download the Source Code and FREE 17-page Resource Guide
Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL!
Neeraj
Happy Birthday Adrian! God bless you with happiness and peace in your life. Thanks as always for sharing knowledge with us and teaching us so many new things. I always love all your posts.
Adrian Rosebrock
Thanks so much Neeraj!
Arun
Really great article.
Charles
Hi Adrian.
Thanks for this material. Great text, like always. And btw, happy birthday to you! 🙂
Adrian Rosebrock
Thanks Charles! 🙂
michael
happy birthday adrian i hope this can classify all your presents !!!
Adrian Rosebrock
Thanks Michael!
Sveder
Great article, thank you!
Any chance to get the images you used to train this and the test images?
Thanks!
Nevermind, found it, kinda looks like every other generic “sign up here” so ignored it.
Adrian Rosebrock
Thanks for the feedback Sveder, I’ll see if I can make the form stand out more in the future.
Sveder
No problem, I got it to work, great job. I was especially curious how well it would do with different types of keyboards (or carpets etc) and it worked amazingly.
One thing though – you should use the os.path module instead of splitting by “/” (specifically this line: labels.append(imagePath.split("/")[-2])), as it doesn’t work like you expect on Windows, which uses \ as its path delimiter.

(For completeness, I changed the above line to be: labels.append(os.path.split(os.path.dirname(imagePath))[-1]).)
Once again – thank you for the awesome post!
Adrian Rosebrock
Thanks for the tip Sveder. I honestly haven’t used a Windows system in over 9 years so I usually miss those Windows nuances.
Abu
Thanks for the awesome tutorial! Love reading your blog.
Adrian Rosebrock
Thanks Abu! 🙂
suyuancheng
Hi, Adrian. I have a question: as far as I know, an SVM can only classify two kinds of data, so I think a neural network would be better to use with the LBP features.
Adrian Rosebrock
The original implementation of SVMs was only intended for binary classification (i.e., two classes); however, modern implementations of SVMs (such as the one used in this post) can handle multi-class data without a problem.
suyuancheng
Ooh, I learned something new, thanks!
abu
i have problem to create the pyimagesearch module.. can u show steps on how u create the module??
Adrian Rosebrock
I would suggest downloading the source code under the “Downloads” section of this post. The source code download will properly explain how to create the PyImageSearch module. All you need to do is create a pyimagesearch directory and then place an __init__.py file inside of it.

abu
Is it possible to separate the code for training and testing? For example, I run the training first and after that I run the testing.
Adrian Rosebrock
Sure, that’s not an issue at all. Once the model has been trained, dump it to file using pickle or cPickle:
You can then load it from disk in a separate Python script:
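A minimal sketch of that workflow (the filename is hypothetical):

import pickle

# in the training script, after model.fit(...):
with open("model.pickle", "wb") as f:
    pickle.dump(model, f)

# later, in a separate testing script:
with open("model.pickle", "rb") as f:
    model = pickle.load(f)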
Dharmendra
On which platform is this code running? I am trying it in Spyder but it is giving an error. Please provide some suggestions.
Adrian Rosebrock
I would suggest you try to execute the code first via the command line. Then you can update your IDE settings.
Pradeep
Adrian, can you please also share how to use LBP to train an object detector? I googled but don’t see any simple, concrete example
Adrian Rosebrock
You normally wouldn’t use LBPs strictly for object detection. Normally HOG + Linear SVM is better suited for object detection. Is there a reason why you would like to use LBPs for object detection?
Zheng Rui
Thanks Adrian, very nice tutorial as usual. One thing I found is with the histogram returned from the LocalBinaryPatterns class: if you set bins=np.arange(0, self.numPoints + 2) in np.histogram(), the number of bins returned will be only self.numPoints + 1 rather than self.numPoints + 2, as np.arange(0, self.numPoints + 2) will generate [0, 1, …, self.numPoints + 1], which produces the bins [0, 1), [1, 2), …, [self.numPoints, self.numPoints + 1] for np.histogram().

Either using bins=self.numPoints + 2 or bins=np.arange(0, self.numPoints + 3) will return self.numPoints + 2 bins.
Adrian Rosebrock
Thanks for pointing this out! The code was correct in the “Downloads”, but not in the actual blog post itself. The actual code should read:
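(hist, _) = np.histogram(lbp.ravel(),
    bins=np.arange(0, self.numPoints + 3),
    range=(0, self.numPoints + 2))

(This matches the describe method shown above.)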
Wanderson
Wow, you’re the best!
Adrian, what would be the best solution for counting crowds in a small place? For example, counting passengers crossing the train door (camera on the ceiling and outside of the train). The input data would be the tops of heads and/or shoulders.
Adrian Rosebrock
For simple counting with a fixed camera, basic motion detection and background subtraction will work well. I would recommend starting with this post.
Wanderson
Hi Adrian,
It all worked!
Nice work!
But instead of using static images, how do you train, test, and classify with video cameras?
Adrian Rosebrock
You’ll want to train your classifier using static images/frames. But classifying with a video can easily be accomplished by modifying the code to access the video stream. I recommend using this blog post as a starting point.
danieal
Hi Adrian, really great work. But my question is: if I want to compute the LBP for the first pixel in your first example (the 5), how do I do this? Do you mind explaining it, please?
Adrian Rosebrock
Are you referring to pixels on the “border” of the image that therefore do not have a true neighborhood? If so, just pad the image with zeros so that you have pixels to fit the neighborhood. Other types of padding include replication, where you “replicate” the pixels along the border to create the neighborhood. You can also “wrap around” and use the pixel values from the opposite side of the image. But in general, zero padding is normally used.
danieal
thank u very much 🙂
mahmod
ImportError: No module named scipy.ndimage
Has anyone faced this issue (OS X 10.11) ??
Adrian Rosebrock
You need to install scipy and likely scikit-image:
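$ pip install scipy scikit-image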
mahmod
Thanks man, you rock!
I needed to install the following:
`scipy matplotlib scikit-learn imutils`
mahmod
Hi again,
The code is working as expected; however, the following warning is thrown:
python2.7/site-packages/sklearn/utils/validation.py:386: DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and will raise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.

So in the future, it will start throwing exceptions!? Any idea how to avoid that?
Adrian Rosebrock
There are two ways to avoid this issue. The first is to re-shape the feature vector:
prediction = model.predict(hist.reshape(1, -1))[0]

The second is to simply wrap hist as a list:

prediction = model.predict([hist])[0]
Both options will give the desired result.
Aquib
Hey, Adrian how can I use LBP for face recognition?
Adrian Rosebrock
I cover how to use LBPs for face recognition inside the PyImageSearch Gurus course. Be sure to take a look!
Paul G.
Adrian, how can I output the picture representation of the LBP Histogram?
Adrian Rosebrock
I would suggest using matplotlib. Plot the bins of the LBPs on the x-axis and the counts on the y-axis. I further detail Local Binary Patterns inside the PyImageSearch Gurus course.
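A sketch with matplotlib, assuming hist comes from the describe method above (26 bins for numPoints=24):

import numpy as np
import matplotlib.pyplot as plt

plt.bar(np.arange(len(hist)), hist)
plt.xlabel("LBP prototype (bin)")
plt.ylabel("Normalized count")
plt.show()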
Srinidhi Bhat
Hi Adrian, great set of tutorials, please keep them coming!
Just one doubt: I have applied LBPs on a set of faces and extracted a histogram, but how do you get the face image back, as you have shown earlier in the tutorial above? Kindly guide.
Adrian Rosebrock
If you’re using simple LBPs with 2^8 = 256 possible combinations of LBP codes, then you just take the output of the LBP operation, change it to an unsigned 8-bit integer data type, and display it. See the PyImageSearch Gurus course for more details.
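A minimal sketch of that visualization (the filename is hypothetical):

import cv2
from skimage import feature

gray = cv2.imread("texture.png", cv2.IMREAD_GRAYSCALE)
lbp = feature.local_binary_pattern(gray, 8, 1, method="default")
lbp = (255 * (lbp / lbp.max())).astype("uint8")  # rescale to [0, 255]
cv2.imshow("LBP", lbp)
cv2.waitKey(0)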
Nisha
Hello Adrian, is it possible to combine CamShift and LBP to track an object efficiently in a live video in Python?
Adrian Rosebrock
CamShift is typically used for color histograms. For objects that cannot be tracked based on color, I would instead use something like correlation tracking.
sonic
Thanks for this tutorial. Actually thanks for the whole website 🙂
I have one additional question. I don’t want to classify pictures, but extract small area with texture, calculate lbp histogram and then try to match histograms and find similar textures in the entire image. Something similar to Opencv Back Projection for color histograms. And actually I am trying to play with calcBackProject() function, but I have trouble with data types and can’t make it work.
Other solution on my mind is to calculate the lbp histogram on the template image, and then manually iterate through picture (like we do for convolution), calculate lbp histogram for every region, compare that with template histogram using compareHist() and Chi-Square, and declare similarity. But that would be pretty coarse. Any other option?
Adrian Rosebrock
This certainly sounds like a texture matching problem, which I admittedly don’t have much experience in. Using a combination of image pyramids and sliding windows you can get a scale- and location-independent method for matching textures, but this by definition is very computationally expensive. I would suggest starting with this method and seeing how far it gets you. I would also suggest treating the problem as a texture connected-component labeling problem.
sonic
Thanks for the answer.
One more complication is the fact that I want to do this for a live video feed, on an ARM processor 🙂

I implemented the naive method: getting the LBP histogram for a template region, and then manually iterating through patches of the image, calculating the LBP histogram for each, comparing histograms, and then setting the whole region to 0 or 255 depending on the Chi-Square distance. The result is not great: 1) manual iteration through the image is slow, just a few fps (but there has to be some way to vectorize that operation); 2) the result is coarse (I am using 10×10 blocks on a 240×320 image) and kind of looks like an edge detector.

Oh well. I’ll try to play with it a bit more before discarding the idea.
Adrian Rosebrock
Correct, this method will be painfully slow due to having to loop over the entire image. In some cases, you might be able to speed this up by implementing the function in C/C++ and then calling the method from Python.
David
Hi, this is exactly what I want to do as well. Did you get far with this approach? Any tips for texture matching and/or texture segmentation in OpenCV?
Sarang
Hi Adrian,
Can we print the prediction accuracy (%) on the training samples?
If yes, how?
Adrian Rosebrock
You can apply the .score method of the model. For example:

print(model.score(trainData, trainLabels))
ezza
Hi,
What is this input “trainData”? It is not in the code.
Adrian Rosebrock
I meant to say print(model.score(data, labels)). I use “trainData” as a variable name in other blog posts.

Abdul Baset
Hi, Adrian.
Thanks for the lovely post. I downloaded your code and ran it, but I can only see the area_rug image classified in the output; the rest of the images are not showing. I also get a warning after I run the scripts, the same one someone else pointed out:

“DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and will raise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.”

Is this the reason only one class is being classified?
Adrian Rosebrock
That is very strange regarding only the “area rug” class being utilized — that should not happen. I haven’t heard of that happening before either. As for the DeprecationWarning, that can be resolved by wrapping the LBP hist as a list before prediction:

prediction = model.predict([hist])[0]

Or by reshaping the array via NumPy:

prediction = model.predict(hist.reshape(1, -1))[0]
Abdul Baset
OK, I’m sorry… it was my bad… it’s working fine now!

One more question: I’m working on a college project and I need to extract only the eyes and lips using Local Binary Patterns. Can you give me a lead as to how I can do that, and in what format are the features stored after extraction?
Adrian Rosebrock
LBP features are simply NumPy arrays. You can write them to file using cPickle, HDF5, or simple CSV. For what it’s worth, I demonstrate how to train custom object detectors inside the PyImageSearch Gurus course.
Romanzo
Hey Adrian,
Thanks for the great post.
Regarding the NumPy histogram, I am not sure about your code. Shouldn’t the number of bins be equal to p + 2 (not p + 3)? There are p + 1 bins for uniform patterns and 1 bin for non-uniform patterns (p + 2 bins in total). And why is the range equal to [0, p + 2] and not the number of pixels in the image?

Also, do you get the number of uniform patterns equal to p + 1 because it’s rotation invariant? Otherwise it would be p * (p – 1) + 2 (which equals 58 for p = 8).
Thanks.
Adrian Rosebrock
The code can be a bit confusing due to subtleties in the range and np.histogram functions.

To start, the range function is not inclusive on the upper bound, therefore we have to use p + 3 instead of p + 2 to get the proper number of bins. Open up a Python shell and play with the range function to confirm this for yourself.

The range parameter to np.histogram is p + 2 because there are p + 1 uniform patterns. We then need to add an extra bin to the histogram for all non-uniform patterns.

For more information on LBPs, please see the PyImageSearch Gurus course.
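You can verify the bin count in a couple of lines (a sketch for p = 24):

import numpy as np

p = 24
bins = np.arange(0, p + 3)  # [0, 1, ..., p + 2]; the upper bound is exclusive
print(len(bins) - 1)        # 27 edges -> 26 bins, i.e. p + 2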
Suresh
I have a problem with this error, please help me solve it:
from pyimagesearch.localbinarypatterns import LocalBinaryPatterns
ImportError: No module named pyimagesearch.localbinarypatterns
Adrian Rosebrock
Hey Suresh — make sure you download the source code to this blog post using the “Downloads” section in this post. The .zip archive of the code includes the exact directory and project structure to ensure the code works out-of-the-box. My guess is that your project structure is incorrect/does not include an __init__.py file in the pyimagesearch directory.

Suresh
Thank you Adrian, it is working fine now. That was my mistake: I was running just python recognize.py, when actually we need to run it at the command prompt ($) as: python recognize.py --training images/training --testing images/testing
Suresh
Which formula are you using for the SVM (classification)?
Adrian Rosebrock
I’m not sure what you mean by “formula”, but this is just an SVM with a linear kernel.
Sarvesh
Hey, thanks for the excellent tutorial! I downloaded the source code and ran it; however, it gives me the following error:

ValueError: This solver needs samples of at least 2 classes in the data, but the data contains only one class: ‘images’

Any idea how I should go about solving this?
Any help is appreciated 🙂
Thank you 🙂
Adrian Rosebrock
Hey Sarvesh — it sounds like the paths to your input images may be incorrect. What is the command that you are running to execute the script?
Aurora Guerra
Hello Adrian,
Your tutorials are very good; I’m learning a lot.
I have the same problem. How do I solve it?
Command: python recognize.py --training images/training --testing images/testing
OS: Windows
Thank you
Romanzo
Hi Adrian,
Just a note, If you are using local_binary_pattern from skimage, the value assigned to the centre pixel is the opposite of what you are describing in the blog.
In skimage it is: “If the intensity of the center pixel is greater-than-or-equal to its neighbor, then we set the value to 0; otherwise, we set it to 1”. You might want to keep everything uniform.
Adrian Rosebrock
Hey Romanzo — thanks for pointing this out.
Aritro Saha
Since I couldn’t find the comment I posted, this is the error I got:
DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and will raise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.
DeprecationWarning)
model.fit(data, labels)
ValueError: Found array with 0 feature(s) (shape=(1, 0)) while a minimum of 1 is required.
Adrian Rosebrock
First, it’s important to note that this isn’t an error (yet) — it’s a warning. Secondly, to resolve the issue, just follow the instructions in the warning:

prediction = model.predict(hist.reshape(1, -1))[0]

You can also just wrap the hist as a list:

prediction = model.predict([hist])[0]
Maheswari
What is the computation time for this program? How do I measure it?
Adrian Rosebrock
A quick, easy method to determine approximate computation time is to simply use the time command:

$ time python recognize.py --training images/training --testing images/testing
Milap Jhumkhawala
Hey Adrian, first of all great post, really useful.
I tried implementing the code and I am stuck with this error: “ValueError: Found array with 0 feature(s) (shape=(1,0)) while a minimum of 1 is required”. How do I get rid of this error?
Milap Jhumkhawala
Never mind, solved it. Image path was incorrect.
Adrian Rosebrock
Congrats on resolving the issue Milap!
sarra
Hey, I’m working on my code and I use the function lbp = feature.local_binary_pattern(b, npoints, radius, method="uniform") to display just the LBP image. How should I choose the numPoints and radius settings?
Ameer
Hey Adrian
I downloaded your code and when I tried to run it I had some import errors. I googled them and installed the missing modules, but I still have issues:

(cv) ameerherbawi@clienta:~/Desktop/local-binary-patterns$ python3 recognize.py --training images/training --testing images/testing
…
ImportError: No module named ‘scipy’
I am sure I downloaded and installed scipy, since I get this when I try again:
Requirement already satisfied: scipy in /usr/local/lib/python2.7/dist-packages
Requirement already satisfied: numpy>=1.8.2 in /usr/local/lib/python2.7/dist-packages (from scipy)
Just to help you a bit: I followed your tutorial “Ubuntu 16.04: How to install OpenCV”, installed OpenCV 3.1 as you guided (in addition to Python 3), and then downloaded the code, but it didn’t run.

Thanks for your time.
Adrian Rosebrock
Did you install SciPy into your “cv” virtual environment? You can use pip freeze to check:
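For example, assuming the virtualenvwrapper setup from my install tutorials:

$ workon cv
$ pip freeze | grep scipy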
Ameer
Don’t worry, I found it; my bad, I didn’t read the rest of the comments. One more thing: I tried to add a new testing image (a baby face) and it was recognized as a keyboard!

Where should I go to do face recognition?
Adrian Rosebrock
I cover face recognition inside the PyImageSearch Gurus course.
Ameer
Hey Adrian
I followed your code and upgraded it a bit, but I have noticed that if I add another image the code will misidentify it. I searched and saw that checking the confidence value will help to print “unknown” for low-confidence images, but I also found that OpenCV 3 isn’t supporting confidence anymore. What can I do to be able to get the confidence, so I can build further work on its value?

I used this and it didn’t work; whenever I remove conf it works, so what can I do?
Thanks again
id,conf = recognizer.predict(objectNp[y: y + h, x: x + w])
Adrian Rosebrock
I’m not sure what you mean by OpenCV 3 not supporting confidence anymore. This blog post uses scikit-learn for the machine learning functionality.
JKCS
Hi Adrian,
Thanks for the excellent tutorial! I downloaded the source code and ran it; however, it gives me the following error:
usage: recognize.py [-h] -t TRAINING -e TESTING
recognize.py: error: the following arguments are required: -t/–training, -e/–testing
Any idea how I should go about solving this issue?
Your help is appreciated.
Thank you.
Adrian Rosebrock
I suggest you read up on command line arguments before continuing.
Rafflesia
Hi JKCS,
I am having the same problem while trying to run this code.
Did you solve this problem?
If yes, could you please tell me how you solved it.
Your help is appreciated.
Thank you.
Vhulenda
Hi Rafflesia.
Use this to run the code from the terminal (the same command as in the “Results” section above):
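$ python recognize.py --training images/training --testing images/testing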
subhiran
I have also got the same error. How do I fix it? Please help me; your help will be appreciated.
Adrian Rosebrock
You can fix the error by reading my reply to JKCS. You need to read this tutorial on how to use command line arguments.
Gaby
Hello Adrian,
I’m working on a Raspberry Pi using Python. I want to use LBP for face recognition. I read your earlier comment that it is covered in your Gurus course; how would I go about accessing that specific module?

Also, I just tried running the first code block in this post, but I get an error that the module skimage doesn’t exist. I have already installed scikit-image and matplotlib successfully. Can you think of any other reason why that would be the case?
Thanks in advance. I hope you can get back to me soon!
Adrian Rosebrock
The LBP for face recognition is part of the Face Recognition Module inside PyImageSearch Gurus. Computer vision topics tend to overlap and intertwine (you would need to understand feature extractors and a bit of machine learning first before applying face recognition) so I would suggest working through the entire course.
As for the scikit-image not existing, did you use a Python virtual environment when installing them? Perhaps you installed them outside of the Python virtual environment where you normally access OpenCV.
syamsul
Please, an LBP implementation for Delphi 7… I find it difficult.
Adrian Rosebrock
Hi Syamsul — this blog primarily covers OpenCV and Python, not the Delphi programming language. I am unaware of an LBP implementation for the Delphi programming language.
Shravani
Hi Adrian,
I am getting the error “No module named sklearn.svm” while executing this code. Can you please tell me how to solve this?
Adrian Rosebrock
Make sure you install scikit-learn:
$ pip install scikit-learn
sapikum
Hi Adrian
Can we use 10-fold cross-validation on your images folder?
If yes, how?
Adrian Rosebrock
The dataset used in this blog post really isn’t large enough to apply 10 fold cross validation. I would suggest using a larger dataset, extracting features from each image, and then use scikit-learn’s cross-validation methods to help you accomplish this.
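A sketch using a modern version of scikit-learn, assuming data and labels were built as in recognize.py (but over a much larger dataset):

from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

model = LinearSVC(C=100.0, random_state=42)
scores = cross_val_score(model, data, labels, cv=10)
print(scores.mean())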
Rafflesia
Hi Adrian,
You are great,
I am learning python recently and i follow some of your tutorials,
All of them are really great and helpful ,
Thank you .
Adrian Rosebrock
Thank you for the kind words, Rafflesia.
Ethan
You commented in the post that LBP implementations can be found in scikit-image and mahotas packages (or in OpenCV more specifically in the context of facial recognition). Is there any other package that contains LBP implementations or just those same ones?
Adrian Rosebrock
Those are the primary ones, at least in terms of Python bindings. I highly recommend you use the scikit-image implementation.
Ethan
Instead of using SVM, is it possible to use CNN? Or is it unnecessary to use LBP to extract features since CNN does basically the same thing? Correct me if I’m wrong.
Adrian Rosebrock
A CNN will learn various filters to discriminate amongst object classes. These filters could learn color blobs, edges, contours, and eventually higher-level, more abstract features. CNNs and LBPs are not the same. If you’re interested in learning more about feature extraction and CNNs, take a look at the PyImageSearch Gurus course and Deep Learning for Computer Vision with Python.
Ethan
So it is possible to use it together with LBP, right?
Adrian Rosebrock
No, a CNN will learn its own filters. An LBP is a feature extraction algorithm. You would then feed these features into a standard machine learning classifier like an SVM, Random Forest, etc. A CNN is an end-to-end classifier. An image comes in as input and classifications at the output. You wouldn’t use LBPs as an input to a CNN.
Robert
Hi Adrian,
I have just started programming in Python and I found your website to be a great source to learn from. Your tutorials have also helped me a lot. Thank you so much.
Now, I am trying to do exactly what you have done above, however, instead of using the LBP features, I want to use the BRIEF (Binary Robust Independent Elementary Features) as the texture features. Therefore, I would appreciate if you could provide me with any tips on how to do this. Thanks in advance.
Cheers,
Robert
Adrian Rosebrock
Hi Robert — it’s great to hear that you’ve enjoyed PyImageSearch! In order to use BRIEF you actually need to build what’s called a “bag of visual words” model (BOVW). From there you can apply machine learning, image search engines, etc. I provide a detailed guide on the BOVW and the applications to image classifiers and scalable image search engines inside the PyImageSearch Gurus course. I would definitely suggest that you take a look.
Robert
Hi Adrian,
Thanks for your reply. That’s helped me a lot, but I am a bit confused since I am new to Python.
How about if I use only the BRIEF descriptor for image classification, without using a keypoint detector such as StarDetector? Would this be possible? It would then be exactly the same as what you have done above.
If I used the BOVW model, I would have to use the keypoint detector to compute the BRIEF features and then perform k-means clustering and calculate the histogram of features.
To be clear, I would like to compute the BRIEF features of the images, and then use the BRIEF features to build a histogram of features without using the keypoint detector for conducting image classification.
Cheers,
Robert
Adrian Rosebrock
You need both a keypoint detector and local invariant descriptor. You could skip the actual keypoint detection and use a Dense detector instead, but again, you need to mark certain pixels in an image as “keypoints” (whether via an explicit algorithm or the Dense detector) before you extract the features and apply k-means to build your BOVW.
Again, I would highly recommend that you work through the PyImageSearch Gurus course as I explain keypoint detectors, feature extractors, and the BOVW model in tons of detail with lots of code.
Robert
Hi Adrian,
Sorry for asking you so many questions.
I hope you got what I was trying to explain and I am so sorry again for not being clear in the first question.
Cheers,
Robert
Robert
Hi Adrian,
Thanks for your reply and of course I will have a look at PyImageSearch Gurus course. I am very excited to join this course.
This is the last question 🙂
Basically, I need to extract the BRIEF features from a region surrounding each pixel in an image, not certain pixels. So, I need to compute the BRIEF over all the pixels in the image and then build a histogram of BRIEF features and perform image classification based on the histogram. Thus, I don’t want to use keypoints detector since the BRIEF features will be extracted from each pixel in the image.
Would this be possible? If yes, could you provide me with any tips on how to do that, please?
I am so sorry again for any inconvenience.
Cheers,
Robert
Adrian Rosebrock
By the very definition of BRIEF and how all local invariant descriptors work, you need to examine the pixels surrounding a given center pixel to build the feature vector. If you want to extract BRIEF features from every single pixel in the image, simply create a cv2.KeyPoint object for every (x, y)-coordinate and then pass the keypoints list into the extractor.
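A sketch of that dense-keypoint approach (assuming opencv-contrib provides cv2.xfeatures2d and gray is a grayscale image; the grid step is a hypothetical choice, and a step of 1 gives every single pixel):

import cv2

step = 8  # hypothetical grid spacing
kps = [cv2.KeyPoint(float(x), float(y), float(step))
       for y in range(0, gray.shape[0], step)
       for x in range(0, gray.shape[1], step)]
extractor = cv2.xfeatures2d.BriefDescriptorExtractor_create()
(kps, descs) = extractor.compute(gray, kps)
print(descs.shape)  # (num_keypoints, 32) -> 32-byte BRIEF descriptors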
Robert
Got it worked. That’s helped me a lot. Many thanks, Adrian.
Cheers,
Robert
Jabr
Hello Adrian,
Is it possible to set the parameters for the BRIEF descriptor (cv2.xfeatures2d.BriefDescriptorExtractor_create()) in python, namely (the sample pairs and the patch size)?
For example, set the patch size to 25 instead of the default one, which is 48.
Thank you.
Adrian Rosebrock
As far as I understand from the documentation you can set the number of bytes used but not the patch size.
Ian Maynard
Hello Adrian,
I tried to follow your instructions and got scikit-image installed (following the instructions at the link). Then I tried to run localbinarypatterns.py in Python 3.4. It always gives me this error:
Traceback (most recent call last):
File “/home/pi/pythonpy/videofacedet/craft/localbinarypatterns.py”, line 1, in
from skimage import feature
ImportError: No module named ‘skimage’
I tried to search the internet for answers, and I even tried my own solutions, but none worked. Then I tried to run the program in Python 2.7, and it did not give any error, so I assume that it works in Python 2.7. How can I make it work for Python 3.4? I read somewhere that scikit-image only works with Python 3 and newer versions.
Adrian Rosebrock
Hi Ian — it sounds like scikit-image is not installed on your system. Either (1) scikit-image failed to install or (2) you did not install scikit-image into the Python virtual environment where you have OpenCV installed.
Also scikit-image will work with BOTH Python 2.7 and Python 3.
Sagar Patil
I really want to know how you did what you did in figure 4. Can you please give me the function? I am actually doing a project in which I can open my garage using my face. Also, I am 13 years old.
Adrian Rosebrock
It’s great to hear that you are getting involved with computer vision at such a young age, awesome!
To generate the figure I computed the LBP image pixel-by-pixel. From there I took the output LBP image and scaled it to the range [0, 255].
Sagar Patil
Thank you, but can I please see the code? I don’t really know how to compute the LBP of an image in code, and I don’t know how to scale a matrix. I am really new to machine learning and computer vision.
Sagar Patil
I want to do what you did in figure 4. Can you please provide the function to do that?
Adrian Rosebrock
Hi Sagar — unfortunately I do not think I have that code anymore. I just checked the repo but I couldn’t find it. I’ll check again, but please understand that while I’m happy to help and point you in the right direction it’s extremely rare that I can provide or even write code for you outside what is covered in the freely available tutorials. Between keeping PyImageSearch updated, writing new content, and releasing a new book I’m simply too busy. I do hope you understand.
Sagar Patil
Thank you for trying your best. I really do appreciate it. I am doing that because I don’t want the image affected by lighting. If there is any other way to do it, please mention it.
Adrian Rosebrock
In general, illumination invariance is extremely challenging and highly dependent on your application. LBPs are theoretically robust to illumination — the key word being “theoretically”. In practice you might get varying results.
isd
Thank you for the tutorial. Please, can you tell me how I can use LBPs to extract facial expressions from an image?
Adrian Rosebrock
Depending on the facial expressions you want to recognize, LBPs may not be the best choice. I actually cover facial expression recognition inside my new book, Deep Learning for Computer Vision with Python.
Karl Sonnen
Hello,
I’ve been trying to implement this for days now.
I’ve tried it on a fresh machine with Ubuntu and tried installing OpenCV, but there were constant errors, so I stopped that.

I’ve tried it on a pre-compiled Ubuntu VMware machine that has PyCharm and working OpenCV examples, but I get errors about the sklearn module not being found, even though I have done the pip install scikit-learn.

The closest I have come to getting this to work is using WinPython. However, I have encountered this error:
Traceback (most recent call last):
File “recognize.py”, line 49, in
model.fit(data_shuf, labels_shuf)
File “C:\RED\WinPython-64bit-2.7.10.3\python-2.7.10.amd64\lib\site-packages\sklearn\svm\classes.py”, line 235, in fit
self.loss, sample_weight=sample_weight)
File “C:\RED\WinPython-64bit-2.7.10.3\python-2.7.10.amd64\lib\site-packages\sklearn\svm\base.py”, line 853, in _fit_liblinear
” class: %r” % classes_[0])
ValueError: This solver needs samples of at least 2 classes in the data, but the data contains only one class: ‘images’
This is the command I used to run the program:
python recognize.py --training images/training --testing images/testing
any tips on how to fix it?
I tried running it on both the Python 2.7 and 3.x versions.
Adrian Rosebrock
Hi Karl — I don’t recommend using Windows for image processing. Windows support is usually an afterthought for the open source libraries required for image processing. Line 31 of the script calls for a ‘/’ which is different for the Windows system. Try ‘\\’ (you need two backslashes because one is an escape character). Also, see Sveder’s comment.
Sheyna
Hi Adrian,
Is there any simple(r) way to apply LBPs to 3D data for pattern/feature classification? I saw a paper, but I am not sure I want to go that route for a basic test, as I need to check many other feature extraction methods.
Thanks in advance.
Adrian Rosebrock
Hi Sheyna — I have never used LBPs for 3D data so I’m unfortunately not sure here.
romuald
Have you found a solution on how to implement it for 3D data?
supraja
Hi sir,
I want help regarding an image processing system that takes a plant leaf image as input and shows whether it is healthy or diseased, and specifies which disease it has. I am confused by the many methods and techniques available. Can you suggest which method to use and how to code it?
Thank you
Adrian Rosebrock
It sounds like you should be using a bit of machine learning instead. I would suggest working through Practical Python and OpenCV where I include a few examples of classifying images (such as predicting a species of flower in an image). If you have any examples of diseased leaf images I can take a look and suggest a specific algorithm to apply.
Marina Castro
This was so nice! One of the best tutorials and overall explanations I’ve ever read!
I really hope you had a fantastic birthday back in 2015 because it was well-deserved!
Best wishes from Portugal!
Adrian Rosebrock
Thank you Marina, I really appreciate that 🙂
doug
Hello Adrian, a very good tutorial; it helped me a lot.

I do not know if you could help me: I want to make a face classifier that, based on an image, tells me the mood of the person. I am new to Python and any guidance would be very helpful. Thank you very much.
Adrian Rosebrock
Hey Doug, thanks for the comment. Are you referring to emotion/facial expression recognition? LBPs are a good start. You could perform face detection, extract the LBPs, and then classify the emotion via a machine learning algorithm (I would recommend Logistic Regression or a Linear SVM). I cover all of these techniques inside the PyImageSearch Gurus course.
Otherwise, I think a better solution would be to use deep learning. I actually cover exactly how to build an emotion recognition system inside my new book, Deep Learning for Computer Vision with Python. Be sure to take a look!
Vhulenda
Hi
Thanks for an awesome tutorial.
I have a question though: how can I use video for testing instead of images? I saw one comment in which Adrian said you can implement a function in C++ and then call the method from Python; if that’s it, can you elaborate? I’m new to both computer vision and Python/C++.
Thanks in advance.
Adrian Rosebrock
Hey Vhulenda — I think you are asking two separate questions. To use this code in video you need to access your video stream, such as your USB camera or builtin webcam. If you’re new to Python and OpenCV I would recommend reading through Practical Python and OpenCV where I discuss this quite extensively for beginners.
Secondly, there are a number of ways to implement a Python function in C++ and call it from Python. The easiest method is to use Cython.
Vhulenda
Thank you
Steve
I would like it to show me the training accuracy and then the test accuracy, instead of opening all the images… for example: test accuracy: 100% (100/100).
Adrian Rosebrock
Hi Steve, take a look at the classification_report function inside scikit-learn. You can make predictions on your training and testing sets and then view the output.
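A sketch, assuming the testing loop is modified to collect the features and ground-truth labels into hypothetical testData and testLabels lists rather than displaying each image:

from sklearn.metrics import classification_report

predictions = model.predict(testData)
print(classification_report(testLabels, predictions))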
Steve
And I would like to run a test with the traditional LBP. How do I do that?
Norm
This looks amazing–great job, though a bit complex for the newbie. I purchased some of your material & working through it, albeit a slow pace. Is LBP well-suited for locating a small object of a known pattern in a “noisy” background, say a small leaf in a lawn…you can easily “see” there is a leaf there (on top or slightly buried), but the lawn represents quite a bit of “noisy coverage”. It seems like a difficult challenge to separate the two, since they may share similar colors & lighting can vary. Only the edges or patterns seem viable.
Adrian Rosebrock
Hi Norm — if the edges are only viable and perhaps the veins of the leaf themselves I think a more structural image descriptor such as Histogram of Oriented Gradients would work better.
Norm
This is awesome! I am wondering a few things
yousef
Hi Adrian, it was so useful to me. Thank you for supporting me and for helping me.
Adrian Rosebrock
Thanks Yousef 🙂
Ratih
Hi Adrian, how do I apply cross-validation to LBP features? Thank you.
Son Vo
Hi Adrian,
Thanks for the useful post.
I just wonder one thing: after you find the LBP, you calculate the histogram for the whole LBP image using NumPy. Is there any way we can calculate the histogram of only a particular ROI of the LBP image, based on a given mask?

If we want to compare objects in images, we should only care about the histogram of the object rather than the whole image. I know that OpenCV supports calculating histograms based on a mask, but when I try to apply that function to the LBP image, it shows an error. Please give me some advice. Thanks Adrian.
Adrian Rosebrock
Great question. Technically yes, you can compute an LBP based only on a mask, but there are a lot of problems with implementation. To start, consider how the LBP algorithm works — you need to access the pixels surrounding a center one. What would you do with pixels that lie along the border of a mask? If these pixels are treated as center pixels then their neighborhoods would fall outside the mask. You would need to make a decision on how to handle this. I would suggest looking into “NumPy masked arrays” if you’re interested in doing this, then applying the masked array to the LBP array generated from scikit-image before computing your histogram. I hope that helps!
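As a rough illustration of the simpler route (restricting only the histogram, not the LBP computation itself), here is a boolean-indexing sketch; it assumes a binary mask in which nonzero pixels mark the region of interest:

import numpy as np
from skimage.feature import local_binary_pattern

def masked_lbp_histogram(gray, mask, numPoints=24, radius=8):
    # compute the LBP representation of the full image
    lbp = local_binary_pattern(gray, numPoints, radius, method="uniform")
    # keep only the LBP codes that fall inside the mask; note that codes
    # near the mask border still "see" pixels outside the mask
    values = lbp[mask > 0]
    # build the normalized histogram over the uniform prototypes
    (hist, _) = np.histogram(values,
        bins=np.arange(0, numPoints + 3),
        range=(0, numPoints + 2))
    hist = hist.astype("float")
    hist /= (hist.sum() + 1e-7)
    return hist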
Son Vo
Thanks Adrian for the advice. I’ll try that way.
Laurent
I would also like to use a mask, but I don't really see why this is an issue. You have the same problem at the border of the image (you don't have 'outside' pixels), so I guess the implementation has a way of handling this. Also, you could maybe just ignore pixels that don't have all their neighbors in the mask, so your representation will be a few pixels smaller than your mask, but it should be okay, I think.
Adrian Rosebrock
Either will work, my point was simply to say that you’ll need to implement that level of functionality yourself. You would need to make the decision on how to handle those types of cases.
hassan
Hi Adrian. I am quite a beginner and trying my best to follow your blog.
I have some basic queries about the first class, LocalBinaryPatterns. What is meant by eps=1e-7, and why are we using it?
And second, please briefly explain this code snippet:
(hist, _) = np.histogram(lbp.ravel(),
bins=np.arange(0, self.numPoints + 3),
range=(0, self.numPoints + 2))
What are you doing here?
(Sorry for the quite basic question, but I am a beginner and not getting it at all.)
Adrian Rosebrock
1. The “eps” value prevents a “division by zero” error on Line 23: if the histogram has no entries, its sum is zero, and we cannot divide by zero.
2. The snippet you are referring to constructs a histogram of each unique LBP prototype. Please see the section that starts with “Thus, to construct the actual feature vector…” for a detailed explanation.
If you’re new to OpenCV and computer vision/image processing, I would recommend working through Practical Python and OpenCV where I teach the fundamentals. Be sure to take a look, I’m confident that after going through it you’ll get up to speed quickly 🙂
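For point 1, the normalization step in question boils down to something like this sketch:

# eps keeps the division safe when the histogram sums to zero
eps = 1e-7
hist = hist.astype("float")
hist /= (hist.sum() + eps)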
hassan
Hy Adrian,
I need some explanation.
1) As I have read, an SVM is used to classify samples as positive or negative; i.e., a decision boundary is drawn between two classes. But in your case there are four classes (carpet, paper, area_rug, keyboard), so how will classification be done with a single boundary?
2) In your case the input to the classifier is the histogram matrix, while you could also pass the raw LBP features. Why is that?
3) In your code you used desc = LocalBinaryPatterns(24, 8). How do you choose these parameters?
Please help me with these questions.
[Suggestion: you should also add an option to attach a screenshot in the comments/replies; it would be more useful.]
(Sorry for the basic questions.)
Adrian Rosebrock
1. You are thinking of a 2-class SVM. Multi-class SVMs can be created via “one versus rest” or “one versus all” schemes. See the LinearSVC documentation in scikit-learn for more information.
2. You wouldn’t want to pass in the raw LBP matrix as the LBP matrix could be significantly different for every input image. The matrix encodes local LBP information. We need to put it into a histogram to make it more robust.
3. You normally perform hyperparameter tuning experiments to determine the parameters.
I cover all of this and more inside the PyImageSearch Gurus course. Take a look, I think it would really help you on your computer vision and machine learning journey! 🙂
hassan
Thanks Adrian for your reply. I need some clarification:
1) You mean that in sklearn.svm.LinearSVC there is already a built-in implementation of the one-versus-rest scheme, isn't it?
2) In your case you are not passing “multiclass” as an argument to LinearSVC, so how is it performing multi-class classification?
Thanks in anticipation of your kind reply.
Adrian Rosebrock
The scikit-learn implementation can automatically infer whether it’s binary (two class) or multi-class (more than two classes) based on the number of unique class labels.
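In other words, nothing extra is needed; a sketch, with the parameter values taken from this tutorial:

from sklearn.svm import LinearSVC

# labels may contain any number of unique classes; LinearSVC trains
# one-vs-rest classifiers automatically when there are more than two
model = LinearSVC(C=100.0, random_state=42)
model.fit(data, labels)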
hassan
Hi Adrian, thanks for your helpful replies. I have some queries:
1) What is C, and how did you choose the values of C and random_state in the LinearSVC arguments? Is it based on trial and error? I have a dataset with 3 types of labels; how can I select the best values of C and random_state?
2) I need your suggestion:
I am trying to classify human facial expressions. What would be the best algorithm for extracting facial features to train the model (as the LBP histogram is not showing satisfactory results)? What would be your suggestion in that regard?
Thanks in anticipation of your reply.
Adrian Rosebrock
1. C is a hyperparameter you tune. It controls the “strictness” of the SVM. You normally tune it via cross-validation.
2. There are many algorithms for facial expression recognition. LBPs are actually quite good depending on the number of facial expressions you want to recognize.
I actually cover facial expression recognition inside my book, Deep Learning for Computer Vision with Python. This book would also help address many of your machine learning questions. Be sure to take a look!
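For point 1, a minimal sketch of cross-validated tuning of C; the grid values here are just illustrative, and data/labels are the lists from the tutorial's training loop:

from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

# try a handful of C values and keep the one with the best CV score
params = {"C": [0.01, 0.1, 1.0, 10.0, 100.0]}
grid = GridSearchCV(LinearSVC(random_state=42), params, cv=3)
grid.fit(data, labels)
print("best C: {}".format(grid.best_params_["C"]))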
Barathy
Hi Adrian,
Your tutorial is very helpful to me. I am doing a research project to detect bloody textures. Is LBP suitable for detecting bloody textures in an image?
Adrian Rosebrock
I’m not sure what “bloody texture” means in this context. Could you elaborate?
Barathy
It means an image contains a blood region. I want to detect whether it contains blood regions or not.
Adrian Rosebrock
You'll have to be more specific. Are you working with blood cultures/cells? Are you trying to detect blood in trauma photos? Keep in mind that I can only help you if you are more specific and put effort into describing exactly what you are trying to accomplish.
Barathy
Thank you Adrian. I am working with trauma photos where an image depicts violence, for example an accident or blood on the floor.
Adrian Rosebrock
LBPs likely wouldn't be a good fit here. You could give them a try, but I think CNNs will give you better accuracy, though you would need a lot of data first.
Amy
Hi Adrian!
I plan to print out the histogram as you have shown in Figure 5. Do you know what to add to the code?
Thanks!
Adrian Rosebrock
I do not have the code handy, but you need to use matplotlib to construct a plot with the LBP prototype on the x-axis (the bin of the histogram) and the number of prototypes assigned to the bin on the y-axis. If you're new to matplotlib I would suggest playing with it and getting used to it before trying to create this plot.
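A minimal sketch, assuming hist is the normalized histogram returned by the describe method in this tutorial:

import matplotlib.pyplot as plt
import numpy as np

# one bar per LBP prototype (histogram bin)
plt.bar(np.arange(len(hist)), hist)
plt.xlabel("LBP prototype (bin)")
plt.ylabel("Frequency")
plt.show()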
Ravi
I executed the above code in a Jupyter notebook and got the following error:
usage: ipykernel_launcher.py [-h] -t TRAINING -e TESTING
ipykernel_launcher.py: error: the following arguments are required: -t/--training, -e/--testing
I read the above comments and tried the suggestions there, but the error persists.
Please help.
Adrian Rosebrock
Please refer to this post on command line arguments. It’s an easy fix but you’ll want to educate yourself on command line arguments first.
walter figueiredo
Hi Adrian, first I want to thank you for this well-explained tutorial. As a beginner working in a Windows environment, I could follow everything and even solved a small problem thanks to your response rate in the comments.
My thesis topic is natural scene classification, where the program has to tell whether a picture was taken in an indoor or outdoor environment. Could LBP do the job?
Thanks in advance, and I'm looking forward to buying the Hardcopy Bundle.
Thank you…
Adrian Rosebrock
Hey Walter, LBPs could be used here but I would recommend using the “Gist Descriptor”. Take a look at the spatial envelope (http://people.csail.mit.edu/torralba/code/spatialenvelope/) for more information. Best of luck with your thesis!
Danilo Borges
Hello Adrian, thank you for your incredible work. Can LBP be used for hand recognition?
Adrian Rosebrock
Do you mean hand gesture recognition? Recognizing if a hand is in the field of view of the camera? Recognizing someone’s specific hand? Could you elaborate a bit?
Amy
Hi Adrian, I'm a newbie to programming.
Can I use the printed information to proceed to the next step? For example, if “wrapping_paper” then …
Adrian Rosebrock
Hey Amy — I’m not sure what you mean by “proceed to the next level”? Could you clarify?
Amyraah
Hi Adrian, may I know how you chose the values on line 18 below:
desc = LocalBinaryPatterns(24, 8)
When I try a different value here, it gives me a different prediction result.
Could you please explain why? 🙂
Thanks Adrian!
Adrian Rosebrock
The values here are the number of points and radius to the Local Binary Patterns descriptor. Be sure to refer to the scikit-image documentation on the local_binary_pattern function.
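For reference, those two values map directly onto scikit-image's local_binary_pattern call, as in this sketch (assuming gray is a grayscale NumPy array):

from skimage.feature import local_binary_pattern

# 24 sample points on a circle of radius 8, with the "uniform" method
# used in this tutorial; changing either value changes the prototypes
# (and therefore the histogram the classifier sees)
lbp = local_binary_pattern(gray, 24, 8, method="uniform")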
Ashutosh Gupta
Hi Adrian
Great article, thank you. I am looking for a feature vector for image texture description that can be used to compare images directly with distance measures. As described in the article, can we use the LBP histogram feature vector directly to compare images using Euclidean, chi-squared, etc. distances instead of training on a dataset? And if not, which texture descriptor can I use for such direct image comparisons?
ashutosh gupta
Hi Adrian,
I want to apply a texture descriptor in my project. Can I use the histogram feature vector obtained from LBP directly to compare the texture of two images, instead of training/testing on images in a dataset?
Thanks.
Adrian Rosebrock
Yep! You can simply compare the LBP histograms using some similarity metric/distance function.
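For example, the chi-squared distance is a common choice for comparing histograms; a minimal sketch:

import numpy as np

def chi2_distance(histA, histB, eps=1e-10):
    # smaller values indicate more similar histograms; eps avoids
    # division by zero on empty bins
    return 0.5 * np.sum(((histA - histB) ** 2) / (histA + histB + eps))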
ashutosh gupta
Thanks Adrian. As with color histograms, will dividing the image into parts and comparing the individual LBP histograms of those parts between two images improve the efficiency of the LBP descriptor?
Adrian Rosebrock
I think you meant to say “accuracy” rather than “efficiency”. In terms of efficiency it will actually be slower since you are computing a LBP histogram for each cell in the image.
In terms of accuracy, that’s highly dependent. If your images are spatially arranged, then yes, dividing the image into parts will help improve accuracy.
Jean
Hi Adrian,
after calculating the LBP of a given image, one typically takes the histograms of 16×16 blocks from the original image. How do you treat the case where a 16×16 block near the image boundary doesn't fit? Suppose you have a 100×50 image and you want to split it into 16×16 blocks; obviously there will be a region near the right and bottom borders where a 16×16 block won't fit the given image exactly.
Regards
Adrian Rosebrock
There are a few ways to handle this but the two most popular ways include:
1. Zero-padding, where we fill the boundary pixels with zero to ensure a 16×16 region
2. Replicate padding where we use the border pixel values themselves to pad to a 16×16 region
Zero-padding is often used in deep learning and machine learning for efficiency. Replicate padding is also used quite a bit. You would need to refer to the documentation of a given library to see exactly which method is used.
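A minimal sketch of both options with OpenCV's copyMakeBorder (the example.jpg path is a placeholder):

import cv2

image = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)
(h, w) = image.shape[:2]

# rows/columns needed so the image divides evenly into 16x16 blocks
padY = (16 - h % 16) % 16
padX = (16 - w % 16) % 16

# option 1: zero-padding
zeroPadded = cv2.copyMakeBorder(image, 0, padY, 0, padX,
    cv2.BORDER_CONSTANT, value=0)

# option 2: replicate padding
replicated = cv2.copyMakeBorder(image, 0, padY, 0, padX,
    cv2.BORDER_REPLICATE)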
Sushil
This is such a brilliant piece of an algorithm: simple, neat, yet good. But Adrian, could you please suggest any ideas, blogs, or posts of yours on how to train a model for texture (or background ONLY) and later predict where the learned textures possibly are in a test image? I appreciate your attention, thanks. Keep posting. I love your blog.
Adrian Rosebrock
You can use LBPs for texture classification, in fact, that was a primary motivation behind why they were developed. It sounds like your problem here is segmenting the background from the foreground. Is that the case?
Joey Dela Cruz
Good day Adrian, may we ask what the purpose of the following declarations is:
desc = LocalBinaryPatterns (24, 8)
data = [ ]
labels = [ ]
Thank you
Adrian Rosebrock
The “desc” is the instantiation of our LocalBinaryPattern feature extractor object. The “data” list will hold the extracted LBP histograms and the “labels” list holds their corresponding class labels.
Joey Dela Cruz
Can this run on windows 10 OS?
Adrian Rosebrock
It can, but I haven’t used Windows in over 10 years. Once you are able to install OpenCV and relevant packages on Windows, you shouldn’t have a problem though.
Rishabh
Could you please share the code for the face example that you showed? I tried it your way but I don't think it is working out for me.
Adrian Rosebrock
The face example you are referring to is covered inside the PyImageSearch Gurus course.
ezza
Hi,
What is the input “trainData” in the following line of code you recommended for checking the accuracy? It is not in the full code.
What should I pass here if I have the same code?
print(model.score(trainData))
Adrian Rosebrock
Sorry, could you clarify exactly which line of code or paragraph you are referring to?
Supratim
Hi Adrian,
thanks a lot for the post. I have a question on how to extract Geometric Texton Histograms and combine them with LBPs.
Thanks in advance.
Adrian Rosebrock
I have experience with both “textons” and “LBPs” but I’m not sure what you mean by “Geometric Texton Histograms” — are you referring to some particular paper?
fromCN
Adrian you are so cool!!
Adrian Rosebrock
Thank you, you are too kind 🙂
Dicky R.
Adrian, can you give directions on how to plot the decision function of the SVM classifier in this LBP project? Many thanks in advance.
Adrian Rosebrock
The scikit-learn library has a few examples of plotting an SVM decision boundary. I would suggest starting there.
Venu
Hello Adrian, great explanation, but I still have a question about the LBP: what about the corner pixels? E.g., for the bottom-left corner, does it still use a 3×3 neighborhood, or just the three neighbors around it? Thank you.
Adrian Rosebrock
Hey Venu — are you referring to a particular figure/image in the post? I’m not sure which 3×3 region you’re referring to.
Venu
Thank you for answering, Adrian. I'm referring to Figure 3 in this post. How do we calculate the pixel at the bottom, since it only has 3 neighbors around it? Thank you for always helping, Adrian. Have a great day.
Adrian Rosebrock
You would apply either:
1. Zero-padding to pad the border of the image with zero
2. Or replicate padding to pad the border of the image with its corresponding pixel value
Venu
Thank you, Adrian.
ri
Hi Adrian, I am quite a beginner in machine learning, so could you help me determine the mean accuracy of this model? I tried acc = model.score(data, labels) and it gives me nothing.
Adrian Rosebrock
The “acc” should return the accuracy of the model. How have you verified that it’s returning nothing?
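For what it's worth, score returns a plain float, so printing it should show the value, as in this sketch:

# score returns the mean accuracy as a float in [0, 1]
acc = model.score(data, labels)
print("mean accuracy: {:.2f}".format(acc))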
xiaoyang cui
Hello Adrian. I was trying to run this demo, but when I typed python recognize.py --training images/training --testing images/testing in the terminal, it raised the following error: ValueError: This solver needs samples of at least 2 classes in the data, but the data contains only one class: 'images'. Could you please tell me the reason? Thank you very much!
Adrian Rosebrock
This sounds like a path issue of some sort. Double-check your input paths. Additionally, are you on a Unix machine or Windows?
xiaoyang cui
Thank you very much. It was indeed a path issue.
Adrian Rosebrock
Awesome, nice job resolving the issue!
Kalina
Hi Adrian 🙂
I have a problem and I'm currently stuck on this code.
In this fragment, in help I put a path to the folders with the training and testing data. But when I run the program, it says “recognize.py [-h] -t TRAINING -e TESTING
recognize.py: error: the following arguments are required: -t/--training, -e/--testing”. Where am I making the mistake, and what path should I use? I use Windows 7, btw.
Adrian Rosebrock
It’s okay if you are new to command line arguments, but make sure you read up on them first. From there you’ll be all set 😉
Ganesh
I want to do edge detection using LBP. If you have any sample code, please share the link.
Ralph
For those having the “ValueError: This solver needs samples of at least 2 classes in the data, but the data contains only one class: ‘images’.” on Windows, I changed the following line:
labels.append(imagePath.split("/")[-2])
to:
labels.append(imagePath.split("\\")[-2])
Awesome tutorial Adrian.
Adrian Rosebrock
Thanks Ralph!
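A note on portability: using os.path.sep covers both platforms with one line (a sketch, assuming the same directory layout and variables as the tutorial's training loop):

import os

# works with both "/" (Unix) and "\" (Windows) path separators
labels.append(imagePath.split(os.path.sep)[-2])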
Poovarasi
Hi, Could you please describe the automatic determination of threshold value for an image?
Adrian Rosebrock
Hm, are you referring to adaptive thresholding? That doesn’t have anything to do with Local Binary Patterns which is the subject of this post so I’m a bit confused by your request.
KuA
Thank you for the tutorial!
I'm getting the following error…
usage: recognize.py [-h] -t TRAINING -e TESTING
recognize.py: error: the following arguments are required: -t/--training, -e/--testing
Any help?
Adrian Rosebrock
You need to supply the command line arguments to the script. Read this tutorial.
lia
Hi Adrian,
thanks for the post; your source code works like a charm. I just have a question: do you know how to print the values of the 8-pixel neighborhood surrounding a center pixel?
Adrian Rosebrock
Given the (x, y)-coordinates of the center pixel you would use NumPy array slicing to derive the coordinates. For example, the “north” pixel would be located at (x, y – 1) and the “south” pixel would be located at (x, y + 1). You derive the other coordinates in the same manner. If you need more help refer to Practical Python and OpenCV where I discuss how to access pixel values in more detail.
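A minimal sketch of that indexing, assuming gray is a grayscale NumPy array and (x, y) is not on the image border:

# NumPy indexes images as [row, column], i.e., [y, x]
neighbors = [
    gray[y - 1, x - 1], gray[y - 1, x], gray[y - 1, x + 1],  # NW, N, NE
    gray[y, x - 1],                     gray[y, x + 1],      # W, E
    gray[y + 1, x - 1], gray[y + 1, x], gray[y + 1, x + 1],  # SW, S, SE
]
print(neighbors)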
arun
What is the format of the LBP output image?
It's not color or grayscale… actually I need to pass this LBP output image to another feature extraction algorithm. Please help me do that.
Adrian Rosebrock
The LBP output is an MxN matrix with a floating point data type. It contains the actual LBP codes. We take that matrix and use it to derive the actual histogram. In your case you can take that matrix and pass it on to another component of your algorithm.
ana
hi adrian,
Is LBP image processing faster than Haar cascade image processing?
Adrian Rosebrock
It really depends on how you use them and the size of the parameters of the LBP descriptor — what are you trying to build?
ana
So, my Pi camera will capture an image, use LBP to detect either a human or an animal in the picture, and then send an SMS warning me of an intruder or animal.
ana
Do you have any Python script for a project like mine?
Adrian Rosebrock
I cover a very similar project inside the PyImageSearch Gurus course. You’ll learn how to detect a person in a frame, and if they are found, a photo of the person will be sent to you via MMS.
Poulami
Hi, how do I calculate the overall accuracy of the model after testing?
Adrian Rosebrock
See my reply to Steve.
abhishek
Hi Adrian,
Will you please help me split this code into separate training and testing stages?
Adrian Rosebrock
You can use scikit-learn's train_test_split function.
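A minimal sketch, assuming data and labels are the lists built in the tutorial's training loop:

from sklearn.model_selection import train_test_split

# hold out 25% of the histograms for testing
(trainData, testData, trainLabels, testLabels) = train_test_split(
    data, labels, test_size=0.25, random_state=42)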
Luis Villanueva
Hi Adrian, I'm doing research on micro-expressions. Like other researchers, I divide the face into blocks and build local descriptions by extracting features using LBP. So, my question is: what are the best parameters for LBP in this case? What would be a suitable points-radius combination?
Thanks!
Adrian Rosebrock
Hey Luis — have you taken a look at the PyImageSearch Gurus course? I cover how to use LBPs for facial recognition which can then be extended to micro-expressions. Additionally, Deep Learning for Computer Vision with Python covers facial expression recognition in detail. I would suggest starting there.
Widhera
When I used cv2.imwrite to save the LBP image from scikit-image, I just got a line picture. Could you help me get an image like Figure 5?
Adrian Rosebrock
Did you write the histogram to disk or the LBP itself?
MyungChan Kim
Could you tell me how I can set a suitable radius and number of points for the Local Binary Patterns descriptor?
Adrian Rosebrock
You manually tune it yourself via trial and error. Try with different values and examine the results. These are hyperparameters that you must tune.
lia
Hi Adrian,
do you think I can do image recognition by using the Euclidean distance between the LBP histograms of the testing image and the training images?
Adrian Rosebrock
That really depends on your actual dataset and project. I would recommend taking a look at the PyImageSearch Gurus course where I teach you how to use LBPs for machine learning and image search.
cristina
How about LBP-TOP? How can I compute it?
polosepian
Dear Adrian,
Great tutorial you've created here, and I believe it's still relevant even after some years.
By the way, I'm curious what the best way is to represent the normalized histogram if we fuse the LBP with another feature (e.g., a GLCM derivative). The LBP histogram can be up to 256 bins wide, whereas the GLCM usually produces a single value. So my question is: what is the best representation to combine these two variables and input them into the classifier?
I'd really appreciate any guidance you can give.
Adrian Rosebrock
That question is answered inside the PyImageSearch Gurus course. I would suggest starting there.
Xiao Chen
Hello, how did you implement the Gray Level Co-occurrence Matrix? Thanks for sharing.
Adrian Rosebrock
The scikit-image package has an implementation of GLCM.
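A minimal sketch of computing a GLCM statistic with scikit-image (function names as of the older greycomatrix/greycoprops API, renamed graycomatrix/graycoprops in recent releases; gray must be an integer-valued grayscale array):

from skimage.feature import greycomatrix, greycoprops

# co-occurrence matrix for pixel pairs one step apart, horizontally
glcm = greycomatrix(gray, distances=[1], angles=[0], levels=256,
    symmetric=True, normed=True)
contrast = greycoprops(glcm, "contrast")[0, 0]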
Sonali
Hello sir, how do I extract facial landmarks from a facial dataset and store them in a file?
Xiao
Thanks for sharing such a wonderful course with us. I have a couple of questions:
1: Other algorithms like SIFT, ORB, HOG, and GLCM can also be used for image classification. Do they all need to transform a 2D array into a histogram using np.histogram?
2: Which library would you recommend to implement SIFT, ORB, HOG, and GLCM?
Adrian Rosebrock
Take a look at the PyImageSearch Gurus course where I cover how to use SIFT, ORB, HOG, etc. using OpenCV.
TEJAS
Hey Adrian,
Could you please tell me how Local Binary Patterns can distinguish a real face from a fake face, because both the real and the fake face will have the same patterns?
Adrian Rosebrock
The approach you're referring to is called “liveness detection”. You can read more about it here.
Oscar vL
This worked remarkably well with very minimal effort to optimize it! Using the famous Kylberg texture dataset (used in lots of texture classification papers), I achieved a 0.94 F1 score on the first try.
I have one question: why do you reshape the data only when predicting against the test data? Why don't you train on the reshaped histograms too?
Adrian Rosebrock
The training histograms have already been reshaped (i.e., stacked). We're making predictions one at a time, so we need to manually reshape.
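That is, the per-image prediction needs a 2D input of shape (1, n_features); a sketch:

# hist is a 1D histogram; predict expects a 2D array of samples
prediction = model.predict(hist.reshape(1, -1))[0]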
arafat
Hey Adrian!
Is it possible to build a model with more than one feature type, like adding a color histogram or dominant color histogram to a model with LBP data?
Great work! Cheers!
Adrian Rosebrock
It absolutely is. I cover how to do so inside the PyImageSearch Gurus course.
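The simplest version is plain concatenation of the feature vectors before training; a sketch, where lbp_hist and color_hist are assumed to be 1D arrays:

import numpy as np

# stack the two descriptors into one longer feature vector
features = np.hstack([lbp_hist, color_hist])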
Ilgaz
Adrian, hi!
I want to visualize the histograms. What module would you suggest I use? Should it be matplotlib?
Adrian Rosebrock
Yes, I recommend matplotlib for plotting with Python.