Over the past two weeks on the PyImageSearch blog, we have discussed how to use threading to increase our FPS processing rate on both built-in/USB webcams, along with the Raspberry Pi camera module.
By utilizing threading, we learned that we can substantially reduce the effects of I/O latency, leaving the main thread to run without being blocked as it waits for I/O operations to complete (i.e., the reading of the most recent frame from the camera sensor).
Using this threading model, we can dramatically increase our frame processing rate by upwards of 200%.
While this increase in FPS processing rate is fantastic, there is still a (somewhat unrelated) problem that has been bothering me for quite a while.
You see, there are many times on the PyImageSearch blog where I write posts that are intended for use with a built-in or USB webcam, such as:
- Basic motion detection and tracking with Python and OpenCV
- Tracking object movement
- Finding targets in drone/quadcopter streams
All of these posts rely on the cv2.VideoCapture method.
However, this reliance on cv2.VideoCapture becomes a problem if you want to use the code on your Raspberry Pi. Provided that you are not using a USB camera with the Pi and are in fact using the picamera module, you’ll need to modify the code to be compatible with picamera, as discussed in the accessing the Raspberry Pi Camera with Python and OpenCV post.
While there are only a few required changes to the code (i.e., instantiating the PiCamera class and swapping out the frame read loop), it can still be troublesome, especially if you are just getting started with Python and OpenCV.
Conversely, there are other posts on the PyImageSearch blog which use the picamera module instead of cv2.VideoCapture. A great example of such a post is home surveillance and motion detection with the Raspberry Pi, Python, OpenCV and Dropbox. If you do not own a Raspberry Pi (or want to use a built-in or USB webcam instead of the Raspberry Pi camera module), you would again have to swap out a few lines of code.
Thus, the goal of this post is to construct a unified interface to both picamera and cv2.VideoCapture using only a single class named VideoStream. This class will call either WebcamVideoStream or PiVideoStream based on the arguments supplied to the constructor.
Most importantly, our implementation of the VideoStream class will allow future video processing posts on the PyImageSearch blog to run on either a built-in webcam, a USB camera, or the Raspberry Pi camera module — all without changing a single line of code!
Read on to find out more.
Unifying picamera and cv2.VideoCapture into a single class with OpenCV
If you recall from two weeks ago, we have already defined our threaded WebcamVideoStream class for built-in/USB webcam access. And last week we defined the PiVideoStream class for use with the Raspberry Pi camera module and the picamera Python package.
Today we are going to unify these two classes into a single class named VideoStream.
Depending on the parameters supplied to the VideoStream constructor, the appropriate video stream class (either for the USB camera or picamera module) will be instantiated. This implementation of VideoStream will allow us to use the same set of code for all future video processing examples on the PyImageSearch blog.
Readers such as yourselves will only need to supply a single command line argument (or JSON configuration, etc.) to indicate whether they want to use their USB camera or the Raspberry Pi camera module — the code itself will not have to change one bit!
As I’ve mentioned in the previous two blog posts in this series, the functionality detailed here is already implemented inside the imutils package.
If you do not have imutils already installed on your system, just use pip to install it for you:
$ pip install imutils
Otherwise, you can upgrade to the latest version using:
$ pip install --upgrade imutils
Let’s go ahead and get started by defining the VideoStream class:
```python
# import the necessary packages
from webcamvideostream import WebcamVideoStream

class VideoStream:
	def __init__(self, src=0, usePiCamera=False, resolution=(320, 240),
		framerate=32):
		# check to see if the picamera module should be used
		if usePiCamera:
			# only import the picamera packages if we are
			# explicitly told to do so -- this helps remove the
			# requirement of `picamera[array]` from desktops or
			# laptops that still want to use the `imutils` package
			from pivideostream import PiVideoStream

			# initialize the picamera stream and allow the camera
			# sensor to warmup
			self.stream = PiVideoStream(resolution=resolution,
				framerate=framerate)

		# otherwise, we are using OpenCV so initialize the webcam
		# stream
		else:
			self.stream = WebcamVideoStream(src=src)
```
On Line 2 we import our WebcamVideoStream class that we use for accessing built-in/USB web cameras.
Line 5 defines the constructor to our VideoStream. The src keyword argument is only for the cv2.VideoCapture function (abstracted away by the WebcamVideoStream class), while usePiCamera, resolution, and framerate are for the picamera module.
We want to take special care not to make any assumptions about the type of hardware or the Python packages installed by the end user. If a user is programming on a laptop or a desktop, then it’s extremely unlikely that they will have the picamera module installed.
Thus, we’ll only import the PiVideoStream class (which then imports dependencies from picamera) if the usePiCamera boolean indicator is explicitly defined (Lines 8-18).
Otherwise, we’ll simply instantiate the WebcamVideoStream (Lines 22 and 23), which requires no dependencies other than a working OpenCV installation.
Let’s define the remainder of the VideoStream class:
```python
	def start(self):
		# start the threaded video stream
		return self.stream.start()

	def update(self):
		# grab the next frame from the stream
		self.stream.update()

	def read(self):
		# return the current frame
		return self.stream.read()

	def stop(self):
		# stop the thread and release any resources
		self.stream.stop()
```
As we can see, the start, update, read, and stop methods simply call the corresponding methods of the stream which was instantiated in the constructor.
Now that we have defined the VideoStream class, let’s put it to work in our videostream_demo.py driver script:
```python
# import the necessary packages
from imutils.video import VideoStream
import datetime
import argparse
import imutils
import time
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--picamera", type=int, default=-1,
	help="whether or not the Raspberry Pi camera should be used")
args = vars(ap.parse_args())

# initialize the video stream and allow the camera sensor to warmup
vs = VideoStream(usePiCamera=args["picamera"] > 0).start()
time.sleep(2.0)
```
We start off by importing our required Python packages (Lines 2-7) and parsing our command line arguments (Lines 10-13). We only need a single switch here, --picamera, which is used to indicate whether the Raspberry Pi camera module or the built-in/USB webcam should be used. We’ll default to the built-in/USB webcam.
Lines 16 and 17 instantiate our VideoStream and allow the camera sensor to warmup.
At this point, all the hard work is done! We simply need to start looping over frames from the camera sensor:
```python
# loop over the frames from the video stream
while True:
	# grab the frame from the threaded video stream and resize it
	# to have a maximum width of 400 pixels
	frame = vs.read()
	frame = imutils.resize(frame, width=400)

	# draw the timestamp on the frame
	timestamp = datetime.datetime.now()
	ts = timestamp.strftime("%A %d %B %Y %I:%M:%S%p")
	cv2.putText(frame, ts, (10, frame.shape[0] - 10),
		cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

	# show the frame
	cv2.imshow("Frame", frame)
	key = cv2.waitKey(1) & 0xFF

	# if the `q` key was pressed, break from the loop
	if key == ord("q"):
		break

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
```
On Line 20 we start an infinite loop that continues until we press the q key.
Line 23 calls the read method of VideoStream, which returns the most recently read frame from the stream (again, either a USB webcam stream or the Raspberry Pi camera module).
We then resize the frame (Line 24), draw the current timestamp on it (Lines 27-30), and finally display the frame to our screen (Lines 33 and 34).
This is obviously a trivial example of a video processing pipeline, but keep in mind the goal of this post is to simply demonstrate how we can create a unified interface to both the picamera module and the cv2.VideoCapture function.
Testing out our unified interface
To test out our VideoStream class, I used:
- A Raspberry Pi 2 with both a Raspberry Pi camera module and a USB camera (a Logitech C920 which is plug-and-play compatible with the Pi).
- My OSX laptop with built-in webcam.
To access the built-in camera on my OSX machine, I executed the following command:
$ python videostream_demo.py
As you can see, frames are read from my webcam and displayed to my screen.
I then moved over to my Raspberry Pi where I executed the same command to access the USB camera:
$ python videostream_demo.py
Followed by this command to read frames from the Raspberry Pi camera module:
$ python videostream_demo.py --picamera 1
The results of executing these commands in two separate terminals can be seen below:
As you can see, the only thing that has changed is the command line arguments, where I supply --picamera 1, indicating that I want to use the Raspberry Pi camera module — not a single line of code needed to be modified!
You can see a video demo of both the USB camera and the Raspberry Pi camera module being used simultaneously below:
Summary
This blog post was the third and final installment in our series on increasing FPS processing rate and decreasing I/O latency on both USB cameras and the Raspberry Pi camera module.
We took our implementations of the (threaded) WebcamVideoStream and PiVideoStream classes and unified them into a single VideoStream class, allowing us to seamlessly access either built-in/USB cameras or the Raspberry Pi camera module.
This allows us to construct Python scripts that will run on both laptop/desktop machines along with the Raspberry Pi without having to modify a single line of code, provided of course that we supply some method to indicate which camera we would like to use. This can easily be accomplished using command line arguments, JSON configuration files, etc.
In future blog posts where video processing is performed, I’ll be using the VideoStream class to make the code examples compatible with both your USB camera and the Raspberry Pi camera module — no longer will you have to adjust the code based on your setup!
Anyway, I hope you enjoyed this series of posts. If you found a series of blog posts (rather than one-off posts on a specific topic) beneficial, please let me know in the comments thread.
And also consider signing up for the PyImageSearch Newsletter using the form below to be notified when new blog posts are published!
Kenny
Awesome stuff, Adrian! Thanks for your continual enthusiasm in sharing your breadth of knowledge in computer vision!
Harvey
I would be interested in what you think of this as a vision platform: https://www.kickstarter.com/projects/pine64/pine-a64-first-15-64-bit-single-board-super-comput
Adrian Rosebrock
It seems very similar to the Pi, depending on which model is used. The fact that the processor is faster is nice. But personally, the 64-bit support is what would make me excited. It will be interesting to see how the project evolves.
Remy
Phenomenal work Mr. Rosebrock and great tutorial. I went from approx. 12 FPS to 86 FPS (81 FPS displayed) using a crappy ip camera. (trendnet tv-ip572PL).
Adrian Rosebrock
Very nice! However, it’s important to keep in mind that you’re likely not getting 86 FPS from the physical camera sensor. Instead, your video processing loop is fast enough to process 86 FPS, hence why I use the term FPS processing rate in the blog post. It’s a subtle, but important nuance to keep in mind. 🙂 In any case, congrats on the improvement!
Mats Önnerby
The latest Raspbian comes with V4L2 drivers preinstalled that make the picamera show up the same way as a webcamera. All you need to do is to add a line bcm-2835-v4l2 to /etc/modules and then reboot. I have tested and the program above works in both modes. I also tested the Real-time barcode detection and it works too, even if it’s a bit slow.
Adrian Rosebrock
Awesome, thanks for the tip Mats. I didn’t realize Raspbian Jessie came with V4L2 drivers pre-installed, that’s great.
patrick
Thanks Mats, V4l2 makes things simpler.
BTW the line should be bcm2835-v4l2 (no dash after bcm).
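Mats’ tip with patrick’s correction amounts to a one-line config change; a sketch of what /etc/modules should contain afterwards (the comment line is illustrative):

```
# /etc/modules: kernel modules to load at boot time
bcm2835-v4l2
```

After a reboot, the camera should then show up as a V4L2 device (e.g. /dev/video0) and be readable through cv2.VideoCapture.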
Rishabh
Hi guys, I’m a big newbie at this. Do i write this line in the terminal or in my python code? Thanks!
HAJIRA
Mats Önnerby — Could you please provide me the the way to install the v4l2 drivers…Need it badly for my project.!
Ark Nieckarz
Great tutorials but will this work under Windows?
Meaning using a built-in camera (like on laptops), USB or some other video input stream under Windows.
Adrian Rosebrock
Yes, provided that you can access your webcam stream (either USB or otherwise) using the cv2.VideoCapture method, this code should work with Windows.
Bob
Adrian,
Once you define the VideoStream class, you apparently load it from the videostream_demo.py with:
from imutils.video import VideoStream
What file is this new class put into or called, and where is it located?
Adrian Rosebrock
I have already implemented the functionality in imutils, my open-source set of OpenCV convenience functions.
You can install imutils using pip:
$ pip install imutils
And from there you can import the VideoStream class like I do in videostream_demo.py
Paul
Don’t forget that this script also requires cv2 (found in opencv)
sudo pip install opencv
Adrian Rosebrock
I do not recommend installing OpenCV via pip just yet. There are a number of optimizations not used in the pip install. You also will not have the additional contrib packages as well. Installing OpenCV on the Raspberry Pi (for the time being) is best done when compiled from source. I have a number of tutorials on this.
patrick
Great !! now I see double….Getting closer to stereoscopic stuff Doctor Rosebrock ?
Always a pleasure going through your tutorials, keep on doing this great work Adrian .
Adrian Rosebrock
I honestly have never done any stereoscopic work before, although that is an avenue I would like to explore 🙂
VIJAYA KUMAR
hello adrian i’m vijaya i want to do my B.E project in digital image processing will you please help me in how to configure the opncv and the python in windows……….thanks in advance…………….
Adrian Rosebrock
Hey Vijaya, congrats on working on your BE project, that’s great. I’m sure you’re excited to graduate. But to be honest, I haven’t used a Windows system in 9+ years and have never setup OpenCV on Windows OS. If you have a question related to Raspbian, Ubuntu, or OSX, I can do my best to point you in the right direction though.
CAO
Hi Adrian,
Do you think this method could work with the DS325 camera from SoftKinetics ?
Adrian Rosebrock
I personally haven’t used that particular camera before, but from what I understand, it’s a 3D camera. I don’t have much experience with 3D sensors, although it’s something that I hope to explore in future blog posts. In short, I can’t give an honest answer to your question.
Bosten
Hi,
So I have installed imutils and am trying to run your program on my mac OS X machine with its built in webcam. For some reason I can’t get idle to work with imutils but it does work with the terminal. Also, when I run your program from python in the terminal, no window displaying the feed shows up. What am I doing wrong? I’m a beginner in all this so there is most likely something I’m doing wrong.
I also have a question about your program. Does it continue displaying the webcam feed? All the other things that I have tried result in a crash in python after about a minute of showing the feed.
Adrian Rosebrock
If you’re not getting a video to show up and the Python script is automatically exiting, then you should double check that your cameras are properly connected to the Pi. I would also start with this blog post on accessing the Raspberry Pi camera. It will give you a good starting point with less complex code.
Marcus
ADrian,
Hey is there a reason why when I get to the line:
frame = vs.read()
it tells me vs is not defined?
Adrian Rosebrock
It sounds like you may have not downloaded the source code to the blog post and are missing part of the videostream.py implementation. Make sure you use the “Downloads” form at the bottom of this post to grab all the code in the post.
Lukas Vosyka
Hi Adrian,
just curious – is there a reason you do not use the resolution for the VideoStream class in case of using a USB webcam. I mean the cv2.VideoCature would support it, kind of like this:
cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
This would make the imutils in this case more versatile, no?
Cheers,
Lukas
Adrian Rosebrock
Great suggestion. Although the problem I’ve run into with this is that not all webcams obey/use the .set settings. Instead, I leave this to the programmer and user of the imutils library to determine if they want to use this functionality or not.
wally
Sorry for the reply to old stuff, but the:
cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
gives:
AttributeError: ‘WebcamVideoStream’ object has no attribute ‘set’
I’m using imutils-0.4.6
The resolution= with the PiCamera module works although it usually “rounds” the height to be something mod 8, i.e. a quarter Picam video frame ends up height 544 instead of 540.
I know this webcam honors the setting with raw cv2 captures, but then the code changes ripple as cv2 requires:
ret, frame = vs.read()
whereas your imutil gets a frame with:
frame = vs.read()
If I’m missing something obvious about Python and CV2 I apologise, but Google brings me back to this blog.
Adrian Rosebrock
You cannot call the .set method on the WebcamVideoStream object. You need to call it on the cv2.VideoCapture object.
MP
Hi Adrian,
i am new to python as per your tutorial i have installed the imutils on my raspberry pi and also created the videostream_demo.py and put all the code shown above in one file.
i guess i am doing wrong here.
can you guide which file need to be created what code goes in which file.
one more question can any usb camera will work i have microsoft lifecam vx-1000 which was lying with me.
thanks,
MP
Adrian Rosebrock
If you’re just getting started learning Python, I would use the “Downloads” form on this page to grab the source code to this post. This will demonstrate how you need to structure your files and where to put the code in each file.
As for your webcam, I have never used the Microsoft Lifecam before. Are you trying to use it on your laptop/desktop or on a Raspberry Pi?
Alexandra
Hello Adrian!
Thanks for all the information you provide! You explain unbelievably well!
I have a question: do you know if it’s possible to access a GigE 5 Allied Vision camera into Python( it’s an IP camera) for further image processing ?
Thanks!
Adrian Rosebrock
I personally don’t have any experience with that camera — but I’ll try to do some IP streaming tutorials in the near future.
Siju
Do you have similar posts on Wifi IP cameras
Adrian Rosebrock
Sorry, no, I do not have any posts for WiFi IP cameras. I may cover that as a future topic but I cannot guarantee if/when that will be.
Andy
How I can replace camera.release? with the Video Stream, because I think you didn’t define this.
Adrian Rosebrock
Great point Andy. You’ll want to call vs.camera.release() before calling cv2.destroyAllWindows.
Jeff Ward
‘PiCamera’ has no ‘release()’ method. It does have ‘close()’.
Jeff Ward
When using WebcamVideoStream, would it be ‘vs.stream.release()’?
Adrian Rosebrock
Correct, I meant to say vs.stream.release(). Thank you for pointing this out.
Marcelo Aragao
Congratulations Adrian! Great blog!
How do I use other resolution?
I have changed:
def __init__(self, src=0, usePiCamera=False, resolution=(1024, 768), framerate=32):
And comment:
#frame = imutils.resize(frame, width=400)
But still 320×240 image resolution
Adrian Rosebrock
You can change the resolution when you initialize the object, like this:
vs = VideoStream(resolution=(1024, 768))
Joanacelle
this solution does not solve the problem of resolution =( help me please
kwseow
did you manage to solve this?
Adrian Rosebrock
A less optimal solution, but one useful for debugging, would be to access the picamera object directly and see if you can modify the resolution during initialization.
vivek
where we create the first code? after installing imutils what are the steps for making class? i didnt get it where it stores? explain briefly
Adrian Rosebrock
Open up a text editor (whichever text editor you prefer) and start inserting the code. I would also suggest that you use the “Downloads” section of this blog post to download the source code — that way you will have a copy of the code that is working. From there you should try to code the example yourself.
H
Hi Adrian, is it possible to apply this to code that is being used to do multi-scale image template matching? I’ve had an attempt and kept getting this error
”The camera is already using port %d ‘ % splitter_port)
picamera.exc.PiCameraAlreadyRecording: The camera is already using port 0 ‘
I think the problem is because I’m trying to put every frame into an array so I can grayscale to template match better.. Do you know a way around this?
Adrian Rosebrock
It sounds like you might have another script/program that is accessing your Raspberry Pi camera module. If you want to perform template matching with the code in this blog post you’ll need to combine the two scripts together.
Chris
Hi Adrian, thanks for your post and a few of the others I have used! I am working with a remote camera on Raspberry Pi. I was planning on sending the frame back to my mac with Pyro4. At first glance, it seems like it might be tricky to get picamera running on OS X but that Pyro4 is trying to deserialize the object I send from my Pi back to a picamera type.
I am doing this to avoid doing heavy processing on the Pi. I eventually need to do face recognition on the frame too which I am doing on the frame I get from cv2. How good was your performance on the Pi doing face detection? Is it worthwhile for me to continue this video streaming approach?
Thanks again!
Adrian Rosebrock
Are you asking whether the Pi is suitable for face detection or face recognition? Face detection can easily be run on the Pi without a problem. Face recognition on the other hand is substantially slower. You would be lucky to get more than 1-2 FPS for face recognition using basic algorithms.
I would consider using a message passing library like zeromq and then passing frames with detected faces to a system with more computational power if you intend on using any type of advanced face recognition algorithms (such as OpenFace).
Nick
I just have to say a big thank you Adrian! I’m doing a project with a Pi + OpenCV and I’ve had several problems but your awesome guides have helped me greatly! Thanks again!
Keep on rocking
Adrian Rosebrock
Thanks Nick, I’m happy I could help 🙂 Have a great day.
vivek
can i get same class for raspberry pi camera with c++ for capturing video and image operations like this one?
help me on this
Adrian Rosebrock
I only offer Python + OpenCV code on this blog, not C++. Perhaps another reader can convert this implementation to C++ for you.
vicky
hi adrian , I am doing my BE project in image processing and i am new to pi can u hlp me how to detect a shape by using pi camera video stream
Adrian Rosebrock
I cover shape detection in this tutorial. You’ll need to utilize the code in this post to access the frames from the Raspberry Pi video stream, then apply the shape detector to each frame. If you’re just getting started with computer vision and OpenCV, I would suggest going through Practical Python and OpenCV.
Biswajit
Hi Adrian. What is the type of the frame in frame = vs.read()? Is it a direct image or not? How can I encode it into base64 or any other string representation of this image? Thanks in advance.
Adrian Rosebrock
The frame itself is a NumPy array with shape (h, w, d) where “h” is the height, “w” is the width, and “d” is the depth. You can use these functions to encode/decode base64.
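As a sketch of the raw-bytes route (no OpenCV required; cv2.imencode to JPEG before base64 would usually give a much smaller payload), where the generated array stands in for an actual vs.read() result:

```python
import base64
import numpy as np

# stand-in for a frame from vs.read(): an (h, w, d) uint8 NumPy array
frame = np.arange(240 * 320 * 3, dtype=np.uint8).reshape(240, 320, 3)

# encode the raw pixel buffer as a base64 ASCII string
payload = base64.b64encode(frame.tobytes()).decode("ascii")

# decode it back; the shape must be transmitted separately since the
# raw buffer carries no dimension information
restored = np.frombuffer(base64.b64decode(payload), dtype=np.uint8)
restored = restored.reshape(frame.shape)
```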
Adrian Rosebrock
Nice tip Christian. I also cover queueing in this post as well.
Anbu
i got the error like
ImportError: No module named webcamvideostream
but i have installed numpy(array)
Adrian Rosebrock
The command would actually be:
$ pip install "picamera[array]"
Not “numpy(array)”.
Also make sure you are running the latest version of imtuils:
$ pip install --upgrade imutils
Pallawi
https://github.com/jrosebr1/imutils/blob/master/imutils/video/webcamvideostream.py
use this link , copy the class code and paste it with your source code and then run the code again.
Reshal
Adrian fantastic work with these Raspberry Pi tutorials. Well done! It is actually amazing with what one can do with the Pi.
I have a few questions, would it be possible to use this and then stream it to a server? The server would be setup on the Pi. And an html page is created which contains the a url to access this feed. If so how would one do it?
The aim is to access this image processed feed through a browser on an android, or ios or pc device.
Adrian Rosebrock
If you want to send the stream directly to another source I would skip OpenCV entirely and use something like gstreamer.
Dustin
Hey Adrian,
Thank you for posting these tutorials. They have been incredibly helpful for my senior design project. We are creating a robot that tracks swimmers moving in a pool to help monitor their form.
I am encountering an intermittent issue while using this class with the PiCamera. When I run the demo script the first time I boot, everything works fine. I get this error if I try to close the program and re-run it:
frame = imutils.resize(frame,width=800)
File “/usr/local/lib/python2.7/dist-packages/imutils/convenience.py”, line 69, in resize
(h, w) = image.shape[:2]
AttributeError: ‘NoneType’ object has no attribute ‘shape’
This error doesn’t occur when I run the script using a webcam, only with the PiCamera. Do you have any idea why this occurs?
Adrian Rosebrock
This error could be due to a variety of reasons. To start, have you updated your instantiation of VideoStream to access the Raspberry Pi camera module? Can you access your Raspberry Pi camera module via the command line?
Twinkle
Thank you for your previous post that helped in increasing the fps of my raspberry pi camera.
However while running videostream_demo.py, I am getting the following error:
What could be the solution?
Thanks in advance.
Twinkle
I even tried running the program through command line , still same error occurs
Paul
Make sure you see a /dev/video0 or like device. I had the same problem with my USB cam not being detected. I had to replug it to get it to run.
Adrian Rosebrock
The problem here is that your system is not able to correctly access your webcam/video stream/Raspberry Pi camera. Please see this post on NoneType errors for more information.
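Building on that advice, one defensive sketch (the helper name is mine, not part of imutils) fails fast when a stream hands back None, instead of letting a later imutils.resize call raise the AttributeError:

```python
def read_frame_checked(stream):
    # vs.read() returns None when the camera could not be accessed or
    # has not produced a frame yet; surface that as a clear error
    frame = stream.read()
    if frame is None:
        raise RuntimeError(
            "no frame read from stream -- check the camera connection "
            "and that no other process is using it")
    return frame
```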
Vince
Thank you Adrian for this amazing write up and for the write up on how to install opencv 3 for python. 10/10.
I got this working, however when I try to display the image with a larger resolution, the performance drops fps wise. If I comment out the resize and setup up the resolution when declaring vs, I get what I want resolution wise, but the rates drop. Any ideas what I am doing wrong? Is opencv just this slow with the pi? If I just use pythons picam module, it can show this resolution at high fps and you mention getting high fps with your methods here.
Thanks! (Using Pi model 2B with picam )
Adrian Rosebrock
The higher the resolution, the more data has to be read from the camera. The more data, the slower the performance. This is simply a side effect of using the picamera module.
Vince
While I agree with the statement, more data = slower performance, as I said earlier using just picamera module I get fast fps at high resoultion, but I dont have access to the frame, hence where opencv comes into play.
picamrea script:
from picamera import PiCamera
import time
camera = PiCamera()
camera.start_preview()
time.sleep(10)
camera.stop_preview()
With this script I get full resolution at a fast fps, I would say 30-60.
When I run the script in your post, when I go to full resolution, the system bogs down. I have to let resolution = (200,200) to get anything acceptable (low lag, decent fps).
Im just wondering why, if the picam and picamera module are capable of fast performance, why I am not getting that with opencv? Is it my execution? What fps are expected just showing an image within opencv at full picam resoution? Just want to make sure this is what is expected using opencv on the pi or if I missed a setting.
I tried using a usb cam, with the same results.
Adrian Rosebrock
To be honest, this is to be expected when writing video processing scripts with OpenCV. The frames have to be read from the camera, converted to a NumPy array, and displayed to your screen. As far as I understand, start_preview does not have to convert to a NumPy array and can instead display the stream directly to your desktop. For more details I would suggest asking the picamera developers.
Franko
How can I import raspi_cam_iterface into opencv on another computer
Jim Liu
Hi Adrian,
When I used the VideoStream class for the input video file, the output of ‘frame=vs.read()’ is None. Debugging into the function read(self) of WebcamVideoStream, I found self.frame is None. Can you help me on this? Thanks.
Adrian Rosebrock
It sounds like OpenCV cannot access your webcam. Please double-check that your webcam is properly connected and you can access it via the cv2.VideoCapture function.
Syed Tauseef
This guide really helped me out for my project. Now I want to get a resolution of 1280×720 @ 25 FPS. Where should I change the code? Only in VideoStream, and how? Please explain.
I would also like to get your opinion. My project is avoiding obstacles using optical flow, later adding SURF for robust detection in quadcopters. For now I am concentrating purely on optical flow and detecting objects. So I basically get video frames from the Pi camera using a Pi 3 and calculate the optical flow (calcOpticalFlowPyrLK) for object detection in a selected ROI. My question is: how can we reduce the computation time? Is there any way of threading? Since I will be using SURF in the future.
Adrian Rosebrock
Are you trying to do this with your USB webcam? Or the Raspberry Pi camera module?
As for your project, I’m actually a little worried that 2D algorithms wouldn’t be sufficient. Quadcopters can move quite fast so you’ll need to balance speed with accuracy. The other issue here is that avoiding obstacles best works with stereo/depth so you can compute the depth of the image. I also think you should include other sensors into the copter (radar for instance). A purely CV approach would be very tricky to build and you would get better results if you incorporated multiple sensors and didn’t rely strictly on CV.
Syed Tauseef
First of all, thanks for the replies!
I am going to work with the Raspberry Pi camera. Can a Raspberry Pi 3 handle a hybrid algorithm of optical flow and SURF without using much computation time?
I am planning to incorporate an ultrasonic sensor with this. I have payload constraints, so using two cameras is not possible in my quad.
Adrian Rosebrock
It’s worth testing, but realistically applying optical flow and real-time keypoint matching would likely be too much for the Raspberry Pi.
Syed Tauseef
Optical flow runs smoothly; I should try it along with keypoint matching. Only the VideoStream hangs and lags, even with threading, while CPU usage is approximately 50%. I don't know why it lags. I will update you with both running as a hybrid algorithm.
daniel
I'm using the VideoStream script. Can I rotate the video output from the Raspi cam?
Adrian Rosebrock
Yes. Take a look at the cv2.warpAffine and cv2.flip functions. I would also suggest reading through my book, Practical Python and OpenCV, where I discuss the basics of computer vision and image processing.
olivia
Hello again Adrian, thanks for saving my life.
Adrian, is your code compatible with the Logitech webcam C270, or just the C920?
I'm using a Raspberry Pi 3 Model B.
Adrian Rosebrock
Yes, the code should work with the C270. If you are getting NoneType errors please refer to this blog post.
olivia
thank you so much adrian
David
Hi Adrian,
I found a big difference between picamera and OpenCV image capture. When I'm using the VideoCapture class, I can modify the array with my own pixels (for example, I can overlay a photo on the image after converting it to an array too). But with the picamera.array class, the array is defined as read-only, so my technique to overlay doesn't work.
I also found my video refreshed faster with VideoCapture.
David
fariborz
Hello Mr Adrian
Thank you for a good tutorial
I have a question
How can I use the PiCamera library's features and commands when using this method for the picamera?
For example, the camera.iso = 100 command, or the framerate command, or other commands for this library.
Because when I use this method to capture from the camera, I cannot use the rest of the library's commands in the program.
Thanks if you can guide me.
Adrian Rosebrock
I would suggest creating your own “VideoStream” and/or “PiVideoStream” class and then modifying either (1) the constructor to accept any relevant parameters you would need or (2) modifying the class directly.
Additionally, before you call .start you could also reinstantiate the self.stream object as well. I hope that helps!
Ben
Hi Adrian, I am very new to this, so please excuse my ignorance.
You start by saying “get started by defining the VideoStream class.” Can you explain how to do this? Do we create a new file called VideoStream with our chosen text editor, or am I missing something here?
Adrian Rosebrock
Hey Ben, make sure you download the imutils package, which includes the VideoStream class.
You can install it via:
$ pip install imutils
Jamie
Hey Adrian, great work. Your articles are always well written and easy to follow.
I’ve been running into some trouble attempting to stream video from the Raspberry Pi over a network to a pipe created on another machine. Have you ever explored this method of transmitting and consuming a video feed with opencv?
Any advice on the following would be greatly appreciated. https://stackoverflow.com/questions/48611517/os-x-10-12-6-netcat-nc-cannot-use-mkfifo-named-pipe-raspberry-pi-3-camera-stre
Adrian Rosebrock
Are you trying to stream all frames as fast as possible to a separate system? Or just stream a few frames such as frames that have certain objects, etc.?
Jamie
Adrian, the idea was to use a PI’s camera as an input over a network and stream the data to a computer that is capable of processing multiple feeds simultaneously. So the best quality as fast as possible.
On the server side of things that’s where I’d want to process frames, grab faces, and store the frame plus the faces it captured.
I think I found a recipe that could do the trick. Check out 4.9. Capturing to a network stream (http://picamera.readthedocs.io/en/release-1.9/recipes1.html#capturing-to-a-network-stream) from the Raspberry PI…
The only problem is that they’re not pulling in the stream with openCV. I assume I can adapt the method you illustrate in this article (https://pyimagesearch.com/2015/03/30/accessing-the-raspberry-pi-camera-with-opencv-and-python/) to do that. Step 6, test_python.py Line 17.
Am I on the right track, do you think that would work?
Let me know if I’m making any incorrect assumptions.
Jamie
I’m trying to stream all frames as fast as possible to a separate system. I left another comment about a possible approach but it looks like it was deleted.
Any direction is appreciated. Thanks.
Adrian Rosebrock
Hey Jamie — PyImageSearch gets a lot of comments and due to spam reasons, I need to moderate them all. I cannot spend my whole day inside the blog waiting for new comments to come in so I only go through them once every 48-72 hours. I appreciate your patience. I see you have resolved the issue in another comment. I have replied there as well.
Jamie
Adrian, ended up successfully getting this working. https://stackoverflow.com/a/48675107/2355051
Any advice for optimizing the stream at larger resolutions would be appreciated.
Thanks again for showcasing your research and tests, without them it would have taken forever to find a solution to illustrate this proof of concept.
Adrian Rosebrock
Hey Jamie, congrats on getting the stream working, nice job! Take a look at gstreamer and see if you can stream the raw capture directly through gstreamer to your endpoint. This will enable you to encode and compress the stream.
Phil
I’m getting this error when I try to run the code in this tutorial. Any idea what’s going on?
Traceback (most recent call last):
File “test.py”, line 18, in
…
from picamera.array import PiRGBArray
ImportError: No module named picamera.array
Phil
Ignore this. I somehow missed installing python-picamera!
Adrian Rosebrock
Congrats on resolving the issue 🙂
Phil
How can I write the captured stream to a file? With cv2, I used to use cv2.VideoWriter_fourcc and cv2.VideoWriter, but with this code, it just produces an empty .avi file.
Adrian Rosebrock
If cv2.VideoWriter is producing an empty video file, it's likely that your system does not have the proper video codecs installed. Working with OpenCV and output video can be a pain, but I do my best to detail the process in this blog post.
Daoud Ghannam
Hello,
Thank you for the tutorials you provide.
I followed each step above, and when I run videostream_demo.py I get the following error:
File “videostream_demo.py”, line 2, in
from imutils.video import VideoStream
ImportError: No module named imutils.video
I'm not sure where the problem is, even though I installed imutils and updated, and I'm running from cv.
What should I do?
Adrian Rosebrock
Make sure you installed “imutils” into the “cv” Python virtual environment:
Daoud Ghannam
I already did this before, but it didn't work :/
Daoud Ghannam
(cv) pi@raspberrypi:~ $ sudo pip install imutils
Requirement already satisfied: imutils in /usr/local/lib/python3.5/dist-packages
Adrian Rosebrock
Leave off the “sudo”.
Rafael
Can I use this library to connect to an RTSP stream from an IP camera?
Adrian Rosebrock
Yes, update the src of the VideoStream to point to your RTSP stream. That should work.
Fezile Stofile
I am new to OpenCV. Specifically where or on which line would I have to update the src if my RTSP url is rtsp://192.168.1.31/user=admin&password=&channel=1&stream=0.sdp?
Adrian Rosebrock
Change the src of the VideoStream to your URL. From there it should work.
Ebrahim
Hello Adrian
I don't understand this line, and my program has a problem at this line.
ap.add_argument(“-p”, “–picamera”, type=int, default=-1,
help=”whether or not the Raspberry Pi camera should be used”)
>>
usage: q.py [-h] -p PROTOTXT -m MODEL [-c CONFIDENCE]
q.py: error: the following arguments are required: -p/–prototxt, -m/–model
please help me
Adrian Rosebrock
If you are new to command line arguments and how to use them, that’s okay, but make sure you read up on them before continuing.
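For reference, the --picamera argument from this post is optional; Ebrahim's error comes from a different script (q.py) that requires -p/--prototxt and -m/--model. A minimal sketch of how the optional flag from this post parses:

```python
import argparse

# Hedged sketch: the optional --picamera flag from this post. default=-1
# means "use the USB webcam" unless a positive value is passed.
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--picamera", type=int, default=-1,
    help="whether or not the Raspberry Pi camera should be used")

# Parse an explicit argument list instead of sys.argv, purely for illustration:
args = vars(ap.parse_args(["--picamera", "1"]))
usePiCamera = args["picamera"] > 0
```

Because a default is supplied, the script runs without the flag; required arguments (like -p/--prototxt in q.py) have no default, which is what triggers the "arguments are required" error.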
Gianni
Hi Adrian,
This is a very good tutorial; it fixed a lot of my problems when doing real-time tracking.
I have a question related to your tutorial series.
I have an issue with naming the cameras. I have two cameras: one pointing right and the other left. I define in the program which camera is which.
The index of VideoCapture always changes when I restart my embedded system, so the program always mixes up left and right.
I tried to find online how to define the index and how to relate it to my COM port, but I can't find anything useful.
Because the images from the two cameras are similar, I can't use image processing to distinguish between the two; the only thing I can use is the COM port.
Do you know how to relate the COM port to the OpenCV video capture index?
Many Many thanks, and please keep it up !!!!!!!!!!!!!!!
Gianni
So my simplified question is: can we define the camera index in VideoCapture by COM port (USB)?
Adrian Rosebrock
Hi Gianni, I'm glad you found the code useful! However, I'm not sure why your cameras may be changing indexes. I did a quick Google search for “opencv videocapture index changes” and it appears that others are encountering this issue as well. I read a few of the answers, and it seems like it may be an OS issue, not OpenCV itself. I'm sorry I don't have the answer to the question, but rest assured, you're not the only OpenCV user with this problem. If you find out what the problem was, please come back and let us know.
Prajesh Sanghvi
Thanks a million, Adrian, to you and your team! You guys are amazing!
Adrian Rosebrock
Thanks Prajesh 🙂
LONG ZHAI
If I use a USB webcam, PiVideoStream throws an error.
Adrian Rosebrock
Could you share your exact error message?
abdul
import cv2 is not working (no module found).
Adrian Rosebrock
Make sure you have correctly installed OpenCV on your system.
Fulvio Mascara
Hi Adrian,
First of all, congratulations on the remarkable work on your blog, opening our minds to the computer vision world with simplicity and easy explanations.
I'm building facial recognition with a Raspberry Pi, and I have a doubt about improving FPS:
Using a picamera, do I need to implement the code from your second article to improve the FPS (opening threads for I/O), or do I just need to call PiVideoStream from your imutils lib instead of OpenCV's VideoCapture?
Thanks in advance.
Best Regards,
Fulvio
Adrian Rosebrock
You can just use the “VideoStream” class — that class will automatically call “PiVideoStream” and will use threads under the hood.
Macoy
Hi! Is it possible to run this code using a USB camera? And if so, how could I implement it?
Help is much appreciated! Thank you!
Adrian Rosebrock
All code in this blog post is compatible with both USB cameras and the Raspberry Pi camera module. Are you running into an issue with the code?
Macoy
vs = VideoStream(usePiCamera=args[“picamera”] > 0).start()
Is this line necessary if I will use USB camera?
Adrian Rosebrock
That line checks to see whether or not the Raspberry Pi camera is to be used. The flag is set via command line argument. You don't need to modify the code to use a USB camera; just pass --picamera 0 when executing the script.
Dayle
Hi Adrian,
I can’t say thank you enough for all the work you do and highly recommend your books to anyone diving in to computer vision/deep learning.
I find VideoStream works brilliantly on a Raspberry Pi for detecting and tracking objects with a low resolution video source. Based on movement of objects detected in my low resolution video stream, I want to grab images from a higher resolution video stream for identification with a CNN.
My problem is I find OpenCV consumes a lot of processing power grabbing and decoding every high resolution video frame it receives rather than passively waiting to grab and decode just the one frame you want.
Any thoughts on addressing this problem?
Thanks Again
Adrian Rosebrock
I’m glad you’re finding the VideoStream class helpful, Dayle!
As far as decoding the high resolution frame goes, have you tried explicitly setting the resolution via:
vs = VideoStream(usePiCamera=True, resolution=(320, 240))
Dayle
Hi Adrian,
Thanks for replying so quickly. I got sidetracked down another wormhole and can finally get back to my original problem: this one.
I use VideoStream and set the resolution in the URL. For example the low resolution video feed I do motion detection on is
rtspurl = “rtsp://192.168.2.24/axis-media/media.amp?videocodec=h264&resolution=320×180&fps=30”
vs = VideoStream(src=rtspurl, usePiCamera=False).start()
image = vs.read()
That works brilliantly.
My problem is I also want to take the occasional hi resolution 1280×720 image snapshot from another video feed from the same camera and then identify the objects in it using the CNN approaches you have been teaching.
The issue, as I understand it, is that to use OpenCV, or your implementation of it inside VideoStream, you cannot simply sample an arbitrary image frame on demand. Instead you must read each video frame in sequence, discarding every frame and copying the odd one you actually want. In my case, around 25% of my Pi's CPU is being used just to read hi-res video images and immediately discard them. Sadly I can't computationally afford that.
I'm wondering if there is a process-efficient way to sample a single image frame from an RTSP source. I'm also working with gstreamer and ffmpeg, if that is of any help.
Once again,
Thanks
Adrian Rosebrock
I’m not sure what you mean by “sampling an arbitrary image frame on demand”. The VideoStream class is threaded and will constantly keep fetching frames from the camera. You then read the frame from the main “while” loop of your code. You’ll need to insert logic to handle saving any frames — once “vs.read” is called then a new frame is grabbed from the VideoStream class.
Dayle
Progress!!!
My understanding is VideoStream() uses VideoCapture.read() to ingest every video frame and convert it to an image, which is then accessed via VideoStream.read(), which is great when you need to analyze every video frame, but computationally inefficient if you only need to access the occasional video frame.
My solution is to ingest every video frame using VideoCapture.grab() and only convert those frames I want to an image using VideoCapture.retrieve().
For 1280×720 H.264 video at 15 fps, CPU usage on a RPi3 for waiting to capture an image goes from 19% down to 11%, while a similar 640×360 video improves from 6% down to 3%.
Does this make sense as an improvement to VideoStream()?
Adrian Rosebrock
VideoCapture will grab each and every frame from the stream and will block execution until a new frame is ready. VideoStream runs in a thread behind the scenes, always grabbing the most recent frame and always having it ready for you. It’s a non-blocking operation.
Joy
Hi Adrian,
Thanks for the tutorial, it helped a lot.
But I had a problem while adding some codes.
I want to capture a frame, save it, and send it to email while running OpenVINO using an NCS, but I have an error where it doesn't save any data (jpg). When I searched for solutions, I read that cv2.VideoCapture(0) does not work with the PiCamera.
Could you help me use VideoCapture with the PiCamera instead of using “if usePiCamera”?
Joy
Adrian Rosebrock
The cv2.VideoCapture function will work with the RPi camera module if you have the V4L2 drivers installed. Otherwise, you can use VideoStream(usePiCamera=True) to access your RPi camera module.
Ityav
Hello,
Python 2.7 is gone now.
How can I modify this code to run in Python 3.x?
Adrian Rosebrock
This code is compatible with Python 3.
Nelson
How can I rotate my Picamera 180 degrees?
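A minimal sketch of one way to do this in the frame loop; the rotate_180 helper is a name introduced here, and reversing both array axes is equivalent to cv2.flip(frame, -1):

```python
import numpy as np

def rotate_180(frame):
    # Reversing both axes rotates the image 180 degrees;
    # equivalent to cv2.flip(frame, -1) without needing OpenCV here.
    return frame[::-1, ::-1]

# In the usual loop it would be:
#   frame = vs.read()
#   frame = rotate_180(frame)
demo = np.array([[1, 2], [3, 4]])
rotated = rotate_180(demo)
```

With the Pi camera specifically, the picamera library also exposes a rotation attribute (camera.rotation = 180), which rotates frames at the source instead of in NumPy.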