I’ll keep the introduction to today’s post short, since I think the title of this post and GIF animation above speak for themselves.
Inside this post, I’ll demonstrate how to attach multiple cameras to your Raspberry Pi…and access all of them using a single Python script.
Regardless of whether your setup includes:
- Multiple USB webcams.
- Or the Raspberry Pi camera module + additional USB cameras…
…the code detailed in this post will allow you to access all of your video streams — and perform motion detection on each of them!
Best of all, our implementation of multiple camera access with the Raspberry Pi and OpenCV is capable of running in real-time (or near real-time, depending on the number of cameras you have attached), making it perfect for creating your own multi-camera home surveillance system.
Keep reading to learn more.
Multiple cameras with the Raspberry Pi and OpenCV
When building a Raspberry Pi setup to leverage multiple cameras, you have two options:
- Simply use multiple USB web cams.
- Or use one Raspberry Pi camera module and at least one USB web camera.
The Raspberry Pi board has only one camera port, so you will not be able to use multiple Raspberry Pi camera boards (unless you want to perform some extensive hacks to your Pi). So in order to attach multiple cameras to your Pi, you’ll need to leverage at least one (if not more) USB cameras.
That said, in order to build my own multi-camera Raspberry Pi setup, I ended up using:
- A Raspberry Pi camera module + camera housing (optional). We can interface with the camera using the picamera Python package or (preferably) the threaded VideoStream class defined in a previous blog post.
- A Logitech C920 webcam that is plug-and-play compatible with the Raspberry Pi. We can access this camera using either the cv2.VideoCapture function built into OpenCV or the VideoStream class from this lesson.
You can see an example of my setup below:
Here we can see my Raspberry Pi 2, along with the Raspberry Pi camera module (sitting on top of the Pi 2) and my Logitech C920 webcam.
The Raspberry Pi camera module is pointing towards my apartment door to monitor anyone that is entering and leaving, while the USB webcam is pointed towards the kitchen, observing any activity that may be going on:
Ignore the electrical tape and cardboard on the USB camera — this was from a previous experiment which should (hopefully) be published on the PyImageSearch blog soon.
Finally, you can see an example of both video feeds displayed to my Raspberry Pi in the image below:
In the remainder of this blog post, we’ll define a simple motion detection class that can detect if a person/object is moving in the field of view of a given camera. We’ll then write a Python driver script that instantiates our two video streams and performs motion detection in both of them.
As we’ll see, by using the threaded video stream capture classes (where one thread per camera is dedicated to perform I/O operations, allowing the main program thread to continue unblocked), we can easily get our motion detectors for multiple cameras to run in real-time on the Raspberry Pi 2.
Let’s go ahead and get started by defining the simple motion detector class.
Defining our simple motion detector
In this section, we’ll build a simple Python class that can be used to detect motion in the field of view of a given camera.
For efficiency, this class will assume there is only one object moving in the camera view at a time — in future blog posts, we’ll look at more advanced motion detection and background subtraction methods to track multiple objects.
In fact, we have already (partially) reviewed this motion detection method in our previous lesson, home surveillance and motion detection with the Raspberry Pi, Python, OpenCV, and Dropbox — we are now formalizing this implementation into a reusable class rather than just inline code.
Let’s get started by opening a new file, naming it basicmotiondetector.py, and adding in the following code:
# import the necessary packages
import imutils
import cv2

class BasicMotionDetector:
    def __init__(self, accumWeight=0.5, deltaThresh=5, minArea=5000):
        # determine the OpenCV version, followed by storing the
        # frame accumulation weight, the fixed threshold for
        # the delta image, and finally the minimum area required
        # for "motion" to be reported
        self.isv2 = imutils.is_cv2()
        self.accumWeight = accumWeight
        self.deltaThresh = deltaThresh
        self.minArea = minArea

        # initialize the average image for motion detection
        self.avg = None
Line 6 defines the constructor to our BasicMotionDetector class. The constructor accepts three optional keyword arguments, which include:
- accumWeight: The floating point value used for taking the weighted average between the current frame and the previous set of frames. A larger accumWeight will result in the background model having less “memory” and quickly “forgetting” what previous frames looked like. Using a high value of accumWeight is useful if you expect lots of motion in a short amount of time. Conversely, smaller values of accumWeight give more weight to the background model than the current frame, allowing you to detect larger changes in the foreground. We’ll use a default value of 0.5 in this example; just keep in mind that this is a tunable parameter that you should consider working with.
- deltaThresh: After computing the difference between the current frame and the background model, we’ll need to apply thresholding to find regions in a frame that contain motion — this deltaThresh value is used for the thresholding. Smaller values of deltaThresh will detect more motion, while larger values will detect less motion.
- minArea: After applying thresholding, we’ll be left with a binary image that we extract contours from. In order to handle noise and ignore small regions of motion, we can use the minArea parameter. Any region with area > minArea is labeled as “motion”; otherwise, it is ignored.
Finally, Line 17 initializes avg, which is simply the running, weighted average of the previous frames the BasicMotionDetector has seen.
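To make these knobs concrete, here is how you might instantiate a detector that is more sensitive to small, subtle motion (the parameter values below are purely illustrative, not ones used later in this post):

from pyimagesearch.basicmotiondetector import BasicMotionDetector

# a more "sensitive" detector: lower pixel threshold and a smaller
# minimum contour area than the defaults of 5 and 5000
md = BasicMotionDetector(accumWeight=0.5, deltaThresh=3, minArea=2000)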
Let’s move on to our update method:
    def update(self, image):
        # initialize the list of locations containing motion
        locs = []

        # if the average image is None, initialize it
        if self.avg is None:
            self.avg = image.astype("float")
            return locs

        # otherwise, accumulate the weighted average between
        # the current frame and the previous frames, then compute
        # the pixel-wise differences between the current frame
        # and running average
        cv2.accumulateWeighted(image, self.avg, self.accumWeight)
        frameDelta = cv2.absdiff(image, cv2.convertScaleAbs(self.avg))
The update function requires a single parameter — the image we want to detect motion in.
Line 21 initializes locs, the list of contours that correspond to motion locations in the image. However, if avg has not been initialized (Lines 24-26), we set avg to the current frame and return from the method.
Otherwise, avg has already been initialized, so we accumulate the running, weighted average between the previous frames and the current frame, using the accumWeight value supplied to the constructor (Line 32). Taking the absolute value difference between the current frame and the running average yields regions of the image that contain motion — we call this our delta image.
However, in order to actually detect regions in our delta image that contain motion, we first need to apply thresholding and contour detection:
        # threshold the delta image and apply a series of dilations
        # to help fill in holes
        thresh = cv2.threshold(frameDelta, self.deltaThresh, 255,
            cv2.THRESH_BINARY)[1]
        thresh = cv2.dilate(thresh, None, iterations=2)

        # find contours in the thresholded image, taking care to
        # use the appropriate version of OpenCV
        cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
            cv2.CHAIN_APPROX_SIMPLE)
        cnts = imutils.grab_contours(cnts)

        # loop over the contours
        for c in cnts:
            # only add the contour to the locations list if it
            # exceeds the minimum area
            if cv2.contourArea(c) > self.minArea:
                locs.append(c)

        # return the set of locations
        return locs
Calling cv2.threshold using the supplied value of deltaThresh allows us to binarize the delta image, which we then find contours in (Lines 37-45).
Note: Take special care when examining Lines 43-45. As we know, the cv2.findContours return signature changed between OpenCV 2.4, 3, and 4. This code block allows us to use cv2.findContours in OpenCV 2.4, 3, and 4 without having to change a line of code (or worry about versioning issues).
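If you are curious, the version handling that imutils.grab_contours performs boils down to something like the following (a simplified sketch of the idea, not the library's exact source):

# cv2.findContours returns (contours, hierarchy) in OpenCV 2.4 and 4+,
# but (image, contours, hierarchy) in OpenCV 3 -- grab the contour list
# from whichever position it occupies
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]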
Finally, Lines 48-52 loop over the detected contours, check to see if their area is greater than the supplied minArea, and if so, update the locs list.
The list of contours containing motion is then returned to the calling method on Line 55.
Note: Again, for a more detailed review of the motion detection algorithm, please see the home surveillance tutorial.
Accessing multiple cameras on the Raspberry Pi
Now that our BasicMotionDetector class has been defined, we are ready to create the multi_cam_motion.py driver script to access multiple cameras with the Raspberry Pi — and apply motion detection to each of the video streams.
Let’s go ahead and get started defining our driver script:
# import the necessary packages
from __future__ import print_function
from pyimagesearch.basicmotiondetector import BasicMotionDetector
from imutils.video import VideoStream
import numpy as np
import datetime
import imutils
import time
import cv2

# initialize the video streams and allow them to warmup
print("[INFO] starting cameras...")
webcam = VideoStream(src=0).start()
picam = VideoStream(usePiCamera=True).start()
time.sleep(2.0)

# initialize the two motion detectors, along with the total
# number of frames read
camMotion = BasicMotionDetector()
piMotion = BasicMotionDetector()
total = 0
We start off on Lines 2-9 by importing our required Python packages. Notice how we have placed the BasicMotionDetector class inside the pyimagesearch module for organizational purposes. We also import VideoStream, our threaded video stream class that is capable of accessing both the Raspberry Pi camera module and built-in/USB web cameras.
The VideoStream class is part of the imutils package, so if you do not already have it installed, just execute the following command:
$ pip install imutils
Line 13 initializes our USB webcam VideoStream class, while Line 14 initializes our Raspberry Pi camera module VideoStream class (by specifying usePiCamera=True).
In the case that you do not want to use the Raspberry Pi camera module and instead want to leverage two USB cameras, simply change Lines 13 and 14 to:
webcam1 = VideoStream(src=0).start()
webcam2 = VideoStream(src=1).start()
Where the src parameter controls the index of the camera on your machine. Also note that you’ll have to replace webcam and picam with webcam1 and webcam2, respectively, throughout the rest of this script as well.
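Putting those changes together, a two-USB-camera variant of the setup and loop header might look something like this (a sketch; the detector names are arbitrary and it assumes your cameras enumerate as indexes 0 and 1):

# two USB webcams instead of a webcam + Raspberry Pi camera module
webcam1 = VideoStream(src=0).start()
webcam2 = VideoStream(src=1).start()
time.sleep(2.0)

# one motion detector per camera
motion1 = BasicMotionDetector()
motion2 = BasicMotionDetector()

# ...and later in the script, pair each stream with its detector
for (stream, motion) in zip((webcam1, webcam2), (motion1, motion2)):
    frame = stream.read()
    # (the rest of the loop body stays the same)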
Finally, Lines 19 and 20 instantiate two BasicMotionDetector objects, one for the USB camera and a second for the Raspberry Pi camera module.
We are now ready to perform motion detection in both video feeds:
# loop over frames from the video streams
while True:
    # initialize the list of frames that have been processed
    frames = []

    # loop over the frames and their respective motion detectors
    for (stream, motion) in zip((webcam, picam), (camMotion, piMotion)):
        # read the next frame from the video stream and resize
        # it to have a maximum width of 400 pixels
        frame = stream.read()
        frame = imutils.resize(frame, width=400)

        # convert the frame to grayscale, blur it slightly, update
        # the motion detector
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (21, 21), 0)
        locs = motion.update(gray)

        # we should allow the motion detector to "run" for a bit
        # and accumulate a set of frames to form a nice average
        if total < 32:
            frames.append(frame)
            continue
On Line 24 we start an infinite loop that is used to constantly poll frames from our (two) camera sensors. We initialize a list of such frames on Line 26.
Then, Line 29 defines a for loop that loops over each of the video streams and motion detectors, respectively. We use the stream to read a frame from our camera sensor and then resize the frame to have a fixed width of 400 pixels.
Further pre-processing is performed on Lines 37 and 38 by converting the frame to grayscale and applying a Gaussian smoothing operation to reduce high frequency noise. Finally, the processed frame is passed to our motion detector where the actual motion detection is performed (Line 39).
However, it’s important to let our motion detector “run” for a bit so that it can obtain an accurate running average of what our background “looks like”. We’ll allow 32 frames to be used in the average background computation before applying any motion detection (Lines 43-45).
After we have allowed 32 frames to be passed into our BasicMotionDetector objects, we can check to see if any motion was detected:
        # otherwise, check to see if motion was detected
        if len(locs) > 0:
            # initialize the minimum and maximum (x, y)-coordinates,
            # respectively
            (minX, minY) = (np.inf, np.inf)
            (maxX, maxY) = (-np.inf, -np.inf)

            # loop over the locations of motion and accumulate the
            # minimum and maximum locations of the bounding boxes
            for l in locs:
                (x, y, w, h) = cv2.boundingRect(l)
                (minX, maxX) = (min(minX, x), max(maxX, x + w))
                (minY, maxY) = (min(minY, y), max(maxY, y + h))

            # draw the bounding box
            cv2.rectangle(frame, (minX, minY), (maxX, maxY),
                (0, 0, 255), 3)

        # update the frames list
        frames.append(frame)
Line 48 checks to see if motion was detected in the frame of the current video stream.
Provided that motion was detected, we initialize the minimum and maximum (x, y)-coordinates associated with the contours (i.e., locs). We then loop over the contours individually and use them to determine the smallest bounding box that encompasses all contours (Lines 51-59).
The bounding box is then drawn surrounding the motion region on Lines 62 and 63, followed by our list of frames being updated on Line 66.
Again, the code detailed in this blog post assumes that there is only one object/person moving at a time in the given frame, hence this approach will obtain the desired result. However, if there are multiple moving objects, then we’ll need to use more advanced background subtraction and tracking methods — future blog posts on PyImageSearch will cover how to perform multi-object tracking.
The last step is to display our frames to our screen:
    # increment the total number of frames read and grab the
    # current timestamp
    total += 1
    timestamp = datetime.datetime.now()
    ts = timestamp.strftime("%A %d %B %Y %I:%M:%S%p")

    # loop over the frames a second time
    for (frame, name) in zip(frames, ("Webcam", "Picamera")):
        # draw the timestamp on the frame and display it
        cv2.putText(frame, ts, (10, frame.shape[0] - 10),
            cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)
        cv2.imshow(name, frame)

    # check to see if a key was pressed
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

# do a bit of cleanup
print("[INFO] cleaning up...")
cv2.destroyAllWindows()
webcam.stop()
picam.stop()
Lines 70-72 increment the total number of frames processed, then grab and format the current timestamp.
We then loop over each of the frames we have processed for motion on Line 75 and display them to our screen.
Finally, Lines 82-86 check to see if the q key is pressed, indicating that we should break from the frame reading loop. Lines 89-92 then perform a bit of cleanup.
Motion detection on the Raspberry Pi with multiple cameras
To see our multiple camera motion detector run on the Raspberry Pi, just execute the following command:
$ python multi_cam_motion.py
I have included a series of “highlight frames” in the following GIF that demonstrate our multi-camera motion detector in action:
Notice how I start in the kitchen, open a cabinet, reach for a mug, and head to the sink to fill the mug up with water — this series of actions and motion are detected on the first camera.
Finally, I head to the trash can to throw out a paper towel before exiting the frame view of the second camera.
A full video demo of multiple camera access using the Raspberry Pi can be seen below:
What's next? We recommend PyImageSearch University.
86 total classes • 115+ hours of on-demand code walkthrough videos • Last updated: October 2024
★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled
I strongly believe that if you had the right teacher you could master computer vision and deep learning.
Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science?
That’s not the case.
All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.
If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery.
Inside PyImageSearch University you'll find:
- ✓ 86 courses on essential computer vision, deep learning, and OpenCV topics
- ✓ 86 Certificates of Completion
- ✓ 115+ hours of on-demand video
- ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques
- ✓ Pre-configured Jupyter Notebooks in Google Colab
- ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!)
- ✓ Access to centralized code repos for all 540+ tutorials on PyImageSearch
- ✓ Easy one-click downloads for code, datasets, pre-trained models, etc.
- ✓ Access on mobile, laptop, desktop, etc.
Summary
In this blog post, we learned how to access multiple cameras using the Raspberry Pi 2, OpenCV, and Python.
When accessing multiple cameras on the Raspberry Pi, you have two choices when constructing your setup:
- Either use multiple USB webcams.
- Or use a single Raspberry Pi camera module and at least one USB webcam.
Since the Raspberry Pi board has only one camera input, you cannot leverage multiple Pi camera boards — at least not without extensive hacks to your Pi.
In order to provide an interesting implementation of multiple camera access with the Raspberry Pi, we created a simple motion detection class that can be used to detect motion in the frame views of each camera connected to the Pi.
While basic, this motion detector demonstrated that multiple camera access is capable of being executed in real-time on the Raspberry Pi — especially with the help of our threaded PiVideoStream and VideoStream classes implemented in blog posts a few weeks ago.
If you are interested in learning more about using the Raspberry Pi for computer vision, along with other tips, tricks, and hacks related to OpenCV, be sure to sign up for the PyImageSearch Newsletter using the form at the bottom of this post.
See you next week!
Download the Source Code and FREE 17-page Resource Guide
Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL!
Fred
Amazing useful post, as always!
Keep on the good work Adrian, PyImageSearch is definitively THE blog for those who are developing their skills in cv.
All of good!
Adrian Rosebrock
Thanks Fred! 😀
Pedro Camargo
We have been able to connect 4 USB cameras to the Pi 3.
No need to use the internal camera port.
We asked the USB manufacturer to modify the bandwidth requirements for the cameras.
Steven Hill
Hello Pedro,
Are you able to capture the 4 USB cameras simultaneously and then read out each camera?
I want to capture a frame a second of each camera simultaneously and then read-out each camera into their own file directory
akshaya
Can you please share the details
@turbinetamer
Thanks for the improved line numbers for the python source !!!
They are much more readable in my Firefox browser.
Ryan
Awesome write up, is there a way to include IP camera’s rather than USB ones?
Adrian Rosebrock
Indeed, it is. I’ll try to cover IP cameras in a future blog post.
Khaled
Great idea!
Peter Lunk
I’m also interested in your coming IP camera blog 🙂
I am personally interested in, and think it would be of general use to everyone, if you could show this with a hardware IP camera as well as an IP camera app running on an Android and/or iPhone device, like ‘IP Webcam’ / ‘IP Webcam Pro’, with which anyone can turn an Android smartphone into an IP camera.
Can’t wait to see how to properly do this…
Greetings,
Peter Lunk
Adrian Rosebrock
Hi Peter — I’ve had the IP camera blog post on my idea list for awhile. I can’t say exactly when I will cover it, but I will do my best.
Peter Lunk
That would be great man ! thanks for your reply 😉
Adrian Rosebrock
I’ll try to move it up in my queue for sure.
Peter Lunk
I figured it out 😉
Here’s my code, feel free to use as example…
Open Source rocks 😉
Adrian Rosebrock
Thank you for sharing your code Peter!
Peter Lunk
And the all important Link:
https://pastebin.com/3zf2DU5d
Joe
Wow! I really need this 🙂 thanks for sharing.
More power!
Adrian Rosebrock
Thanks Joe!
Ahmed Atef
Hi,
Appreciate your great work , thank you.
I notice that you are using logitech webcam,
Can i use microsoft lifecam cinema with raspberry pi?
Adrian Rosebrock
I have never used any of the Microsoft LifeCams before, but you should consult this list of USB compatible webcams for the Pi.
Phil
Hi Adrian, thanks for another great tutorial.
Up until now, I’ve been running OpenCV on my Raspberry Pi, logged into the GUI. I just tried booting the Pi to the console instead and on running any OpenCV project which uses ‘imread’, I get a GTK error – ‘gtk warning cannot open display’. I’ve read that this is something to do with the X11 server.
Have you tried OpenCV when booted into the console instead of the GUI? Basically I would like to be able to start my project as soon as the Pi boots up and figured it would be a waste of resources having the GUI running in the background.
Adrian Rosebrock
Indeed, anytime you want to use the cv2.imshow method, you’ll need to have a window server running (such as X11). If you want to start a Python script at boot and have it run in the background, just comment out all of your cv2.imshow and cv2.waitKey calls and your program will run just fine.
Jon Croucher
Xming forwards Xwindow over a remote connection without having to run Xwindow on the host. It works with Putty. But it is very slow.
Running FPS_test for the RPi camera over Putty, I get, 27.7 FPS / Threaded 215.7 FPS.
Enabling a display -d 1, and using Xming, I get, 2.7 FPS / Threaded 95.0 FPS.
Running in xwindow GUI directly on the RPi, I get, 24.2 FPS / Threaded 216.1 FPS.
Enabling display -d 1, I get, 13.3 FPS / Threaded 56.0 FPS.
Really a great testament to Adrian’s VideoStream class or PiVideoStream in this case.
This was on a RPi3 and V2 camera.
Regards
Jon
Girish
Hi Adrian
Great work, Thanks a lot for sharing the code, I implemented this code and tested it out.
I can see it working but I see an error message on the command window it just says “select time out”.
Can we ignore this or is there a way to fix this ?
BTW did you see this error in your implementation ?
Regards
Girish
Adrian Rosebrock
I have not seen that error message before. It seems to be an I/O related error, perhaps with Python accessing the Raspberry Pi camera?
Girish
Hi Adrian,
Thanks for your response. I ran the code you had published for a long time and it worked fine. Though I am seeing the message “Select time out”, it does not seem to be impacting the function (maybe dropping frames, but not sure); it is still working fine with two Logitech C170 webcams. I do not have Pi cameras. (I am not sure why you are not seeing this message.)
Once again, great work, fantastic post, thanks a lot for sharing your code, I will run the code with more time integrate my own image processing routines and see how it goes
Regards
Girish
Adrian Rosebrock
If you are using two Logitech cameras, then make sure you have changed the code to:
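That is, initializing both streams from USB device indexes rather than the Pi camera, along these lines:

webcam1 = VideoStream(src=0).start()
webcam2 = VideoStream(src=1).start()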
Otherwise, you’ll end up trying to access a Raspberry Pi camera module that isn’t setup on your system. In fact, that’s likely where the “select time out” error is coming from.
Girish
HI Adrian,
I had done it exactly the way you did, the first time itself.
Still, I see the message “Select Timeout”; my wild guess is it may be due to the OS or the USB/webcam drivers running on my RPi. Can you share which model of RPi and which Linux image you are using, so that I can replicate the exact setup you have and give it a try?
Another difference I can think of is, I am using C170 Logitech camera not sure this will make a difference or not
David Diaz
Hi Adrian,
I experienced the same “select timeout” error, using two Logitech C170 webcams.
I tried different resolutions, no success.
Then after several days searching in the web I found someone that fixed this error, setting the width and height directly, like this:
cap = cv2.VideoCapture(0)
# set width
cap.set(3,1280)
# set height
cap.set(4,1024)
This worked for me.
Just wanted to share this info with you and everybody enjoying PyImageSearch.
Please keep developing the amazing field of CV!
Bolkar
Thanks for the very nice post.
Would it be possible to use ip cameras? I have already deployed couple of them on a regular dvr. It would be very interesting to apply this in an ip setup.
Adrian Rosebrock
Absolutely. I’ll try to do post on IP cameras with OpenCV in the future.
Melrick Nicolas
Amazing! that would be helpful in the near future
Adrian Rosebrock
I should have another example of using multiple cameras on the Pi again next week 🙂 Stay tuned.
amancio
Hey Adrian,
your multiple-cameras-rpi script does not display the images on my monitor; however, a separate program that just captures the image and immediately displays it using cv2.imshow does work.
I looked around on the net and I have seen instances in which people complained that cv2.imshow does not update the window properly…
Got any ideas as to why your script does not work?
Thanks
Adrian Rosebrock
As your other comment mentioned, you need to use the cv2.waitKey method, which the Python script does include on Line 82.
Dmitrii
Hi, Adrian! Such a great story!
Could you tell us about the monitor you’ve used?
Adrian Rosebrock
I use this display from AdaFruit.
Wyn
I’d love to see this combined with storing the video or outputting to a web interface to get a full featured home surveillance system out of it.
Adrian Rosebrock
Absolutely. I’ll be doing some tutorials related to video streaming and saving “interesting” clips of a video soon. Keep an eye on the PyImageSearch blog! 🙂
Kaibofan
great!
salim
Great work, Thanks
can i use smart phone camera??
Adrian Rosebrock
Personally, I have never tried tying a smartphone camera to OpenCV. I’m not sure if this is possible for some devices without jailbreaking it.
sarath
My Pi camera video quality is very poor. How could I improve it?
Adrian Rosebrock
Can you elaborate on what you mean by “video quality is very poor”? In what way?
Krishna
Hi Adrian,
Thanks for the tutorial, Is it possible to achieve stereoscopic vision with Rpi Camera and a USB webcam?
Adrian Rosebrock
I personally haven’t tried with the Raspberry Pi, but in general, the same principles should apply. However, if you intend on doing stereo vision, you’ll need two USB webcams, not just one.
Leo
Why is not possible to use a RPi camera and a USB one? What is the maximum resolution?
Adrian Rosebrock
You can, but I wouldn’t recommend it. For stereo vision applications (ideally) both cameras should have the same sensors.
vorney thomas
Stereo vision needs two of the same camera, since they have the same intrinsic and extrinsic parameters; you need these values to calculate the scene depth.
Arnold Adikrishna
Hi Adrian. Great tutorial. Great work. And thanks for sharing with us. I have one quick question.
I ran the program and everything went smoothly. Nonetheless, when I press the ‘q’ button the program terminates, but one of my webcams does not stop working, and the terminal does not show the ‘>>>’ anymore. It seems to be stuck in an infinite loop.
Any idea what is going wrong?
I am using two usb-webcams (and I have already modified your code so that it can work well with two usb-webcams), and my OS is windows 10.
Looking forward to hearing from you. Thanks.
-Arnold
Adrian Rosebrock
Just to clarify, are you executing the code via a Python shell/IDLE rather than the terminal? The code is meant to be executed via command line (not IDLE), so that could be the problem.
Arnold Adikrishna
Yes, you are right. Once I executed the code from command prompt, everything was fine. Thanks for your response 🙂
Adrian Rosebrock
No problem, I’m happy it worked out 🙂
Mike Grainger
Adrian:
Please continue with these blogs I am finding them very educational. My question, you make a reference to a ‘multi-object tracking’ tutorial coming in the future. I would like to add a + to that article in hopes that it will land higher on your priority list. To that end, do you have an idea when you will be releasing such an article?
Regards,
Mike
Adrian Rosebrock
Hey Mike — thanks for suggesting multi-object tracking. I will do a tutorial on it, but to be honest, I’m not sure exactly when that will be. I’ll be sure to keep you in the loop! Comments like these help me prioritize posts, so thanks for that 🙂
Glenn
Hey Adrian,
When I run this script my pi reboots. I was able to get both camera to turn on for a split second but then the pi shuts down pretty quickly. Any idea what could be going on?
Adrian Rosebrock
That’s quite strange, I’m not sure what the problem is. It seems like the cameras might be drawing too much power and the Pi is shutting down? You might want to post on the official Raspberry Pi forums and see if they have any suggestions.
Fad
Hi Adrian
What algorithms are used to detect motion?
regards
Fad
Adrian Rosebrock
Hey Fad — please see this blog post on basic motion detection and this one on a better motion detection algorithm for more details on how to implement motion detection.
William
Hi,
My context isnt exactly the same since I use the C++ interface of OpenCV, and I am using Linux on a PC (but I plan to go on Raspberry Pi after). I have a problem using multiple cameras though and I hoped that you would have some clues on the cause for that.
The problem is that I cannot open 2 USB cameras at the same time without having an error from video4linux (the Linux’s API for webcams, which OpenCV relies on, or so I understand).
Do you have any clues ?
Regards
Adrian Rosebrock
Hey William, thanks for the comment. I’ve never tried to use the C++ interface to access multiple cameras before, so I’m unfortunately not sure what the error is. However, it seems like the same logic should apply. You should be able to create two separate pointers, where each points to the different USB camera src.
James
I’m having a problem installing cv2. I have openCV installed, but cv2 still cannot be found on the Pi. Any suggestions?
Adrian Rosebrock
Please refer to the “Troubleshooting” section of this post for information on debugging your OpenCV install.
tita
Wow great tutorial..
how about 3 usb cameras???
Adrian Rosebrock
Sure, absolutely. You would just need to create a third webcam variable and read from it:
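For example, something along these lines (assuming the third camera enumerates as index 2):

# a third USB camera
webcam3 = VideoStream(src=2).start()

# then read from it inside the main loop just like the others
frame3 = webcam3.read()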
vorney thomas
Dear Adrian Rosebrock
I plan to work on visual SLAM using a Raspberry Pi board connected to two Logitech C920 USB cameras, but I don’t know how to grab the two images and stream frames at the same time. Can you give me some practical advice?
look forward your response!
Adrian Rosebrock
Using the exact code in this blog post, you read frames from two different video sensors at the same time. So I’m not sure what you’re asking?
Arman
i want to see that camera’s view from another PC or desktop .. is that possible ??
Adrian Rosebrock
You would normally stream the output from the video stream to a second system. I haven’t created a tutorial on doing this, but it’s certainly something I will consider for the future!
Carlos
Hey Adrian
First of all, thanks for sharing this in such a detailed way, much appreciated!
I would like to activate GPIO pins when each camera senses motion, like Camera 0 –> GPIO 22 and Camera 1 GPIO 23.
How can I identify this?
Thanks a lot!!
Adrian Rosebrock
I would suggest using this blog post as a starting point. You’ll need to combine GPIO code with OpenCV code, which may seem tricky, but once you see my example, it’s actually pretty straightforward.
erik b.
Adrian, would you be so kind as to point me in the direction of using just ONE camera (the PiCam (IR)) and being able to save the output motion capture mpeg (OR have the ability to save the output motion capture as PNG files) to a NAS on the same network of the raspberry pi?
i just need the back end software thats processed on the Pi that does what i just mentioned. i am a python novice, but i am pretty sure that i can follow how things are being processed (like you have in this blog post..which is very awesome..almost exactly what i am trying to achieve..)
also, how hard is it to change the motion box? instead of the motion sensor box being a solid red line..how could you change that into a box that looks like this one pictured (link: https://docs.unrealengine.com/latest/images/Engine/UMG/UserGuide/Styling/BorderExample.jpg) – without the arrows and the filled in box in the middle.
OR something like this (link: http://www.codeproject.com/KB/audio-video/Motion_Detection/3.jpg)
the second one being preferred method of highlighting the motion in the field of view.
i can build out the frontend webpage to view either the mpeg captures and/or PNG captures stored on the NAS with no problems.
thank you very much in advance..i am building several of these cameras..and the software you have shared is the best that i have found so far!
Adrian Rosebrock
If you’re trying to save video clips that contain motion (or any other “key event”), I would recommend reading this tutorial where I explain how to do exactly that.
As for your second question, changing the motion box becomes a “drawing” problem at that point. It’s not exactly hard, but it’s not exactly easy either. You’ll need to use built in OpenCV drawing functions to create arrows, rectangles, etc. It’s a bit of a pain in the ass, but certainly possible.
If you want to draw just the motion field, I would get rid of the bounding box and just call cv2.drawContours on the region instead.
EDMUND
Can I join four cameras and a sound sensor to count vehicles and detect ambulances, all on one Raspberry Pi board?
Eddie
Thank you for a great tutorial for Raspberry pi.
I am currently dealing with multiple cameras on the latest version of the Raspberry Pi 3 for a home surveillance system.
Does anyone know whether it is possible to use 3 or more cameras on a Raspberry Pi with a recording function?
Adrian Rosebrock
I personally have never tried with more than 2 cameras on the Pi. Your worry at that point should become power draw. 3 cameras is a lot for the Pi to power. Consider using a secondary power source. The Pi also might not be fast enough to process all of those frames.
merral
Thanks man, it’s a really great project, but I get this error, can you help me please?
in resize (h, w) = image.shape[:2] AttributeError: ‘NoneType’ object has no attribute ‘shape’
Adrian Rosebrock
Anytime you see an error related to an image or frame being NoneType, it’s because either (1) the image was not read properly from disk or (2) the frame was not read properly from the video stream. In this case it’s the latter. Double check that you can access the video streams from your system. I’ll try to do a more detailed blog post on this in the future.
Jeff Cicolani
I’m having the same problem. I am able to read each camera individually, but as soon as I try to use 2 I get the error.
I’m running this on Windows 10 PC with 2 cameras connected through a USB3 hub.
Adrian Rosebrock
Hey Jeff — unfortunately, I’m not entirely sure what the error is in this case. If you can access them both individually then your system can certainly open pointers to the cameras. Can you try running both individual cameras at the same time from separate Python scripts? I’d be curious if the USB hub doesn’t have enough power to run both cameras.
Jeff Cicolani
It looks like it may be the hub. When I try to run them in separate scripts I get the same error. I’m going to try a couple other solutions, such as moving the experiment to a laptop where each port is individually powered. I’ll post my findings.
Thanks
Adrian Rosebrock
Yep, definitely sounds like a power issue. Let me know what you find! If you can get a USB hub that directly plugs into the wall this should resolve the power issue.
Jeff Cicolani
It turned out to be the power issue. I was able to capture both camera successfully when they were powered on separate USB ports on the laptop.
Adrian Rosebrock
Thanks for sharing Jeff. Congrats on resolving the problem.
Khaled
That’s very interesting a couple of questions here..
1) What happens if you also save the video stream from 2, 3 or 4 cameras? Would that work, or will the performance (framerate, frame drops, etc.) drop significantly?
2) What is the limit here? For example does using 4 USB cameras + Pi Camera work? Or will that fry the pi :)!?
3) Is it possible to save the video stream from multiple cameras + show the cameras output live?
Adrian Rosebrock
I’ve never tried with more than 2 cameras. The issue would become power draw. The Pi by itself likely wouldn’t have enough power to run the USB cameras. You would likely need a USB hub that draws its own power. In either case, you would notice a performance drop as the number of cameras increases.
Joaquin Taboh
Hi
Thanks for sharing.
I want to know more about the camera’s synchronization. Can you tell if they are both shooting at the same time? Or which is the time difference? What was the FPS on your project, using two cameras?
Thanks in advance!
Adrian Rosebrock
I actually don’t have the FPS recorded when both cameras were running otherwise I would be happy to share. As for synchronization, there is no guarantee that the frames grabbed for each camera were captured at exactly the same time down to the micro-second. That is outside of the capabilities of this project.
Jimmy
Hi Adrian,
I have a question on line:
for (stream, motion) in zip((webcam, picam), (camMotion, piMotion)):
If i just use only 1 camera how would i disable the other ? I tried to modify into :
for (stream, motion) in (webcam, camMotion) but won’t work. Please help me, thanks !
Adrian Rosebrock
If you only want one camera then you shouldn’t be using this tutorial. Instead, use this tutorial on accessing a single camera.
MarkkuS
Hey Adrian,
and first, thank you so much for this tutorial. I’m such a newbie with Pi things, and that’s why I thought to ask you if you can help me. I’ve planned to use 2 x Pi 3 boards, each with a camera & IR sensor installed. Do you think it’s possible to use your tutorial in some way with a third Pi 3 board acting as an NVR with an external USB disk for these 2 Pi boards? Or is there some reasonable way to build a surveillance system with Pi 3 boards with camera & IR sensors?
Thank you in advance!
Adrian Rosebrock
Hmm, I’m not sure I understand your exact question. Is your goal to have each of these Pis deployed somewhere and then networked together? The point of this blog post is to connect multiple cameras to a single Pi, not have multiple Pis each using one camera.
Moon ki Park
hi Adrian, i success your tutorial haha~
during try tutorial, i think you’re genius ~
anyway, if i buy pipipi book i can do more detail image processing?
i want to make Security image processing device(Security IoT)
thank you ~ sorry about my bad English(i’m South korean)
Adrian Rosebrock
I’m happy you find this tutorial useful! And yes, Practical Python and OpenCV covers the basics of image processing. For a deeper dive into the subject I would recommend the PyImageSearch Gurus course.
ZackSnyder
Hi Adrian,
Thank you for your tutorials they are great, i learned a lot, keep up the good work.
I have a question: is it possible to save the videos from the cameras to some external storage (SD or HDD) and name the videos with the current date-stamp?
If yes, how do I implement it in “multiple-cameras-with-the-raspberry-pi-and-opencv”? Thank you.
PS: I am new to python and RPi
Adrian Rosebrock
I demonstrate how to save videos captured from cameras to disk in this blog post. Simply specify the file path to be one of your external storage devices. The filename can be created by using a number of Python libraries. I would use datetime.
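A minimal sketch of that idea (the mount point below is just an example; use wherever your external drive is actually mounted):

import datetime

# build a timestamped filename on the external drive
ts = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
outputPath = "/media/usb/{}.avi".format(ts)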
ZackSnyder
Hi Adrian,
Thanks for the quick reply. This is exactly what i was looking for. I´ll get back to you, how it turns out, once i implement it.
Thank you.
Adrian Rosebrock
Glad to hear it Zack! 🙂
Jon Croucher
Hi Adrian
Started on CV 4 weeks ago for use in a fixed wing UAV for Search & Rescue.
My first tinkerings were on a PC, Win 10 and C++.
A week ago I received a RPi3 and camera, and it all seems to be python!
So your posts have been most informative and helpful, thank you.
Just a few questions if you can find the time.
Can the resolution set in the VideoStream class of the imutils module be changed in a program?
How do I “place the BasicMotionDetector class inside the pyimagesearch module for organizational purposes”?
Occasionally when running a python program accessing a web Cam I get an error message VIDIOC_DQBUF: No such device , it even happened on the threading FPS test after about 5 runs, then when I look at /dev ,video0 is gone and video1 is there. Do you know anything about this?
Keep up the great work
Regards
Jon
Jon Croucher
Hi Adrian
I was constantly getting the “VIDIOC_DQBUF: No such device” fault when running the Web Cam FPS test with the display activated using Putty and Xming. The fault would occur about half way through test and the cam would become /dev/video1. I changed the device number in the program to 1, run the FPStest with -d 1 and about half way through I again got the “VIDIOC_DQBUF: No such device” fault, in /dev, video1 was gone and video0 was back! Unpluging/pluging in the cam would also give me back /dev/video0.
I have now used a powered USB hub to connect the WebCam and been running the FPS WebCam test for over an hour with no problems.
Just thought I’d let you know.
PS Web cam FPS test results are:
Over Putty 8.4 FPS / Threaded 55.7 FPS
Over Putty using Xming and display enabled -d 1, 2.3 FPS / Threaded 38.5 FPS
Love your threading program.
Regards
Jon
Adrian Rosebrock
Thank you for sharing Jon!
Phy
Hello and thank you. I managed to set up 4 cameras; however, I would like to put them in the same window and not have 4 separate windows. How can I do that? Thank you.
Adrian Rosebrock
If they are all the same height then you could use np.hstack:
output = np.hstack([frame1, frame2, frame3, frame4])
Phy
Thx u. I had
camera = cv2.putText(frame, ts, (10, frame.shape[0] - 10),
cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)
cv2.imshow("images", np.hstack([frame, camera]))
But now I only get 2 frames, or I have 4 separate frames.
Pavel
Hi, great guide, but I have an issue here:
I tried to make it work with 2 USB cams but without all those motion stuff. Just wanted 2 streams in 2 windows. So I tried to remove all motion related stuff. Didn’t work. Moved from one error to the other.
Any suggestions or some lines of code on how to realise this?
Adrian Rosebrock
Hey Pavel — it’s hard to say what your error is without knowing the error message. My main suggestion here would be to delete the motion detector classes using an IDE like PyCharm so it will “highlight” any code that will throw an error by deleting these classes.
Dipali
Thank u so much sir!
Ur tutorials helped a lot during my final year project completion and i learned a lot !!!
Great work….
Adrian Rosebrock
Nice job getting your Raspberry Pi + multiple cameras up and running!
Dipali
How to do live streaming of both videos on web browser???
Adrian Rosebrock
I don’t have any tutorials on how to stream videos to a web browser. I would suggest reading up on “gstreamer” and the Raspberry Pi.
Brian
Is it possible to use Raspberry Pi + multiple camera approach for car surveillance? Is it possible to capture motion and either send live video or pictures to your phone?
Adrian Rosebrock
Can you elaborate more on what you mean by “car surveillance”? And yes, you can capture images/video and send them to your phone. I discuss this more inside the PyImageSearch Gurus course.
Hugo Elias
Thanks for this amazing tutorial. I got it working in no time.
One question: When the RPi detects movement, I want to save a few seconds of video to a .h264 video file. I can’t seem to find any examples of this online. Is there some keyword I should be looking for, or do you have a tutorial on this?
Many thanks
Adrian Rosebrock
Hey Hugo — you should take a look at this blog post where I demonstrate how to write video clips to file based on a given action taking place.
adnan dubois
Hi Adrian,
So first of all thks for your amazing works.
One problem: I am using a USB cam, which is not the problem, but I get this message when I launch the multi-cam script:
$ python multi_cam_motion.py
Traceback (most recent call last):
File “multi_cam_motion.py”, line 3, in
from pyimagesearch.basicmotiondetector import BasicMotionDetector
ImportError: No module named pyimagesearch.basicmotiondetector
can U help PLS?
Adrian Rosebrock
Please make sure you use the “Downloads” section of this tutorial to download the code + proper project structure. It’s likely that your project structure does not match mine.
Daniella Solomon
Hi
I tried to run it on IP cameras and it works.
There is just one problem: when I comment out the resize line it doesn’t work.
When there is motion, the whole screen gets covered by the red box.
Any idea why?
Adrian Rosebrock
Hi Daniella — resizing the frame shouldn’t matter. Make sure you are resizing both frames.
Armando
Is it possible to play/record videos at higher quality like 30 fps?
If so what would be the steps?
Adrian Rosebrock
It depends on what FPS your camera sensors are capable of recording/retrieving frames at. I don’t have any specific tutorials on adjusting the actual physical FPS of the camera, but I will try to cover this in the future. The problem is that the function calls required to do this don’t work on every camera due to driver issues. But to start, you should read up on the documentation for your camera to see if it can record at a higher FPS.
Vladislav
Hi, Adrian!
How can I do motion detection for many objects with two cameras?
Thx
Adrian Rosebrock
You can use this code for multi-object detection. Line 48 loops over all contour regions that have a sufficiently large area.
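In other words, instead of collapsing all the returned contours into one enclosing box, you could draw a rectangle per contour, along the lines of:

# draw one bounding box per detected motion region
for l in locs:
    (x, y, w, h) = cv2.boundingRect(l)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)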
Vladislav
And I have one problem…
What is this?
VIDIOC_STREAMON: No space left on device
Adrian Rosebrock
Can you check the amount of space on your system? Either your system ran out of space on the drive or there is a problem with your camera. Unfortunately, I’m not sure what the exact issue is.
Vladislav
Thanks very much, I searched the net; I think I’ll find the answer in the future.
And about detecting more than one object: what do I need to change in the code, can you write it please?
Adrian Rosebrock
I’m happy to help point you in the right direction but I cannot write the code for you.
Vladislav
And I’m sorry that I’m spamming, but one more question: how can I do 3D people detection with two cameras?
Anders Yuran
Just as an alternative you can use the PI version of the xeoma surveillance platform and use all its features.
http://www.felenasoft.com/xeoma/en/
Rajvi Tivedi
Hello,
I attached 3 cameras to a Raspberry Pi 3 and take 30-minute videos at 1 FPS from each camera simultaneously, saving them. Here, I use 2 cameras directly connected to my Raspberry Pi and one camera connected via an externally powered hub to the Raspberry Pi 3. The only problem I face is that during the 30 minutes of video, 15-20 frames from the 3rd camera (attached via the external hub) have some blacked-out parts.
Any help solving this problem would be appreciated.
Thank You.
Adrian Rosebrock
Would it be possible to attach all three cameras to the external USB hub? Or at least swapping out each camera on the USB hub? That would at least determine if there is an issue with the individual camera.
Dylan
If I get a 28-port USB hub, can I use 28 cameras on one Raspberry Pi 3?
Adrian Rosebrock
No, absolutely not. The Pi is not fast enough to handle 28 cameras. Even two cameras is starting to push the limit of the Pi.
Richard Reina
Anyone ever get this error or know how to fix it?
$ python multi_cam_motion.py
[INFO] starting cameras…
Traceback (most recent call last):
File “multi_cam_motion.py”, line 39, in
locs = motion.update(gray)
AttributeError: BasicMotionDetector instance has no attribute ‘update’
Adrian Rosebrock
Can you share a bit more details on your setup? Which version of Python, OpenCV, etc.? Additionally, did you use the “Downloads” section of the tutorial to download the code?
AcrimoniousMirth
Hi,
I need to use two USB webcams in an auto-script running from startup. They don’t both need to be on at the same time but they need to take photos either simultaneously or instantly after each other.
I also need to save the images in %iimg%i style to a USB drive.
Would you be willing to tell me how to initialise them and capture images from each? I’m pretty new to coding so your help would be appreciated.
Thank you very much 🙂
Adrian Rosebrock
Hey there — this blog post demonstrates how to initialize both cameras. Capturing at exactly the same time is a bit more challenging, especially if you are a bit new to coding. If you can tolerate a tiny fraction of a second difference between cameras this code will work just fine.
Writing images to disk can be accomplished via cv2.imwrite. If you’re new to OpenCV and Python I would recommend working through Practical Python and OpenCV, which will help you get up to speed quickly with OpenCV + Python.
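A rough sketch of that part (the camera indexes, mount point, and naming pattern are only placeholders for whatever your setup actually uses):

# import the necessary packages
from imutils.video import VideoStream
import time
import cv2

# start both USB cameras and let them warm up
webcam1 = VideoStream(src=0).start()
webcam2 = VideoStream(src=1).start()
time.sleep(2.0)

# grab one frame from each camera and write it to the USB drive
for (i, stream) in enumerate((webcam1, webcam2), start=1):
    cv2.imwrite("/media/usb/img{}.png".format(i), stream.read())

# cleanup
webcam1.stop()
webcam2.stop()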
Richard Reina
Hi Adrian,
Do you know if the Logitech C922 will also work with your VideoStream class?
Adrian Rosebrock
If OpenCV’s cv2.VideoCapture function can access your Logitech C922, then yes, the VideoStream class will work with it as well. I use a Logitech C920 and have no problems with it.
Richard Reina
Hi Adrian,
I am getting this error when trying it (the Logitech C922) out with videostream_demo.py.
python videostream_demo.py
Traceback (most recent call last):
File “videostream_demo.py”, line 28, in
frame = imutils.resize(frame, width=400)
File “/home/pi/.virtualenvs/cv/local/lib/python2.7/site-packages/imutils/convenience.py”, line 69, in resize
(h, w) = image.shape[:2]
Richard Reina
Oops, forgot the last line of the error message.
AttributeError: ‘NoneType’ object has no attribute ‘shape’
Adrian Rosebrock
See my reply to “merral” on October 12, 2016. The gist is that you’ll want to double-check that OpenCV access BOTH of your cameras. It sounds like one (or more) cannot. Check this blog post for information on how to diagnose and resolve NoneType errors with OpenCV.
Richard Reina
Okay, took your advice and have thoroughly reviewed https://pyimagesearch.com/2016/12/26/opencv-resolving-nonetype-errors/ My problem is with cv2.videocapture and it’s happening with both c922 and a c920.
I can access and take a picture with the camera using fswebcam. However, if I run the code below I get: “Camera Not Online”. I built opencv-3.3.1 and opencv_contrib-3.3.1 per your tutorial and it works fine for my Raspberry Pi Camera Module just not for the webcam.
cap = cv2.VideoCapture(0)
while(True):
# Capture frame-by-frame
ret, frame = cap.read()
if cap.isOpened():
print(“Webcam online.”)
else :
print(“Camera Not Online”)
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
Going crazy. Any help greatly appreciated.
Adrian Rosebrock
The code works fine with your Raspberry Pi camera module? Are you using cv2.VideoCapture to access your Raspberry Pi camera module? Or are you using the “picamera” library? Something doesn’t quite add up, I’m just not sure what it is yet…
Richard Reina
HI Adrian,
I went ahead and recompilled and reinstalled opencv carefully following your instructions here: https://pyimagesearch.com/2016/04/18/install-guide-raspberry-pi-3-raspbian-jessie-opencv-3/ and now Videocapture works with both the c920 and the c922 webcam.
I however, am stuck on a new problem. I am using your dropbox functionality from pI_surveillance.py via json in multi_cam_motion.py and it works pretty well. However, I am setting up another pi to monitor the gangway of my home and I want to just use the webcam only and not the pi cam module. I am stuck trying to modify this line so to eliminate the picam pimotion arguments.
for (stream, motion) in zip((webcam, picam), (camMotion, piMotion)):
When I try:
for (stream, motion) in zip(webcam, camMotion):
I get:
for (stream, motion) in zip(webcam camMotion):
^
SyntaxError: invalid syntax
Any help would be greatly appreciated. Thank you.
Adrian Rosebrock
There is a syntax error in your script, either on the line you mentioned or above it. I’m not sure exactly where the error is in the script but you can use a Python syntax highlighter or an IDE to help you determine exactly where the error is. My guess is that you’re missing a “,” between “webcam” and “camMotion”:
for (stream, motion) in zip(webcam, camMotion):
Although you’ll likely need to remove the zip function call as well.
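In other words, with a single camera there is nothing to zip together; either drop the loop entirely or iterate over a one-element list of (stream, detector) pairs, e.g.:

for (stream, motion) in [(webcam, camMotion)]:
    frame = stream.read()
    # ...rest of the loop body unchanged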
Juan Daniel
Hello Adrian
I have a error with the code
and honestly I don’t know what the problem is
Traceback (most recent call last):
File “multi_cam_motion.py”, line 3, in
from pyimagesearch.basicmotiondetector import BasicMotionDetector
ImportError: No module named pyimagesearch.basicmotiondetector
Adrian Rosebrock
Hey Jaun — make sure you use the “Downloads” section of this blog post to download the “pyimagesearch” module which contains the implementation of the BasicMotionDetector class.
caie
Hi Doctor Adrian, I have followed through the tutorials and managed to get the scripts running. My issue however is that the PiCamera window copies the first frame from the webcam and remains still, with no updates at all; only the webcam is functioning. If I test the picam and USB cam individually, they work well. What could my issue be?
Adrian Rosebrock
That is a bit of strange behavior for sure! Is the rest of your system functional or does the entire Pi freeze? Additionally, have you made sure you are using the cv2.waitKey(1) call? If not, the frame will not update.
Miguel
Hi Adrian,
Is it possible to have one 360º camera and other single camera working simultaneously and make live streaming to the facebook of the 360º camera signal with the image from the single camera as PiP?
At the end you will have real-time video streaming with the image from the 360º camera as main image and a small square (PiP) with the image from the single usb camera.
Can we connect both cameras to the raspberry Pi?
thanks in advance
Miguel
Lord James
Hi Adrian,
What if my camera is a Microsoft or Touchmate one? Do I need to change some part of the code? Thank you for the quick response.
Adrian Rosebrock
The code in this post assumes either a USB webcam or a Raspberry Pi camera module. If you are using a different type of camera you’ll need to ensure OpenCV can access it.
Jin
Hi adrian,
Can you help me? How do I capture a picture in this program and then send it to email? Thanks a lot.
Adrian Rosebrock
The Python programming language has a built-in email module. I would suggest starting there.
Jin
Hi adrian,
How do I capture a picture in this program?
Adrian Rosebrock
Hi Jin — can you clarify what you mean by “captured picture”? Do you mean writing the image to disk? If so, you can use the “cv2.imwrite” function.
Rupesh
Hi,
How do I access multiple cameras using their IP addresses?
Thanks,
Rupesh
Adrian Rosebrock
I do not have any tutorials on using the Raspberry Pi as an IP camera. I may consider this as a future tutorial, thank you for the suggestion.
Guru
Folks,
Can you please help me to process the Videos from a 20 channel NVR which is connected with 20 CCTV Cameras using OpenCV
Thanks
Guru
David
This works great, except for when I use two cameras. Using an RPI I am getting the error…
select timeout
About every 8th frame.
On this frame (and the one after) the image is black, and then it resumes as normal.
Any ideas where that's coming from?
Adrian Rosebrock
I assume you are using a Raspberry Pi camera module? If so, it sounds like you may have a problem with the camera sensor itself.
Sang
Thank you Adrian!! I have one question about it. How would I rotate only the Pi camera in the code? I have tried a few things but the video stream remains the same.
Adrian Rosebrock
Have you tried using the cv2.rotate function? Or are you trying to rotate the stream itself prior to it being read by OpenCV?
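As a quick sketch (assuming OpenCV 3.2+ for cv2.rotate, and that picam is the Pi camera stream from this post), you would rotate only the Pi camera frame right after reading it:
# read a frame from the Pi camera stream and rotate it 180 degrees
# before any further processing; the webcam frame is left untouched
frame = picam.read()
frame = cv2.rotate(frame, cv2.ROTATE_180)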
Henrik Lauridsen
Hi Adrian,
Thank you for your great tutorials.
I am a newbie to Linux and Python and I'm struggling with what is, for you, a simple task.
The thing is that I have 2 Raspberry Pi Zero’s each with a CSI camera attached.
The Zeros are configured to stream video through Motion (IP address:8081).
I can watch the cameras in a browser and it works fine.
Now, on a third Raspberry Pi 3, I would like to watch the 2 streams so that each camera is shown in a window that uses half of the screen (1920×1080 / 2).
I have tried some simple Python code, but it is not quite what I want. I would like something like you are showing: 2 windows with only the picture, on a split screen.
Would you please be so kind and help me out.
Thank you in advance,
Henrik
Adrian Rosebrock
Hey Henrik — I understand what you are trying to build, but I'm not sure what exactly the issue is. You mentioned the code is working but is not what you want. What is the difference between the code and what you ideally want? Please be more specific.
Henrik Lauridsen
Hi Adrian,
Thank you for your reply.
I have chosen another solution for the streaming: not using Motion, but some Python code.
I have built a small Windows IoT program for the Raspberry Pi 3 and I can watch the 2 cameras, but I would like a solution built on Raspbian, not Windows. What I would like is 2 windows like you are showing in windows no. 1 and 4.
My problem is that I am a rookie in Python and Linux.
So what I am looking for is some Python code to open 2 windows, splitting the screen and showing the 2 cameras.
I hope you can understand what I mean.
Tia,
Henrik
Adrian Rosebrock
Just create two windows via “cv2.imshow”, giving each a different name:
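As a rough sketch (the stream URLs and the 960×540 window size are just placeholders for your setup, and opening an HTTP/MJPEG stream this way depends on your OpenCV/FFmpeg build):
import cv2

# open the two MJPEG streams served by Motion on the Pi Zeros
# (the IP addresses below are placeholders for your own)
cam1 = cv2.VideoCapture("http://192.168.1.10:8081")
cam2 = cv2.VideoCapture("http://192.168.1.11:8081")

while True:
    # grab a frame from each stream
    (grabbed1, frame1) = cam1.read()
    (grabbed2, frame2) = cam2.read()

    # show each stream in its own named window, resized to roughly
    # half the screen
    if grabbed1:
        cv2.imshow("Camera 1", cv2.resize(frame1, (960, 540)))
    if grabbed2:
        cv2.imshow("Camera 2", cv2.resize(frame2, (960, 540)))

    # quit on the `q` key
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cam1.release()
cam2.release()
cv2.destroyAllWindows()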
Tanvi R
Hi Adrian,
Thank you so much for the tutorials, Great work indeed!
I'm trying to build a system with two USB webcams (Logitech C920 and Pro 9000) connected to a computer. I want to run detection on the two at the same time. So will the above method work for two webcams connected to a computer, or will I require a Raspberry Pi?
Adrian Rosebrock
No, you won't need a Raspberry Pi; you can use two USB webcams. Just set the src for each one instead of usePiCamera.
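For example, a minimal sketch assuming the two webcams show up as device indexes 0 and 1:
from imutils.video import VideoStream
import time

# start a threaded video stream for each USB webcam and let the
# sensors warm up
webcam1 = VideoStream(src=0).start()
webcam2 = VideoStream(src=1).start()
time.sleep(2.0)
From there you would create a BasicMotionDetector for each stream and loop over (webcam1, webcam2) exactly as the post loops over (webcam, picam).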
R jagan
Hi sir,
How do I access more than one camera on a Raspberry Pi using IP addresses, not using motion detection?
R jagan
Can anybody please answer my question? Is it possible or not?
Adrian Rosebrock
You mean like an RTSP stream? Or something else?
Sourabh Mane
Hi Adrian,
Nice tutorial!! I have a question: what if one camera out of the 2 is disconnected or not able to start? Will the script continue its execution?
Adrian Rosebrock
I haven’t added logic to handle when a camera is disconnected. You would need to add such logic to the script.
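As a minimal, hedged starting point, you could at least check whether a stream actually returned a frame before processing it:
# inside the per-stream loop: skip any stream that failed to
# return a frame so one dead camera does not crash the script
frame = stream.read()
if frame is None:
    print("[WARNING] no frame read from this stream, skipping")
    continue
Actually reconnecting to a dropped camera would require more logic (e.g., re-creating the VideoStream object).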
Elena
Hello Adrian,
I want to connect 3 cameras in different parts of the house and do identification with YOLOv3. I was wondering if you could tell me how I can use YOLOv3 instead of the simple motion detector and detect objects simultaneously?
Could I also track the objects?
Adrian Rosebrock
Hey Elena — I’ll be covering that exact topic in my upcoming Computer Vision + Raspberry Pi book! Make sure you join the PyImageSearch email newsletter to be notified when the book launches later this year.
Zia
Can you use multiple camera boards with the RPi for two cameras, and cover it in the next blog?
Adrian Rosebrock
It's not that easy, unfortunately. It requires special hardware to use two actual Raspberry Pi camera modules on a single Pi. I'll consider it for a future tutorial but I can't guarantee if/when I would cover it.
Mark
Adrian,
I am trying your multi-threading approach to get input from two webcams. I must say it works brilliantly, actually too well: using XX.read() I am getting 1 minute of video for 10 seconds of recording. Looking at videostream.py on GitHub, it looks like I should be using XX.update to get the next new frame rather than .read, which I guess is there for testing FPS. At present the webcams default to 640×480. Is there any way to change the resolution using XX.set(cv2.CAP_PROP_FRAME_WIDTH, imageWidth) and the equivalent for height without modifying imutils? Apologies for asking, but I am just starting out with Python/OpenCV and Linux and am not sure I am yet up to making the mods to imutils.
Adrian Rosebrock
Unfortunately you will need to modify imutils. I've considered updating imutils to allow a cv2.VideoCapture to be passed in, presuming it's already been initialized with whatever parameters. I'm not sure if/when I will do that though.
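In the meantime, a possible workaround (not the method used in this post) is to use cv2.VideoCapture directly for the webcams, where those properties can be set:
import cv2

# open the webcam directly and request a higher resolution; whether
# the camera honors the request depends on the camera and driver
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

(grabbed, frame) = cap.read()
You do lose the threaded I/O that VideoStream gives you, though, unless you wrap the capture object in your own thread.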
Trixondemapendan
Hi Adrian, is it possible to use this for object detection projects, where you would use both the Pi camera and a USB camera to detect objects?
Adrian Rosebrock
Absolutely. What type of object detectors are you trying to run?
Michelle
Hi Adrian. Is there a way to optimize the code for facial detection using 3 cameras without having a lot of slowdown in frame capture? For example, if one camera is already detecting a face, the other 2 cameras would stop running the detector code and only start detecting again when the first camera stops seeing the face. Does that help in reducing the processing load on the RPi?
Adrian Rosebrock
Technically yes, you could do that using threads/processes and shared variables. The problem is that you would miss faces in the other 2 cameras if the 3rd camera told the other 2 to stop.
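Purely as an illustration of the shared-variable idea (process_camera and detect_faces below are hypothetical helpers, not part of this post's code), something along these lines could work:
import threading

# shared state: the name of the camera (if any) that currently
# sees a face, protected by a lock
lock = threading.Lock()
active_camera = [None]

def process_camera(name, stream, detect_faces):
    while True:
        frame = stream.read()

        with lock:
            owner = active_camera[0]

        # skip the expensive face detector if another camera
        # already "owns" a detection
        if owner is not None and owner != name:
            continue

        rects = detect_faces(frame)
        with lock:
            active_camera[0] = name if len(rects) > 0 else None
You would then start one thread per camera via threading.Thread(target=process_camera, args=(...)). Keep in mind the caveat above: while one camera owns the detection, faces in the other views are simply ignored.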
I’ll be covering how to apply Computer Vision on the Raspberry Pi, including optimizations, in my upcoming Computer Vision + Raspberry Pi book. Be sure to be on the lookout for it!
charles
Hi, do you have an example of that with the use of Flask?
Adrian Rosebrock
Using Flask to do what, specifically?
Gil
Hey Adrian, I'm a big fan of the work you have been doing.
However, I have been having a hell of a time using your optimized “motion” sensor with a single camera. I know you have pointed a previous commenter to another lesson on accessing a single camera, but for me it doesn't mesh well with this lesson, specifically this section where it cycles through the stream and some motion script:
for (stream, motion) in zip(webcam1, piMotion):
Thanks for the help if you see this.
Adrian Rosebrock
Hey Gil — could you be a bit more descriptive regarding the problem? I can appreciate the frustration, but I can't offer much guidance without knowing specifically what you're attempting to accomplish or what your current errors are. If you can provide more details I'd be happy to point you in the right direction.
Gil
Let me try to be as succinct as possible:
I can’t use this code while using only 1 camera.
When I attempt to convert the code in this lesson for 1 camera, I get zip-related errors such as:
TypeError: zip argument #1 must support iteration
For context, here are some of my sophomoric attempts at getting your camMotion/piMotion for loop to work for a single camera:
for (stream, motion) in zip((webcam1), (camMotion, piMotion)):
for (stream, motion) in zip(webcam1, camMotion):
for (stream, motion) in webcam1:
Thanks for the help, I know you’re a busy guy with your new project.
Arya
Can anyone say whether a lower-cost webcam like the C310 will deliver respectable performance on this project, or is investing in one a waste of money?
Anurag
Please help me with this. How will the for loop parameters change if I use a single web camera?
Adrian Rosebrock
If you only need a single camera then you don’t need the “for” loop. Just access the “VideoStream” object and loop over the frames. See this post for an example.
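A bare-bones sketch of the single-camera version (assuming a USB webcam on src=0) would look something like this:
from imutils.video import VideoStream
import imutils
import time
import cv2

# start the single threaded video stream and warm up the sensor
vs = VideoStream(src=0).start()
time.sleep(2.0)

while True:
    # grab the current frame; this is where you would run the
    # motion detector on `frame` instead of looping over streams
    frame = vs.read()
    frame = imutils.resize(frame, width=400)
    cv2.imshow("Frame", frame)

    # quit on the `q` key
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cv2.destroyAllWindows()
vs.stop()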
Naseeb Gill
Hi Adrian,
When I run “multi_cam_motion.py” in the terminal, it shows the error “No module named imutils”. But I already installed imutils, and when I type import imutils in Python 3 (IDLE), it shows no error. What should I do?
Adrian Rosebrock
Make sure you are in the Python virtual environment via the “workon” command before you execute the Python script.
Phan Nhật Huy
Hi Adrian,
How can I create a single video stream from the 2 video streams of the 2 cameras? The approach I chose is to combine half of each frame into one output frame and treat that as the video stream. Is there any way to do this?
Adrian Rosebrock
I’m not sure I fully understand your question but it sounds like you may want to build a panorama via image stitching?
ignacio
Thank you so much!
Adrian Rosebrock
You are welcome!
Glalie
hello adrian,
How can I create something like an “add camera” or “remove camera” feature?
Adrian Rosebrock
You mean like a GUI interface? I don’t do GUIs here, I mainly just teach Computer Vision and Deep learning.
Salman Sajid
Hi Adrian
I have one question: how do we fetch multiple camera streams in parallel?
Bilal Khamri
Thanks for sharing!! Is it possible to detect multiple objects in both cameras?
Adrian Rosebrock
Yes, just apply object detection to each individual video feed.
zhazha
Hi bro! Can you make a tutorial on how to connect the Raspberry Pi to a wireless camera (IP cam)? Thanks
Adrian Rosebrock
You mean something like this?