Hi Adrian,
I’m enjoying your blog and I especially liked last week’s post about image classification with the Intel Movidius NCS.
I’m still considering purchasing an Intel Movidius NCS for a personal project.
My project involves object detection with the Raspberry Pi where I’m using my own custom Caffe model. The benchmark scripts you supplied for applying object detection on the Pi’s CPU were too slow and I need faster speeds.
Would the NCS be a good choice for my project and help me achieve a higher FPS?
Great question, Danielle. Thank you for asking.
The short answer is yes, you can use the Movidius NCS for object detection with your own custom Caffe model. You’ll even achieve high frame rates if you’re processing live or recorded video.
…but there’s a catch.
I told Danielle that she’ll need the full-blown Movidius SDK installed on her (Ubuntu 16.04) machine. I also mentioned that generating graph files from Caffe models isn’t always straightforward.
Inside today’s post you will learn how to:
- Install the Movidius SDK on your machine
- Generate an object detection graph file using the SDK
- Write a real-time object detection script for the Raspberry Pi + NCS
After going through the post you’ll have a good understanding of the Movidius NCS and whether it’s appropriate for your Raspberry Pi + object detection project.
To get started with real-time object detection on the Raspberry Pi, just keep reading.
Deprecation Notice: This article uses the Movidius SDK and APIv1/APIv2 which is now superseded by Intel’s OpenVINO software for using the Movidius NCS. Learn more about OpenVINO in this PyImageSearch article.
Real-time object detection on the Raspberry Pi
Today’s blog post is broken into five parts.
First, we’ll install the Movidius SDK and then learn how to use the SDK to generate the Movidius graph files.
From there, we'll write a script for real-time object detection with the Intel Movidius Neural Compute Stick that can be used with the Pi (or an alternative single board computer with minor modifications).
Next, we’ll test the script + compare results.
In a previous post, we learned how to perform real-time object detection in video on the Raspberry Pi using the CPU and the OpenCV DNN module. We achieved approximately 0.9 FPS which serves as our benchmark comparison. Today, we’re going to see how the NCS paired with a Pi performs against the Pi CPU using the same model.
And finally, I’ve captured some Frequently Asked Questions (FAQs). Refer to this section often — I expect it to grow as I receive comments and emails.
Installing the Intel Movidius SDK
Deprecation Notice: This article uses the Movidius SDK and APIv1/APIv2 which is now superseded by Intel’s OpenVINO software for using the Movidius NCS. Learn more about OpenVINO in this PyImageSearch article.
Last week, I reviewed the Movidius Workflow. The workflow has four basic steps:
- Train a model using a full-size machine
- Convert the model to a deployable graph file using the SDK and an NCS
- Write a Python script which deploys the graph file and processes the results
- Deploy the Python script and graph file to your single board computer equipped with an Intel Movidius NCS
In this section we’ll learn how to install the SDK which includes TensorFlow, Caffe, OpenCV, and the Intel suite of Movidius tools.
Requirements:
- Stand-alone machine or VM. We’ll install Ubuntu 16.04 LTS on it
- 30-60 minutes of time depending on download speed and machine capability
- Movidius NCS USB stick
I highlighted “Stand-alone” as it’s important that this machine only be used for Movidius development.
In other words, don’t install the SDK on a “daily development and productivity use” machine where you might have Python Virtual Environments and OpenCV installed. The install process is not entirely isolated and can/will change existing libraries on your system.
However, there is an alternative:
Use a VirtualBox Virtual Machine (or other virtualization system) and run an isolated Ubuntu 16.04 OS in the VM.
The advantage of a VM is that you can install it on your daily use machine and still keep the SDK isolated. The disadvantage is that you won’t have access to a GPU via the VM.
Danielle wants to use a Mac and VirtualBox works well on macOS, so let’s proceed down that path. Note that you could also run VirtualBox on a Windows or Linux host which may be even easier.
Before we get started, I want to bring attention to non-standard VM settings we’ll be making. We’ll be configuring USB settings which will allow the Movidius NCS to stay connected properly.
As far as I can tell from the forums, these are Mac-specific VM USB settings (but I’m not certain). Please share your experiences in the comments section.
Download Ubuntu and Virtualbox
Let’s get started.
First, download the Ubuntu 16.04 64-bit .iso image from the official Ubuntu 16.04.3 LTS download page. You can grab the .iso directly, or the torrent is also appropriate for faster downloads.
While Ubuntu is downloading, if you don’t have Oracle VirtualBox, grab the installer that is appropriate for your OS (I’m running macOS). You can download VirtualBox here.
Non-VM users: If you aren't going to be installing the SDK on a VM, then you can skip downloading/installing VirtualBox. Instead, scroll down to "Install the OS" but ignore the information about the VM and the virtual optical drive; you'll probably be installing with a USB thumb drive.
After you’ve got VirtualBox downloaded, and while the Ubuntu .iso continues to download, you can install VirtualBox. Installation is incredibly easy via the wizard.
From there, since we’ll be using USB passthrough, we need the Extension Pack.
Install the Extension Pack
Let’s navigate back to the VirtualBox download page and download the Oracle VM Extension Pack if you don’t already have it.
The version of the Extension Pack must match the version of VirtualBox you are using. If you have any VMs running, you'll want to shut them down in order to install the Extension Pack. Installing the Extension Pack is a breeze.
Create the VM
Once the Ubuntu 16.04 image is downloaded, fire up VirtualBox, and create a new VM:
Give your VM reasonable settings:
- I chose 2048MB of memory for now.
- I selected 2 virtual CPUs.
- I set up a 40GB dynamically allocated VDI (VirtualBox Disk Image).
The first two settings are easy to change later for best performance of your host and guest OSes.
As for the third setting, it is important to give your system enough space for the OS and the SDK. If you run out of space, you could always “connect” another virtual disk and mount it, or you could expand the OS disk (advanced users only).
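For reference, expanding a dynamically allocated VDI can likely be done from the host with VBoxManage. Here is a minimal sketch, assuming a disk file named ubuntu.vdi; the size is in MB, and you'd still need to grow the partition inside the guest:
$ VBoxManage modifymedium disk "ubuntu.vdi" --resize 51200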
USB passthrough settings
A VM, by definition, is virtually running as software. Inherently, this means that it does not have access to hardware unless you specifically give it permission. This includes cameras, USB, disks, etc.
This is where I had to do some digging on the Intel forums to ensure that the Movidius would work with macOS (because originally it didn't work on my setup).
Ramana @ Intel provided “unofficial” instructions on how to set up USB over on the forums. Your mileage may vary.
In order for the VM to access the USB NCS, we need to alter settings.
Go to the “Settings” for your VM and edit “Ports > USB” to reflect a “USB 3.0 (xHCI) Controller”.
You need to set USB2 and USB3 Device Filters for the Movidius to seamlessly stay connected.
To do this, click the “Add new USB Filter” icon as is marked in this image:
From there, you need to create two USB Device Filters. Most of the fields can be left blank. I just gave each a Name and provided the Vendor ID.
- Name: Movidius1, Vendor ID: 03e7, Other fields: blank
- Name: Movidius2, Vendor ID: 040e, Other fields: blank
Here’s an example for the first one:
Be sure to save these settings.
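If you prefer the command line, the same settings can likely be applied with VBoxManage. This is a sketch assuming your VM is named "movidius"; verify the flags against your VirtualBox version:
$ VBoxManage modifyvm "movidius" --usbxhci on
$ VBoxManage usbfilter add 0 --target "movidius" --name Movidius1 --vendorid 03e7
$ VBoxManage usbfilter add 1 --target "movidius" --name Movidius2 --vendorid 040e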
Install the OS
To install the OS, “insert” the .iso image into the virtual optical drive. To do this, go to “Settings”, then under “Storage” select “Controller: IDE > Empty”, and click the disk icon (marked by the red box). Then find and select your freshly downloaded Ubuntu .iso.
Verify all settings and then boot your machine.
Follow the prompts to “Install Ubuntu”. If you have a fast internet connection, you can select “Download updates while installing Ubuntu”. I did not select the option to “Install third-party software…”.
The next step is to “Erase disk and install Ubuntu” — this is a safe action because we just created the empty VDI disk. From there, set up system name and a username + password.
Once you’ve been instructed to reboot and removed the virtual optical disk, you’re nearly ready to go.
First, let’s update our system. Open a terminal and type the following to update your system:
$ sudo apt-get update && sudo apt-get upgrade
Install Guest Additions
Non-VM users: You should skip this section.
From there, since we're going to be using a USB device (the Intel NCS), let's install Guest Additions. Guest Additions also allows for bidirectional copy/paste between the VM and the host, among other nice sharing utilities.
Guest Additions can be installed by going to the Devices menu of VirtualBox and clicking "Insert Guest Additions CD Image…":
Follow the prompt to press "Return to close this window…", which completes the install.
Take a snapshot
Non-VM users: You can skip this section or make a backup of your desktop/laptop via your preferred method.
From there, I like to reboot followed by taking a “snapshot” of my VM.
Rebooting is important because we just updated and installed a lot of software and want to ensure the changes take effect.
Additionally, a snapshot will allow us to roll back if we make any mistakes or have problems during the install. As we'll find out, there are some gotchas along the way that can trip you up with the Movidius SDK, so this is a worthwhile step.
Definitely take the time to snapshot your system. Go to the VirtualBox menubar and press “Machine > Take Snapshot”.
You can give the snapshot a name such as “Installed OS and Guest Additions” as is shown below:
Installing the Intel Movidius SDK on Ubuntu
This section assumes that you either (a) followed the instructions above to install Ubuntu 16.04 LTS on a VM, or (b) are working with a fresh install of Ubuntu 16.04 LTS on a Desktop/Laptop.
Intel makes the process of installing the SDK very easy. Cheers to that!
But like I said above, I wish there was an advanced method. I like easy, but I also like to be in control of my computer.
Let’s install Git from a terminal:
$ sudo apt-get install git
From there, let’s follow Intel’s instructions very closely so that there are hopefully no issues.
Open a terminal and follow along:
$ cd ~
$ mkdir workspace
$ cd workspace
Now that we’re in the workspace, let’s clone down the NCSDK and the NC App Zoo:
$ git clone https://github.com/movidius/ncsdk.git
$ git clone https://github.com/movidius/ncappzoo.git
And from there, you should navigate into the ncsdk directory and install the SDK:
$ cd ~/workspace/ncsdk
$ make install
You might want to go outside for some fresh air or grab yourself a cup of coffee (or beer depending on what time it is). This process will take about 15 minutes depending on the capability of your host machine and your download speed.
VM Users: Now that the installation is complete, it would be a good time to take another snapshot so we can revert in the future if needed. You can follow the same method as above to take another snapshot (I named mine “SDK installed”). Just remember that snapshots require adequate disk space on the host.
Connect the NCS to a USB port and verify connectivity
This step should be performed on your desktop/laptop.
Non-VM users: You can skip this step because you’ll likely not have any USB issues. Instead, plug in the NCS and scroll to “Test the SDK”.
First, connect your NCS to the physical USB port on your laptop or desktop.
Note: Given that my Mac has Thunderbolt 3 / USB-C ports, I initially plugged in Apple’s USB-C Digital AV Multiport Adapter which has a USB-A and HDMI port. This didn’t work. Instead, I elected to use a simple adapter, but not a USB hub. Basically you should try to eliminate the need for any additional required drivers if you’re working with a VM.
From there, we need to make the USB stick accessible to the VM. Since we have Guest Additions and the Extension Pack installed, we can do this from the VirtualBox menu. In the VM menubar, click "Devices > USB > 'Movidius Ltd. Movidius MA2X5X'" (or a device with a similar name). It's possible that the Movidius already has a checkmark next to it, indicating that it is connected to the VM.
In the VM open a terminal. You can run the following command to verify that the OS knows about the USB device:
$ dmesg
You should see that the Movidius is recognized by reading the most recent 3 or 4 log messages as shown below:
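If the log is noisy, you can optionally trim the output to just those recent entries:
$ dmesg | tail -4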
If you see the Movidius device then it’s time to test the installation.
Test the SDK
This step should be performed on your desktop/laptop.
Now that the SDK is installed, you can test the installation by running the pre-built examples:
$ cd ~/workspace/ncsdk
$ make examples
This may take about five minutes to run and you’ll see a lot of output (not shown in the block above).
If you don't see error messages while all the examples are running, that is good news. You'll notice that the Makefile has executed code to go out and download models and weights from GitHub, and from there it runs mvNCCompile. We'll learn about mvNCCompile in the next section. I'm impressed with the effort put into the Makefiles by the Movidius team.
Another check (this is the same one we did on the Pi last week):
$ cd ~/workspace/ncsdk/examples/apps
$ make all
$ cd hello_ncs_py
$ python hello_ncs.py
Hello NCS! Device opened normally.
Goodbye NCS! Device closed normally.
NCS device working.
This test ensures that the links to your API and connectivity to the NCS are working properly.
If you’ve made it this far without too much trouble, then congratulations!
Generating Movidius graph files from your own Caffe models
Deprecation Notice: This article uses the Movidius SDK and APIv1/APIv2 which is now superseded by Intel’s OpenVINO software for using the Movidius NCS. Learn more about OpenVINO in this PyImageSearch article.
This step should be performed on your desktop/laptop.
Generating graph files is made quite easy by Intel’s SDK. In some cases you can actually compute the graph using a Pi. Other times, you’ll need a machine with more memory to accomplish the task.
There's one main tool that I'd like to share with you: mvNCCompile.
This command line tool supports both TensorFlow and Caffe. It is my hope that Keras will be supported in the future by Intel.
For Caffe, the command line arguments are in the following format (TensorFlow users should refer to the documentation which is similar):
$ mvNCCompile network.prototxt -w network.caffemodel \
	-s MaxNumberOfShaves -in InputNodeName -on OutputNodeName \
	-is InputWidth InputHeight -o OutputGraphFilename
Let’s review the arguments:
- network.prototxt: path/filename of the network file
- -w network.caffemodel: path/filename of the caffemodel file
- -s MaxNumberOfShaves: number of SHAVEs (1, 2, 4, 8, or 12) to use for network layers (I think the default is 12, but the documentation is unclear)
- -in InputNodeName: you may optionally specify a specific input layer (it would match the name in the prototxt file)
- -on OutputNodeName: by default the network is processed through the output tensor, and this option allows a user to select an alternative end point in the network
- -is InputWidth InputHeight: the input shape is very important and should match the design of your network
- -o OutputGraphFilename: if no file/path is specified, this defaults to the very ambiguous filename, graph, in the current working directory
Where’s the batch size argument?
The batch size for the NCS is always 1 and the number of color channels is assumed to be 3.
If you provide command line arguments to mvNCCompile in the right format with an NCS plugged in, then you'll be on your way to having a graph file rather quickly.
There's one caveat (at least from my experience thus far with Caffe files). The mvNCCompile tool requires that the prototxt be in a specific format.
You might have to modify your prototxt to get the mvNCCompile tool to work. If you're having trouble, the Movidius forums may be able to guide you.
Today we're working with a MobileNet Single Shot Detector (SSD) trained with Caffe. The GitHub user chuanqui305 gets credit for training the model on the MS-COCO dataset. Thank you chuanqui305!
I have provided chuanqui305’s files in the “Downloads” section. To compile the graph you should execute the following command:
$ mvNCCompile models/MobileNetSSD_deploy.prototxt \
	-w models/MobileNetSSD_deploy.caffemodel \
	-s 12 -is 300 300 -o graphs/mobilenetgraph
mvNCCompile v02.00, Copyright @ Movidius Ltd 2016
/usr/local/bin/ncsdk/Controllers/FileIO.py:52: UserWarning: You are using a large type. Consider reducing your data sizes for best performance
  "Consider reducing your data sizes for best performance\033[0m")
You should expect the Copyright message and possibly additional information or a warning like I encountered above. I proceeded by ignoring the warning without any trouble.
Object detection with the Intel Movidius Neural Compute Stick
Deprecation Notice: This article uses the Movidius SDK and APIv1/APIv2 which is now superseded by Intel’s OpenVINO software for using the Movidius NCS. Learn more about OpenVINO in this PyImageSearch article.
You can write this code on your desktop/laptop or your Pi; however, you should run it on your Pi in the next section.
Let’s write a real-time object detection script. The script very closely aligns with the non-NCS version that we built in a previous post.
You can find today’s script and associated files in the “Downloads” section of this blog post. I suggest you download the source code and model file if you wish to follow along.
Once you've downloaded the files, open ncs_realtime_objectdetection.py:
# import the necessary packages
from mvnc import mvncapi as mvnc
from imutils.video import VideoStream
from imutils.video import FPS
import argparse
import numpy as np
import time
import cv2
We import our packages on Lines 2-8, taking note of mvncapi, which is the Movidius NCS Python API package.
From there we’ll perform initializations:
# initialize the list of class labels our network was trained to
# detect, then generate a set of bounding box colors for each class
CLASSES = ["background", "aeroplane", "bicycle", "bird",
	"boat", "bottle", "bus", "car", "cat", "chair", "cow",
	"diningtable", "dog", "horse", "motorbike", "person",
	"pottedplant", "sheep", "sofa", "train", "tvmonitor"]
COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))

# frame dimensions should be square
PREPROCESS_DIMS = (300, 300)
DISPLAY_DIMS = (900, 900)

# calculate the multiplier needed to scale the bounding boxes
DISP_MULTIPLIER = DISPLAY_DIMS[0] // PREPROCESS_DIMS[0]
Our class labels and associated random colors (one random color per class label) are initialized on Lines 12-16.
Our MobileNet SSD requires dimensions of 300×300, but we’ll be displaying the video stream at 900×900 to better visualize the output (Lines 19 and 20).
Since we're changing the dimensions of the image, we need to calculate the scalar value used to scale our object detection boxes (Line 23); here the multiplier works out to 900 // 300 = 3.
From there we'll define a preprocess_image function:
def preprocess_image(input_image):
	# preprocess the image
	preprocessed = cv2.resize(input_image, PREPROCESS_DIMS)
	preprocessed = preprocessed - 127.5
	preprocessed = preprocessed * 0.007843
	preprocessed = preprocessed.astype(np.float16)

	# return the image to the calling function
	return preprocessed
The actions made in this pre-process function are specific to our MobileNet SSD model. We resize, perform mean subtraction, scale the image, and convert it to float16 format (Lines 27-30).
Then we return the preprocessed image to the calling function (Line 33).
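As a quick sanity check (test.jpg here is just a stand-in for any image you have on disk), you can verify the function's output shape and dtype:
# quick sanity check of preprocess_image (test.jpg is a placeholder filename)
image = cv2.imread("test.jpg")
blob = preprocess_image(image)
print(blob.shape, blob.dtype)   # expect: (300, 300, 3) float16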
To learn more about pre-processing for deep learning, be sure to refer to my book, Deep Learning for Computer Vision with Python.
From there we'll define a predict function:
def predict(image, graph):
	# preprocess the image
	image = preprocess_image(image)

	# send the image to the NCS and run a forward pass to grab the
	# network predictions
	graph.LoadTensor(image, None)
	(output, _) = graph.GetResult()

	# grab the number of valid object predictions from the output
	# (cast to an int so it plays nicely with range), then
	# initialize the list of predictions
	num_valid_boxes = int(output[0])
	predictions = []
This predict function applies to users of the Movidius NCS, and it is largely based on the Movidius NC App Zoo GitHub example (I made a few minor modifications).
The function requires an image and a graph object (which we'll instantiate later).
First we pre-process the image (Line 37).
From there, we run a forward pass through the neural network utilizing the NCS while grabbing the predictions (Lines 41 and 42).
Then we extract the number of valid object predictions (num_valid_boxes) and initialize our predictions list (Lines 46 and 47).
From there, let’s loop over the valid results:
	# loop over results
	for box_index in range(num_valid_boxes):
		# calculate the base index into our array so we can extract
		# bounding box information
		base_index = 7 + box_index * 7

		# boxes with non-finite (inf, nan, etc) numbers must be ignored
		if (not np.isfinite(output[base_index]) or
			not np.isfinite(output[base_index + 1]) or
			not np.isfinite(output[base_index + 2]) or
			not np.isfinite(output[base_index + 3]) or
			not np.isfinite(output[base_index + 4]) or
			not np.isfinite(output[base_index + 5]) or
			not np.isfinite(output[base_index + 6])):
			continue

		# extract the image width and height and clip the boxes to the
		# image size in case network returns boxes outside of the image
		# boundaries
		(h, w) = image.shape[:2]
		x1 = max(0, int(output[base_index + 3] * w))
		y1 = max(0, int(output[base_index + 4] * h))
		x2 = min(w, int(output[base_index + 5] * w))
		y2 = min(h, int(output[base_index + 6] * h))

		# grab the prediction class label, confidence (i.e., probability),
		# and bounding box (x, y)-coordinates
		pred_class = int(output[base_index + 1])
		pred_conf = output[base_index + 2]
		pred_boxpts = ((x1, y1), (x2, y2))

		# create prediction tuple and append the prediction to the
		# predictions list
		prediction = (pred_class, pred_conf, pred_boxpts)
		predictions.append(prediction)

	# return the list of predictions to the calling function
	return predictions
Okay, so the above code might look pretty ugly. Let's take a step back. The goal of this loop is to append prediction data to our predictions list in an organized fashion so we can use it later. This loop just extracts and organizes the data for us.
But what in the world is the base_index?
Basically, all of our data is stored in one long array/list (output). Using the box_index, we calculate our base_index, which we'll then use (with more offsets) to extract prediction data.
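A quick worked example, using purely illustrative numbers:
# illustrative only: how box_index maps into the flat output array
box_index = 1                    # the second detection
base_index = 7 + box_index * 7   # = 14
# output[base_index + 1] -> class index
# output[base_index + 2] -> confidence
# output[base_index + 3] through output[base_index + 6] -> x1, y1, x2, y2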
I’m guessing that whoever wrote the Python API/bindings is a C/C++ programmer. I might have opted for a different way to organize the data such as a list of tuples like we’re about to construct.
Why are we ensuring values are finite on Lines 55-62?
This ensures that we have valid data. If it's invalid, we continue back to the top of the loop (Line 63) and try another prediction.
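If np.isfinite is new to you, a one-line demonstration:
import numpy as np

# isfinite is True for ordinary numbers, False for nan and +/- inf
print(np.isfinite(0.5), np.isfinite(np.nan), np.isfinite(np.inf))
# -> True False False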
What is the format of the output list?
The output list has the following format:
- output[0]: we extracted this value on Line 46 as num_valid_boxes
- output[base_index + 1]: prediction class index
- output[base_index + 2]: prediction confidence
- output[base_index + 3]: object boxpoint x1 value (it needs to be scaled)
- output[base_index + 4]: object boxpoint y1 value (it needs to be scaled)
- output[base_index + 5]: object boxpoint x2 value (it needs to be scaled)
- output[base_index + 6]: object boxpoint y2 value (it needs to be scaled)
Lines 68-82 handle building up a single prediction tuple. The prediction consists of (pred_class, pred_conf, pred_boxpts), and we append the prediction to the predictions list on Line 83.
After we're done looping through the data, we return the predictions list to the calling function on Line 86.
From there, let’s parse our command line arguments:
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-g", "--graph", required=True,
	help="path to input graph file")
ap.add_argument("-c", "--confidence", type=float, default=.5,
	help="confidence threshold")
ap.add_argument("-d", "--display", type=int, default=0,
	help="switch to display image on screen")
args = vars(ap.parse_args())
We parse our three command line arguments on Lines 89-96.
We require the path to our graph file. Optionally we can specify a different confidence threshold or display the image to the screen.
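For example, assuming the graph file we compile earlier in this post sits at graphs/mobilenetgraph, an invocation exercising all three flags might look like:
$ python ncs_realtime_objectdetection.py --graph graphs/mobilenetgraph \
	--confidence 0.6 --display 1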
Next, we’ll connect to the NCS and load the graph file onto it:
# grab a list of all NCS devices plugged in to USB
print("[INFO] finding NCS devices...")
devices = mvnc.EnumerateDevices()

# if no devices found, exit the script
if len(devices) == 0:
	print("[INFO] No devices found. Please plug in a NCS")
	quit()

# use the first device since this is a simple test script
# (you'll want to modify this if using multiple NCS devices)
print("[INFO] found {} devices. device0 will be used. "
	"opening device0...".format(len(devices)))
device = mvnc.Device(devices[0])
device.OpenDevice()

# open the CNN graph file
print("[INFO] loading the graph file into RPi memory...")
with open(args["graph"], mode="rb") as f:
	graph_in_memory = f.read()

# load the graph into the NCS
print("[INFO] allocating the graph on the NCS...")
graph = device.AllocateGraph(graph_in_memory)
The above block is identical to last week's, so I'm not going to review it in detail. Essentially, we're checking that we have an available NCS, connecting to it, and loading the graph file onto it.
The result is a graph object which we use in the predict function above.
Let’s kick off our video stream:
# open a pointer to the video stream thread and allow the buffer to
# start to fill, then start the FPS counter
print("[INFO] starting the video stream and FPS counter...")
vs = VideoStream(usePiCamera=True).start()
time.sleep(1)
fps = FPS().start()
We start the camera VideoStream, allow our camera to warm up, and instantiate our FPS counter.
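If you're working with a USB webcam instead of the Pi camera module, the likely one-line swap (untested in this post) is:
# use a USB webcam (device 0) instead of the Pi camera module
vs = VideoStream(src=0).start()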
Now let’s process the camera feed frame by frame:
# loop over frames from the video file stream
while True:
	try:
		# grab the frame from the threaded video stream
		# make a copy of the frame and resize it for display/video purposes
		frame = vs.read()
		image_for_result = frame.copy()
		image_for_result = cv2.resize(image_for_result, DISPLAY_DIMS)

		# use the NCS to acquire predictions
		predictions = predict(frame, graph)
Here we’re reading a frame from the video stream, making a copy (so we can draw on it later), and resizing it (Lines 135-137).
We then send the frame through our object detector which will return predictions to us.
Let's loop over the predictions next:
		# loop over our predictions
		for (i, pred) in enumerate(predictions):
			# extract prediction data for readability
			(pred_class, pred_conf, pred_boxpts) = pred

			# filter out weak detections by ensuring the `confidence`
			# is greater than the minimum confidence
			if pred_conf > args["confidence"]:
				# print prediction to terminal
				print("[INFO] Prediction #{}: class={}, confidence={}, "
					"boxpoints={}".format(i, CLASSES[pred_class], pred_conf,
					pred_boxpts))
Looping over the predictions, we first extract the class, confidence, and boxpoints for the object (Line 145).
If the confidence is above the threshold, we print the prediction to the terminal and check if we should display the image on the screen:
				# check if we should show the prediction data
				# on the frame
				if args["display"] > 0:
					# build a label consisting of the predicted class and
					# associated probability
					label = "{}: {:.2f}%".format(CLASSES[pred_class],
						pred_conf * 100)

					# extract information from the prediction boxpoints
					(ptA, ptB) = (pred_boxpts[0], pred_boxpts[1])
					ptA = (ptA[0] * DISP_MULTIPLIER, ptA[1] * DISP_MULTIPLIER)
					ptB = (ptB[0] * DISP_MULTIPLIER, ptB[1] * DISP_MULTIPLIER)
					(startX, startY) = (ptA[0], ptA[1])
					y = startY - 15 if startY - 15 > 15 else startY + 15

					# display the rectangle and label text
					cv2.rectangle(image_for_result, ptA, ptB,
						COLORS[pred_class], 2)
					cv2.putText(image_for_result, label, (startX, y),
						cv2.FONT_HERSHEY_SIMPLEX, 1, COLORS[pred_class], 3)
If we're displaying the image, we first build a label string which will contain the class name and confidence in percentage form (Lines 160 and 161).
From there we extract the corners of the rectangle and calculate the position for our label relative to those points (Lines 164-168).
Finally, we display the rectangle and text label on the screen. If there are multiple objects of the same class in the frame, the boxes and labels will have the same color.
From there, let’s display the image and update our FPS counter:
		# check if we should display the frame on the screen
		# with prediction data (you can achieve faster FPS if you
		# do not output to the screen)
		if args["display"] > 0:
			# display the frame to the screen
			cv2.imshow("Output", image_for_result)
			key = cv2.waitKey(1) & 0xFF

			# if the `q` key was pressed, break from the loop
			if key == ord("q"):
				break

		# update the FPS counter
		fps.update()

	# if "ctrl+c" is pressed in the terminal, break from the loop
	except KeyboardInterrupt:
		break

	# if there's a problem reading a frame, break gracefully
	except AttributeError:
		break
Outside of the prediction loop, we again make a check to see if we should display the frame to the screen. If so, we show the frame (Line 181) and wait for the “q” key to be pressed if the user wants to quit (Lines 182-186).
We update our frames per second counter on Line 189.
From there, we’ll most likely continue to the top of the frame-by-frame loop to complete the process again.
If the user happened to press “ctrl+c” in the terminal or if there’s a problem reading a frame, we break out of the loop.
# stop the FPS counter timer
fps.stop()

# destroy all windows if we are displaying them
if args["display"] > 0:
	cv2.destroyAllWindows()

# stop the video stream
vs.stop()

# clean up the graph and device
graph.DeallocateGraph()
device.CloseDevice()

# display FPS information
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))
This last code block handles some housekeeping (Lines 200-211) and finally prints the elapsed time and the frames per second pipeline information to the screen. This information allows us to benchmark our script.
Movidius NCS object detection results
This step should be performed on your Raspberry Pi + NCS with an HDMI cable + screen hooked up. You'll also need a keyboard and mouse, and as I described in my previous tutorial (Figure 2), you may need a dongle extension cable to make room for a USB keyboard/mouse. It's also possible to run this step on a desktop/laptop, but the NCS is likely to be slower than simply using that machine's CPU.
Let’s run our real-time object detector with the NCS using the following command:
$ python ncs_realtime_objectdetection.py --graph graph --display 1
Prediction results will be printed in the terminal and the image will be displayed on our Raspberry Pi monitor.
Below I have included an example GIF animation of shooting a video with a smartphone and then post-processing it on the Raspberry Pi:
Along with the full example video containing all of the clips:
Thank you to David McDuffee for shooting these example clips so I could include them!
Here’s an example video of the system in action recorded with a Raspberry Pi:
A big thank you to David Hoffman for demoing the Raspberry Pi + NCS in action.
Note: As some of you know, this past week I was taking care of a family member who is recovering from emergency surgery. While I was able to get the blog post together, I wasn’t able to shoot the example videos. A big thanks to both David Hoffman and David McDuffee for gathering great examples making today’s post possible!
And here’s a table of results:
The Movidius NCS can propel the Pi to a ~6.88x speedup over the standard CPU object detection! That’s progress.
I reported results with the display option being “on” as well as “off”. As you can see, displaying on the screen slows down the FPS by about 1 FPS due to the OpenCV drawing text/boxes as well as highgui overhead. The reason I reported both this week is so that you’ll have a better idea of what to expect if you’re using this platform and performing object detection without the need for a display (such as in a robotics application).
Note: Optimized OpenCV 3.3+ (with the DNN module) installations will have faster FPS on the Pi CPU (I reported 0.9 FPS previously). To install OpenCV with NEON and VFP3 optimizations just read this previous post. I’m not sure if the version of OpenCV 2.4 that gets installed with the Movidius toolchain contains these optimizations which is one reason why I reported the non-optimized 0.49 FPS metric in the table.
I'll wrap up this section by saying that it is possible to give the illusion of faster FPS with threading if you so wish. Check out this previous post and implement the strategy in the ncs_realtime_objectdetection.py script that we reviewed today.
Frequently asked questions (FAQs)
In this section I detail the answers to Frequently Asked Questions regarding the NCS.
Why does my Movidius NCS continually disconnect from my VM? It appears to be connected, but then when I run ‘make examples’ as instructed above, I see connectivity error messages. I’m running macOS and using a VM.
You must use the VirtualBox Extension Pack and add two USB device filters specifically for the Movidius. Please refer to the USB passthrough settings above.
No predictions are being made on the video — I can see the video on the screen, but I don’t see any error messages or stacktrace. What might be going wrong?
This is likely due to an error in pre-processing.
Be sure your pre-processing function is correctly performing resizing and normalization.
First, the dimensions of the pre-processed image must match the model exactly. For the MobileNet SSD that I’m working with, it is 300×300.
Second, you must normalize the input via mean subtraction and scaling.
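For the MobileNet SSD used in this post, that's the (pixel - 127.5) * 0.007843 operation from preprocess_image; checking the endpoints shows it maps [0, 255] to roughly [-1, 1]:
# 0.007843 is approximately 1 / 127.5
for pixel in (0.0, 127.5, 255.0):
	print((pixel - 127.5) * 0.007843)
# -> -0.9999..., 0.0, 0.9999...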
I just bought an NCS and want to run the example on my Pi using my HDMI monitor and a keyboard/mouse. How do I access the USB ports that the NCS is blocking?
It seems a bit of poor design that the NCS blocks adjacent USB ports. The only solution I know of is to buy a short extension cable such as this 6in USB 3.0 compatible cable on Amazon — this will give more space around the other three USB ports.
Of course, you could also take your NCS to a machine shop and mill down the heatsink, but that wouldn’t be good for your warranty or cooling purposes.
How do I install the Python bindings to the NCS SDK API in a virtual environment?
Quite simply: you can’t.
Install the SDK on an isolated computer or VM.
For your Pi, install the SDK in API-only mode on a separate microSD card from the one you currently use for everyday work.
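If I recall correctly, the NCSDK Makefile provides an API-only target for exactly this purpose; double-check against the NCSDK documentation for your version:
$ cd ~/workspace/ncsdk
$ make api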
I have errors when running ‘mvNCCompile’ on my models. What do you recommend?
The Movidius graph compiling tool, mvNCCompile, is very particular about the input files. Oftentimes for Caffe, you’ll need to modify the .prototxt file. For TensorFlow I’ve seen that the filenames themselves need to be in a particular format.
Generally it is a simple change that needs to be made, but I don’t want to lead you in the wrong direction. The best resource right now is the Movidius Forums.
In the future, I may update these FAQs and the Generating Movidius graph files from your own Caffe models section with guidelines or a link to Intel documentation.
I’m hoping that the Movidius team at Intel can improve their graph compiler tool as well.
What’s next?
If you’re looking to perform image classification with your NCS, then refer to last week’s blog post.
Let me know what you’re looking to accomplish with a Movidius NCS and maybe I’ll turn the idea into a blog post.
Be sure to check out the Movidius blog and TopCoder Competition as well.
Movidius blog and GitHub
The Movidius team at Intel has a blog where you’ll find additional information:
The GitHub community surrounding the Movidius NCS is growing. I recommend that you search for Movidius projects using the GitHub search feature.
Two official repos that you should watch are listed below (click the "watch" button to be informed of updates):
- movidius/ncsdk
- movidius/ncappzoo
TopCoder Competition
Are you interested in earning up to $8,000?
Intel is sponsoring a competition on TopCoder.
There are $20,000 in prizes up for grabs (first place wins $8,000)!
Registration and submission closes on February 26, 2018. That is next Monday, so don’t waste any time!
Keep track of the leaderboard and standings!
Summary
Today, we answered PyImageSearch reader Danielle's questions. We learned how to:
- Install the SDK in a VM so she can use her Mac.
- Generate Movidius graph files from Caffe models.
- Perform object detection with the Raspberry Pi and NCS.
We saw that MobileNet SSD is >6.8x faster on a Raspberry Pi when using the NCS.
The Movidius NCS is capable of running many state-of-the-art networks and is a great value at less than $100 USD. You should consider purchasing one if you want to deploy it in a project or if you’re just yearning for another device to tinker with. I’m loving mine.
There is a learning curve, but the Movidius team at Intel has done a decent job breaking down the barrier to entry with working Makefiles on GitHub.
There is of course room for improvement, but nobody said deep learning was easy.
I’ll wrap today’s post by asking a simple question:
Are you interested in learning the fundamentals of deep learning, how to train state-of-the-art networks from scratch, and discovering my handpicked best practices?
If that sounds good, then you should definitely check out my latest book, Deep Learning for Computer Vision with Python. It is jam-packed with practical information and deep learning code that you can use in your own projects.
Deprecation Notice: This article uses the Movidius SDK and APIv1/APIv2 which is now superseded by Intel’s OpenVINO software for using the Movidius NCS. Learn more about OpenVINO in this PyImageSearch article.
Dougi
This looks like a great article and I look forward to digging into it properly.
I am currently working on a similar thing and am thinking of having a go with DarkNet’s YOLO.
Do you know if that would work on a Raspberry Pi too? If so, an article on that would be awesome!
Thanks so much.
Adrian Rosebrock
Movidius supports a variation of YOLO based on a Caffe port of the DarkNet method. I would suggest using the Movidius version of YOLO or finding a Caffe version that can be directly imported to OpenCV’s “dnn” module. See this blog post for more information.
John Jamieson
Have you had a look at https://github.com/gudovskiy/yoloNCS ?
haixun
Well done, Adrian! First such detailed post on this. Thanks!
Adrian Rosebrock
Thanks haixun!
Steve Cox
Great article. If anyone has hands-on experience taking a re-trained TensorFlow object detection model and running it on the OpenCV 3.3 DNN API, just like you do with Caffe models, I would greatly appreciate the help.
I can't seem to find the secret sauce to take a TensorFlow model and get it loaded in OpenCV DNN. I realize everyone has great experience with Caffe models, but I want to stick with the TensorFlow/Keras framework.
Thanks !!!
Adrian Rosebrock
Hey Steve! TensorFlow models can be a real pain to work with when it comes to loading their serialized weights. This is true for both OpenCV DNN and the NCS. If I can figure the “secret sauce” I’ll absolutely be doing a blog post on it.
Vinod Alase
Hi Adrian ,
Very nice blog. Can you help me with generating Movidius graph files from TensorFlow?
Adrian Rosebrock
I’m covering that exact topic inside Raspberry Pi for Computer Vision.
farhan
What if I just want to detect the "person" class and ignore all other objects in the image? What must I do?
Adrian Rosebrock
See this tutorial.
Dmitry
There is a way to create a supportive text definition of a TensorFlow graph. As you know, Caffe models are represented by .caffemodel and .prototxt. Actually, they are both protocol buffers, but the latter has no weights and is easy to edit. For example, you can add extra layers without weights, like SoftMax. The script mentioned at the wiki page https://github.com/opencv/opencv/wiki/TensorFlow-Object-Detection-API (tf_text_graph_ssd.py) creates a .pbtxt file that can be used in OpenCV to help it import a TensorFlow graph. Note that this script works only for SSD-based object detection models.
Ali
Very well detailed tutorial as always, Adrian! Looking forward to getting my Intel NCS! Meanwhile, I'm trying to use real-time object detection along with some OpenCV image processing for a project. Which framework and which model implementation would you suggest to achieve a decent frame rate (around 20-30 FPS) on a GPU? Thanks.
Adrian Rosebrock
Hey Ali — a Single Shot Detector (SSD) + MobileNet would get you in the range of 30-50 FPS on a GPU.
braca
Thanks Adrian, very cool post! I think you skipped the section on installing pip3, and one could encounter problems while trying to complete all the steps.
Adrian Rosebrock
Hi braca — just to clarify, are you referring to installing pip on the VM or the Raspberry Pi?
braca
Ignore previous comment restarted the system and worked!!!
braca
I'm using a VM!! Thanks Adrian!!
Adrian Rosebrock
Awesome, I’m glad it’s now working for you, Braca 🙂
David Hoffman
Hi Braca,
I used the system PIP. The only packages I installed are imutils and picamera[array].
$ pip install imutils
$ pip install “picamera[array]”
You may need to use pip3.
braca
Thanks David!! It helped!!
Alvin
Hi Adrian, I always have a problem when running make examples on the NCSDK. It says there's a syntax error in protobuf. It appears that protobuf 2.6.1 is not compatible with Python 3, and make examples on Movidius uses Python 3. How did you get around this problem? Thanks
Adrian Rosebrock
Hi Alvin — I suggest you double check the formatting in your prototxt and then post in the Movidius Forums if you’re still having trouble.
Al Bee
Hey Adrian,
Have you tried YOLO? It uses a totally different approach by applying a single neural network to the full image.
https://pjreddie.com/darknet/yolo/
Adrian Rosebrock
I have used YOLO for different projects, yes. It really depends on the project but I tend to find SSDs provide a better balance between speed and accuracy. Of course, it really depends on the project. I have yet to try YOLO on the NCS though. That will be for another project 🙂
Al Bee
For others reference, here is the link to the instructions to install yoloNCS. Cheers!
https://github.com/gudovskiy/yoloNCS/blob/master/README.md
Adrian Rosebrock
Thanks for sharing!
Achintya Kumar
Does the yoloNCS work accurately with the neural stick and raspberry pi?
iamtodor
Hello Adrian Rosebrock,
I have found your blog post https://pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/ very useful. I want to say thank you.
The only thing I have incomplete is I can’t figure out how to use range-detector: https://github.com/jrosebr1/imutils/blob/master/bin/range-detector
I see a lot of people have that problem, and you replied that you might publish a whole article about that util. Unfortunately, I can't find it.
Can you please help me run that script? I have an image and a specific object (an orange). I want to know the upper and lower boundaries.
Thanks in advance
Adrian Rosebrock
I had not written the article on it yet, I’ve been busy with a few other projects and writing up these deep learning tutorials. I’ll make a note to write an article on it soon.
Marc A Getter
After the make install for ncsdk runs for about 10 minutes, my vm is crashing with a red and purple screen with a combination of characters. Has anyone else encountered this?
David Hoffman
Which OS is your Host, which OS is your Guest, and which version of VirtualBox are you running? For reference, I’m on macOS OSX 10.13.3, my Guest VM is Ubuntu 16.04, and VirtualBox is 5.2.6.
Jim
I followed all the instructions (I think!) and get:
(cv) $ python ncs_realtime_objectdetection.py --graph graph --display 1
Traceback (most recent call last):
File “ncs_realtime_objectdetection.py”, line 6, in
from mvnc import mvncapi as mvnc
ImportError: No module named mvnc
I did install ncsdk and it talks to my NCS
$ python hello_ncs.py
Hello NCS! Device opened normally.
Goodbye NCS! Device closed normally.
NCS device working.
Adrian Rosebrock
Hey Jim — unfortunately you will not be able to use Python virtual environments with the NCS. I discuss this more in the previous post.
FanWah
May I know the solution for this issue? I'm having this issue as well.
abdbaddude
Wondering why you keep using print("[INFO] ...")? I guess the Python logging facility could be used. Or is there a performance gain considered on the Raspberry Pi?
Adrian Rosebrock
You could use logging if you wanted, there is no problem with that. I just used “print” as some readers may not be comfortable or used to the logging features with Python. It’s pretty trivial to swap out “print” for “logging” so feel free to use whichever one you are comfortable with.
Prubio
Hi!
I downloaded your code and I had some errors. I share it here in case can help others.
The first error I suffered was: numpy.float16 cannot be interpreted as integer.
To solve that, in line 50:
num_valid_boxes=output[0].astype(int)
The second error: DISP_MULTIPLIER is not defined
In lines 169 and 170 change DISP_MULTIPLIER for DISPLAY_MULTIPLIER.
Great job Adrian, congrats!
I am trying to apply the NCS to object detection with TensorFlow, for my own models, but I have problems. If you have time, I'll really appreciate an explanation about that.
Cheers!
Adrian Rosebrock
Hi Prubio — thanks for the catch about the variable name mismatch. This was a mistake while putting the post together and it has now been corrected. I didn’t encounter the same issue with needing to force the type of output[0] to an int, but I’m glad that you got yours working! What is your question about NCS object detection with tensorflow?
Prubio
I trained ssd_mobilenet_v1 (pre-trained on the COCO dataset) for object detection, with LabelImg and the TensorFlow Object Detection API. I obtained the frozen_inference_graph.pb and it's working perfectly, but I can't convert it with mvNCCompile for use with the Movidius NCS. I have read that ssd_mobilenet_v1 is not supported. But I have the same problem with ssd_inception_v2.
How can I reach my goal of training a network with the Object Detection API and obtaining a graph to run inference with the Movidius NCS?
Thanks Adrian.
Cheers!
Adrian Rosebrock
Prubio — I’ve had success using the models that Movidius has provided. Using models of my own I’ve had limited success and have been referring to the Movidius Forums and searching on GitHub. That’s the first place I’ve been going for support and that seems to be where the “experts” are. Nobody is truly an expert yet (short of the ones that coded the compile tool at Movidius) as this product is still in its infancy. Early adopters have definitely been struggling along, but it will get better I hope.
Leo
I had the same errors. Thanks a lot!!
simon
Hi,
Thanks for your amazing post. I have tested the same script on my workstation (i7 core, 64GB RAM, NVIDIA 980 Ti) and the FPS is around 10. I guess it depends on CPU/RAM performance.
Since it seems to depend on CPU/RAM performance, I wish to monitor system CPU and RAM usage when more than one NCS stick is taking a job; I got a second NCS stick and wonder how to assign different work to the second stick while the first works on another script.
Thanks,
simon
So this
“device = mvnc.Device(devices[0])” should be
"device = mvnc.Device(devices[1])" on the second USB stick?
Thanks,
simon
I have tested it and it works fine.
The second one shows the same performance.
CPU usages on both: 9%
RAM usage on both: about 130 MB.
FPS: around 10 FPS.
Thanks,
Adrian Rosebrock
Excellent. Thanks for sharing.
Adrian Rosebrock
If you have two NCS devices plugged in, that’s my understanding as to how it would work.
Adrian Rosebrock
Hi Simon — I’d recommend a threaded approach if you’re putting multiple NCS devices to work. Here’s a relevant blog post to get you started. Populate a queue as frames come in. Then have a thread to preprocess and assign the images to the available NCSs (in separate threads). The trick would be getting the detections back into the right order before displaying since you’ve got multiple worker threads. To accommodate, I’d recommend including additional information with each frame from the start (frame count number). This might be a future blog post idea.
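Here's a very rough, untested sketch of that idea; predict is the function from this post, and graph0/graph1 are stand-ins for two graphs you've already allocated on two devices:
# rough, untested sketch: fan frames out to multiple NCS devices
from queue import Queue
from threading import Thread

frame_queue = Queue(maxsize=32)   # holds (frame_number, frame) tuples
result_queue = Queue()            # collects (frame_number, predictions)

def ncs_worker(graph):
	while True:
		(num, frame) = frame_queue.get()
		result_queue.put((num, predict(frame, graph)))
		frame_queue.task_done()

# start one worker thread per device/graph pair
for g in (graph0, graph1):
	Thread(target=ncs_worker, args=(g,), daemon=True).start()
You'd then re-order results by frame number before displaying them.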
Michael
Adrian,
When I download the code from your email, I get the following error:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
Access Denied
E33023FCA7F14778
Adrian Rosebrock
Hi Michael — thank you for bringing this to my attention. I uploaded a new version of the NCS code yesterday and forgot to set the permissions to make the file downloadable. It is fixed now and the code can be downloaded.
Jussi
The VM doesn't recognize my Movidius stick. I have installed the Extension Pack and inserted Guest Additions.
David Hoffman
Hi Jussi, I definitely struggled with USB too, so you’re not alone. Which OS is your Host, which OS is your Guest, and which version of VirtualBox are you running? For reference, I’m on macOS OSX 10.13.3, my Guest VM is Ubuntu 16.04, and VirtualBox is 5.2.6.
Jussi
Hi David, I use Ubuntu 17.10, my Guest is 16.04, and VirtualBox is 5.2.6. I am really stuck with this. Maybe I'll also try with my Mac.
David Hoffman
Hi Jussi, I would suggest that you post in the VirtualBox forums. They’ll be able to help you and their community is very active. Feel free to post a linkback to your VirtualBox forum post so that other PyImageSearch readers can see any potential solutions. I’m also curious — did you experience a VirtualBox error, or was the Stick just not able to stay connected (an NCS error in the terminal of the VM itself)?
Jussi
Ok. I will ask that on the VM forum. Ubuntu did not recognize the USB stick even though the filters have the right setup. I changed to my Mac and the stick was found right away ;).
David Hoffman
Thanks for sharing. I’m sorry you’re having trouble with an Ubuntu host — I don’t currently know a solution for you. If you figure it out, then please share to benefit the community.
Don
Don't forget to add the 2 filters described above in "USB passthrough settings". I struggled with this problem for days, until I came to this article. (thumbs up Adrian) Intel's team gives NO instructions on this, and users in the forums state you must enter "Product ID" and "Vendor ID" in the filter. THIS WILL FAIL!! (at least for me) ONLY enter the "Vendor ID": '040e' and '03e7' for each.
You can see if it's connected by looking at Devices > USB in the VM window. If there's a check next to the Movidius, then Ubuntu can see it, and it should auto-reconnect during initialization.
Adrian Rosebrock
Thanks for sharing Don. I’m glad this blog post was able help you!
Raghvendra Jain
Hi, Thank you for the great blog! My question is can we try Caffe2 also on this stick too? If so, it will be very easy to write code using PyTorch and use ONNX to convert a model defined in PyTorch into the ONNX format and then load it into Caffe2. Thank you very much.
Adrian Rosebrock
As far as I know, Caffe2 isn’t supported yet — there’s a Movidius Forum topic question and nobody from Intel has responded. I’m sure it is on their roadmap (quite honestly it would be nice if they share their roadmap on the Movidius Blog).
Raghvendra Jain
Thank you for the reply!
Pongrut
Hi, I always appreciate your dedication to your blogs
I have a doubt about the process of creating a graph. I tried downloading the MobileNet_deploy.caffemodel and MobileNet_deploy.prototxt files from chuanqui305's GitHub, and of course, I failed to generate graph files.
With your provided files, I can create your graph file as you showed in the blog, so I want to ask what needs to be done to generate the graph file successfully.
Adrian Rosebrock
I believe that the Movidius Makefile associated with the models from GitHub user chuanqui305 are compatible. Please check the Movidius Forums, the Makefile on Github, and this forum post.
Andy
Just an update for anyone trying to use the NCS on a VirtualBox VM on Windows 10…
It looks like there are various issues with setting this up depending on your PC. I’m on an ACER with 2 x USB3.0 and 1 x USB2.0 ports on Windows 10 Home and running a VirtualBox VM with Ubuntu 16.04.
I've gone through various combinations of plugging the NCS into all three different USB ports, and USB 2.0 seemed to be the most stable. The device shows up in the device list in the VM USB menu as Movidius MA2X5X with a Vendor ID of 03e7 and Product ID of 2150. It's worth noting that the Movidius forum has indicated that it's not worth pursuing the VM route as it does seem fraught with challenges, but Adrian has summarised the various steps worth trying.
For my part, I would add that since I tried the stick in the USB3.0 port first which didn’t work, it seemed to leave residual devices in the system that were being picked up by VirtualBox (I had an unknown device 03e7:2150 and a Movidius LSC (or VSC maybe as it’s now gone) on 03e7:2150 too) .
The issue of the device being dropped from the VM system is still present (you’ll see this when you run ‘make examples’ in the ncsdk folder ) and there was some traffic about this on the forums but the ncappzoo hello_ncs_py works as a single test so persevere but be patient.
Thanks again to Adrian for this great write-up with all the supporting detail.
Adrian Rosebrock
Andy, thanks for sharing your experience with the VM on Windows 10.
Niklas
Hi Andy,
quick question: is the
[Error 7] Toolkit Error: USB Failure. Code: No device found
what you're referring to here?
I got that error message myself when I try to launch the examples… (Win10)
John Jamieson
Hi Andy, I use VirtualBox 5.2.8 on Windows 10 FCU. The VM is Ubuntu 16.04.5. The NCS is plugged into a USB 3 port. I set the USB in VirtualBox to the USB 3.0 xHCI controller. I then add two USB devices, the 1st one with "Vendor 03E7, Product 2150" and the 2nd one with "Vendor 03E7, Product F63B". Note the Vendor ID for both. I did not test this combo on a USB 2 port yet, but will test it when I get a chance to put the VM onto another machine (my laptop only has 3.0). I am able to run every single NCS example (I only have 1 stick) in the ncappzoo without any problems, except maybe the webcam, which struggles somewhat. This includes the Python and compiled C examples. I used the minhoolee/install-opencv-3.0.0 script off GitHub for OpenCV. I don't get any dropouts with the NCS, even when I had it plugged into a TB-powered USB hub.
han
Thanks for sharing this cool information
Adrian Rosebrock
I’m glad you enjoyed it Han. Do you have an NCS or are you considering purchasing one?
Jiang Chuan
Hi Adrian,
I have a Movidius but I do not have a Raspberry Pi, so I am trying to run your sample from my Ubuntu host. When I run “python ncs_realtime_objectdetection.py --graph graphs/mobilenetgraph”, it fails as follows:
[INFO] finding NCS devices…
[INFO] found 1 devices. device0 will be used. opening device0…
[INFO] loading the graph file into RPi memory…
[INFO] allocating the graph on the NCS…
[INFO] starting the video stream and FPS counter…
Traceback (most recent call last):
File “ncs_realtime_objectdetection.py”, line 144, in <module>
predictions = predict(frame, graph)
File “ncs_realtime_objectdetection.py”, line 54, in predict
for box_index in range(num_valid_boxes):
TypeError: ‘numpy.float16’ object cannot be interpreted as an integer
Do you have any suggestion about how to fix the error?
Thanks,
Jiang Chuan.
Adrian Rosebrock
Hi Jiang, it looks like `num_valid_boxes` is being reported as a float16. Try casting it to an int.
roshan
How can I cast it to an int? Can you please explain?
Adrian Rosebrock
Just call:
int(num_valid_boxes)
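For example, here is a minimal sketch of the fix; the variable names are assumptions that follow the predict() function in this post’s script:
import numpy as np
# the NCS returns its output as float16, so the detection count must be
# cast to a Python int before it can be used with range()
output = np.zeros(7, dtype=np.float16)  # stand-in for graph.GetResult()
output[0] = 3  # pretend three boxes were detected
num_valid_boxes = int(output[0])
for box_index in range(num_valid_boxes):
    print("processing box", box_index)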
Jiang Chuan
In order to get video from the camera, I changed the following code:
vs = VideoStream(usePiCamera=True).start()
to
vs = VideoStream(0).start()
Eswar Sai Krishna G
I changed it, but I am still getting an elapsed time and approx. FPS of 0 when I try to run on my laptop’s camera.
Do you have any other suggestion or idea about the problem?
Thanks,
Eswar.
Jussi
Hi, what is (in this case) the best way and place in the code to flip the video horizontally? My Pi camera is upside down.
Adrian Rosebrock
You can use the “cv2.flip” function.
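A minimal sketch of how that might look (the frame here is a stand-in for one read from your video stream). Note the flipCode argument: 1 flips horizontally, 0 vertically, and -1 flips both axes, which is the usual fix for an upside-down camera:
import cv2
import numpy as np
frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
frame = cv2.flip(frame, -1)  # -1 = flip both axes (a 180-degree rotation)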
Lee Mewshaw
Hi Adrian,
I’m trying to follow your steps, and I’m not clear on when I switch the Movidius from Ubuntu to the Raspberry Pi. I have successfully made it through “Test the SDK” on the Ubuntu machine, and I’m getting an error on mvNCCompile. I’ll work through the forums to figure that out, but the question for you is: after running that mvNCCompile command, do I move the stick over to a USB port on the Raspberry Pi, or before? I can’t tell from your steps when you actually write something to the Movidius and when I should then move it to the Pi.
Thanks in advance for any help!
Lee
Adrian Rosebrock
Hi Lee. I’m sorry if this was unclear. Please review the workflow image above. You need the NCS to generate the graph. You also need the NCS to deploy the graph. Does this make sense?
David Ramírez
Hello Adrian, I’m trying to follow your tutorial step by step but I don’t understand this part either. So, just to be clear: just before the “Object detection with the Intel Movidius Neural Compute Stick” section, should I connect the NCS to my Raspberry Pi, or when exactly? Also, I’m getting an error while running the mvNCCompile command: “[Error 9] Argument Error : Network weight cannot be found.” I believe it is because I don’t have the downloaded files on the VM, but I don’t know how to put them there or in which folder I should save them. Can you please tell me how to solve this or where to find information about it?
Thanks a lot. David.
Adrian Rosebrock
Hi David.
Thank you for the feedback.
(1) I urge you to read the first Movidius tutorial from the prior week first: Getting started with the Intel Movidius Neural Compute Stick.
(2) I edited this post with information in italics at the top of several of the sections so that the instructions are more clear, however you should really read the first tutorial from top to bottom before moving forward with this post.
(3) To move files between the VM and your host, you should make use of SCP (Secure Copy). Explaining SCP in depth is not appropriate for this forum, but I will say that you need a “host-only” network adapter for your VM and you need `openssh-server` installed on the Ubuntu VM to make it work. See the following two links: host-only network adapter and how to SCP files.
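Once those two pieces are in place, the copy itself is a one-liner. The IP address, username, and paths below are made up for illustration (192.168.56.x is the usual VirtualBox host-only range):
$ scp ~/MobileNetSSD_deploy.caffemodel user@192.168.56.101:~/workspace/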
Christoph Viehoff
I followed the VM installation and the SDK installed successfully. All tests pass, but when I run the mvNCCompile command from the Real-time-object-detection folder on my VM I get the following error:
mvNCCompile V02.00, Copyright @ Movidius Ltd 2016
Error importing caffe
Adrian Rosebrock
Hi Christoph — try opening a fresh terminal and/or check your PYTHONPATH environment variable. For further information, please see the response from Tome at Intel on this direct forum link. Let me know if that works.
yang
Hi Christoph,
I met the same problem like you.
Would you please tell me how did you solve that?
I’ve changed my PYTHONPATH to where I installed Caffe, but it doesn’t work.
Many thanks
Zimeng
Hi Adrian, after building my CV environment on the Raspberry Pi (Jessie + OpenCV 3.3), I want to update the system to Stretch (recommended in your tutorials on using the Movidius NCS). Will it affect my current CV environment?
Adrian Rosebrock
Updating the actual OS is notorious for breaking development environments. I do not recommend it. But if you would like to try, backup your .img file on your desktop and then try the upgrade.
Kevin
Hi Adrian. Thank you for this amazing tutorial. Everything worked perfectly. Now I’m trying to do real-time gender detection with the Movidius. Do you have any recommendations for a model to use? I’ve found a classification model at https://gist.github.com/GilLevi/c9e99062283c719c03de, but I would like to perform detection. Can the classification be used inside this detection code?
Thank You,
Kevin
Adrian Rosebrock
I don’t think there is a need for a detection model. Use a face detector to detect the face. Extract the ROI. Then pass the ROI into a classification model.
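A rough sketch of that pipeline is below. The Haar cascade ships with OpenCV’s pip package; classify_gender() is a hypothetical stand-in for whatever classification model you end up deploying:
import cv2
import numpy as np

def classify_gender(face_roi):
    # hypothetical stand-in: run your trained gender classifier here
    return "unknown"

# load OpenCV's bundled frontal face Haar cascade
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a video frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detect faces, extract each ROI, and classify the ROI (not the whole frame)
for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
    roi = image[y:y + h, x:x + w]
    print(classify_gender(roi))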
Kevin
Ohhh. Thank you Adrian 😀
schamarti
Hi Adrian, I have installed the SDK and I am able to run the example graph provided in the download section. When converting the model on the Raspberry Pi, I am getting the error mvNCCompile command not found. Any hints?
Adrian Rosebrock
Hey Schamarti — run mvNCCompile on the Ubuntu machine or VM where you installed the full SDK. The tool isn’t available on the Pi.
Hein
Hi Adrian, I am confused about this article.
Is the tutorial you provided meant to run on Ubuntu or the Raspberry Pi?
Adrian Rosebrock
The Raspberry Pi runs Raspbian but I needed an Ubuntu VM to develop the code and create the deep learning model that later runs on the Pi.
monsour
Hi Adrian, can I ask for some help? Do you have an XML file for garbage detection? Training my own Haar cascade is a very long process.
Thank you for the help.
Adrian Rosebrock
I do not have any pre-trained models for garbage detection. You would need to train your own.
bob
Hi Adrian,
Thanks for the tutorial! I was able to run the demo but I’m now interested in using a different graph.
There are 2 magic numbers in the code that I’m not sure about.
preprocessed = preprocessed – 127.5
preprocessed = preprocessed * 0.007843
Could you explain why you chose these numbers?
Thanks
Adrian Rosebrock
These numbers are used to perform mean subtraction and scaling. See this post for more details.
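In short, the two constants zero-center the pixel values and scale them into roughly [-1, 1], since 0.007843 ≈ 1/127.5; these are the normalization values the MobileNet-SSD model was trained with:
import numpy as np
preprocessed = np.array([0.0, 127.5, 255.0])  # example pixel intensities
preprocessed = preprocessed - 127.5      # mean subtraction (zero-center)
preprocessed = preprocessed * 0.007843   # scale by ~1/127.5
print(preprocessed)  # approximately [-1.0, 0.0, 1.0]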
Fabian
Hi Adrian, great tutorial as always!! I was wondering, is there a way to improve the FPS performance? Would you recommend another board rather than the Raspberry Pi to work on? I was looking at some options, like the UP board, but I am scared of making a mistake buying it… I would appreciate some suggestions, please. I also don’t know if some kind of rack of Raspberry Pis could be used to improve FPS performance…
Adrian Rosebrock
I would recommend NVIDIA Jetson TX1 or TX2.
Schwarz
Hi Adrian, I happened to stumble upon your page months ago when I was searching for answers for my project, I am very very grateful for all the insightful blog posts that I have been following since.
I am currently doing a similar project which performs object detection from a camera stream. The thing is, I want to detect only in specific parts of the stream (let’s say only the right-hand corner). Is there any way to specify the ROI in Python?
Adrian Rosebrock
Hey Schwarz, it’s wonderful to hear you are enjoying the PyImageSearch blog 🙂 There are a few ways you can accomplish your goal:
1. Manually use array slicing to extract the ROI and only pass the ROI through the network for detection (see the sketch after this list)
2. Perform object detection on the entire image, but when you loop over the results, discard any where the bounding box would not fall into your ROI coordinates
Exactly which method will work better really depends on your project and dataset so give both a try.
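For option 1, a minimal sketch using NumPy slicing (the frame shape is a stand-in, and the commented-out predict()/graph call refers to this post’s script):
import numpy as np
frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a video frame
(h, w) = frame.shape[:2]
roi = frame[0:h // 2, w // 2:w]  # e.g. the top-right quadrant
# predictions = predict(roi, graph)  # then run detection on the ROI only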
Vivek
I am trying to run the above-mentioned installs on a Raspberry Pi instead of a VM. I have been stuck at the “make install” step for a while now. I realized that the previous tutorial on getting started with the Movidius on a Raspberry Pi lacked the required steps for mvNCCompile that this tutorial covers, so I went ahead and attempted to follow the steps mentioned here on my RPi to make sure all the dependencies are looked after.
Is this feasible? Is there a different set of dependencies that needs to be installed if I am using a Raspberry Pi instead of a VM?
Thanks
Adrian Rosebrock
Hi Vivek — were you planning on putting the full SDK on the Pi? If so, that’s not possible/recommended. Instead, you should put the full SDK on a capable full size computer and then put the API-only mode software on the Pi. I tried to make this clear in the blog post, but I understand that it is confusing in general. Can you please let me know what your intentions are?
Vivek
Hi Adrian,
Thank you for getting back! I realized (through experience) that getting the SDK onto the Pi is not feasible. On giving your post another read I also caught on to the very concisely laid out pipeline, which clearly mentions loading the SDK on a VM.
I made the necessary changes and got this up and running 🙂
I am now working on converting my own custom SSD TensorFlow model to an NCS graph.
Thank you for the work you do here!
Adrian Rosebrock
Congrats on resolving the issue, Vivek!
Rodolfo
Hi Adrian, thanks for tutorial.
I created a script that uses multiple streams with a single Movidius stick. The script is working fine, but do you think this could cause any problems?
In it I’m also using the Caffe deployment of MobileNet provided by https://github.com/chuanqi305/MobileNet-SSD
David Hoffman
I don’t see any problems with this approach — just remember that the Movidius is fast but there will be a delay. It’s also possible to hook up multiple Movidius sticks to your Pi where each could support a different stream, or a few sticks could work in tandem to support one stream provided that you handle the overhead well.
Rodolfo
Thanks for the help David.
Vivek
Hi Adrian,
Thank you so much for this post. I was able to successfully execute the entire tutorial and run the model on my RPi.
How would you approach converting a TensorFlow model in this format? I have a custom SSD model trained in TensorFlow that I was trying to run on the Neural Compute Stick.
Thanks
David Hoffman
Hi Vivek — TensorFlow is supported by mvNCCompile. You can refer to the docs here.
Vivek
Interesting. I have been attempting to follow the steps in the documentation you linked in your response, without any success. I did come across this:
https://ncsforum.movidius.com/discussion/532/detectors-based-on-tensorflow-mobilenet-ssd (posted in January) and this:
https://ncsforum.movidius.com/discussion/667/tensorflow-ssd-mobilenet (posted in March)
Both of which imply that the NCS does not support SSD yet.
I’m wondering if anyone has actually made this conversion, to your knowledge, or if you know how to go about it.
David Hoffman
It looks like you might be right. I would suggest YOLO as an alternative to SSD. Check this link.
Prakash M
Looks like there is still no support for TensorFlow MobileNet-SSD.
Rabbani
Hi Adrian,
Would you mind giving me a program that detects mobile phones?
Adrian Rosebrock
This post demonstrates how to run a deep neural network on a mobile device.
rabbani
Hi Adrian,
Sir, I want to detect illegal use of a mobile phone while driving a car. Would you mind helping me with this concept?
Adrian Rosebrock
You would need to research “activity recognition with deep learning”. It’s not an easy project. Good luck with it!
Simeon
Hi Adrian. Great blog post! I was wondering if the FPS can be increased on the new 3B+ version of the Pi? The project is an autonomous mobile security robot. The FPS needs to be fast as I am streaming live video from the Pi to a control room. Also, the robot is in motion whilst the real-time object detection is being done. The budget is limited, so a Movidius stick isn’t an option, nor is getting an alternative board, as the prices are too steep. I’m hoping the new 3B+ Pi would be enough for a better framerate…?
Thanks for any input you may have.
Adrian Rosebrock
The Pi 3B+ is ~17% faster than the original Pi 3. That will certainly lead to a bit faster inference but it’s not going to even remotely compare to the NCS.
Angelo
— Generating Movidius graph files from your own models —
Hi, I have a TF model that has 7 output nodes. Is there a way to generate the graph without changing the code of the model, or does the NCS allow only one input and one output? Has anyone had the same problem?
Thanks a lot
Angelo Tartaglia
The Movidius software has been upgraded: NCAPIv1 to NCAPIv2.
I’ve tried to modify your script with the new commands and I’ve used a newly generated graph file (built from the .prototxt with the mvNCCompile command), but it doesn’t work.
Adrian Rosebrock
Thanks for sharing, Angelo. I will certainly look into the new API. I haven’t decided if I’ll make a new blog post or if I’ll update this one to be compatible. In the meantime I suggest you use the old API to work with my blog post. Is there a particular feature in the new API that you need right now?
Amare
Hi Adrian, thank you for bringing the new Intel processor to my attention!!!
I have seen the video at the start of the page and it is very fast in real time… does it mean that if I buy this Movidius NCS it can run a trained SSD model with a dlib tracker like a GPU-accelerated desktop computer?
Adrian Rosebrock
The NCS is certainly faster than the CPU of a Raspberry Pi but don’t expect it to run as fast as a desktop GPU like a Titan X.
Shuhei Kawamoto
Nice to meet you, Adrian.
I am a Japanese college student who is researching machine learning.
In my study, I would like to recognize objects in real time using the source code you created.
I tried it immediately, but this kind of error occurred.
File “realtime-object-detection/ncs_realtime_objectdetection.py”, line 54, in predict
for box_index in range(num_valid_boxes):
TypeError: ‘numpy.float16’ object cannot be interpreted as an integer
Would you please help me if you do not mind?
Waiting for a reply.
Adrian Rosebrock
Hi Shuhei, it’s nice to meet you as well. I haven’t experienced this problem, but I think another reader sent me an email. See Line 30 in the blog post where it is shown how to convert a NumPy datatype. You can use a similar method to convert to an int as needed.
Annie
Hi, what if I want to add more classes so that it will recognize more objects?
Adrian Rosebrock
You would need to either:
1. Train your own model from scratch
2. Apply fine-tuning to a pre-trained model
Suman Ghimire
Wow, this is an interesting article, and the way you explained it made it so straightforward to implement that it worked. I want to put this article to real use by counting the number of people moving through my lab. I am wondering how I can modify this Python script to display the total number of people in a day and update the number in real time in the same video (top-left corner) after it detects a person.
Any suggestions, Adrian? Thanks again for posting such an informative video.
Cheers.
Adrian Rosebrock
Hey Suman, I’m happy to hear you enjoyed the post! Unfortunately building a system to detect and count flows is a bit more involved, and certainly more involved than what I can cover in a comment. I’ll be sure to add this to my list of ideas to cover in a future post.
Razeen Muhajireen
Hey Adrian, have you done a post on the Movidius with object tracking/counting?
Adrian Rosebrock
I have not. If I do in the future I’ll be sure to announce it via email so make sure you join the PyImageSearch Newsletter to be notified when new tutorials are published.
Mehrzad Mehrabipour
Hello Adrian,
Thanks a lot for this great post.
I have created a traffic sensor using your instructions. However, I also want to track vehicles for a couple of seconds. Therefore, I need to assign a specific ID to each detected vehicle. I was wondering if that is possible.
I need a text file as output for further analysis, as follows:
Vehicle ID (a specific number), coordinates, time
…
Thanks a lot,
Adrian Rosebrock
What you are referring to is called “object tracking”. Take a look at correlation trackers.
Ahmed
Hi adrian;
How can I create my own Caffe model?
When can I do it?
I searched for a tutorial about creating a Caffe model, but I couldn’t find one.
Best regards
Adrian Rosebrock
You can train your own custom Caffe models but you’ll need experience in computer vision, machine learning, and deep learning. If you’re interested, I discuss how to train your own custom Caffe models inside the PyImageSearch Gurus course.
michell
Hi Adrian,
I’m doing a project and I need to detect fruit, for example. Do you know of any model for the Raspberry Pi that detects an object and draws a box around it?
Adrian Rosebrock
What you are referring to is called object detection. I have an introduction to deep learning object detection which you can read here.
vahid
Hi Adrian, thanks for the great post.
I installed mvnc on my Pi, tested it using the method below, and got the same result as you:
$ cd ~/workspace/ncsdk/examples/apps
$ make all
$ cd hello_ncs_py
$ python hello_ncs.py
Hello NCS! Device opened normally.
Goodbye NCS! Device closed normally.
The NCS device is working.
Also, when I import mvnc in my program I don’t get any errors, but when using mvNCCompile on the command line to generate the graph file I get the error below:
bash: mvNCCompile: command not found
Please help me.
Adrian Rosebrock
Hi Vahid, thanks for your comment. Just checking — are you using mvNCCompile on the Pi? You shouldn’t run that command on the Pi. Instead, you should execute that command on a capable desktop computer. The instructions show you how to run it in a VM but a VM isn’t necessary if you have Ubuntu.
vahid
Dear Adrian, thanks for your answer. I should say that I used mvNCCompile on Ubuntu on my PC but unfortunately could not compile. Please tell me more detail about it if you can. Thank you very much.
Adrian Rosebrock
Double check that you didn’t install Movidius SDK version 2. This blog post was written before version 2 was released. The other thing you could check is your PATH to make sure that the directory housing the binary is properly added.
Joseph Palermo
Installation failed for me because: “No matching distribution found for tensorflow==1.4.0”. Is the TensorFlow version important? I may try simply installing the latest version.
Adrian Rosebrock
Try with the latest version of TensorFlow — it should now be pip installable:
$ pip install tensorflow
Joseph Palermo
Also, “Insert Guest Additions CD image…” is a checkbox in the User Interface tab. Do I have to do something in addition to checking the box? For instance, Adrian, in Figure 6 you had the Guest Additions installer running in your terminal on the Ubuntu guest OS. How did you get that to happen?
Adrian Rosebrock
Hi Joseph, when you have your VM running and go to the VirtualBox menu bar under Devices, you’ll see an option to “Insert Guest Additions CD Image…”. Click that and the installer will start automatically. If it is already checked, you might see a CD icon on the desktop of the VM. You should be able to launch the autorun executable from the CD if it didn’t start automatically. Refer to Figure 6 — that’s what it will look like when the installation is complete. Don’t forget to do a restart!
Aske
Great and interesting tutorial!
Have you tried installing the OpenVINO toolkit and optimizing the model? I’m not sure if an Intel CPU-based host is required or if it is possible to use the Pi 3 + NCS.
Adrian Rosebrock
I have not tried the OpenVINO toolkit. I’ll have to look into it.
Sam Pawall
Hi Adrian – thanks for the amazing post. I don’t have any experience with deep learning or object detection. I will likely be purchasing one of your books. I want to use it to build a model to detect whether screws have been installed on a part. So imagine a part on a moving conveyor belt. This part should have 5 screws on it. I want to detect whether all 5 screws are present and, if any are missing, which screws (location and quantity) are missing. Can I do that with this stick?
Also, as the part moves through different stations on the conveyor belt, I imagine I can calculate time spent at each station (how long it took to complete an operation at a station) as well, right?
I would appreciate your thoughts, also would appreciate if you can suggest which one of your blogs and books and other resources would be helpful.
Thanks.
Adrian Rosebrock
Hey Sam, are you intending to deploy your trained model to the Raspberry Pi? Keep in mind that even with the NCS the Pi will likely achieve 4-10 FPS at the very max, depending on your model. If your conveyor belt is moving slow enough this should be fine but if it’s a fast moving conveyor belt it may be problematic.
As far as suggestions go, if you are serious about studying deep learning and object detection you should absolutely go with my book, Deep Learning for Computer Vision with Python.
Sam
Hi Adrian – thanks for your reply. Yes… can I use my trained model on the Movidius + Raspberry Pi?
David Hoffman
If you’ve already trained a model and created a graph file, then yes — you can run it on your Movidius + Pi. I’d also like to add that you might consider traditional image processing approaches (non-deep-learning) to identify the screws, as you might achieve higher FPS, especially on the Pi.
Michael
Hey guys,
so I’m experiencing more of an annoyance than a problem. I’m working on a Raspberry Pi by SSH’ing into it; one keyboard/mouse, but I’m still looking at the screen. However, when I run the code on the Pi via SSH I get the following:
$ python ncs_video.py --graph mobilenetgraph --display 1
…
(Output:1160): Gtk-WARNING **: cannot open display:
Now if I connect my keyboard/mouse directly to the Pi and run it, it works fine. Slow, but it works.
Any suggestions on how to get this to work via SSH?
Adrian Rosebrock
You need to enable X11 forwarding when you SSH into your Pi:
$ ssh -X pi@your_ip_address
michael
Adrian,
Thank you sir!!
Sai Teja
Hi Adrian,
It was a really good tutorial. I was wondering how you got the results that you displayed in the table comparing the Pi with and without the Movidius. How can we calculate the FPS?
Sai Teja
I am sorry, please ignore this question
Sai Teja
Hi Adrain,
Can I use GoogLeNet or AlexNet instead of MobileNet…?
hiankun
Hi, the default number of SHAVEs is 1, which can be found in the (maybe newer?) documentation: https://movidius.github.io/ncsdk/tools/compile.html
I recently encountered a situation in which I didn’t assign the SHAVEs value, and the final graph ran at a very slow speed. After asking my question in the NCS forum and getting a suggestion, I realized the importance of the `-s` option. :-p
Adrian Rosebrock
Thanks for sharing Hiankun. Yes, the default is one and the more you can allocate the better.
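For reference, a compile command requesting 12 SHAVEs might look like the line below; the file and node names follow this post’s MobileNet-SSD example, so adjust them for your own model:
$ mvNCCompile models/MobileNetSSD_deploy.prototxt -w models/MobileNetSSD_deploy.caffemodel -s 12 -in data -on detection_out -o graphs/mobilenetgraph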
Stas
What are SHAVEs?
Amare Mahtsentu
Hi Adrian!!!
It is very clear from your post that the Movidius is best for fast processing.
You have shown here SSD with the MobileNet architecture written in the Caffe framework… and it is very nice. Does the NCS support SSD MobileNet written in TensorFlow?
Adrian Rosebrock
Hi Amare! The Movidius is a good device to augment SBCs, but it would never replace a fully capable GPU or even a high-end laptop CPU. Please refer to the previous post (Figure 5) where a benchmark was made that includes my Macbook Pro. As the figure demonstrates, the NCS + laptop is actually slower than the laptop itself (and that’s even in a VM on the laptop — running on bare metal the numbers would be even worse).
Check this link for SSD MobileNet with Tensorflow: Movidius MobileNets.
Bitbitbit
Hello Adrian, thanks a lot for the tutorial.
I saw that Intel has updated the NCAPI from version 1 to version 2. I noticed that the version of the API your tutorial uses is version 1. Will the same Python code that your download provides work with NCAPI version 2? Can you guide us on making it work with NCAPI version 2?
Thanks again.
Adrian Rosebrock
From what I understand, it will not work with NCSDKv2. You’ll have to figure out some modifications to the scripts. I may do a new blog post in the future.
Wicus Van der Westhuizen
Hi Adrian,
Great tutorial and I love your blog. Keep up the good work.
Is there a way to display the video feed from the camera at a higher framerate than the FPS of the processed feed? I don’t know if that makes any sense at all. So you would basically have a smooth video feed at, say, 30 FPS with the classification boxes updating at, for instance, 4 FPS.
Adrian Rosebrock
Technically yes, you can. See this blog post.
Sai Teja
Hi Adrain,
Thanks for the post. Can you guide me in training a custom model for object detection?
Adrian Rosebrock
Before you get too far down the rabbit hole on training your own model I would read this getting started guide for object detection. Inside the guide I help you develop a foundation of what object detection is, how it works from a deep learning perspective, and then provide suggestions for how to train your models.
Pravesh Bawangade
Can we cluster two or more Raspberry Pis to make processing faster? Could you make a tutorial on that? Thank you.
Mark West
Hello Adrian!
First off, great blog. I’ve really struggled getting the Movidius up and running and your instructions really helped.
I’m trying to adapt the example from this blog post to use graphs generated from other models. However, when I try using a graph based on a Tiny YOLO Caffe model, I’m just getting an array of NaNs back from graph.GetResult().
My guess is that this has something to do with both the image dimensions and the preprocess_image function. Do you have any tips that might help me progress further here?
Thanks again for all your work!
Adrian Rosebrock
Hey Mark — are you using float16 datatypes? If so, read this Movidius thread and specifically Tome’s January 17 post. That’s the only thing I can think of. Typically when I’ve run models without proper preprocessing it just yields unfavorable results, not NaNs. Definitely triple-check your image dimensions!
Mark West
Thanks for the tip! I’m off on holiday now but will get back to this in a week or so.
Thanks again!
Sai Teja
HI Adrian,
Thanks for the tutorial. Can you guide me on how to give a recorded stream as input to the code?
Kashyap Nishtala
Hi Adrian!
Can you please tell me how to use the detected objects in the video to perform any physical action (robot arm) through the raspberry pi? Can I write code for the motors along with the training models and form a graph file?
Thank you
Adrian Rosebrock
You certainly can take actions based on objects detected but exactly what those actions are and how you perform them are entirely up to you. For a robot you would want to look at the physical hardware you are using as well as read the documentation on how to use it. The documentation of your robot/servo/etc. will instruct you on which libraries to use.
Michael
Hi Adrian, could you recommend a Raspberry Pi alternative that could do detections at around 50+ FPS using a CNN? I was hoping the NCS and Raspberry Pi 3B might manage it, but after reading your article I see that they can’t.
Adrian Rosebrock
Have you taken a look at the NVIDIA Jetson lineup? That would be my suggested hardware.
tuxkart
Hi Adrian,
Although this post is quite old now, I still hope for your help anyway.
Actually, I had been happy with my laptop’s on-board graphics until I found that I need more, and this neural stick seems like a good choice. Do you have any advice for my situation?
Adrian Rosebrock
It really depends on what your application is. What do you hope to accomplish with the NCS?
tuxkart
Oh, my mistake in posting.
Actually, I’m a big fan of your posts; they’re really cool to me. However, when I run your object detection code sample on my laptop the FPS is quite low, and with some other samples I cloned from GitHub (YOLO, for example) the results are even worse. The Movidius NCS, which possibly speeds things up about ~10 times as shown above, may be a good choice for me. But I’m still hoping for more options and I look forward to your suggestion about this. Thanks so much Adrian.
Adrian Rosebrock
If you take a look at this post you’ll find a comparison of the NCS speeds on a MacBook Pro. In general, the speed is actually worse than using the CPU itself. The NCS works great on speeding up inference on resource constrained devices such as the Pi but it won’t do much for your laptop (provided you are running a modern laptop). I would instead invest in a good GPU.
tuxkart
Oh, I see, Adrian. That’s what I need.
Thanks so much
hashir
How can I keep only the predictions whose confidence falls within a predefined percentage range and discard the rest?
Adrian Rosebrock
On Line 149 you could modify the “if” statement to be something like:
if min_conf < pred_conf < max_conf:
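As a sketch, here is how that band filter might sit in the prediction loop; the (class ID, confidence, box points) tuple layout is assumed from this post’s predict() function, and min_conf/max_conf would come from your own argument parser:
# toy predictions in the assumed (class_id, confidence, boxpoints) layout
predictions = [(15, 0.92, None), (8, 0.40, None), (5, 0.75, None)]
min_conf, max_conf = 0.5, 0.9

for (class_id, pred_conf, pred_boxpts) in predictions:
    if min_conf < pred_conf < max_conf:
        print(class_id, pred_conf)  # only detections inside the band survive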
senay
Hi Adrian !!
I have tested the NCS and it works fine with an FPS of 8…
but I still need more speed!!
What will happen if I use two NCS sticks for the same model and the same video sample? Will that give a speed of around 16 FPS across the two?
I am using a raspberry pi camera module for the live video…
Adrian Rosebrock
Unfortunately no, using more than one NCS is not going to increase your FPS. You should look into faster embedded devices that are designed to run inference with deep learning models. The Jetson TX2 would be my first suggestion.
Chad Green
I made the changes to the code to update to Intel’s v2.0 of their NCSDK. There were a couple of tricks. Version 2.0 supports virtual environments, which is nice. I followed their website: https://movidius.github.io/ncsdk/install.html
and was able to install the whole SDK on a Raspberry Pi, as long as I increased the swapfile to 1024 and made sure nothing else was running on the Pi during the install (headless helps). With the updates I made to your ncs_realtime_objectdetection.py (see the URL below), everything in your blog seems to be working the same, so that’s good. However, after all that work, I found out (I should have known) that they don’t support very large networks, and anything TensorFlow is pretty much unsupported. So that sucks. Anyway, I’m attaching the file in case someone else can use it.
https://gist.github.com/chad-green/713d88e9515aa8a9a8cf46c2a7ac8a16
Adrian Rosebrock
Thank you so much for sharing this, Chad! 😀
ChianLi
Ha! I should have read all the comments before doing the same myself!
Thanks a lot Chad, mine wasn’t working.
Adrian, could you add a link in your blog to Chad’s update for people who need the NCSDKv2 version?
Thanks a lot, by the way, for this great tutorial.
SteveR in MD
Hi! Thanks for going through this for everyone, but I have one question. I’ve updated to NCSDK 2.0, but I can’t run ncs_realtime_objectdetection.py and I want to make sure I’m understanding what the problem is.
While Python doesn’t complain about anything syntax-wise, the example fails with “The device didn’t accept the graph… Exception: Status.UNSUPPORTED_GRAPH_FILE”.
I’m assuming this means the graph for the example isn’t NCSDK 2.0 compliant, or it’s too big, or something else bad. Is that the case, and does an updated graph exist anywhere?
Thanks!
Steve
SteveR in MD
Oops. After I decided to RT_M, I recompiled the model and all is well.
Adrian Rosebrock
Awesome, I’m glad to hear it!
zimuyuan
Thank you so much
zimuyuan
Hello! Can you send me your code so I can learn from it? I also used version 2.0, but I didn’t succeed. Thank you very much!
Adrian Rosebrock
You can use the “Downloads” section of the blog post to download the source code + examples.
Aman
Hey Adrian!
I was planning on using custom RefineDet or PeleeNet Caffe models for object detection. Any idea if these would run on the NCS, given that I compile the graph file using mvNCCompile correctly?
Or is the NCS just limited to MobileNet-SSD for running object detection models?
Adrian Rosebrock
The NCS is fairly limited to the models that Intel supports. I would suggest you contact Movidius support if you have a question related to a specific model.
Devesh Patel
Hey Adrian,
Great video!! I am using the NCS with the Raspberry Pi for my senior design project and have a question. Is there a way for the NCS to work with the Raspberry Pi to control a car? For example, if the camera detects an object in the way of the car, can the NCS detect the object from the camera and send an interrupt/signal to the Raspberry Pi to tell the motors to go around the object and keep going? This is the car I am planning to use: https://www.robotshop.com/en/smart-video-car-kit-raspberry-pi-model-b.html. I have looked online and there is nothing related.
Adrian Rosebrock
You can use a Raspberry Pi to control a car, but keep in mind exactly how you control the car is heavily dependent on which car you are using. I would suggest you research the car you will be using and understand the library/API.
bitbitbit
Hello Mr. Adrian. Thank you for your amazing tutorial. If you ever release an OpenCV book for the Raspberry Pi, I will be sure to buy it.
On another note, for the other users trying to use the Movidius on a Raspberry Pi: there is a known problem in which the Movidius will hang and return an error after some random amount of time (I can’t remember the error name). There is no way to recover from this error except cutting off power to the Movidius and reconnecting it to reset it. It is presumably caused by an inadequate power supply from the Raspberry Pi.
Luckily, we can cut the power and reconnect it in software (so we don’t have to unplug and replug the NCS manually) by using the hub-ctrl.c package (by codazoda). Catch the error, cut the power to the USB hub for a few seconds, reconnect the power, and resupply the NCS with the graph.
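If it helps anyone, a software power-cycle with hub-ctrl might look roughly like the lines below. Treat the hub and port numbers as placeholders; they vary by Pi model and by where the NCS is plugged in:
$ sudo hub-ctrl -h 0 -P 2 -p 0   # cut power to port 2 on hub 0
$ sleep 3
$ sudo hub-ctrl -h 0 -P 2 -p 1   # restore power, then reload the graph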
Adrian Rosebrock
I’m actually working on a computer vision + Raspberry Pi book now 😉 Stay tuned and thanks for the comment!
Anand C U
Hi Adrian, thank you for sharing this with the community. I want to know what changes need to be made to run this from a video file instead of the camera.
Cristian Benglenok
I understand that we can run TensorFlow and Caffe models, but what about MXNet?
Adrian Rosebrock
MXNet is unfortunately not supported by the Movidius.
Paul Foster
Hi Adrian, could you please share how you have your Pi display set up? I don’t get any output on the HDMI-connected monitor, using Stretch Lite. I’m SSH’ed in from my PC. The display is working, as it shows the login prompt.
Other than that, the model is executing and logging findings to the SSH session window.
Cheers
Paul
Adrian Rosebrock
I believe Stretch Lite doesn’t have a window manager installed. You would need to install X11 and then enable X11 forwarding when SSH’ing.
Duy Pham
Hi Adrian, I want to train my own custom model like yours but I don’t know the steps. Can you help me with this? Also, I just want to have one class, ‘person’, in my model. Is that possible?
Thanks
Duy
Adrian Rosebrock
I would suggest starting by reading my gentle guide to deep learning object detection to learn the basics. From there, take a look at Deep Learning for Computer Vision with Python, where I provide code + explanations on how to train your own custom object detectors.
Nathan
Hi Adrian, thanks for such an amazing article. I have achieved a decent FPS of slightly more than 4.
I’m just wondering if there is a way to increase the FPS even further. I currently have 4 NCS sticks that I’m trying to plug into the Pi. Will they work together on one graph? If yes, can you briefly explain how to do that?
Adrian Rosebrock
Take a look at Wally’s comment in a separate blog post.
Patrick
Hello Adrian,
Are you planning to revisit this setup with the new Intel NCS 2? They claim it is 8 times faster… would you get 20-30 FPS with this one?
https://www.cnx-software.com/2018/11/14/intel-neural-compute-stick-2-myriad-x-vpu/
Adrian Rosebrock
Yes, I will be doing a followup tutorial with the Intel NCS2; however, I highly doubt that 20-30 FPS will be obtained with an object detector.
Martin
Hi Adrian, thanks for your tutorials! They have helped me more than once. I was able to train a smaller model from scratch and make it work on the Movidius NCS. The training is done with Keras, then I create a TensorFlow graph of the network and compile it.
With a model smaller than Tiny YOLO I’m able to get 20 FPS detecting one class.
Adrian Rosebrock
Awesome, nice job Martin!
Could you let me know which guide you followed to go from Keras => TensorFlow graph? It’s something I’ve wanted to play around with on the NCS.
Martin Peniak
You get 6-7 FPS.
Chris
Hi Adrian –
First, thank you for all your work on this site. I’m really learning a lot and enjoying your approach.
That’s all – just thank you.
Adrian Rosebrock
Thank you Chris, I really appreciate that 🙂
Niko Gamulin
Hi Adrian,
I followed your tutorial and got stuck with the Movidius stick not being recognized. None of the USB devices were visible even after I removed the filters. Since the host machine runs Ubuntu (my case), I had to add myself as a user to the vboxusers group by running the following command:
sudo usermod -a -G vboxusers $USER
In order to spare further confusion for Ubuntu users, it would be helpful if you could add a note somewhere close to the USB-related section.
Adrian Rosebrock
Thank you for sharing, Niko!
Niko Gamulin
Hello Adrian,
I have just modified the object detection script to run on the NCS 2 with OpenVINO on the Raspberry Pi: https://github.com/nikogamulin/movidius-rpi
I added links to your blog as references.
Adrian Rosebrock
Thanks Niko!
Ben Bartling
Hi Adrian,
With the Movidius NCS running on a Pi B+, would it be possible to run the code from your August 13 post, “OpenCV People Counter”?
Is there any chance the people counting topic will be revisited for a Pi version?
Thanks,
Ben
Adrian Rosebrock
I’ll actually be implementing the people counter with the Movidius NCS inside my upcoming Computer Vision + Raspberry Pi book. Make sure you’re on the PyImageSearch mailing list if you aren’t already.
Joakim Byström
I have had great fun with both OpenCV and the Movidius and am thinking of more projects. But I had the idea not to install the full development kit and instead download already-compiled graph files. These have been very difficult to find, and I don’t know why. I suspect there is something fundamental I missed…
1) Why aren’t graph files available, e.g. on GitHub?
2) Is there a way to get a list of all trained classes in a Movidius graph?
Joakim
Shaun Shen
Hello Adrian,
Many thanks for this amazing tutorial. I am really learning a lot.
I modified your code to take photos when it detects a “person” and have the system automatically send the photo to me through text message or email. This is much better than a motion sensor, which often gives false alarms, has bad coverage, or is triggered by your innocent pets. 🙂 Wow, I kind of feel this is a sophisticated home security system. I am thinking I should hook up a super loud siren to scare away any intruder. By the way, I use an RPi 3B+/PiCamera/NCS. This is an awesome combination.
Adrian Rosebrock
Congrats on the successful project, Shaun! Nice job.
Andres
Hello Adrian, thanks for the tutorial. It works fine for me, but when I want to load a video from a file with vs = cv2.VideoCapture('video7.avi'), I get errors like:
image_for_result = frame.copy()
AttributeError: 'tuple' object has no attribute 'copy'
I have seen that when you take the video from the camera it returns a numpy.ndarray, but with the video file it is a tuple. How do I correct that error? Thank you.
Adrian Rosebrock
You’re using the “cv2.VideoCapture” class directly? If so, its read() method returns a 2-tuple of (grabbed, frame). Change your code to:
(grabbed, frame) = stream.read()
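For anyone reading from a video file, a slightly fuller sketch (the file name is a placeholder, and the commented-out predict()/graph call refers to this post’s script):
import cv2

stream = cv2.VideoCapture("video7.avi")
while True:
    (grabbed, frame) = stream.read()
    if not grabbed:  # end of file or read error
        break
    image_for_result = frame.copy()  # draw results on this copy
    # predictions = predict(frame, graph)  # run the NCS detector here
stream.release()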
paresh
Which camera did you use?
Karan Maheshwari
Hi Adrian, amazing tutorials!
I am trying to use the TensorFlow Object Detection API for real-time object detection and then deploy that model on the NCS. I’m having issues. I would love to see a tutorial on that.
Adrian Rosebrock
Thanks for the suggestion, Karan. I’ll try to cover that in my upcoming Computer Vision + Raspberry Pi book.
bd222
Hi Adrian! Thank you for the posting, i learn a lot from your blog.
Recently I have been working on a project where I need to do real-time fruit detection. I bought the Neural Compute Stick 2 yesterday, but I don’t know how to accomplish my work since I found out you use the NCS 1. Is it possible to accomplish my goal with the NCS 2 on the Raspberry Pi? (I’m a beginner, so sorry for all the questions. Hope to hear from you soon!)
Adrian Rosebrock
Great questions. I’m actually covering the Movidius NCS and Raspberry Pi for custom object detection inside my upcoming Computer Vision + Raspberry Pi book. Stay tuned!
Nguyễn Anh Nguyên
Hi Adrian,
Would this bundle (VM + Movidius) work with ResNet151 or RetinaNet? A guide on this would be greatly appreciated.
Steve
Adrian Rosebrock
Hey Steve — if you would like more details on how to train your own custom object detectors and then deploy them to the NCS, you should read Raspberry Pi for Computer Vision. Object detection on the NCS is covered in detail there.
Ajinkya
Hi Adrian,
Is there a direct relation between the number of classes and inference speed/accuracy?
Thanks,
Ajinkya
Adrian Rosebrock
The total number of classes doesn’t impact speed dramatically but it can impact accuracy, especially if many of your classes are visually similar.