My Uncle John is a long haul tractor trailer truck driver.
For each new assignment, he picks his load up from a local company early in the morning and then sets off on a cross-country trek across the United States that takes him days to complete.
John is a nice, outgoing guy with a smart, witty demeanor. He also fits the “cowboy of the highway” stereotype to a T, sporting a big ol’ trucker cap, a red-checkered flannel shirt, and a faded pair of Levi’s with more than one oil stain from quick and dirty roadside fixes. He also loves his country music.
I caught up with John a few weeks ago during a family dinner and asked him about his trucking job.
I was genuinely curious — before I entered high school I thought it would be fun to drive a truck or a car for a living (personally, I find driving to be a pleasurable, therapeutic experience).
But my question was also a bit self-motivated:
Earlier that morning I had just finished writing the code for this blog post and wanted to get his take on how computer science (and more specifically, computer vision) was affecting his trucking job.
The truth was this:
John was scared about his job, his livelihood, and his future.
The first five sentences out of his mouth included the words:
- Tesla
- Self-driving cars
- Artificial Intelligence (AI)
Many proponents of autonomous, self-driving vehicles argue that the first industry that will be completely overhauled by self-driving cars/trucks (even before consumer vehicles) is the long haul tractor trailer business.
If self-driving tractor trailers become a reality in the next few years, John has good reason to be worried — he’ll be out of a job, one that he’s been doing his entire life. He’s also getting close to retirement and needs to finish out his working years strong.
This isn’t speculation either: NVIDIA recently announced a partnership with PACCAR, a leading global truck manufacturer. The goal of this partnership is to make self-driving semi-trailers a reality.
After John and I were done discussing self-driving vehicles, I asked him the critical question that this very blog post hinges on:
Have you ever fallen asleep at the wheel?
I could tell instantly that John was uncomfortable. He didn’t look me in the eye. And when he finally did answer, it wasn’t a direct one — instead he recalled a story about his friend (name left out on purpose) who fell asleep after disobeying company policy on maximum number of hours driven during a 24 hour period.
The man ran off the highway, the contents of his truck spilling all over the road, blocking the interstate almost the entire night. Luckily, no one was injured, but it gave John quite the scare as he realized that if it could happen to other drivers, it could happen to him as well.
I then explained to John my work from earlier in the day — a computer vision system that can automatically detect driver drowsiness in a real-time video stream and then play an alarm if the driver appears to be drowsy.
While John said he was uncomfortable being directly video surveilled while driving, he did admit that the technique would be helpful in the industry and would ideally reduce the number of fatigue-related accidents.
Today, I am going to show you my implementation of detecting drowsiness in a video stream — my hope is that you’ll be able to use it in your own applications.
To learn more about drowsiness detection with OpenCV, just keep reading.
Drowsiness detection with OpenCV
Two weeks ago I discussed how to detect eye blinks in video streams using facial landmarks.
Today, we are going to extend this method and use it to determine how long a given person’s eyes have been closed for. If their eyes have been closed for a certain amount of time, we’ll assume they are starting to doze off and play an alarm to wake them up and grab their attention.
To accomplish this task, I’ve broken down today’s tutorial into three parts.
In the first part, I’ll show you how I set up my camera in my car so I could easily detect my face and apply facial landmark localization to monitor my eyes.
I’ll then demonstrate how we can implement our own drowsiness detector using OpenCV, dlib, and Python.
Finally, I’ll hop in my car and go for a drive (and pretend to be falling asleep as I do).
As we’ll see, the drowsiness detector works well and reliably alerts me each time I start to “snooze”.
Rigging my car with a drowsiness detector
The camera I used for this project was a Logitech C920. I love this camera as it:
- Is relatively affordable.
- Can shoot in full 1080p.
- Is plug-and-play compatible with nearly every device I’ve tried it with (including the Raspberry Pi).
I took this camera and mounted it to the top of my dash using some double-sided tape to keep it from moving around during the drive (Figure 1 above).
The camera was then connected to my MacBook Pro on the seat next to me:
Originally, I had intended on using my Raspberry Pi 3 due to (1) form factor and (2) the real-world implications of building a driver drowsiness detector using very affordable hardware; however, as last week’s blog post discussed, the Raspberry Pi isn’t quite fast enough for real-time facial landmark detection.
In a future blog post I’ll be discussing how to optimize the Raspberry Pi along with the dlib compile to enable real-time facial landmark detection. However, for the time being, we’ll simply use a standard laptop computer.
With all my hardware set up, I was ready to move on to building the actual drowsiness detector using computer vision techniques.
The drowsiness detector algorithm
The general flow of our drowsiness detection algorithm is fairly straightforward.
First, we’ll set up a camera that monitors a stream for faces:
If a face is found, we apply facial landmark detection and extract the eye regions:
Now that we have the eye regions, we can compute the eye aspect ratio (detailed here) to determine if the eyes are closed:
If the eye aspect ratio indicates that the eyes have been closed for a sufficiently long amount of time, we’ll sound an alarm to wake up the driver:
In the next section, we’ll implement the drowsiness detection algorithm detailed above using OpenCV, dlib, and Python.
Building the drowsiness detector with OpenCV
To start our implementation, open up a new file, name it detect_drowsiness.py
, and insert the following code:
# import the necessary packages
from scipy.spatial import distance as dist
from imutils.video import VideoStream
from imutils import face_utils
from threading import Thread
import numpy as np
import playsound
import argparse
import imutils
import time
import dlib
import cv2
Lines 2-12 import our required Python packages.
We’ll need the SciPy package so we can compute the Euclidean distance between facial landmark points in the eye aspect ratio calculation (not strictly a requirement, but you should have SciPy installed if you intend on doing any work in the computer vision, image processing, or machine learning space).
We’ll also need the imutils package, my series of computer vision and image processing functions to make working with OpenCV easier.
If you don’t already have imutils installed on your system, you can install/upgrade it via:
$ pip install --upgrade imutils
We’ll also import the Thread class so we can play our alarm in a separate thread from the main thread to ensure our script doesn’t pause execution while the alarm sounds.
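The pattern itself is only a few lines. Below is a minimal, self-contained sketch of it, with time.sleep standing in for the blocking playsound.playsound call:

```python
import time
from threading import Thread

def sound_alarm(path):
    # stand-in for playsound.playsound(path), which blocks
    # until the audio file has finished playing
    time.sleep(0.2)

# daemon threads don't keep the process alive after the main
# thread exits, so a long alarm can never hang the script
t = Thread(target=sound_alarm, args=("alarm.wav",))
t.daemon = True
t.start()

# the main thread continues immediately instead of waiting
print("frame loop keeps running while the alarm plays")
```

Marking the thread as a daemon also means a still-playing alarm won’t prevent the script from shutting down when you quit.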
In order to actually play our WAV/MP3 alarm, we need the playsound library, a pure Python, cross-platform implementation for playing simple sounds.
The playsound library is conveniently installable via pip:
$ pip install playsound
However, if you are using macOS (like I did for this project), you’ll also want to install pyobjc, otherwise you’ll get an error related to AppKit when you actually try to play the sound:
$ pip install pyobjc
I only tested playsound on macOS, but according to both the documentation and Taylor Marks (the developer and maintainer of playsound), the library should work on Linux and Windows as well.
Note: If you are having problems with playsound, please consult their documentation as I am not an expert on audio libraries.
To detect and localize facial landmarks we’ll need the dlib library which is imported on Line 11. If you need help installing dlib on your system, please refer to this tutorial.
Next, we need to define our sound_alarm function, which accepts a path to an audio file residing on disk and then plays the file:
def sound_alarm(path):
	# play an alarm sound
	playsound.playsound(path)
We also need to define the eye_aspect_ratio function, which is used to compute the ratio of distances between the vertical eye landmarks and the distances between the horizontal eye landmarks:
def eye_aspect_ratio(eye):
	# compute the euclidean distances between the two sets of
	# vertical eye landmarks (x, y)-coordinates
	A = dist.euclidean(eye[1], eye[5])
	B = dist.euclidean(eye[2], eye[4])

	# compute the euclidean distance between the horizontal
	# eye landmark (x, y)-coordinates
	C = dist.euclidean(eye[0], eye[3])

	# compute the eye aspect ratio
	ear = (A + B) / (2.0 * C)

	# return the eye aspect ratio
	return ear
The return value of the eye aspect ratio will be approximately constant when the eye is open. The value will then rapidly decrease towards zero during a blink.
If the eye is closed, the eye aspect ratio will again remain approximately constant, but will be much smaller than the ratio when the eye is open.
To visualize this, consider the following figure from Soukupová and Čech’s 2016 paper, Real-Time Eye Blink Detection using Facial Landmarks:
On the top-left we have an eye that is fully open with the eye facial landmarks plotted. Then on the top-right we have an eye that is closed. The bottom then plots the eye aspect ratio over time.
As we can see, the eye aspect ratio is constant (indicating the eye is open), then rapidly drops to zero, then increases again, indicating a blink has taken place.
In our drowsiness detector case, we’ll be monitoring the eye aspect ratio to see if the value falls but does not increase again, thus implying that the person has closed their eyes.
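To make those numbers concrete, we can evaluate the eye aspect ratio on two made-up sets of six (x, y) eye coordinates, one roughly open and one roughly closed. The coordinates below are invented purely for illustration, and math.dist stands in for SciPy’s dist.euclidean:

```python
from math import dist

def eye_aspect_ratio(eye):
    # same formula as in the post: summed vertical distances
    # over twice the horizontal distance
    A = dist(eye[1], eye[5])
    B = dist(eye[2], eye[4])
    C = dist(eye[0], eye[3])
    return (A + B) / (2.0 * C)

# hypothetical landmark coordinates, ordered p1..p6
open_eye = [(0, 3), (2, 5), (4, 5), (6, 3), (4, 1), (2, 1)]
closed_eye = [(0, 3), (2, 3.2), (4, 3.2), (6, 3), (4, 2.8), (2, 2.8)]

print(round(eye_aspect_ratio(open_eye), 2))    # 0.67
print(round(eye_aspect_ratio(closed_eye), 2))  # 0.07
```

The open eye lands well above the 0.3 threshold we’ll use later, while the closed eye falls far below it.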
You can read more about blink detection and the eye aspect ratio in my previous post.
Next, let’s parse our command line arguments:
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--shape-predictor", required=True,
	help="path to facial landmark predictor")
ap.add_argument("-a", "--alarm", type=str, default="",
	help="path to alarm .WAV file")
ap.add_argument("-w", "--webcam", type=int, default=0,
	help="index of webcam on system")
args = vars(ap.parse_args())
Our drowsiness detector requires one command line argument followed by two optional ones, each of which is detailed below:
- --shape-predictor : This is the path to dlib’s pre-trained facial landmark detector. You can download the detector along with the source code to this tutorial by using the “Downloads” section at the bottom of this blog post.
- --alarm : Here you can optionally specify the path to an input audio file to be used as an alarm.
- --webcam : This integer controls the index of your built-in webcam/USB camera.
Now that our command line arguments have been parsed, we need to define a few important variables:
# define two constants, one for the eye aspect ratio to indicate
# blink and then a second constant for the number of consecutive
# frames the eye must be below the threshold for to set off the
# alarm
EYE_AR_THRESH = 0.3
EYE_AR_CONSEC_FRAMES = 48

# initialize the frame counter as well as a boolean used to
# indicate if the alarm is going off
COUNTER = 0
ALARM_ON = False
Line 48 defines EYE_AR_THRESH. If the eye aspect ratio falls below this threshold, we’ll start counting the number of frames the person has closed their eyes for.
If the number of frames the person has closed their eyes for exceeds EYE_AR_CONSEC_FRAMES (Line 49), we’ll sound an alarm.
Experimentally, I’ve found that an EYE_AR_THRESH of 0.3 works well in a variety of situations (although you may need to tune it yourself for your own applications).
I’ve also set EYE_AR_CONSEC_FRAMES to 48, meaning that if a person has closed their eyes for 48 consecutive frames, we’ll play the alarm sound.
You can make the drowsiness detector more sensitive by decreasing EYE_AR_CONSEC_FRAMES — similarly, you can make it less sensitive by increasing this value.
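Since the threshold is expressed in frames rather than seconds, its real-world meaning depends on how fast your pipeline runs. A quick conversion helper makes the tuning concrete (the FPS figures below are assumed processing rates for illustration, not measured ones):

```python
def consec_frames_for(seconds, fps):
    # number of consecutive below-threshold frames that
    # corresponds to the eyes being closed for `seconds`
    return int(round(seconds * fps))

# if the pipeline processes roughly 16 frames per second,
# 48 consecutive frames is about 3 seconds of closed eyes
print(consec_frames_for(3.0, 16))   # 48

# a faster pipeline needs a larger value for the same delay
print(consec_frames_for(3.0, 30))   # 90
```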
Line 53 defines COUNTER, the total number of consecutive frames where the eye aspect ratio is below EYE_AR_THRESH.
If COUNTER exceeds EYE_AR_CONSEC_FRAMES, then we’ll update the boolean ALARM_ON (Line 54).
The dlib library ships with a Histogram of Oriented Gradients-based face detector along with a facial landmark predictor — we instantiate both of these in the following code block:
# initialize dlib's face detector (HOG-based) and then create
# the facial landmark predictor
print("[INFO] loading facial landmark predictor...")
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(args["shape_predictor"])
The facial landmarks produced by dlib are an indexable list, as I describe here:
Therefore, to extract the eye regions from a set of facial landmarks, we simply need to know the correct array slice indexes:
# grab the indexes of the facial landmarks for the left and
# right eye, respectively
(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]
Using these indexes, we’ll easily be able to extract the eye regions via an array slice.
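For dlib’s 68-point model, those tuples work out to (42, 48) for the left eye and (36, 42) for the right eye. Here is a quick sketch of the slice in action using a dummy landmark array in place of real predictor output; the hard-coded index values mirror what imutils stores in FACIAL_LANDMARKS_IDXS:

```python
import numpy as np

# index ranges of the eyes in dlib's 68-point landmark model
(lStart, lEnd) = (42, 48)  # left eye: points 42-47
(rStart, rEnd) = (36, 42)  # right eye: points 36-41

# dummy (x, y) landmarks standing in for shape_to_np output
shape = np.arange(68 * 2).reshape(68, 2)

leftEye = shape[lStart:lEnd]
rightEye = shape[rStart:rEnd]
print(leftEye.shape, rightEye.shape)  # (6, 2) (6, 2)
```

Each slice yields the six (x, y)-coordinates that eye_aspect_ratio expects.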
We are now ready to start the core of our drowsiness detector:
# start the video stream thread
print("[INFO] starting video stream thread...")
vs = VideoStream(src=args["webcam"]).start()
time.sleep(1.0)

# loop over frames from the video stream
while True:
	# grab the frame from the threaded video file stream, resize
	# it, and convert it to grayscale
	frame = vs.read()
	frame = imutils.resize(frame, width=450)
	gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

	# detect faces in the grayscale frame
	rects = detector(gray, 0)
On Line 69 we instantiate our VideoStream using the supplied --webcam index.
We then pause for a second to allow the camera sensor to warm up (Line 70).
On Line 73 we start looping over frames in our video stream.
Line 77 reads the next frame, which we then preprocess by resizing it to have a width of 450 pixels and converting it to grayscale (Lines 78 and 79).
Line 82 applies dlib’s face detector to find and locate the face(s) in the image.
The next step is to apply facial landmark detection to localize each of the important regions of the face:
	# loop over the face detections
	for rect in rects:
		# determine the facial landmarks for the face region, then
		# convert the facial landmark (x, y)-coordinates to a NumPy
		# array
		shape = predictor(gray, rect)
		shape = face_utils.shape_to_np(shape)

		# extract the left and right eye coordinates, then use the
		# coordinates to compute the eye aspect ratio for both eyes
		leftEye = shape[lStart:lEnd]
		rightEye = shape[rStart:rEnd]
		leftEAR = eye_aspect_ratio(leftEye)
		rightEAR = eye_aspect_ratio(rightEye)

		# average the eye aspect ratio together for both eyes
		ear = (leftEAR + rightEAR) / 2.0
We loop over each of the detected faces on Line 85 — in our implementation (specifically related to driver drowsiness), we assume there is only one face — the driver — but I left this for loop in just in case you want to apply the technique to videos with more than one face.
For each of the detected faces, we apply dlib’s facial landmark detector (Line 89) and convert the result to a NumPy array (Line 90).
Using NumPy array slicing we can extract the (x, y)-coordinates of the left and right eye, respectively (Lines 94 and 95).
Given the (x, y)-coordinates for both eyes, we then compute their eye aspect ratios on Lines 96 and 97.
Soukupová and Čech recommend averaging both eye aspect ratios together to obtain a better estimation (Line 100).
We can then visualize each of the eye regions on our frame by using the cv2.drawContours function below — this is often helpful when we are trying to debug our script and want to ensure that the eyes are being correctly detected and localized:
		# compute the convex hull for the left and right eye, then
		# visualize each of the eyes
		leftEyeHull = cv2.convexHull(leftEye)
		rightEyeHull = cv2.convexHull(rightEye)
		cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
		cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)
Finally, we are now ready to check to see if the person in our video stream is starting to show symptoms of drowsiness:
		# check to see if the eye aspect ratio is below the blink
		# threshold, and if so, increment the blink frame counter
		if ear < EYE_AR_THRESH:
			COUNTER += 1

			# if the eyes were closed for a sufficient number of frames,
			# then sound the alarm
			if COUNTER >= EYE_AR_CONSEC_FRAMES:
				# if the alarm is not on, turn it on
				if not ALARM_ON:
					ALARM_ON = True

					# check to see if an alarm file was supplied,
					# and if so, start a thread to have the alarm
					# sound played in the background
					if args["alarm"] != "":
						t = Thread(target=sound_alarm,
							args=(args["alarm"],))
						t.daemon = True
						t.start()

				# draw an alarm on the frame
				cv2.putText(frame, "DROWSINESS ALERT!", (10, 30),
					cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)

		# otherwise, the eye aspect ratio is not below the blink
		# threshold, so reset the counter and alarm
		else:
			COUNTER = 0
			ALARM_ON = False
On Line 111 we make a check to see if the eye aspect ratio is below the “blink/closed” eye threshold, EYE_AR_THRESH.
If it is, we increment COUNTER, the total number of consecutive frames where the person has had their eyes closed.
If COUNTER exceeds EYE_AR_CONSEC_FRAMES (Line 116), then we assume the person is starting to doze off.
Another check is made, this time on Lines 118 and 119, to see if the alarm is on — if it’s not, we turn it on.
Lines 124-128 handle playing the alarm sound, provided an --alarm path was supplied when the script was executed. We take special care to create a separate thread responsible for calling sound_alarm to ensure that our main program isn’t blocked until the sound finishes playing.
Lines 131 and 132 draw the text DROWSINESS ALERT! on our frame — again, this is often helpful for debugging, especially if you are not using the playsound library.
Finally, Lines 136-138 handle the case where the eye aspect ratio is larger than EYE_AR_THRESH, indicating the eyes are open. If the eyes are open, we reset COUNTER and ensure the alarm is off.
The final code block in our drowsiness detector handles displaying the output frame to our screen:
		# draw the computed eye aspect ratio on the frame to help
		# with debugging and setting the correct eye aspect ratio
		# thresholds and frame counters
		cv2.putText(frame, "EAR: {:.2f}".format(ear), (300, 30),
			cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)

	# show the frame
	cv2.imshow("Frame", frame)
	key = cv2.waitKey(1) & 0xFF

	# if the `q` key was pressed, break from the loop
	if key == ord("q"):
		break

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
To see our drowsiness detector in action, proceed to the next section.
Testing the OpenCV drowsiness detector
To start, make sure you use the “Downloads” section below to download the source code + dlib’s pre-trained facial landmark predictor + example audio alarm file utilized in today’s blog post.
I would then suggest testing the detect_drowsiness.py script on your local system in the comfort of your home/office before you start to wire up your car for driver drowsiness detection.
In my case, once I was sufficiently happy with my implementation, I moved my laptop + webcam out to my car (as detailed in the “Rigging my car with a drowsiness detector” section above), and then executed the following command:
$ python detect_drowsiness.py \
	--shape-predictor shape_predictor_68_face_landmarks.dat \
	--alarm alarm.wav
I have recorded my entire drive session to share with you — you can find the results of the drowsiness detection implementation below:
Note: The actual alarm.wav file came from this website, credited to Matt Koenig.
As you can see from the screencast, once the video stream was up and running, I carefully started testing the drowsiness detector in the parking garage by my apartment to ensure it was indeed working properly.
After a few tests, I then moved on to some back roads and parking lots where there was very little traffic (it was a major holiday in the United States, so there were very few cars on the road) to continue testing the drowsiness detector.
Remember, driving with your eyes closed, even for a second, is dangerous, so I took extra special precautions to ensure that the only person who could be harmed during the experiment was myself.
As the results show, our drowsiness detector is able to detect when I’m at risk of dozing off and then plays a loud alarm to grab my attention.
The drowsiness detector is even able to work in a variety of conditions, including direct sunlight when driving on the road and low/artificial lighting while in the concrete parking garage.
Summary
In today’s blog post I demonstrated how to build a drowsiness detector using OpenCV, dlib, and Python.
Our drowsiness detector hinged on two important computer vision techniques:
- Facial landmark detection
- Eye aspect ratio
Facial landmark prediction is the process of localizing key facial structures on a face, including the eyes, eyebrows, nose, mouth, and jawline.
Specifically, in the context of drowsiness detection, we only needed the eye regions (I provide more detail on how to extract each facial structure from a face here).
Once we have our eye regions, we can apply the eye aspect ratio to determine if the eyes are closed. If the eyes have been closed for a sufficiently long period of time, we can assume the user is at risk of falling asleep and sound an alarm to grab their attention. More details on the eye aspect ratio and how it was derived can be found in my previous tutorial on blink detection.
If you’ve enjoyed this blog post on drowsiness detection with OpenCV (and want to learn more about computer vision techniques applied to faces), be sure to enter your email address in the form below — I’ll be sure to notify you when new content is published here on the PyImageSearch blog.
Lee Hoyoung
Hello, Adrian.
I’d like to ask you a few questions about this post.
I use a Raspberry Pi 3 and a Samsung SPC-B900W webcam.
You mentioned in your article that this did not perform well on the Raspberry Pi 3.
I’d like to reduce the slowdown, but how can I solve it?
Adrian Rosebrock
Please see my reply to “N.Trewartha” regarding the Raspberry Pi 3.
AHSAN JALIL
Dear Adrian,
If someone wears sunglasses, how will you detect that the person is sleeping?
Please reply.
Harsha
Bro, your codes are awesome.
I need a small bit of help: what packages need to be installed on Windows 10?
And does the driver drowsiness detection system code work on Windows?
Adrian Rosebrock
Yes, this code will run on Windows 10 provided you have OpenCV and dlib installed.
Amit Shukla
Yes, it works easily on Windows. You need to install some Python libraries: opencv, scipy, numpy, imutils, cmake, playsound. Before installing these packages, first check your Python installation and its environment path settings.
paruchurisaikrishna
Dear Amit Shukla, can you please tell me how to install these libraries from the command prompt? When I try to install dlib, opencv, scipy, and everything else, a connection timeout error arises every time. Please help me with this problem.
N.Trewartha
8.5.2017
A super project.
I will try to do this on a RPi 3 so I have a solution for the car.
Any tips ?
Adrian Rosebrock
If you intend on using a Raspberry Pi for this, I would:
1. Use Haar cascades rather than the HOG face detector. While Haar cascades are less accurate, they are also faster.
2. Use skip frames and only detect faces in every N frames. This will also speed up the pipeline.
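The second tip can be sketched in a few lines. The wrapper below (the class name and structure are illustrative, not from the post) runs the expensive detector only on every N-th frame and reuses the most recent detections in between:

```python
class SkippingDetector:
    """Run `detector` only every `skip` frames; reuse results otherwise."""

    def __init__(self, detector, skip=5):
        self.detector = detector
        self.skip = skip
        self.frame_idx = 0
        self.last_rects = []

    def __call__(self, gray):
        # only pay for detection on every N-th frame
        if self.frame_idx % self.skip == 0:
            self.last_rects = self.detector(gray)
        self.frame_idx += 1
        return self.last_rects

# usage sketch: wrap any detector callable, e.g. a Haar cascade;
# a counting fake stands in for the real detector here
calls = []
fake_detector = lambda img: calls.append(img) or ["face"]
sd = SkippingDetector(fake_detector, skip=5)
for frame in range(10):
    rects = sd(frame)
print(len(calls))  # 2 (detection ran on frames 0 and 5 only)
```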
Vikram
Adrian, what exactly do you mean when you say, “Use skip frames”? Is this a switch or an option that we can use? I am planning to implement this on a R-Pi 3 in my car and would love to understand more. Btw, fantastic article. A huge fan of your site and courses.
Also any tips or articles on the precompilation of dlib libraries and perf tips for R-pi3?
Adrian Rosebrock
By “skip frames” I mean literally only process every N frames for face detection (i.e., “skipping frames”). I plan on doing an updated blog post on how to optimize facial landmark detection for the Raspberry Pi, so stay tuned for that post.
SMITH
WOW!!!
THANKS~
I’m waiting for the results, too!
Alexon
Hey Adrian/Vikram,
I found that if you only perform detection every once in a while, it improves performance quite significantly. I trained my own dlib shape predictor/detector and it ran very smoothly on a Raspi by only performing detection once at startup and then on large movements. (Please bear in mind the picamera was at this stage locked in a fixed place, so there was not lots of movement, so this may not work for your own setup, but give it a shot!)
Adrian Rosebrock
Thanks for sharing Alexon. And just to add to the comment further, this method is called “frame skipping” and is often used to improve the speed of frame processing pipelines.
Kenny
Awesome Adrian! Fantastic post as usual. Looking forward to the release of your deep learning book!
Adrian Rosebrock
Thank you Kenny! 🙂
Hitesh
How many FPS you can process ?
Fang
This depends on what device you are running on.
Balesh
How to detect when a driver wears shades.
ss
you can’t
Pradeep Roy
My idea was to use an IR camera that takes a thermal image of the face instead of the RGB color image. There should be slight variation in the thermal image when the eyes are closed, even when the shades are on.
It should definitely work for a good IR camera.
Gary Cao
Fantastic work!
What if the driver wears sunglasses? Any ideas?
Adrian Rosebrock
If the driver wears sunglasses and you cannot detect the eyes then you cannot apply this algorithm. I would suggest extending the approach to also monitor the head tilt of the driver as well.
Marco
Typical sunglasses filter out visible light but do not block infrared.
Chevrolet’s Super Cruise has an IR emitter built into the steering wheel. On top of the steering column they have a camera with visible light filter (and IR pass through). By using the IR reflections from the eyes they ensure that the driver is watching the road.
Adrian Rosebrock
That’s pretty cool, thanks for sharing Marco.
Carlos
Is this the most dangerous and risky software test you have ever made?
Adrian Rosebrock
Off the top of my head, yes. But I was driving very slow (5-10 MPH) on uncrowded streets. The video made it seem like I was going much faster.
Hermi
I love to read your great posts. Amazing work, very impressive.
Greetings from germany.
Adrian Rosebrock
Thank you Hermi, I hope all is well in Germany.
mapembert
Fantastic job Adrian! Both the results and the write up. I’m patiently waiting for a ultra dice counter. 🙂
Adrian Rosebrock
I’m glad you enjoyed the blog post mapembert! What do you mean by an “ultra dice counter”?
Oleh
Nice tutorial and nice application for facial landmarks, thank you! Cool car! ( I am Subaru lover too 🙂
Adrian Rosebrock
Thanks Oleh, I’m glad you enjoyed the tutorial! I also really love my Subaru. Living in the north-eastern part of the United States, it often helps to have AWD to get around on snowy days 😉
Rishabh Gupta
Awesome work Adrian! A slight change from Blink detector but a nice application.
I’ve a question regarding this.
Don’t you think you should also consider the moving state of the car, because there’s no point in any alert if the car is stationary and the driver is sleepy?
I know we would need some sensor to detect the speed of the car for this. But I would like to know exactly what device we need to use, how we connect it to our system, and the required modules for our code to incorporate this functionality.
Adrian Rosebrock
I often get questions on how to build practical computer vision applications based on previous blog posts. This post on drowsiness detection, as you noted, is an extension of blink detection.
As for considering the moving state of the car, absolutely — but that’s outside what we are focusing on: computer vision. If you were to implement this method in a factory for cars you would have sensors that could tell you if the car was moving, how fast, etc. Exactly how you access this information is dependent on the manufacturer of the car.
Joseph Landau
In view of the importance of this application, would it not be sensible to use a faster single board computer, such as perhaps an Odroid? Or would that still be inadequate?
Adrian Rosebrock
For an entirely self-contained project I would likely use a device from the NVIDIA TX series.
heart
Thank you.
The connection was successful.
Movement is about 5 seconds slower.
What should I do if my camera is slow?
heart
Thank you.
The connection was successful.
But there is no sound.
Is there a solution?
What should I download separately?
Adrian Rosebrock
If there is no sound, then there is an issue with the playsound library. As I mentioned in the blog post, I’m not an expert on playing sounds with the Python programming language so you will need to consult the playsound documentation.
Umar Yusuf
Nice practical application of OpenCV. I am a huge fan of your blog; however, my primary niche of interest is the geosciences field.
Any chance you will venture into creating Geo related blog posts in future?
I mean openCV in GIS, Remote Sensing, Geomatics, Geology, Geography etc…
Adrian Rosebrock
Hi Umar — I personally don’t do any work with geo-related projects, but it’s something I would consider exploring in the future.
Joseph Landau
Do you have any plans to support night driving?
Adrian Rosebrock
At the present time no, but I will certainly consider it.
Ömer Furkan
Hi, I work on a Raspberry Pi 3. I think I did everything right, but I get an error like this:
usage: detect_drowsiness.py [-h] -p SHAPE_PREDICTOR [-a ALARM] [-w WEBCAM]
detect_drowsiness.py: error: argument -p/--shape-predictor is required
Adrian Rosebrock
It’s not an error. You need to read up on command line arguments before continuing.
Ankit
I am also getting the same error but don't know what to do:
$ python pi_detect_drowsiness.py
usage: pi_detect_drowsiness.py [-h] -c CASCADE -p SHAPE_PREDICTOR [-a ALARM]
pi_detect_drowsiness.py: error: the following arguments are required: -c/--cascade, -p/--shape-predictor
Ankit
I solved my problem with this:
$ python pi_detect_drowsiness.py --shape-predictor shape_predictor_68_face_landmarks.dat --cascade haarcascade_frontalface_default.xml
thank you all.
Adrian Rosebrock
For others struggling with the same issue please read this post on command line arguments.
VoidHeart
I'm using Windows. I'm stuck at the command line arguments step and don't know how to run this code.
Adrian Rosebrock
It’s okay if you are using Windows, command line arguments still work in Windows. To run the script open a command line prompt and follow the instructions in the post. If you need help with command line arguments, read this post first.
Fahim
Hi Ömer. I am having the same problem. Did you fix it? Please let me know.
John
Hi, I have the same problem. Did you fix it? Please let me know.
usage: detect_drowsiness.py [-h] -p SHAPE_PREDICTOR [-a ALARM] [-w WEBCAM]
detect_drowsiness.py: error: the following arguments are required: -p/--shape-predictor
I work with Windows 10.
Nitesh
usage: detect_drowsiness.py [-h] -p SHAPE_PREDICTOR [-a ALARM] [-w WEBCAM]
detect_drowsiness.py: error: the following arguments are required: -p/--shape-predictor
How to solve the above error?
Thanks in advance
Adrian Rosebrock
Please read the comments before posting. I have addressed this question in my reply to “Ömer”.
Nitesh
The playsound library was not working (it gave an import error), so I used pygame instead: I redefined sound_alarm by putting the pygame code inside it and called it in a separate thread. It's working fine.
Thanks
Adrian Rosebrock
Thanks for sharing Nitesh!
Ebuka
How can I get mine working? My video stream is very slow and the sound is not working.
Manthan Admane
Playsound library working in my case. Thanks for the suggestion though 🙂
Rad
Hey,
I have tried to run the code on a Raspberry Pi 3.
The code works but has a delay of 5-10 seconds.
What would you suggest I do to run it in real time on the Pi?
Adrian Rosebrock
I will be doing a separate blog post that provides optimizations for running blink detection and drowsiness detection on the Raspberry Pi. There are a number of optimizations that need to be made, too many to detail in a comment.
Limin
Hello Adrian
Does it work if the driver is wearing glasses, especially sunglasses?
Thx
Adrian Rosebrock
In most cases, no. You need to be able to reliably detect the facial landmarks surrounding the eyes. Sunglasses especially can obscure them and give incorrect results. Remember, if you can't detect eyes, you can't detect blinks.
wayne
Thanks for writing this article! This is something I’ve been looking for.
I live in South Korea, and deadly traffic accidents caused by drivers (especially overworked bus or truck drivers) falling asleep behind the wheel occur almost regularly.
I’ve been thinking about implementing a system that utilizes dual cameras, one for eye blinking monitoring, the other for monitoring the road.
The front road monitoring camera would be capturing the image of the car in your lane and by analyzing how rapidly you are approaching the vehicle, you could warn the driver. I have a few vague ideas as to how to solve this problem but I am just starting to wet my beak in computer vision so if you write an article about this subject, I’d appreciate it so much!
Damian
[INFO] loading facial landmark predictor…
Traceback (most recent call last):
File "/home/pi/Downloads/drowsiness-detection/detect_drowsiness.py", line 64, in
predictor = dlib.shape_predictor(args["shape_predictor"])
RuntimeError: Unable to open shape_predictor_68_face_landmarks.dat
I got this error on raspberry 🙁
Adrian Rosebrock
Make sure you use the “Downloads” section of this blog post to download the source code and shape_predictor_68_face_landmarks.dat file.
Harry
How did you solve this error? I have got the same error.
Adrian Rosebrock
To solve this error you will need to read up on command line arguments and how they work. Once you read up on them you will understand how to solve this error.
Damian Zarate
Hi Adrian!
I got the following error and don’t know what to do!
…
(h, w) = image.shape[:2]
AttributeError: 'NoneType' object has no attribute 'shape'
Adrian Rosebrock
Double-check that OpenCV can access your webcam. I cover the reason for these NoneType errors in this blog post.
Yousuf Fahim
I am also having the same error. Did you find out how to solve it ?
halukGul
If you are using a desktop (not a laptop), you may need to pass "--webcam 0" as a command line argument; that is, change "--webcam 1" to "--webcam 0".
Sr
Can we use a laptop webcam?
Adrian Rosebrock
Yes, you can absolutely use a laptop webcam. I used a laptop webcam to debug this script before I moved to a normal webcam in my car.
Ricardo Rodriguez
Do you think it is a good idea to try to reuse the detected pose from each frame (to implement a tracking algorithm)?
The result would be the same as running the landmark detection on every frame. I would be glad if you could recommend some papers on tracking.
Best Regards,
Ricardo
Adrian Rosebrock
Object tracking in video is a huge body of work. My main suggestion would be to start with dlib’s correlation tracker and go from there.
Arighi Pramudyatama
Hey Adrian, nice post! This helped me a lot to understand the science behind this project. I used a Raspberry Pi 3 and can't figure out how to use skip frames or a Haar cascade instead of HOG. Any references for doing that?
And when will you release the Raspberry Pi version of this tutorial? Can’t wait for your next interesting post.
Thanks
Adrian Rosebrock
I’m not sure when I’ll be releasing the Raspberry Pi version of the tutorial — most of my time lately has been spent writing Deep Learning for Computer Vision with Python.
As for using Haar cascades for face detection, be sure to take a look at Practical Python and OpenCV where I discuss how to perform face detection in video streams using Haar cascades.
Shahnawaz Shaikh
Adrian, I have a similar problem at hand: detecting eye deflection. I have a video file of the eye region of a QA person who checks for defective bottles. If the person looks at one point, there is no defect; as soon as their eyes deflect up or sideways, there can be a defect. How can this be implemented?
Adrian Rosebrock
I’m not familiar with the term “eye deflection”. Can you explain it or provide a link to a page that describes it?
Ashfak
Great Work Adrian….
I have seen that when two faces come into the frame it detects both and the EARs overlap. In real-time driving I don't want to detect any face other than the driver's. Any suggestions for implementing that?
For those facing the sound problem: I installed the pygame module and it works fine.
import pygame

def sound_alarm(path):
    # play an alarm sound
    pygame.mixer.init()
    pygame.mixer.music.load(path)
    pygame.mixer.music.play()
Hopefully it helps others 🙂
Adrian Rosebrock
There are a few methods to do this. The easiest solution is to find the face with the largest bounding box as this face will be the one closest to the camera.
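As a rough sketch of that idea (assuming face detections arrive as (x, y, w, h) tuples, the format returned by OpenCV's detectMultiScale; the function name is mine):

```python
def largest_face(rects):
    """Return the (x, y, w, h) box with the largest area, or None.

    The face closest to the camera (i.e., the driver) is typically
    the one with the largest bounding box in the frame.
    """
    if len(rects) == 0:
        return None
    return max(rects, key=lambda r: r[2] * r[3])
```

If you are using dlib's HOG detector instead, the same idea applies with rect.width() * rect.height() on the returned rectangle objects.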
aswthama
find the face with the largest bounding box
Adrian Rosebrock
You can use the cv2.boundingRect function to compute the bounding box coordinates. If you are new to working with OpenCV that's okay, but I would recommend you read through Practical Python and OpenCV to help you learn the fundamentals, including face detection and working with bounding boxes.
Koustav Dutta
Thanks a lot dude really it worked great
Itzia Flores
Hi Adrian. I really love your tutorials; they've helped me a lot. I have to do a project using blink detection, but my teacher didn't let me use a PC, so I want to use a Raspberry Pi 3 — but as you said, it's not fast enough. What other development board can I use that is fast enough?
Adrian Rosebrock
You can make this code fast enough to run on the Raspberry Pi. Swap out the dlib HOG + Linear SVM detector to use Haar cascades and use skip frames.
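A minimal sketch of those two optimizations (the cascade object, function names, and skip interval here are illustrative, not from the post; only the scheduling logic and detectMultiScale call are shown):

```python
def detect_faces_haar(gray, cascade):
    """Detect faces with a Haar cascade, which is much faster than
    dlib's HOG + Linear SVM on the Pi (at the cost of more false
    positives).  `cascade` is assumed to be a cv2.CascadeClassifier
    loaded from haarcascade_frontalface_default.xml."""
    return cascade.detectMultiScale(gray, scaleFactor=1.1,
                                    minNeighbors=5, minSize=(30, 30))

def process_stream(frames, detect, skip=3):
    """Run the (expensive) detector only on every `skip`-th frame and
    reuse the previous detections in between -- the 'skip frames'
    optimization."""
    last, results = [], []
    for i, frame in enumerate(frames):
        if i % skip == 0:
            last = detect(frame)
        results.append(last)
    return results
```

The facial landmark predictor still runs on every frame; only the face detector, the most expensive step, is throttled.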
Marvin
Hi Adrian, I agree with Itzia about your tutorials and how much they help us improve at computer vision programming!
Could you make an example teaching how to swap out the dlib HOG + Linear SVM detector for Haar cascades? We would be really grateful!
Adrian Rosebrock
Yes, I will be doing a dedicated Haar cascade + Raspberry Pi blog post in the future (hopefully soon).
fariborz
please please please 🙂
Adrian Rosebrock
According to my current schedule, I’ll be releasing the Raspberry Pi + drowsiness detector post in October 2017 (i.e., later this month).
Neer
Train engineers (drivers) are afflicted by the same issue. The solution there is much lower tech. They have a pedal that they have to repeatedly press throughout their shift. If they fail to press the pedal in the allotted time an audio warning is sounded. If the warning goes unheeded (presumably because the engineer fell asleep) then the train comes to a stop.
Adrian Rosebrock
Excellent solution and a great example of how simple engineering can be used instead of more complicated approaches.
Anne
Hey, thanks a lot for your posts; I regularly follow them. Your tutorials are fun and easy to understand. Recently I have developed a keen interest in explainable AI (XAI), but there aren't many papers or interesting applications of XAI in the computer vision and image processing field. I was hoping you could come up with some fun applications in this area.
Adrian Rosebrock
Hi Anne — I think you would benefit greatly from the PyImageSearch Gurus course and my new book, Deep Learning for Computer Vision with Python. Inside both the course and book I include practical, real-world projects.
Kiruthika
Hi Adrian, I need to know how to run a Raspberry Pi with voice commands. Like Siri, I want to get the date, time, weather, or meanings of words, or have any Python code executed purely through voice commands, with a wake-up call and "What is the time?"-type commands.
Adrian Rosebrock
Hi Kiruthika — I am not familiar with voice command libraries/packages.
Dhruv
Hi Adrian, I am getting a 'select timeout' error every time I run this code. Please help me out.
Adrian Rosebrock
This sounds like an issue with your camera. Double-check that the camera is connected properly to your system and that you can access it via OpenCV.
Matteo
Hey Adrian, cool stuff. The open source movement is remarkable. That is progress, also thanks to your contributions.
Anyhow, imagine I have used your tutorial to make a yawn detector. When people yawn with their mouth wide open it's straightforward, but different people have different styles of yawning. What would you suggest so that people who yawn with a hand in front of their mouth can also be detected reliably with your approach?
Adrian Rosebrock
At that point you would need to train a machine learning classifier to recognize various types of yawns. Using simple heuristics like the aspect ratio of facial regions is not going to be robust enough, especially if parts of the face are occluded.
omjeet verma
thanks
Darshil
I am not able to install the dlib library on Windows. Please help.
Adrian Rosebrock
Hi Darshil — I don’t support Windows here on the PyImageSearch blog, only Linux and macOS. Please take a look at the official dlib install instructions for Windows.
MY Yang
Hi, Dr. Rosebrock
Your posts are very helpful for me. Thanks a lot.
I have a question.
You use the eye aspect ratio (EAR) method.
I think PERCLOS is also a good method to detect drowsiness.
PERCLOS is based on the ratio of the fully open eye size to the current eye size.
I want to calculate PERCLOS, but I have a problem:
I can calculate the current size of the eyes, but I can't calculate their full size (fully opened eyes).
How can I calculate it?
Sorry for poor English skill.
Adrian Rosebrock
I haven’t used PERCLOS before. Do you have a link I could use to read more about it?
MY Yang
Of course
https://ntl.bts.gov/lib/10000/10100/10114/tb98-006.pdf
https://image.slidesharecdn.com/201410icasmmexicocity-absent-notgiven-150621212630-lva1-app6892/95/detecting-fatigue-lessons-learned-19-638.jpg?cb=1434922163
Adrian Rosebrock
Thanks for sharing. You mentioned not being able to compute the size of the “fully open” eyes. Can you elaborate on what you mean by that? If the facial landmarks can localize the eyes you can compute the size.
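One way to estimate that "fully open" baseline is to calibrate over a short window of EAR measurements and then count near-closed frames. This is a sketch of my own, not something from the post; the percentile and 80%-closed threshold are assumptions in the spirit of PERCLOS:

```python
def calibrate_open_ear(ear_values, percentile=0.95):
    """Estimate the fully-open eye aspect ratio from a calibration run.
    A high percentile is used instead of the raw maximum so that a
    single landmark glitch does not skew the baseline."""
    ranked = sorted(ear_values)
    idx = min(int(len(ranked) * percentile), len(ranked) - 1)
    return ranked[idx]

def perclos_like(ear_values, open_ear, closed_frac=0.2):
    """Fraction of frames where the eye is at least 80% closed relative
    to the calibrated open size -- a rough EAR-based proxy for PERCLOS."""
    threshold = open_ear * closed_frac
    return sum(1 for e in ear_values if e <= threshold) / len(ear_values)
```

You would run the calibration while the driver is known to be alert, then compute the closed-frame fraction over a sliding window while driving.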
MY Yang
I apologize for not being word-perfect in English.
English is not my mother tongue; please excuse any errors on my part.
Rohit Thakur
Hi Adrian,
Thanks for this wonderful tutorial. Huge fan of yours. I want to know how we can detect yawning and head movement, along with eye blinking, to detect drowsiness among drivers, as this would give a proper indication of their condition. Could you explain a little if possible? Thanks in advance.
Adrian Rosebrock
Head movement can be tracked by monitoring the (x, y)-coordinates of the facial landmarks across successive frames. You could combine this approach with this post to monitor how direction changes. Yawning could potentially be distinguished by monitoring the lips/mouth landmarks and applying a similar aspect ratio test.
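That aspect ratio test for the mouth could be sketched like this (the landmark indices are an assumption based on the inner-lip points, 60-67, of the 68-point model, re-indexed 0-19 over the mouth region):

```python
from math import dist

def mouth_aspect_ratio(mouth):
    """Compute a mouth aspect ratio (MAR) from the 20 mouth landmarks
    (points 48-67 of the 68-point model, re-indexed 0-19 here).
    Like the eye aspect ratio, it is the mean of three vertical
    distances divided by the horizontal mouth width, so it grows as
    the mouth opens."""
    A = dist(mouth[13], mouth[19])  # inner-lip points 61 and 67
    B = dist(mouth[14], mouth[18])  # inner-lip points 62 and 66
    C = dist(mouth[15], mouth[17])  # inner-lip points 63 and 65
    D = dist(mouth[12], mouth[16])  # mouth corners 60 and 64
    return (A + B + C) / (3.0 * D)
```

A yawn could then be flagged when the MAR stays above a tuned threshold for several consecutive frames, mirroring the consecutive-frames logic used for the eyes.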
Saimon
Hi Adrian,
Thanks for this wonderful tutorial. I'm a fan of yours. I want to know how to solve this problem:
[INFO] loading facial landmark predictor…
[INFO] starting video stream thread…
Illegal instruction (core dumped)
Adrian Rosebrock
Hi Saimon — can you insert some “print” statements into your code to debug and determine exactly which line is causing the seg-fault? I would need to know exactly which line is causing the problem to advise.
Saloni
Hello, Adrian!
My program is terminating in a similar way too!
[INFO] loading facial landmark predictor…
[INFO] starting video stream thread…
here1
here2
here3
here4
here5
here6
Illegal instruction
(I inserted some "here#" print statements to debug the code. It stopped here:
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
print("here6")  # <-- this was the last "here#" that printed
# detect faces in the grayscale frame
rects = detector(gray, 0)
print("here7")
The webcam starts and stops in a few seconds and no frame appears.
Adrian Rosebrock
It sounds like your system is failing while computing the detected faces. Since it is this line that throws the illegal instruction error, I think you should try re-compiling and re-installing dlib.
Daniel Guerra
Do you think this method will work with an infrared camera? That is, do you think the face detection you used will work with such an image?
chiheb
Hi, I'm using a Raspberry Pi 3 and I'm facing a problem with the SciPy library. It is installed via pip3 (latest version), but I got this error:
from scipy import distance as dist
ImportError: cannot import name 'distance'
I want to know the cause of the problem. Thanks.
Adrian Rosebrock
Which version of SciPy do you have installed?
Juan M
Hi Adrian,
This project was terrific! I found it really inspiring. Now I am installing dlib on my Raspberry Pi; however, it is taking an eternity to finish the last step. I increased the swap memory and made the other adjustments, and it is still running. Thanks!
Congratulations!
Adrian Rosebrock
Hey Juan — it will take a while to compile and install. I would suggest letting it run overnight.
John Goodman
Hey Adrian,
Thanks for this! Not being the greatest coder, I was banging my head trying to figure out the logic to adapt your blink detection blog post to a sleep/drowsiness detector. The addition of the audible alarm makes this so much cooler!
Adrian Rosebrock
Thanks John 🙂
Vaishnavi Shirbhate
Hey Adrian,
If I want to use drowsiness detection on a Raspberry Pi, what changes should I make?
Also, please post your new blog about optimizing the Raspberry Pi for real-time facial landmark detection.
Thank you.
Adrian Rosebrock
I have already published the optimized drowsiness detection + Raspberry Pi post. You can find it here.
APv
Hi Adrian,
Will this code work on any other video stream of a drowsy driver (i.e., not real time)?
Adrian Rosebrock
I tuned some of the parameters, including the EAR threshold and consecutive number of frames based on this video. It should translate reasonably well to other videos but you may have to tune the parameters for your application.
APv
Thanks. And one more question: what is the index of the webcam? Say I am trying to interface with the webcam of my laptop.
Adrian Rosebrock
You’ll want to check this yourself on your own machine. Typically it’s “0” for the first webcam and “1” for the second. Again, you’ll need to check that on your machine.
vamshi
What do the .xml and .dat files do? How can I use them? How did you create them? Please explain clearly; I am new to this.
Thank you.
Adrian Rosebrock
Hey Vamshi — this blog post doesn’t have any XML files so I’m not sure what you are referring to?
vamshi
Hi,
When I download the code, three files are downloaded:
shape_predictor_68_face_landmarks.dat
haarcascade_frontalface_default.xml
pi_detect_drowsiness.py
I am confused; how do I use them?
Can I do the same drowsiness detection using neural networks?
Adrian Rosebrock
The haarcascade_frontalface_default.xml file is the face detector model and shape_predictor_68_face_landmarks.dat is your facial landmark predictor. The pi_detect_drowsiness.py Python script loads the models and uses them to first detect a face and then localize the facial landmarks. To see how to execute the script please refer to the blog post.
rvk
But sir, does this blog post's code use a CNN? I guess not. I need to know how to implement the same thing with a CNN. Is it possible?
Adrian Rosebrock
This method does not need a CNN. A CNN would be overkill.
Satish Nair
Thanks for sharing your ideas. Your tutorials are amazing!!!
Really helps in understanding the concept.
Adrian Rosebrock
Thank you Satish, I really appreciate that 🙂
Juan Pablo
Sorry, what version of Python do you use? I am trying to test it on Windows.
Adrian Rosebrock
This code will work with both Python 2.7 and Python 3.
kent
The code is not working for me.
Adrian Rosebrock
Hey Kent — what specifically is not working? Is the code giving you an error? Keep in mind that myself and others can only help if you provide more detail and explain exactly what the problem is.
Asim Hamal
Can we make an .exe file for this application? If it can be made, can you provide me some tips?
Adrian Rosebrock
We typically do not create .exe applications from Python scripts. For a simple Python script it is technically possible, but since we use OpenCV, which depends on a number of libraries, it's very complicated and I do not recommend it.
Asim Hamal
is there any method to deploy this application?
Adrian Rosebrock
Not easily, but that isn’t my particular area of expertise. I would suggest putting together a VM or Docker image with OpenCV pre-installed along with your application and shipping the VM/Docker image directly.
sahil
If someone wears sunglasses, is there any solution?
Lee
Wow. This is amazing! I’m currently working on a school project that is focused on drowsy driving. This article is really informative, especially the details on facial landmarks and recognition.
Adrian Rosebrock
Thanks Lee! And best of luck with the school project 🙂
mukul
This is giving an error on Windows:
usage: detect_drowsiness.py [-h] -p SHAPE_PREDICTOR [-a ALARM] [-w WEBCAM]
detect_drowsiness.py: error: the following arguments are required: -p/--shape-predictor
Adrian Rosebrock
Hey Mukul — you need to execute the script from your terminal and supply the command line arguments. If you are new to command line arguments, that’s okay, but you need to read up on them first before you try to execute the script.
mukul
Thank you sir, it worked.
One thing I want to ask: does the shape_predictor_68_face_landmarks.dat file come along with the dlib library, or did you create it for this project?
akshara
I am also getting the same error. Is yours solved? If so, will you please share the solution?
mukul
For Windows:
All you need to do is first go to the directory where you placed these files, in the command prompt, using the "cd" command;
then run the following command:
python detect_drowsiness.py -p shape_predictor_68_face_landmarks.dat -a alarm.wav
mike
awesome project
Adrian Rosebrock
Thanks Mike, I’m glad you enjoyed it!
mike
I want to know: did you create the facial landmark file for this project? Please provide a detailed description of it. Can we use it in other projects also?
Adrian Rosebrock
I did not create the facial landmark predictor — that was created by Davis King of dlib. You should refer to his example on training custom shape landmark predictors.
17g1996
What if I want to use the Pi camera instead of a webcam?
Adrian Rosebrock
See this blog post.
sid
I'm getting an error, "expected an indented block", on line 21. Please help.
Adrian Rosebrock
Make sure you use the “Downloads” section of this blog post to download the source code and example video. Don’t try to copy and paste the code.
Ali Mirmostafa
Hello Adrian, I'm getting this error running the code:
Traceback (most recent call last):
File "detect_drowsiness.py", line 6, in
from scipy.spatial import distance as dist
ImportError: No module named 'scipy'
When I try to install SciPy with "pip install scipy", the terminal gives this message:
Requirement already satisfied: scipy in /usr/lib/python2.7/dist-packages
When installing OpenCV on Ubuntu I chose Python 3, and its test was OK (the Python 2.7 test was not).
What should I do? How can I install SciPy for Python 3?
Adrian Rosebrock
How did you install OpenCV on your system? Did you use a PyImageSearch tutorial? If so, you may have forgotten to install SciPy into the Python virtual environment.
Georgy
Hi. Wonderful job! I already finished your book on OpenCV and am going to get your deep learning book.
I have a question/problem: I cannot find a shape predictor for 194 points, but there is the dataset that you mentioned. How precise could it be if I train it?
And are there any shape predictors or datasets for profile faces?
Adrian Rosebrock
Davis King, the creator of dlib, includes a number of models and scripts/programs that can be used to train shape predictors. I would suggest starting with the dlib models page.
iman
Hello Adrian,
my webcam doesn't work in Ubuntu running under VMware.
Can you fix it?
(My laptop is an HP ProBook 4540s running Windows 10.)
Adrian Rosebrock
Exactly how (or even if) it's possible to enable your webcam through virtualization software such as VMware is dependent on your OS, system, and VM software version. I'm not sure what the process would be for your particular system, so you should spend some time researching it.
Prerna
Hi, can you please tell me what setup is needed before running the code?
Adrian Rosebrock
Make sure you follow my instructions on installing OpenCV for your particular operating system. From there you can follow this guide.
Vishnu T P
Sir,
I am implementing drowsiness detection (including yawn detection) on a Raspberry Pi, following your blog. The program works fine, but it runs very slowly on the Raspberry Pi. Can you suggest methods to speed up the processing?
Adrian Rosebrock
I actually wrote a Raspberry Pi optimized drowsiness detector. You can find it here.
Thakur Rohit
Hi Adrian, I am new to computer vision and Python, but with your blog I have successfully applied this drowsiness detection on an AWS cloud server to a saved video file. I can view the result by logging in to AWS through SSH with the -X flag, but it is taking a lot of time, so I am wondering how I can save the EAR value for each frame and the corresponding drowsiness alert message to a text file. Can you point out the code snippet I need to change for that? And would it help speed up detection and improve performance?
Adrian Rosebrock
Congrats on running the script in AWS, that’s a big step. The reason why you are seeing such a lag is not due to the slowness of the processing, it’s due to the I/O latency of forwarding the video frames from the cloud to your machine. You could write some additional code to “log” the drowsiness events or you could consider writing the video back to disk with the results drawn on it.
Sabina
Hi, your work is amazing as usual. I want to detect the eye pupil in video using dlib, but I have no idea how to do it. Can you give me some advice, please?
Adrian Rosebrock
I don’t have any tutorials on pupil detection or tracking but I know some PyImageSearch readers have tried this method.
Sabina
Thank you so much
PI
I use pygame instead of playsound.
import pygame  # instead of playsound at line 7
pygame.init()
pygame.mixer.music.load('alarm.wav')
pygame.mixer.music.play()
time.sleep(2)
pygame.mixer.music.stop()
(The lines above replace playsound.playsound(path) at line 16.)
I learned a lot from this.
Thanks Adrian
Adrian Rosebrock
Thanks for sharing!
Antonia Mendoza
You are amazing!
bhargav
Hi Adrian
Is there any solution to this problem? I am using Python and I got this error:
predictor = dlib.shape_predictor(args["shape_predictor"])
RuntimeError: Unable to open shape_predictor_68_face_landmarks.dat
Adrian Rosebrock
Make sure you use the “Downloads” section of this blog post to download the source code + .dat file used for detecting facial landmarks. Based on your error you are not supplying the correct path to your .dat file.
Gustavo
Hi Adrian.
I'm using Python, and I'm having this problem:
usage: detect_drowsiness.py [-h] -p SHAPE_PREDICTOR [-a ALARM]
detect_drowsiness.py: error: the following arguments are required: -p/--shape_predictor
Do you have any solution?
Adrian Rosebrock
Make sure you read this tutorial on command line arguments.
sadhish
Hello Adrian, I am using a Raspberry Pi Zero and Pi camera. It shows the error "object has no attribute 'shape'".
please help
Adrian Rosebrock
It sounds like OpenCV cannot access your webcam. You can read more about the error and how to solve it here.
Behrouz
Can we similarly use mouth aspect ratio to detect whether mouth is open for yawn detection?
Adrian Rosebrock
Absolutely! Give it a try 🙂
sukshi
How can I detect a person's smile using facial landmarks, considering that different people have different lip sizes and smile types?
Adrian Rosebrock
There are a few ways to approach this problem. The first is to not use facial landmarks at all and train a custom model to perform smile detection. In fact, this is the exact approach I took when writing a chapter for smile detection inside Deep Learning for Computer Vision with Python.
Secondly, even though lip sizes may vary from person to person you should be able to compute the aspect ratio of the lips, similar to what we’ve done in this post. Give it a try!
Joseph Felix
Hi Adrian
I'm new to this, and after reading your post and the paper I found that you didn't use the SVM or the Markov model they propose to classify drowsiness, but instead just a consecutive number of frames.
Can you tell me why?
Adrian Rosebrock
Refer to my previous post on blink detection for the reasoning (it was just too advanced for an introductory post).
Jyothi
Traceback (most recent call last):
…
ImportError: No module named scipy.spatial
scipy.spatial isn't automatically imported with scipy. What do I do?
Adrian Rosebrock
You need to install the SciPy library:
$ pip install scipy
Ravi
Hi Adrian,
Thanks for the awesome tutorial. I just want to ask something: is there a way to detect whether the person is in the camera frame or not? Suppose the person fell asleep and their face is not in the frame; how can we detect that and (maybe) trigger another, longer alarm so that the person wakes up?
Adrian Rosebrock
If the person’s face is no longer in field of view of the camera then you cannot apply this technique. You may want to consider a more advanced algorithm that includes some sort of temporal information of the face, such as head bobbing, etc. that indicates sleep. I would do some research on “activity recognition”.
Ravi
Just one more doubt: you are using the detector and predictor in a loop to find the face landmarks (and coordinates) and convert them into a NumPy array. Suppose the person's face is not in the camera view; in that case, what would be the output of the detector and predictor, and what values would be stored in the NumPy array?
Adrian Rosebrock
If the person’s face is not in the view of the camera then the face will not be detected and the facial landmarks will not be computed.
Ravi
Can we know whether the detector has detected a face or not? I mean, is there any syntax for that, so that we can check for how many frames a face is detected and vice versa?
Adrian Rosebrock
If the detector reports a face location then by definition the detector has detected a face — that’s how face detectors and object detectors in general work. If you’re trying to count the number of frames a face appears in you may want to consider performing some super basic face recognition. A great first step would be computing the embeddings for each face via this tutorial and then doing some face clustering as well.
Jugal Haresh Sheth
Can you please provide a video tutorial of the project?
Adrian Rosebrock
I tend to prefer writing over creating video. I don’t have any plans to do a video tutorial other than the actual video demos that I publish.
Boo
Can I make it using Android Studio?
Adrian Rosebrock
You would need to convert the code from Python to Java + OpenCV, but yes, the general algorithm will work.
Roshan
Hi,
Great tutorial.
I used this code on a webcam video. What I see is that the EAR stays at almost the same values (as with open eyes) even when the eyes are closed. This probably happens because I am moving my head a little in other directions or back and forth. I believe your code assumes the face is always still.
Can you please help me on this?
Yan
How can I detect facial landmarks using C++?
Adrian Rosebrock
You should refer to the dlib docs for a C++ example.
Raghav
Hi Adrian, can you please solve the below error?
File "detect_drowsiness.py", line 68, in
(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
AttributeError: module 'imutils.face_utils' has no attribute 'FACIAL_LANDMARKS_IDXS'
Adrian Rosebrock
This was caused by the latest release of imutils, v0.5. I'll be fixing it within the next few days with the release of imutils v0.5.1, but in the meantime just change "FACIAL_LANDMARKS_IDXS" to "FACIAL_LANDMARKS_68_IDXS" and it will work.
Rohit Thakur
Hello Adrian, I am running this on a Jetson TX1 at 9 FPS. However, I am wondering how I can run this with GPU support, or does it do so automatically? If so, can you point out the code that accesses the GPU or underlying hardware? I cannot see any CUDA or GPU programming code. If not, how can I add GPU support? Waiting for your response.
Adrian Rosebrock
In short, there really isn’t an easy way to add GPU support (yet). You can re-code in C++ and use the OpenCV + GPU bindings, but not yet for strict Python. Python + OpenCV + CUDA support is coming soon though!
Janhavi
I just wanted to make some changes in the code: if the driver has been sleepy for 1 minute, show "sleepy for 1 minute"; if sleepy for 40 seconds, show "sleepy for 40 seconds"; and if sleepy for more than 1 minute 30 seconds, show "sleepy for 1 minute 30 seconds". These are the changes our teacher wants, but I am not able to do it. Can you please tell me how I should do this, where I need to make the changes in the code, and what the code should be?
Thank you.
Adrian Rosebrock
Hey Janhavi — if your teacher wants you to build a project that can detect drowsiness/sleepiness then I suspect your teacher is looking for you to gain a particular skill by doing it. As someone who spent many years in school, and knows the value of an education, I’m not going to give you the answer.
But I will give you a hint:
You should consider using the time library to grab a timestamp when the drowsiness detector starts and ends.
adrian is it possible to use this code with an optimize opencv?
Adrian Rosebrock
I’m not sure what you mean by “an optimize OpenCV”? Could you clarify?
Anirudh
Hi, Fantastic article, even for starters. Question. would 720p webcam be sufficient for achieving this? or minimum required is 1080p? How would it make any difference? Thanks.
Adrian Rosebrock
I would suggest experimenting with different resolution webcams. Exactly which one is ideal for your project really depends on your lighting conditions. Experiment and you will find your answer 🙂
Meio
Hi Adrian, your tutorial is awesome. I have a question: you are resizing the frame using imutils to a width of 450. I’m trying to implement this code on a Raspberry Pi; if I change the width to something like 280 it is very smooth, but I need to stay close to the camera for the algorithm to work. Because imutils is maintaining the aspect ratio, the face becomes smaller. Is there a way I can resize the frame while not maintaining the aspect ratio? Or zoom/crop the video? Thank you sir, God bless.
Adrian Rosebrock
The less data there is to process, the faster your algorithm will run. Furthermore, resizing an image can be seen as a form of “noise reduction”. If you know there is a particular area of the frame you want to monitor you could:
1. Manually extract it via NumPy array slicing
2. Explicitly set the desired dimensions of the frame via your “picamera” library (assuming you’re using the Pi).
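A minimal sketch of option 1, assuming a standard 640×480 BGR frame and a made-up region of interest (the coordinates are purely illustrative):

```python
import numpy as np

# a stand-in 480x640 BGR frame; in the real script this would come
# from VideoStream / the picamera module
frame = np.zeros((480, 640, 3), dtype=np.uint8)

# crop a hypothetical region where the driver's face is expected:
# rows 100-380 (y), columns 200-520 (x) -- y-slice comes first
roi = frame[100:380, 200:520]
```

The resulting `roi` can then be passed to the face detector in place of the full frame, so less data is processed per iteration.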
Your boi
Hi Adrian. I already installed playsound but somehow I got an error saying “import gi” not found. How can I fix this? TIA
James
Can I reach you through your email
Thanks
Adrian Rosebrock
You can use the contact form on the PyImageSearch site.
Hamza sameer
it wont work , can u pls help
usage: detect_blinks.py [-h] -p SHAPE_PREDICTOR [-v VIDEO]
detect_blinks.py: error: the following arguments are required: -p/–shape-predictor
Adrian Rosebrock
If you’re new to command line arguments that’s okay, but make sure you read up on them before continuing. From there your error will be resolved 🙂
mustafa
…
AttributeError: ‘NoneType’ object has no attribute ‘shape’
hey Adrian, plz tell me what is the problem here??
Adrian Rosebrock
Your path to either the input image or input video is incorrect, depending on which version of the script you are using, which is causing your image to be “None”. You can learn more about NoneType errors, including how to resolve them, inside this post.
Kashif Inam
Hi , I just want to know that if I want to use this project without using laptop along with me so what can I do for this.
Adrian Rosebrock
You could use other hardware, such as a Raspberry Pi.
kelemu
Hi andrian
Thanks a lot for this valuable blogs
I run the above code but I get the following error:
ModuleNotFoundError: No module named ‘gi’
I am using ubuntu 18.04 OS
Preetha s
Hey Adrian,
What changes should I make if I’m to use my laptop webcam ?
Thank you.
Adrian Rosebrock
There are no changes required if you want to run it on your laptop webcam.
kritika
hey…
what is required to pass at “-p” argument required error is occurring
Adrian Rosebrock
You’re not passing in the command line argument. Make sure you read-up on command line arguments before continuing.
kritika
Sir, do you have code without the help of command line argument?
Adrian Rosebrock
I believe in you, Kritika. Read the argparse tutorial I linked you to and you will be able to resolve the problem 🙂
Adil
Hi Sir,
Firstly, thanks for the program. There is a problem. My program does not initiate .wav file alarm. Can you tell the possible reason?
Adrian Rosebrock
It’s hard to say what the exact problem is there. Can you play the .wav file from your command line?
Sir
Sir how can i turn on the alarm while the driver is drowsy because the alarm only run once
Adrian Rosebrock
That is less of a computer vision question and more of a general programming/engineering question. There are a few ways to approach the problem, but I would suggest updating Lines 125-128 to start a thread that will loop infinitely, playing the sound. Then have a global variable that the thread checks to see if the driver has woken up.
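A rough sketch of that idea (the flag name and polling interval are assumptions, and a short sleep stands in for the actual `playsound` call):

```python
import threading
import time

driver_awake = False  # global flag the alarm thread polls

def sound_alarm_loop(path):
    # keep replaying the alarm until the main loop flips the flag;
    # in the real script, playsound(path) would replace the sleep
    while not driver_awake:
        time.sleep(0.05)  # placeholder for playsound(path)

t = threading.Thread(target=sound_alarm_loop, args=("alarm.wav",))
t.daemon = True
t.start()

# ... later, once the eye aspect ratio rises back above the threshold:
driver_awake = True
t.join(timeout=1.0)
```

The daemon flag ensures the alarm thread cannot keep the script alive if the main loop exits while the alarm is sounding.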
Trí
I am very grateful to you for the tutorial on detecting drowsiness with the opening of the eye but how does the detector respond if a user wears sunglasses?
Can you add the detection of the appearance of wrinkles when yawning without saying please do a tutorial to make this article more wonderful.
Adrian Rosebrock
If the user is wearing sunglasses you will be unable to correctly detect and localize the eye. You can also use the same algorithm for yawning as well — just monitor the aspect ratio of the mouth landmarks. I’ll be discussing the yawn detector in my upcoming Computer Vision and Raspberry Pi book.
PGT
I was able to detect yawning but I could not find a link between the number of yawning and falling asleep.
If people wear sunglasses and yawn, can they still drive ??
sss
The program is executing successfully, but the problem is how to close the window. It is not stopping even if I click on the close button.
Adrian Rosebrock
Click on the window opened by OpenCV and press the “q” key on your keyboard.
Atif
can i have documentation of this project?
Adrian Rosebrock
The blog post/tutorial itself is the documentation.
Najeeb khan
What are the hardware requirements for driver drowsiness alert if I want to map it into my car.
Can u list them
Adrian Rosebrock
I used a standard laptop for this project. I wrote a separate tutorial that uses a Raspberry Pi as well. Most modern machines can run this code.
Amit
First of all, I would like to thank you a lot for sharing your knowledge.
I’ve been following your posts for a while and I gain a lot of skills and comprehension as well.
About the alarm:
Once it starts it wont stop for some reason..Even though I have my eyes open.
I reset the ALARM_ON flag to False exactly as you showed us, and yet the alarm keeps on buzzing.
Adrian Rosebrock
That is certainly strange behavior. Are you using the code used from the “Downloads” section of this tutorial? Or did you copy and paste it? Make sure you’re using the code from the “Downloads” section to ensure there are no copy and paste errors.
Paul Zikopoulos
As always nice work!
I didn’t see anyone else have this problem, but I’m on the Ubuntu GURUs VM and I installed PLAYSOUND. The code would run great, but no sound came out. Sound is fine in the VM as I can play the wave file.
When I tried calling the wave file from the Python CLI I would get a ‘missing GI library’ error.
I ended up running this
pip install vext
pip install vext.gi
to solve the problem and it all works fine now.
In case that helps anyone
Adrian Rosebrock
Thanks so much for sharing, Paul!
xuanthan
I’m trying with the Pi camera module, but it does not run:
vs = VideoStream(usePiCamera=True).start()
IndentationError: unexpected indent
please tell me why
thanks you
Adrian Rosebrock
Make sure you use the “Downloads” section of the tutorial to download the source code (don’t copy and paste). During your copy and paste you introduced an indentation error.
Sai
error: the following arguments are required: -p/–shape-predictor
I got an error !
Adrian Rosebrock
You need to provide the command line arguments to the script.
Irene
Thanks for sharing! By any chance, is it possible to make an android app using this code? I’d like to try running this code on Android. Could you give me any advice?!
Adrian Rosebrock
OpenCV does provide Android/Java bindings. I would suggest researching them (I have never personally used them myself).
Aquid
I am trying to use yawning detection in a situation where the driver is wearing sunglasses and hence the eye region won’t be extracted. How should I switch to the yawn detection script, and on what condition? I tried using:
if leftEye is None: and if leftEyeHull is None:
Neither of them works.
Adrian Rosebrock
You’ll want to use this tutorial to extract the mouth region via facial landmarks.
Prashant Bansod
Thanks for the great post , Adrian. I was wondering is it possible to find the head angle w.r.t. camera with this approach? If I want to find the head angle and body angle is it possible?
Adrian Rosebrock
I don’t have any tutorials on head/body pose estimation but I will certainly consider it for the future. Thank you for the suggestion!
Shan
Is anyone suggest how to make Logitech C920 compatible with this program?
Whenever I run this program, it detects my built-in MacBook camera, not the Logitech C920.
Thanks!
Adrian Rosebrock
You now have two webcams on your system — the built-in one and the C920. To detect the C920 just add in the
--webcam 1
switch when executing the script.
Shan
What you mean is add “–webcam 1” into script or add into executed command ?
Thanks!
Adrian Rosebrock
It sounds like you may be new to command line arguments and how they work. That’s okay, but you need to read this post first. From there you’ll be able to understand command line arguments.
Sachin
Hey,
Can you tell me which dataset you have used in this program?
Kindly share its link also.
Adrian Rosebrock
There was no dataset used for this project, it’s entirely heuristic-based. You can use the “Downloads” section of this tutorial to download the source code if you would like.
Anita
Everything is given, but I doubt it’s practical to take a laptop (or Raspberry Pi 3) in a car. Is there any other hardware implementation that will support this code, like a smartphone or some other lightweight implementation?
sachin
can you tell which algo have you used in this drowsiness detection program?
Adrian Rosebrock
See this tutorial.
zamaki
nice one!!!!
I would like to detect faces at a distance away is it possible?
Adrian Rosebrock
I’m not sure what you mean by “at a distance away” — are you referring to “long range” face detection of some sort?
Simon
hi sir, what if I wanna to add on a buzzer rather than a alarm by using GPIO, ehat should I put in the main source code? thank you so much.
Adrian Rosebrock
You would want to refer to the documentation of whatever buzzer you are using. That’s unfortunately impossible to answer without knowing which buzzer you are using.
Buddy
Hello There I am getting error when I run the code
ImportError: No module named imutils.video
help needed fast
Thank you
Adrian Rosebrock
You first need to install the “imutils” library:
$ pip install imutils
PGT
hi i am very grateful for your tutorials. it’s so great .
I want to stream this sleepy detection video onto a website but don’t know how. Do you have any tutorials like that?
Adrian Rosebrock
I will actually be covering that exact topic in my upcoming Computer Vision + Raspberry Pi book. Stay tuned!
Simon
I tried to use the command argument –alarm alarm.wav but the alarm doesn’t play when closed eyes are detected. I connected the speaker using the headphone jack, and the .wav file plays fine on its own at the command prompt.
PGT
Hello! Very helpful tutorials. I have a case where the detector sometimes detects two faces and that makes it work wrong.
How do I handle this case?
Peter
Sir, may I know how to make the alarm keep looping until an open eye is detected? The alarm is short and it only plays once even though our eyes are still closed.
Adrian Rosebrock
You would monitor the eye aspect ratio. Once the eye has been detected as “open” you can turn off the alarm. You will need to update the source code to this yourself. Give it a try!
Syed Ahamed
Great Brother Thanks I should have a try.
sachin
i have downloaded the ‘shape_predictor_68_face_landmarks.dat’ file but still i am having this error :
“usage: ipykernel_launcher.py [-h] -p SHAPE_PREDICTOR [-a ALARM] [-w WEBCAM]
ipykernel_launcher.py: error: argument -p/–shape-predictor is required”
Can you please help me out!
Adrian Rosebrock
You need to supply the command line argument to the script.
piyush
hey,
i am having an error : “name ‘args’ is not defined” in this line – predictor = dlib.shape_predictor(args[“shape_predictor”])
what can i do?
Adrian Rosebrock
Make sure you are using the “Downloads” section of this blog post to download the source code — it sounds like you’re copying and pasting and have introduced an error into the code.
PAVAN PATEL
I want to add Mouth opening ratio(MOR) algorithm to increase the efficiency of drowsiness detection. can you help me with that?
Adrian Rosebrock
I’ll be covering how to use a mouth opening ratio, including yawn detection, inside my upcoming Computer Vision + Raspberry Pi book. Stay tuned!
prabas
Hi Adrian Rosebrock.
Thanks for the great tutorial.
I have a question,
You used pre-trained facial landmark detector to get facial landmarks. i need to know how to create that facial landmark detector manually?
Thank you
Adrian Rosebrock
Make sure you refer to the dlib library and associated documentation. There is an example script that dlib provides for training custom shape prediction models as well.
prabas
Thanks. I will Check that
Simon
Thank you for your code. Could you please help me do the same with cnn? This is my college project and I have no idea how do i do it with cnn. My project guide wants me to get it done with cnn. I have seen the same question posted earlier and the reply. I hope you would help me. Thanks
Adrian Rosebrock
You would need to gather a dataset of “normal” vs. “tired/drowsy” drivers. From there you could attempt to train a CNN on the faces. If you need help getting up to speed quick with deep learning and training your own CNNs, refer to my book, Deep Learning for Computer Vision with Python.
Widhera
Thank you for the tutorial. But when i tried this program in night my webcam couldnot get any images just black. Can you give me the idea for this issue?
Adrian Rosebrock
Have you tried using an infrared camera so the camera can see in the dark? A standard webcam would not work well at night.
Widhera
I’ve tried searching and I think cctv has infrared but when I use cv2.videocap it doesn’t work and after searching for cctv it is an analog signal.
Adrian Rosebrock
Can you be more specific? By “open” do you mean “execute” the script?
Ilee
ModuleNotFoundError: No module named ‘scipy’
how to solve this error, Pls Help
Adrian Rosebrock
You need to install the SciPy library:
$ pip install scipy
Ali
Hi Adrian,
In a video stream, the position of the face does not change much from a frame to the next. Is it possible to include this information into the dlib algorithm to possibly make it faster?
In this way, the search for the face bounding box or even the facial landmarks would be in an area around the one from the previous frame.
Thanks
Ali
Adrian Rosebrock
What you could do is perform “skip frames”. If you know the face won’t move much you could only perform face detection every N frames and use the previous bounding box coordinate as the input to the facial landmark detector.
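A minimal sketch of the skip-frames idea (the interval and the stand-in detector are assumptions, not part of the original code):

```python
SKIP_FRAMES = 5  # re-run detection every 5th frame; tune per hardware

def detect_faces(frame):
    # stand-in for dlib's detector(gray, 0); returns toy boxes
    return [(10, 20, 100, 100)]

rects = None
detections_run = 0

for frame_idx in range(20):
    frame = None  # stand-in for a real video frame
    if rects is None or frame_idx % SKIP_FRAMES == 0:
        rects = detect_faces(frame)
        detections_run += 1
    # on skipped frames, feed the cached `rects` to the facial
    # landmark predictor instead of re-running face detection
```

Over 20 frames this runs detection only 4 times (frames 0, 5, 10, 15), which is where the speedup comes from.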
Anubhav
Why can’t we use only one vertical set of coordinates for computation of EAR ?
mh
Hello dear Adrian
I am working on driver drowsiness detection through analyzing facial expression. To evaluate our proposed work we need to run experiments on facial expression data or driver face dataset.
In fact, I want to use the test results(Driving sleepy people in front of the camera) and find a new algorithm for problem analysis.
Can you help me Adrian?
Adrian Rosebrock
If you’re interested in facial expression recognition I would recommend reading through Deep Learning for Computer Vision with Python where I have an entire chapter on that very topic.
Praveen Kumar
Hi, I have all these and installed everything but there is no “playsound” library file available for conda installation rather i can install using pip but if I did that it cannot be recognized in conda opencv-env. What shall I do? Please help me out.
siniya
Hi Adrian , if my teacher asks where did I found that “shape_predictor.dat” file , then what should I do?
Adrian Rosebrock
See my reply to Vamshi.
sarah
Hi Adrian,
I would be grateful if you could answer to my question.
I’m working on driver drowsiness detection project using video dataset, but i want to know how can i use this dataset to evaluate my algorithm detection (Detection results).
Adrian Rosebrock
In order to evaluate an algorithm you need to decide what is the “ground-truth” or the true correct result for each frame in the camera. From there you can perform the evaluation.
Haoran Zhang
Hello Adrian,
It’s an absolutely great work. THX!!!! I’ve been following you for a long time. I’m a student in university. I’m working hard to detect fatigue. And I also want to recognize fatigue through the mouth. Can you give me some advices? THANKS AGAIN!!!!
Adrian Rosebrock
Thanks, Haoran. I’m glad you’re enjoying the tutorials. I’m covering detecting fatigue through the mouth in Raspberry Pi for Computer Vision.
ivan
Head angle will affect eye detection. Rotation left or right is relatively OK, but head up or down makes eye detection much worse. What is the origin of this problem? Is there any way to solve or minimize it?
What is the origin of this problem? Is there any way to solve or minimize this problem?
AKASH
Hello Adrian, can you suggest how to make it auto-executable, i.e. run on boot on the Raspberry Pi 3? I tried to create a shell file and run it, but it shows an import error for dlib; when I run the program normally there’s no such problem. Can you please help me out with this?
Adrian Rosebrock
I’m covering how to run the scripts on boot/reboot inside Raspberry Pi for Computer Vision.
Mike Reich
Hello Adrian. Loved the tutorial : )
I was attempting to compile the drowsiness detection script into a single executable file(.exe) using pyinstaller. I ran into several problems, some of which I could fix, and some of which I couldn’t.
If you have perchance successfully compiled this program into exe, could you leave me some guidelines on how to do so?
Thanks again!
[Some of the problems I faced:
-a maximum recursion depth reached problem, which I solved by editing the spec file
-compilation apparently requires PyQt5, which I installed later
-I’m using Windows, and ran the code in an anaconda virtual environment with python 3.6. I ran pyinstaller inside the venv though, so that shouldn’t be a problem]
Adrian Rosebrock
Thanks Mike, I’m glad you liked the tutorial.
As for the problem, sorry, I’ve never used PyInstaller before.
Ramjan
Hi Adrian,
This is awesome project for drowsiness detection thanks a lots for this sharing of skills.
I got an error in this project that sound is not played while all functions are work properly here get error message that “gi” package is not found.
Youssef
Hello Mr Rosebrock i had a probelm in my picamera could you help me to resolve it this error:
“no data received from sensor. check all connections including the sunny one on the camera board”
Nik
I am wondering if there is a simple way to have the code working on existing videos and creating csv files with timestamps? Should I use pandas and datetime?
Adrian Rosebrock
I would use the “datetime” module for your timestamps but Pandas would be overkill. Simple file I/O would work.
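A minimal sketch of that logging approach (the column names and the EAR value are made up for illustration; `io.StringIO` stands in for a real CSV file on disk):

```python
import csv
import datetime
import io

# io.StringIO stands in for open("log.csv", "w", newline="")
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["timestamp", "ear"])  # assumed column names

# inside the frame loop you would log one row per event;
# the EAR value here is illustrative
ts = datetime.datetime.now().isoformat()
writer.writerow([ts, 0.21])

rows = buf.getvalue().splitlines()
```

Pandas could do the same, but as noted above plain `csv` plus `datetime` is all this use case needs.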
Reza Mohammadi Tamanani
Hi Adrian,
Thanks a lot for providing us with your useful website. I am going to work on my project which is drowsiness detection using LBP. but I am not really sure whether lbp is better than the method you used or not!! I also would like to combine my algorithm with a decision tree algorithm to make better decision based on drivers’ age , etc. I wonder if your course is suited for my case or not? I also need any recomendation or help of anyone.
Adrian Rosebrock
It’s definitely possible to build a decision tree but you would need to gather additional data. You mentioned age specifically. Would you know the driver’s age beforehand? Or would you be predicting the age from a photo? If so, I would go through Deep Learning for Computer Vision with Python where I show you how to perform age prediction.
yash
can we run the code without argparse statements
Adrian Rosebrock
Absolutely. Just convert the “args” variable to a dictionary and hardcode the variables. See this tutorial.
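A minimal sketch of that change, with example values you would replace with your own paths:

```python
# replace the entire argparse block with a hardcoded dictionary;
# the paths below are examples -- point them at your own files
args = {
    "shape_predictor": "shape_predictor_68_face_landmarks.dat",
    "alarm": "alarm.wav",
    "webcam": 0,
}

# the rest of the script can stay unchanged, e.g.:
# predictor = dlib.shape_predictor(args["shape_predictor"])
```

Because `args` is still indexed the same way (`args["shape_predictor"]`, `args["webcam"]`, etc.), no other line of the script needs to change.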
Debal
thanks Adrian for a wonderful tutorial.
I think other users have also faced this problem, i.e. the algorithm fails to give a positive detection if the face is at some distance from the camera (anything > 2 feet).
any ideas how this can be improved?
thanks
Adrian Rosebrock
Try using a higher resolution when performing face detection and facial landmark prediction.
Kim
Hi there! I am very interested in your project and am also trying to do something similar in my industry. Was wondering if you have a similar code in R?
Adrian Rosebrock
Sorry, I only provide Python code here.
AY
Hi Adrian,
Very nice application!
I want to apply this on robots such as pepper or nao from softbank.
Can the above program be modified so that it will use the video streaming from the robot’s camera and return an alert message to the robot using mqtt when drowsiness is detected?
Kevin
Is this applicable in different lighting environment since this will be mostly used while driving at night which there will be no light in the background?
Aditya Lohia
The background thread is not working. The frames stop while the sound is playing. I have rechecked it again and again but I am unable to resolve the issue.
Dominic Ancelm
Dear Adrian,
I appreciate your great service and support.
You have mentioned that you preferred a MacBook over a Raspberry Pi 3 because of its speed. How about using the new RPi 4? Can it handle the project?
Adrian Rosebrock
Yes, this project will run in real-time on the RPi 3 and RPi 4.
Plamen
Hi Adrian,
I would like to share my experience regarding this post and the playsound with python on virtual machine with Ubuntu. Actually I ran this code on the PyImageSearch Gurus Course virtual machine and I found that in order the playsound to work you must do the following:
1. install gstreamer. Here you can find the exact command for Ubuntu: https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c
2. install vext using this console command: $ pip install vext
3. install vext.gi using this console command: $ pip install vext.gi
After that the sound is working when you run this code on your PyImageSearch Gurus Course virtual machine!
You can update this post with the above information if you want.
Adrian Rosebrock
Thank you for sharing, Plamen!
Shawol
Hi Adrian,
Is there possibility of running this project without carrying the laptop with us while streaming the video.
Adrian Rosebrock
Instead of using a laptop you could use an embedded device like the Raspberry Pi. This tutorial will show you how to do exactly that.
rumana
how to apply code to laptop camera
Adrian Rosebrock
This code will work with a laptop camera.
kamrul islam
Hey, this is a fantastic project, but I have a query. This project detects multiple people’s eyes. If there is a person behind the driver and he closes his eyes, then the alert is generated. How will I solve this problem so that it detects only the driver, who is closer to the camera?
Adrian Rosebrock
You should filter your face detections by either:
1. Selecting the detected face with the largest corresponding probability/confidence.
2. Selecting the face with the largest bounding box dimensions (assuming that is the face closest to the camera).
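Option 2 could be sketched like this (the `(x, y, w, h)` box format and the toy detections are assumptions for illustration):

```python
# given a list of (x, y, w, h) face boxes, keep the one with the
# largest area -- assumed to be the face closest to the camera
def closest_face(rects):
    return max(rects, key=lambda r: r[2] * r[3])

# toy detections: a small background face and a large foreground one
faces = [(50, 60, 80, 80), (200, 40, 150, 150)]
driver = closest_face(faces)
```

Only `driver` would then be passed on to the facial landmark predictor, so passengers behind the driver are ignored.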
Tariq Almazyad
Great work + tutorial .
I have your code up and running. However, I would like to add the possible addition of detecting eye blinks in the dark. Is there an additional algorithm or code that exists to implement detecting eye blinks at night, or should there just be night vision enabled with the camera that is being used to test the code?
Also , what kind of camera do you recommend to buy so I can use it in a night mode?
Thank you.
Abusufyan Sher
Very nice tutorials. Please let me know: you implemented all projects on a Raspberry Pi; is the same source code also implementable on a laptop?
Also, we have dlib import errors. Please kindly share guidelines for importing dlib on Windows.
Adrian Rosebrock
1. This project was actually implemented on a laptop, not a RPi.
2. Sorry, I do not support Windows on the PyImageSearch blog. I recommend you use a Unix-based OS such as Linux (ideally Ubuntu) or macOS.
Nitin S
Hi Adrian,
Amazing tutorial. Works very well. I just have a small doubt. Will the eye landmarks be detected even if the camera is placed on the side-view mirror? I’m just thinking for a different use case. If not, can you suggest/provide some ways to implement it?
Digvijay Dilip Waghmare
Hi Adrian. Can I get the link of the research paper OR reference material you used for the preparation of this program of drowsiness detection?
Adrian Rosebrock
This tutorial includes a link to the original research paper. I suggest you give it a read.
CHAN
Hi Adrian,
How to add intel ncs2 to this project,I can’t work after trying for a long time
Adrian Rosebrock
What do you intend on using the NCS for? Face detection?
CHAN
Drowsiness detection !
Thk you for the tutoria,it’s very helpful to me
Adrian Rosebrock
The NCS is used to speedup deep learning-based models. You could swap in a deep learning-based face detector here and then use the NCS to make it faster, but the NCS won’t speedup the rest of the drowsiness detection (which is already quite fast). The biggest bang for your buck would be speeding up the face detector.
Niti Kaur
Hey Adrian,
I was trying out this drowsiness detection task in a Jupyter notebook, but there seems to be a problem with the command line arguments. I also watched the tutorial on them, but is it basically for PyCharm? I gave the path of the .dat file and the audio file but it seems to be generating an error. Please help me with that. Thank you!
Adrian Rosebrock
I suggest you hardcode the “args” dictionary as this comment shows you how to do.