Inside this tutorial, you will learn how to perform pan and tilt object tracking using a Raspberry Pi, Python, and computer vision.
One of my favorite features of the Raspberry Pi is the huge amount of additional hardware you can attach to the Pi. Whether it’s cameras, temperature sensors, gyroscopes/accelerometers, or even touch sensors, the community surrounding the Raspberry Pi has enabled it to accomplish nearly anything.
But one of my favorite add-ons to the Raspberry Pi is the pan and tilt camera.
Using two servos, this add-on enables our camera to move left-to-right and up-and-down simultaneously, allowing us to detect and track objects, even if they were to go “out of frame” (as would happen if an object approached the boundaries of a frame with a traditional camera).
Today we are going to use the pan and tilt camera for object tracking and more specifically, face tracking.
To learn how to perform pan and tilt tracking with the Raspberry Pi and OpenCV, just keep reading!
Pan/tilt face tracking with a Raspberry Pi and OpenCV
In the first part of this tutorial, we’ll briefly describe what pan and tilt tracking is and how it can be accomplished using servos. We’ll also configure our Raspberry Pi system so that it can communicate with the PanTiltHAT and use the camera.
From there we’ll also review the concept of a PID controller, a control loop feedback mechanism often used in control systems.
We’ll then implement our PID controller, face detector + object tracker, and the driver script used to perform pan/tilt tracking.
I’ll also cover manual PID tuning basics — an essential skill.
Let’s go ahead and get started!
What is pan/tilt object tracking?
The goal of pan and tilt object tracking is for the camera to stay centered upon an object.
Typically this tracking is accomplished with two servos. In our case, we have one servo for panning left and right. We have a separate servo for tilting up and down.
Each of our servos and the fixture itself has a range of 180 degrees (some systems have a greater range than this).
Hardware requirements for today’s project
You will need the following hardware to replicate today’s project:
- Raspberry Pi – I recommend the 3B+ or 3B, but other models may work provided they have the same header pin layout.
- PiCamera – I recommend the PiCamera V2
- Pimoroni pan tilt HAT full kit – The Pimoroni kit is a quality product and it hasn’t let me down. Budget about 30 minutes for assembly. I do not recommend the SparkFun kit as it requires soldering and additional assembly.
- 2.5A, 5V power supply – If you supply less than 2.5A, the servos will draw current away from the Pi, which can cause it to reset. Get a power supply and dedicate it to this project's hardware.
- HDMI Screen – Placing an HDMI screen next to your camera as you move around will allow you to visualize and debug, essential for manual tuning. Do not try X11 forwarding — it is simply too slow for video applications. VNC is possible if you don’t have an HDMI screen but I haven’t found an easy way to start VNC without having an actual screen plugged in as well.
- Keyboard/mouse – Obvious reasons.
Installing software for the PanTiltHAT
For today’s project, you need the following software:
- OpenCV
- smbus
- pantilthat
- imutils
Everything can easily be installed via pip except smbus. Let's review the steps:
Step #1: Create a virtual environment and install OpenCV
Head over to my pip install opencv blog post and you'll learn how to set up your Raspberry Pi with a Python virtual environment with OpenCV installed. I named my virtual environment `py3cv4`.

Step #2: Sym-link smbus into your py3cv4 virtual environment

Follow these instructions to install `smbus`:
```
$ cd ~/.virtualenvs/py3cv4/lib/python3.5/site-packages/
$ ln -s /usr/lib/python3/dist-packages/smbus.cpython-35m-arm-linux-gnueabihf.so smbus.so
```
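To confirm the sym-link worked, you can try importing `smbus` from inside the virtual environment (a quick sanity check of my own, not part of the original instructions):

```
$ workon py3cv4
$ python -c "import smbus; print(smbus.__file__)"
```

If the import fails, double-check that the path you sym-linked matches your Python version.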
Step #3: Enable the i2c interface as well as the camera interface
Fire up the Raspbian system config and turn on the i2c and camera interfaces (may require a reboot).
```
$ sudo raspi-config
# enable the i2c and camera interfaces via the menu
```
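If you'd like to verify both interfaces before moving on, the following commands should work on Raspbian (`i2cdetect`, if you prefer it over `ls`, assumes the `i2c-tools` package is installed):

```
$ ls /dev/i2c*          # an i2c device node should be listed
$ vcgencmd get_camera   # should report supported=1 detected=1
```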
Step #4: Install pantilthat, imutils, and the PiCamera
Using pip, go ahead and install the remaining tools:
```
$ workon py3cv4
$ pip install pantilthat
$ pip install imutils
$ pip install "picamera[array]"
```
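As a final sanity check (my suggestion, not part of the original steps), confirm that each package imports cleanly inside the virtual environment:

```
$ python -c "import cv2, smbus, pantilthat, imutils, picamera; print('all good')"
```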
You should be all set from here forward!
What is a PID controller?
A common feedback control loop is what is called a PID or Proportional-Integral-Derivative controller.
PIDs are typically used in automation such that a mechanical actuator can reach an optimum value (read by the feedback sensor) quickly and accurately.
They are used in manufacturing, power plants, robotics, and more.
The PID controller calculates an error term (the difference between desired set point and sensor reading) and has a goal of compensating for the error.
The PID calculation outputs a value that is used as an input to a “process” (an electromechanical process, not what we computer science/software engineer types think of as a “computer process”).
The sensor output is known as the “process variable” and serves as input to the equation. Throughout the feedback loop, timing is captured and it is input to the equation as well.
Wikipedia has a great diagram of a PID controller:
Notice how the output loops back into the input. Also notice how the Proportional, Integral, and Derivative values are each calculated and summed.
The figure can be written in equation form as:

$$u(t) = K_p\,e(t) + K_i \int_0^{t} e(\tau)\,d\tau + K_d \frac{de(t)}{dt}$$

where $e(t)$ is the error (set point minus process variable) and $u(t)$ is the control output.
Let’s review P, I, and D:
- P (proportional): If the current error is large, the output will be proportionally large to cause a significant correction.
- I (integral): Historical values of the error are integrated over time. Less significant corrections are made to reduce the error. If the error is eliminated, this term won’t grow.
- D (derivative): This term anticipates the future. In effect, it is a dampening method. If either P or I will cause a value to overshoot (i.e. a servo was turned past an object or a steering wheel was turned too far), D will dampen the effect before it gets to the output.
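To make these terms concrete, here is a hedged numeric illustration of a single update (the numbers are arbitrary, chosen only to show the arithmetic):

```python
# one hypothetical PID update with hand-picked numbers
error, prevError = 4.0, 6.0   # current and previous error
dt = 0.2                      # seconds since the last update

cP = error                        # proportional: 4.0
cI = 0.0 + error * dt             # integral accumulates: 0.8
cD = (error - prevError) / dt     # derivative: (4 - 6) / 0.2 = -10.0

kP, kI, kD = 0.09, 0.08, 0.002
output = kP * cP + kI * cI + kD * cD
print(output)   # 0.36 + 0.064 - 0.02 = 0.404
```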
Do I need to learn more about PIDs and where is the best place?
PIDs are a fundamental control theory concept.
There are tons of resources. Some are heavy on mathematics, some conceptual. Some are easy to understand, some not.
That said, as a software programmer, you just need to know how to implement one and tune one. Even if you think the mathematical equation looks complex, when you see the code, you will be able to follow and understand.
PIDs are easier to tune if you understand how they work, but as long as you follow the manual tuning guidelines demonstrated later in this post, you don’t have to be intimate with the equations above at all times.
Just remember:
- P – proportional, present (large corrections)
- I – integral, “in the past” (historical)
- D – derivative, dampening (anticipates the future)
For more information, the Wikipedia PID controller page is really great and also links to other great guides.
Project structure
Once you’ve grabbed today’s “Downloads” and extracted them, you’ll be presented with the following directory structure:
```
$ tree --dirsfirst
.
├── pyimagesearch
│   ├── __init__.py
│   ├── objcenter.py
│   └── pid.py
├── haarcascade_frontalface_default.xml
└── pan_tilt_tracking.py

1 directory, 5 files
```
Today we’ll be reviewing three Python files:
- `objcenter.py`: Calculates the center of a face bounding box using the Haar cascade face detector. If you wish, you may detect a different type of object and place the logic in this file.
- `pid.py`: Discussed above, this is our control loop. I like to keep the PID in a class so that I can create new `PID` objects as needed. Today we have two: (1) panning and (2) tilting.
- `pan_tilt_tracking.py`: This is our pan/tilt object tracking driver script. It uses multiprocessing with four independent processes (two of which are for panning and tilting, one is for finding an object, and one is for driving the servos with fresh angle values).

The `haarcascade_frontalface_default.xml` file is our pre-trained Haar cascade face detector. Haar works great with the Raspberry Pi as it requires fewer computational resources than HOG or deep learning.
Creating the PID controller
The following PID script is based on Erle Robotics GitBook‘s example as well as the Wikipedia pseudocode. I added my own style and formatting that readers (like you) of my blog have come to expect.
Go ahead and open `pid.py`. Let's review:
```python
# import necessary packages
import time

class PID:
    def __init__(self, kP=1, kI=0, kD=0):
        # initialize gains
        self.kP = kP
        self.kI = kI
        self.kD = kD
```
This script implements the PID formula. It is heavy on basic math: we don't need to import advanced math libraries, but we do need to import `time` on Line 2 (our only import).

We define a class called `PID` on Line 4.
The `PID` class has three methods:

- `__init__`: The constructor.
- `initialize`: Initializes values. This logic could live in the constructor, but then you wouldn't have the convenient option of reinitializing at any time.
- `update`: This is where the calculation is made.
Our constructor is defined on Lines 5-9, accepting three parameters: `kP`, `kI`, and `kD`. These values are constants and are specified in our driver script. Three corresponding instance variables are defined in the method body.
Now let's review `initialize`:
```python
    def initialize(self):
        # initialize the current and previous time
        self.currTime = time.time()
        self.prevTime = self.currTime

        # initialize the previous error
        self.prevError = 0

        # initialize the term result variables
        self.cP = 0
        self.cI = 0
        self.cD = 0
```
The `initialize` method sets our current timestamp and previous timestamp on Lines 13 and 14 (so we can calculate the time delta in our `update` method).
Our self-explanatory previous error term is defined on Line 17.
The P, I, and D variables are established on Lines 20-22.
Let's move on to the heart of the PID class, the `update` method:
```python
    def update(self, error, sleep=0.2):
        # pause for a bit
        time.sleep(sleep)

        # grab the current time and calculate delta time
        self.currTime = time.time()
        deltaTime = self.currTime - self.prevTime

        # delta error
        deltaError = error - self.prevError

        # proportional term
        self.cP = error

        # integral term
        self.cI += error * deltaTime

        # derivative term and prevent divide by zero
        self.cD = (deltaError / deltaTime) if deltaTime > 0 else 0

        # save previous time and error for the next update
        self.prevTime = self.currTime
        self.prevError = error

        # sum the terms and return
        return sum([
            self.kP * self.cP,
            self.kI * self.cI,
            self.kD * self.cD])
```
Our `update` method accepts two parameters: the `error` value and `sleep` in seconds.
Inside the `update` method, we:
- Sleep for a predetermined amount of time on Line 26, thereby preventing updates that come so fast that our servos (or another actuator) can't respond. The `sleep` value should be chosen wisely based on knowledge of mechanical, computational, and even communication protocol limitations. Without prior knowledge, experiment to find what seems to work best.
- Calculate `deltaTime` (Line 30). Updates won't always arrive at exactly the same interval (we have no control over it), so we calculate the time difference between the previous update and now. This will affect our `cI` and `cD` terms.
- Compute `deltaError` (Line 33), the difference between the provided `error` and `prevError`.
Then we calculate our PID control terms:
- `cP`: Our proportional term is equal to the `error` term.
- `cI`: Our integral term accumulates the `error` multiplied by `deltaTime`.
- `cD`: Our derivative term is `deltaError` over `deltaTime`. Division by zero is accounted for.
Finally, we:

- Set the `prevTime` and `prevError` (Lines 45 and 46). We'll need these values during our next `update`.
- Return the summation of the calculated terms, each multiplied by its constant gain (Lines 49-52).
Keep in mind that updates will be happening in a fast-paced loop. Depending on your needs, you should adjust the `sleep` parameter (as previously mentioned).
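Before wiring the PID into the tracking pipeline, it can help to exercise the class in isolation. Here is a minimal sketch of a test harness (my own, not part of the project downloads) that feeds a shrinking synthetic error into `update`:

```python
# hypothetical standalone test of the PID class
from pyimagesearch.pid import PID

pid = PID(kP=0.09, kI=0.08, kD=0.002)
pid.initialize()

# simulate an object drifting back toward the frame center
for error in [80, 60, 35, 15, 5, 0]:
    output = pid.update(error, sleep=0.2)
    print("error={:>3} -> correction={:.3f}".format(error, output))
```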
Implementing the face detector and object center tracker
The goal of our pan and tilt tracker will be to keep the camera centered on the object itself.
To accomplish this goal, we need to:
- Detect the object itself.
- Compute the center (x, y)-coordinates of the object.
Let's go ahead and implement our `ObjCenter` class, which will accomplish both of these goals:
```python
# import necessary packages
import imutils
import cv2

class ObjCenter:
    def __init__(self, haarPath):
        # load OpenCV's Haar cascade face detector
        self.detector = cv2.CascadeClassifier(haarPath)
```
This script requires `imutils` and `cv2` to be imported.
Our `ObjCenter` class is defined on Line 5.
On Line 6, the constructor accepts a single argument — the path to the Haar Cascade face detector.
We’re using the Haar method to find faces. Keep in mind that the Raspberry Pi (even a 3B+) is a resource-constrained device. If you elect to use a slower (but more accurate) HOG or a CNN, keep in mind that you’ll want to slow down the PID calculations so they aren’t firing faster than you’re actually detecting new face coordinates.
Note: You may also elect to use a Movidius NCS or Google Coral TPU USB Accelerator for face detection. We’ll be covering that concept in a future tutorial/in the Raspberry Pi for Computer Vision book.
The `detector` is initialized on Line 8.
Let's define the `update` method, which will find the center (x, y)-coordinate of a face:
```python
    def update(self, frame, frameCenter):
        # convert the frame to grayscale
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # detect all faces in the input frame
        rects = self.detector.detectMultiScale(gray, scaleFactor=1.05,
            minNeighbors=9, minSize=(30, 30),
            flags=cv2.CASCADE_SCALE_IMAGE)

        # check to see if a face was found
        if len(rects) > 0:
            # extract the bounding box coordinates of the face and
            # use the coordinates to determine the center of the
            # face
            (x, y, w, h) = rects[0]
            faceX = int(x + (w / 2.0))
            faceY = int(y + (h / 2.0))

            # return the center (x, y)-coordinates of the face
            return ((faceX, faceY), rects[0])

        # otherwise no faces were found, so return the center of the
        # frame
        return (frameCenter, None)
```
Today's project has two `update` methods, so I'm taking the time here to explain the difference:

- We previously reviewed the `PID` class' `update` method. This method performs the PID calculations to help compute a servo angle that keeps the face in the center of the camera's view.
- Now we are reviewing the `ObjCenter` class' `update` method. This method simply finds a face and returns its center coordinates.
The `update` method (for finding the face) is defined on Line 10 and accepts two parameters:

- `frame`: An image ideally containing one face.
- `frameCenter`: The center coordinates of the frame.
The frame is converted to grayscale on Line 12.
From there we perform face detection using the Haar cascade `detectMultiScale` method.
On Lines 20-26 we check that faces have been detected and from there calculate the center (x, y)-coordinates of the face itself.
Lines 20-24 make an important assumption: only one face is in the frame at all times, and that face can be accessed by the 0-th index of `rects`.
Note: Without this assumption holding true, additional logic would be required to determine which face to track. See the “Improvements for pan/tilt face tracking with the Raspberry Pi” section of this post, where I describe how to handle multiple face detections with Haar.
The center of the face, as well as the bounding box coordinates, are returned on Line 29. We’ll use the bounding box coordinates to draw a box around the face for display purposes.
Otherwise, when no faces are found, we simply return the center of the frame (so that the servos stop and do not make any corrections until a face is found again).
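If you'd like to sanity-check `ObjCenter` on its own before involving the servos, something like the following sketch works (my own quick test; `test.jpg` is a hypothetical image containing one face):

```python
# hypothetical standalone test of the ObjCenter class
import cv2
from pyimagesearch.objcenter import ObjCenter

frame = cv2.imread("test.jpg")
assert frame is not None, "could not load test.jpg"
(H, W) = frame.shape[:2]

obj = ObjCenter("haarcascade_frontalface_default.xml")
((objX, objY), rect) = obj.update(frame, (W // 2, H // 2))

# rect is None when no face was found
print("face center:", (objX, objY), "bounding box:", rect)
```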
Our pan and tilt driver script
Let’s put the pieces together and implement our pan and tilt driver script!
Open up the `pan_tilt_tracking.py` file and insert the following code:
```python
# import necessary packages
from multiprocessing import Manager
from multiprocessing import Process
from imutils.video import VideoStream
from pyimagesearch.objcenter import ObjCenter
from pyimagesearch.pid import PID
import pantilthat as pth
import argparse
import signal
import time
import sys
import cv2

# define the range for the motors
servoRange = (-90, 90)
```
On Lines 2-12 we import the necessary libraries. Notably:

- `Process` and `Manager` will help us with `multiprocessing` and shared variables.
- `VideoStream` will allow us to grab frames from our camera.
- `ObjCenter` will help us locate the object in the frame, while `PID` will help us keep the object in the center of the frame by calculating our servo angles.
- `pantilthat` is the library used to interface with the Raspberry Pi Pimoroni pan tilt HAT.
Our servos on the pan tilt HAT have a range of 180 degrees (-90 to 90) as is defined on Line 15. These values should reflect the limitations of your servos.
Let's define a “ctrl + c” `signal_handler`:
```python
# function to handle keyboard interrupt
def signal_handler(sig, frame):
    # print a status message
    print("[INFO] You pressed `ctrl + c`! Exiting...")

    # disable the servos
    pth.servo_enable(1, False)
    pth.servo_enable(2, False)

    # exit
    sys.exit()
```
This multiprocessing script can be tricky to exit from. There are a number of ways to accomplish it, but I decided to go with a `signal_handler` approach.
The `signal_handler` is a callback function that Python's `signal` module invokes in the background when the signal is caught. It accepts two arguments, `sig` and `frame`. The `sig` is the signal itself (generally “ctrl + c”). The `frame` is not a video frame; it is the execution frame.
We'll need to register the `signal_handler` inside of each process.
Line 20 prints a status message. Lines 23 and 24 disable our servos. And Line 27 exits from our program.
You might look at this script as a whole and think “If I have four processes, and `signal_handler` is running in each of them, then this will occur four times.”
You are absolutely right, but this is a compact and understandable way to go about killing off our processes, short of pressing “ctrl + c” as many times as you can in a sub-second period to try to get all processes to die off. Imagine if you had 10 processes and were trying to kill them with the “ctrl + c” approach.
Now that we know how our processes will exit, let’s define our first process:
```python
def obj_center(args, objX, objY, centerX, centerY):
    # signal trap to handle keyboard interrupt
    signal.signal(signal.SIGINT, signal_handler)

    # start the video stream and wait for the camera to warm up
    vs = VideoStream(usePiCamera=True).start()
    time.sleep(2.0)

    # initialize the object center finder
    obj = ObjCenter(args["cascade"])

    # loop indefinitely
    while True:
        # grab the frame from the threaded video stream and flip it
        # vertically (since our camera was upside down)
        frame = vs.read()
        frame = cv2.flip(frame, 0)

        # calculate the center of the frame as this is where we will
        # try to keep the object
        (H, W) = frame.shape[:2]
        centerX.value = W // 2
        centerY.value = H // 2

        # find the object's location
        objectLoc = obj.update(frame, (centerX.value, centerY.value))
        ((objX.value, objY.value), rect) = objectLoc

        # extract the bounding box and draw it
        if rect is not None:
            (x, y, w, h) = rect
            cv2.rectangle(frame, (x, y), (x + w, y + h),
                (0, 255, 0), 2)

        # display the frame to the screen
        cv2.imshow("Pan-Tilt Face Tracking", frame)
        cv2.waitKey(1)
```
Our `obj_center` process begins on Line 29 and accepts five variables:

- `args`: Our command line arguments dictionary (created in our main thread).
- `objX` and `objY`: The (x, y)-coordinates of the object. We'll continuously calculate these.
- `centerX` and `centerY`: The center of the frame.
On Line 31 we register our `signal_handler`.
Then, on Lines 34 and 35, we start our `VideoStream` for our PiCamera, allowing it to warm up for two seconds.
Our `ObjCenter` is instantiated as `obj` on Line 38. Our cascade path is passed to the constructor.
From here, our process enters an infinite loop on Line 41. The only way to escape the loop is for the user to press “ctrl + c”; you'll notice there is no `break` statement.
Our `frame` is grabbed and flipped on Lines 44 and 45. We must `flip` the `frame` because the PiCamera is physically upside down in the pan tilt HAT fixture by design.
Lines 49-51 grab the frame's width and height and calculate its center point. You'll notice that we are using `.value` to access our center point variables; this is required with the `Manager` method of sharing data between processes.
To calculate where our object is, we'll simply call the `update` method on `obj` while passing the video `frame`. The reason we also pass the center coordinates is that the `ObjCenter` class returns the frame center if it doesn't see a Haar face. Effectively, this makes the PID error `0`, and thus the servos stop moving and remain in their current positions until a face is found.
Note: I choose to return the frame center if the face could not be detected. Alternatively, you may wish to return the coordinates of the last location a face was detected. That is an implementation choice that I will leave up to you.
The result of the `update` is parsed on Line 55, where our object coordinates and the bounding box are assigned.
The last steps are to draw a rectangle around our face (Lines 58-61) and to display the video frame (Lines 64 and 65).
Let's define our next process, `pid_process`:
```python
def pid_process(output, p, i, d, objCoord, centerCoord):
    # signal trap to handle keyboard interrupt
    signal.signal(signal.SIGINT, signal_handler)

    # create a PID and initialize it
    p = PID(p.value, i.value, d.value)
    p.initialize()

    # loop indefinitely
    while True:
        # calculate the error
        error = centerCoord.value - objCoord.value

        # update the value
        output.value = p.update(error)
```
Our `pid_process` is quite simple, as the heavy lifting is taken care of by the `PID` class. Two of these processes will be running at any given time (panning and tilting). If you have a complex robot, you might have many more PID processes running.
The method accepts six parameters:
- `output`: The servo angle calculated by our PID controller. This will be a pan or tilt angle.
- `p`, `i`, and `d`: Our PID constants.
- `objCoord`: This value is passed to the process so that it can keep track of where the object is. For panning, it is an x-coordinate; for tilting, it is a y-coordinate.
- `centerCoord`: Used to calculate our `error`, this value is just the center of the frame (either x or y, depending on whether we are panning or tilting).
Be sure to trace each of the parameters back to where the process is started in the main thread of this program.
On Line 69, we register our `signal_handler`.
Then we instantiate our PID on Line 72, passing each of the P, I, and D values.
Subsequently, the `PID` object is initialized (Line 73).
Now comes the fun part in just two lines of code:
- Calculate the `error` on Line 78. For example, this could be the frame's y-center minus the object's y-location for tilting.
- Call `update` (Line 81), passing the new error (and a sleep time if necessary). The returned value is stored in `output.value`. Continuing our example, this would be the tilt angle in degrees.
We have another thread that “watches” each `output.value` to drive the servos.
Speaking of driving our servos, let’s implement a servo range checker and our servo driver now:
```python
def in_range(val, start, end):
    # determine if the input value is in the supplied range
    return (val >= start and val <= end)

def set_servos(pan, tlt):
    # signal trap to handle keyboard interrupt
    signal.signal(signal.SIGINT, signal_handler)

    # loop indefinitely
    while True:
        # the pan and tilt angles are reversed
        panAngle = -1 * pan.value
        tiltAngle = -1 * tlt.value

        # if the pan angle is within the range, pan
        if in_range(panAngle, servoRange[0], servoRange[1]):
            pth.pan(panAngle)

        # if the tilt angle is within the range, tilt
        if in_range(tiltAngle, servoRange[0], servoRange[1]):
            pth.tilt(tiltAngle)
```
Lines 83-85 define an `in_range` function to determine whether a value falls within a particular range.
From there, we'll drive our servos to specific pan and tilt angles in the `set_servos` method.
Our `set_servos` method will be running in another process. It accepts `pan` and `tlt` values and will watch the values for updates. The values themselves are constantly being adjusted via our `pid_process`.
We establish our `signal_handler` on Line 89.
From there, we’ll start our infinite loop until a signal is caught:
- Our `panAngle` and `tltAngle` values are made negative to accommodate the orientation of the servos and camera (Lines 94 and 95).
- Then we check each value, ensuring it is within the range, and drive the servos to the new angle (Lines 98-103).
That was easy.
Now let’s parse command line arguments:
```python
# check to see if this is the main body of execution
if __name__ == "__main__":
    # construct the argument parser and parse the arguments
    ap = argparse.ArgumentParser()
    ap.add_argument("-c", "--cascade", type=str, required=True,
        help="path to input Haar cascade for face detection")
    args = vars(ap.parse_args())
```
The main body of execution begins on Line 106.
We parse our command line arguments on Lines 108-111. We only have one — the path to the Haar Cascade on disk.
Now let’s work with process safe variables and start our processes:
```python
    # start a manager for managing process-safe variables
    with Manager() as manager:
        # enable the servos
        pth.servo_enable(1, True)
        pth.servo_enable(2, True)

        # set integer values for the object center (x, y)-coordinates
        centerX = manager.Value("i", 0)
        centerY = manager.Value("i", 0)

        # set integer values for the object's (x, y)-coordinates
        objX = manager.Value("i", 0)
        objY = manager.Value("i", 0)

        # pan and tilt values will be managed by independent PIDs
        pan = manager.Value("i", 0)
        tlt = manager.Value("i", 0)
```
Inside the `Manager` block, our process-safe variables are established. We have quite a few of them.
First, we enable the servos on Lines 116 and 117. Without these lines, the hardware won’t work.
Let’s look at our first handful of process safe variables:
- The frame center coordinates are integers (denoted by `"i"`) and initialized to `0` (Lines 120 and 121).
- The object center coordinates are also integers and initialized to `0` (Lines 124 and 125).
- Our `pan` and `tlt` angles (Lines 128 and 129) are integers that I've set to start at the center, pointing toward a face (angles of `0` degrees).
Now is where we’ll set the P, I, and D constants:
```python
        # set PID values for panning
        panP = manager.Value("f", 0.09)
        panI = manager.Value("f", 0.08)
        panD = manager.Value("f", 0.002)

        # set PID values for tilting
        tiltP = manager.Value("f", 0.11)
        tiltI = manager.Value("f", 0.10)
        tiltD = manager.Value("f", 0.002)
```
Our panning and tilting PID constants (process safe) are set on Lines 132-139. These are floats. Be sure to review the PID tuning section next to learn how we found suitable values. To get the most value out of this project, I would recommend setting each to zero and following the tuning method/process (not to be confused with a computer science method/process).
With all of our process safe variables ready to go, let’s launch our processes:
```python
        # we have 4 independent processes
        # 1. objectCenter - finds/localizes the object
        # 2. panning - PID control loop determines panning angle
        # 3. tilting - PID control loop determines tilting angle
        # 4. setServos - drives the servos to proper angles based
        # on PID feedback to keep object in center
        processObjectCenter = Process(target=obj_center,
            args=(args, objX, objY, centerX, centerY))
        processPanning = Process(target=pid_process,
            args=(pan, panP, panI, panD, objX, centerX))
        processTilting = Process(target=pid_process,
            args=(tlt, tiltP, tiltI, tiltD, objY, centerY))
        processSetServos = Process(target=set_servos, args=(pan, tlt))

        # start all 4 processes
        processObjectCenter.start()
        processPanning.start()
        processTilting.start()
        processSetServos.start()

        # join all 4 processes
        processObjectCenter.join()
        processPanning.join()
        processTilting.join()
        processSetServos.join()

        # disable the servos
        pth.servo_enable(1, False)
        pth.servo_enable(2, False)
```
Each process is kicked off on Lines 147-153, passing required process safe values. We have four processes:
- A process which finds the object in the frame. In our case, it is a face.
- A process which calculates panning (left and right) angles with a PID.
- A process which calculates tilting (up and down) angles with a PID.
- A process which drives the servos.
Each of the processes is started and then joined (Lines 156-165).
Servos are disabled when all processes exit (Lines 168 and 169). This also occurs in the `signal_handler`, just in case.
Tuning the pan and tilt PIDs independently, a critical step
That was a lot of work!
Now that we understand the code, we need to perform manual tuning of our two independent PIDs (one for panning and one for tilting).
Tuning a PID ensures that our servos will track the object (in our case, a face) smoothly.
Be sure to refer to the manual tuning section in the PID Wikipedia article.
The article instructs you to follow this process to tune your PID:
- Set `kI` and `kD` to zero.
- Increase `kP` from zero until the output oscillates (i.e., the servo goes back and forth or up and down), then set the value to roughly half of that.
- Increase `kI` until offsets are corrected quickly, knowing that too high a value will cause instability.
- Increase `kD` until the output settles on the desired reference quickly after a load disturbance (i.e., if you move your face somewhere really fast). Too much `kD` will cause excessive response and make your output overshoot where it needs to be. A hedged starting-point sketch follows this list.
I cannot stress this enough: Make small changes while tuning.
Let’s prepare to tune the values manually.
Even if you coded along through the previous sections, make sure you use the “Downloads” section of this tutorial to download the source code to this guide.
Transfer the zip to your Raspberry Pi using SCP or another method. Once on your Pi, unzip the files.
We will be tuning our PIDs independently, first by tuning the tilting process.
Go ahead and comment out the panning process in the driver script:
```python
        # start all 4 processes
        processObjectCenter.start()
        #processPanning.start()
        processTilting.start()
        processSetServos.start()

        # join all 4 processes
        processObjectCenter.join()
        #processPanning.join()
        processTilting.join()
        processSetServos.join()
```
From there, open up a terminal and execute the following command:
```
$ python pan_tilt_tracking.py --cascade haarcascade_frontalface_default.xml
```
You will need to follow the manual tuning guide above to tune the tilting process.
While doing so, you’ll need to:
- Start the program and move your face up and down, causing the camera to tilt. I recommend doing squats and looking directly at the camera.
- Stop the program and adjust the values per the tuning guide.
- Repeat until you're satisfied with the result (and thus, the values). It should tilt well for both small displacements and large changes in your face's position; be sure to test both.
At this point, let’s switch to the other PID. The values will be similar, but it is necessary to tune them as well.
Go ahead and comment out the tilting process (which is fully tuned).
From there uncomment the panning process:
```python
        # start all 4 processes
        processObjectCenter.start()
        processPanning.start()
        #processTilting.start()
        processSetServos.start()

        # join all 4 processes
        processObjectCenter.join()
        processPanning.join()
        #processTilting.join()
        processSetServos.join()
```
And once again, execute the following command:
```
$ python pan_tilt_tracking.py --cascade haarcascade_frontalface_default.xml
```
Now follow the steps above again to tune the panning process.
Pan/tilt tracking with a Raspberry Pi and OpenCV
With our freshly tuned PID constants, let’s put our pan and tilt camera to the test.
Assuming you followed the section above, ensure that both processes (panning and tilting) are uncommented and ready to go.
From there, open up a terminal and execute the following command:
```
$ python pan_tilt_tracking.py --cascade haarcascade_frontalface_default.xml
```
Once the script is up and running you can walk in front of your camera.
If all goes well you should see your face being detected and tracked, similar to the GIF below:
As you can see, the pan/tilt camera tracks my face well.
Improvements for pan/tilt tracking with the Raspberry Pi
There are times when the camera will encounter a false positive face causing the control loop to go haywire. Don’t be fooled! Your PID is working just fine, but your computer vision environment is impacting the system with false information.
We chose Haar because it is fast, however just remember Haar can lead to false positives:
- Haar isn’t as accurate as HOG. HOG is great but is resource hungry compared to Haar.
- Haar is far from accurate compared to a deep learning face detection method, but the DL method is too slow to run in real time on the Pi. If you tried to use it, panning and tilting would be pretty jerky.
My recommendation is that you set up your pan/tilt camera in a new environment and see if that improves the results. For example, when we were testing the face tracking, we found that it didn't work well in a kitchen due to reflections off the floor, refrigerator, etc. However, when we aimed the camera out the window and I stood outside, the tracking improved drastically because `ObjCenter` was providing legitimate values for the face and thus our PID could do its job.
What if there are two faces in the frame?
Or what if I’m the only face in the frame, but consistently there is a false positive?
This is a great question. In general, you’d want to track only one face, so there are a number of options:
- Use the confidence value and take the face with the highest confidence. This is not possible using the default Haar detector code as it doesn’t report confidence values. Instead, let’s explore other options.
- Try to get the `rejectLevels` and `rejectWeights` from the detector. I've never tried this, but the following links may help:
- Grab the largest bounding box: easy and simple.
- Select the face closest to the center of the frame. Since the camera tries to keep the face closest to the center, we could compute the Euclidean distance between each bounding box centroid and the center (x, y)-coordinates of the frame. The bounding box closest to the center would be selected. A sketch of the last two options follows this list.
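Here is a hedged sketch of helpers for the last two strategies (hypothetical functions of my own; `rects` is the array returned by `detectMultiScale`):

```python
import numpy as np

def largest_face(rects):
    # pick the detection with the greatest area (w * h)
    return max(rects, key=lambda r: r[2] * r[3])

def face_closest_to_center(rects, frameCenter):
    # pick the detection whose centroid is nearest the frame center
    def dist(r):
        (x, y, w, h) = r
        (cX, cY) = (x + w / 2.0, y + h / 2.0)
        return np.hypot(cX - frameCenter[0], cY - frameCenter[1])
    return min(rects, key=dist)
```

Either helper could replace the `rects[0]` indexing inside `ObjCenter.update`.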
Summary
In this tutorial, you learned how to perform pan and tilt tracking using a Raspberry Pi, OpenCV, and Python.
To accomplish this task, we first required a pan and tilt camera.
From there we implemented our PID used in our feedback control loop.
Once we had our PID controller we were able to implement the face detector itself.
The face detector had one goal — to detect the face in the input image and then return the center (x, y)-coordinates of the face bounding box, enabling us to pass these coordinates into our pan and tilt system.
From there the servos would center the camera on the object itself.
I hope you enjoyed today’s tutorial!
To download the source code to this post, and be notified when future tutorials are published here on PyImageSearch, just enter your email address in the form below!
Dr. Adrian, you are awesome! I thought this tutorial would be on your book. I imagine what we’re gonna find, oh my God! Thanks for this awesomeness!
Thanks so much, Faurog! 😀
Thank you so much for your tutorials. I have successfully finished Openvino with face recognition using your tutorials .
Perfect! You are one of the best people i have seen in this universe if not the best. I have been using ur tutorials for 3 years and you have done so much for me.
Luckily i have Pimoroni pan tilt and everything, but I am connecting both webcam and pi cam on the same raspberry pi, and i want to run the code, it uses the web cam automatically, any ideas to change the code to work with Pi cam?
Thank you
Are you using my “VideoStream” class? Or OpenCV’s “cv2.VideoCapture” function?
Oh I am SO going to use this as a “Cat Tracker”!
You absolutely should 😉
Im also!
Im going to combine this perfect PID controlled tracker with Movidius.
to track my dog.
Grate tuto’ Adrian!
like always..
Hey Adrian, I was wondering when will you be releasing your book on Computer Vision with the Raspberry Pi. I am eagerly waiting for it.
Keep an eye on your inbox — I’ll be sharing more details on the release later this week.
Your work is truly awesome! I am 50+ and my daughter is 15, we both follow your work with keen interest.
Thanks so much! I’m so happy you and your daughter are enjoying the blog 😀
Adrian… awesome article as usual. I really love that you took this to the control system domain as well. I am not an expert here, but I think a Kalman filter might be usefully dealing with some of the noise coming from the detection errors.
https://en.wikipedia.org/wiki/Kalman_filter
Thanks Again!
Justin
Great suggestion, Justin!
The post was already long and complicated enough so I didn’t want to introduce yet another algorithm on top of it all.
Dr., thank you!!
This exact project is the reason I’ve put any time into learning OpenCV. Many years ago I stumbled across a student project named Pinokio by Adam Ben-Dror, Joss Doggett, and Shanshan Zhou. And wanted one ever since. I’ve enjoyed your books and tutorials, and am very glad I purchased your pre-loaded OpenCV. But I can read what’s supposed to happen far, far better than I can code it myself. Frankly coding isn’t my gift, and I do what is needed for the project at hand. I’m literally staring at the, uh, pile of Pi, breadboard, camera and to be hooked up servos that has been on my desk since last fall as I’ve struggled with the coding. And the Pi freezing up . . . but thank you! My dream is just a little bit closer to being fulfilled.
One question. Is there a US source for the recommended pan-tilt?
Thanks Ron, I really appreciate your comment 🙂
Unfortunately, I do not know of a US source for the PanTilt. Pimoroni has had pretty fast shipping to the US though.
I got mine from Amazon.
Adafruit sells the pan tilt module. They are located in New York.
You want the pan-til module iself: https://www.adafruit.com/product/1967
and the hat that goes on the RPI: https://www.adafruit.com/product/3353
Thank you for sharing, Joel!
I wanted to ask where is he source code for this tutorial? I am not sure how to find it and follow this tutorial better.
You can use the “Downloads” section of this tutorial to download the source code.
this was a great tutorial. I always enjoyed your tutorials. Can you tell me what are the changes that i need to make if i’m using the normal pan and tilt servo mechanism using GPIO pins because i can’t afford to buy a Hat for the raspberry pi for this purpose.
If you are using the GPIO pins you would need to refer to the documentation for your servo. The documentation will tell you which GPIO pins to use. You should also read this tutorial for an example of using GPIO with Python and OpenCV.
i cannot do it .I tried the full day.please help me.
The code is compiling but the camera moves weirdly I am using pi cam and Rpi 3 b+ with OpenCV version 3.4.4
Are you using the same code and hardware as for this method? Could you be a bit more descriptive regarding the camera moving “weirdly”?
Nice project! Thank you!
I’m glad you liked it! 🙂
Hi Adrian,
This is an interesting and such a wonderful project work. You are a super Genius!
Thanks for sharing a such a wonderful work.
Thank you for the kind words, Chidambar.
Hello Mr Adriane
We are trying to make a project about detecting object with( raspberry pi 3 & pi camera) with audio feedback and we want to ask you about the fastest way to apply that on real time………..if there isn’t any tutorial about that with voice .. please advice us about the best way which we can use its output to transfer it in real time to voice later
If I understand your question correctly, you would like to use an audio library to do text-to-speech?
If so, take a look at Google’s “gTTS” library.
Dear Dr Adrian,
Thank you for your article. Two questions please.
(1) I tried to install the ‘pyimagesearch’ python module as per listing 2-12. Where is location of the ‘pyimagesearch’ module say at your github site https://github.com/jrosebr1?tab=repositories such that I can download it and install it. I cannot ‘pip’ your module.
(2) Given that ‘cv2’ have pan tilt zoom (‘PTZ’) function(s) could your code be easily adapted using the cv2’s PTZ functions?
Thank you,
Anthony of Sydney
1. The “pyimagesearch” module can be found inside the “Downloads” section of this tutorial. Use that section of the post to download it.
2. Which functions specifically are you referring to inside OpenCV?
Dear Dr Adrian,
I may have been confused on this matter regarding opencv being able to control pan and tilt functions. The reason is that I read an article which incorporated video processing using opencv. In this situation, the camera was an IP camera with a pan-tilt-zoom (‘PTZ’) controlled by the python package ‘requests’. IP cameras have an inbuilt server and adjustments to the PTZ require addressing the camera’s protocol http or rtsp , camera’s IP address, supplying a password, the command to pan or tilt and the value of the pan or tilt.
However I did see the following functions in cv2:
My question is if you can read the pan and tilt values in opencv, can you set the pan and tilt in opencv instead of using the requests package?
Thank you
Anthony of Sydney Australia
That’s interesting, I’m not sure what those camera parameter values are in OpenCV. That would be a good question to ask the OpenCV devs.
Dear Dr Adrian,
I am yet to explore and experiment, but here is my understanding of setting pan and tilt operations in opencv,
Again I stress this is yet to experiment and explore. I looked at the API for video capture. OK it was in java or C++, but the parameters are the same.
You may ask why do this? That is a good question.
Answer, the above tutorial for moving the camera in an X and Y direction relies on moving two servo motors driven by separate PWM signals, as in the RPi.
But what if you have say have an IP or USB camera with integrated PTZ motors and want to apply your tutorial tracker using the PID algorithm?
Again I stress this is yet to experiment and explore.
Thank you,
Anthony of Sydney Australia
I am doing a similar thing using 2 stepper motors for pan and tilt motion and using A49888 stepper drivers to drive the motor using Rpi 3B+ I am having a issue with coding, will this code be usefull. Actual project is a sentry turret which will track a person an shoot it with a nerf gun.
Hello Adrian!
I have been your loyal reader since the day I started to learn Python and OpenCV (about 3 years ago).
Your blog and contents have grown so much!
I think I just wanna hug you!
Thanks Hilman, I really appreciate your support over the years :hug:
Hello Adrian!
I tried to control the level and inclination of the servo motor with the GPIO pin, but I didn’t know how to integrate PID and process in the end. Could you give me some tips? Thank you!
Hello Adrian!
I try to control ac servo-motor using GPIO pin level and tilt, I consulted PiBits ServoBlaster https://github.com/richardghirst/PiBits/tree/master/ServoBlaster and https://github.com/mitchtech but I finally don’t know how to into the PID and process, can you give me some tips
I am running on Raspberry PI 3. My PI freezes after it runs for a minute.
I noticed the temp goes up to 78 Degree C. Could this be it ?
Hello Adrian!
The pan and tilt camera has 3 pins each, correct. PWM, GND, VCC respectively. Now I am not able to understand to which pin locations of raspberry pi I shoild connect these 6 pins. Moreover I am especially concerned with PWM pins of pan and tilt servo.
I am eagarly waiting for your reply
Moreover I would like to ask whether these 2 PWM pins of each servo represents servo channels? or 2 channels wire are differently provided? In that case where should I connect these channels on raspberry pi?
Dr. Rosebrok:
“Long-time listener; first-time-caller” — kudos on being the go-to source for anything that is to do with image-processing, Raspberry Pi, etc.
Everything works fine when I implement your code, but the pan-tilt tracking eventually moves the camera till the single-face in the frame gets to the bottom-right-hand-side of the image and stays there for a while (before the bounding-box around the face disappears and the camera does not move thereafter).
I tried switching the leads for the two servos on the HAT, and change the “sleep” value in pid.py — with little success. Any help/pointers is greatly appreciated!
Cheers!
-Sreenivas
Mine acts exactly same way. Any idea Adrian?
Hi Adrian!
Can you help why pan tries to keep my face on right side of the frame, not in the center? It moves nicely but not centralize my face. What value I should chance? Also if I go out from the picture camera dont go back to center/starting point? Thanks!
That’s definitely odd. Is the bounding box surrounding your face correctly detected?
Yes. That part works fine. Pan-Tilt moves camera first fine in the center and then starts move camera in that way that my face is in the bottom right corner of frame (but not out of it). If I pan/tilt angle -1 to 1 it focus my face in opposite corner.
I have this issues too. The bounding box surrounds my face but the unit pans and tilts to bottom right of the display. The unit never puts my face in the center.
Hey Chris — I updated the code to help that issue. Could you try again?
Hi Adrian,
Thank you for taking the time to look into this. Little changes but they work well and the tracking is working as expected now.
I’m looking forward to the Raspberry Pi for Computer Vision ebook.
Thanks Chris 🙂 I’m happy to hear everything is working properly now.
Dr. Rosebrok:
The revised code worked like a charm for me. Sincere thanks!
-Sreenivas
Awesome, I’m glad to hear it! 🙂
Process is stabile when camera points 45 deg away from my face.
I think it just dont like my face ;).
Hi, Adrian. Thank you so much for the lesson but the link you sent me by email is broken.
Hi Gildson — please try again, I updated the link so now it works 🙂
Great tutorial, thank you. Any guidance on using more than 2 servos with this? Would like to use this with a 6-DOF robotic arm using adafruit’s servo hat.
Hey Noor, I haven’t worked with a robotic arm with 6-DOF before. I may try to in the future but I cannot guarantee if/when that may be.
Hey Adrian Thanks!!!!!!!!!
A question:
why the usage of multi-process and not multi-threaded?
Love your work.
Gal
Hi Adrian, the tutorial is really great and well explained!! I would like to know which variables are used for pan and tilt angles. So that I can send those variables serially to arduino to control the servos.
Thanks & Regards
See the “set_servos” function.
Hi Adrian, another great works, thanks.
How can do this project without pinomori pantilt hat ?
I connected servos to raspberry directly. In my python app, when I sent angle to servo it is working normally.
But in modified your python script, servos is not working.
I’m wrong somewhere.
Regards
Thank you for another well written tutorial. It speaks volumes when Jeff Dean cites one of these as a Keras tutorial. In fact, I am wondering if you are human all the time or if you have a bot scanning your numerous tutorials. How do you know to answer when someone like me has posted something months after the tutorial was posted? It seems you somehow know and take the time to respond.
I built an owl (like the one from bladerunner) and this code has it so the owl’s gaze follows me. My only problem is the mounting of the owl’s head requires I start the tilt angle at -30. So I set the tltAngle = tltValue – 30. (My camera is mounted upside down to yours so I dont need the negative value. I also commented out the ‘flip’ in the obj_center function)
I want to see what the code is doing I tried adding a print(tltAngle) statement in the def set_servos(pan, tlt) function. but nothing gets printed in the console(terminal). How can I get the angles printed out?
Hey Stefan, thanks for the comment. I do spend a lot of time replying to readers and helping others out. That said, there is a limit to the amount of free help I can offer. At some point I do require readers to purchase one of my books/courses for continued support.
In your case, I would recommend grabbing a copy of Raspberry Pi for Computer Vision — from there I can help you more with the project.
Well, I figured out most of my problems except the initialization. I have to initialize the tilt to -30 degrees to get the head of my model level. When I start the script, the camera flops forward and I have to scoot way down for it to see me. It then tracks properly. I tried subtracting 30 from the tilt angle, but that does not seem to help- it still jerks down on startup. I also tried adding
pth.tilt(-30) in the ‘def set_servos(pan,tlt)’ function just before the ‘while True’
Hi Adrian, i successfully followed each and every step to develop the pan-tilt-hat system and the result was outstanding.Thank you for such a wonderful information, my question is could this be linked with the openvino tutorial you offered and could the Movidius NC2 stick be used to improve the performance and speed of the servo motor and the detection rate, so as to follow the face in a real time event?How can we do that as during your openvino tutorial you made us install opencv for openvino which doesn’t have all the libraries components as optimized opencv does? Would this also work for that? If it will how can i do that?Would i have to re calibrate and tune the PID controller if i use Movidius NC2
Hey Pawan — working with the NCS2 along with servos and pan/tilt tracking is covered in detail inside Raspberry Pi for Computer Vision. If you have questions on those components and algorithms please refer to RPi for CV first.
Hi Adrian,
I got this working out of the box (apart from days tinkering with PID settings), and I really like the setup.
Currently I am building a standalone box for it to act as an “AI showcase” (biiig quotes here), to be placed behind a glass panel adjacent to a door, so the camera ‘follows’ anyone entering, as a gadget. I bought a 7″ screen for this and integrated it in the front side of the box, and the pan-tilt arrangement sits in top.
For this setup, I’d really like the preview window to be larger, say like 600 or 800 pixels wide, or even fill the entirity of the screen.
Can you help me out an tell me how I go about that / give me some pointers / blatantly tell me where to edit in the 600 or 800?
Much obliged, thanks for your time!
Dear Adrian,
Is it possible using this tutorial and all code with any I2C driven servo hats? I am pretty new to raspberry pi and would like to use this as a start to get in to opencv and driving multiple servos. I had this board in mind AZDelivery PCA9685.
Thank you so much for this tutorial
Werner
you can set the resolution in vs = VideoStream(usePiCamera=True,vflip=True,resolution=(640,480)).start()
Set the resolution to your desired resolution BUT!!! performance goes way down on any thing larger tan the default.
Also, note I have set vflip =true if you do this you should
remove line 45 frame = cv2.flip(frame, 0)
Setting the camera to flip does not add cpu cycles while CV2 flip on every frame is cpu intensive.
Hi Adrian. fantastic tutorial. I have been following your blogs. Each one of them is great! However with this blog facing some issues. While face tracking is happening but camera is always facing down from the start itself. Hence the only option is too position camera above working location height and it is very inconvinient. Can you please help me with code change that I need to do camera vertical tilt can be set upright? Much appreciate your help in anticipation.
Hi Adrian,
Do you have any info how to use this code without Pimerone (only servos)?
I have bought RPi for CV and Gurus but there is no more info than here.
// Jussi
Hey Jussi — thanks for picking up a copy of both the PyImageSearch Gurus course and Raspberry Pi for Computer Vision. Can you send me an email so we can discuss there? I don’t want the blog post comments section to get too off track.
I am Facing the same problem, can you please provide me the solution to it ??
Your install opencv uses a virtenv of cv and you use py3cv4 in this one.
I am using buster and the ln for smbus is not 35. I have a choice of 36 or 37. Which one?
In general, is this exercise going to work with buster?
Thanx
I would use whichever Python version your Python virtual environment is using. And yes, this exercise will work with Buster.
I want to detect any other object rather than my face what changes should be made to the code can you please suggest
Hi Adrian, how can i resize the frame? i think the frame is too small for me 🙁
oh, i find it, thanks a lot !
https://answers.opencv.org/question/84985/resizing-the-output-window-of-imshow-function/
Can you make this camera zoom in? I am looking to replicate something like the Pixio camera to take videos while I am riding my horse in an indoor arena. I plan to try this with my Raspberry Pi! Any help or suggestions would be appreciated!
Dear Adrian,
a great project !! Very well structured and well explained.
I learned a lot about multiprocessing and PID control also
by slightly modifying it.
..and just by the way..it works !
Thanks for sharing
Peter
Thanks Peter! And congrats on a successful project!
I just want to thank you for the PID information and function. I’ve been working on a panning function, with the intention of a robot being able to turn to you (not necessarily constantly tracking, but to engage you noticeably when you face it, or perhaps when you speak), and I had not problem to integrate the PID into making the movement more graceful.
Thanks Karen, I’m glad you found the tutorial helpful! 🙂
Hi Adrian
Hope you’re having a great day. First of all- fantastic tutorial. I really appreciate al the help you’re offering here on your website. I do have a question though…
I was wondering if it’s safe to drive the servos directly from the Raspberry Pi. They do seem to draw a lot of current.
Would it be possible to connect only the PWM wires to the Pan-Tilt HAT, and connect the remaining 5V and GND wires to an external source? Would such an arrangement work?
Would really appreciate an answer.
Thank you very much.
Kind regards
Rob
would there be a may to send the servo commands to dynamixel servos there is a package called pydynamixel ?
is there a way to run the pan/tilt at boot ?
Yes, follow this guide.
Hello Adrian! Thank you a lot for this tutorial, once again you did an amazing job!
I noticed that in the first image the camera movement is responding a bit slow.
Is there a way to accelerate it?
Would I be able to track something way faster, such as a tennis ball?
Thanks
@Agapi,
Try here: https://towardsdatascience.com/real-time-object-tracking-with-tensorflow-raspberry-pi-and-pan-tilt-hat-2aeaef47e134
Hi Adrian,
A very interesting project that is well documented and professional.
Question for you: The Pimoroni Servo Driver HAT does not use the PCA9685 Servo Driver chip like the Sparkfun Servo Driver does, therefore it is not possible to duplicate your project without purchasing the Pimoroni Servo Driver HAT which is presently out of stock.
I would like to know if you would be willing to branch your pan_tilt_tracking Python code to use the adafruit-pca9685 pan/tilt driver library in place of the Pimoroni pantilthat library?
An example of a Pimoroni Pan/Tilt Face Tracker that uses the adafruit-pca9685 servo driver library can be found here: https://github.com/RogueM/PanTiltFacetracker
The code is very similar to yours except that it lacks your PID code which is a significant control improvement over the original RogueM project code.
I would attempt the code conversion, but I am just a Python beginner and really learn well with examples if you have the time.
Regards,
TCIII
Thanks Thomas. I’m more than happy to provide these tutorials for free, including keeping them updated the best I can, but I cannot take on any additional customizations to a project — I would leave such an exercise to you (or any) reader who as an educational exercise.