You may have noticed that over the past couple of weeks we have been using a special Python package called face_recognition quite a bit on the PyImageSearch blog:
- We first used it to build a face recognition system
- We then applied face recognition to the Raspberry Pi
- And most recently we clustered faces to find unique individuals in a dataset
Without both (1) the face_recognition module and (2) the dlib library, creating these face recognition applications would not be possible.
Today, I am pleased to share an interview with Adam Geitgey, the creator of the face_recognition library.
Inside the interview Adam discusses:
- How and why he created the face_recognition Python module
- Using RNNs to generate new Super Mario Bros video game levels
- His favorite tools and libraries of choice
- His advice to you, as a PyImageSearch reader, on how to get started studying both computer vision and deep learning
- His work with deep learning-based segmentation, self-driving cars, and satellite/aerial photography
To check out the full interview with Adam, just keep reading!
An interview with Adam Geitgey, creator of the face_recognition Python library
Adrian: Hey Adam! Thank you for being here today. It’s wonderful to have you here on the PyImageSearch blog. For people who do not know you, who are you and what do you do?
Adam: Hi Adrian! Thanks for having me!
I’ve been programming since I was a little kid. I started teaching myself Basic and Pascal from whatever books I could find when I was seven. So I’ve been interested in programming pretty much as long as I can remember.
That led me to eventually working on software in all kinds of industries – from 3D CAD software in China to Silicon Valley start-ups. For the last year and a half, I’ve been creating machine learning educational courses for LinkedIn Learning and doing consulting projects. Most recently I’ve been working with a team on an AI project for the Bill and Melinda Gates Foundation.
That’s a long way of saying I’m not someone who specifically studied machine learning in college. I have a more traditional computer science and software engineering background and I got into machine learning later in my career.
Adrian: How did you first become interested in machine learning and computer vision?
Adam: When I was in college, I wasn’t that interested in what was called “AI” then. This was around 2000, several years before GPUs made large neural networks practical. At that point, the whole AI field seemed like a Sci-Fi fantasy to me, and I was a lot more interested in how to use computers to solve more immediate problems.
But a few years later, I read Peter Norvig’s “How to Write a Spelling Corrector” article. It explains the idea of using probability to solve a problem that would otherwise be really hard to solve. This totally blew my mind! I felt like I had been missing out on a whole parallel world of programming and I started learning everything I could. That led to machine learning, computer vision, natural language processing and everything else.
Adrian: Your Machine Learning is Fun! series on your blog is excellent — what inspired you to create it?
Adam: Whenever I am learning something new, whether it’s photography or music theory or machine learning, I get frustrated when I feel like the learning materials could be simpler and more direct.
People who are experts in a field tend to write for other future experts. But it’s much more challenging and fun for me to try to reduce a topic to its bare essentials and present it in a way that anyone could understand. I get obsessed with trying to break down complex topics into their simplest forms. It probably has a lot to do with how I learn – I like to understand the big picture before I dig into the details.
For any programmers out there, here’s some free career advice – If you can write really clear and simple emails that explain the problems your team is working on, that alone will get you very far in your career in a large tech company.
Adrian: What is your favorite article you have written at Machine Learning is Fun! and why?
Adam: I love the idea of using computers as a tool for creativity and exploring the line between human-generated art and computer-generated art. I had a lot of fun writing the article on using RNNs to generate new Super Mario Bros video game levels.
Adrian: Your face_recognition library is one of the most popular facial recognition libraries on GitHub. I even used it in the past three blog posts! Can you tell us a bit more about the process behind creating it?
Adam: Almost every week, exciting new research papers are coming out and a lot of them include working code. It’s an amazing time to be a programmer! The problem is that most research code is written as a one-off experiment. It usually has to be re-written before you can use it in a production application.
I was looking for a good solution for face recognition in Python, but everything I found was cobbled together from several different programming languages and required downloading extra files. I had written an article about how face recognition works, but the majority of questions I got were from readers who couldn’t get the libraries installed correctly. I wanted something that was as simple to deploy as “pip install face_recognition”.
Around that time, Davis King updated his excellent dlib library to include a new face recognition model. It seemed like a perfect fit for what I wanted to do. Since Davis had already done most of the hard work, all I had to do was package it up with an API that I liked, write a bunch of documentation and examples and get it hosted on pip. Honestly the hardest part of the whole process was convincing the folks who run pip to host it since it was larger than the maximum file size they typically allow.
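To give a sense of how simple the resulting API is, here is a minimal sketch of comparing two faces with face_recognition (the image filenames are placeholders for illustration):

```python
# Minimal face comparison sketch with the face_recognition package.
# The filenames below are placeholders, not files from this post.
import face_recognition

# Load the images and compute 128-d face encodings (dlib does the heavy lifting)
known_image = face_recognition.load_image_file("known_person.jpg")
unknown_image = face_recognition.load_image_file("unknown_person.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# compare_faces returns one True/False result per known encoding
matches = face_recognition.compare_faces([known_encoding], unknown_encoding)
print("Same person?", matches[0])
```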
Adrian: What is the coolest/neatest project you’ve seen your face_recognition library used for?
Adam: People have used it to build all kinds of neat stuff, like tools that use face recognition to automatically take attendance in their classroom. But the coolest to me was a user who used the encodings generated by the face recognition model as input to train an emotion detection model. Transfer learning is really cool and I love the idea of leveraging existing models to build new models.
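The transfer learning idea Adam mentions can be sketched in just a few lines: treat the 128-d face encodings as features and fit an ordinary classifier on top of them. The image paths and emotion labels below are hypothetical placeholders, not the actual project described above:

```python
# Hypothetical sketch: reuse face_recognition's 128-d encodings as features
# for a separate emotion classifier (the data here is made up for illustration).
import face_recognition
from sklearn.svm import SVC

def encode(path):
    image = face_recognition.load_image_file(path)
    return face_recognition.face_encodings(image)[0]

# Placeholder labeled training images
train_paths = ["happy_01.jpg", "sad_01.jpg", "happy_02.jpg", "sad_02.jpg"]
train_labels = ["happy", "sad", "happy", "sad"]

features = [encode(p) for p in train_paths]
classifier = SVC(kernel="linear").fit(features, train_labels)

# Predict the emotion of a new face from its encoding alone
print(classifier.predict([encode("new_face.jpg")]))
```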
Adrian: What are your tools and libraries of choice?
Adam: I’ve been coding with Python on and off for around 20 years and I use it for all my machine learning projects. The best part is all the amazing libraries!
Some of my favorite libraries are Keras for training new neural networks, spaCy for natural language processing and fastText for quickly building text classification models. And knowing how to use numpy and Pandas really well will save you many hours of work cleaning up data.
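As a tiny illustration of the kind of data clean-up pandas makes quick, here is a hedged sketch (the CSV file and column names are placeholders, not from the interview):

```python
# A small, hypothetical example of routine data clean-up with pandas.
import pandas as pd

df = pd.read_csv("faces.csv")                       # placeholder dataset
df = df.dropna(subset=["image_path"])               # drop rows missing an image path
df["label"] = df["label"].str.strip().str.lower()   # normalize label text
print(df["label"].value_counts())                   # quick check of class balance
```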
Also, I really like the recent changes in Python 3.6 and 3.7. If you are still using Python 2, it’s definitely time to upgrade!
Adrian: What’s next? What type of projects do you have coming down the pipeline?
Adam: I’m working on a new series of articles on Natural Language Processing. If you want to learn how to understand written text with computers, keep an eye on my site for new posts!
Adrian: Do you have any advice for readers who are just getting started studying both computer vision and deep learning?
Adam: If you are brand new to the Python programming language, spend a little time learning the syntax and language constructs well. It will save you a lot of time in the long run. Many of the questions I get are really just Python syntax questions or misunderstandings. It’s a very different language than C++ or Java.
Beyond that, learn the basic concepts of training a model and then just jump in and try it out on a project that you are interested in! You’ll learn a lot more by doing it than reading about it.
Adrian: You’ll be speaking at PyImageConf 2018, PyImageSearch’s very own computer vision and deep learning conference — we’re really excited to have you! Can you tell us a bit more about what your talk is going to be on?
Adam: I’m going to be talking about how to do image segmentation. Image segmentation is where you take a photograph and not only identify the objects in it, but actually trace lines around each object.
There are tons of potential uses for image segmentation. For example, a self-driving car needs to be able to point a camera at the road and identify each pedestrian with high accuracy, and image segmentation is one way to do that. You can also imagine a future version of Photoshop being really good at extracting objects from images by being able to automatically trace out each object on its own.
But to me, the most exciting use of image segmentation is processing satellite and aerial photography. With image segmentation, we can actually feed in raw satellite photos and then have computers automatically trace the outline of each building and trace each road. This is work that used to be done by hand by thousands of people and cost millions of dollars.
This technology has the potential to change the entire mapping industry. For example, OpenStreetMap is an amazing project where volunteers are trying to map the entire world by using their local knowledge to trace satellite images and annotate GPS tracks. But as image segmentation technology becomes more accessible, computers will be able to do 80% of the grunt work of tracing the maps and then humans will only have to clean up and annotate the results. Image segmentation technology has the potential to eventually speed up the process so much that the goal would no longer be just to map the entire world, but instead to see how fast we can map the entire world based on the latest satellite images.
Access to high quality map data of remote areas is critical to humanitarian and medical teams. Better availability of maps of where remote populations are living has the potential to save lives.
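For readers who want to experiment with image segmentation before the talk, here is a minimal sketch using an off-the-shelf pretrained model from torchvision; this is only an illustration of the idea, not the specific models or tooling from Adam’s talk, and the input filename is a placeholder:

```python
# Minimal semantic segmentation sketch with a pretrained DeepLabV3 model.
# This is an illustrative example, not the approach described in the talk.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.segmentation.deeplabv3_resnet50(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("street_scene.jpg").convert("RGB")  # placeholder image
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    output = model(batch)["out"][0]

# Per-pixel class IDs (e.g., person, car, background)
mask = output.argmax(0)
print(mask.shape, torch.unique(mask))
```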
Adrian: If a PyImageSearch reader wants to connect with you, where is the best place to reach out?
Adam: You can read all my articles at https://www.machinelearningisfun.com/ and you can hit me up on Twitter at @ageitgey. Thanks!
What's next? We recommend PyImageSearch University.
86 total classes • 115+ hours of on-demand code walkthrough videos • Last updated: October 2024
★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled
I strongly believe that if you had the right teacher you could master computer vision and deep learning.
Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science?
That’s not the case.
All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.
If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery.
Inside PyImageSearch University you'll find:
- ✓ 86 courses on essential computer vision, deep learning, and OpenCV topics
- ✓ 86 Certificates of Completion
- ✓ 115+ hours of on-demand video
- ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques
- ✓ Pre-configured Jupyter Notebooks in Google Colab
- ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!)
- ✓ Access to centralized code repos for all 540+ tutorials on PyImageSearch
- ✓ Easy one-click downloads for code, datasets, pre-trained models, etc.
- ✓ Access on mobile, laptop, desktop, etc.
Summary
In today’s blog post, we interviewed Adam Geitgey, computer vision and deep learning practitioner, author of the popular Machine Learning is Fun! blog series, and creator of the highly popular face_recognition library.
Please take a second to thank Adam for taking the time to do the interview.
To be notified when future blog posts and interviews are published here on PyImageSearch, just be sure to enter your email address in the form below, and I’ll be sure to keep you in the loop.