In this tutorial, you will learn how to automatically detect COVID-19 in a hand-created X-ray image dataset using Keras, TensorFlow, and Deep Learning.
Like most people in the world right now, I’m genuinely concerned about COVID-19. I find myself constantly analyzing my personal health and wondering if/when I will contract it.
The more I worry about it, the more it turns into a painful mind game of legitimate symptoms combined with hypochondria:
- I woke up this morning feeling a bit achy and run down.
- As I pulled myself out of bed, I noticed my nose was running (although it’s now reported that a runny nose is not a symptom of COVID-19).
- By the time I made it to the bathroom to grab a tissue, I was coughing as well.
At first, I didn’t think much of it — I have pollen allergies and due to the warm weather on the eastern coast of the United States, spring has come early this year. My allergies were likely just acting up.
But my symptoms didn’t improve throughout the day.
I’m actually sitting here, writing this tutorial, with a thermometer in my mouth; glancing down, I see that it reads 99.4° Fahrenheit.
My body runs a bit cooler than most, typically in the 97.4°F range. Anything above 99°F is a low-grade fever for me.
Cough and low-grade fever? That could be COVID-19…or it could simply be my allergies.
It’s impossible to know without a test, and that “not knowing” is what makes this situation so scary from a visceral human level.
As humans, there is nothing more terrifying than the unknown.
Despite my anxieties, I try to rationalize them away. I’m in my early 30s, very much in shape, and my immune system is strong. I’ll quarantine myself (just in case), rest up, and pull through just fine — COVID-19 doesn’t scare me from my own personal health perspective (at least that’s what I keep telling myself).
That said, I am worried about my older relatives, including anyone with pre-existing conditions, or those in a nursing home or hospital. They are vulnerable, and it would be truly devastating to lose them to COVID-19.
Instead of sitting idly by and letting whatever is ailing me keep me down (be it allergies, COVID-19, or my own personal anxieties), I decided to do what I do best — focus on the overall CV/DL community by writing code, running experiments, and educating others on how to use computer vision and deep learning in practical, real-world applications.
That said, I’ll be honest, this is not the most scientific article I’ve ever written. Far from it, in fact. The methods and datasets used would not be worthy of publication. But they serve as a starting point for those who need to feel like they’re doing something to help.
I care about you and I care about this community. I want to do what I can to help — this blog post is my way of mentally handling a tough time, while simultaneously helping others in a similar situation.
I hope you see it as such.
Inside of today’s tutorial, you will learn how to:
- Sample an open source dataset of X-ray images for patients who have tested positive for COVID-19
- Sample “normal” (i.e., not infected) X-ray images from healthy patients
- Train a CNN to automatically detect COVID-19 in X-ray images via the dataset we created
- Evaluate the results from an educational perspective
Disclaimer: I’ve hinted at this already, but I’ll say it explicitly here. The methods and techniques used in this post are meant for educational purposes only. This is not a scientifically rigorous study, nor will it be published in a journal. This article is for readers who (1) are interested in Computer Vision/Deep Learning and want to learn via practical, hands-on methods, and (2) are inspired by current events. I kindly ask that you treat it as such.
To learn how you could detect COVID-19 in X-ray images by using Keras, TensorFlow, and Deep Learning, just keep reading!
Detecting COVID-19 in X-ray images with Keras, TensorFlow, and Deep Learning
In the first part of this tutorial, we’ll discuss how COVID-19 could be detected in chest X-rays of patients.
From there, we’ll review our COVID-19 chest X-ray dataset.
I’ll then show you how to train a deep learning model using Keras and TensorFlow to predict COVID-19 in our image dataset.
Disclaimer
This blog post on automatic COVID-19 detection is for educational purposes only. It is not meant to be a reliable, highly accurate COVID-19 diagnosis system, nor has it been professionally or academically vetted.
My goal is simply to inspire you and open your eyes to how studying computer vision/deep learning and then applying that knowledge to the medical field can make a big impact on the world.
Simply put: You don’t need a degree in medicine to make an impact in the medical field — deep learning practitioners working closely with doctors and medical professionals can solve complex problems, save lives, and make the world a better place.
My hope is that this tutorial inspires you to do just that.
But with that said, researchers, journal curators, and peer review systems are being overwhelmed with submissions containing COVID-19 prediction models of questionable quality. Please do not take the code/model from this post and submit it to a journal or Open Science — you’ll only add to the noise.
Furthermore, if you intend to perform research using this post (or any other COVID-19 article you find online), make sure you refer to the TRIPOD guidelines on reporting predictive models.
As you’re likely aware, artificial intelligence applied to the medical domain can have very real consequences. Only publish or deploy such models if you are a medical expert, or closely consulting with one.
How could COVID-19 be detected in X-ray images?
COVID-19 tests are currently hard to come by — there are simply not enough of them and they cannot be manufactured fast enough, which is causing panic.
When there’s panic, there are nefarious people looking to take advantage of others, namely by selling fake COVID-19 test kits after finding victims on social media platforms and chat applications.
Given that there are limited COVID-19 testing kits, we need to rely on other diagnosis measures.
For the purposes of this tutorial, I decided to explore X-ray images, as doctors frequently use X-rays and CT scans to diagnose pneumonia, lung inflammation, abscesses, and/or enlarged lymph nodes.
Since COVID-19 attacks the epithelial cells that line our respiratory tract, we can use X-rays to analyze the health of a patient’s lungs.
And given that nearly all hospitals have X-ray imaging machines, it could be possible to use X-rays to test for COVID-19 without the dedicated test kits.
A drawback is that X-ray analysis requires a radiology expert and takes significant time — which is precious when people are sick around the world. Therefore, developing an automated analysis system is required to save medical professionals valuable time.
Note: There are newer publications that suggest CT scans are better for diagnosing COVID-19, but all we have to work with for this tutorial is an X-ray image dataset. Secondly, I am not a medical expert and I presume there are other, more reliable, methods that doctors and medical professionals will use to detect COVID-19 outside of the dedicated test kits.
Our COVID-19 patient X-ray image dataset
The COVID-19 X-ray image dataset we’ll be using for this tutorial was curated by Dr. Joseph Cohen, a postdoctoral fellow at the University of Montreal.
One week ago, Dr. Cohen started collecting X-ray images of COVID-19 cases and publishing them in the following GitHub repo.
Inside the repo you’ll find examples of COVID-19 cases, as well as MERS, SARS, and ARDS.
In order to create the COVID-19 X-ray image dataset for this tutorial, I:
- Parsed the `metadata.csv` file found in Dr. Cohen’s repository.
- Selected all rows that are:
  - Positive for COVID-19 (i.e., ignoring MERS, SARS, and ARDS cases).
  - The posteroanterior (PA) view of the lungs. I used the PA view because, to my knowledge, that was the view used for my “healthy” cases, as discussed below; however, I’m sure a medical professional will be able to clarify and correct me if I am incorrect (which I very well may be; this is just an example).
In total, that left me with 25 X-ray images of positive COVID-19 cases (Figure 2, left).
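My actual `build_covid_dataset.py` script is included in the downloads and isn’t reviewed here, but a minimal sketch of that filtering could look like the following (the `finding`, `view`, and `filename` column names and the `images/` directory reflect my reading of Dr. Cohen’s repo; treat them as assumptions and verify against `metadata.csv` yourself):

```python
# hypothetical sketch of the metadata filtering (not the actual script)
import os
import shutil

import pandas as pd

df = pd.read_csv("metadata.csv")

# keep only COVID-19 positive cases imaged in the PA view
covid = df[(df["finding"] == "COVID-19") & (df["view"] == "PA")]

# copy each matching X-ray into our dataset directory
os.makedirs("dataset/covid", exist_ok=True)
for filename in covid["filename"]:
	shutil.copy2(os.path.join("images", filename),
		os.path.join("dataset/covid", filename))
```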
The next step was to sample X-ray images of healthy patients.
To do so, I used Kaggle’s Chest X-Ray Images (Pneumonia) dataset and sampled 25 X-ray images from healthy patients (Figure 2, right). There are a number of problems with Kaggle’s Chest X-Ray dataset, namely noisy/incorrect labels, but it served as a good enough starting point for this proof of concept COVID-19 detector.
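Likewise, `sample_kaggle_dataset.py` (also in the downloads) randomly samples the healthy cases; here is a rough sketch, assuming the directory layout of the Kaggle download:

```python
# hypothetical sketch of sampling the "normal" X-rays (not the actual script)
import os
import random
import shutil

random.seed(42)  # make the sampling reproducible

src = "chest_xray/train/NORMAL"  # assumed path inside the Kaggle download
dst = "dataset/normal"
os.makedirs(dst, exist_ok=True)

# randomly choose 25 healthy patient X-rays and copy them over
for filename in random.sample(os.listdir(src), 25):
	shutil.copy2(os.path.join(src, filename), os.path.join(dst, filename))
```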
After gathering my dataset, I was left with 50 total images, equally split with 25 images of COVID-19 positive X-rays and 25 images of healthy patient X-rays.
I’ve included my sample dataset in the “Downloads” section of this tutorial, so you do not have to recreate it.
Additionally, I have included my Python scripts used to generate the dataset in the downloads as well, but these scripts will not be reviewed in this tutorial as they are outside the scope of the post.
Project structure
Go ahead and grab today’s code and data from the “Downloads” section of this tutorial. From there, extract the files and you’ll be presented with the following directory structure:
```
$ tree --dirsfirst --filelimit 10
.
├── dataset
│   ├── covid [25 entries]
│   └── normal [25 entries]
├── build_covid_dataset.py
├── sample_kaggle_dataset.py
├── train_covid19.py
├── plot.png
└── covid19.model

3 directories, 5 files
```
Our coronavirus (COVID-19) chest X-ray data is in the `dataset/` directory, where our two classes of data are separated into `covid/` and `normal/`.
Both of my dataset building scripts are provided; however, we will not be reviewing them today.
Instead, we will review the `train_covid19.py` script which trains our COVID-19 detector.
Let’s dive in and get to work!
Implementing our COVID-19 training script using Keras and TensorFlow
Now that we’ve reviewed our image dataset along with the corresponding directory structure for our project, let’s move on to fine-tuning a Convolutional Neural Network to automatically diagnose COVID-19 using Keras, TensorFlow, and deep learning.
Open up the `train_covid19.py` file in your directory structure and insert the following code:
```python
# import the necessary packages
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import AveragePooling2D
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import argparse
import cv2
import os
```
This script takes advantage of the TensorFlow 2.0 and Keras deep learning libraries via a selection of `tensorflow.keras` imports.
Additionally, we use scikit-learn, the de facto Python library for machine learning, matplotlib for plotting, and OpenCV for loading and preprocessing images in the dataset.
To learn how to install TensorFlow 2.0 (including relevant scikit-learn, OpenCV, and matplotlib libraries), just follow my Ubuntu or macOS guide.
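If you just want the packages, a pip one-liner along these lines should pull in the same stack (a rough sketch; the guides above pin exact, tested versions):

```
$ pip install tensorflow opencv-contrib-python scikit-learn matplotlib imutils
```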
With our imports taken care of, next we will parse command line arguments and initialize hyperparameters:
```python
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required=True,
	help="path to input dataset")
ap.add_argument("-p", "--plot", type=str, default="plot.png",
	help="path to output loss/accuracy plot")
ap.add_argument("-m", "--model", type=str, default="covid19.model",
	help="path to output COVID-19 model")
args = vars(ap.parse_args())

# initialize the initial learning rate, number of epochs to train for,
# and batch size
INIT_LR = 1e-3
EPOCHS = 25
BS = 8
```
Our three command line arguments (Lines 24-31) include:
- `--dataset`: The path to our input dataset of chest X-ray images.
- `--plot`: An optional path to an output training history plot. By default the plot is named `plot.png` unless otherwise specified via the command line.
- `--model`: The optional path to our output COVID-19 model; by default it will be named `covid19.model`.
From there we initialize our initial learning rate, number of training epochs, and batch size hyperparameters (Lines 35-37).
We’re now ready to load and preprocess our X-ray data:
```python
# grab the list of images in our dataset directory, then initialize
# the list of data (i.e., images) and class labels
print("[INFO] loading images...")
imagePaths = list(paths.list_images(args["dataset"]))
data = []
labels = []

# loop over the image paths
for imagePath in imagePaths:
	# extract the class label from the filename
	label = imagePath.split(os.path.sep)[-2]

	# load the image, swap color channels, and resize it to be a fixed
	# 224x224 pixels while ignoring aspect ratio
	image = cv2.imread(imagePath)
	image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
	image = cv2.resize(image, (224, 224))

	# update the data and labels lists, respectively
	data.append(image)
	labels.append(label)

# convert the data and labels to NumPy arrays while scaling the pixel
# intensities to the range [0, 1]
data = np.array(data) / 255.0
labels = np.array(labels)
```
To load our data, we grab all paths to images in the `--dataset` directory (Line 42). Then, for each `imagePath`, we:
- Extract the class `label` (either `covid` or `normal`) from the path (Line 49).
- Load the `image`, and preprocess it by converting to RGB channel ordering and resizing it to 224×224 pixels so that it is ready for our Convolutional Neural Network (Lines 53-55).
- Update our `data` and `labels` lists, respectively (Lines 58 and 59).
We then scale pixel intensities to the range [0, 1] and convert both our `data` and `labels` to NumPy array format (Lines 63 and 64).
Next, we will one-hot encode our `labels` and create our training/testing splits:
```python
# perform one-hot encoding on the labels
lb = LabelBinarizer()
labels = lb.fit_transform(labels)
labels = to_categorical(labels)

# partition the data into training and testing splits using 80% of
# the data for training and the remaining 20% for testing
(trainX, testX, trainY, testY) = train_test_split(data, labels,
	test_size=0.20, stratify=labels, random_state=42)

# initialize the training data augmentation object
trainAug = ImageDataGenerator(
	rotation_range=15,
	fill_mode="nearest")
```
One-hot encoding of `labels` takes place on Lines 67-69, meaning that our data will be in the following format:
```
[[0. 1.]
 [0. 1.]
 [0. 1.]
 ...
 [1. 0.]
 [1. 0.]
 [1. 0.]]
```
Each encoded label consists of a two-element array with one of the elements being “hot” (i.e., `1`) versus “not” (i.e., `0`).
Lines 73 and 74 then construct our data split, reserving 80% of the data for training and 20% for testing.
In order to ensure that our model generalizes, we perform data augmentation by setting the random image rotation setting to 15 degrees clockwise or counterclockwise.
Lines 77-79 initialize the data augmentation generator object.
From here we will initialize our VGGNet model and set it up for fine-tuning:
```python
# load the VGG16 network, ensuring the head FC layer sets are left
# off
baseModel = VGG16(weights="imagenet", include_top=False,
	input_tensor=Input(shape=(224, 224, 3)))

# construct the head of the model that will be placed on top of
# the base model
headModel = baseModel.output
headModel = AveragePooling2D(pool_size=(4, 4))(headModel)
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(64, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(2, activation="softmax")(headModel)

# place the head FC model on top of the base model (this will become
# the actual model we will train)
model = Model(inputs=baseModel.input, outputs=headModel)

# loop over all layers in the base model and freeze them so they will
# *not* be updated during the first training process
for layer in baseModel.layers:
	layer.trainable = False
```
Lines 83 and 84 instantiate the VGG16 network with weights pre-trained on ImageNet, leaving off the FC layer head.
From there, we construct a new fully-connected layer head consisting of `POOL => FC => SOFTMAX` layers (Lines 88-93) and append it on top of VGG16 (Line 97).
We then freeze the `CONV` weights of VGG16 such that only the `FC` layer head will be trained (Lines 101 and 102); this completes our fine-tuning setup.
We’re now ready to compile and train our COVID-19 (coronavirus) deep learning model:
```python
# compile our model
print("[INFO] compiling model...")
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
model.compile(loss="binary_crossentropy", optimizer=opt,
	metrics=["accuracy"])

# train the head of the network
print("[INFO] training head...")
H = model.fit_generator(
	trainAug.flow(trainX, trainY, batch_size=BS),
	steps_per_epoch=len(trainX) // BS,
	validation_data=(testX, testY),
	validation_steps=len(testX) // BS,
	epochs=EPOCHS)
```
Lines 106-108 compile the network with learning rate decay and the `Adam` optimizer. Given that this is a 2-class problem, we use `"binary_crossentropy"` loss rather than categorical cross-entropy.
To kick off our COVID-19 neural network training process, we make a call to Keras’ fit_generator method, while passing in our chest X-ray data via our data augmentation object (Lines 112-117).
Next, we’ll evaluate our model:
```python
# make predictions on the testing set
print("[INFO] evaluating network...")
predIdxs = model.predict(testX, batch_size=BS)

# for each image in the testing set we need to find the index of the
# label with corresponding largest predicted probability
predIdxs = np.argmax(predIdxs, axis=1)

# show a nicely formatted classification report
print(classification_report(testY.argmax(axis=1), predIdxs,
	target_names=lb.classes_))
```
For evaluation, we first make predictions on the testing set and grab the prediction indices (Lines 121-125).
We then generate and print out a classification report using scikit-learn’s helper utility (Lines 128 and 129).
Next we’ll compute a confusion matrix for further statistical evaluation:
```python
# compute the confusion matrix and use it to derive the raw
# accuracy, sensitivity, and specificity
cm = confusion_matrix(testY.argmax(axis=1), predIdxs)
total = sum(sum(cm))
acc = (cm[0, 0] + cm[1, 1]) / total
sensitivity = cm[0, 0] / (cm[0, 0] + cm[0, 1])
specificity = cm[1, 1] / (cm[1, 0] + cm[1, 1])

# show the confusion matrix, accuracy, sensitivity, and specificity
print(cm)
print("acc: {:.4f}".format(acc))
print("sensitivity: {:.4f}".format(sensitivity))
print("specificity: {:.4f}".format(specificity))
```
Here we:
- Generate a confusion matrix (Line 133)
- Use the confusion matrix to derive the accuracy, sensitivity, and specificity (Lines 135-137) and print each of these metrics (Lines 141-143)
We then plot our training accuracy/loss history for inspection, outputting the plot to an image file:
```python
# plot the training loss and accuracy
N = EPOCHS
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy on COVID-19 Dataset")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
plt.savefig(args["plot"])
```
Finally, we serialize our `tf.keras` COVID-19 classifier model to disk:
```python
# serialize the model to disk
print("[INFO] saving COVID-19 detector model...")
model.save(args["model"], save_format="h5")
```
Training our COVID-19 detector with Keras and TensorFlow
With our `train_covid19.py` script implemented, we are now ready to train our automatic COVID-19 detector.
Make sure you use the “Downloads” section of this tutorial to download the source code, COVID-19 X-ray dataset, and pre-trained model.
From there, open up a terminal and execute the following command to train the COVID-19 detector:
```
$ python train_covid19.py --dataset dataset
[INFO] loading images...
[INFO] compiling model...
[INFO] training head...
Epoch 1/25
5/5 [==============================] - 20s 4s/step - loss: 0.7169 - accuracy: 0.6000 - val_loss: 0.6590 - val_accuracy: 0.5000
Epoch 2/25
5/5 [==============================] - 0s 86ms/step - loss: 0.8088 - accuracy: 0.4250 - val_loss: 0.6112 - val_accuracy: 0.9000
Epoch 3/25
5/5 [==============================] - 0s 99ms/step - loss: 0.6809 - accuracy: 0.5500 - val_loss: 0.6054 - val_accuracy: 0.5000
Epoch 4/25
5/5 [==============================] - 1s 100ms/step - loss: 0.6723 - accuracy: 0.6000 - val_loss: 0.5771 - val_accuracy: 0.6000
...
Epoch 22/25
5/5 [==============================] - 0s 99ms/step - loss: 0.3271 - accuracy: 0.9250 - val_loss: 0.2902 - val_accuracy: 0.9000
Epoch 23/25
5/5 [==============================] - 0s 99ms/step - loss: 0.3634 - accuracy: 0.9250 - val_loss: 0.2690 - val_accuracy: 0.9000
Epoch 24/25
5/5 [==============================] - 27s 5s/step - loss: 0.3175 - accuracy: 0.9250 - val_loss: 0.2395 - val_accuracy: 0.9000
Epoch 25/25
5/5 [==============================] - 1s 101ms/step - loss: 0.3655 - accuracy: 0.8250 - val_loss: 0.2522 - val_accuracy: 0.9000
[INFO] evaluating network...
              precision    recall  f1-score   support

       covid       0.83      1.00      0.91         5
      normal       1.00      0.80      0.89         5

    accuracy                           0.90        10
   macro avg       0.92      0.90      0.90        10
weighted avg       0.92      0.90      0.90        10

[[5 0]
 [1 4]]
acc: 0.9000
sensitivity: 1.0000
specificity: 0.8000
[INFO] saving COVID-19 detector model...
```
Automatic COVID-19 diagnosis from X-ray image results
Disclaimer: The following section does not claim, nor does it intend to “solve”, COVID-19 detection. It is written in the context, and from the results, of this tutorial only. It is an example for budding computer vision and deep learning practitioners so they can learn about various metrics, including raw accuracy, sensitivity, and specificity (and the tradeoffs we must consider when working with medical applications). Again, this section/tutorial does not claim to solve COVID-19 detection.
As you can see from the results above, our automatic COVID-19 detector is obtaining ~90-92% accuracy on our sample dataset based solely on X-ray images — no other data, including geographical location, population density, etc. was used to train this model.
We are also obtaining 100% sensitivity and 80% specificity (the arithmetic is worked through just after this list), implying that:
- Of patients that do have COVID-19 (i.e., true positives), we could accurately identify them as “COVID-19 positive” 100% of the time using our model.
- Of patients that do not have COVID-19 (i.e., true negatives), we could accurately identify them as “COVID-19 negative” only 80% of the time using our model.
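Both figures fall straight out of the confusion matrix printed above. With `covid` as the positive class, `[[5 0] [1 4]]` says all 5 COVID-19 cases were caught and 1 of the 5 normal cases was flagged as COVID-19, so sensitivity = TP / (TP + FN) = 5 / (5 + 0) = 1.0, specificity = TN / (TN + FP) = 4 / (4 + 1) = 0.8, and raw accuracy = (5 + 4) / 10 = 0.9.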
As our training history plot shows, our network is not overfitting, despite having very limited training data:
Catching every COVID-19 positive case in our test set (100% sensitivity) is great; however, false negatives remain the scariest failure mode — we don’t want to classify someone as “COVID-19 negative” when they are actually “COVID-19 positive”.
In fact, the last thing we want to do is tell a patient they are COVID-19 negative, and then have them go home and infect their family and friends; thereby transmitting the disease further.
Our 80% specificity also means we need to be really careful with our false positive rate — we don’t want to mistakenly classify someone as “COVID-19 positive”, quarantine them with other COVID-19 positive patients, and then infect a person who never actually had the virus.
Balancing sensitivity and specificity is incredibly challenging when it comes to medical applications, especially infectious diseases that can be rapidly transmitted, such as COVID-19.
When it comes to medical computer vision and deep learning, we must always be mindful of the fact that our predictive models can have very real consequences — a missed diagnosis can cost lives.
Again, these results are gathered for educational purposes only. This article and accompanying results are not intended to be a journal article nor does it conform to the TRIPOD guidelines on reporting predictive models. I would suggest you refer to these guidelines for more information, if you are so interested.
Limitations, improvements, and future work
One of the biggest limitations of the method discussed in this tutorial is data.
We simply don’t have enough (reliable) data to train a COVID-19 detector.
Hospitals are already overwhelmed with the number of COVID-19 cases, and given patients’ rights and confidentiality, it becomes even harder to assemble quality medical image datasets in a timely fashion.
I imagine in the next 12-18 months we’ll have more high quality COVID-19 image datasets; but for the time being, we can only make do with what we have.
I have done my best, given my current mental state, physical health, and limited time and resources, to put together a tutorial for my readers who are interested in applying computer vision and deep learning to the COVID-19 pandemic; however, I must remind you that I am not a trained medical expert.
For the COVID-19 detector to be deployed in the field, it would have to go through rigorous testing by trained medical professionals, working hand-in-hand with expert deep learning practitioners. The method covered here today is certainly not such a method, and is meant for educational purposes only.
Furthermore, we need to be concerned with what the model is actually “learning”.
As I discussed in last week’s Grad-CAM tutorial, it’s possible that our model is learning patterns that are not relevant to COVID-19, and instead are just variations between the two data splits (i.e., positive versus negative COVID-19 diagnosis).
It would take a trained medical professional and rigorous testing to validate the results coming out of our COVID-19 detector.
And finally, future (and better) COVID-19 detectors will be multi-modal.
Right now we are using only image data (i.e., X-rays) — better automatic COVID-19 detectors should leverage multiple data sources not limited to just images, including patient vitals, population density, geographical location, etc. Image data by itself is typically not sufficient for these types of applications.
For these reasons, I must once again stress that this tutorial is meant for educational purposes only — it is not meant to be a robust COVID-19 detector.
If you believe that you or a loved one has COVID-19, you should follow the protocols outlined by the Centers for Disease Control and Prevention (CDC), the World Health Organization (WHO), or your local country, state, or jurisdiction.
I hope you enjoyed this tutorial and found it educational. It’s also my hope that this tutorial serves as a starting point for anyone interested in applying computer vision and deep learning to automatic COVID-19 detection.
What’s next?
I typically end my blog posts by recommending one of my books/courses, so that you can learn more about applying Computer Vision and Deep Learning to your own projects. Out of respect for the severity of the coronavirus, I am not going to do that — this isn’t the time or the place.
Instead, what I will say is we’re in a very scary season of life right now.
Like all seasons, it will pass, but we need to hunker down and prepare for a cold winter — it’s likely that the worst has yet to come.
To be frank, I feel incredibly depressed and isolated. I see:
- Stock markets tanking.
- Countries locking down their borders.
- Massive sporting events being cancelled.
- Some of the world’s most popular bands postponing their tours.
- And locally, my favorite restaurants and coffee shops shuttering their doors.
That’s all on the macro-level — but what about the micro-level?
What about us as individuals?
It’s too easy to get caught up in the global statistics.
We see numbers like 6,000 dead and 160,000 confirmed cases (with potentially multiple orders of magnitude more due to lack of COVID-19 testing kits and that some people are choosing to self-quarantine).
When we think in those terms we lose sight of ourselves and our loved ones. We need to take things day-by-day. We need to think at the individual level for our own mental health and sanity. We need safe spaces where we can retreat to.
When I started PyImageSearch over 5 years ago, I knew it was going to be a safe space. I set the example for what PyImageSearch was to become and I still do to this day. For this reason, I don’t allow harassment in any shape or form, including, but not limited to, racism, sexism, xenophobia, elitism, bullying, etc.
The PyImageSearch community is special. People here respect others — and if they don’t, I remove them.
Perhaps one of my favorite displays of kind, accepting, and altruistic human character came when I ran PyImageConf 2018 — attendees were overwhelmed with how friendly and welcoming the conference was.
Dave Snowdon, software engineer and PyImageConf attendee said:
PyImageConf was without a doubt the most friendly and welcoming conference I’ve been to. The technical content was also great! It was a privilege to meet and learn from some of the people who’ve contributed their time to build the tools that we rely on for our work (and play).
David Stone, Doctor of Engineering and professor at Virginia Commonwealth University shared the following:
Thanks for putting together PyImageConf. I also agree that it was the most friendly conference that I have attended.
Why do I say all this?
Because I know you may be scared right now.
I know you might be at your wits’ end (trust me, I am too).
And most importantly, because I want PyImageSearch to be your safe space.
- You might be a student home from school after your semester prematurely ended, disappointed that your education has been put on hold.
- You may be a developer, totally lost after your workplace chained its doors for the foreseeable future.
- You may be a researcher, frustrated that you can’t continue your experiments and authoring that novel paper.
- You might be a parent, trying, unsuccessfully, to juggle two kids and a mandatory “work from home” requirement.
Or, you may be like me — just trying to get through the day by learning a new skill, algorithm, or technique.
I’ve received a number of emails from PyImageSearch readers who want to use this downtime to study Computer Vision and Deep Learning rather than going stir crazy in their homes.
I respect that and I want to help, and to a degree, I believe it is my moral obligation to help how I can:
- To start, there are more than 350 free tutorials you can learn from on the PyImageSearch blog. I publish a new tutorial every Monday at 10AM EST.
- I’ve categorized, cross-referenced, and compiled these tutorials on my “Get Started” page.
- The most popular topics on the “Get Started” page include “Deep Learning” and “Face Applications”.
All these guides are 100% free. Use them to study and learn from.
That said, many readers have also been requesting that I run a sale on my books and courses. At first, I was a bit hesitant about it — the last thing I want is for people to think I’m somehow using the coronavirus as a scheme to “make money”.
But the truth is, being a small business owner who is responsible not only for myself and my family, but also for the lives and families of my teammates, can be terrifying and overwhelming at times — people’s lives, including small businesses, will be destroyed by this virus.
To that end, just like:
- Bands and performers are offering discounted “online only” shows
- Restaurants are offering home delivery
- Fitness coaches are offering training sessions online
…I’ll be following suit.
Starting tomorrow I’ll be running a sale on PyImageSearch books. This sale isn’t meant for profit, and it certainly wasn’t planned (I’ve spent my entire weekend, sick, trying to put all this together).
Instead, it’s a sale to help people, like me (and perhaps like yourself), who are struggling to find their safe space during this mess. Let myself and PyImageSearch become your retreat.
I typically only run one big sale per year (Black Friday), but given how many people are requesting it, I believe it’s something that I need to do for those who want to use this downtime to study and/or as a distraction from the rest of the world.
Feel free to join in or not. It’s totally okay. We all process these tough times in our own ways.
But if you need rest, if you need a haven, if you need a retreat through education — I’ll be here.
Thank you and stay safe.
Summary
In this tutorial you learned how you could use Keras, TensorFlow, and Deep Learning to train an automatic COVID-19 detector on a dataset of X-ray images.
High quality, peer reviewed image datasets for COVID-19 don’t exist (yet), so we had to work with what we had, namely Joseph Cohen’s GitHub repo of open-source X-ray images:
- We sampled 25 images from Cohen’s dataset, taking only the posteroanterior (PA) view of COVID-19 positive cases.
- We then sampled 25 images of healthy patients using Kaggle’s Chest X-Ray Images (Pneumonia) dataset.
From there we used Keras and TensorFlow to train a COVID-19 detector that was capable of obtaining 90-92% accuracy on our testing set with 100% sensitivity and 80% specificity (given our limited dataset).
Keep in mind that the COVID-19 detector covered in this tutorial is for educational purposes only (refer to my “Disclaimer” at the top of this tutorial). My goal is to inspire deep learning practitioners, such as yourself, and open your eyes to how deep learning and computer vision can make a big impact on the world.
I hope you enjoyed this blog post.
To download the source code to this post (including the pre-trained COVID-19 diagnosis model), just enter your email address in the form below!
Sovit Ranjan Rath
First of all, you said that you are sick. Please take care of yourself and get well soon. Looking at your articles and hard work is one of the most motivating things for me to keep learning and teaching deep learning.
Great article. Learned a ton from it. I have my own machine learning blog and it is nowhere near your level. But I hope someday that I will be able to build it into a livelihood for myself and provide quality ML/DL education at a very affordable price by converting it into a full-fledged website and business. The thing is, ML education in India (my country) is costly. I constantly try to look at free/affordable sources to teach myself. And believe me, your website is at the top of my list for learning deep learning and computer vision. What you are doing for the deep learning community is great and I hope that you continue with the great work. GET WELL SOON.
Adrian Rosebrock
Thank you for the kind words, Sovit. I am indeed resting up. I will get better in time.
Congrats on starting your own machine learning blog! As I’m sure you know, it’s a lot of work and yes, it can feel like a grind in the beginning. Keep working at it, and most importantly, be passionate!
I have no doubt you will get there in time 🙂
John Chumley
Adrian,
Thanks for everything you do. I appreciated this blog and discussed it with my oncologist (I have MDS and am particularly susceptible to COVID-19), who thought that it is a promising technique. Of course, with things moving so fast with COVID-19, alternative tests are now becoming much more accessible. I thought that it may be especially useful to look at deep learning to analyze data from personal fitness trackers such as Fitbit or Apple Watch to predict asymptomatic cases. I understand that there is some research in that regard going on now, but I thought that you may have unique insight.
Adrian Rosebrock
Thanks John. I agree, using more than just computer vision would be helpful here, although I have seen some work done with IR cameras that attempt to estimate a person’s body temperature to see if they are running a fever. That may also be worth exploring, but I think sensor fusion of some sort will give the best results (excluding a dedicated medical test, of course).
Hyo jeong
Thanks for the great article as usual. I’m preparing for graduate school in computer vision and your articles really help me a lot. And about COVID, I had a cough, phlegm, sore throat, and fever a week ago, but thankfully those symptoms have disappeared now. I hope you are like me and get better. I live in Seoul, Korea. Today, more than 40 people tested positive in my city. There were ambulances everywhere. Stay safe. I wish we get our daily life back soon.
Adrian Rosebrock
Thank you Hyo Jeong. Please stay safe!!
Rangel Alvarado
Thanks @Adrian, very valuable. People could probably construct an a priori estimator portal for their countries, because test kits are valuable.
Thanks for your know-how.
Adrian Rosebrock
Thanks Rangel. I will leave any further experimentation to the medical experts though — I just hope this tutorial is able to inspire others.
Mahmud Hasan
Thanks a lot for the great tutorial Adrian. I had a hint that you were using Dr. Cohen’s dataset. When I was going through his dataset, I was wondering which pretrained model could be useful in making some good detections. But anyways, great article and thank you so much for your contribution. Hope all of us can make it through these tough times.
Adrian Rosebrock
Thanks Mahmud. I used VGG16 here, but other models can be fine-tuned as well. We ran a few experiments but didn’t see much increase in accuracy between architectures — we just don’t have enough good, reliable data.
David Bonn
Thanks for the post. This is a very weird and spooky time and unless you are extremely old you are unlikely to have experienced anything like what we are going through today. So stay safe and stay well.
Adrian Rosebrock
Thanks David. This is indeed a spooky time. I’ve certainly never experienced anything like this before.
But I hope that all of us adults can learn from this experience, and when it’s all over, apply those lessons to issues beyond disease transmission. I’d really like to see people also take stock of their emotions, reflect on them, and use that information to help them in the future.
I take a stoic attitude towards terrible world events like this:
“Worse things have happened, worse things will.”
I doubt this is the last traumatic event the world will see in our lifetime.
Enrique Blanc
Again thanks for the newsletter and all the content you put out. Currently out of my scope of coding, but it’s insightful and I grab little pebbles here and there. Thanks for the monumental work you put in weekly.
Adrian Rosebrock
Thanks Enrique 🙂
Riaz Sulaimi
This is an awesome and timely tutorial which I can’t wait to try out. Always awesome step-by-step guides from you, Adrian! 🙂
Adrian Rosebrock
Thanks Riaz <3
Amin
Hey Adrian,
I just ran your code on the dataset you provided. However, my results are actually far different from yours. After 25 epochs, I have:
acc: 0.7000,
sensitivity: 1.0000,
specificity: 0.4000.
How come? Is it because of the small amount of training data?
Best,
Amin
Adrian Rosebrock
It’s absolutely due to the small amount of data. Without more (and better) data we really can’t improve on this method, unfortunately.
Aditya Mangalampalli
Amazing tutorial as always! Seeing an email saying that you released a new tutorial always makes my day! It’s amazing to see how quickly people adapt, and using such powerful concepts in computing to save lives is inspiring. Keep up the great work!
And you said you were sick, drink lots of water and rest up!
Can’t wait for the next tutorial!
Adrian Rosebrock
Thank you Aditya!
Ruediger Jungbeck
I understand and appreciate the educational value of your exercise.
But I think you are solving a non problem:
The real problem is not deciding if a patient is sick or not (when he is already experiencing respiratory problems that you can see in an X-ray). The real problem is deciding if a patient that might have no (or mild) symptoms has the virus (because he could infect others). Another problem is detecting if the virus is still there after the patient is cured. A third problem would be making sure that the patient (with symptoms) has COVID and not something else.
So classifying healthy patients (with no respiratory problems) and sick patients (with manifested respiratory problems) is definitely not solving any of these questions.
Adrian Rosebrock
In many ways you are correct, Ruediger. But I would urge you to consider what I’ve stated multiple times throughout the tutorial — this isn’t about building a super accurate coronavirus detector. It’s instead about enabling computer vision and deep learning practitioners to “feel” like they are helping. It’s about giving a purpose. It’s about the emotional and mental health of CV/DL practitioners.
Again, as I’ve said, the technique covered here is not worthy of scientific publication. It’s arguably one of the least scientific blog posts I’ve published.
This tutorial may not save the lives of people who have/will contract COVID-19. But maybe it will help save the lives and mental state of others who didn’t know what to do with themselves otherwise.
Jason
Nevertheless, Mr. Ruediger Jungbeck made a very valuable point. We come here to learn by examples given by leading experts in DL like yourself. After we practice this example, should we think we have accomplished something practical and concrete? What if you designed an experiment with a proper set of control cases (for example, viral vs. non-viral pneumonia, COVID-19 vs. other viral pneumonia)? Wouldn’t that be more educational?
Adrian Rosebrock
Absolutely. And maybe that will indeed be a future post.
But as I already mentioned, such high quality image datasets are not readily available and may not be until after the pandemic is well over, defeating the purpose as to why this tutorial exists in the first place — to help CV/DL practitioners who are going through a hard time during this event.
Perumal
The three problems that you pointed out can be biologically solved with 100% accuracy by a simple RT-PCR based diagnostic test. The primers and probes for these test kits are now open sourced, and major players like Roche, Qiagen, Thermo, and IDT are producing millions of test kits per day. The test costs a quarter of a dollar per sample, and it’s going to get cheaper. That’s a solution with advancements in genomics and diagnostics. Still, we should greatly appreciate these kinds of interdisciplinary studies wherein different technologies are put to use to manage this COVID-19 crisis.
Jason
For COVID-19 diagnosis, RT-PCR was NOT 100% accurate. CT imaging was as valuable as RT-PCR. Please refer to some studies produced by Chinese researchers. See https://pubs.rsna.org/doi/10.1148/radiol.2020200642
Henry Hazan
Agree in part, since it adds another tool for discovery. For the broader problem: https://github.com/henry-hz/digital-quarantine
Marcelo Ratton
Hi,
this is fantastic. But I have a doubt: how can you determine whether the pneumonia is caused by COVID-19 or by another disease?
Kat Lo
Hi Adrian this is great work, as always. What’s the code to use the model to predict on a new xray?
Adrian Rosebrock
Hi Kat — you would need to:
1. Load the trained model from disk (along with the image you want to use for prediction)
2. Preprocess it in the same manner as we did for training (RGB channel ordering and resizing)
3. Make the prediction
This tutorial will help you get started.
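A minimal sketch of those three steps, assuming the preprocessing from this post and the alphabetical class ordering produced by `LabelBinarizer` (`covid` = index 0); the input path is just an example:

```python
# hypothetical prediction sketch (not part of the original post)
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# load the trained COVID-19 detector from disk
model = load_model("covid19.model")

# preprocess the new X-ray exactly as we did for training
image = cv2.imread("new_xray.png")  # example path
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (224, 224))
image = np.expand_dims(image, axis=0) / 255.0  # batch of one, scaled to [0, 1]

# index 0 = covid, index 1 = normal (alphabetical order from LabelBinarizer)
probs = model.predict(image)[0]
print("covid" if np.argmax(probs) == 0 else "normal", probs)
```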
Manthan Admane
Hey Adrian, thanks for this amazing tutorial.
I’m trying to follow up and want to predict on a new image.
Will I have to create a new load_model script specifically for covid19.model to make a prediction?
Adrian Rosebrock
Yes, I would recommend you create a separate Python script that loads the model and the new input image, preprocesses the input image in the same manner we did for training, and then makes a prediction.
Lakshay Goyal
Hey, I’m getting an accuracy of 95-97% on your code by making some changes.
Adrian Rosebrock
Nice job Lakshay. If you feel comfortable with it, please feel free to share your changes/updates with the rest of the community 🙂
Lakshay Goyal
Nothing much; I’ve just changed the pool size to (2, 2).
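(For anyone wondering where that change lands, it is presumably this line from the head construction; a guess, since the exact diff wasn’t shared:)

```python
headModel = AveragePooling2D(pool_size=(2, 2))(headModel)  # was pool_size=(4, 4)
```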
Adrian Rosebrock
Thanks for sharing!
Marc
Thanks for sharing!! I used this fantastic tutorial as a starting point for my own project and indeed, changing the pool size makes a huge difference. @Adrian, do you know why this parameter is so sensitive?
Adrian Rosebrock
A pooling operation reduces the volume size. Large pool sizes will reduce the volume size quicker. How fast you reduce the volume size is dependent on a number of factors, including the depth of your network, how large your input images are, and how many training images you have (in particular if you are training from scratch). I discuss that in more detail inside Deep Learning for Computer Vision with Python.
Chander
Hi Lakshay, can you please share the code changes that raise the accuracy to 95%?
thanks
PaulSaul
I’m really sorry about what you’re going through. Hope you get better. I’m from Kenya and things are starting to get bad here as well. I made a similar pneumonia detection application and surprisingly it can detect COVID-19, but will classify it as pneumonia. Check it out and tell me what you think: https://paulwababu.github.io/radiologyAssistant/
Adrian Rosebrock
Nice job, thanks for sharing!
Veeru Talreja
Hi Adrian, great initiative for solving a major problem using CV/DL. I just have a small doubt. I suppose it is a binary classification problem, so why would you have two elements in the target label? It could very easily just be 1 (‘yes’) or 0 (‘no’), so why would you want to use one-hot encoding? I am a little confused about it. Are there any advantages to using one-hot encoding? We could very well just use sigmoid activation instead of softmax and use binary cross-entropy. Can you please help me understand the benefits of using one-hot encoding for such a problem?
Adrian Rosebrock
You are correct that it’s a binary classification problem. We ran a few experiments with using just a sigmoid activation but results weren’t as good (we even ran the experiments multiple times and averaged the results to verify). I would definitely encourage you to download the code and play with those changes and see what you get.
Daniel Jadi
Amazing and very useful tutorial, very informative article, Mr. ADRIAN. I request you to educate us in front end application through computer vision topics, please excuse me if you have covered it earlier.
Adrian Rosebrock
Hey there Daniel, what do you mean by “in front end application”? Could you clarify?
PaulSaul
Use FASTAPI to deploy your model, check out my pneumonia detection application https://paulwababu.github.io/radiologyAssistant/
Ilyas
Hello,
How could we know it’s caused by COVID-19? Is there any legitimate explanation for that?
Sully
First off, thanks for the cool article! Interesting read and love the enthusiasm towards applying ML to medicine. Sorry to hear you are feeling ill — I wish you a speedy recovery and good health!
My one concern is that it’s possible what you’ve essentially built is a detector for pulmonary edema, not COVID-19. Pulmonary edema is what is being looked at in the X-ray, but the edema is not diagnostic to COVID — plenty of more common illnesses can cause this symptom.
I think it would be cool if you could rerun this experiment with three test sets: healthy, COVID-19 positive, and pneumonia cases that are COVID-19 negative! This would be a good control to see if the model is truly detecting COVID cases, or just detecting fluid in the lungs.
Adrian Rosebrock
Great point and suggestion, thank you Sully.
Antonio
Grazie dell’aiuto Adrian, bellissimo articolo provero a testarlo!! Forza Italia ! Forza Mondo! We can do it!
Thanks for the help Adrian, beautiful article I’ll try to test it !! Strength and courage Italy! Strength and courage World! We can do it!
Adrian Rosebrock
Thanks Antonio, stay safe!
Aras
Good work, but I hope you can increase your dataset, because we cannot predict based on this few images; such systems require more images to be trained and tested on.
Adrian Rosebrock
Indeed, more images are needed — I also really like Sully’s suggestion.
Eloy
Hi Adrian, congratulations, your project looks amazing. I’ve found an article on the early diagnosis of COVID-19 pneumonia using ultrasound.
https://pubs.rsna.org/doi/10.1148/radiol.2020200847
It looks also promising, maybe a similar approach with deep learning could be used here, don’t you think?
Adrian Rosebrock
Thanks for sharing!
Saheed
Thanks Adrian for putting this easy-to-follow tutorial together. Two questions:
How do we know when the sale on your books commences, and how long will it last?
I also want to enquire if you welcome other ML and CV practitioners to publish content on pyimagesearch.com.
Adrian Rosebrock
1. I’ll be publishing the details on the PyImageSearch blog and emailing my newsletter in ~4-5 hours with the sale details.
2. Sorry, I do not allow guest posts on the PyImageSearch blog.
Guillaume
Hi Adrian,
Looking at the two image banks, it seems like they have a within-set family resemblance and between-set differences that are not COVID related. The framing differs between the sets, as do the number of marks left by the specialist, and possibly the contrast. Building on your comment that ‘it’s possible that our model is learning patterns that are not relevant to COVID-19’, I was wondering if the images could either be made more uniform between sets or, going the other way, processed so that variations are introduced in each set.
Adrian Rosebrock
That’s a great point. In general, one of the reasons I said this is not the “highest quality scientific work I’ve done” is because of the dataset itself. There’s a number of problems, including, as you suggested, pretty significant brightness/contrast changes between the images. The network could very well be just learning those differences.
Sideeq Abdwaheed
Wow… thanks for this article; I’m really impressed by the write-up. I’m a first-year computer science student at Obafemi Awolowo University, Nigeria, and my aim is to work in a medical field where there is a need for technology assistance. Your article really enlightened me about some things I needed to know. Thanks a lot, and get well soon.
I’ll try to follow up…
Can you please provide your Twitter handle, if possible?
Adrian Rosebrock
Thank you for the kind words, Sideeq. All of my social media information can be found on my contact page.
Chukwuebuka Ogwo
Please stop it already…!!! You are not detecting COVID-19. What X-rays tell you is pneumonia or emphysema (which is a complication from the virus attacking the lungs and which can also be caused by a variety of other conditions). So, while your code is great and intuitive, your claim is actually misleading.
Thank you.
Hrishikesh
Hey! Great initiative! I have been following you for the last 3 months. Your approach to a problem and the simplicity of the code make it really easy for a beginner like me to learn a lot about this field.
I had a small doubt regarding the code above.
In the snippet,
```python
# convert the data and labels to NumPy arrays while scaling the pixel
# intensities to the range [0, 255]
data = np.array(data) / 255.0
```
Aren’t the intensities being scaled to the range [0, 1]?
Because a grayscale image has intensities in the range [0, 255], so dividing by 255.0 gives the range [0, 1].
Please correct me if I am wrong anywhere.
Adrian Rosebrock
Thank you for catching that! It is indeed a typo. I have updated it now.
Syl
Dear Adrian,
I am new to ML.
Isn’t it overfitting, since you get the F1-score from the validation dataset?
As far as I know, you need to divide the data into three categories: train/val/test.
Adrian Rosebrock
For a reportable experiment, yes, you should. But given the limited quantity (and quality) of the dataset that really wasn’t possible.
Pawel Rolbiecki
Hi Adrian.
Even if you don’t have enough data, or you have a noisy dataset, you can always do self-training with a noisy student. That idea was created for messy datasets like Kaggle’s. This method can yield more reliable data.
And you don’t have to worry about accuracy. Today’s blood COVID-19 tests have roughly 92-96% accuracy. The key here is to make this test common and easy to use. You made the first step. But unfortunately the lungs are attacked in the final stage of this disease.
It is community work. You make the first step. Another person makes the second, building on your research and model. He/she cleans the dataset for the community to fight against COVID-19. Somebody else makes an application.
I have never done self-training myself. But you inspired me to try it on my blog. Thank you again Adrian.
Adrian Rosebrock
Thank you Pawel.
Alistair Yap
It’s not obvious from the Kaggle page, but if you dig up the original source, the Kaggle Pneumonia dataset consists of only pediatric patients (children), so there might be some bias!
One should consider using Stanford’s CheXpert dataset or the NIH Chest X-ray dataset for negative cases instead.
(but note that those contain very few, if any, pediatric cases, so you may want to mix some in anyway)
Adrian Rosebrock
Thanks for sharing that detail, Alistair!
Jordan Bennett
Great point.
Pavlo Sidelov
Please help with a volunteer project.
For example, we have access to all the scanners in our country.
We trained the model.
How can I send new examples to the model for processing from code?
We plan to make an API and let medical workers just send new images for analysis and receive an immediate response.
Please help with the part of the code which actually uses the model, with new images as input, to get a “Positive/Negative” classification response.
Thank you!
Adrian Rosebrock
See my reply to Kat Lo which describes how to accomplish your goal.
Hussain Salih
Take care of yourself please — sad to hear that.
Good project, thanks so much.
Adrian Rosebrock
Thank you <3
mehdi benchoufi
Hi, I am a medical doctor from Paris. Could you give more information about the COVID+ dataset: how many X-ray images of COVID+ patients? Some demographic characteristics of the COVID+ images?
The idea is to ensure that you have some specificity and that the algorithm is not simply detecting ill vs. not ill.
Adrian Rosebrock
Hi Mehdi — I discuss the number of “normal” versus “COVID-19” cases in the “Our COVID-19 patient X-ray image dataset” section of this post. For additional details on the COVID-19 cases, be sure to refer to the official dataset repository.
Ali.S
Your COVID dataset and normal dataset are from different sources. What if the detector is just learning to differentiate the quality of the images rather than the disease? Don’t you think that might be a problem?
Adrian Rosebrock
Yes, absolutely. Read the rest of the comments on this post as I have addressed that concern as well.
Jason
More relevantly, the COVID set is from adults whose ages range from 30 to 70, whereas the normal set is from children. I suspect the detector might be learning the difference between adult and child anatomy.
John Napari
I need a project on automatic vehicle number plate detection using NNs. I need the full project, including source code, in TensorFlow. Ready to negotiate.
Adrian Rosebrock
That’s not really relevant to this post; however, I do cover automatic license plate recognition inside the PyImageSearch Gurus course. I suggest you start there.
Abkul
Thanks for the great inspiring work towards solving this global pandemic.
I was working on a similar problem, but datasets are hard to come by, leading to overfitting.
Keep it up, your article like all the rest is second to none.
Adrian Rosebrock
Thanks Abkul — and good luck with your project!
Jordan Bennett
I foresaw this; since February 9th I have been running an AI-based CT scan initiative:
https://github.com/JordanMicahBennett/SMART-CT-SCAN_BASED-COVID19_VIRUS_DETECTOR/blob/master/README.md
Adrian Rosebrock
Thanks for sharing, Jordan.
Jordan Bennett
1) Just read a bit more, and I noticed you are unwell. Take care.
2) I also added your work to the repository.
3) Is there any chance you could upload to Google Drive? I’m seeing 0 B/s download speed via the AWS link you shared 🙁
Jordan Bennett
Also, as noted in the repository, on Feb 26 Chinese researchers released an online AI-based COVID-19 detection tool, claiming 95%+ accuracy and 94%+ per-image sensitivity.
a) Online tool:
b) Paper that tool was taken from: https://www.medrxiv.org/content/10.1101/2020.02.25.20021568v2
Aryhar
Very good tutorial and article. Very useful for developers doing research on this virus.
Adrian Rosebrock
Thanks Aryhar!
Ramiro M
Hello Adrian!
Before anything, don’t forget to take care of yourself.
I loved the spirit of the exercise! And the code was quite clear too. Since medicine is an area quite delicate about results, I added an interpretability step based on DeepLIFT to your results:
https://i.imgur.com/ccAra0N.jpg
Here we can see some interesting stuff:
* The models are focusing mainly on the top-left bone structure shape.
* In one case, the model focuses on the X-ray text! Interestingly, only 2 X-rays have text in that spot, both in the COVID group.
* Both classes look for the same things, which suggests the model is quite focused.
From that, my reading is that this model is actually finding bias in the data; being quite a big model with so few observations, it seems to be “memorizing” the answers.
I would like to try other interpretability methods to double-check whether they all show the same results, but as you warned (about 7 times… ?), the model requires way more data to remove the bias and should not be used for medical purposes as-is.
In any case, thanks again for taking the time to write all this content, and hang in there.
Adrian Rosebrock
Thanks Ramiro. You may also be interested in both Mahesh Sudhakar and Safwen Naimi who have extended this work to improve performance and include visualizations (such as Grad-CAM) that demonstrate what the model is actually “learning”.
Walid
Great and just in time.
One question: since the images are single-channel, is it mandatory to train with three channels as we are doing here?
Walid
Adrian Rosebrock
Yes, because we are fine-tuning a network that was originally trained on 3-channel images.
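For readers wondering what that means in practice, here is a minimal sketch of feeding a single-channel X-ray to a 3-channel network by replicating the channel (the file name is hypothetical; note that cv2.imread already returns 3 channels by default, which is what the post relies on):
import cv2
import numpy as np
# load the X-ray as true single-channel grayscale
gray = cv2.imread("xray.png", cv2.IMREAD_GRAYSCALE)
gray = cv2.resize(gray, (224, 224))
# replicate the channel 3x so the shape matches what an
# ImageNet-pretrained network such as VGG16 expects
rgb = np.stack([gray] * 3, axis=-1).astype("float32") / 255.0
print(rgb.shape)  # (224, 224, 3)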
Vishant Batta
Hi Adrian,
First of all, wish you a speedy recovery! We need people like you 🙂
This project has been stuck in my mind since I read the title!
I ran it on my machine and have since been tweaking the model and parameters. The results are a bit ambiguous, mostly because of the lack of data; I am looking for more sources and even contacting some hospitals about this. Do you know any other sources?
Patrick
Hi Adrian and PyImageSearch community
I hope you are feeling better. The effort you put into your work, even while not feeling well, is greatly appreciated by us all.
I really enjoyed this post. I have read through your DL4CV book up through the Practitioner Bundle. Using VGG16 for transfer learning was a familiar example. I decided to take what I learned from DL4CV (which is awesome. I highly recommend it to everyone!), and I tried 3 other models:
VGG19, ResNet50 and ResNet50V2.
ResNet50V2 actually performed the best:
[[12 1]
[ 0 14]]
acc: 0.9630 (note same accuracy as VGG16)
sensitivity: 0.9231
specificity: 1.0000
Notice the specificity is 1, and the sensitivity is lower. I think in this case we would rather have a better specificity value – but as you point out – they both have downsides.
Then I read the comment by Sully about testing with pneumonia images. I took the Kaggle dataset and predicted on test/NORMAL and test/PNEUMONIA using ResNet50V2. I wanted to see how the model would classify unseen NORMAL and unseen PNEUMONIA images, hoping the model would classify both as NORMAL in this case.
test/NORMAL was predicted as normal 98.7% of the time and test/PNEUMONIA was predicted as normal 98.5% of the time.
VGG16 was not as good, with a NORMAL accuracy of 80.8% and a PNEUMONIA accuracy of 63.3%.
I just wanted to share my observations and thank you for all you do.
Adrian Rosebrock
Thank you for running these experiments and sharing the results, Patrick!
Jordan Bennett
Hey Patrick.
I’m not really surprised that ResNet performed better.
1) About 7 hours ago, in an email to an ML researcher who wanted to contribute to the COVID-19 detection task, I mentioned that one of my next steps would be to try a residual neural architecture:
(Link removed by spam filter)
2) Those ResNets are amazing. I remember doing a Kaggle contest 4 years ago and seeing my model jump to 76/500 just by switching from a LeNet-style ConvNet to a 20-layer-deep ResNet:
https://github.com/JordanMicahBennett/EJECTION-FRACTION-IRREGULARITY-DETECTION-MODEL
3) I bet we could squeeze out even more performance with an even deeper ResNet, beyond 50 layers?
4) Perhaps it would be nice if you shared your code and weight URLs. Not everyone here can find/compose/modify the required items easily.
For those interested, ResNet50 can be loaded with one line of code via Keras:
https://towardsdatascience.com/understanding-and-coding-a-resnet-in-keras-446d7ff84d33
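To make that concrete, here is a minimal sketch of swapping ResNet50V2 in as the base model; the head mirrors the tutorial’s structure, and the layer sizes are illustrative rather than prescriptive:
from tensorflow.keras.applications import ResNet50V2
from tensorflow.keras.layers import AveragePooling2D, Dense, Dropout, Flatten
from tensorflow.keras.models import Model
# load ResNet50V2 pre-trained on ImageNet, without its classifier head
base = ResNet50V2(weights="imagenet", include_top=False,
                  input_shape=(224, 224, 3))
base.trainable = False  # freeze the base for transfer learning
# attach a small classification head, as in the post
x = AveragePooling2D(pool_size=(4, 4))(base.output)
x = Flatten()(x)
x = Dense(64, activation="relu")(x)
x = Dropout(0.5)(x)
x = Dense(2, activation="softmax")(x)
model = Model(inputs=base.input, outputs=x)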
Parth Purvesh
Hi Patrick, thanks for sharing the results.
Do you mind sharing your ResNet50V2-based code with us? It would be really helpful for a statistical side-by-side evaluation of the two models, and obviously I don’t want to reinvent the wheel right now.
Thanks in advance 🙂
vikas
How can I run this file? There isn’t any test file in this tutorial.
TANVEER
Hi Adrian Rosebrock,
I’m glad to see this article and your dedication to getting early detection of COVID-19 off the ground.
Adrian Rosebrock
Thank you Tanveer.
Abkul
The lung/chest X-ray findings could be the result of TB, pneumonia, chronic bronchitis, and other infections, which may be misleading to say the least.
Why not consider detecting the viral strains themselves, COVID-19 as well as related strains like SARS, so that the focus is actual diagnosis of the presence of the virus in the blood or fluid sample under investigation (compare with malaria diagnosis in a blood sample)?
Jordan Bennett
I don’t know about the malaria portion, but it may really be a good idea to train on SARS-CoV-1 samples if available, as I wouldn’t be surprised if the ground-glass state of COVID-19/SARS-CoV-2 pneumonia patients turns out to be similar to states found in SARS-CoV-1 patients.
Rhinane Hassan
Many thanks Adrian for your tutorial, and I wish you good health.
I tried your tutorial and obtained this result:
              precision    recall  f1-score   support
       covid       1.00      1.00      1.00         5
      normal       1.00      1.00      1.00         5
 avg / total       1.00      1.00      1.00        10
But when I feed the model unseen data from the PNEUMONIA dataset, the predictions are not good.
Could you please explain this?
Adrian Rosebrock
As I mention in the tutorial, we don’t have enough data for the model to learn discriminative, reliable patterns. We need more data to both improve the model and ensure it generalizes better.
Jordan Bennett
Hey Adrian,
In your 50-image dataset, I notice you have viral and bacterial images in the NORMAL folder.
a) Running my regular model on **regular pneumonia test data** gets sensitivity/specificity/accuracy of roughly 89/88/89%, respectively.
b) Running the same model on **your COVID-19 dataset** gets sensitivity/specificity/accuracy of roughly 48/40/40%, respectively.
c) Running the same model on **the COVID-19 dataset with the NORMAL folder containing only normal samples from the Kaggle dataset, instead of the virus and bacteria samples that were mixed into your dataset** gets sensitivity/specificity/accuracy of roughly 96/48/72%, respectively.
*******************
***Questions***
May I ask what the reasoning was for mixing virus/bacteria images into the NORMAL folder? Was it to introduce noise of some form to avoid overfitting?
Although, wouldn’t it make more logical sense to have the virus and bacteria images in the COVID-19 folder, since all of these (COVID-19/viral pneumonia/bacterial pneumonia) seem similar compared to normal lung states, even if that would be somewhat “illegal”?
Adrian Rosebrock
No, there wasn’t any reasoning behind that. If you take a look at the source code for my data sampling (included in the downloads for this post), you’ll see it was purely random sampling. That must have happened by chance.
Tahira
Hi, is this dataset, or the dataset in the GitHub repo, authentic enough for publication purposes?
Hammad
Thanks for this post!
Could you please post some code for detecting COVID-19 via a visualization technique, i.e., highlighting the spots in the X-ray images so we can understand which spots or changes in the X-ray indicate that COVID-19 is present in that image?
Thanks
Adrian Rosebrock
That’s been discussed in a few other comments on this post. You should take a look at my Grad-CAM tutorial.
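For readers who want the gist without leaving this page, here is a compact Grad-CAM sketch in TF2. It assumes the tutorial’s VGG16-based model, whose final convolutional layer is named block5_conv3; adjust the layer name for other architectures:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
def grad_cam(model, image, layer_name="block5_conv3", class_idx=0):
    # map the input to the last conv activations and the predictions
    grad_model = Model(inputs=model.inputs,
                       outputs=[model.get_layer(layer_name).output,
                                model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis].astype("float32"))
        class_score = preds[:, class_idx]
    # channel weights = gradients of the class score, pooled over space
    grads = tape.gradient(class_score, conv_out)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # weighted sum of feature maps, ReLU'd and normalized to [0, 1]
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1)).numpy()
    return cam / (cam.max() + 1e-8)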
JG
Hi Adrian,
Very nice motivation to start working on Machine Learning / Deep Learning (ML/DL), a promising tool.
My main comments on your (statistical) results are:
1) The test images are shown to the model during the training phase (as validation images), so they are not unseen by the model, which is what is required to get a more objective result.
2) I would like to know what happens if you unfreeze the weights of the last VGG16 block (e.g., Block 5); is higher performance expected? Also, should we replace the Adam optimizer with SGD to apply more efficient fine-tuning?
3) I also recommend trying the other transfer learning applications available in Keras, such as VGG19, ResNet50, Xception, NASNetLarge, etc., with larger expectations (but also with more CPU/GPU resources and time required).
Finally, I would suggest you consider leading/promoting an open worldwide project where anyone could contribute, where more X-ray datasets could be added every day, plus other more efficient code, to solve this big and critical worldwide issue of COVID-19.
Anyway, congratulations on this tutorial! It is a big step toward ‘democratizing’ ML/DL tools, in addition to providing the main ML concepts for approaching health diagnosis from images.
Regards,
JG
Adrian Rosebrock
Hi JG,
1. Yes, I would definitely suggest using additional data and creating a test set that consists of images *outside* the original dataset to test for generalization.
2. Those are just hyperparameters you can tune. Whether you use SGD or Adam doesn’t impact the validity of the method. Again, just hyperparameters (see the sketch after this reply).
3. By all means, feel free to take the code and extend it using whatever methods you see fit. PyImageSearch is an educational blog — use the code here as a starting point for your own research.
Thanks again JG!
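As promised above, a minimal sketch of JG’s second experiment: unfreezing the last VGG16 block and recompiling with SGD. The learning rate, momentum, and loss mirror common fine-tuning practice, not a recommendation specific to this dataset:
from tensorflow.keras.optimizers import SGD
# assume `model` is the fine-tuned network from the post; keep the first
# four VGG16 blocks frozen, unfreeze block5 and the classification head
for layer in model.layers:
    frozen = layer.name.startswith(("block1", "block2", "block3", "block4"))
    layer.trainable = not frozen
# recompiling is required after changing the trainable flags; a small
# SGD learning rate updates the unfrozen conv weights only gently
model.compile(loss="binary_crossentropy",
              optimizer=SGD(learning_rate=1e-4, momentum=0.9),
              metrics=["accuracy"])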
JG
Thank you, Adrian, for your tutorial and the work you do for us!
One more specific question I forgot to mention before: why do you not use the ‘ad hoc’ preprocess_input function of VGG16?
namely:
from tensorflow.keras.applications.vgg16 import preprocess_input
It should perform a standardization more ‘adequate’ and complete for any general VGG16 input than merely dividing the image pixels by 255, as you do in the code. Or does it not matter at all?
regards
JG
Adrian Rosebrock
The type of preprocessing/standardization you perform is a hyperparameter you should tune for your own specific dataset and project.
As you noted, VGG16 was trained by performing mean subtraction using the mean RGB values computed on the ImageNet dataset. You may be able to obtain higher accuracy in some situations by performing such mean subtraction versus standard [0, 1] scaling.
That said, the filters learned by networks trained on ImageNet tend to be quite robust and in some cases, you can get away with just [0, 1] scaling.
I typically recommend running an experiment to compare the two. Let the empirical results guide you with your experiments.
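Concretely, the two experiments to compare look like this, where `images` is a stand-in for the post’s RGB image array with values in [0, 255]:
from tensorflow.keras.applications.vgg16 import preprocess_input
# option 1: simple [0, 1] scaling, as used in the post
scaled = images.astype("float32") / 255.0
# option 2: VGG16's own preprocessing (ImageNet mean subtraction and
# RGB -> BGR reordering), matching how the network was originally trained
standardized = preprocess_input(images.astype("float32").copy())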
Adam Milton-Barker
Hi Adrian, you have a typo; it should be “false negative”, not “true negative”:
“Being able to accurately detect COVID-19 with 100% accuracy is great; however, our true negative rate is a bit concerning — we don’t want to classify someone as “COVID-19 negative” when they are “COVID-19 positive”.”
Adrian Rosebrock
Thank you for catching the typo, Adam.
SRM
Hey, amazing tutorial. I just had one question: how do I test individual images on the trained model?
Thanks
Adrian Rosebrock
Hi there — I’ve addressed that question a handful of times in the comments section. Please give them a read.
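In short, the recipe that keeps coming up in those comments is: load the serialized model, repeat the training preprocessing on the new image, and call predict. A condensed sketch (the model path, input file name, and class ordering are assumptions carried over from the tutorial):
import cv2
import numpy as np
from tensorflow.keras.models import load_model
model = load_model("covid19.model")  # assumed path of the serialized model
# same preprocessing as training: BGR -> RGB, 224x224, [0, 1] scaling
image = cv2.imread("example_xray.png")  # hypothetical input file
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (224, 224)).astype("float32") / 255.0
(covid, normal) = model.predict(np.expand_dims(image, axis=0))[0]
print("COVID-19 positive" if covid > normal else "COVID-19 negative")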
Sand Boa
Hey, actually I labeled the images as COVID when I found it in the “findings” column. That gives 140 COVID images and 25 non-COVID images. I tried many models; the accuracy fluctuates between 70% and 85%. I changed the loss function to one from a paper recently released by Google, and I get 95.67% accuracy on the test set. My training set and test set are totally different from everyone else’s!
JS
Why did you select only 25 of the total images? In the metadata.csv file of the images repo there are more than 100 images of COVID-19.
Adrian Rosebrock
At the time of this writing there were only 25 COVID-positive examples of the PA view. The repo has since grown and additional images have been added.
Venkat Mukthineni
Hey Adrian!
Great work. I changed a few parameters (max pooling, activation function, and the optimizer). The accuracy of the model is close to 98%. However, I tried another base model (ResNet50), for which the validation accuracy is only 50%. VGG16 is far better than ResNet50 here.
Adrian Rosebrock
Thanks for sharing the results of your experiments!
ali
Hi Adrian,
Thanks for the post; it’s pretty educational. This is the first time I am looking at a vision problem. I would like to ask you some basic questions to learn more:
1. Why is the image we read via ‘imread’ a tensor rather than a matrix, since it is a grayscale image?
2. Why do we need to swap channels?
3. Is the reason we resize the images to 224×224 that VGG only takes 224×224-resolution images?
4. Why are intensities rescaled to the range [0, 1]?
5. Why don’t we make this a binary classification problem with one sigmoid neuron in the last layer rather than a softmax?
6. Can you share links to the works that extended your work to improve performance and include visualizations? I was not able to find them.
Thank you!
Adrian Rosebrock
Hey Ali — I would recommend you read my Keras tutorial along with Deep Learning for Computer Vision with Python. The answers to your questions are all addressed there.
PTS
Hello Adrian, I hope (as of today) you have recovered your health. Thank you very much for your work and contributions to the ML area. I think we are all trying to find ways to buy time and apply our skills in the global effort to mitigate this pandemic.
I consider your work to be deeply inspiring and I am grateful for this initiative and for having developed this “post” in a state of illness.
All the best.
Adrian Rosebrock
Thank you for the kind words, I really appreciate it 🙂
hsu
Thanks very much for sharing this post. I tried downloading more data from Dr. Cohen’s GitHub and Kaggle (increasing each class to 36 samples) and ran the program. The result shows acc: 1.0000, sensitivity: 1.0000, specificity: 1.0000. Is this result normal?
JeF
Hello,
Thanks for this great article. It’s really informative and helpful.
I wanted to ask: is there any particular reason for not using k-fold (3 or 5) cross-validation in this model? CV helps prevent overfitting, so shouldn’t it be a great/must-have step?
One possible answer I can think of is that CV is used to control the hyperparameters in the case of a CNN, and since we are already using many techniques such as data augmentation and VGG16/ImageNet transfer learning (and may also add dropout), we are not worried about overfitting our data, as those methods do the same thing at some level.
Your response to this would be highly appreciated.
Thanks
Adrian Rosebrock
If you were to create a technical report for this article, then yes, k-fold cross-validation should absolutely be used. The short answer is that this post is not a technical report that would be published.
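For anyone who does want to run that experiment, a sketch of 5-fold cross-validation over the post’s data array and integer labels (before one-hot encoding); build_model() is a hypothetical helper that rebuilds and compiles the VGG16-based network (with accuracy as a metric) fresh for each fold:
import numpy as np
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras.utils import to_categorical
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = []
for train_idx, test_idx in skf.split(data, labels):
    model = build_model()  # hypothetical: fresh, compiled model per fold
    model.fit(data[train_idx], to_categorical(labels[train_idx]),
              batch_size=8, epochs=25, verbose=0)
    _, acc = model.evaluate(data[test_idx],
                            to_categorical(labels[test_idx]), verbose=0)
    scores.append(acc)
print("mean acc: %.4f +/- %.4f" % (np.mean(scores), np.std(scores)))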
Venkat Mukthineni
Hello,
Instead of splitting the data using Keras, I would like to use manually split data, i.e., load the train, validation, and test images separately and then train the model and visualize the results. Can you help me with this?
Adrian Rosebrock
I would suggest you learn the fundamentals of machine learning and deep learning first. It’s okay if you are new to working with image datasets but I’d suggest you read Deep Learning for Computer Vision with Python first.
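That said, the mechanics Venkat asks about are straightforward with Keras generators. A sketch, assuming a hypothetical dataset/{train,val,test}/{covid,normal} directory layout:
from tensorflow.keras.preprocessing.image import ImageDataGenerator
gen = ImageDataGenerator(rescale=1.0 / 255)
train = gen.flow_from_directory("dataset/train", target_size=(224, 224),
                                batch_size=8, class_mode="categorical")
val = gen.flow_from_directory("dataset/val", target_size=(224, 224),
                              batch_size=8, class_mode="categorical")
test = gen.flow_from_directory("dataset/test", target_size=(224, 224),
                               batch_size=8, class_mode="categorical",
                               shuffle=False)  # keep order for evaluation
# model.fit(train, validation_data=val, epochs=25)
# model.evaluate(test)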
ahmad
Hi Master, I hope you are fine. This is great, and you can build it with TFLearn too. Thanks.
I think TFLearn is more attractive.
Adrian Rosebrock
Thanks Ahmad!
Dr. S. Swapna Kumar
Dear Dr. Adrian Rosebrock
A good article that you have articulated well, and it has made viewers take an interest in ML/DL. I appreciate your effort in presenting the article in a simplified way.
Your willingness to share knowledge with others, rather than keeping it to yourself for personal benefit, is really well recognized. Keep it up, and do more, as Isaac Newton did for this world by sharing his awareness.
Regards
Dr. S. Swapna Kumar
India
Adrian Rosebrock
Thank you for the kind words!
Saman
Hello, and thanks for sharing this tutorial!
My question is about the characteristics of the images. You said that you chose the posteroanterior (PA) view of the COVID-19-positive cases because that was the view used for the ‘healthy’ cases. But as I check the Kaggle website for the normal cases, it is written that those images are in the AP view.
Adrian Rosebrock
Thanks Saman. I wasn’t able to find where in the Kaggle dataset the images were listed as AP views. Could you share that link?
Carlos Ferreira
Did you notice that this dataset you used, covid-chestxray-dataset, only uses images from children between 1 and 5 years old? You are biasing your research! Be careful!
Adrian Rosebrock
Hey Carlos, that question is addressed in the comments as well, but yes, you are right. You should see my notes in the post regarding the dataset.
Scott Quadrelli
Hi PyImageSearch Team,
I am working on a similar problem. Why do you leave the X-ray images, which are grayscale, as RGB? What are the considerations here?
Adrian Rosebrock
We simply left them as RGB images. You could convert to single channel grayscale images if you wished.
Tanuja Shrestha
Hi Adrian,
Why is the variance in loss higher on the training data than on the testing data?
I am so thankful for your resources. They are so great.
Best,
Tanuja
Huy
Hey Adrian, thanks for your blog. I have been following it for around 3 years and it has helped me a lot in my learning.
I also have a problem of my own: detecting defective parts in manufacturing. I face the same data limitation, so, following what I learned from you, I use data augmentation, because defective samples are far rarer than good ones and we will never collect as many as we need. But the augmentation sometimes excludes the defect itself.
What should I do in that scenario?
Adrian Rosebrock
Thanks for being a long time reader, Huy! I would definitely suggest you support the blog by purchasing a book/course if you can. That support allows me to continue authoring free, high quality tutorials. I would suggest reading Deep Learning for Computer Vision with Python which covers data augmentation and suggestions on how to train your own high accuracy models.
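On the specific worry that augmentation can destroy the rare defect: the usual first step is to restrict yourself to conservative, label-preserving transforms and avoid zooms or crops that could push the defect out of frame. A sketch with Keras’s ImageDataGenerator, with illustrative parameter values:
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# conservative augmentation: small geometric and photometric changes
# that keep the defect visible; no aggressive zooming or cropping
aug = ImageDataGenerator(
    rotation_range=10,
    width_shift_range=0.05,
    height_shift_range=0.05,
    horizontal_flip=True,
    brightness_range=(0.9, 1.1),
    fill_mode="nearest")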
Kahiru Aqsara
Hello Adrian,
I’m following and learning a lot from your tutorials, but I am curious: we use images in formats like JPEG and PNG for the dataset; can we use the DICOM image format, which is the default format of X-ray machines? Can this affect the level of accuracy?
Adrian Rosebrock
That really depends on the project and the data format. You actually can use DICOM images without too much additional work if you use the pydicom library.
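A minimal pydicom sketch of that conversion (real DICOM pipelines often also need the rescale slope/intercept and proper windowing; the file name is hypothetical):
import numpy as np
import pydicom
# read the DICOM file and extract the raw pixel data
ds = pydicom.dcmread("scan.dcm")
pixels = ds.pixel_array.astype("float32")
# normalize to [0, 1], convert to 8-bit, and replicate to 3 channels
# so the image matches what an RGB-trained model expects
pixels = (pixels - pixels.min()) / (pixels.max() - pixels.min() + 1e-8)
image = np.stack([(pixels * 255).astype("uint8")] * 3, axis=-1)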
Muhriddin
Hello, Adrian Rosebrock.
I am so inspired by your work! A very helpful article. Thank you for the tutorial; it was helpful to me.
My program runs on the Windows operating system.
I want to connect covid19.model to Android. Is it possible? How can I do this? If possible, explain with a few examples…
Good luck with your next endeavors! Be healthy!
Regards
G.Muhriddin
from Uzbekistan.
Adrian Rosebrock
Sorry, I do not have any examples of deploying a trained model to Android.