Deep Learning: COVID-19 detection in X-Ray with CNN

In this project we develop a Deep Learning detector of COVID-19 in chest radiographs. For this purpose, we use images from the “covid-chestxray-dataset” [3], created by researchers from the Mila research group and the University of Montreal [4]. We also use radiographs of healthy patients and patients with bacterial pneumonia taken from Kaggle’s “Chest X-Ray Images (Pneumonia)” competition [5].

In total we have 426 images, divided into training (339 images), validation (42 images) and test (45 images) sets.

The partitions are given in “.txt” lists, in which each image is assigned a tag:

  • 0) Healthy
  • 1) Covid-19
  • 2) Pneumonia

Note: The results obtained by the models trained on this dataset are purely for educational purposes and cannot be used for actual diagnosis without clinical validation.

References

  1. María Climent, 2020. Covid-19: La Inteligencia Artificial De La Española Quibim Puede Acelerar El Diagnóstico Del Coronavirus.
  2. Angel Alberich-Bayarri, 2020. Imaging, AI and Radiomix to understand and fight Coronavirus Covid-19.
  3. ieee8023/covid-chestxray-dataset (GitHub repository).
  4. Cohen, J.P., Morrison, P. and Dao, L., 2020. COVID-19 image data collection.
  5. Paul Mooney, 2019. Chest X-Ray Images (Pneumonia), Kaggle.
This notebook runs on Google Colab, so we mount Google Drive to access the images.
In [8]:
from google.colab import drive 
drive.mount('/content/gdrive')
In [0]:
#Import libraries

import numpy as np
import re,shutil,os,timeit,glob
import matplotlib.pyplot as plt
import random
from IPython.display import Image
from sklearn.dummy import DummyClassifier
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten, Activation, Dropout, MaxPooling2D, BatchNormalization
from sklearn.metrics import accuracy_score
from keras.callbacks import ModelCheckpoint
from keras.optimizers import Adam,RMSprop,SGD,Adadelta
To load the images we use Keras's ImageDataGenerator, which yields batches from the train, validation and test folders with the settings indicated below. To augment the training data we generate shifted and rotated versions of the original images, as well as images with added noise. The images are rescaled to the input size expected by each network architecture and the pixel values are normalized.
In [0]:
#Function to clean the image paths read from the txt files.
def remChars(string):
    #Drop the two leading characters (relative-path prefix) and the trailing newline
    ret=string[2:]
    ret=ret[:-1]
    return ret

#Function to add Gaussian noise to the images (used as augmentation).
def add_noise(img):
    VARIABILITY = 50
    deviation = VARIABILITY*random.random()
    noise = np.random.normal(0, deviation, img.shape)
    img += noise
    #np.clip returns a new array, so the result must be assigned back
    img = np.clip(img, 0., 255.)
    return img
In [0]:
#Images path base
basepath="/content/gdrive/My Drive/"
In [0]:
#Create the folder structure: train, test and validation, with one folder per class inside each.

#Paths and txt files with the image names.
paths=['test','train','validation']
files=['testing.txt','training.txt','validation.txt']

#Read the image names from each txt file and copy every file into its class folder.
for p,f in zip(paths, files):

    with open(basepath+f,"r") as file:
        imgfiles = file.readlines()

    #Clean path and create folder structure
    os.makedirs(basepath+p+"/COVID",exist_ok =True)
    os.makedirs(basepath+p+"/HEALTHY",exist_ok =True)
    os.makedirs(basepath+p+"/PNEUMONIA",exist_ok =True)
    for s in imgfiles:
        s=remChars(s)
        if "COVID" in s:
            shutil.copy(basepath+s, basepath+p+"/COVID")
        elif "HEALTHY" in s:
            shutil.copy(basepath+s,basepath+p+"/HEALTHY")
        else:
            shutil.copy(basepath+s,basepath+p+"/PNEUMONIA")
        
In [6]:
#Import images. Reduce size to 224x224
#Training dataset augmentation.

train_data_gen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=25,
    width_shift_range=0.3,
    height_shift_range=0.3,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest',
    preprocessing_function=add_noise)
#Only scale validation and test images.
validation_data_gen = ImageDataGenerator(rescale=1./255)
test_data_gen = ImageDataGenerator(rescale=1./255)

train_generator  =train_data_gen.flow_from_directory(basepath+'train',                                          
                                          target_size=(224,224),
                                          batch_size=32,
                                          class_mode = 'categorical',
                                          shuffle=True)

validation_generator = validation_data_gen.flow_from_directory(basepath+'validation',                                          
                                          target_size=(224,224),                                                               
                                          batch_size=32,
                                          class_mode = 'categorical',
                                          shuffle=True)


test_generator = test_data_gen.flow_from_directory(basepath+'test',                                          
                                          target_size=(224,224),
                                          batch_size=32,
                                          class_mode = 'categorical',
                                          shuffle=False)
Found 339 images belonging to 3 classes.
Found 42 images belonging to 3 classes.
Found 45 images belonging to 3 classes.
In [0]:
#Save a few augmented example images to disk so we can inspect the transformations.
img = load_img(train_generator.filepaths[0])  # PIL image
x = img_to_array(img)  # NumPy array with shape (height, width, 3)
x = x.reshape((1,) + x.shape)  # add a batch dimension: (1, height, width, 3)

i = 0
for batch in train_data_gen.flow(x, batch_size=1,
                          save_to_dir=basepath+'augmented', save_prefix='hi', save_format='jpeg'):
    i += 1
    if i > 2:
        break

        
In [8]:
#Show the augmented example images

for filename in glob.glob(basepath+'augmented/*.jpeg'): #the examples were saved as jpeg
    display(Image(filename,width=150,height=150))
    
To classify the images we use convolutional networks. We train models based on several well-known CNN architectures (VGG, Inception, DenseNet) and measure their accuracy and time performance to determine the most suitable one. Since our dataset is small and quite different from the one the networks were pre-trained on, we use the pre-trained Keras models, freezing some layers and re-training others. We measure accuracy on the test set and also plot the accuracy and loss curves to study each architecture. We first build a network with only one convolutional layer to use as a baseline against which improvements are measured. The optimizers vary between models, as they were adjusted in separate test runs to get the most out of each architecture; the best results for each one are shown here.
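Before building the CNNs, a quick sanity check (not part of the original notebook) is the floor set by a trivial majority-class predictor, using the DummyClassifier imported above; it ignores the inputs, so placeholder features are enough:

#Hypothetical majority-class baseline: the accuracy any model should beat.
#Assumes the train_generator and test_generator created above.
dummy = DummyClassifier(strategy='most_frequent')
dummy.fit(np.zeros((len(train_generator.classes), 1)), train_generator.classes)
dummy_pred = dummy.predict(np.zeros((len(test_generator.classes), 1)))
print('Majority-class accuracy: ' + str(accuracy_score(test_generator.classes, dummy_pred)))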
In [0]:
#Simple model as baseline to measure performance improvements.
num_classes=3
model = Sequential()
model.add(Conv2D(11, kernel_size=3,strides=1, activation='relu', input_shape=(224,224,3)))
model.add(Flatten())
model.add((Dense(num_classes, activation='softmax')))
In [0]:
#Adam optimizer
optimizer=Adam(lr=2E-4)
# Compile the model
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
In [0]:
#Reset batch generator
train_generator.reset()
validation_generator.reset()
test_generator.reset()
In [14]:
#Run 15 epochs, storing the execution time.
start_time = timeit.default_timer()
mfit=model.fit_generator(
        train_generator,
        epochs=15,
        validation_data=validation_generator)
elapsedbaseline = timeit.default_timer() - start_time
Epoch 1/15
11/11 [==============================] - 14s 1s/step - loss: 8.6860 - accuracy: 0.3510 - val_loss: 8.2796 - val_accuracy: 0.3333
Epoch 2/15
11/11 [==============================] - 14s 1s/step - loss: 4.3005 - accuracy: 0.4926 - val_loss: 1.6151 - val_accuracy: 0.5238
Epoch 3/15
11/11 [==============================] - 14s 1s/step - loss: 2.2346 - accuracy: 0.5133 - val_loss: 0.9671 - val_accuracy: 0.5714
Epoch 4/15
11/11 [==============================] - 14s 1s/step - loss: 1.6893 - accuracy: 0.5693 - val_loss: 2.3220 - val_accuracy: 0.6905
Epoch 5/15
11/11 [==============================] - 14s 1s/step - loss: 2.6680 - accuracy: 0.5339 - val_loss: 0.5443 - val_accuracy: 0.6190
Epoch 6/15
11/11 [==============================] - 14s 1s/step - loss: 3.0147 - accuracy: 0.5310 - val_loss: 8.0988 - val_accuracy: 0.4524
Epoch 7/15
11/11 [==============================] - 14s 1s/step - loss: 2.9001 - accuracy: 0.4130 - val_loss: 0.6744 - val_accuracy: 0.6667
Epoch 8/15
11/11 [==============================] - 14s 1s/step - loss: 1.2602 - accuracy: 0.6224 - val_loss: 1.5305 - val_accuracy: 0.5952
Epoch 9/15
11/11 [==============================] - 14s 1s/step - loss: 1.1076 - accuracy: 0.6726 - val_loss: 0.5212 - val_accuracy: 0.7143
Epoch 10/15
11/11 [==============================] - 14s 1s/step - loss: 1.0135 - accuracy: 0.6637 - val_loss: 0.4781 - val_accuracy: 0.7381
Epoch 11/15
11/11 [==============================] - 14s 1s/step - loss: 0.9489 - accuracy: 0.6254 - val_loss: 1.7103 - val_accuracy: 0.5238
Epoch 12/15
11/11 [==============================] - 14s 1s/step - loss: 1.3024 - accuracy: 0.5664 - val_loss: 0.3664 - val_accuracy: 0.7857
Epoch 13/15
11/11 [==============================] - 14s 1s/step - loss: 1.0612 - accuracy: 0.6490 - val_loss: 0.5254 - val_accuracy: 0.5714
Epoch 14/15
11/11 [==============================] - 14s 1s/step - loss: 1.1583 - accuracy: 0.5870 - val_loss: 1.2536 - val_accuracy: 0.6190
Epoch 15/15
11/11 [==============================] - 14s 1s/step - loss: 1.2919 - accuracy: 0.5811 - val_loss: 2.2327 - val_accuracy: 0.5714
In [15]:
#The elapsed time of this model is:
print('Baseline model elapsed time: '+str(elapsedbaseline) + ' sec.')
Baseline model elapsed time: 210.87823743600006 sec.
In [16]:
#Accuracy and Loss graphs for training and validation
# summarize history for accuracy
plt.plot(mfit.history['accuracy'])
plt.plot(mfit.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()

# summarize history for loss
plt.plot(mfit.history['loss'])
plt.plot(mfit.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
We see that the network is learning, albeit somewhat unevenly. Let's try a VGG16 architecture, which we import from Keras.
In [20]:
from keras.applications.vgg16 import VGG16
vgg_c = VGG16(weights='imagenet', include_top=False,classes=3,input_shape=(224,224,3))
Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5
58892288/58889256 [==============================] - 1s 0us/step
In [21]:
#Transfer learning with the ImageNet weights: since our dataset is small and
#different from the one the network was originally trained on, we freeze the
#early layers and retrain only part of the network (blocks 4 and 5).
for l in vgg_c.layers[:-8]:
  l.trainable=False
for l in vgg_c.layers:
  print(l.name+" "+str(l.trainable))
input_1 False
block1_conv1 False
block1_conv2 False
block1_pool False
block2_conv1 False
block2_conv2 False
block2_pool False
block3_conv1 False
block3_conv2 False
block3_conv3 False
block3_pool False
block4_conv1 True
block4_conv2 True
block4_conv3 True
block4_pool True
block5_conv1 True
block5_conv2 True
block5_conv3 True
block5_pool True
In [0]:
#Create the output layers
vgg_model=Sequential()
vgg_model.add(vgg_c)
vgg_model.add(Flatten())
vgg_model.add(Dense(4096,activation="relu"))
vgg_model.add(Dense(4096,activation="relu"))
vgg_model.add(Dense(3,activation="softmax"))
In [0]:
#Reset batches generators
train_generator.reset()
validation_generator.reset()
test_generator.reset()
In [24]:
#Let's define callbacks in order to save checkpoints just in case the process stops.

filepath = basepath+"vgg_model.h5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]

#Optimizer setup
optimizer=Adam(lr=1E-6)

#Use the same optimizer type as in the base model (with a smaller learning rate)
vgg_model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])

#Let's run 15 epochs, saving the elapsed time
start_time = timeit.default_timer()

mfitVGG=vgg_model.fit(
        train_generator,
        epochs=15,
        validation_data=validation_generator,callbacks=callbacks_list)

elapsedVGG = timeit.default_timer() - start_time
Epoch 1/15
11/11 [==============================] - 16s 1s/step - loss: 1.0807 - accuracy: 0.4189 - val_loss: 1.0469 - val_accuracy: 0.6190

Epoch 00001: loss improved from inf to 1.08054, saving model to /content/gdrive/My Drive/vgg_model.h5
Epoch 2/15
11/11 [==============================] - 15s 1s/step - loss: 1.0360 - accuracy: 0.5162 - val_loss: 0.9619 - val_accuracy: 0.6667

Epoch 00002: loss improved from 1.08054 to 1.03561, saving model to /content/gdrive/My Drive/vgg_model.h5
Epoch 3/15
11/11 [==============================] - 16s 1s/step - loss: 0.9910 - accuracy: 0.5988 - val_loss: 0.9363 - val_accuracy: 0.6190

Epoch 00003: loss improved from 1.03561 to 0.99032, saving model to /content/gdrive/My Drive/vgg_model.h5
Epoch 4/15
11/11 [==============================] - 16s 1s/step - loss: 0.9430 - accuracy: 0.6755 - val_loss: 0.8787 - val_accuracy: 0.7381

Epoch 00004: loss improved from 0.99032 to 0.94067, saving model to /content/gdrive/My Drive/vgg_model.h5
Epoch 5/15
11/11 [==============================] - 18s 2s/step - loss: 0.8900 - accuracy: 0.7168 - val_loss: 0.9179 - val_accuracy: 0.6667

Epoch 00005: loss improved from 0.94067 to 0.89178, saving model to /content/gdrive/My Drive/vgg_model.h5
Epoch 6/15
11/11 [==============================] - 16s 1s/step - loss: 0.8543 - accuracy: 0.7375 - val_loss: 0.9096 - val_accuracy: 0.7143

Epoch 00006: loss improved from 0.89178 to 0.85317, saving model to /content/gdrive/My Drive/vgg_model.h5
Epoch 7/15
11/11 [==============================] - 16s 1s/step - loss: 0.8153 - accuracy: 0.7286 - val_loss: 0.7130 - val_accuracy: 0.7619

Epoch 00007: loss improved from 0.85317 to 0.81548, saving model to /content/gdrive/My Drive/vgg_model.h5
Epoch 8/15
11/11 [==============================] - 15s 1s/step - loss: 0.7624 - accuracy: 0.7994 - val_loss: 0.6643 - val_accuracy: 0.8095

Epoch 00008: loss improved from 0.81548 to 0.76142, saving model to /content/gdrive/My Drive/vgg_model.h5
Epoch 9/15
11/11 [==============================] - 21s 2s/step - loss: 0.7302 - accuracy: 0.7906 - val_loss: 0.6457 - val_accuracy: 0.7619

Epoch 00009: loss improved from 0.76142 to 0.73008, saving model to /content/gdrive/My Drive/vgg_model.h5
Epoch 10/15
11/11 [==============================] - 19s 2s/step - loss: 0.6810 - accuracy: 0.7965 - val_loss: 0.5080 - val_accuracy: 0.8095

Epoch 00010: loss improved from 0.73008 to 0.67957, saving model to /content/gdrive/My Drive/vgg_model.h5
Epoch 11/15
11/11 [==============================] - 17s 2s/step - loss: 0.6693 - accuracy: 0.7876 - val_loss: 0.5994 - val_accuracy: 0.8095

Epoch 00011: loss improved from 0.67957 to 0.66766, saving model to /content/gdrive/My Drive/vgg_model.h5
Epoch 12/15
11/11 [==============================] - 18s 2s/step - loss: 0.6258 - accuracy: 0.8142 - val_loss: 0.4948 - val_accuracy: 0.8333

Epoch 00012: loss improved from 0.66766 to 0.62577, saving model to /content/gdrive/My Drive/vgg_model.h5
Epoch 13/15
11/11 [==============================] - 17s 2s/step - loss: 0.5978 - accuracy: 0.8466 - val_loss: 0.6697 - val_accuracy: 0.7857

Epoch 00013: loss improved from 0.62577 to 0.59842, saving model to /content/gdrive/My Drive/vgg_model.h5
Epoch 14/15
11/11 [==============================] - 18s 2s/step - loss: 0.5697 - accuracy: 0.8260 - val_loss: 0.3782 - val_accuracy: 0.8333

Epoch 00014: loss improved from 0.59842 to 0.56735, saving model to /content/gdrive/My Drive/vgg_model.h5
Epoch 15/15
11/11 [==============================] - 15s 1s/step - loss: 0.5242 - accuracy: 0.8555 - val_loss: 0.3373 - val_accuracy: 0.8095

Epoch 00015: loss improved from 0.56735 to 0.52378, saving model to /content/gdrive/My Drive/vgg_model.h5
In [25]:
#The elapsed time of this model has been:
print('VGG model elapsed time: '+str(elapsedVGG) + ' sec.')
VGG model elapsed time: 627.1579273820003 sec.
In [27]:
#Train and validation accuracy and loss graphs:
# summarize history for accuracy
plt.plot(mfitVGG.history['accuracy'])
plt.plot(mfitVGG.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()

# summarize history for loss
plt.plot(mfitVGG.history['loss'])
plt.plot(mfitVGG.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
With VGG we achieve greater accuracy on the validation set and the learning proceeds more steadily. We continue with an Inception architecture.
In [28]:
from keras.applications import InceptionV3
inception_c = InceptionV3(weights='imagenet', include_top=False,classes=3,input_shape=(299,299,3))
Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.5/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5
87916544/87910968 [==============================] - 1s 0us/step
In [29]:
#Adapt the generators to the Inception input size (299x299)


train_generator  =train_data_gen.flow_from_directory(basepath+'train',                                          
                                          target_size=(299,299),
                                          batch_size=32,
                                          class_mode = 'categorical',
                                          shuffle=True)

validation_generator = validation_data_gen.flow_from_directory(basepath+'validation',                                          
                                          target_size=(299,299),                                                               
                                          batch_size=32,
                                          class_mode = 'categorical',
                                          shuffle=True)


test_generator = test_data_gen.flow_from_directory(basepath+'test',                                          
                                          target_size=(299,299),
                                          batch_size=32,
                                          class_mode = 'categorical',
                                          shuffle=False)
Found 339 images belonging to 3 classes.
Found 42 images belonging to 3 classes.
Found 45 images belonging to 3 classes.
In [30]:
#Transfer learning with the ImageNet weights: since our dataset is small and
#different from the one the network was originally trained on, we freeze the
#early layers and retrain only the last blocks.
for l in inception_c.layers[:-82]:
  l.trainable=False
for l in inception_c.layers:
  print(l.name+" "+str(l.trainable))
input_2 False
conv2d_2 False
batch_normalization_1 False
activation_1 False
conv2d_3 False
batch_normalization_2 False
activation_2 False
conv2d_4 False
batch_normalization_3 False
activation_3 False
max_pooling2d_1 False
conv2d_5 False
batch_normalization_4 False
activation_4 False
conv2d_6 False
batch_normalization_5 False
activation_5 False
max_pooling2d_2 False
conv2d_10 False
batch_normalization_9 False
activation_9 False
conv2d_8 False
conv2d_11 False
batch_normalization_7 False
batch_normalization_10 False
activation_7 False
activation_10 False
average_pooling2d_1 False
conv2d_7 False
conv2d_9 False
conv2d_12 False
conv2d_13 False
batch_normalization_6 False
batch_normalization_8 False
batch_normalization_11 False
batch_normalization_12 False
activation_6 False
activation_8 False
activation_11 False
activation_12 False
mixed0 False
conv2d_17 False
batch_normalization_16 False
activation_16 False
conv2d_15 False
conv2d_18 False
batch_normalization_14 False
batch_normalization_17 False
activation_14 False
activation_17 False
average_pooling2d_2 False
conv2d_14 False
conv2d_16 False
conv2d_19 False
conv2d_20 False
batch_normalization_13 False
batch_normalization_15 False
batch_normalization_18 False
batch_normalization_19 False
activation_13 False
activation_15 False
activation_18 False
activation_19 False
mixed1 False
conv2d_24 False
batch_normalization_23 False
activation_23 False
conv2d_22 False
conv2d_25 False
batch_normalization_21 False
batch_normalization_24 False
activation_21 False
activation_24 False
average_pooling2d_3 False
conv2d_21 False
conv2d_23 False
conv2d_26 False
conv2d_27 False
batch_normalization_20 False
batch_normalization_22 False
batch_normalization_25 False
batch_normalization_26 False
activation_20 False
activation_22 False
activation_25 False
activation_26 False
mixed2 False
conv2d_29 False
batch_normalization_28 False
activation_28 False
conv2d_30 False
batch_normalization_29 False
activation_29 False
conv2d_28 False
conv2d_31 False
batch_normalization_27 False
batch_normalization_30 False
activation_27 False
activation_30 False
max_pooling2d_3 False
mixed3 False
conv2d_36 False
batch_normalization_35 False
activation_35 False
conv2d_37 False
batch_normalization_36 False
activation_36 False
conv2d_33 False
conv2d_38 False
batch_normalization_32 False
batch_normalization_37 False
activation_32 False
activation_37 False
conv2d_34 False
conv2d_39 False
batch_normalization_33 False
batch_normalization_38 False
activation_33 False
activation_38 False
average_pooling2d_4 False
conv2d_32 False
conv2d_35 False
conv2d_40 False
conv2d_41 False
batch_normalization_31 False
batch_normalization_34 False
batch_normalization_39 False
batch_normalization_40 False
activation_31 False
activation_34 False
activation_39 False
activation_40 False
mixed4 False
conv2d_46 False
batch_normalization_45 False
activation_45 False
conv2d_47 False
batch_normalization_46 False
activation_46 False
conv2d_43 False
conv2d_48 False
batch_normalization_42 False
batch_normalization_47 False
activation_42 False
activation_47 False
conv2d_44 False
conv2d_49 False
batch_normalization_43 False
batch_normalization_48 False
activation_43 False
activation_48 False
average_pooling2d_5 False
conv2d_42 False
conv2d_45 False
conv2d_50 False
conv2d_51 False
batch_normalization_41 False
batch_normalization_44 False
batch_normalization_49 False
batch_normalization_50 False
activation_41 False
activation_44 False
activation_49 False
activation_50 False
mixed5 False
conv2d_56 False
batch_normalization_55 False
activation_55 False
conv2d_57 False
batch_normalization_56 False
activation_56 False
conv2d_53 False
conv2d_58 False
batch_normalization_52 False
batch_normalization_57 False
activation_52 False
activation_57 False
conv2d_54 False
conv2d_59 False
batch_normalization_53 False
batch_normalization_58 False
activation_53 False
activation_58 False
average_pooling2d_6 False
conv2d_52 False
conv2d_55 False
conv2d_60 False
conv2d_61 False
batch_normalization_51 False
batch_normalization_54 False
batch_normalization_59 False
batch_normalization_60 False
activation_51 False
activation_54 False
activation_59 False
activation_60 False
mixed6 False
conv2d_66 False
batch_normalization_65 False
activation_65 False
conv2d_67 False
batch_normalization_66 False
activation_66 False
conv2d_63 False
conv2d_68 False
batch_normalization_62 False
batch_normalization_67 False
activation_62 False
activation_67 False
conv2d_64 False
conv2d_69 False
batch_normalization_63 False
batch_normalization_68 False
activation_63 False
activation_68 False
average_pooling2d_7 False
conv2d_62 False
conv2d_65 False
conv2d_70 False
conv2d_71 False
batch_normalization_61 False
batch_normalization_64 False
batch_normalization_69 False
batch_normalization_70 False
activation_61 False
activation_64 False
activation_69 False
activation_70 False
mixed7 False
conv2d_74 True
batch_normalization_73 True
activation_73 True
conv2d_75 True
batch_normalization_74 True
activation_74 True
conv2d_72 True
conv2d_76 True
batch_normalization_71 True
batch_normalization_75 True
activation_71 True
activation_75 True
conv2d_73 True
conv2d_77 True
batch_normalization_72 True
batch_normalization_76 True
activation_72 True
activation_76 True
max_pooling2d_4 True
mixed8 True
conv2d_82 True
batch_normalization_81 True
activation_81 True
conv2d_79 True
conv2d_83 True
batch_normalization_78 True
batch_normalization_82 True
activation_78 True
activation_82 True
conv2d_80 True
conv2d_81 True
conv2d_84 True
conv2d_85 True
average_pooling2d_8 True
conv2d_78 True
batch_normalization_79 True
batch_normalization_80 True
batch_normalization_83 True
batch_normalization_84 True
conv2d_86 True
batch_normalization_77 True
activation_79 True
activation_80 True
activation_83 True
activation_84 True
batch_normalization_85 True
activation_77 True
mixed9_0 True
concatenate_1 True
activation_85 True
mixed9 True
conv2d_91 True
batch_normalization_90 True
activation_90 True
conv2d_88 True
conv2d_92 True
batch_normalization_87 True
batch_normalization_91 True
activation_87 True
activation_91 True
conv2d_89 True
conv2d_90 True
conv2d_93 True
conv2d_94 True
average_pooling2d_9 True
conv2d_87 True
batch_normalization_88 True
batch_normalization_89 True
batch_normalization_92 True
batch_normalization_93 True
conv2d_95 True
batch_normalization_86 True
activation_88 True
activation_89 True
activation_92 True
activation_93 True
batch_normalization_94 True
activation_86 True
mixed9_1 True
concatenate_2 True
activation_94 True
mixed10 True
In [0]:
#We create the output layers
inception_model=Sequential()
inception_model.add(inception_c)
inception_model.add(Flatten())
inception_model.add(Dense(2048,activation="relu"))
inception_model.add(Dense(3,activation="softmax"))
In [0]:
#Reset batch generators
train_generator.reset()
validation_generator.reset()
test_generator.reset()
In [33]:
#Define a callback to save checkpoints in case the process stops.

filepath = basepath+"inception.h5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]

#Setup the optimizer
optimizer=Adam(lr=1E-6)
inception_model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])

#We run the model, saving the execution time, with 15 epochs
start_time = timeit.default_timer()

mfitxception=inception_model.fit(
        train_generator,
        epochs=15,
        validation_data=validation_generator,callbacks=callbacks_list)

elapsedinception = timeit.default_timer() - start_time
Epoch 1/15
11/11 [==============================] - 28s 3s/step - loss: 1.1917 - accuracy: 0.4012 - val_loss: 1.5662 - val_accuracy: 0.2381

Epoch 00001: loss improved from inf to 1.17950, saving model to /content/gdrive/My Drive/inception.h5
Epoch 2/15
11/11 [==============================] - 18s 2s/step - loss: 0.9305 - accuracy: 0.5752 - val_loss: 1.2183 - val_accuracy: 0.3571

Epoch 00002: loss improved from 1.17950 to 0.92223, saving model to /content/gdrive/My Drive/inception.h5
Epoch 3/15
11/11 [==============================] - 27s 2s/step - loss: 0.8801 - accuracy: 0.6165 - val_loss: 1.1269 - val_accuracy: 0.3810

Epoch 00003: loss improved from 0.92223 to 0.87871, saving model to /content/gdrive/My Drive/inception.h5
Epoch 4/15
11/11 [==============================] - 28s 3s/step - loss: 0.7753 - accuracy: 0.6696 - val_loss: 1.2065 - val_accuracy: 0.4524

Epoch 00004: loss improved from 0.87871 to 0.78262, saving model to /content/gdrive/My Drive/inception.h5
Epoch 5/15
11/11 [==============================] - 25s 2s/step - loss: 0.6892 - accuracy: 0.7168 - val_loss: 1.3833 - val_accuracy: 0.4286

Epoch 00005: loss improved from 0.78262 to 0.68629, saving model to /content/gdrive/My Drive/inception.h5
Epoch 6/15
11/11 [==============================] - 26s 2s/step - loss: 0.6518 - accuracy: 0.7286 - val_loss: 1.2656 - val_accuracy: 0.5000

Epoch 00006: loss improved from 0.68629 to 0.65995, saving model to /content/gdrive/My Drive/inception.h5
Epoch 7/15
11/11 [==============================] - 29s 3s/step - loss: 0.7012 - accuracy: 0.7404 - val_loss: 1.2838 - val_accuracy: 0.5476

Epoch 00007: loss did not improve from 0.65995
Epoch 8/15
11/11 [==============================] - 37s 3s/step - loss: 0.6430 - accuracy: 0.7522 - val_loss: 1.0714 - val_accuracy: 0.5476

Epoch 00008: loss improved from 0.65995 to 0.64491, saving model to /content/gdrive/My Drive/inception.h5
Epoch 9/15
11/11 [==============================] - 27s 2s/step - loss: 0.5974 - accuracy: 0.7581 - val_loss: 1.2045 - val_accuracy: 0.5476

Epoch 00009: loss improved from 0.64491 to 0.58384, saving model to /content/gdrive/My Drive/inception.h5
Epoch 10/15
11/11 [==============================] - 28s 3s/step - loss: 0.6138 - accuracy: 0.7876 - val_loss: 0.9025 - val_accuracy: 0.6190

Epoch 00010: loss did not improve from 0.58384
Epoch 11/15
11/11 [==============================] - 37s 3s/step - loss: 0.5576 - accuracy: 0.7935 - val_loss: 0.8435 - val_accuracy: 0.6190

Epoch 00011: loss improved from 0.58384 to 0.55703, saving model to /content/gdrive/My Drive/inception.h5
Epoch 12/15
11/11 [==============================] - 31s 3s/step - loss: 0.5313 - accuracy: 0.8112 - val_loss: 0.8695 - val_accuracy: 0.6667

Epoch 00012: loss improved from 0.55703 to 0.52901, saving model to /content/gdrive/My Drive/inception.h5
Epoch 13/15
11/11 [==============================] - 26s 2s/step - loss: 0.5533 - accuracy: 0.7788 - val_loss: 0.7830 - val_accuracy: 0.6905

Epoch 00013: loss did not improve from 0.52901
Epoch 14/15
11/11 [==============================] - 37s 3s/step - loss: 0.5799 - accuracy: 0.7965 - val_loss: 0.9584 - val_accuracy: 0.6429

Epoch 00014: loss did not improve from 0.52901
Epoch 15/15
11/11 [==============================] - 32s 3s/step - loss: 0.5258 - accuracy: 0.8024 - val_loss: 0.9754 - val_accuracy: 0.6905

Epoch 00015: loss did not improve from 0.52901
In [34]:
elapsedinception
Out[34]:
1056.1220898800002
In [36]:
#Accuracy and loss graphs for training and validation.
# summarize history for accuracy
plt.plot(mfitxception.history['accuracy'])
plt.plot(mfitxception.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()

# summarize history for loss
plt.plot(mfitxception.history['loss'])
plt.plot(mfitxception.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
The execution time has increased considerably in this case, since this is the network with the most layers. Accuracy also improves gradually, although validation accuracy does not reach the level of the VGG network. Finally, we test a DenseNet architecture; since it is a faster network, we train it entirely from scratch, without using previously learned weights.
In [0]:
from keras.applications import DenseNet121
densenet = DenseNet121(weights=None, include_top=True,classes=3)
In [38]:
#Rebuild the generators at the 224x224 input size expected by DenseNet.


train_generator  =train_data_gen.flow_from_directory(basepath+'train',                                          
                                          target_size=(224,224),
                                          batch_size=32,
                                          class_mode = 'categorical',
                                          shuffle=True)

validation_generator = validation_data_gen.flow_from_directory(basepath+'validation',                                          
                                          target_size=(224,224),                                                               
                                          batch_size=32,
                                          class_mode = 'categorical',
                                          shuffle=True)


test_generator = test_data_gen.flow_from_directory(basepath+'test',                                          
                                          target_size=(224,224),
                                          batch_size=32,
                                          class_mode = 'categorical',
                                          shuffle=False)
Found 339 images belonging to 3 classes.
Found 42 images belonging to 3 classes.
Found 45 images belonging to 3 classes.
In [0]:
train_generator.reset()
validation_generator.reset()
test_generator.reset()
In [46]:
#Define a callback to save checkpoints in case the process stops.

filepath = basepath+"densenet.h5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]

#Setup the optimizer
optimizer=SGD(lr=0.001)
densenet.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])

#We run the model and save the runtime, with 15 epochs
start_time = timeit.default_timer()

mfitdensenet=densenet.fit(
        train_generator,
        epochs=15,
        validation_data=validation_generator,callbacks=callbacks_list)

densenetelapsed = timeit.default_timer() - start_time
Epoch 1/15
11/11 [==============================] - 51s 5s/step - loss: 0.4748 - accuracy: 0.8230 - val_loss: 1.0892 - val_accuracy: 0.6429

Epoch 00001: loss improved from inf to 0.47323, saving model to /content/gdrive/My Drive/densenet.h5
Epoch 2/15
11/11 [==============================] - 11s 983ms/step - loss: 0.4603 - accuracy: 0.8289 - val_loss: 0.8434 - val_accuracy: 0.6429

Epoch 00002: loss improved from 0.47323 to 0.46421, saving model to /content/gdrive/My Drive/densenet.h5
Epoch 3/15
11/11 [==============================] - 15s 1s/step - loss: 0.4425 - accuracy: 0.8378 - val_loss: 0.7086 - val_accuracy: 0.6429

Epoch 00003: loss improved from 0.46421 to 0.44957, saving model to /content/gdrive/My Drive/densenet.h5
Epoch 4/15
11/11 [==============================] - 15s 1s/step - loss: 0.4681 - accuracy: 0.8260 - val_loss: 0.8496 - val_accuracy: 0.6667

Epoch 00004: loss did not improve from 0.44957
Epoch 5/15
11/11 [==============================] - 15s 1s/step - loss: 0.4562 - accuracy: 0.8112 - val_loss: 1.0317 - val_accuracy: 0.6667

Epoch 00005: loss did not improve from 0.44957
Epoch 6/15
11/11 [==============================] - 15s 1s/step - loss: 0.4273 - accuracy: 0.8525 - val_loss: 0.3994 - val_accuracy: 0.7619

Epoch 00006: loss improved from 0.44957 to 0.42562, saving model to /content/gdrive/My Drive/densenet.h5
Epoch 7/15
11/11 [==============================] - 14s 1s/step - loss: 0.4507 - accuracy: 0.8171 - val_loss: 0.3914 - val_accuracy: 0.7619

Epoch 00007: loss did not improve from 0.42562
Epoch 8/15
11/11 [==============================] - 15s 1s/step - loss: 0.4098 - accuracy: 0.8555 - val_loss: 0.5168 - val_accuracy: 0.7857

Epoch 00008: loss improved from 0.42562 to 0.41362, saving model to /content/gdrive/My Drive/densenet.h5
Epoch 9/15
11/11 [==============================] - 14s 1s/step - loss: 0.4239 - accuracy: 0.8289 - val_loss: 0.6670 - val_accuracy: 0.7857

Epoch 00009: loss did not improve from 0.41362
Epoch 10/15
11/11 [==============================] - 15s 1s/step - loss: 0.4192 - accuracy: 0.8555 - val_loss: 0.3796 - val_accuracy: 0.7619

Epoch 00010: loss did not improve from 0.41362
Epoch 11/15
11/11 [==============================] - 15s 1s/step - loss: 0.4485 - accuracy: 0.8201 - val_loss: 0.4892 - val_accuracy: 0.7381

Epoch 00011: loss did not improve from 0.41362
Epoch 12/15
11/11 [==============================] - 15s 1s/step - loss: 0.4580 - accuracy: 0.8348 - val_loss: 0.5388 - val_accuracy: 0.7857

Epoch 00012: loss did not improve from 0.41362
Epoch 13/15
11/11 [==============================] - 15s 1s/step - loss: 0.4216 - accuracy: 0.8319 - val_loss: 0.5345 - val_accuracy: 0.7857

Epoch 00013: loss did not improve from 0.41362
Epoch 14/15
11/11 [==============================] - 16s 1s/step - loss: 0.3914 - accuracy: 0.8761 - val_loss: 0.6665 - val_accuracy: 0.7381

Epoch 00014: loss improved from 0.41362 to 0.39336, saving model to /content/gdrive/My Drive/densenet.h5
Epoch 15/15
11/11 [==============================] - 15s 1s/step - loss: 0.4165 - accuracy: 0.8496 - val_loss: 0.6036 - val_accuracy: 0.8333

Epoch 00015: loss did not improve from 0.39336
In [47]:
densenetelapsed
Out[47]:
346.6281982569999
In [48]:
#Accuracy and loss graphs for train and validation data
# summarize history for accuracy
plt.plot(mfitdensenet.history['accuracy'])
plt.plot(mfitdensenet.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()

# summarize history for loss
plt.plot(mfitdensenet.history['loss'])
plt.plot(mfitdensenet.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
In this case we obtain a high validation accuracy, but the learning does not look very gradual: training accuracy starts high and barely improves, while validation accuracy does improve but with a lot of fluctuation.

Prediction and comparison

In this section we run the test phase for the best models developed above.

Let's make predictions on the test dataset to see the accuracy each network obtains on new data. We start with the single-layer baseline model.
In [19]:
test_generator = test_data_gen.flow_from_directory(basepath+'test',                                          
                                          target_size=(224,224),
                                          batch_size=32,
                                          class_mode = 'categorical',
                                          shuffle=False)

test_generator.reset()

#Prediction
prediction=model.predict_generator(test_generator)
#Convert the predicted probabilities to class labels
pclasses=[]
for p in prediction:
    pclasses.append(np.argmax(p))

#We measure accuracy by comparing prediction with test data.

accuracy_score(test_generator.classes, pclasses)
Out[19]:
0.5333333333333333
In this case the accuracy does not go beyond 53%. We saw in the previous graphs that both the accuracy and the loss had stagnated, even worsening towards the end of the 15 epochs, so the network is not expected to improve much if more epochs are added. We continue with the VGG.
In [26]:
test_generator.reset()

# Run the prediction with this model
prediction_vgg=vgg_model.predict(test_generator)
#Convert the predicted probabilities to class labels
pclasses_vgg=[]
for p in prediction_vgg:
    pclasses_vgg.append(np.argmax(p))

#Measure accuracy by comparing the prediction with the test labels

accuracy_score(test_generator.classes, pclasses_vgg)
Out[26]:
0.8444444444444444
This model reaches a good accuracy, 84%, although at the cost of a considerable increase in processing time. We have taken good advantage of the ImageNet weights provided by Keras. Moreover, this network does seem to have room for improvement if we train it further (more epochs or more retrained layers), as both accuracy and loss improve steadily and do not appear to have plateaued. Let's move on to Inception.
In [35]:
test_generator = test_data_gen.flow_from_directory(basepath+'test',                                          
                                          target_size=(299,299),
                                          batch_size=32,
                                          class_mode = 'categorical',
                                          shuffle=False)
test_generator.reset()

# Run the prediction with this model
prediction_inception=inception_model.predict(test_generator)
#Convert the predicted probabilities to class labels
pclasses_inception=[]
for p in prediction_inception:
    pclasses_inception.append(np.argmax(p))

#We measure accuracy by comparing prediction with test data.

accuracy_score(test_generator.classes, pclasses_inception)
Out[35]:
0.7111111111111111
Inception stays at 71% accuracy on the test data. That is not a bad figure and, from the graphs, it seems it could keep improving, although at the cost of a much longer processing time, given the number of layers we are retraining. Finally, we make predictions with DenseNet.
In [50]:
# Run the prediction with this model
prediction_densenet=densenet.predict(test_generator)
#Convert the predicted probabilities to class labels
pclasses_densenet=[]
for p in prediction_densenet:
    pclasses_densenet.append(np.argmax(p))

#We measure accuracy by comparing prediction with test data.

accuracy_score(test_generator.classes, pclasses_densenet)
Out[50]:
0.8666666666666667
The result is quite good, almost 87%, better than VGG and with a lower processing time, although the curves are a bit stranger: the network does not seem to learn in a stable way, and the training curves are practically flat.
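Overall accuracy hides how each class behaves (COVID recall is what matters most clinically), so a useful complementary check, sketched here and not part of the original run, is the per-class confusion matrix of the DenseNet test predictions:

from sklearn.metrics import confusion_matrix, classification_report

#Per-class breakdown of the DenseNet predictions computed above.
print(confusion_matrix(test_generator.classes, pclasses_densenet))
print(classification_report(test_generator.classes, pclasses_densenet,
                            target_names=list(test_generator.class_indices.keys())))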
Discussion:

After evaluating the results of the four networks, my conclusion is that, in terms of accuracy, run time and learning stability, the most suitable network for this dataset is VGG (test accuracy: baseline 53%, VGG 84%, Inception 71%, DenseNet 87%). We have taken good advantage of weights previously learned on other datasets, and the curves suggest VGG can still improve with more epochs. I think that by tuning it further, adjusting the number of retrained layers and adding epochs, we can get more out of it than out of DenseNet. VGG is a deep network that uses small kernels to capture progressively more complex features.
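As a possible follow-up (a minimal sketch, not something run in this notebook, assuming the vgg_model and generators from the earlier cells are still in memory), the VGG could be trained for more epochs with early stopping and learning-rate reduction so that the longer run does not overfit:

from keras.callbacks import EarlyStopping, ReduceLROnPlateau

#Stop when the validation loss stops improving and keep the best weights seen.
early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
#Halve the learning rate when the validation loss plateaus.
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3, verbose=1)

#Hypothetical longer run (50 epochs is an arbitrary upper bound).
vgg_model.fit(train_generator,
              epochs=50,
              validation_data=validation_generator,
              callbacks=[early_stop, reduce_lr])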

Explainability

Let's try to visualize which parts of an image are important for the CNN by computing CAM (Class Activation Maps) with the visualize_cam function from the keras-vis library.

We use a DenseNet architecture.
In [34]:
#Gradcam algorithm implementation. 

from keras.preprocessing.image import load_img, img_to_array

#Load and show the image
_img = load_img(basepath+"/train/COVID/ryct.2020200034.fig2.jpeg",target_size=(224,224))
plt.imshow(_img)
plt.show()
In [0]:
#We need to downgrade scipy for visualize_cam (keras-vis) to work.
!pip uninstall -y scipy
!pip install scipy==1.2.0
In [0]:
#Let's load the pretrained Densenet model.
from keras.applications.densenet import DenseNet121,preprocess_input
from keras.models import load_model
d_model=load_model(basepath+"densenet.h5")
In [50]:
#Let's generate a prediction with the image.
img               = img_to_array(_img)
img               = preprocess_input(img)
y_pred            = d_model.predict(img[np.newaxis,...])
y_pred
class_idxs_sorted = np.argsort(y_pred.flatten())[::-1]

class_idxs_sorted
Out[50]:
array([0, 1, 2])
In [54]:
#We have to set the linear activation in the last layer.

from vis.utils import utils
from keras.activations import linear
# Find the index of the model's last (classification) layer
layer_idx = utils.find_layer_idx(d_model, 'fc1000')

# Change its activation from softmax to linear
d_model.layers[layer_idx].activation = linear
d_model = utils.apply_modifications(d_model)

layer_idx
Out[54]:
428
In [58]:
#Find the penultimate layer to use for CAM: 'bn' is the last BatchNormalization layer of DenseNet121, after the final convolutional block.
penultima_capa = utils.find_layer_idx(d_model, "bn") 
penultima_capa
Out[58]:
425
In [0]:
#Now we can generate the heat map.
from vis.visualization import visualize_cam



class_idx  = class_idxs_sorted[0]
seed_input = img
grad_top1  = visualize_cam(d_model, layer_idx, class_idx, seed_input, 
                           penultimate_layer_idx = penultima_capa,#None,
                           backprop_modifier     = None,
                           grad_modifier         = None)
In [67]:
#Show the results.
def plot_map(grads):
    fig, axes = plt.subplots(1,2,figsize=(14,5))
    axes[0].imshow(_img)
    axes[1].imshow(_img)
    i = axes[1].imshow(grads,cmap="jet",alpha=0.8)
    fig.colorbar(i)
    plt.suptitle("heatmap")
plot_map(grad_top1)