
Deep Learning Exercise 1: Dropout and Strides for Larger Models


Introduction to Dropout and Strides for Larger Models

This notebook is a hands-on activity from the Computer Vision course's Deep Learning track, covering the exercise Dropout and Strides for Larger Models.


In the last task, you built a model to classify clothing items in the Fashion-MNIST dataset. This time, you will upgrade that model by making it larger, using longer strides, and adding dropout.

These modifications are intended to improve both the speed and the accuracy of your model, and they make up the final stage of the Deep Learning track.

What is Dropout?

Dropout plays the role of a clever trickster in your model’s toolbox. Its main task is to shake things up by randomly removing some neurons during training.

Why? To prevent overfitting, a tricky situation where your model becomes too comfortable with the training data and struggles to apply its knowledge to new information.

By eliminating neurons at random, Dropout pushes your model to develop stronger and more flexible features, almost like providing it with a crash course in adaptability.

This approach stops the model from relying too heavily on specific features and helps ensure it can handle unfamiliar inputs.
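
In Keras, dropout is added as its own layer. The snippet below is a minimal sketch (the layer sizes and the 0.5 rate are illustrative assumptions, not part of this exercise): during training, Dropout(0.5) randomly zeroes half of the activations flowing out of the preceding layer.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

sketch_model = Sequential()
sketch_model.add(Dense(128, activation='relu', input_shape=(784,)))
# Illustrative: randomly zero 50% of this layer's outputs on each training step
sketch_model.add(Dropout(0.5))
sketch_model.add(Dense(10, activation='softmax'))

At inference time, Keras disables dropout automatically, so predictions always use the full network.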

What are Strides for Larger Models?

Let’s dive into the concept of Strides. Picture your model as a detective analyzing an image for clues. Strides essentially instruct your detective to take larger steps while investigating.

Instead of meticulously examining each pixel, Strides allow your model to move forward more quickly, covering more ground in fewer steps.

This is particularly useful for larger models, where efficiency matters. By taking bigger strides, your model can speed up its investigation, often with little loss in accuracy. It’s like enhancing your detective’s skills, enabling them to solve the case in no time.
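
To make this concrete, here is a small sketch (the shapes assume Keras's default 'valid' padding): a 3×3 convolution with a stride of 2 shrinks a 28×28 input to 13×13, so every layer after it handles roughly a quarter of the positions.

from tensorflow.keras.layers import Conv2D, Input
from tensorflow.keras.models import Model

inp = Input(shape=(28, 28, 1))
out = Conv2D(16, kernel_size=3, strides=2)(inp)  # stride 2: skip every other position
print(Model(inp, out).output_shape)  # (None, 13, 13, 16)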

Whether you’re training a massive model to handle vast amounts of data or fine-tuning a sleek neural network for rapid predictions, Dropout and Strides are your reliable companions in building fast, accurate models that are prepared for any challenge.

Data Preparation

Execute the code cell below to initialize the necessary components:

import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow import keras

# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.deep_learning.exercise_8 import *
print("Setup Complete")

img_rows, img_cols = 28, 28
num_classes = 10

def prep_data(raw):
    # First column is the label; one-hot encode it
    y = raw[:, 0]
    out_y = keras.utils.to_categorical(y, num_classes)

    # Remaining columns are pixel values; reshape into 28x28 grayscale images
    x = raw[:, 1:]
    num_images = raw.shape[0]
    out_x = x.reshape(num_images, img_rows, img_cols, 1)
    # Scale pixel intensities from [0, 255] to [0, 1]
    out_x = out_x / 255
    return out_x, out_y

fashion_file = "../input/fashionmnist/fashion-mnist_train.csv"
fashion_data = np.loadtxt(fashion_file, skiprows=1, delimiter=',')
x, y = prep_data(fashion_data)

Increasing Stride Size in a Layer

Modifying the stride size in a layer can have a significant impact on the model’s performance. The current model does not set strides explicitly, so each convolution uses the default stride of 1.

I suggest running the model with this configuration and observing its accuracy and epoch duration. Afterward, you can experiment by adjusting the stride length in one of the layers to see how it affects the model’s performance.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Conv2D, Dropout

batch_size = 16

fashion_model = Sequential()
fashion_model.add(Conv2D(16, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=(img_rows, img_cols, 1)))
fashion_model.add(Conv2D(16, (3, 3), activation='relu'))
fashion_model.add(Flatten())
fashion_model.add(Dense(128, activation='relu'))
fashion_model.add(Dense(num_classes, activation='softmax'))

fashion_model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer='adam',
              metrics=['accuracy'])

fashion_model.fit(x, y,
          batch_size=batch_size,
          epochs=3,
          validation_split = 0.2)

After running the cell above, note that the code below is identical except that the model is now called fashion_model_1.

Set the stride length to 2 in the second convolutional layer of fashion_model_1, as shown below. Then run the cell and compare its speed and accuracy against the original model.

fashion_model_1 = Sequential()
fashion_model_1.add(Conv2D(16, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=(img_rows, img_cols, 1)))
fashion_model_1.add(Conv2D(16, (3, 3), activation='relu', strides=2))
fashion_model_1.add(Flatten())
fashion_model_1.add(Dense(128, activation='relu'))
fashion_model_1.add(Dense(num_classes, activation='softmax'))

fashion_model_1.compile(loss=keras.losses.categorical_crossentropy,
              optimizer='adam',
              metrics=['accuracy'])

fashion_model_1.fit(x, y,
          batch_size=batch_size,
          epochs=3,
          validation_split = 0.2)

You should find that training is roughly twice as fast, while accuracy is almost unchanged.

This model also makes predictions faster, which matters in many real-world settings. When deploying deep learning models, evaluate whether this kind of speed improvement is necessary for your specific application.

By experimenting with extra layers, more filters per convolution, and further tuning, you may be able to build a model that is both faster and more accurate than the original. One possible variant is sketched below.
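
As one illustration, here is a variant that adds dropout to the strided model. This configuration is an assumption for you to experiment with, not an official solution; it reuses the imports and data from the cells above.

fashion_model_2 = Sequential()
fashion_model_2.add(Conv2D(24, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=(img_rows, img_cols, 1)))
fashion_model_2.add(Conv2D(24, (3, 3), activation='relu', strides=2))
fashion_model_2.add(Flatten())
fashion_model_2.add(Dense(128, activation='relu'))
# Illustrative: drop half of the dense activations to curb overfitting
fashion_model_2.add(Dropout(0.5))
fashion_model_2.add(Dense(num_classes, activation='softmax'))

fashion_model_2.compile(loss=keras.losses.categorical_crossentropy,
              optimizer='adam',
              metrics=['accuracy'])

fashion_model_2.fit(x, y,
          batch_size=batch_size,
          epochs=3,
          validation_split=0.2)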

The Power of Image Segmentation with U-Net

In the realm of computer vision, interpreting images is akin to unraveling a mystery. It involves deconstructing the overall image into more manageable pieces to unveil the treasures hidden within.

This is where image segmentation comes into play—a potent method that dissects images into significant segments, each with its own narrative.

Leading the way in this segmentation breakthrough is U-Net, a truly revolutionary advancement in the field.

What is Image Segmentation?

Image segmentation is like the superhero of computer vision. Instead of viewing images as one big chunk, segmentation divides them into separate regions or objects.

This detailed approach allows for a wide range of applications, such as detecting tumors in medical images or monitoring objects in self-driving cars.

Introducing U-Net: The Segmentation Maverick

U-Net is a standout in the world of image segmentation tools, thanks to its unique U-shaped architecture. This innovative design allows U-Net to capture intricate details while maintaining spatial information. Let’s dive deeper into how U-Net works.

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, UpSampling2D, Concatenate
from tensorflow.keras.models import Model

def unet(input_shape=(256, 256, 3), num_classes=1):
    inputs = tf.keras.layers.Input(input_shape)
    
    # Encoder
    conv1 = Conv2D(64, 3, activation='relu', padding='same')(inputs)
    conv1 = Conv2D(64, 3, activation='relu', padding='same')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    
    conv2 = Conv2D(128, 3, activation='relu', padding='same')(pool1)
    conv2 = Conv2D(128, 3, activation='relu', padding='same')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    
    conv3 = Conv2D(256, 3, activation='relu', padding='same')(pool2)
    conv3 = Conv2D(256, 3, activation='relu', padding='same')(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
    
    conv4 = Conv2D(512, 3, activation='relu', padding='same')(pool3)
    conv4 = Conv2D(512, 3, activation='relu', padding='same')(conv4)
    drop4 = Dropout(0.5)(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)
    
    # Bottom
    conv5 = Conv2D(1024, 3, activation='relu', padding='same')(pool4)
    conv5 = Conv2D(1024, 3, activation='relu', padding='same')(conv5)
    drop5 = Dropout(0.5)(conv5)
    
    # Decoder
    up6 = Conv2D(512, 2, activation='relu', padding='same')(UpSampling2D(size=(2, 2))(drop5))
    merge6 = Concatenate(axis=3)([drop4, up6])
    conv6 = Conv2D(512, 3, activation='relu', padding='same')(merge6)
    conv6 = Conv2D(512, 3, activation='relu', padding='same')(conv6)
    
    up7 = Conv2D(256, 2, activation='relu', padding='same')(UpSampling2D(size=(2, 2))(conv6))
    merge7 = Concatenate(axis=3)([conv3, up7])
    conv7 = Conv2D(256, 3, activation='relu', padding='same')(merge7)
    conv7 = Conv2D(256, 3, activation='relu', padding='same')(conv7)
    
    up8 = Conv2D(128, 2, activation='relu', padding='same')(UpSampling2D(size=(2, 2))(conv7))
    merge8 = Concatenate(axis=3)([conv2, up8])
    conv8 = Conv2D(128, 3, activation='relu', padding='same')(merge8)
    conv8 = Conv2D(128, 3, activation='relu', padding='same')(conv8)
    
    up9 = Conv2D(64, 2, activation='relu', padding='same')(UpSampling2D(size=(2, 2))(conv8))
    merge9 = Concatenate(axis=3)([conv1, up9])
    conv9 = Conv2D(64, 3, activation='relu', padding='same')(merge9)
    conv9 = Conv2D(64, 3, activation='relu', padding='same')(conv9)
    
    outputs = Conv2D(num_classes, 1, activation='sigmoid')(conv9)
    
    model = Model(inputs=inputs, outputs=outputs)
    return model

# Instantiate the U-Net model
model = unet(input_shape=(256, 256, 3), num_classes=1)
model.summary()


This piece of code establishes a U-Net structure for image segmentation. It consists of an encoder-decoder network that includes skip connections to retain spatial information.

The model is configured for input images of 256×256 pixels and produces a single-channel segmentation mask, which suits binary segmentation (foreground versus background).

Feel free to modify the input shape and the number of output classes to suit the requirements of your dataset.
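
To actually train the network, compile it with a loss that matches the sigmoid output. The snippet below is a hedged sketch: the optimizer, loss, and random dummy arrays are illustrative assumptions, used only to verify that input and output shapes line up.

# Binary segmentation: a sigmoid output pairs with binary cross-entropy
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Illustrative dummy batch: 4 RGB images in, 4 single-channel masks out
dummy_images = np.random.rand(4, 256, 256, 3).astype('float32')
dummy_masks = np.random.randint(0, 2, (4, 256, 256, 1)).astype('float32')
model.fit(dummy_images, dummy_masks, epochs=1)

For multi-class segmentation, you would switch the final activation to 'softmax' and the loss to 'categorical_crossentropy', with one mask channel per class.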

How Does U-Net Work?

U-Net is essentially a combination of an encoder-decoder network with skip connections. The encoder acts like a detective, identifying important features in the image, while the decoder acts like a puzzle solver, reconstructing the segmented image based on those features.

The key feature of U-Net is the skip connections, which create shortcuts between corresponding layers of the encoder and decoder, ensuring that fine details are preserved.
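
The sketch below isolates one skip connection, reusing the imports from the U-Net cell above; the shapes are illustrative. An encoder feature map is concatenated with an upsampled decoder map, so fine spatial detail flows straight across the "U".

# Illustrative shapes: an encoder map at 128x128 and a deeper decoder map at 64x64
encoder_map = tf.keras.layers.Input(shape=(128, 128, 64))
decoder_map = tf.keras.layers.Input(shape=(64, 64, 128))

upsampled = UpSampling2D(size=(2, 2))(decoder_map)    # back up to 128x128
skip = Concatenate(axis=3)([encoder_map, upsampled])  # 64 + 128 = 192 channels
print(skip.shape)  # (None, 128, 128, 192)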

Why U-Net?

U-Net is not your average segmentation tool – it’s a game-changer. Here’s why:

  • Spatial Preservation: The skip connections in U-Net maintain spatial details accurately throughout the segmentation process, resulting in precise segmentations.
  • Efficient Training: U-Net’s compact yet powerful architecture makes it highly efficient to train, making it a popular choice for segmentation tasks, even with limited data.
  • Versatility: Whether it’s biomedical imaging or satellite imagery, U-Net excels in various domains, effortlessly adapting to different segmentation challenges.

The world of computer vision is always changing, and image segmentation is a key technique for revealing the mysteries hidden in images.

With U-Net at the forefront, the opportunities are limitless. Get ready, explore, and allow U-Net to lead you on your path to mastering segmentation.

Conclusion

Congratulations on finishing the Deep Learning course! You now have the skills to create and enhance computer vision models.

Feel free to continue exploring the dataset by trying out different models, such as adding dropout or extra layers, or by starting a new project to apply your newfound skills.

If you have any questions or thoughts, don’t hesitate to join the Learn Discussion forum to connect with fellow learners.

Some interesting datasets you can delve into include:

  • Recognizing written letters
  • Identifying flowers
  • Distinguishing between cats and dogs
  • Analyzing images of 10 monkeys
  • Predicting bone age from X-rays

Remember, practice is key to improvement. Enjoy the learning journey!

