Introduction
Hey tech enthusiasts! Are you excited to delve into the amazing realm of deep learning? In this blog post, we’re introducing an interesting project that compares real images with AI-generated synthetic ones.
You can also check:
- Deep Learning Project 1: Real and AI-Generated Synthetic Images
- Deep Learning Project 2: Building a Fashion Recommendation System
Whether you’re looking for deep learning projects on GitHub, searching for deep learning projects for your final year, or simply interested in deep learning projects for students, this project has something for everyone.
Just imagine training a computer to distinguish between a genuine photograph and one produced by an AI.
It’s a captivating concept, isn’t it? We’ll delve into how advanced neural networks generate incredibly lifelike images and the techniques we employ to differentiate between them. Get ready to combine art and science in a way that’s both thrilling and enlightening. Let’s kick things off!
Deep Learning Code
Import Libraries
!pip install torchinfo
import os
import json
import copy

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from PIL import Image

import torch
import torchvision
import torchvision.models as models
from torch import nn
from torch import optim
import torch.nn.functional as F
from torch.utils.data import DataLoader, Dataset, random_split
from torchvision import transforms
from torchvision.transforms import InterpolationMode
from torchvision.transforms import v2
from torchinfo import summary
from torchmetrics.functional.classification import binary_f1_score

from tqdm import tqdm
from sklearn.model_selection import train_test_split
from sklearn.metrics import (confusion_matrix, precision_score, recall_score,
                             f1_score, precision_recall_curve, ConfusionMatrixDisplay,
                             roc_curve, auc, accuracy_score)
# Check and allocate GPU usage
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
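Since the rest of the notebook leans on random operations (trimming, splitting, augmentation, shuffling), it can help to pin the random seeds up front so runs are repeatable. This is a minimal sketch, not part of the original notebook; the seed value 42 is an arbitrary choice:

import random

import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    # Seed every RNG the pipeline touches so runs are repeatable.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)

set_seed(42)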
Data Pre-Processing
########################
# Adapted from Practical
########################
class CustomDataset(Dataset):
    def __init__(self, root, transform=None):
        self.data = []     # store the filenames of all samples
        self.targets = []  # store the labels of all samples
        # get the list of all classes
        self.classes = os.listdir(root)
        # save the transformation pipeline
        self.transform = transform
        for cls_id, cls in enumerate(self.classes):
            cls_path = os.path.join(root, cls)
            cls_filenames = os.listdir(cls_path)
            cls_filepaths = [os.path.join(cls_path, fn) for fn in cls_filenames]
            self.data.extend(cls_filepaths)  # combine two lists
            cls_labels = [cls_id] * len(cls_filenames)
            self.targets.extend(cls_labels)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        # get the image
        image = Image.open(self.data[index])
        # perform transformation
        if self.transform is not None:
            image = self.transform(image)
        # get the label
        label = self.targets[index]
        return image, label
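Note that CustomDataset infers the class names from the sub-folder names, so it expects one folder per class. For this dataset the layout looks roughly like this (the filenames are hypothetical):

/kaggle/input/train/
├── FAKE/
│   ├── 0001.jpg
│   └── ...
└── REAL/
    ├── 0001.jpg
    └── ...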
########################
# Adapted from Practical
########################
# Trim the dataset to a random subset of num_samples images
def trim_dataset(dataset, num_samples):
    selected = np.random.permutation(np.arange(len(dataset)))[:num_samples]
    dataset.data = np.array(dataset.data)[selected].tolist()
    dataset.targets = np.array(dataset.targets)[selected].tolist()
    return dataset
# Linux command to get the image files location
!ls /kaggle/input/test
FAKE REAL
########################################
# Melissa Yap Chia Chean & Lau Xin Vern
########################################
# Function to split a dataset into training and validation subsets
def split_dataset(dataset, test_size=0.2, shuffle=True, random_state=42):
    # Extract data and targets from the custom dataset
    data = np.array(dataset.data)
    targets = np.array(dataset.targets)

    # Split data and targets using train_test_split
    X_train, X_val, y_train, y_val = train_test_split(
        data, targets, test_size=test_size, shuffle=shuffle, random_state=random_state)

    # Create new CustomDataset objects for the two splits
    # (note: this relies on the module-level `root` being set before each call)
    trainset = CustomDataset(root=root, transform=dataset.transform)
    trainset.data = X_train.tolist()  # convert to list format for compatibility
    trainset.targets = y_train.tolist()

    valset = CustomDataset(root=root, transform=dataset.transform)
    valset.data = X_val.tolist()
    valset.targets = y_val.tolist()

    return trainset, valset
########################################
# Melissa Yap Chia Chean & Lau Xin Vern
########################################
# Create the dataset for training the model
root = "/kaggle/input/train"
trainset = CustomDataset(root=root, transform=None)
trim_dataset(trainset, 10000)

# Adjust test_size as needed (0.1 gives a 10% validation split)
trainset, valset = split_dataset(trainset, test_size=0.1)

print('Number of samples in training set:', len(trainset))
print('Number of samples in validation set:', len(valset))
print('Number of classes:', trainset.classes)  # classes are preserved in both sets
Number of samples in training set: 9000
Number of samples in validation set: 1000
Number of classes: ['FAKE', 'REAL']
########################################
# Melissa Yap Chia Chean & Lau Xin Vern
########################################
# Create the dataset for testing the model
root = "/kaggle/input/test"
testset = CustomDataset(root=root, transform=None)
trim_dataset(testset, 2000)

# Hold out a handful of images for manual inspection later (test_size=0.003 of 2000 gives 6 images)
testset, testvalset = split_dataset(testset, test_size=0.003)
testvalset2 = copy.deepcopy(testvalset)

print('Number of samples in test set:', len(testset))
print('Number of samples in test validation set:', len(testvalset))
print('Number of samples in test validation set2:', len(testvalset2))
print('Number of classes:', testset.classes)  # classes are preserved in both sets
Number of samples in test set: 1994
Number of samples in test validation set: 6
Number of samples in test validation set2: 6
Number of classes: ['FAKE', 'REAL']
########################################
# Melissa Yap Chia Chean & Lau Xin Vern
########################################
# Observe the number of images for each class in trainset
# (per trainset.classes above, class index 0 is 'FAKE' and index 1 is 'REAL')
fake_count_train = sum(label == 0 for label in trainset.targets)
real_count_train = sum(label == 1 for label in trainset.targets)

# Define labels and counts
labels = ['Fake', 'Real']
train_counts = [fake_count_train, real_count_train]

# Define bar width
bar_width = 0.35

# Set up the bar chart
fig, ax = plt.subplots()
index = range(len(labels))

# Plot training set counts
bar1 = ax.bar(index, train_counts, bar_width, label='Training Set')

# Add labels, title, and legend
ax.set_xlabel('Class')
ax.set_ylabel('Number of Samples')
ax.set_title('Number of Images for Each Class in Train Set')
ax.set_xticks([i for i in index])
ax.set_xticklabels(labels)
ax.legend()

# Show the plot
plt.show()
########################################
# Melissa Yap Chia Chean & Lau Xin Vern
########################################
# Observe the number of images for each class in valset
fake_count_val = sum(label == 0 for label in valset.targets)
real_count_val = sum(label == 1 for label in valset.targets)
val_counts = [fake_count_val, real_count_val]

fig, bx = plt.subplots()

# Plot validation set counts
bar2 = bx.bar(index, val_counts, bar_width, label='Val Set')

# Add labels, title, and legend
bx.set_xlabel('Class')
bx.set_ylabel('Number of Samples')
bx.set_title('Number of Images for Each Class in Validation Set')
bx.set_xticks([i for i in index])
bx.set_xticklabels(labels)
bx.legend()

# Show the plot
plt.show()
###############################
# Kok How Meng & Kuan Wei Yeow
###############################
# Check that the images can be accessed
image, label = trainset[1]
display(image)
print("Class =", trainset.classes[label])
Image([[[-2.0837, -2.0837, -2.0837,  ...,  0.4679,  0.5022, -2.0837],
        [ 0.2453,  0.2453,  0.2282,  ...,  0.4851,  0.5022, -2.0837],
        ...,
        [-1.7696,  0.4439,  0.4614,  ..., -1.7696, -1.7696, -1.7696]]])
Class = REAL
###############################
# Kok How Meng & Kuan Wei Yeow
###############################
# This part is to observe the images after image transformation
transform = v2.Compose([
    # resize image to 232 x 232
    v2.Resize(size=(232, 232), interpolation=InterpolationMode.BILINEAR, antialias=True),
    v2.CenterCrop(224),
    # data augmentation
    v2.RandomHorizontalFlip(p=0.5),
    v2.RandomRotation(degrees=30),
    v2.ColorJitter(brightness=0.3, contrast=0.3),
    v2.RandomChannelPermutation()
    # convert PIL image to tensor with image values in [0, 1]
    # transforms.ToTensor(),
    # transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])

img, label = trainset[1]
transformed_img = transform(img)

fig = plt.figure(figsize=(5, 3))
fig.add_subplot(1, 2, 1)
plt.imshow(img)
plt.axis('off')
plt.title("Original")
fig.add_subplot(1, 2, 2)
plt.imshow(transformed_img)
plt.axis('off')
plt.title("Transformed")
plt.show()
###############################
# Kok How Meng & Kuan Wei Yeow
###############################
# Define transformations for trainset and testset
train_transform = v2.Compose([
    # resize image to 232 x 232
    v2.Resize(size=(232, 232), interpolation=InterpolationMode.BILINEAR, antialias=True),
    v2.CenterCrop(224),
    # data augmentation
    v2.RandomHorizontalFlip(p=0.5),
    v2.RandomRotation(degrees=30),
    v2.ColorJitter(brightness=0.3, contrast=0.3),
    v2.RandomChannelPermutation(),
    # convert PIL image to tensor with image values in [0, 1]
    v2.ToImage(),
    v2.ToDtype(torch.float32, scale=True),
    v2.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])

test_transform = v2.Compose([
    # resize image to 232 x 232
    v2.Resize(size=(232, 232), interpolation=InterpolationMode.BILINEAR, antialias=True),
    v2.CenterCrop(224),
    # convert PIL image to tensor with image values in [0, 1]
    v2.ToImage(),
    v2.ToDtype(torch.float32, scale=True),
    v2.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
###############################
# Kok How Meng & Kuan Wei Yeow
###############################
# Apply the transformations to trainset, valset, testvalset, and testset
trainset.transform = train_transform
valset.transform = test_transform
testset.transform = test_transform
testvalset.transform = test_transform

print(trainset.transform)
print(valset.transform)
print(testset.transform)
Compose(
      Resize(size=[232, 232], interpolation=InterpolationMode.BILINEAR, antialias=True)
      CenterCrop(size=(224, 224))
      RandomHorizontalFlip(p=0.5)
      RandomRotation(degrees=[-30.0, 30.0], interpolation=InterpolationMode.NEAREST, expand=False, fill=0)
      ColorJitter(brightness=(0.7, 1.3), contrast=(0.7, 1.3))
      RandomChannelPermutation()
      ToImage()
      ToDtype(scale=True)
      Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], inplace=False)
)
Compose(
      Resize(size=[232, 232], interpolation=InterpolationMode.BILINEAR, antialias=True)
      CenterCrop(size=(224, 224))
      ToImage()
      ToDtype(scale=True)
      Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], inplace=False)
)
Compose(
      Resize(size=[232, 232], interpolation=InterpolationMode.BILINEAR, antialias=True)
      CenterCrop(size=(224, 224))
      ToImage()
      ToDtype(scale=True)
      Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], inplace=False)
)
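One practical note: once Normalize has been applied, plt.imshow clips the values and previews look distorted, which is why the visualization cell above left normalization out. If you ever want to eyeball a normalized tensor, a small helper can invert the normalization first. This is a sketch using the same ImageNet statistics as above; denormalize is a hypothetical helper, not part of the notebook:

import torch

IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def denormalize(img: torch.Tensor) -> torch.Tensor:
    # Invert Normalize(mean, std) so the tensor is back in [0, 1] for plotting.
    return (img * IMAGENET_STD + IMAGENET_MEAN).clamp(0.0, 1.0)

# usage: plt.imshow(denormalize(tensor_img).permute(1, 2, 0))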
########################
# Adapted from Practical
########################
# Load the datasets into DataLoaders with a batch size of 16
trainloader16 = DataLoader(trainset, batch_size=16, shuffle=True, num_workers=2)
valloader16 = DataLoader(valset, batch_size=16, shuffle=True, num_workers=2)
testloader16 = DataLoader(testset, batch_size=16, shuffle=True, num_workers=2)
testvalloader16 = DataLoader(testvalset, batch_size=6, shuffle=True, num_workers=2)

x_batch_train, y_batch_train = next(iter(trainloader16))
print(f'Shape of {x_batch_train.shape}')
print(f'Shape of {y_batch_train.shape}')
x_batch_train, y_batch_train = next(iter(valloader16))
print(f'Shape of {x_batch_train.shape}')
print(f'Shape of {y_batch_train.shape}')
x_batch_test, y_batch_test = next(iter(testloader16))
print(f'Shape of {x_batch_test.shape}')
print(f'Shape of {y_batch_test.shape}')
x_batch_test, y_batch_test = next(iter(testvalloader16))
print(f'Shape of {x_batch_test.shape}')
print(f'Shape of {y_batch_test.shape}')
Shape of torch.Size([16, 3, 224, 224])
Shape of torch.Size([16])
Shape of torch.Size([16, 3, 224, 224])
Shape of torch.Size([16])
Shape of torch.Size([16, 3, 224, 224])
Shape of torch.Size([16])
Shape of torch.Size([6, 3, 224, 224])
Shape of torch.Size([6])
########################
# Adapted from Practical
########################
# Load the datasets into DataLoaders with a batch size of 8
trainloader8 = DataLoader(trainset, batch_size=8, shuffle=True, num_workers=2)
valloader8 = DataLoader(valset, batch_size=8, shuffle=True, num_workers=2)
testloader8 = DataLoader(testset, batch_size=8, shuffle=True, num_workers=2)

x_batch_train, y_batch_train = next(iter(trainloader8))
print(f'Shape of {x_batch_train.shape}')
print(f'Shape of {y_batch_train.shape}')
x_batch_train, y_batch_train = next(iter(valloader8))
print(f'Shape of {x_batch_train.shape}')
print(f'Shape of {y_batch_train.shape}')
x_batch_test, y_batch_test = next(iter(testloader8))
print(f'Shape of {x_batch_test.shape}')
print(f'Shape of {y_batch_test.shape}')
Shape of torch.Size([8, 3, 224, 224])
Shape of torch.Size([8])
Shape of torch.Size([8, 3, 224, 224])
Shape of torch.Size([8])
Shape of torch.Size([8, 3, 224, 224])
Shape of torch.Size([8])
Building ResNet50
#####################
# Kok How Meng
#####################
# Define the layers in each bottleneck block
class Bottleneck(nn.Module):
    def __init__(self, in_channels, out_channels, downsample=None, stride=1):
        super(Bottleneck, self).__init__()
        self.expansion = 4
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.conv3 = nn.Conv2d(out_channels, out_channels * self.expansion, kernel_size=1, stride=1, padding=0, bias=False)
        self.bn3 = nn.BatchNorm2d(out_channels * self.expansion)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x  # also called identity
        x = F.relu(self.bn1(self.conv1(x)))
        x = F.relu(self.bn2(self.conv2(x)))
        x = self.bn3(self.conv3(x))
        if self.downsample is not None:
            residual = self.downsample(residual)
        x += residual
        x = F.relu(x)
        return x
#####################
# Kok How Meng
#####################
# Define the ResNet architecture
class ResNet(nn.Module):
    def __init__(self, block, layers, num_classes):
        super(ResNet, self).__init__()
        self.in_channels = 64
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)

        # ResNet layers
        self.layer1 = self._make_layer(block, layers[0], out_channels=64, stride=1)
        self.layer2 = self._make_layer(block, layers[1], out_channels=128, stride=2)
        self.layer3 = self._make_layer(block, layers[2], out_channels=256, stride=2)
        self.layer4 = self._make_layer(block, layers[3], out_channels=512, stride=2)

        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512 * 4, num_classes)

    def _make_layer(self, block, num_residual_blocks, out_channels, stride):
        downsample = None
        layers = []
        if stride != 1 or self.in_channels != out_channels * 4:
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channels, out_channels * 4, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels * 4)
            )
        layers.append(block(self.in_channels, out_channels, downsample, stride))
        self.in_channels = out_channels * 4
        for i in range(num_residual_blocks - 1):
            layers.append(block(self.in_channels, out_channels))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = F.relu(self.bn1(self.conv1(x)))
        x = self.maxpool(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return x
#####################
# Kok How Meng
#####################
# Define the ResNet50
def ResNet50(num_classes=1000):
    return ResNet(Bottleneck, [3, 4, 6, 3], num_classes)
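The [3, 4, 6, 3] list is what makes this a ResNet-50: it is the number of bottleneck blocks in each of the four stages. The same constructor covers the deeper variants if you swap in their standard block counts; these two definitions are illustrative additions, not part of the original notebook:

def ResNet101(num_classes=1000):
    # Standard ResNet-101 configuration
    return ResNet(Bottleneck, [3, 4, 23, 3], num_classes)

def ResNet152(num_classes=1000):
    # Standard ResNet-152 configuration
    return ResNet(Bottleneck, [3, 8, 36, 3], num_classes)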
#####################
# Kok How Meng
#####################
# Modify the fully connected layer of the ResNet50
def build_network():
    net = ResNet50(1)
    # customize the fc layer
    net.fc = nn.Sequential(
        nn.Linear(in_features=2048, out_features=1024),
        nn.ReLU(),
        nn.Linear(in_features=1024, out_features=512),
        nn.ReLU(),
        nn.Linear(in_features=512, out_features=1),
        nn.Sigmoid()
    )
    return net
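A quick note on the head: ending with nn.Sigmoid() pairs with the F.binary_cross_entropy loss used in the training loop later. A common alternative, shown below as a sketch rather than what this notebook uses, is to output a raw logit and train with nn.BCEWithLogitsLoss, which folds the sigmoid into the loss for better numerical stability (the training loop would need to swap its loss call accordingly):

def build_network_logits():
    # Hypothetical variant of build_network() that ends in a raw logit.
    net = ResNet50(1)
    net.fc = nn.Sequential(
        nn.Linear(in_features=2048, out_features=1024),
        nn.ReLU(),
        nn.Linear(in_features=1024, out_features=512),
        nn.ReLU(),
        nn.Linear(in_features=512, out_features=1),  # no Sigmoid here
    )
    return net

# criterion = nn.BCEWithLogitsLoss()  # combines sigmoid + BCE in one numerically stable op
# at inference time: prob = torch.sigmoid(net(x))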
#####################
# Kok How Meng
#####################
# Summary of ResNet50
resnet = build_network()
summary(resnet, input_size=(16, 3, 224, 224), col_names=["output_size", "num_params", "mult_adds"])
Building AlexNet
#####################
# Kuan Wei Yeow
#####################
class AlexNet(nn.Module):
    def __init__(self, num_classes=1):
        super(AlexNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=0),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.avgpool = nn.AdaptiveAvgPool2d((6, 6))
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
            nn.Sigmoid()
        )

    def forward(self, x):
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.classifier(x)
        return x
#####################
# Kuan Wei Yeow
#####################
# Create an instance of AlexNet
alexnet = AlexNet()

# Print the architecture to verify
summary(alexnet, input_size=(16, 3, 224, 224), col_names=["output_size", "num_params", "mult_adds"])
===================================================================================================================
Layer (type:depth-idx)                   Output Shape              Param #                   Mult-Adds
===================================================================================================================
AlexNet                                  [16, 1]                   --                        --
├─Sequential: 1-1                        [16, 256, 5, 5]           --                        --
│    └─Conv2d: 2-1                       [16, 96, 54, 54]          34,944                    1,630,347,264
│    └─ReLU: 2-2                         [16, 96, 54, 54]          --                        --
│    └─MaxPool2d: 2-3                    [16, 96, 26, 26]          --                        --
│    └─Conv2d: 2-4                       [16, 256, 26, 26]         614,656                   6,648,119,296
│    └─ReLU: 2-5                         [16, 256, 26, 26]         --                        --
│    └─MaxPool2d: 2-6                    [16, 256, 12, 12]         --                        --
│    └─Conv2d: 2-7                       [16, 384, 12, 12]         885,120                   2,039,316,480
│    └─ReLU: 2-8                         [16, 384, 12, 12]         --                        --
│    └─Conv2d: 2-9                       [16, 384, 12, 12]         1,327,488                 3,058,532,352
│    └─ReLU: 2-10                        [16, 384, 12, 12]         --                        --
│    └─Conv2d: 2-11                      [16, 256, 12, 12]         884,992                   2,039,021,568
│    └─ReLU: 2-12                        [16, 256, 12, 12]         --                        --
│    └─MaxPool2d: 2-13                   [16, 256, 5, 5]           --                        --
├─AdaptiveAvgPool2d: 1-2                 [16, 256, 6, 6]           --                        --
├─Sequential: 1-3                        [16, 1]                   --                        --
│    └─Dropout: 2-14                     [16, 9216]                --                        --
│    └─Linear: 2-15                      [16, 4096]                37,752,832                604,045,312
│    └─ReLU: 2-16                        [16, 4096]                --                        --
│    └─Dropout: 2-17                     [16, 4096]                --                        --
│    └─Linear: 2-18                      [16, 4096]                16,781,312                268,500,992
│    └─ReLU: 2-19                        [16, 4096]                --                        --
│    └─Linear: 2-20                      [16, 1]                   4,097                     65,552
│    └─Sigmoid: 2-21                     [16, 1]                   --                        --
===================================================================================================================
Total params: 58,285,441
Trainable params: 58,285,441
Non-trainable params: 0
Total mult-adds (G): 16.29
===================================================================================================================
Input size (MB): 9.63
Forward/backward pass size (MB): 77.91
Params size (MB): 233.14
Estimated Total Size (MB): 320.68
===================================================================================================================
Building VGGNet
###################################################################################################
# Adapted from https://www.analyticsvidhya.com/blog/2021/06/build-vgg-net-from-scratch-with-python/
###################################################################################################
vggtype = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M']

class VGGnet(nn.Module):
    def __init__(self, in_channels=3, num_classes=1):
        super(VGGnet, self).__init__()
        self.in_channels = in_channels
        self.conv_layers = self.create_conv_layers(vggtype)
        self.fcs = nn.Sequential(
            nn.Linear(512 * 7 * 7, 4096),
            nn.ReLU(),
            nn.Dropout(p=0.5),
            nn.Linear(4096, 4096),
            nn.ReLU(),
            nn.Dropout(p=0.5),
            nn.Linear(4096, num_classes),
            nn.Sigmoid()
        )

    def forward(self, x):
        x = self.conv_layers(x)
        x = x.reshape(x.shape[0], -1)
        x = self.fcs(x)
        return x

    def create_conv_layers(self, architecture):
        layers = []
        in_channels = self.in_channels
        for x in architecture:
            if type(x) == int:
                out_channels = x
                layers += [
                    nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
                              kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
                    nn.BatchNorm2d(x),
                    nn.ReLU(),
                ]
                in_channels = x
            elif x == "M":
                layers += [nn.MaxPool2d(kernel_size=(2, 2), stride=(2, 2))]
        return nn.Sequential(*layers)
vggnet = VGGnet()
summary(vggnet, input_size=(16, 3, 224, 224), col_names=["output_size", "num_params", "mult_adds"])
===================================================================================================================
Layer (type:depth-idx)                   Output Shape              Param #                   Mult-Adds
===================================================================================================================
VGGnet                                   [16, 1]                   --                        --
├─Sequential: 1-1                        [16, 512, 7, 7]           --                        --
│    └─Conv2d: 2-1                       [16, 64, 224, 224]        1,792                     1,438,646,272
│    └─BatchNorm2d: 2-2                  [16, 64, 224, 224]        128                       2,048
│    └─ReLU: 2-3                         [16, 64, 224, 224]        --                        --
│    └─Conv2d: 2-4                       [16, 64, 224, 224]        36,928                    29,646,389,248
│    └─BatchNorm2d: 2-5                  [16, 64, 224, 224]        128                       2,048
│    └─ReLU: 2-6                         [16, 64, 224, 224]        --                        --
│    └─MaxPool2d: 2-7                    [16, 64, 112, 112]        --                        --
│    └─Conv2d: 2-8                       [16, 128, 112, 112]       73,856                    14,823,194,624
│    └─BatchNorm2d: 2-9                  [16, 128, 112, 112]       256                       4,096
│    └─ReLU: 2-10                        [16, 128, 112, 112]       --                        --
│    └─Conv2d: 2-11                      [16, 128, 112, 112]       147,584                   29,620,699,136
│    └─BatchNorm2d: 2-12                 [16, 128, 112, 112]       256                       4,096
│    └─ReLU: 2-13                        [16, 128, 112, 112]       --                        --
│    └─MaxPool2d: 2-14                   [16, 128, 56, 56]         --                        --
│    └─Conv2d: 2-15                      [16, 256, 56, 56]         295,168                   14,810,349,568
│    └─BatchNorm2d: 2-16                 [16, 256, 56, 56]         512                       8,192
│    └─ReLU: 2-17                        [16, 256, 56, 56]         --                        --
│    └─Conv2d: 2-18                      [16, 256, 56, 56]         590,080                   29,607,854,080
│    └─BatchNorm2d: 2-19                 [16, 256, 56, 56]         512                       8,192
│    └─ReLU: 2-20                        [16, 256, 56, 56]         --                        --
│    └─Conv2d: 2-21                      [16, 256, 56, 56]         590,080                   29,607,854,080
│    └─BatchNorm2d: 2-22                 [16, 256, 56, 56]         512                       8,192
│    └─ReLU: 2-23                        [16, 256, 56, 56]         --                        --
│    └─MaxPool2d: 2-24                   [16, 256, 28, 28]         --                        --
│    └─Conv2d: 2-25                      [16, 512, 28, 28]         1,180,160                 14,803,927,040
│    └─BatchNorm2d: 2-26                 [16, 512, 28, 28]         1,024                     16,384
│    └─ReLU: 2-27                        [16, 512, 28, 28]         --                        --
│    └─Conv2d: 2-28                      [16, 512, 28, 28]         2,359,808                 29,601,431,552
│    └─BatchNorm2d: 2-29                 [16, 512, 28, 28]         1,024                     16,384
│    └─ReLU: 2-30                        [16, 512, 28, 28]         --                        --
│    └─Conv2d: 2-31                      [16, 512, 28, 28]         2,359,808                 29,601,431,552
│    └─BatchNorm2d: 2-32                 [16, 512, 28, 28]         1,024                     16,384
│    └─ReLU: 2-33                        [16, 512, 28, 28]         --                        --
│    └─MaxPool2d: 2-34                   [16, 512, 14, 14]         --                        --
│    └─Conv2d: 2-35                      [16, 512, 14, 14]         2,359,808                 7,400,357,888
│    └─BatchNorm2d: 2-36                 [16, 512, 14, 14]         1,024                     16,384
│    └─ReLU: 2-37                        [16, 512, 14, 14]         --                        --
│    └─Conv2d: 2-38                      [16, 512, 14, 14]         2,359,808                 7,400,357,888
│    └─BatchNorm2d: 2-39                 [16, 512, 14, 14]         1,024                     16,384
│    └─ReLU: 2-40                        [16, 512, 14, 14]         --                        --
│    └─Conv2d: 2-41                      [16, 512, 14, 14]         2,359,808                 7,400,357,888
│    └─BatchNorm2d: 2-42                 [16, 512, 14, 14]         1,024                     16,384
│    └─ReLU: 2-43                        [16, 512, 14, 14]         --                        --
│    └─MaxPool2d: 2-44                   [16, 512, 7, 7]           --                        --
├─Sequential: 1-2                        [16, 1]                   --                        --
│    └─Linear: 2-45                      [16, 4096]                102,764,544               1,644,232,704
│    └─ReLU: 2-46                        [16, 4096]                --                        --
│    └─Dropout: 2-47                     [16, 4096]                --                        --
│    └─Linear: 2-48                      [16, 4096]                16,781,312                268,500,992
│    └─ReLU: 2-49                        [16, 4096]                --                        --
│    └─Dropout: 2-50                     [16, 4096]                --                        --
│    └─Linear: 2-51                      [16, 1]                   4,097                     65,552
│    └─Sigmoid: 2-52                     [16, 1]                   --                        --
===================================================================================================================
Total params: 134,273,089
Trainable params: 134,273,089
Non-trainable params: 0
Total mult-adds (G): 247.68
===================================================================================================================
Input size (MB): 9.63
Forward/backward pass size (MB): 3469.21
Params size (MB): 537.09
Estimated Total Size (MB): 4015.94
===================================================================================================================
Define Training and Evaluation Functions
########################################
# Melissa Yap Chia Chean & Lau Xin Vern
########################################
# Training function:
# Tracks training loss, validation loss, training accuracy, and validation accuracy during training
def train(net, train_dataloader, val_dataloader, device, num_epochs, lr=0.1, momentum=0.8,
          step_size=5, gamma=0.5, verbose=True):
    train_acc_history = []
    val_acc_history = []
    train_history = []
    val_history = []

    net = net.to(device)
    optimizer = optim.SGD(net.parameters(), lr=lr, momentum=momentum)
    scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=step_size, gamma=gamma)

    for e in range(num_epochs):
        running_train_correct = 0.0
        running_train_loss = 0.0
        running_train_count = 0.0
        running_val_correct = 0.0
        running_val_loss = 0.0
        running_val_count = 0.0

        net.train()
        for i, (inputs, labels) in tqdm(enumerate(train_dataloader), total=len(train_dataloader)):
            inputs = inputs.to(device)
            labels = labels.to(device)
            labels = labels.reshape(-1, 1).float()

            optimizer.zero_grad()
            outs = net(inputs)
            train_loss = F.binary_cross_entropy(outs, labels)
            train_loss.backward()
            optimizer.step()

            running_train_loss += train_loss.item()
            running_train_count += 1
            preds = outs >= 0.5
            running_train_correct += (preds == labels).sum().item()

        train_loss = running_train_loss / running_train_count
        train_acc = 100 * (running_train_correct / len(train_dataloader.dataset))

        net.eval()
        with torch.no_grad():
            for inputs, labels in val_dataloader:
                inputs = inputs.to(device)
                labels = labels.to(device)
                labels = labels.reshape(-1, 1).float()

                outs = net(inputs)
                val_loss = F.binary_cross_entropy(outs, labels)

                running_val_loss += val_loss.item()
                running_val_count += 1
                preds = outs >= 0.5
                running_val_correct += (preds == labels).sum().item()

        val_loss = running_val_loss / running_val_count
        val_acc = 100 * (running_val_correct / len(val_dataloader.dataset))

        if verbose:
            print(f'[Epoch {e+1}/{num_epochs}] Training loss: {train_loss:.4f}, Validation loss: {val_loss:.4f}')
            print(f'Training accuracy: {train_acc:.2f}%, Validation accuracy: {val_acc:.2f}%')

        train_history.append(train_loss)
        val_history.append(val_loss)
        train_acc_history.append(train_acc)
        val_acc_history.append(val_acc)
        scheduler.step()

    return train_history, val_history, train_acc_history, val_acc_history
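One thing to be aware of: the loop above keeps only the metric histories, so the weights you end up with are simply whatever the last epoch produced. If you would rather keep the best weights seen during training, a small helper along these lines can be called right after val_acc is computed inside the epoch loop. This is a sketch, not part of the original notebook, and best_model.pt is an arbitrary filename:

def save_if_best(net, val_acc, best_acc, path='best_model.pt'):
    # Persist the weights whenever validation accuracy improves; returns the running best.
    if val_acc > best_acc:
        torch.save(net.state_dict(), path)
        return val_acc
    return best_acc

# Inside train(), after val_acc is computed:
#     best_acc = save_if_best(net, val_acc, best_acc)
# Afterwards, restore the best checkpoint with:
#     net.load_state_dict(torch.load('best_model.pt'))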
########################
# Adapted From Practical
########################
def evaluate(net, dataloader, device):
    # set to evaluation mode
    net.eval()

    all_targets = []
    all_predictions = []
    for inputs, targets in dataloader:
        # transfer to the GPU
        inputs = inputs.to(device)
        targets = targets.to(device)

        # perform prediction
        with torch.no_grad():
            outputs = net(inputs)
            predicted = (outputs >= 0.5).float()

        # update all_targets and all_predictions
        all_targets.extend(targets.cpu().numpy())
        all_predictions.extend(predicted.cpu().numpy())

    accuracy = accuracy_score(all_targets, all_predictions)
    accuracy = accuracy * 100
    print(f'{accuracy = :.2f}')
###############################
# Kok How Meng & Kuan Wei Yeow
###############################
# Evaluation function:
# Computes the confusion matrix, ROC curve, precision-recall curve, precision, recall, and F1 score
def f1(net, dataloader, device):
    # set to evaluation mode
    net.eval()

    all_targets = []
    all_predictions = []
    all_probs = []
    for inputs, targets in dataloader:
        # transfer to the GPU
        inputs = inputs.to(device)
        targets = targets.to(device)

        # perform prediction
        with torch.no_grad():
            outputs = net(inputs)
            predicted = (outputs >= 0.5).float()

        # update all_targets, all_predictions, and all_probs
        all_targets.extend(targets.cpu().numpy())
        all_predictions.extend(predicted.cpu().numpy())
        all_probs.extend(outputs.cpu().numpy())

    all_targets = np.array(all_targets)
    all_predictions = np.array(all_predictions)
    all_probs = np.array(all_probs)
    all_targets = torch.tensor(all_targets)
    all_predictions = torch.tensor(all_predictions)

    # Accuracy
    accuracy = accuracy_score(all_targets, all_predictions)
    print('Accuracy:', accuracy)

    # ROC curve
    fpr, tpr, thresholds = roc_curve(all_targets, all_probs)
    roc_auc = auc(fpr, tpr)

    # Confusion matrix, precision, recall, and F1
    cm = confusion_matrix(all_targets, all_predictions)
    precision = precision_score(all_targets, all_predictions)
    recall = recall_score(all_targets, all_predictions)
    f1 = f1_score(all_targets, all_predictions)
    print('confusion_matrix: ', cm)
    print('precision: ', precision)
    print('recall: ', recall)
    print('f1 score: ', f1)

    # Precision-recall curve
    precision, recall, _ = precision_recall_curve(all_targets, all_probs)
    pr_auc = auc(recall, precision)

    # Draw the confusion matrix
    disp = ConfusionMatrixDisplay(confusion_matrix=cm)
    disp.plot()
    plt.show()

    return roc_auc, fpr, tpr, pr_auc, recall, precision
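f1() fixes the decision threshold at 0.5, which is a reasonable default for this roughly balanced dataset. If you ever need to trade precision against recall, the same precision_recall_curve output can be used to pick a different operating point. A sketch under that assumption; best_f1_threshold is a hypothetical helper, not part of the notebook:

import numpy as np
from sklearn.metrics import precision_recall_curve

def best_f1_threshold(targets, probs):
    # Sweep the candidate thresholds and return the one that maximizes F1.
    precision, recall, thresholds = precision_recall_curve(targets, probs)
    # precision/recall have one more entry than thresholds, so drop the last point.
    f1_scores = 2 * precision[:-1] * recall[:-1] / np.clip(precision[:-1] + recall[:-1], 1e-12, None)
    return thresholds[int(np.argmax(f1_scores))]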
Training and Evaluating the Models
AlexNet
train_history_alex, val_history_alex, train_acc_history_alex, val_acc_history_alex = train(
    alexnet, trainloader16, valloader16, device, num_epochs=15, lr=0.01, step_size=5, gamma=0.1)
100%|██████████| 563/563 [00:36<00:00, 15.57it/s]
[Epoch 1/15] Training loss: 0.6626, Validation loss: 0.5743
Training accuracy: 58.31%, Validation accuracy: 70.50%
100%|██████████| 563/563 [00:35<00:00, 15.79it/s]
[Epoch 2/15] Training loss: 0.5674, Validation loss: 0.5508
Training accuracy: 71.53%, Validation accuracy: 78.10%
100%|██████████| 563/563 [00:35<00:00, 15.70it/s]
[Epoch 3/15] Training loss: 0.5182, Validation loss: 0.5798
Training accuracy: 75.28%, Validation accuracy: 67.20%
100%|██████████| 563/563 [00:36<00:00, 15.56it/s]
[Epoch 4/15] Training loss: 0.4912, Validation loss: 0.5038
Training accuracy: 76.80%, Validation accuracy: 78.20%
100%|██████████| 563/563 [00:36<00:00, 15.58it/s]
[Epoch 5/15] Training loss: 0.4625, Validation loss: 0.6087
Training accuracy: 78.56%, Validation accuracy: 63.10%
100%|██████████| 563/563 [00:35<00:00, 15.72it/s]
[Epoch 6/15] Training loss: 0.3928, Validation loss: 0.4400
Training accuracy: 82.87%, Validation accuracy: 77.30%
100%|██████████| 563/563 [00:36<00:00, 15.45it/s]
[Epoch 7/15] Training loss: 0.3718, Validation loss: 0.4207
Training accuracy: 84.09%, Validation accuracy: 79.10%
100%|██████████| 563/563 [00:35<00:00, 15.76it/s]
[Epoch 8/15] Training loss: 0.3560, Validation loss: 0.3756
Training accuracy: 85.11%, Validation accuracy: 81.90%
100%|██████████| 563/563 [00:36<00:00, 15.39it/s]
[Epoch 9/15] Training loss: 0.3525, Validation loss: 0.4178
Training accuracy: 85.01%, Validation accuracy: 78.90%
100%|██████████| 563/563 [00:36<00:00, 15.51it/s]
[Epoch 10/15] Training loss: 0.3443, Validation loss: 0.3719
Training accuracy: 85.39%, Validation accuracy: 82.80%
100%|██████████| 563/563 [00:35<00:00, 15.64it/s]
[Epoch 11/15] Training loss: 0.3224, Validation loss: 0.3902
Training accuracy: 86.08%, Validation accuracy: 80.80%
100%|██████████| 563/563 [00:36<00:00, 15.46it/s]
[Epoch 12/15] Training loss: 0.3229, Validation loss: 0.3709
Training accuracy: 86.23%, Validation accuracy: 82.10%
100%|██████████| 563/563 [00:36<00:00, 15.60it/s]
[Epoch 13/15] Training loss: 0.3289, Validation loss: 0.3654
Training accuracy: 85.86%, Validation accuracy: 82.80%
100%|██████████| 563/563 [00:35<00:00, 15.70it/s]
[Epoch 14/15] Training loss: 0.3136, Validation loss: 0.3981
Training accuracy: 86.70%, Validation accuracy: 81.10%
100%|██████████| 563/563 [00:36<00:00, 15.56it/s]
[Epoch 15/15] Training loss: 0.3201, Validation loss: 0.3666
Training accuracy: 86.54%, Validation accuracy: 83.40%
evaluate(alexnet, testloader16, device)
accuracy = 81.39
import matplotlib.pyplot as plt

plt.plot(val_acc_history_alex, label='Val Acc')
plt.plot(train_acc_history_alex, label='Train Acc')
plt.legend()
plt.show()
import matplotlib.pyplot as plt

plt.plot(val_history_alex, label='Val Loss')
plt.plot(train_history_alex, label='Train Loss')
plt.legend()
plt.show()
VGGNet
train_history_vgg, val_history_vgg, train_acc_history_vgg, val_acc_history_vgg = train(
    vggnet, trainloader16, valloader16, device, num_epochs=15, lr=0.01, step_size=5, gamma=0.1)
100%|██████████| 563/563 [01:25<00:00, 6.59it/s]
[Epoch 1/15] Training loss: 0.8552, Validation loss: 0.6796
Training accuracy: 51.64%, Validation accuracy: 55.50%
100%|██████████| 563/563 [01:25<00:00, 6.59it/s]
[Epoch 2/15] Training loss: 0.6925, Validation loss: 0.6621
Training accuracy: 52.53%, Validation accuracy: 58.20%
100%|██████████| 563/563 [01:25<00:00, 6.59it/s]
[Epoch 3/15] Training loss: 0.6876, Validation loss: 0.6586
Training accuracy: 53.17%, Validation accuracy: 57.40%
100%|██████████| 563/563 [01:25<00:00, 6.59it/s]
[Epoch 4/15] Training loss: 0.6864, Validation loss: 0.6645
Training accuracy: 54.24%, Validation accuracy: 58.60%
100%|██████████| 563/563 [01:25<00:00, 6.59it/s]
[Epoch 5/15] Training loss: 0.6674, Validation loss: 0.5817
Training accuracy: 59.92%, Validation accuracy: 71.00%
100%|██████████| 563/563 [01:25<00:00, 6.60it/s]
[Epoch 6/15] Training loss: 0.6116, Validation loss: 0.5247
Training accuracy: 66.94%, Validation accuracy: 73.60%
100%|██████████| 563/563 [01:25<00:00, 6.59it/s]
[Epoch 7/15] Training loss: 0.5697, Validation loss: 0.5037
Training accuracy: 70.67%, Validation accuracy: 74.80%
100%|██████████| 563/563 [01:25<00:00, 6.58it/s]
[Epoch 8/15] Training loss: 0.5309, Validation loss: 0.5015
Training accuracy: 73.68%, Validation accuracy: 74.70%
100%|██████████| 563/563 [01:25<00:00, 6.60it/s]
[Epoch 9/15] Training loss: 0.5067, Validation loss: 0.5301
Training accuracy: 76.12%, Validation accuracy: 72.10%
100%|██████████| 563/563 [01:25<00:00, 6.60it/s]
[Epoch 10/15] Training loss: 0.4719, Validation loss: 0.4052
Training accuracy: 77.84%, Validation accuracy: 80.70%
100%|██████████| 563/563 [01:25<00:00, 6.60it/s]
[Epoch 11/15] Training loss: 0.4313, Validation loss: 0.3714
Training accuracy: 80.66%, Validation accuracy: 84.50%
100%|██████████| 563/563 [01:25<00:00, 6.60it/s]
[Epoch 12/15] Training loss: 0.4104, Validation loss: 0.3542
Training accuracy: 82.18%, Validation accuracy: 85.00%
100%|██████████| 563/563 [01:25<00:00, 6.60it/s]
[Epoch 13/15] Training loss: 0.4031, Validation loss: 0.3295
Training accuracy: 82.29%, Validation accuracy: 86.40%
100%|██████████| 563/563 [01:25<00:00, 6.60it/s]
[Epoch 14/15] Training loss: 0.3959, Validation loss: 0.3332
Training accuracy: 82.51%, Validation accuracy: 85.80%
100%|██████████| 563/563 [01:25<00:00, 6.60it/s]
[Epoch 15/15] Training loss: 0.3846, Validation loss: 0.3103
Training accuracy: 83.11%, Validation accuracy: 87.20%
evaluate(vggnet, testloader16, device)
accuracy = 84.65
import matplotlib.pyplot as plt

plt.plot(val_acc_history_vgg, label='Val Acc')
plt.plot(train_acc_history_vgg, label='Train Acc')
plt.legend()
plt.show()
import matplotlib.pyplot as plt

plt.plot(val_history_vgg, label='Val Loss')
plt.plot(train_history_vgg, label='Train Loss')
plt.legend()
plt.show()
ResNet50
train_history_resnetB16, val_history_resnetB16, train_acc_history_resnetB16, val_acc_history_resnetB16 = train(
    resnet, trainloader16, valloader16, device, num_epochs=15, lr=0.01, step_size=5, gamma=0.1)
evaluate(resnet, testloader16, device)
accuracy = 89.17
import matplotlib.pyplot as plt

plt.plot(val_acc_history_resnetB16, label='Val Acc')
plt.plot(train_acc_history_resnetB16, label='Train Acc')
plt.legend()
plt.show()
import matplotlib.pyplot as plt

plt.plot(val_history_resnetB16, label='Val Loss')
plt.plot(train_history_resnetB16, label='Train Loss')
plt.legend()
plt.show()
Fine Tuning
Testing ResNet50 with a batch size of 8
resnetB8 = build_network()
train_history_resnetB8, val_history_resnetB8, train_acc_history_resnetB8, val_acc_history_resnetB8 = train(
    resnetB8, trainloader8, valloader8, device, num_epochs=15, lr=0.01, step_size=5, gamma=0.1)
evaluate(resnetB8, testloader16, device)
accuracy = 91.02
import matplotlib.pyplot as plt

plt.plot(val_acc_history_resnetB8, label='Val Acc')
plt.plot(train_acc_history_resnetB8, label='Train Acc')
plt.legend()
plt.show()
import matplotlib.pyplot as plt

plt.plot(val_history_resnetB8, label='Val Loss')
plt.plot(train_history_resnetB8, label='Train Loss')
plt.legend()
plt.show()
Hyperparameter Tuning
Experiment 1:
- Lower the learning rate and increase the number of epochs
- Decrease the step size and increase gamma
The goal is to find a better local minimum.
fine_tune1 = build_network()
train_history_resnetf1, val_history_resnetf1, train_acc_history_resnetf1, val_acc_history_resnetf1 = train(fine_tune1, trainloader16, valloader16, device, num_epochs=20, lr=0.005, step_size=4, gamma=0.2)
evaluate(fine_tune1, testloader16, device)
accuracy = 91.27
import matplotlib.pyplot as plt

plt.plot(val_acc_history_resnetf1, label='Val Acc')
plt.plot(train_acc_history_resnetf1, label='Train Acc')
plt.legend()
plt.show()
import matplotlib.pyplot as plt

plt.plot(val_history_resnetf1, label='Val Loss')
plt.plot(train_history_resnetf1, label='Train Loss')
plt.legend()
plt.show()
Experiment 2:
- Increase the learning rate and the number of epochs
- Increase the step size and increase gamma
The goal is to find another local minimum.
fine_tune2 = build_network()
train_history_resnetf2, val_history_resnetf2, train_acc_history_resnetf2, val_acc_history_resnetf2 = train(fine_tune2, trainloader16, valloader16, device, num_epochs=20, lr=0.015, step_size=6, gamma=0.2)
evaluate(fine_tune2, testloader16, device)
accuracy = 92.03
import matplotlib.pyplot as plt

plt.plot(val_acc_history_resnetf2, label='Val Acc')
plt.plot(train_acc_history_resnetf2, label='Train Acc')
plt.legend()
plt.show()
import matplotlib.pyplot as plt

plt.plot(val_history_resnetf2, label='Val Loss')
plt.plot(train_history_resnetf2, label='Train Loss')
plt.legend()
plt.show()
Experiment 3:
- Increase the learning rate and keep the number of epochs the same
- Keep the step size and increase gamma
fine_tune3 = build_network()
train_history_resnetf3, val_history_resnetf3, train_acc_history_resnetf3, val_acc_history_resnetf3 = train(fine_tune3, trainloader16, valloader16, device, num_epochs=15, lr=0.02, step_size=5, gamma=0.15)
evaluate(fine_tune3, testloader16, device)
accuracy = 90.77
import matplotlib.pyplot as plt

plt.plot(val_acc_history_resnetf3, label='Val Acc')
plt.plot(train_acc_history_resnetf3, label='Train Acc')
plt.legend()
plt.show()
import matplotlib.pyplot as plt

plt.plot(val_history_resnetf3, label='Val Loss')
plt.plot(train_history_resnetf3, label='Train Loss')
plt.legend()
plt.show()
Experiment 4:
- Increase the learning rate and decrease the number of epochs
- Keep the step size and increase gamma
fine_tune4 = build_network()
train_history_resnetf4, val_history_resnetf4, train_acc_history_resnetf4, val_acc_history_resnetf4 = train(fine_tune4, trainloader16, valloader16, device, num_epochs=10, lr=0.02, step_size=5, gamma=0.15)
100%|██████████| 563/563 [00:55<00:00, 10.22it/s]
[Epoch 1/10] Training loss: 0.6709, Validation loss: 0.6133
Training accuracy: 58.02%, Validation accuracy: 70.20%
100%|██████████| 563/563 [00:55<00:00, 10.22it/s]
[Epoch 2/10] Training loss: 0.5823, Validation loss: 0.4897
Training accuracy: 69.93%, Validation accuracy: 79.30%
100%|██████████| 563/563 [00:55<00:00, 10.23it/s]
[Epoch 3/10] Training loss: 0.5059, Validation loss: 0.4124
Training accuracy: 75.59%, Validation accuracy: 81.80%
100%|██████████| 563/563 [00:55<00:00, 10.22it/s]
[Epoch 4/10] Training loss: 0.4771, Validation loss: 0.5075
Training accuracy: 78.36%, Validation accuracy: 78.80%
100%|██████████| 563/563 [00:55<00:00, 10.23it/s]
[Epoch 5/10] Training loss: 0.4358, Validation loss: 0.3774
Training accuracy: 80.57%, Validation accuracy: 81.40%
100%|██████████| 563/563 [00:55<00:00, 10.21it/s]
[Epoch 6/10] Training loss: 0.3554, Validation loss: 0.2670
Training accuracy: 84.21%, Validation accuracy: 88.80%
100%|██████████| 563/563 [00:55<00:00, 10.22it/s]
[Epoch 7/10] Training loss: 0.3356, Validation loss: 0.2543
Training accuracy: 85.87%, Validation accuracy: 89.70%
100%|██████████| 563/563 [00:55<00:00, 10.21it/s]
[Epoch 8/10] Training loss: 0.3233, Validation loss: 0.2730
Training accuracy: 86.44%, Validation accuracy: 88.60%
100%|██████████| 563/563 [00:55<00:00, 10.19it/s]
[Epoch 9/10] Training loss: 0.3141, Validation loss: 0.2410
Training accuracy: 86.92%, Validation accuracy: 90.40%
100%|██████████| 563/563 [00:55<00:00, 10.22it/s]
[Epoch 10/10] Training loss: 0.3154, Validation loss: 0.2238
Training accuracy: 86.93%, Validation accuracy: 90.50%
evaluate(fine_tune4, testloader16, device)
accuracy = 89.87
import matplotlib.pyplot as plt

plt.plot(val_acc_history_resnetf4, label='Val Acc')
plt.plot(train_acc_history_resnetf4, label='Train Acc')
plt.legend()
plt.show()
import matplotlib.pyplot as plt

plt.plot(val_history_resnetf4, label='Val Loss')
plt.plot(train_history_resnetf4, label='Train Loss')
plt.legend()
plt.show()
Evaluating the Final Model
final_model = fine_tune2
roc_auc, fpr, tpr, pr_auc, recall, precision = f1(final_model, testloader16, device)
Accuracy: 0.9202607823470411
confusion_matrix:  [[932  51]
 [108 903]]
precision:  0.9465408805031447
recall:  0.8931750741839762
f1 score:  0.9190839694656489
#######################
# Kuan Wei Yeow
#######################
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange', lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic')
plt.legend(loc="lower right")
plt.show()

plt.figure()
plt.plot(recall, precision, color='blue', lw=2, label='Precision-Recall curve (area = %0.2f)' % pr_auc)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.05])
plt.title('Precision-Recall Curve')
plt.legend(loc="lower left")
plt.show()
Testing the Model
classes = {0: 'FAKE', 1: 'REAL'}

final_model.eval()
correct_predictions = 0
incorrect_predictions = 0

for index in range(len(testvalset2)):
    # Get the image and label (testvalset2 was copied before any transform was set,
    # so this is still a PIL image)
    image, label = testvalset2[index]
    print('Image {}:'.format(index + 1))
    display(image)

    # Apply the test transform and move the image to the appropriate device (e.g., GPU)
    image = test_transform(image)
    image = image.unsqueeze(0)  # add batch dimension
    image = image.to(device)

    # Perform prediction
    with torch.no_grad():
        output = final_model(image)
    output = output.item()
    print('Score: ', output)

    predicted = 1 if output >= 0.5 else 0

    if testvalset2.targets[index] == predicted:
        correct_predictions += 1
    else:
        incorrect_predictions += 1

    print('True Label: ', classes[testvalset2.targets[index]])
    print('Predicted Label: ', classes[predicted])

# Print the number of correct and incorrect predictions
print()
print("Correct predictions:", correct_predictions)
print("Incorrect predictions:", incorrect_predictions)
Image 1:
Score: 0.22087490558624268
True Label: REAL
Predicted Label: FAKE
Image 2:
Score: 0.9303815960884094
True Label: REAL
Predicted Label: REAL
Image 3:
Score: 0.9991229176521301
True Label: REAL
Predicted Label: REAL
Image 4:
Score: 0.0186141524463892
True Label: FAKE
Predicted Label: FAKE
Image 5:
Score: 0.7645696401596069
True Label: REAL
Predicted Label: REAL
Image 6:
Score: 0.9880017638206482
True Label: REAL
Predicted Label: REAL

Correct predictions: 5
Incorrect predictions: 1
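If you want to try the final model on an arbitrary image file rather than the held-out Kaggle samples, a minimal inference helper might look like the sketch below; my_image.jpg is a placeholder path, and predict_image is a hypothetical helper, not part of the notebook:

def predict_image(path, model, transform, device):
    # Load an image file, apply the evaluation transform, and return the predicted label and score.
    image = Image.open(path).convert('RGB')
    x = transform(image).unsqueeze(0).to(device)
    model.eval()
    with torch.no_grad():
        score = model(x).item()
    label = 'REAL' if score >= 0.5 else 'FAKE'
    return label, score

# Hypothetical usage:
# label, score = predict_image('my_image.jpg', final_model, test_transform, device)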
Conclusion
Great job! We’ve reached the conclusion of our exploration into the fascinating world of real versus AI-generated synthetic images. Throughout this journey, we’ve witnessed how deep learning has the power to blur the boundaries between reality and artificial creation.
As you continue to delve into your own deep learning projects, always remember that this particular endeavor is merely the starting point.
Whether you’re managing your first deep learning project or a master’s student chasing groundbreaking discoveries, this field is brimming with opportunities.
Keep pushing the boundaries, maintain your curiosity, and who knows? Your next project might just redefine what’s achievable in the realm of AI. Happy coding!