![Best Deep Learning Tutorial for Beginners 2024](https://darekdari.com/wp-content/uploads/2024/05/blog-image-1-1024x576.png)
Introduction
Welcome to the Deep Learning Tutorial for Beginners!
I will guide you through each step, breaking the material into small pieces so it is easy to understand and much simpler to grasp.
But don’t forget to check:
- Lesson 1: Best Deep Learning Tutorial for Beginners 2024
- Lesson 2: Best Pytorch Tutorial for Deep Learning
- Lesson 3: Best Transformers and BERT Tutorial with Deep Learning and NLP
- Lesson 4: Best Deep Reinforcement Learning Course
- Lesson 5: Introduction to Deep Learning with a Simple LSTM
By the end of this tutorial, you’ll have a solid understanding of deep learning and be ready to dive deeper into the subject.
What is Deep Learning?
Deep learning is a powerful technique within the field of machine learning. It excels in situations where there is a large amount of data, as traditional machine learning techniques may struggle to perform well in such cases. Deep learning offers improved performance and accuracy in these scenarios.
There is no hard threshold for what counts as a “big” amount of data, but as a rough rule of thumb, having around 1 million samples can be considered a significant amount.
Deep learning finds applications in various fields such as speech recognition, image classification, natural language processing (NLP), and recommendation systems.
The main difference between deep learning and machine learning is that deep learning is a subset of machine learning. In machine learning, features are manually provided to the algorithm, whereas in deep learning, the algorithm learns these features directly from the data.
What is the Difference Between Deep Learning and Machine Learning?
Imagine machine learning (ML) as a broad term that encompasses various techniques, while deep learning (DL) is a specific method within that realm.
ML focuses on enabling computers to learn from data and make informed decisions or predictions without explicit programming. Essentially, you provide the computer with data, instruct it to learn from that data, and then it can use that knowledge to make predictions or decisions about new, unseen data.
Here’s a quick cheat sheet to keep it straight:
Feature | Machine Learning | Deep Learning |
---|---|---|
How it learns | Relies on hand-engineered features | Learns features directly from raw data |
Good for | Simpler tasks with clear rules | Complex tasks with lots of data |
Needs from you | More hand-holding and prep work | Less hand-holding, but needs a lot of data to learn |
Let’s Start Learning with Coding
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in

import numpy as np  # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt

# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory

# import warnings
import warnings
# filter warnings
warnings.filterwarnings('ignore')

from subprocess import check_output
print(check_output(["ls", "../input"]).decode("utf8"))
# Any results you write to the current directory are saved as output.
Sign-language-digits-dataset
Overview of the Data Set
For this tutorial, we will be utilizing the “sign language digits data set”. This dataset consists of a total of 2062 images of sign language digits:
- Since we are focusing on digits ranging from 0 to 9, there are 10 unique signs in the dataset.
- To keep things simple at the beginning of the tutorial, we will only be using sign 0 and sign 1.
- In the dataset, sign zero can be found between indexes 204 and 408, with a total of 205 instances.
- Similarly, sign one is located between indexes 822 and 1027, with a total of 206 instances. Therefore, we will be using 205 samples from each class (label).
Please note that having only 205 samples is quite limited for deep learning purposes. However, since this is a tutorial, it won’t have a significant impact.
Now, let’s prepare our X and Y arrays. X will represent the image array containing zero and one signs, while Y will represent the label array containing 0 and 1.
# load data set
x_l = np.load('../input/Sign-language-digits-dataset/X.npy')
Y_l = np.load('../input/Sign-language-digits-dataset/Y.npy')
img_size = 64
plt.subplot(1, 2, 1)
plt.imshow(x_l[260].reshape(img_size, img_size))
plt.axis('off')
plt.subplot(1, 2, 2)
plt.imshow(x_l[900].reshape(img_size, img_size))
plt.axis('off')
(-0.5, 63.5, 63.5, -0.5)
![](https://www.kaggleusercontent.com/kf/14675591/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..q6auFBLqJFZgVlweTMA4Vw.nMan1Z4CTXdF-tmFqdKDtBcdeilB3Ld7O3IAlpC92zFnyQUmn04BP8RXTyQYtsLvCcmTYeGDfKlqhjUQsAjhLRWyp_NN1Ppd-FdlB7ZwZvG-TaLgjrWuiTvPvQIlXGz2Y80yEcseDCDzHuUG7Wi7ko3Ow619l18vOBmKz92LunpNB8hdsxS6JK6xOuaX4-W39QCzqH82zxE-Yt-rz5XSTo3ub43C2WdW5J00Mt4e1gEB2wLXHqGVCkQ8_0dH3-u4o-n9WP62LPQGsHzriieySZg3Xyn58Oo2P8-3HrNJgs7avvHSvpKFbXSpa4vnhBjhsAR4c2GiQz-ZtmyL0689NZ0TlJO3xUq-ICc8RWFnG4Ldu2pq0wCJ0I3SIXvEt6e5bVc7NZofGx34nnrQ1DXOJ7Fcjth2fClJEvzBRaygPbnrbvDAS3jjP_rHRc6vZtKUIa3UbGLI-DNjcVLIzxBXJWKVYASAQg3PHTdQqaoNGN7uOBefHPr8mQ3heHIhy2GCCjRnSBl8N8tG6OVfW-Xikz6z8pj5uw0ae2JfUeRI9DMwWpfqeDxKawXVXl1q5IeYBfO1dQn3d7p4NDHwasubTAbpebs4oP6WsxEos1lIeUwIcHWg55qn4Cy3qcn2QTjeecJ1Zts0G76hw-Pho8OKsrfJI5JqOVqAmItf1BSoEOY.VDMS-qtDJ64CYA21ES7Xsw/__results___files/__results___4_1.png)
To create the image array, I concatenate the zero-sign and one-sign arrays.
Then I create the label array: 0 for zero-sign images and 1 for one-sign images.
# Join a sequence of arrays along the row axis.
X = np.concatenate((x_l[204:409], x_l[822:1027]), axis=0)  # in X, indexes 0 to 204 are zero signs and 205 to 409 are one signs
z = np.zeros(205)
o = np.ones(205)
Y = np.concatenate((z, o), axis=0).reshape(X.shape[0], 1)
print("X shape: ", X.shape)
print("Y shape: ", Y.shape)
X shape:  (410, 64, 64)
Y shape:  (410, 1)
The shape of X is (410, 64, 64):
- 410 indicates we have 410 images (depicting zero and one signs).
- 64 means each image has a size of 64×64 pixels.
The shape of Y is (410, 1):
- 410 indicates we have 410 labels (either 0 or 1).
Now, let’s split X and Y into training and testing sets.
- test_size specifies the fraction of the dataset to be used for testing. Here, 15% will be used for testing and 85% for training.
- random_state ensures reproducibility. Using the same seed while randomizing means that every time we call train_test_split with the same random_state, we get the same train-test split.
# Then lets create x_train, y_train, x_test, y_test arrays
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.15, random_state=42)
number_of_train = X_train.shape[0]
number_of_test = X_test.shape[0]
Our input array X is 3-dimensional (samples, height, width), so we need to flatten it to 2D in order to use it as input for our first deep learning model. Our label array Y is already 2D, so we leave it as it is.
Let's flatten the X array (the images array):
X_train_flatten = X_train.reshape(number_of_train, X_train.shape[1]*X_train.shape[2])
X_test_flatten = X_test.reshape(number_of_test, X_test.shape[1]*X_test.shape[2])
print("X train flatten", X_train_flatten.shape)
print("X test flatten", X_test_flatten.shape)
X train flatten (348, 4096)
X test flatten (62, 4096)
We have a total of 348 images in the image train array, with each image containing 4096 pixels.
Additionally, there are 62 images in the image test array, also with 4096 pixels per image.
Now, let’s proceed with taking the transpose.
You might wonder why I’m doing this, but there’s no specific technical reason behind it. I simply wrote the code (which you’ll see in the upcoming parts) based on this approach.
x_train = X_train_flatten.T
x_test = X_test_flatten.T
y_train = Y_train.T
y_test = Y_test.T
print("x train: ", x_train.shape)
print("x test: ", x_test.shape)
print("y train: ", y_train.shape)
print("y test: ", y_test.shape)
x train:  (4096, 348)
x test:  (4096, 62)
y train:  (1, 348)
y test:  (1, 62)
Here’s what we’ve done so far:
- Selected our labels (classes) as sign zero and sign one.
- Created and flattened the training and testing sets.
Our final inputs (images) have shape (4096, 348) for training and (4096, 62) for testing, and our final outputs (labels or classes) have shape (1, 348) and (1, 62).
Logistic Regression
When the topic of binary classification (0 and 1 outputs) comes up, the first thing that usually comes to mind is logistic regression.
But what about logistic regression in the context of deep learning tutorials?
Well, it turns out that logistic regression is actually a very basic form of a neural network.
By the way, neural networks and deep learning are essentially the same thing. When we delve into artificial neural networks, I’ll explain in detail what we mean by “deep”.
To grasp the concept of logistic regression (which is a simple form of deep learning), let’s start by learning about computation graphs.
Computation Graph
Computation graphs provide a visual representation of mathematical expressions.
For instance, consider the expression c = √(a^2 + b^2).
The computation graph for this expression visually represents the math involved.
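To make this concrete, here is a minimal sketch that evaluates the expression node by node, the way a computation graph would. The example values a = 3 and b = 4 are chosen purely for illustration.

import numpy as np

# A computation graph just makes the intermediate nodes explicit.
# For c = sqrt(a^2 + b^2) with example values a = 3, b = 4:
a, b = 3.0, 4.0        # leaf nodes (inputs)
a_sq = a ** 2          # node 1: a^2 -> 9.0
b_sq = b ** 2          # node 2: b^2 -> 16.0
s = a_sq + b_sq        # node 3: a^2 + b^2 -> 25.0
c = np.sqrt(s)         # node 4 (output): sqrt(25.0) -> 5.0
print(c)               # 5.0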
Now, let’s explore the computation graph for logistic regression:
- The parameters include weights and bias.
- Weights are the coefficients for each pixel, while bias is the intercept.
- The formula z = (w^T)x + b can be written out as z = b + p1*w1 + p2*w2 + … + p4096*w4096, where p_i is the i-th pixel and w_i is its weight.
- The output y_head is obtained by applying the sigmoid function to z.
The sigmoid function squashes z into a value between zero and one, which we can interpret as a probability. You can see this function in the computation graph.
Why do we use the sigmoid function?
It provides probabilistic results and is differentiable, making it suitable for gradient descent algorithms.
For example, if z = 4, applying the sigmoid function gives y_head of approximately 0.98, which indicates roughly a 98% probability that the classification result is 1.
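As a quick check of this claim, the snippet below evaluates the sigmoid at a few illustrative values of z (these inputs are chosen just for demonstration).

import numpy as np

def sigmoid(z):
    # maps any real z to a value in (0, 1)
    return 1 / (1 + np.exp(-z))

print(sigmoid(-4))  # ~0.018 -> strong evidence for class 0
print(sigmoid(0))   # 0.5    -> undecided
print(sigmoid(4))   # ~0.982 -> strong evidence for class 1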
Let’s delve deeper into each component of the computation graph from the beginning.
Initializing parameters
As you may already know, the input for our images consists of 4096 pixels each (in the x_train dataset). Each pixel has its own set of weights. The first step involves multiplying each pixel with its corresponding weight.
Now, you might be wondering what the initial values of these weights are. Let's see:
- There are various techniques that can be explained in the context of artificial neural networks, but for now, the initial weights are set to 0.01.
- Alright, so the weights are 0.01, but what is the shape of the weight array? As you can see from the computation graph of logistic regression, it is (4096,1).
- Additionally, the initial bias is set to 0.
Let’s move on and write some code. To be able to use it in upcoming topics like artificial neural networks (ANN), I will create a definition or method.
# short description and example of definition (def)
def dummy(parameter):
    dummy_parameter = parameter + 5
    return dummy_parameter
result = dummy(3)  # result = 8

# lets initialize parameters
# So what we need is dimension 4096, that is the number of pixels, as a parameter for our initialize method (def)
def initialize_weights_and_bias(dimension):
    w = np.full((dimension, 1), 0.01)
    b = 0.0
    return w, b
#w,b = initialize_weights_and_bias(4096)
Forward Propagation
The entire process from pixels to cost is called forward propagation:
- Calculate z using the formula z = (w^T)x + b. In this equation:
- x is the pixel array,
- w is the weight vector,
- b is the bias,
- T denotes the transpose operation.
The remaining steps are:
- Apply the sigmoid function to z to get y_head (the probability). If you get confused, refer to the computation graph; the sigmoid equation is shown there as well.
- Calculate the loss (error) for each sample.
- The cost is the sum of all the losses (errors), scaled by the number of samples.
Let's start by calculating z, and then define the sigmoid function, which takes z as its input parameter and returns y_head (the probability).
# calculation of z
# z = np.dot(w.T, x_train) + b
def sigmoid(z):
    y_head = 1/(1 + np.exp(-z))
    return y_head
y_head = sigmoid(0)
y_head
0.5
# Forward propagation steps:
# find z = w.T*x+b
# y_head = sigmoid(z)
# loss(error) = loss(y, y_head)
# cost = sum(loss)
def forward_propagation(w, b, x_train, y_train):
    z = np.dot(w.T, x_train) + b
    y_head = sigmoid(z)  # probabilistic 0-1
    loss = -y_train*np.log(y_head) - (1-y_train)*np.log(1-y_head)
    cost = (np.sum(loss))/x_train.shape[1]  # x_train.shape[1] is for scaling
    return cost
Once we apply the sigmoid method and calculate y_head, we can see what the loss (error) function is for.
For instance, we take an image, multiply its pixels by their weights, add the bias term to find z, and then apply the sigmoid method to find y_head.
If y_head turns out to be 0.9, which is greater than 0.5, we predict that the image is a one sign. Everything seems to be going well. But how can we be sure whether our prediction is accurate or not? The answer lies in the loss (error) function.
The mathematical representation of the log loss (error) function indicates that making an incorrect prediction results in a significant increase in loss (error).
For example, if the actual label is 1 (y = 1) and our prediction is y_head = 1, the loss equation yields 0. This means we made the correct prediction, resulting in a loss of 0.
However, if we were to make the completely wrong prediction, y_head = 0, the loss (error) would be infinite.
The cost function is then the sum of the losses: each input image contributes its own loss, and the cost is the total over all images (in our code it is also divided by the number of samples).
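To see how the log loss rewards confident correct predictions and punishes confident wrong ones, here is a small worked example; the probabilities below are made up purely for illustration.

import numpy as np

def log_loss(y, y_head):
    # binary cross-entropy (log loss) for a single example
    return -(y * np.log(y_head) + (1 - y) * np.log(1 - y_head))

# true label is 1 (e.g. the image really is a "one" sign)
print(log_loss(1, 0.99))  # ~0.01 -> confident and correct: tiny loss
print(log_loss(1, 0.50))  # ~0.69 -> unsure: moderate loss
print(log_loss(1, 0.01))  # ~4.61 -> confident but wrong: large loss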
Optimization Algorithm with Gradient Descent
Measuring the errors our model makes gives us the cost; to make accurate predictions, we now need to minimize that cost.
Step | Description |
---|---|
1 | Initialize weights and bias |
2 | Update weights and bias to minimize cost via gradient descent |
In practice:
- Start with w = 5, no bias. Forward propagation yields a cost of 1.5.
- Adjust weight using slope1, yielding w = 2 and cost = 0.4.
- Further refine weight using slope2, resulting in w = 1.3 and cost = 0.3.
- No further weight adjustment needed (slope3 = 0.01).
- Derivatives give us the slope of the cost, which tells us in which direction to update the weight to reduce the cost (see the small sketch after this list).
- Learning rate (α) balances speed and precision, crucial for effective updates.
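The same loop can be sketched in a few lines of Python. The cost function and starting point below are invented purely for illustration (a simple parabola whose minimum sits near w = 1.3), not the actual image cost, but the update rule w := w − α · slope is exactly the one we will implement next.

import numpy as np

# A toy 1-D cost J(w) = (w - 1.3)**2, with slope dJ/dw = 2*(w - 1.3)
def cost(w):
    return (w - 1.3) ** 2

def slope(w):
    return 2 * (w - 1.3)

w = 5.0               # arbitrary starting weight, no bias
learning_rate = 0.4   # alpha: too small -> slow, too large -> overshoot
for step in range(6):
    w = w - learning_rate * slope(w)   # move against the slope
    print(f"step {step}: w = {w:.3f}, cost = {cost(w):.4f}")
# w drifts toward 1.3 and the cost toward 0, i.e. we keep updating
# until the slope is (almost) zero, as described above.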
Understanding derivatives helps in optimizing our model. If you want a deeper understanding, you can either work through the derivative of the log loss function yourself or look up its derivation online; I prefer the latter, since the visual explanations make it more intuitive.
Mathematically, the derivative of the cost function J with respect to the weights w is ∂J/∂w = (1/m) · x · (y_head − y)^T, and with respect to the bias b it is ∂J/∂b = (1/m) · Σ(y_head − y), where m is the number of samples.
# In backward propagation we will use the y_head found in forward propagation
# Therefore, instead of writing a separate backward propagation method, lets combine forward and backward propagation
def forward_backward_propagation(w, b, x_train, y_train):
    # forward propagation
    z = np.dot(w.T, x_train) + b
    y_head = sigmoid(z)
    loss = -y_train*np.log(y_head) - (1-y_train)*np.log(1-y_head)
    cost = (np.sum(loss))/x_train.shape[1]  # x_train.shape[1] is for scaling
    # backward propagation
    derivative_weight = (np.dot(x_train, ((y_head - y_train).T)))/x_train.shape[1]  # x_train.shape[1] is for scaling
    derivative_bias = np.sum(y_head - y_train)/x_train.shape[1]  # x_train.shape[1] is for scaling
    gradients = {"derivative_weight": derivative_weight, "derivative_bias": derivative_bias}
    return cost, gradients
Up to this point we have learned how to:
- Initialize parameters (implemented)
- Find the cost with forward propagation and the cost function (implemented)
- Update (learn) the parameters (weight and bias). Now let's implement it.
# Updating (learning) parameters
def update(w, b, x_train, y_train, learning_rate, number_of_iterarion):
    cost_list = []
    cost_list2 = []
    index = []
    # updating (learning) parameters number_of_iterarion times
    for i in range(number_of_iterarion):
        # make forward and backward propagation and find cost and gradients
        cost, gradients = forward_backward_propagation(w, b, x_train, y_train)
        cost_list.append(cost)
        # lets update
        w = w - learning_rate * gradients["derivative_weight"]
        b = b - learning_rate * gradients["derivative_bias"]
        if i % 10 == 0:
            cost_list2.append(cost)
            index.append(i)
            print("Cost after iteration %i: %f" % (i, cost))
    # we update (learn) parameters weights and bias
    parameters = {"weight": w, "bias": b}
    plt.plot(index, cost_list2)
    plt.xticks(index, rotation='vertical')
    plt.xlabel("Number of Iterarion")
    plt.ylabel("Cost")
    plt.show()
    return parameters, gradients, cost_list
#parameters, gradients, cost_list = update(w, b, x_train, y_train, learning_rate = 0.009, number_of_iterarion = 200)
Wow, I’m feeling exhausted 🙂 So far, we’ve been learning about our parameters. This means that we’re fitting the data.
Now, when it comes to making predictions, we rely on these parameters. So, let’s get to predicting!
During the prediction step, we take x_test as input and run forward propagation with the learned weights and bias.
# prediction
def predict(w, b, x_test):
    # x_test is an input for forward propagation
    z = sigmoid(np.dot(w.T, x_test) + b)
    Y_prediction = np.zeros((1, x_test.shape[1]))
    # if z is bigger than 0.5, our prediction is sign one (y_head=1),
    # if z is smaller than 0.5, our prediction is sign zero (y_head=0),
    for i in range(z.shape[1]):
        if z[0,i] <= 0.5:
            Y_prediction[0,i] = 0
        else:
            Y_prediction[0,i] = 1
    return Y_prediction
# predict(parameters["weight"],parameters["bias"],x_test)
Now let's put it all together:
def logistic_regression(x_train, y_train, x_test, y_test, learning_rate, num_iterations):
    # initialize
    dimension = x_train.shape[0]  # that is 4096
    w, b = initialize_weights_and_bias(dimension)
    # do not change learning rate
    parameters, gradients, cost_list = update(w, b, x_train, y_train, learning_rate, num_iterations)
    y_prediction_test = predict(parameters["weight"], parameters["bias"], x_test)
    y_prediction_train = predict(parameters["weight"], parameters["bias"], x_train)
    # Print train/test Errors
    print("train accuracy: {} %".format(100 - np.mean(np.abs(y_prediction_train - y_train)) * 100))
    print("test accuracy: {} %".format(100 - np.mean(np.abs(y_prediction_test - y_test)) * 100))

logistic_regression(x_train, y_train, x_test, y_test, learning_rate = 0.01, num_iterations = 150)
Cost after iteration 0: 14.014222
Cost after iteration 10: 2.544689
Cost after iteration 20: 2.577950
Cost after iteration 30: 2.397999
Cost after iteration 40: 2.185019
Cost after iteration 50: 1.968348
Cost after iteration 60: 1.754195
Cost after iteration 70: 1.535079
Cost after iteration 80: 1.297567
Cost after iteration 90: 1.031919
Cost after iteration 100: 0.737019
Cost after iteration 110: 0.441355
Cost after iteration 120: 0.252278
Cost after iteration 130: 0.205168
Cost after iteration 140: 0.196168
![](https://www.kaggleusercontent.com/kf/14675591/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..q6auFBLqJFZgVlweTMA4Vw.nMan1Z4CTXdF-tmFqdKDtBcdeilB3Ld7O3IAlpC92zFnyQUmn04BP8RXTyQYtsLvCcmTYeGDfKlqhjUQsAjhLRWyp_NN1Ppd-FdlB7ZwZvG-TaLgjrWuiTvPvQIlXGz2Y80yEcseDCDzHuUG7Wi7ko3Ow619l18vOBmKz92LunpNB8hdsxS6JK6xOuaX4-W39QCzqH82zxE-Yt-rz5XSTo3ub43C2WdW5J00Mt4e1gEB2wLXHqGVCkQ8_0dH3-u4o-n9WP62LPQGsHzriieySZg3Xyn58Oo2P8-3HrNJgs7avvHSvpKFbXSpa4vnhBjhsAR4c2GiQz-ZtmyL0689NZ0TlJO3xUq-ICc8RWFnG4Ldu2pq0wCJ0I3SIXvEt6e5bVc7NZofGx34nnrQ1DXOJ7Fcjth2fClJEvzBRaygPbnrbvDAS3jjP_rHRc6vZtKUIa3UbGLI-DNjcVLIzxBXJWKVYASAQg3PHTdQqaoNGN7uOBefHPr8mQ3heHIhy2GCCjRnSBl8N8tG6OVfW-Xikz6z8pj5uw0ae2JfUeRI9DMwWpfqeDxKawXVXl1q5IeYBfO1dQn3d7p4NDHwasubTAbpebs4oP6WsxEos1lIeUwIcHWg55qn4Cy3qcn2QTjeecJ1Zts0G76hw-Pho8OKsrfJI5JqOVqAmItf1BSoEOY.VDMS-qtDJ64CYA21ES7Xsw/__results___files/__results___32_1.png)
train accuracy: 92.816091954023 %
test accuracy: 93.54838709677419 %
Now that we have grasped the concept, we can utilize the sklearn library, which simplifies the implementation of logistic regression by eliminating the need to manually execute each step.
Logistic Regression with Sklearn
The sklearn library provides a convenient logistic regression method for easy implementation of logistic regression. If you want to learn more about the parameters of logistic regression in sklearn, you can refer to this link: http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
It’s important to note that the accuracies may differ from ours, because sklearn’s logistic regression uses additional features, such as different optimization settings and regularization, that our implementation does not.
Now, let’s summarize our findings on logistic regression and move on to discussing artificial neural networks.
from sklearn import linear_model
logreg = linear_model.LogisticRegression(random_state = 42, max_iter = 150)
print("test accuracy: {} ".format(logreg.fit(x_train.T, y_train.T).score(x_test.T, y_test.T)))
print("train accuracy: {} ".format(logreg.fit(x_train.T, y_train.T).score(x_train.T, y_train.T)))
test accuracy: 0.967741935483871
train accuracy: 1.0
Summary and Questions in Mind
What we did in this first part:
- Initialize parameters weight and bias
- Forward propagation
- Loss function
- Cost function
- Backward propagation (gradient descent)
- Prediction with learnt parameters weight and bias
- Logistic regression with sklearn
Feel free to ask me any questions you may have up until now, as we are about to delve into constructing an artificial neural network using logistic regression.
Work For You: This is a great point to pause and put your knowledge into practice. Your task is to develop your own logistic regression method and classify two distinct sign language digits.
Artificial Neural Network (ANN)
What is a neural network?
Neural networks can also be referred to as deep neural networks or deep learning.
Neural networks are essentially an extension of logistic regression, repeated at least twice.
In logistic regression, there are input and output layers, while neural networks have at least one hidden layer between the input and output layers.
What is a deep neural network?
The term “deep” in neural networks refers to the number of hidden layers it contains. The depth of a network is relative, with the number of hidden layers determining its depth. Years ago, networks with two or three hidden layers were considered deep due to hardware limitations.
However, recent advancements have seen networks with hundreds or even thousands of layers. So, when asking about the depth of a neural network, it’s best to simply ask, “How deep?”
What are neural network layers?
As you can observe, there is a hidden layer situated between the input and output layers. This hidden layer consists of 3 nodes.
If you’re wondering why I chose 3 nodes, the answer is simple – there is no specific reason, I just made the choice. The number of nodes is a hyperparameter, similar to the learning rate. Therefore, we will discuss hyperparameters towards the end of the artificial neural network.
The input and output layers remain unchanged. They are the same as in logistic regression.
In the image, you will notice a tanh function that may be unfamiliar to you. It is an activation function, similar to the sigmoid function.
The tanh activation function is preferred over sigmoid for hidden units because the mean of its output is closer to zero, which helps center the data for the next layer.
Additionally, the tanh activation function enhances non-linearity, resulting in better learning for our model.
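A quick, purely illustrative comparison of the two activations on the same inputs shows the difference in output range: sigmoid values live in (0, 1), while tanh values live in (−1, 1) and are centered around zero. The input values below are arbitrary.

import numpy as np

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])   # arbitrary pre-activations
sig = 1 / (1 + np.exp(-z))                  # sigmoid: outputs in (0, 1)
tanh = np.tanh(z)                           # tanh: outputs in (-1, 1)
print("sigmoid:", np.round(sig, 3), "mean:", round(sig.mean(), 3))
print("tanh:   ", np.round(tanh, 3), "mean:", round(tanh.mean(), 3))
# For symmetric inputs like these, the tanh outputs average to 0,
# while the sigmoid outputs average to 0.5.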
As you can see, there are two parts highlighted in purple. Both parts resemble logistic regression. The only difference lies in the activation function, inputs, and outputs.
Let’s take a look at a 2-layer neural network:
- In logistic regression: input => output
- In a 2-layer neural network: input => hidden layer => output. You can think of the hidden layer as the output of part 1 and the input of part 2.
That’s all. We will follow a similar path to logistic regression for the 2-layer neural network.
2-Layer Neural Network
- Size of layers and initializing parameters weights and bias
- Forward propagation
- Loss function and Cost function
- Backward propagation
- Update Parameters
- Prediction with learnt parameters weight and bias
- Create Model
Size of layers and initializing parameters weights and bias
For each sample x^(i) in x_train (which contains 348 samples, so i = 1, …, 348):
- z^[1](i) = W^[1] x^(i) + b^[1]
- a^[1](i) = tanh(z^[1](i))
- z^[2](i) = W^[2] a^[1](i) + b^[2]
- y_head^(i) = a^[2](i) = σ(z^[2](i))
In logistic regression, we set the weights to 0.01 and the bias to 0 initially. However, now we initialize the weights randomly. This is because if we set all the parameters to zero in each neuron of the first hidden layer, they will all perform the same computation.
As a result, even after multiple iterations of gradient descent, each neuron in the layer will be computing the same things as the other neurons. To avoid this, we initialize the weights randomly. Additionally, we make sure that the initial weights are small.
If they are too large, the inputs of the tanh function will also be very large, which in turn causes the gradients to be close to zero. This can significantly slow down the optimization algorithm.
It is still acceptable to have a bias of zero initially:
# initialize parameters and layer sizes
def initialize_parameters_and_layer_sizes_NN(x_train, y_train):
    parameters = {"weight1": np.random.randn(3, x_train.shape[0]) * 0.1,
                  "bias1": np.zeros((3, 1)),
                  "weight2": np.random.randn(y_train.shape[0], 3) * 0.1,
                  "bias2": np.zeros((y_train.shape[0], 1))}
    return parameters
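To see why identical (zero) initial weights would be a problem, here is a small illustrative check: with all-zero weights, every hidden node computes exactly the same activation for every image, so gradient descent could never make them learn different features. The random "images" below are made up just for the demonstration; only the shapes (3 hidden nodes, 4096-pixel inputs) match our setup.

import numpy as np

np.random.seed(0)
x = np.random.randn(4096, 5)             # 5 fake images, 4096 pixels each

W_zero = np.zeros((3, 4096))             # all-zero weights: bad idea
W_rand = np.random.randn(3, 4096) * 0.1  # small random weights: what we use
b = np.zeros((3, 1))                     # zero bias is fine either way

h_zero = np.tanh(np.dot(W_zero, x) + b)
h_rand = np.tanh(np.dot(W_rand, x) + b)

print(h_zero[:, 0])   # [0. 0. 0.] -> all 3 hidden nodes are identical
print(h_rand[:, 0])   # three different values -> symmetry is broken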
Forward propagation
Forward propagation is very similar to logistic regression, with the only distinction being the utilization of the tanh function and performing the entire process twice.
Fortunately, numpy already includes the tanh function, so there’s no need for us to implement it ourselves.
def forward_propagation_NN(x_train, parameters):
    Z1 = np.dot(parameters["weight1"], x_train) + parameters["bias1"]
    A1 = np.tanh(Z1)
    Z2 = np.dot(parameters["weight2"], A1) + parameters["bias2"]
    A2 = sigmoid(Z2)
    cache = {"Z1": Z1,
             "A1": A1,
             "Z2": Z2,
             "A2": A2}
    return A2, cache
Loss function and Cost function
The loss and cost functions are the same as in logistic regression.
# Compute cost
def compute_cost_NN(A2, Y, parameters):
    # note: this uses only the y*log(a) term of the cross-entropy;
    # the full binary cross-entropy would also include (1-y)*log(1-a)
    logprobs = np.multiply(np.log(A2), Y)
    cost = -np.sum(logprobs)/Y.shape[1]
    return cost
Backward propagation
As you may already be aware, backward propagation refers to the concept of taking derivatives. If you’re interested in learning more about it (since it can be a bit confusing to explain without a conversation), I recommend watching a video on YouTube.
Nevertheless, the underlying logic remains the same, so let’s proceed with writing the code.
# Backward Propagation
def backward_propagation_NN(parameters, cache, X, Y):
    dZ2 = cache["A2"] - Y
    dW2 = np.dot(dZ2, cache["A1"].T)/X.shape[1]
    db2 = np.sum(dZ2, axis=1, keepdims=True)/X.shape[1]
    dZ1 = np.dot(parameters["weight2"].T, dZ2)*(1 - np.power(cache["A1"], 2))
    dW1 = np.dot(dZ1, X.T)/X.shape[1]
    db1 = np.sum(dZ1, axis=1, keepdims=True)/X.shape[1]
    grads = {"dweight1": dW1,
             "dbias1": db1,
             "dweight2": dW2,
             "dbias2": db2}
    return grads
Update Parameters
As in logistic regression, updating the parameters means moving each weight and bias against its gradient, scaled by the learning rate.
# update parameters
def update_parameters_NN(parameters, grads, learning_rate = 0.01):
    parameters = {"weight1": parameters["weight1"] - learning_rate*grads["dweight1"],
                  "bias1": parameters["bias1"] - learning_rate*grads["dbias1"],
                  "weight2": parameters["weight2"] - learning_rate*grads["dweight2"],
                  "bias2": parameters["bias2"] - learning_rate*grads["dbias2"]}
    return parameters
Prediction with learnt parameters weight and bias
Let's write a predict method similar to the one we used for logistic regression:
# prediction
def predict_NN(parameters, x_test):
    # x_test is an input for forward propagation
    A2, cache = forward_propagation_NN(x_test, parameters)
    Y_prediction = np.zeros((1, x_test.shape[1]))
    # if A2 is bigger than 0.5, our prediction is sign one (y_head=1),
    # if A2 is smaller than 0.5, our prediction is sign zero (y_head=0),
    for i in range(A2.shape[1]):
        if A2[0,i] <= 0.5:
            Y_prediction[0,i] = 0
        else:
            Y_prediction[0,i] = 1
    return Y_prediction
Create Model
# 2 - Layer neural network
def two_layer_neural_network(x_train, y_train, x_test, y_test, num_iterations):
    cost_list = []
    index_list = []
    # initialize parameters and layer sizes
    parameters = initialize_parameters_and_layer_sizes_NN(x_train, y_train)
    for i in range(0, num_iterations):
        # forward propagation
        A2, cache = forward_propagation_NN(x_train, parameters)
        # compute cost
        cost = compute_cost_NN(A2, y_train, parameters)
        # backward propagation
        grads = backward_propagation_NN(parameters, cache, x_train, y_train)
        # update parameters
        parameters = update_parameters_NN(parameters, grads)
        if i % 100 == 0:
            cost_list.append(cost)
            index_list.append(i)
            print("Cost after iteration %i: %f" % (i, cost))
    plt.plot(index_list, cost_list)
    plt.xticks(index_list, rotation='vertical')
    plt.xlabel("Number of Iterarion")
    plt.ylabel("Cost")
    plt.show()
    # predict
    y_prediction_test = predict_NN(parameters, x_test)
    y_prediction_train = predict_NN(parameters, x_train)
    # Print train/test Errors
    print("train accuracy: {} %".format(100 - np.mean(np.abs(y_prediction_train - y_train)) * 100))
    print("test accuracy: {} %".format(100 - np.mean(np.abs(y_prediction_test - y_test)) * 100))
    return parameters

parameters = two_layer_neural_network(x_train, y_train, x_test, y_test, num_iterations=2500)
Cost after iteration 0: 0.335035
Cost after iteration 100: 0.341398
Cost after iteration 200: 0.342808
Cost after iteration 300: 0.330091
Cost after iteration 400: 0.318796
Cost after iteration 500: 0.293973
Cost after iteration 600: 0.260056
Cost after iteration 700: 0.224854
Cost after iteration 800: 0.193199
Cost after iteration 900: 0.167102
Cost after iteration 1000: 0.145568
Cost after iteration 1100: 0.131181
Cost after iteration 1200: 0.115718
Cost after iteration 1300: 0.103623
Cost after iteration 1400: 0.093212
Cost after iteration 1500: 0.084084
Cost after iteration 1600: 0.076276
Cost after iteration 1700: 0.069709
Cost after iteration 1800: 0.064087
Cost after iteration 1900: 0.059070
Cost after iteration 2000: 0.054392
Cost after iteration 2100: 0.050004
Cost after iteration 2200: 0.046068
Cost after iteration 2300: 0.042514
Cost after iteration 2400: 0.039062
![](https://www.kaggleusercontent.com/kf/14675591/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..q6auFBLqJFZgVlweTMA4Vw.nMan1Z4CTXdF-tmFqdKDtBcdeilB3Ld7O3IAlpC92zFnyQUmn04BP8RXTyQYtsLvCcmTYeGDfKlqhjUQsAjhLRWyp_NN1Ppd-FdlB7ZwZvG-TaLgjrWuiTvPvQIlXGz2Y80yEcseDCDzHuUG7Wi7ko3Ow619l18vOBmKz92LunpNB8hdsxS6JK6xOuaX4-W39QCzqH82zxE-Yt-rz5XSTo3ub43C2WdW5J00Mt4e1gEB2wLXHqGVCkQ8_0dH3-u4o-n9WP62LPQGsHzriieySZg3Xyn58Oo2P8-3HrNJgs7avvHSvpKFbXSpa4vnhBjhsAR4c2GiQz-ZtmyL0689NZ0TlJO3xUq-ICc8RWFnG4Ldu2pq0wCJ0I3SIXvEt6e5bVc7NZofGx34nnrQ1DXOJ7Fcjth2fClJEvzBRaygPbnrbvDAS3jjP_rHRc6vZtKUIa3UbGLI-DNjcVLIzxBXJWKVYASAQg3PHTdQqaoNGN7uOBefHPr8mQ3heHIhy2GCCjRnSBl8N8tG6OVfW-Xikz6z8pj5uw0ae2JfUeRI9DMwWpfqeDxKawXVXl1q5IeYBfO1dQn3d7p4NDHwasubTAbpebs4oP6WsxEos1lIeUwIcHWg55qn4Cy3qcn2QTjeecJ1Zts0G76hw-Pho8OKsrfJI5JqOVqAmItf1BSoEOY.VDMS-qtDJ64CYA21ES7Xsw/__results___files/__results___52_1.png)
train accuracy: 99.71264367816092 %
test accuracy: 96.7741935483871 %
So far, we have successfully built a 2-layer neural network and learned how to implement it.
We have covered important aspects such as the size of layers, initializing parameter weights and bias, forward propagation, loss function and cost function, backward propagation, updating parameters, and making predictions using the learned parameters weight and bias.
Now, let’s take our learning a step further and understand how to implement an L-layer neural network with Keras.
L Layer Neural Network
What happens if the number of hidden layers increases?
Earlier layers are capable of detecting basic features. When the model combines these basic features in later layers of the neural network, it can learn more complex functions. For instance, consider our sign example.
The first hidden layer, for instance, learns edges or simple shapes like lines. As the number of layers increases, the layers begin to learn more intricate concepts such as convex shapes or distinctive features like fingers.
Now, let’s build our model:
There are several hyperparameters that need to be selected, such as the learning rate, number of iterations, number of hidden layers, number of hidden units, and the type of activation functions.
These hyperparameters can be chosen through intuition if you have spent a significant amount of time in the world of deep learning.
However, if you haven’t spent much time, the best approach is to search online, although it’s not mandatory. You will need to experiment with hyperparameters to find the optimal ones.
In this tutorial, our model will consist of 2 hidden layers with 8 and 4 nodes, respectively. Increasing the number of hidden layers and nodes can significantly increase training time.
We will use ReLU as the activation function for both hidden layers and sigmoid for the output layer.
The number of epochs (passes over the training data) will be set to 100.
The process remains the same as in the previous sections, but as you grasp the fundamentals of deep learning, we can simplify our work by utilizing the Keras library for more complex neural networks.
First, let’s reshape our x_train, x_test, y_train, and y_test:
# reshaping
x_train, x_test, y_train, y_test = x_train.T, x_test.T, y_train.T, y_test.T
Implementing with the Keras library
Let’s review some key parameters from the Keras library:
- units: The number of nodes (the output dimension) of the layer.
- kernel_initializer: Used to initialize weights.
- activation: The activation function, in our case, we use ReLU.
- input_dim: The input dimension, which is the number of pixels in our images (4096 pixels).
- optimizer: We use the Adam optimizer. Adam is one of the most effective optimization algorithms for training neural networks due to its relatively low memory requirements and effectiveness even with minimal hyperparameter tuning.
- loss: The cost function, specifically the binary cross-entropy, defined as J = -(1/m) * Σ_{i=1}^{m} [ y^(i) log(a^[2](i)) + (1 − y^(i)) log(1 − a^[2](i)) ].
- metrics: This is set to accuracy.
- cross_val_score: Used for cross-validation. For more information on cross-validation, check out my machine learning tutorial here.
- epochs: The number of iterations.
# Evaluating the ANN
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from keras.models import Sequential  # initialize neural network library
from keras.layers import Dense  # build our layers library

def build_classifier():
    classifier = Sequential()  # initialize neural network
    classifier.add(Dense(units = 8, kernel_initializer = 'uniform', activation = 'relu', input_dim = x_train.shape[1]))
    classifier.add(Dense(units = 4, kernel_initializer = 'uniform', activation = 'relu'))
    classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
    classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
    return classifier

classifier = KerasClassifier(build_fn = build_classifier, epochs = 100)
accuracies = cross_val_score(estimator = classifier, X = x_train, y = y_train, cv = 3)
mean = accuracies.mean()
variance = accuracies.std()
print("Accuracy mean: " + str(mean))
print("Accuracy variance: " + str(variance))
Using TensorFlow backend.
Epoch 1/100
232/232 [==============================] - 0s 2ms/step - loss: 0.6932 - acc: 0.5776
Epoch 2/100
232/232 [==============================] - 0s 100us/step - loss: 0.6927 - acc: 0.6336
...
Epoch 100/100
232/232 [==============================] - 0s 96us/step - loss: 0.0785 - acc: 0.9828
116/116 [==============================] - 0s 379us/step
(The same training log repeats for each of the 3 cross-validation folds; in the folds shown, the training accuracy climbs to roughly 0.98-0.99 by the final epochs.)
48/100 232/232 [==============================] - 0s 95us/step - loss: 0.1088 - acc: 0.9612 Epoch 49/100 232/232 [==============================] - 0s 90us/step - loss: 0.1073 - acc: 0.9698 Epoch 50/100 232/232 [==============================] - 0s 95us/step - loss: 0.1043 - acc: 0.9655 Epoch 51/100 232/232 [==============================] - 0s 93us/step - loss: 0.1024 - acc: 0.9698 Epoch 52/100 232/232 [==============================] - 0s 91us/step - loss: 0.1008 - acc: 0.9741 Epoch 53/100 232/232 [==============================] - 0s 93us/step - loss: 0.1043 - acc: 0.9698 Epoch 54/100 232/232 [==============================] - 0s 96us/step - loss: 0.1016 - acc: 0.9612 Epoch 55/100 232/232 [==============================] - 0s 90us/step - loss: 0.0916 - acc: 0.9784 Epoch 56/100 232/232 [==============================] - 0s 94us/step - loss: 0.1260 - acc: 0.9526 Epoch 57/100 232/232 [==============================] - 0s 92us/step - loss: 0.1009 - acc: 0.9655 Epoch 58/100 232/232 [==============================] - 0s 94us/step - loss: 0.0881 - acc: 0.9741 Epoch 59/100 232/232 [==============================] - 0s 95us/step - loss: 0.0937 - acc: 0.9828 Epoch 60/100 232/232 [==============================] - 0s 104us/step - loss: 0.0834 - acc: 0.9698 Epoch 61/100 232/232 [==============================] - 0s 91us/step - loss: 0.0816 - acc: 0.9784 Epoch 62/100 232/232 [==============================] - 0s 90us/step - loss: 0.0798 - acc: 0.9741 Epoch 63/100 232/232 [==============================] - 0s 97us/step - loss: 0.0893 - acc: 0.9698 Epoch 64/100 232/232 [==============================] - 0s 116us/step - loss: 0.0772 - acc: 0.9784 Epoch 65/100 232/232 [==============================] - 0s 97us/step - loss: 0.1034 - acc: 0.9569 Epoch 66/100 232/232 [==============================] - 0s 91us/step - loss: 0.1033 - acc: 0.9612 Epoch 67/100 232/232 [==============================] - 0s 90us/step - loss: 0.1200 - acc: 0.9483 Epoch 68/100 232/232 [==============================] - 0s 89us/step - loss: 0.0801 - acc: 0.9612 Epoch 69/100 232/232 [==============================] - 0s 93us/step - loss: 0.0764 - acc: 0.9871 Epoch 70/100 232/232 [==============================] - 0s 93us/step - loss: 0.0821 - acc: 0.9698 Epoch 71/100 232/232 [==============================] - 0s 117us/step - loss: 0.0796 - acc: 0.9741 Epoch 72/100 232/232 [==============================] - 0s 91us/step - loss: 0.0799 - acc: 0.9698 Epoch 73/100 232/232 [==============================] - 0s 94us/step - loss: 0.0744 - acc: 0.9784 Epoch 74/100 232/232 [==============================] - 0s 94us/step - loss: 0.0780 - acc: 0.9655 Epoch 75/100 232/232 [==============================] - 0s 94us/step - loss: 0.0804 - acc: 0.9741 Epoch 76/100 232/232 [==============================] - 0s 91us/step - loss: 0.1012 - acc: 0.9569 Epoch 77/100 232/232 [==============================] - 0s 91us/step - loss: 0.0958 - acc: 0.9612 Epoch 78/100 232/232 [==============================] - 0s 93us/step - loss: 0.0615 - acc: 0.9784 Epoch 79/100 232/232 [==============================] - 0s 106us/step - loss: 0.0567 - acc: 0.9828 Epoch 80/100 232/232 [==============================] - 0s 99us/step - loss: 0.0602 - acc: 0.9828 Epoch 81/100 232/232 [==============================] - 0s 97us/step - loss: 0.0712 - acc: 0.9784 Epoch 82/100 232/232 [==============================] - 0s 92us/step - loss: 0.0648 - acc: 0.9828 Epoch 83/100 232/232 [==============================] - 0s 92us/step - loss: 0.0547 - acc: 0.9828 Epoch 84/100 232/232 
[==============================] - 0s 91us/step - loss: 0.0525 - acc: 0.9871 Epoch 85/100 232/232 [==============================] - 0s 95us/step - loss: 0.0543 - acc: 0.9828 Epoch 86/100 232/232 [==============================] - 0s 92us/step - loss: 0.0539 - acc: 0.9914 Epoch 87/100 232/232 [==============================] - 0s 94us/step - loss: 0.0531 - acc: 0.9828 Epoch 88/100 232/232 [==============================] - 0s 91us/step - loss: 0.0511 - acc: 0.9828 Epoch 89/100 232/232 [==============================] - 0s 91us/step - loss: 0.0540 - acc: 0.9871 Epoch 90/100 232/232 [==============================] - 0s 97us/step - loss: 0.0547 - acc: 0.9784 Epoch 91/100 232/232 [==============================] - 0s 93us/step - loss: 0.0470 - acc: 0.9914 Epoch 92/100 232/232 [==============================] - 0s 96us/step - loss: 0.0546 - acc: 0.9784 Epoch 93/100 232/232 [==============================] - 0s 95us/step - loss: 0.0610 - acc: 0.9871 Epoch 94/100 232/232 [==============================] - 0s 97us/step - loss: 0.0687 - acc: 0.9784 Epoch 95/100 232/232 [==============================] - 0s 97us/step - loss: 0.0686 - acc: 0.9784 Epoch 96/100 232/232 [==============================] - 0s 91us/step - loss: 0.0512 - acc: 0.9828 Epoch 97/100 232/232 [==============================] - 0s 94us/step - loss: 0.0415 - acc: 0.9871 Epoch 98/100 232/232 [==============================] - 0s 94us/step - loss: 0.0441 - acc: 0.9871 Epoch 99/100 232/232 [==============================] - 0s 93us/step - loss: 0.0538 - acc: 0.9828 Epoch 100/100 232/232 [==============================] - 0s 90us/step - loss: 0.0549 - acc: 0.9828 116/116 [==============================] - 0s 696us/step Accuracy mean: 0.9482758586434112 Accuracy variance: 0.012191495742907735
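The output above comes from cross-validating a Keras classifier over three folds. As a minimal sketch of that pattern (assuming the older `keras.wrappers.scikit_learn` wrapper; on recent versions you would use `scikeras.wrappers.KerasClassifier` instead), with random stand-in data and hypothetical layer sizes rather than the tutorial's exact code:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
# Assumption: older Keras ships this wrapper; with newer Keras/TensorFlow,
# import KerasClassifier from scikeras.wrappers instead.
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score

# Random stand-in for the flattened image data used earlier in the tutorial:
# 348 samples of 64x64 images (4096 features) with binary labels.
x_train = np.random.rand(348, 4096)
y_train = np.random.randint(0, 2, size=348)

def build_classifier():
    # Hypothetical layer sizes; the idea is simply input -> hidden -> sigmoid output.
    model = Sequential()
    model.add(Dense(8, activation='relu', input_dim=x_train.shape[1]))
    model.add(Dense(4, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model

# 3-fold cross-validation: each fold trains for 100 epochs, then is evaluated
# on the held-out third of the data.
classifier = KerasClassifier(build_fn=build_classifier, epochs=100)
accuracies = cross_val_score(estimator=classifier, X=x_train, y=y_train, cv=3)
print("Accuracy mean:", accuracies.mean())
print("Accuracy variance:", accuracies.var())
```

With the tutorial's real data in place of the random arrays, this is the kind of run that produces the per-fold epoch logs and the mean/variance summary shown above.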
Artificial Neural Network with the PyTorch Library
PyTorch, like Keras, is a framework that simplifies building and training the blocks of a deep learning model.
You can find a tutorial on artificial neural networks with PyTorch here: https://www.kaggle.com/kanncaa1/pytorch-tutorial-for-deep-learning-lovers. A minimal sketch is shown below.
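To give a flavour of what that tutorial covers, here is a small, self-contained sketch of a fully connected network in PyTorch; the layer sizes, learning rate, and the random stand-in batch are illustrative assumptions only:

```python
import torch
import torch.nn as nn

class ANN(nn.Module):
    """A minimal fully connected network: flattened 64x64 input -> 2 classes."""
    def __init__(self, input_dim=64 * 64, hidden_dim=150, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, x):
        return self.net(x)

model = ANN()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.02)

# One training step on random stand-in data (replace with a real DataLoader).
x = torch.randn(32, 64 * 64)
y = torch.randint(0, 2, (32,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(loss.item())
```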
Convolutional Neural Network with the PyTorch Library
PyTorch also makes it straightforward to build convolutional neural networks, which learn spatial features from images using convolution and pooling layers.
Check out the convolutional neural network section of the same PyTorch tutorial: https://www.kaggle.com/kanncaa1/pytorch-tutorial-for-deep-learning-lovers. A minimal sketch follows.
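Below is a small, hypothetical sketch of a convolutional network in PyTorch; the channel counts and the 28x28 grayscale input shape are assumptions chosen for illustration:

```python
import torch
import torch.nn as nn

class CNN(nn.Module):
    """Two conv + pool stages followed by a linear classifier."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2),  # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 14x14
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = CNN()
x = torch.randn(8, 1, 28, 28)  # stand-in batch of grayscale 28x28 images
print(model(x).shape)          # torch.Size([8, 10])
```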
Recurrent Neural Network with the PyTorch Library
Recurrent neural networks, which process sequential data one step at a time, are just as easy to build in PyTorch.
You can find a recurrent neural network example with PyTorch here: https://www.kaggle.com/kanncaa1/recurrent-neural-network-with-pytorch. A minimal sketch follows.
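Here is a small, hypothetical sketch of a recurrent classifier in PyTorch, using the common toy setup of treating each row of a 28x28 image as one step of a 28-step sequence; the sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class RNNClassifier(nn.Module):
    """A vanilla RNN that classifies a sequence from its last hidden state."""
    def __init__(self, input_size=28, hidden_size=100, num_layers=1, num_classes=10):
        super().__init__()
        self.rnn = nn.RNN(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out, _ = self.rnn(x)           # out: (batch, seq_len, hidden_size)
        return self.fc(out[:, -1, :])  # classify from the last time step

model = RNNClassifier()
x = torch.randn(8, 28, 28)  # 8 images, each read as a sequence of 28 rows
print(model(x).shape)       # torch.Size([8, 10])
```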
Conclusion
This tutorial is intentionally brief; if you would like a more in-depth explanation of any concept, feel free to leave a comment.
If you have any trouble understanding anything related to Python or machine learning, please take a look at my other tutorials.
- Data Science: https://www.kaggle.com/kanncaa1/data-sciencetutorial-for-beginners
- Machine learning: https://www.kaggle.com/kanncaa1/machine-learning-tutorial-for-beginners
I hope you now have a better understanding of deep learning. Fortunately, we don't have to write lengthy code from scratch every time we want to build a deep learning model.
That's why deep learning frameworks exist: they help us build models quickly and easily.
- Artificial Neural Network: https://www.kaggle.com/kanncaa1/pytorch-tutorial-for-deep-learning-lovers
- Convolutional Neural Network: https://www.kaggle.com/kanncaa1/pytorch-tutorial-for-deep-learning-lovers
- Recurrent Neural Network: https://www.kaggle.com/kanncaa1/recurrent-neural-network-with-pytorch.