Hyperparameter Tuning: Neural Networks 101 | by Egor Howell | Nov, 2023

How you can improve the “learning” and “training” of neural networks through tuning hyperparameters

Egor Howell
Towards Data Science
Neural-network icons created by Vectors Tank — Flaticon. https://www.flaticon.com/free-icons/neural

In my previous post, we discussed how neural networks predict and learn from the data. There are two processes responsible for this: the forward pass and backward pass, also known as backpropagation. You can learn more about it here:

This post will dive into how we can optimise this “learning” and “training” process to increase the performance of our model. The areas we will cover are computational improvements and hyperparameter tuning, and how to carry these out in PyTorch!

But, before all that good stuff, let’s quickly jog our memory about neural networks!

Neural networks are large mathematical expressions that try to find the “right” function that can map a set of inputs to their corresponding outputs. An example of a neural network is depicted below:

A basic multi-layer perceptron with two hidden layers. Diagram by author.
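A network like the one in the diagram can be sketched in a few lines of PyTorch. The layer sizes and the ReLU activation below are illustrative assumptions, not values from the article:

```python
import torch
import torch.nn as nn

# A minimal sketch of a multi-layer perceptron with two hidden layers.
# Layer widths (4 -> 8 -> 8 -> 1) are assumptions for illustration only.
model = nn.Sequential(
    nn.Linear(4, 8),   # input features -> first hidden layer
    nn.ReLU(),
    nn.Linear(8, 8),   # first hidden layer -> second hidden layer
    nn.ReLU(),
    nn.Linear(8, 1),   # second hidden layer -> output
)

x = torch.randn(1, 4)      # one sample with 4 input features
y = model(x)               # forward pass
print(y.shape)             # torch.Size([1, 1])
```

Each `nn.Linear` holds the weights and bias for one layer, and `nn.Sequential` chains the layers so a single call runs the full forward pass.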

Each hidden-layer neuron carries out the following computation:

The process carried out inside each neuron. Diagram by author.
  • Inputs: These are the features of our dataset.
  • Weights: Coefficients that scale the inputs. The goal of the algorithm is to find the optimal coefficients through gradient descent.
  • Linear Weighted Sum: Sum up the products of the inputs and weights and add a bias/offset term, b.
  • Hidden Layer: Multiple neurons are stacked to learn patterns in the dataset. The superscript refers to the layer and the subscript to the index of the neuron in that layer.
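The computation inside a single neuron described above can be written out directly. The input values, weights, and bias below are made-up numbers, and ReLU is one common choice of activation, used here purely for illustration:

```python
import torch

# A single neuron's computation:
#   z = w1*x1 + w2*x2 + ... + wn*xn + b   (linear weighted sum plus bias)
#   a = activation(z)                     (ReLU here, an illustrative choice)
x = torch.tensor([0.5, -1.2, 3.0])   # inputs: features of one sample (made up)
w = torch.tensor([0.8, 0.1, -0.4])   # weights: coefficients scaling each input
b = torch.tensor(0.2)                # bias/offset term

z = torch.dot(w, x) + b              # linear weighted sum
a = torch.relu(z)                    # neuron output after the activation
print(z.item(), a.item())            # -0.72 0.0
```

Because z is negative here, ReLU clamps the neuron's output to zero; gradient descent would then adjust w and b so the network's outputs move closer to the targets.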
