The MNIST database contains handwritten digits, commonly used for training various image processing systems.
Each image in the MNIST dataset is 28x28 pixels, with pixel values ranging from 0 to 255, representing grayscale intensity from black to white. Each image is flattened into a vector of 784 (28x28) values to be used as input to the neural network.
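As a minimal sketch of this preprocessing step (assuming the images are already loaded as a NumPy array of shape `(m, 28, 28)`; the function and variable names here are illustrative, not from the original text):

```python
import numpy as np

def prepare_images(images):
    """Flatten (m, 28, 28) images into a (784, m) matrix and scale to [0, 1].

    `images` is assumed to be a NumPy array of raw pixel values (0-255).
    """
    m = images.shape[0]
    # Reshape each 28x28 image into a 784-length vector, then transpose so
    # that every column of the result is one image.
    X = images.reshape(m, 784).T
    # Scale pixel values from 0-255 down to 0-1 for stable training.
    return X / 255.0
```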
- Input Layer: In the raw data matrix, each row represents one image. By taking the transpose of this matrix, each column represents one image instead. The input layer size is 784, corresponding to the 784 pixels of each image.
- Hidden Layer: The hidden layer has 10 neurons. The input to the hidden layer (its pre-activation) is calculated as follows:
$$\text{hidden\_layer}[1]\,(10 \times m) = \text{Weight}[1]\,(10 \times 784) \times \text{input\_layer}[0]\,(784 \times m) + \text{bias}[1]\,(10 \times 1)$$ where $m$ is the number of images; the $(10 \times 1)$ bias vector is broadcast across all $m$ columns.
The hidden layer's activation is then obtained by applying an activation function, here ReLU:
$$\text{output\_Layer}[1] = g(\text{hidden\_layer}[1]) = \text{ReLU}(\text{hidden\_layer}[1])$$
- Output Layer: The output layer also has 10 neurons, corresponding to the digits 0-9. Its raw scores are passed through the softmax activation function (described below) to produce class probabilities, $\text{output\_Layer}[2]$ (see the sketch after this list).
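The following is a minimal NumPy sketch of this forward pass under the assumptions above; the parameter names `W1`, `b1`, `W2`, `b2`, the layer-2 shapes, and the helper functions are illustrative, not taken from the original text:

```python
import numpy as np

def relu(Z):
    # Element-wise ReLU: max(0, x).
    return np.maximum(0, Z)

def softmax(Z):
    # Column-wise softmax; subtracting the column max improves numerical stability.
    expZ = np.exp(Z - Z.max(axis=0, keepdims=True))
    return expZ / expZ.sum(axis=0, keepdims=True)

def forward_pass(X, W1, b1, W2, b2):
    """Forward pass for a 784 -> 10 -> 10 network.

    X  : (784, m) input matrix, one image per column
    W1 : (10, 784), b1 : (10, 1)
    W2 : (10, 10),  b2 : (10, 1)
    """
    Z1 = W1 @ X + b1          # hidden_layer[1], shape (10, m)
    A1 = relu(Z1)             # output_Layer[1], shape (10, m)
    Z2 = W2 @ A1 + b2         # raw scores (logits) of the output layer
    A2 = softmax(Z2)          # output_Layer[2], class probabilities
    return Z1, A1, Z2, A2
```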
Activation functions introduce non-linearity to the model, allowing the network to capture complex patterns.
The sigmoid function outputs a value between 0 and 1:
$$\sigma(x) = \frac{1}{1 + e^{-x}}$$
- Behavior: Values close to 1 indicate an active neuron, and values close to 0 indicate an inactive neuron. The sigmoid function pushes input values toward the ends of the curve (0 or 1), with the largest changes in output occurring for inputs near zero.
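As a quick NumPy sketch of the sigmoid (not part of the original text):

```python
import numpy as np

def sigmoid(x):
    # Squashes any real input into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(np.array([-10.0, 0.0, 10.0])))  # ~[0.0000454, 0.5, 0.9999546]
```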
The tanh function outputs a value between -1 and 1:
$$\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$
- Behavior: The tanh function is a scaled and shifted version of the sigmoid function ($\tanh(x) = 2\sigma(2x) - 1$). It pushes input values toward -1 and 1.
- Gradient: Around zero, the gradient of tanh is four times greater than that of the sigmoid ($\tanh'(0) = 1$ versus $\sigma'(0) = 0.25$), resulting in stronger gradients, larger weight updates, and faster learning.
- Output Symmetry: Tanh's output is zero-centered (symmetric around zero), which can lead to faster convergence during training.
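A short numerical check of the gradient claim, using the standard derivative formulas (a sketch, not from the original text):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x))
    s = sigmoid(x)
    return s * (1.0 - s)

def tanh_grad(x):
    # d/dx tanh(x) = 1 - tanh(x)^2
    return 1.0 - np.tanh(x) ** 2

# At x = 0 the tanh gradient (1.0) is four times the sigmoid gradient (0.25).
print(tanh_grad(0.0), sigmoid_grad(0.0))  # 1.0 0.25
```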
Rectified Linear Activation Function
The sigmoid and hyperbolic tangent activation functions are hard to use in networks with many layers because of the vanishing gradient problem. The rectified linear activation function overcomes the vanishing gradient problem, allowing models to learn faster and perform better. In mathematical notation, the ReLU (Rectified Linear Unit) activation function can be expressed as:
$$\text{ReLU}(x) = \max(0, x)$$
This function outputs the input value $x$ if $x$ is greater than 0, and 0 otherwise. It is a simple yet effective activation function widely used in neural networks, especially in deep learning models.
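A minimal NumPy sketch of ReLU and its derivative (the derivative is used later during backpropagation; the function names are illustrative):

```python
import numpy as np

def relu(Z):
    # Returns Z where Z > 0, and 0 elsewhere.
    return np.maximum(0, Z)

def relu_deriv(Z):
    # The derivative is 1 for positive inputs and 0 otherwise.
    return (Z > 0).astype(float)

Z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(Z))        # [0.  0.  0.  0.5 2. ]
print(relu_deriv(Z))  # [0. 0. 0. 1. 1.]
```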
The softmax function turns a vector of K real values into a vector of K values that sum to 1. The input values can be positive, negative, zero, or greater than one, but the softmax transforms them into values between 0 and 1, so that they can be interpreted as probabilities. If an input is small or negative, the softmax turns it into a small probability; if an input is large, it turns it into a large probability, but the result always remains between 0 and 1. The softmax is a commonly used activation function, especially in the output layer of a neural network for multi-class classification problems, where it converts raw scores (logits) into probabilities. It is defined as follows:
Given a vector $\mathbf{z} = (z_1, z_2, \dots, z_n)$ of raw scores (logits), the softmax function $\text{softmax}(z_i)$ for each element $z_i$ is calculated as:
$$\text{softmax}(z_i) = \frac{e^{z_i}}{\sum_{j=1}^{n} e^{z_j}}$$
This function exponentiates each element of the input vector $\mathbf{z}$, then divides each exponentiated value by the sum of all exponentiated values in the vector, ensuring that the resulting values form a probability distribution that sums to 1.
In vectorized form, the softmax function for a vector $\mathbf{z}$ can be written as:
$$\text{softmax}(\mathbf{z}) = \frac{e^{\mathbf{z}}}{\sum_{j=1}^{n} e^{z_j}}$$
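Continuing the earlier forward-pass sketch, a small usage example; the max-subtraction inside `softmax` leaves the result unchanged (softmax(z) = softmax(z + c) for any constant c) but prevents overflow for large logits:

```python
import numpy as np

def softmax(Z):
    # Column-wise softmax with the standard max-subtraction for stability.
    expZ = np.exp(Z - Z.max(axis=0, keepdims=True))
    return expZ / expZ.sum(axis=0, keepdims=True)

z = np.array([[2.0], [1.0], [0.1]])   # one column of raw scores (logits)
p = softmax(z)
print(p.ravel())   # approximately [0.659, 0.242, 0.099]
print(p.sum())     # 1.0 (up to floating-point rounding)
```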
Backpropagation, short for "backward propagation of errors," is an algorithm for supervised learning of artificial neural networks using gradient descent. Given an artificial neural network and an error function, the method calculates the gradient of the error function with respect to the neural network's weights. It is a generalization of the delta rule for perceptrons to multilayer feedforward neural networks.
- $\text{output\_Layer}[2]$ is the output after the softmax activation function.
These equations represent the gradients of the loss function with respect to the parameters of the neural network; they are used to update the weights and biases during training via gradient descent or another optimization algorithm. The learning-rate hyperparameter $\alpha$ controls the size of each update step.
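A sketch of the backward pass and parameter update for the architecture above, assuming a cross-entropy loss with softmax outputs and one-hot labels `Y`; the gradient formulas are the standard ones for that combination, written with the variable names from the earlier sketches rather than copied from the original text:

```python
import numpy as np

def relu_deriv(Z):
    # Derivative of ReLU: 1 for positive inputs, 0 otherwise.
    return (Z > 0).astype(float)

def backward_pass(X, Y, Z1, A1, A2, W2):
    """Gradients for the 784 -> 10 -> 10 network with softmax + cross-entropy.

    X : (784, m) inputs, Y : (10, m) one-hot labels.
    """
    m = X.shape[1]
    dZ2 = A2 - Y                                 # error at the output layer
    dW2 = (1.0 / m) * dZ2 @ A1.T
    db2 = (1.0 / m) * dZ2.sum(axis=1, keepdims=True)
    dZ1 = (W2.T @ dZ2) * relu_deriv(Z1)          # propagate error through ReLU
    dW1 = (1.0 / m) * dZ1 @ X.T
    db1 = (1.0 / m) * dZ1.sum(axis=1, keepdims=True)
    return dW1, db1, dW2, db2

def update_params(W1, b1, W2, b2, dW1, db1, dW2, db2, alpha=0.1):
    # Gradient descent step: move each parameter against its gradient,
    # scaled by the learning-rate hyperparameter alpha.
    W1 -= alpha * dW1
    b1 -= alpha * db1
    W2 -= alpha * dW2
    b2 -= alpha * db2
    return W1, b1, W2, b2
```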