AI is impacting our daily lives, and from image classification to ChatGPT, the field is evolving fast! Most AI models are implemented using deep learning techniques, which use a huge amount of data to train the model so it makes better predictions.
A large deep learning model is very complex, but a small model is relatively understandable and can still do a lot of things, like image classification, sales forecasting, recommendation, etc.
Perceptron
The basic building block of deep learning is the perceptron, which computes a prediction output from the input data through a linear equation. For instance, an output of 0.3 could be the probability that an image shows a cat. Even a single perceptron can make a prediction. To simplify, we can create a perceptron with 2 inputs and 1 output, which forms an equation as in the following diagram:

y = w1·x1 + w2·x2 + b

where x1 and x2 are the input data, w1 and w2 are the weights, and b is the bias. The weights and the bias can be updated by training so the equation outputs a more accurate value. The bias can also be written as a weight w0 with a constant input x0 = 1:

y = w1·x1 + w2·x2 + w0·x0
How can it be used to predict? Let's say there is a group of data with kilograms and heights for identifying either cats or mice. It can be plotted as a graph, and the equation can draw a line that splits the data into the two groups, which is a task the perceptron can perform: if y = w1·x1 + w2·x2 + b > 0 then it is a cat, else it is a mouse.
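This decision rule can be sketched in a few lines of Python. The weight values below are hypothetical, picked only so that heavy, tall animals land on one side of the line:

```python
# A 2-input perceptron: y = w1*x1 + w2*x2 + b.
def perceptron(x1, x2, w1, w2, b):
    return w1 * x1 + w2 * x2 + b

# Hypothetical weights for illustration (not trained values).
w1, w2, b = 1.0, 1.0, -10.0

def is_cat(kg, height):
    # If y > 0 it is a cat, else it is a mouse.
    return perceptron(kg, height, w1, w2, b) > 0

print(is_cat(4.0, 25.0))   # a 4 kg, 25 cm animal -> True (cat)
print(is_cat(0.03, 6.0))   # a 30 g, 6 cm animal -> False (mouse)
```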
Loss function
Continuing with the model, we can define:

z = w1·x1 + w2·x2 + b

to mean it is a cat if z > 0, or it is not a cat if z ≤ 0 (in our case, it is a mouse). But how do we get the weight values for the perceptron to form the equation that splits the data? We could assume that when the output z gets bigger, the input is more likely a cat, and use that to evolve the model. But there is a problem when some outputs are large and some outputs are small. We need a function for normalizing them; a commonly used one is the sigmoid function:

σ(z) = 1 / (1 + e^(-z))
The output of the sigmoid function represents a probability. Now we can apply the sigmoid to the perceptron so it outputs the probability of being a cat:

p = σ(w1·x1 + w2·x2 + b)
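A minimal sketch of this, reusing the same hypothetical weights as before:

```python
import math

def sigmoid(z):
    # Squashes any real z into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(x1, x2, w1, w2, b):
    # Probability that the input is a cat.
    z = w1 * x1 + w2 * x2 + b
    return sigmoid(z)

print(sigmoid(0.0))                                # 0.5: the undecided middle
print(predict_proba(4.0, 25.0, 1.0, 1.0, -10.0))   # near 1
print(predict_proba(0.03, 6.0, 1.0, 1.0, -10.0))   # near 0
```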
For the probability of the model correctly predicting n items:

P = ∏ pi^yi · (1 − pi)^(1 − yi)

where yi is the expected result, either 1 or 0, meaning it is a cat or a mouse. If the expected result yi is 0, then the probability of a correct prediction is 1 − pi. But there is a problem: if there are a lot of predicted outputs, the product P will become very small. To solve it, we can take the logarithm of P so that the product becomes a sum:

log P = Σ [ yi·log(pi) + (1 − yi)·log(1 − pi) ]

This will result in a negative value, since every sigmoid output is greater than 0 and less than 1, so every log term is negative. We can take the negative of it, and also take an average of it:

L = −(1/n) Σ [ yi·log(pi) + (1 − yi)·log(1 − pi) ]
The value from this formula indicates how well a model performs. If the value is high, the model didn't predict well; if the value is low, the model predicted well. This formula is Binary Cross-Entropy, a kind of cross-entropy loss function for measuring loss, which can also help to optimize models.
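The formula translates directly into code; `binary_cross_entropy` is a hypothetical helper name for this sketch:

```python
import math

def binary_cross_entropy(probs, labels):
    # probs: model outputs p_i in (0, 1); labels: expected y_i (1 = cat, 0 = mouse).
    total = 0.0
    for p, y in zip(probs, labels):
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(probs)  # negate and average: a low value means a good model

# Confident correct predictions give a low loss; confident wrong ones a high loss.
print(binary_cross_entropy([0.9, 0.1], [1, 0]))  # about 0.105
print(binary_cross_entropy([0.1, 0.9], [1, 0]))  # about 2.303
```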
Training
Now we have a loss function that tells us how good a model is, and it can also help to optimize the model. How is that possible? What we need to do is adjust the weights of the model so that the loss function evaluates to near zero. We want to update the model's weights so that the output is 0.5 or higher, meaning it is a cat, for inputs with higher heights and kilograms, and vice versa for a mouse. But how much should we adjust each weight? The answer is: by using the derivative. For example, consider the function f(x) = x²; its derivative is f'(x) = 2x. The derivative indicates the direction and the magnitude in which f changes, so we can take the opposite of the derivative as the update, in this case Δx = −η·2x where η is a small step size, and repeatedly apply x ← x + Δx to find the minimum point of the function. (This is called steepest descent, or gradient descent; there are other ways to update weights.)
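Gradient descent on f(x) = x² can be sketched in a few lines; the step size 0.1 is an arbitrary choice:

```python
def gradient_descent(x, learning_rate=0.1, steps=100):
    # Repeatedly step opposite the derivative f'(x) = 2x of f(x) = x**2.
    for _ in range(steps):
        x = x - learning_rate * (2 * x)
    return x

# Starting from x = 5, the iterates shrink toward the minimum at x = 0.
print(gradient_descent(5.0))
```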
Now we can use derivatives to calculate the gradient of the loss function. To simplify, consider a single sample and let:

z = w1·x1 + w2·x2 + b
p = σ(z)
L = −[ y·log(p) + (1 − y)·log(1 − p) ]
Take its partial derivative with respect to a weight, using the chain rule:

∂L/∂w1 = (∂L/∂p) · (∂p/∂z) · (∂z/∂w1)
The derivative of the sigmoid function:

σ'(z) = σ(z) · (1 − σ(z))
And the partial derivative of z with respect to w1, where z = w1·x1 + w2·x2 + b:

∂z/∂w1 = x1
Back to the loss function:

∂L/∂p = −y/p + (1 − y)/(1 − p)

Multiplying the three parts together and simplifying:

∂L/∂w1 = (p − y) · x1
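One way to gain confidence in the result ∂L/∂w1 = (p − y)·x1 is to compare it against a numerical finite difference; the input values below are arbitrary examples:

```python
import math

def loss(w1, w2, b, x1, x2, y):
    # Binary cross-entropy for a single sample.
    p = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def analytic_grad_w1(w1, w2, b, x1, x2, y):
    # The derived gradient: (p - y) * x1.
    p = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))
    return (p - y) * x1

w1, w2, b, x1, x2, y = 0.3, -0.2, 0.1, 1.5, 2.0, 1
eps = 1e-6
numeric = (loss(w1 + eps, w2, b, x1, x2, y)
           - loss(w1 - eps, w2, b, x1, x2, y)) / (2 * eps)
print(analytic_grad_w1(w1, w2, b, x1, x2, y), numeric)  # nearly identical
```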
The weights and bias will be updated as:

w1 ← w1 − η·(p − y)·x1
w2 ← w2 − η·(p − y)·x2
b ← b − η·(p − y)
Strictly, the update should use the average of the gradients over the batch, multiplied by the learning rate; η here just folds that into one constant for convenience.
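Putting the update rule together gives a tiny training loop. The data below is synthetic (a hypothetical batch of two cats and two mice), and the learning rate is kept small because the raw features are not normalized:

```python
import math

def train(data, labels, eta=0.005, epochs=20000):
    # Batch gradient descent with the update w <- w - eta * avg((p - y) * x).
    w1 = w2 = b = 0.0
    n = len(data)
    for _ in range(epochs):
        g1 = g2 = gb = 0.0
        for (x1, x2), y in zip(data, labels):
            p = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))
            g1 += (p - y) * x1
            g2 += (p - y) * x2
            gb += (p - y)
        w1 -= eta * g1 / n
        w2 -= eta * g2 / n
        b -= eta * gb / n
    return w1, w2, b

data = [(4.0, 25.0), (5.0, 30.0), (0.03, 6.0), (0.05, 7.0)]  # (kg, height in cm)
labels = [1, 1, 0, 0]                                        # 1 = cat, 0 = mouse
w1, w2, b = train(data, labels)
p_cat = 1.0 / (1.0 + math.exp(-(w1 * 4.0 + w2 * 25.0 + b)))
p_mouse = 1.0 / (1.0 + math.exp(-(w1 * 0.03 + w2 * 6.0 + b)))
print(p_cat, p_mouse)  # p_cat above 0.5, p_mouse below 0.5
```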
Sample app demo
Here is a sample app to demonstrate how it works. You can think of the blue points as cats and the orange points as mice, with the coordinates representing heights and kilograms. But you can also ignore the analogy and just try the model; it is still able to split the points as long as they are roughly linearly separable.