
Training a perceptron
So far, we have a clear picture of how data propagates forward through our perceptron, and we have briefly seen how the model's errors can be propagated backwards. At each training iteration, we use a loss function to compute a loss value, which tells us how far our model's predictions deviate from the ground truth. But what then?
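To make the idea concrete, here is a minimal sketch of one forward pass through a single perceptron followed by a loss computation. All names (`sigmoid`, `predict`, `mse_loss`) and the toy weights and data are illustrative assumptions, not taken from the book, and mean squared error stands in for whichever loss function the model actually uses:

```python
import numpy as np

def sigmoid(z):
    # Squash the weighted sum into the (0, 1) range.
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b):
    # Forward pass: weighted sum of inputs plus bias, then activation.
    return sigmoid(np.dot(x, w) + b)

def mse_loss(y_true, y_pred):
    # Mean squared error: how far predictions lie from the ground truth.
    return np.mean((y_true - y_pred) ** 2)

# Toy data: two samples with three features each (illustrative values).
x = np.array([[0.5, 1.0, -0.3],
              [1.2, -0.7, 0.8]])
w = np.array([0.1, -0.2, 0.4])   # one weight per input feature
b = 0.05                          # bias term
y_true = np.array([1.0, 0.0])    # ground-truth labels

y_pred = predict(x, w, b)
loss = mse_loss(y_true, y_pred)
print(loss)
```

The loss value produced here is a single scalar summarizing the model's error over the batch; the open question the text raises next is how that scalar is used to adjust `w` and `b`.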