
Steps ahead
Congratulations! In just a few pages, we have already come a long way. You now know how neural networks learn, and you have an idea of the higher-level mathematical constructs that allow them to learn from data. We saw how a single neuron, namely the perceptron, is configured, and how this neural unit transforms its input features as data propagates forward through it. We also covered the notion of representing non-linearity through activation functions, and how multiple neurons may be organized into a layer, allowing each individual neuron in the layer to capture different patterns in our data. These learned patterns are updated at each training iteration, for each neuron: we compute the loss between our predictions and the actual output values, and adjust the weights of each neuron in the model until we find an ideal configuration.
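To make this recap concrete, here is a minimal sketch of that training loop for a single sigmoid neuron, assuming a mean squared error loss and plain gradient descent; the input data, learning rate, and iteration count are illustrative values, not examples from earlier in the chapter:

```python
import numpy as np

def sigmoid(z):
    # Non-linear activation squashing the weighted sum into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative data: input features x and the actual output value y
x = np.array([0.5, -1.2, 0.3])
y = 1.0

w = np.zeros(3)   # one weight per input feature
b = 0.0           # bias term
lr = 0.1          # learning rate (illustrative value)

for _ in range(100):                      # training iterations
    # Forward pass: the neuron transforms its input features
    y_hat = sigmoid(np.dot(w, x) + b)
    # Loss between our prediction and the actual output value
    loss = 0.5 * (y_hat - y) ** 2
    # Gradient of the loss with respect to the weighted sum,
    # via the chain rule through the sigmoid activation
    grad = (y_hat - y) * y_hat * (1.0 - y_hat)
    # Adjust the weights and bias to reduce the loss
    w -= lr * grad * x
    b -= lr * grad
```

Each pass through the loop performs exactly the cycle described above: forward propagation, loss computation, and a weight update that nudges the neuron toward a better configuration.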
In fact, modern neural networks employ various types of neurons, configured in diverse ways, for different predictive tasks. While the underlying learning mechanism of neural networks always remains the same, the specific configuration of neurons (their number, their inter-connectivity, the activation functions used, and so on) is what defines the different neural network architectures you may come across. For the time being, we leave you with a comprehensive illustration generously provided by the Asimov Institute.
In the following diagram, you can see some prominent types of neurons, or cells, along with the configurations that form some of the most commonly used state-of-the-art neural networks, which you will also encounter throughout the course of this book:
