Hands-On Neural Networks with Keras

Avoiding random memorization

Another answer is to manipulate not only the overall number of neurons, but also the degree of interconnectivity among those neurons. We can do this through techniques such as dropout and weight regularization, as we will see soon enough. So far, we have seen the various computations that each neuron can perform as data propagates through a network. We have also seen how the brain leverages billions of densely interconnected neurons to get the job done. Naturally, though, we can't just scale up our networks by arbitrarily adding more and more neurons. Long story short, simulating a neural structure close to the brain would likely require thousands of petaflops (a petaflop is a unit of computing speed equal to one thousand million million, or 10¹⁵, floating-point operations per second) of computing power. Perhaps this will become possible in the near future, with the aforementioned paradigm of massively parallelized computing, along with other advances in software and hardware. For now, though, we have to think of clever ways to train our networks so that they find the most efficient representations without wasting precious computational resources.
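To make the two techniques mentioned above concrete, the following is a minimal sketch (not taken from the book's own examples) of how dropout and L2 weight regularization can be attached to a small dense Keras network. The layer sizes, dropout rate, regularization factor, and input shape are illustrative assumptions, and the import path assumes the Keras API bundled with TensorFlow:

from tensorflow.keras import models, layers, regularizers

model = models.Sequential([
    # L2 weight regularization penalizes large connection weights,
    # discouraging the network from leaning too heavily on any single link
    layers.Dense(64, activation='relu', input_shape=(784,),
                 kernel_regularizer=regularizers.l2(0.001)),
    # Dropout randomly disables a fraction of neurons during training,
    # reducing the effective interconnectivity at each update step
    layers.Dropout(0.5),
    layers.Dense(64, activation='relu',
                 kernel_regularizer=regularizers.l2(0.001)),
    layers.Dropout(0.5),
    layers.Dense(10, activation='softmax'),
])

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()

Both mechanisms nudge the network toward leaner, more redundant representations, rather than relying on sheer neuron count, which is exactly the kind of computational thrift discussed above.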