Growing and Pruning in Sequential Learning Neural Networks

Artificial Neural Networks (ANNs) have gained much popularity in recent times due to their ability to solve many complex problems directly from input-output data and their inherently simple, parallel topological structure. Although several learning algorithms have been proposed in the literature for training ANNs, selecting a particular learning algorithm for an application is often difficult, as it must meet the accuracy and speed requirements of that application. Sequential learning is generally preferred to batch learning because it is computationally efficient and avoids retraining whenever new data are received.

A significant contribution to sequential learning was made by Platt through the development of a growing network called the Resource-Allocating Network (RAN), in which hidden neurons are added sequentially based on the 'novelty' of the incoming data. A significant improvement to RAN was made by Yingwei et al., who introduced a pruning strategy based on the relative contribution of each hidden neuron to the network output. The resulting network grows and prunes sequentially and produces a highly parsimonious structure.
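To make the mechanics concrete, below is a minimal sketch (in Python) of such a growing-and-pruning scheme on a one-dimensional regression stream. It is not the authors' implementation: the class name GrowPruneRBF and all parameter values are illustrative choices, and a simple LMS update on the output weights stands in for Platt's full LMS update and for the extended Kalman filter used in the pruning variant.

import numpy as np

class GrowPruneRBF:
    def __init__(self, eps_max=1.0, eps_min=0.1, gamma=0.99,
                 e_min=0.05, kappa=0.9, delta=0.01, M=50, lr=0.05):
        self.centers, self.widths, self.weights = [], [], []
        self.bias = 0.0
        self.low_count = []            # consecutive low-contribution counts
        self.eps = eps_max             # novelty distance threshold (decays)
        self.eps_min, self.gamma = eps_min, gamma
        self.e_min, self.kappa = e_min, kappa
        self.delta, self.M, self.lr = delta, M, lr

    def _phi(self, x):
        # Gaussian activations of all hidden units at input x.
        return np.array([np.exp(-np.sum((x - c) ** 2) / (s ** 2))
                         for c, s in zip(self.centers, self.widths)])

    def predict(self, x):
        if not self.centers:
            return self.bias
        return self.bias + float(np.dot(self.weights, self._phi(x)))

    def observe(self, x, y):
        x = np.asarray(x, dtype=float)
        e = y - self.predict(x)
        d = (min(np.linalg.norm(x - c) for c in self.centers)
             if self.centers else np.inf)
        if abs(e) > self.e_min and d > self.eps:
            # Both novelty criteria met: allocate a new hidden unit
            # centred at the input, with weight equal to the error.
            self.centers.append(x.copy())
            self.widths.append(self.kappa * (d if np.isfinite(d) else self.eps))
            self.weights.append(e)
            self.low_count.append(0)
        elif self.centers:
            # Otherwise adapt existing parameters (LMS on the linear
            # weights here, as a stand-in for the papers' updates).
            phi = self._phi(x)
            self.weights = list(np.asarray(self.weights) + self.lr * e * phi)
            self.bias += self.lr * e
            self._prune(phi)
        self.eps = max(self.eps * self.gamma, self.eps_min)

    def _prune(self, phi):
        # Pruning: remove units whose normalised contribution to the
        # output stays below delta for M consecutive observations.
        out = np.abs(np.asarray(self.weights) * phi)
        r = out / (out.max() + 1e-12)
        keep = []
        for k, rk in enumerate(r):
            self.low_count[k] = self.low_count[k] + 1 if rk < self.delta else 0
            if self.low_count[k] < self.M:
                keep.append(k)
        self.centers = [self.centers[k] for k in keep]
        self.widths = [self.widths[k] for k in keep]
        self.weights = [self.weights[k] for k in keep]
        self.low_count = [self.low_count[k] for k in keep]

# Feed the network a noisy sine wave one sample at a time.
rng = np.random.default_rng(0)
net = GrowPruneRBF()
for _ in range(2000):
    x = rng.uniform(-3.0, 3.0, size=1)
    net.observe(x, np.sin(x[0]) + 0.01 * rng.standard_normal())
print("hidden units:", len(net.centers))

In this sketch a unit is added only when both the prediction error exceeds a threshold and the input lies farther than a decaying distance threshold from every existing centre, in the spirit of RAN's two novelty criteria; pruning follows the rule of removing units whose normalised output contribution stays small over a window of consecutive samples.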

This talk gives an exposition of these sequential learning schemes and, more importantly, of their practical applications in the areas of signal processing (magnetic recording), communications (channel equalization), control (flight control), and computer networks (ATM traffic).