Seminar
The Stone-Weierstrass Theorem and Neural Networks

Date
August 22, 2017, 12:00–13:00


Place 
Lecture Room S (W1-C-503), West Zone 1, Ito Campus, Kyushu University



Speaker 
Hien Nguyen (Department of Mathematics and Statistics, La Trobe University) 

Abstract:
Neural networks have become a ubiquitous tool in modern artificial intelligence, data analytics, and machine learning. A major key to the success of neural networks has been their ability to learn arbitrarily complex functions using simple architectures. The Stone-Weierstrass theorem extends the famous Weierstrass approximation theorem: it states that any subalgebra of functions that separates points is uniformly dense in the class of continuous functions on a compact set. Using the Stone-Weierstrass theorem, Cotter (1990, IEEE T Neural Networks) demonstrated that many common architectures are uniformly dense. In a similar way, Sandberg (2001, Circuits, Systems, and Signal Processing) used the Stone-Weierstrass theorem to prove the uniform denseness of Gaussian radial basis networks, another very popular architecture.
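For reference, a standard real-valued form of the theorem invoked in these proofs (note the usual additional hypothesis, assumed here, that the subalgebra contains the constant functions) can be stated as:

```latex
% Stone--Weierstrass theorem (real form), as standardly stated.
Let $X$ be a compact Hausdorff space and let
$\mathcal{A} \subseteq C(X,\mathbb{R})$ be a subalgebra of the continuous
real-valued functions on $X$ that contains the constant functions and
separates points, i.e.\ for every $x \neq y$ in $X$ there exists
$f \in \mathcal{A}$ with $f(x) \neq f(y)$. Then $\mathcal{A}$ is dense in
$C(X,\mathbb{R})$ with respect to the uniform norm
$\lVert f \rVert_\infty = \sup_{x \in X} \lvert f(x) \rvert$.
```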
In this talk, we will introduce the Stone-Weierstrass theorem and present its application to some of the proofs in Cotter (1990) and the proof of Sandberg (2001). Furthermore, we demonstrate how the Stone-Weierstrass theorem can be used to prove denseness for a more modern class of networks: the mixture-of-experts models of Jacobs et al. (1991, Neural Computation). These results come from our recent work in Nguyen et al. (2016, Neural Computation) and Nguyen (2017, arXiv:1704.00946).
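As a concrete illustration (a minimal sketch, not part of the talk materials), the snippet below builds a one-dimensional Gaussian radial basis network, the architecture whose uniform denseness Sandberg (2001) established, and fits its output weights by least squares to a continuous target on a compact interval. All names, the target function, and the chosen centers and width are illustrative assumptions.

```python
import numpy as np

# Hypothetical minimal sketch of a 1-D Gaussian radial basis network:
#   f(x) = sum_k w_k * exp(-(x - c_k)^2 / (2 * width^2))
# The denseness results say such networks can uniformly approximate any
# continuous function on a compact set, given enough centers.

def rbf_network(x, centers, weights, width):
    """Evaluate the Gaussian RBF network at the points in x."""
    # Pairwise squared distances between inputs and centers.
    d2 = (x[:, None] - centers[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width ** 2)) @ weights

# Target: a continuous function on the compact interval [0, 2*pi].
x = np.linspace(0.0, 2.0 * np.pi, 200)
target = np.cos(x)

# Fixed centers and width; only the output weights are fitted,
# here by ordinary least squares on the design matrix.
centers = np.linspace(0.0, 2.0 * np.pi, 25)
width = 0.4
design = np.exp(-((x[:, None] - centers[None, :]) ** 2)
                / (2.0 * width ** 2))
weights, *_ = np.linalg.lstsq(design, target, rcond=None)

approx = rbf_network(x, centers, weights, width)
max_err = np.max(np.abs(approx - target))
print(max_err)  # sup-norm error on the grid; shrinks as centers are added
```

Adding more centers (and shrinking the width appropriately) drives the uniform error toward zero, which is exactly the qualitative content of the denseness theorems discussed in the talk.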