Towards Practical Neural Network Meta-Modeling

This thesis largely expands upon the work presented in [Designing Neural Network Architectures Using Reinforcement Learning] and [Practical Neural Network Performance Prediction for Early Stopping]. We present all of the material described in these papers, along with some updated results. Notably, after re-analyzing the MetaQNN models, we found that MetaQNN was in fact able to achieve 4.7% error on CIFAR-10, a new record for models using only standard convolution and pooling layers. We also present brief work on visualizing varying architectures and an improved algorithm for speeding up Hyperband.