A side-by-side comparison of R’s Shiny and Python’s Dash for building a simple web app. We’ll also discuss some of the unseen differences between the two that are important to consider before building a large-scale app and deploying it.
In our last post we did a total overhaul of our model, using a more appropriate neural network type and a more powerful framework. We simplified the problem to a binary classification with only two classes: our normal and our ceiling-effects plots. We achieved fantastic validation accuracy, but we never checked accuracy on a test set and never considered alternative metrics for evaluating model performance (accuracy is not always the most informative metric).
In this post, we'll create our final model that predicts all four classes, evaluate its accuracy on a set of data held out from any training or validation, and look at a metric other than accuracy to give us more information about our model's performance.
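As an illustration of looking beyond accuracy, one common choice is a confusion matrix with per-class precision and recall. A minimal sketch in plain NumPy (the four-class labels below are toy data, not our actual plot classes):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows are true classes, columns are predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def per_class_precision_recall(cm):
    """Precision = diagonal / column sums; recall = diagonal / row sums."""
    tp = np.diag(cm).astype(float)
    precision = tp / cm.sum(axis=0)
    recall = tp / cm.sum(axis=1)
    return precision, recall

# Toy four-class example: overall accuracy is 6/8 = 0.75, but the
# per-class numbers reveal that classes 0 and 2 are under-recalled.
y_true = [0, 0, 1, 1, 2, 2, 3, 3]
y_pred = [0, 1, 1, 1, 2, 3, 3, 3]
cm = confusion_matrix(y_true, y_pred, n_classes=4)
prec, rec = per_class_precision_recall(cm)
```

This is why accuracy alone can mislead: two models with identical accuracy can distribute their errors very differently across classes.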
Our last classifier was very poor: it operated at chance, so a coin flip would have had the same predictive power. A few things may have caused us to find no signal. It could be that we downsampled our images too much and lost useful information, that our neural network was poorly configured (it was), that we were using the wrong type of neural network (we were), or all of these. To address these issues, we'll spend a little more time constructing the model this time, examining the underlying structure of the data itself, what we really want to learn from it, and how best to model that.
We’ll be using Keras to interface with TensorFlow, and a type of Recurrent Neural Network (RNN) called a Long Short-Term Memory (LSTM) network.
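In practice we'll just call Keras's LSTM layer, but the update equations it implements can be sketched in plain NumPy. This is a minimal single-cell forward pass with randomly initialized weights, not the Keras implementation (`D` and `H` are an assumed input size and hidden size):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,).
    The four stacked blocks are the input, forget, and output gates
    plus the candidate cell state."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2*H])        # forget gate
    o = sigmoid(z[2*H:3*H])      # output gate
    g = np.tanh(z[3*H:4*H])      # candidate cell state
    c = f * c_prev + i * g       # new cell state: forget old, admit new
    h = o * np.tanh(c)           # new hidden state
    return h, c

# Run a random 5-step sequence through one cell
rng = np.random.default_rng(0)
D, H = 3, 4                      # assumed input and hidden dimensions
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(5):
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
```

The cell state `c` is what lets an LSTM carry information across many time steps, which is why it suits sequential data better than the feed-forward networks we tried before.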
Machine Learning On a 'Real' Problem
In this post we'll go through the process of turning a bunch of .png files into a training set and training a simple neural network. Future posts will dive into improving the model, using more advanced neural networks (convolutional NN vs. the multi-layer perceptron used here), and optimizing the model for a specific business aim (reducing false negatives at the expense of a few more false positives).
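The "bunch of .png files into a training set" step amounts to flattening each image into a feature vector, scaling the pixels, and one-hot encoding the labels. A hedged sketch using synthetic arrays in place of real files (reading actual .png files would use something like Pillow's `Image.open` followed by `np.asarray`; the 28x28 size and four classes are assumptions for illustration):

```python
import numpy as np

def to_training_set(images, labels, n_classes):
    """Flatten each 2-D grayscale image into a row vector, scale pixel
    values from [0, 255] to [0, 1], and one-hot encode integer labels."""
    X = np.stack([img.ravel() for img in images]).astype(float) / 255.0
    y = np.eye(n_classes)[labels]
    return X, y

# Synthetic stand-ins for the decoded .png pixel arrays
rng = np.random.default_rng(42)
images = [rng.integers(0, 256, size=(28, 28)) for _ in range(10)]
labels = rng.integers(0, 4, size=10)
X, y = to_training_set(images, labels, n_classes=4)
```

A multi-layer perceptron then trains directly on `X` and `y`; a convolutional network, by contrast, would keep the 2-D image shape instead of flattening it, which is part of why it performs better on images.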