Hi,
In the last post we classified MNIST using our own classifier. I was wondering how we could evaluate our algorithm with "real" data. What if we could evaluate it against other people's algorithms?
I guess most of you already know Kaggle, right? Kaggle is a website where you will find tons of material and competitions (sometimes with money prizes) where you can practice machine learning. It's a must for those who want to learn more about it.
Currently there is a competition to classify the MNIST dataset, and since we have already built a classifier for that, why not test it and try to improve it?
Kaggle competition
As mentioned before, Kaggle is a website where you can learn machine learning, share techniques and insights, ask questions, etc. It's really the place to find information.
You must first create an account and then subscribe to the "Digit Recognizer" competition. As of today, 1871 people are participating, and the final results will be published in 2020. To get a taste of how your algorithm is performing, the site immediately scores your submission against 25% of the test data.
Dataset
Kaggle gives us 42k labeled images for training, and we need to classify 28k test images. In the training file, the first column is the label and the other 784 columns are the pixels of the 28×28 image; the test file has only the 784 pixel columns.
In both files, the first line is a header naming the columns, so it's useful for our understanding but must be skipped by our algorithm. The files are in .csv format, which is easy to import into Octave.
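Here is a minimal sketch of how the files can be loaded in Octave (the names train.csv and test.csv match the competition's downloads):

% Load the Kaggle CSV files, skipping the header row (offsets are 0-based).
data = csvread('train.csv', 1, 0);
y = data(:, 1);                      % first column: the digit labels (0-9)
X = data(:, 2:end);                  % remaining 784 columns: pixel intensities
test = csvread('test.csv', 1, 0);    % test file: 784 pixel columns, no label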
After classifying all 28k test images, we also need to create a submission file in the format described on the competition's page; that is what the site scores against the 25% of the test data mentioned above.
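The format is a simple two-column CSV. A minimal sketch of writing it in Octave, assuming pred holds the predicted digit for each of the 28k test images:

% Write the submission file: a header plus one row per test image.
n = numel(pred);
fid = fopen('submission.csv', 'w');
fprintf(fid, 'ImageId,Label\n');
fprintf(fid, '%d,%d\n', [(1:n); pred(:)']);   % fprintf cycles column by column
fclose(fid);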
Results
Before getting to the results, I would like to share the approach I used to train my algorithm. I split the training dataset: part of it for training, and the rest held out to simulate the real test data from Kaggle. This is what I got from Octave:
Hits: 26715, Miss: 5285. Total: 32000
Multi-class Logistic Regression accuracy: 83.48%
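For reference, this is roughly how the split and the accuracy computation look in Octave. It is only a sketch: train and predict are hypothetical stand-ins for the classifier from the previous post.

% Hold-out split: first 10k rows train the model, the remaining 32k validate it.
Xtrain = X(1:10000, :);    ytrain = y(1:10000);
Xval   = X(10001:end, :);  yval   = y(10001:end);
theta = train(Xtrain, ytrain, 0.6, 200);   % hypothetical: learning rate, iterations
pred  = predict(theta, Xval);              % hypothetical: predicted digits 0-9
hits = sum(pred == yval);
printf('Hits: %d, Miss: %d. Total: %d\n', hits, numel(yval) - hits, numel(yval));
printf('Multi-class Logistic Regression accuracy: %.2f%%\n', 100 * hits / numel(yval));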
The results I got on my held-out set were really close to what Kaggle gave me, as you can see below:
- Test using a learning rate of 0.6, 200 iterations, and all 42k images for training: 0.77828.
- Test using the first 5k images for training, a learning rate of 0.3, and 100 iterations: 0.70785.
- Last test using 10k images for training, a learning rate of 0.6, and 200 iterations: 0.83485.
So as you can see, you can tune your algorithm through several parameters during training, and this is the tricky part; a simple way to explore them is sketched below. In the next post we can try to improve our algorithm using well-known techniques to avoid overfitting.
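A minimal sketch of such a parameter sweep, again assuming the hypothetical train and predict helpers and the Xtrain/ytrain/Xval/yval split from above:

% Try a few combinations of learning rate and iteration count.
for alpha = [0.3 0.6]
  for iters = [100 200]
    theta = train(Xtrain, ytrain, alpha, iters);  % hypothetical trainer
    acc = mean(predict(theta, Xval) == yval);     % accuracy on the held-out set
    printf('alpha=%.1f iters=%d accuracy=%.4f\n', alpha, iters, acc);
  end
end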
As usual, the code is on my GitHub.
Bye bye