Archive for the Machine Learning Category

MNIST with Kaggle Kernel

Hi


Today, I’d like to share a quick post. As you already know, Kaggle is the place to study and learn machine learning: its users share their solutions and insights for all sorts of ML problems. How can they share so much information in an easy, practical way? Through kernels. What does a kernel look like? Let’s take a look.

Continue reading MNIST with Kaggle Kernel

MNIST – Regularized Logistic Regression

Hello guys


Sometimes when we train our algorithm, it becomes too specific to our training dataset, which is not good. Why? Because the algorithm must also be able to correctly classify data it has never seen before; fitting the training set too closely is called overfitting. So today, I’ll show you a way to try to improve the accuracy of our algorithm: regularization.
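Before clicking through, here’s a minimal sketch of the idea in NumPy (the names and the lambda parameter are mine, not necessarily the post’s): logistic regression’s cross-entropy cost gets an extra L2 penalty term that discourages large weights.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def regularized_cost(theta, X, y, lam):
    # Cross-entropy cost plus an L2 penalty on every weight except the bias theta[0].
    m = len(y)
    h = sigmoid(X @ theta)
    cost = -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m
    penalty = lam * np.sum(theta[1:] ** 2) / (2 * m)
    return cost + penalty

The larger lam is, the more the model is pushed toward small weights and a smoother decision boundary.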

Continue reading MNIST – Regularized Logistic Regression

Multi-Class classification with Logistic Regression

Hi,

Until now our algorithm could only perform binary classification; in other words, it could only separate one class from everything else. I was wondering whether it would be nice to improve our algorithm into a multi-class classifier and use it to classify images.
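As a preview, a common way to do this is one-vs-all: train one binary classifier per class and pick the most confident one. A rough NumPy sketch, assuming a hypothetical train_binary(X, y) helper that returns fitted logistic regression weights:

import numpy as np

def one_vs_all(X, y, num_classes, train_binary):
    # One binary problem per class: "is this example class k, or anything else?"
    # train_binary is a stand-in for any routine that fits binary logistic regression weights.
    return np.vstack([train_binary(X, (y == k).astype(float)) for k in range(num_classes)])

def predict(X, thetas):
    # The sigmoid is monotonic, so ranking raw scores ranks probabilities too.
    return np.argmax(X @ thetas.T, axis=1)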

Continue reading Multi-Class classification with Logistic Regression

Logistic Regression – Hands on

Hello!

Today we’ll get our hands dirty and test the logistic regression algorithm. For this post, we are going to use the well-known iris flower data set. This dataset has three classes of flowers, which can be classified according to their sepal width/length and petal width/length. From the dataset source, “One class is linearly separable from the other 2 […]”, which makes this dataset handy for our purposes of binary classification.
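To give a taste of the setup, here is a sketch (scikit-learn is used only to load the data; the learning rate and iteration count are arbitrary choices of mine). Class 0, setosa, is the linearly separable one, so we label it 1 and the other two classes 0:

import numpy as np
from sklearn.datasets import load_iris

iris = load_iris()
X = iris.data                          # sepal length/width, petal length/width
y = (iris.target == 0).astype(float)   # 1 for setosa, 0 for the rest

X = np.hstack([np.ones((len(X), 1)), X])  # prepend the intercept column
theta = np.zeros(X.shape[1])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

alpha = 0.1
for _ in range(5000):
    theta -= alpha * X.T @ (sigmoid(X @ theta) - y) / len(y)

print(np.mean((sigmoid(X @ theta) > 0.5) == y))  # training accuracy

Since setosa is linearly separable from the other two classes, plain batch gradient descent should reach 100% training accuracy here.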

Continue reading Logistic Regression – Hands on

Logistic Regression

Hi folks,
Yeah, things are getting more interesting, huh? In the last posts we covered linear regression, where we fit a straight line that represents a set of points as well as possible. This is a simple and powerful way to predict values. But sometimes, instead of predicting a value, we want to classify it.

Why would we do that? I know it’s easy to understand, but for those who didn’t catch it, why is this interesting?
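As a preview of where this goes: logistic regression keeps the linear combination of the inputs but squashes it through the logistic (sigmoid) function, so the output always lands in (0, 1) and can be read as a probability. A minimal sketch:

import numpy as np

def sigmoid(z):
    # Maps any real number into the interval (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def predict_class(theta, x, threshold=0.5):
    # Classify by thresholding a probability instead of predicting a raw value.
    return int(sigmoid(theta @ x) >= threshold)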

Continue reading Logistic Regression

Linear Regression with Normal Equation

Hello,

In this post we’ll show a different way to solve the error minimization problem in linear regression. Instead of using gradient descent, we can solve it as a linear system using matrices.
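Concretely, the closed-form solution is the normal equation, theta = (XᵀX)⁻¹Xᵀy. A minimal NumPy sketch (np.linalg.solve is used instead of an explicit inverse, which is the numerically safer habit):

import numpy as np

def normal_equation(X, y):
    # Solve (X^T X) theta = X^T y, i.e. theta = (X^T X)^(-1) X^T y.
    return np.linalg.solve(X.T @ X, X.T @ y)

# Tiny usage example: recover y = 1 + 2x exactly.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # first column is the intercept
y = np.array([1.0, 3.0, 5.0])
print(normal_equation(X, y))  # [1. 2.]

No learning rate and no iterations, at the cost of a matrix solve that gets expensive when the number of features is large.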

Continue reading Linear Regression with Normal Equation

Gradient descent tricks

Hi people,

The last two posts were about linear regression. I explained a little of the theory and left an example to test the algorithm, which works but could be improved. How can we do that?
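One classic improvement (my assumption about what the post covers, since feature scaling is the usual first gradient descent trick) is mean normalization: rescale each feature so gradient descent can take a larger learning rate without diverging.

import numpy as np

def mean_normalize(X):
    # Center each feature at 0 and scale it by its standard deviation,
    # so all features end up on a comparable range.
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma, mu, sigma  # keep mu/sigma to scale future inputs the same way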

Continue reading Gradient descent tricks

Multiple Linear Regression

Hi folks,

In the last post we talked about simple linear regression, where we calculated the “trend line” of a set of points for a single variable. But what if we have more than one variable? How can we solve it?
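The short answer is that the hypothesis simply grows one term per variable, h(x) = θ₀ + θ₁x₁ + … + θₙxₙ, which vectorizes neatly. A minimal NumPy sketch (names are mine):

import numpy as np

def predict(X, theta):
    # h(x) = theta_0 + theta_1*x_1 + ... + theta_n*x_n for every row of X,
    # where X carries a leading column of ones for the intercept theta_0.
    return X @ theta

def gradient_step(X, y, theta, alpha):
    # One batch gradient descent update; same form as the single-variable case.
    return theta - alpha * X.T @ (X @ theta - y) / len(y)

Continue reading Multiple Linear Regression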

Simple Linear Regression

Hi there,

After almost two years of personal hard work, I’m back to share a little about linear regression and gradient descent.

What is linear regression?
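In short: linear regression fits a straight line h(x) = θ₀ + θ₁x to data by minimizing the squared prediction error, typically with gradient descent. A toy sketch (the data and learning rate are made up for illustration):

import numpy as np

# Points scattered around the line y = 1 + 2x.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])

theta0, theta1 = 0.0, 0.0
alpha = 0.1
for _ in range(2000):
    error = theta0 + theta1 * x - y         # prediction minus target
    theta0 -= alpha * error.mean()          # gradient of the squared error w.r.t. theta0
    theta1 -= alpha * (error * x).mean()    # ... and w.r.t. theta1

print(theta0, theta1)  # close to the true intercept 1 and slope 2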

Continue reading Simple Linear Regression

Bayes’ theorem

Hello there!

More than six months since the last post, and for a good reason: we had our third kid, and now time is more than ever a scarce resource. So let’s stop talking and get straight to the point!

Today I’ll explain a little about Bayes’ theorem and why it’s important to know it. Bayes’ theorem is named after Thomas Bayes, an English statistician. The theorem solves the problem of inverse conditional probability.
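In symbols, for two events A and B with P(B) > 0, the theorem states:

P(A|B) = P(B|A) · P(A) / P(B)

In words, it lets us “invert” a conditional probability: we compute P(A|B) from P(B|A), which is often the quantity we actually know. I’ll give you an example for a better understanding: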

Continue reading Bayes’ theorem