The Shape of Data

In the last post, I introduced the Support Vector Machine (SVM) algorithm, which attempts to find a line/plane/hyperplane that separates the two classes of points in a given data set. This algorithm adapts a key element of linear regression, a statistical tool (namely, approximating the data set with a relatively simple model), to the classification problem, which has its roots in computer science. As we saw, one of the problems with SVM is that its output ultimately depends on a relatively small number of data points, namely the support vectors. In this post, I’ll discuss logistic regression, an algorithm that attempts to fix this problem by adapting regression more directly to classification.
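To see what that dependence looks like in practice, here is a minimal sketch (mine, not the original post's) that fits a linear SVM to a toy two-class data set with scikit-learn and counts the support vectors. The data, parameters, and variable names here are illustrative assumptions, not anything from the post.

```python
# Minimal sketch: a linear SVM's boundary is pinned down by a few support vectors.
# Toy data and parameter choices are illustrative, not from the original post.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Two Gaussian blobs, one per class.
class_a = rng.normal(loc=[-2.0, 0.0], scale=1.0, size=(50, 2))
class_b = rng.normal(loc=[2.0, 0.0], scale=1.0, size=(50, 2))
X = np.vstack([class_a, class_b])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# Only the support vectors determine the separating line; the remaining points
# could be moved (as long as they stay outside the margin) without changing it.
print("support vectors:", len(clf.support_vectors_), "of", len(X), "points")
print("line coefficients:", clf.coef_[0], "intercept:", clf.intercept_[0])
```

Typically only a handful of points near the boundary end up as support vectors, which is exactly the sensitivity to a small subset of the data that logistic regression tries to avoid.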

Recall that in linear regression, we used lines in the plane (or planes/hyperplanes in higher dimensions) to describe different probability distributions. We then chose the line such that the corresponding distribution assigned the maximal probability to…
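To make that maximum-likelihood reading concrete, here is a sketch in my own notation (not the original post's): treat the line y = ax + b as the mean of a Gaussian distribution at each x, and choose the line whose distribution assigns the highest probability to the observed data.

```latex
% Sketch of the maximum-likelihood view of linear regression (my notation).
% Each candidate line y = a x + b defines a Gaussian over the observed y_i:
\[
  p(y_i \mid x_i; a, b) \;=\;
  \frac{1}{\sqrt{2\pi\sigma^2}}
  \exp\!\left(-\frac{(y_i - a x_i - b)^2}{2\sigma^2}\right)
\]
% Choosing the line that assigns maximal probability to the data means
% maximizing the product of these terms, which (for fixed sigma) is the
% same as minimizing the sum of squared errors -- the least-squares fit:
\[
  (\hat{a}, \hat{b}) \;=\;
  \arg\max_{a,b} \prod_{i=1}^{n} p(y_i \mid x_i; a, b)
  \;=\;
  \arg\min_{a,b} \sum_{i=1}^{n} (y_i - a x_i - b)^2
\]
```

Logistic regression follows the same recipe but replaces the Gaussian with a distribution over the two class labels, which is what ties the regression directly to classification.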

