Udacity – Intro to Artificial Intelligence – Unsupervised Learning – Expectation Maximization Clustering

Expectation Maximization (EM) clustering is somewhat similar to K-means, with this core difference:

In the correspondence step, where the estimated centerpoint locations are revised:

  • K-means uses “hard correspondence” – each data point is assigned to exactly one cluster, so when the location of estimated centerpoint A is revised, only the data points assigned to cluster A are used. Data points from other clusters (e.g. cluster B) are ignored.
  • Expectation Maximization uses “soft correspondence” – each data point carries a probability (responsibility) for every cluster, so when the location of estimated centerpoint A is revised, all data points contribute, each weighted by its responsibility for cluster A – including points that mostly belong to other clusters (e.g. cluster B). See the sketch below the list.

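The following is a minimal sketch (not the course's code) that contrasts the two update rules on toy 1-D data. The data values, the two initial centers, and the fixed unit-variance Gaussians in the EM step are all illustrative assumptions, chosen only to show hard versus soft correspondence side by side.

```python
import numpy as np

x = np.array([1.0, 1.2, 0.8, 5.0, 5.3, 4.7])   # toy 1-D data points (assumed)
mu_hard = np.array([0.0, 6.0])                  # k-means centerpoint estimates
mu_soft = mu_hard.copy()                        # EM centerpoint estimates

for _ in range(10):
    # k-means: hard correspondence -- each point counts toward exactly one
    # center (its nearest), so center A is averaged only over cluster A.
    assign = np.argmin(np.abs(x[:, None] - mu_hard[None, :]), axis=1)
    mu_hard = np.array([x[assign == k].mean() for k in range(len(mu_hard))])

    # EM: soft correspondence -- each point gets a responsibility for every
    # center (unit-variance Gaussians assumed here), so every point
    # contributes to every center, weighted by that responsibility.
    dens = np.exp(-0.5 * (x[:, None] - mu_soft[None, :]) ** 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    mu_soft = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)

print("k-means centers:", mu_hard)   # means over hard-assigned points only
print("EM centers:     ", mu_soft)   # responsibility-weighted means over all points
```

On well-separated data like this the two sets of centers end up close together; the difference matters most for points that sit between clusters, which k-means forces into one cluster while EM lets them influence both.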
This video by Udacity summarizes this very nicely.