<p>This is a variation of k-means clustering where, instead of calculating the mean of each cluster to determine its centroid, we calculate the median.</p>
<p>This has the effect of minimizing the error over all the clusters with respect to the Manhattan norm, as opposed to the squared Euclidean norm that is minimized in k-means.</p>
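<p>Concretely, writing $S_1, \dots, S_k$ for the clusters and $m_i$ for the median of cluster $S_i$ (notation introduced here for illustration), the objective being minimized is</p>
<p>$$\sum_{i=1}^{k} \sum_{x \in S_i} \lVert x - m_i \rVert_1,$$</p>
<p>whereas k-means minimizes $\sum_{i=1}^{k} \sum_{x \in S_i} \lVert x - \mu_i \rVert_2^2$, with $\mu_i$ the mean of cluster $S_i$.</p>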
<h3>Algorithm</h3>
<p>Given an initial set of $k$ medians, the algorithm proceeds by alternating between two steps.</p>
<p><strong>Assignment step</strong>: Assign each observation to the cluster whose median has the least Manhattan distance to it.</p>
<ul>
<li>Intuitively, this is finding the nearest median; a code sketch follows this list.</li>
</ul>
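<p>A minimal sketch of the assignment step in Python, assuming the observations and medians are stored as NumPy arrays (the function name <code>assign_to_medians</code> is introduced here for illustration):</p>
<pre><code>import numpy as np

def assign_to_medians(X, medians):
    """Assign each observation to its nearest median under Manhattan distance.

    X:       (n_samples, n_features) array of observations
    medians: (k, n_features) array of current cluster medians
    Returns an (n_samples,) array of cluster indices.
    """
    # Pairwise Manhattan distances, shape (n_samples, k)
    distances = np.abs(X[:, None, :] - medians[None, :, :]).sum(axis=2)
    return distances.argmin(axis=1)
</code></pre>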
<p><strong>Update step</strong>: Calculate the new medians by taking the component-wise median of the observations assigned to each cluster.</p>
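<p>The update step, continuing the same sketch (handling of empty clusters is omitted for brevity):</p>
<pre><code>def update_medians(X, labels, k):
    """Recompute each cluster's median as the component-wise median
    of the observations currently assigned to it."""
    return np.array([np.median(X[labels == j], axis=0) for j in range(k)])
</code></pre>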
<p>The algorithm is known to have converged when the assignments no longer change. There is no guarantee that it finds the global optimum.</p>
<p>The result also depends on the initial clusters, so it is common to run the algorithm multiple times with different starting conditions.</p>
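<p>Putting the two steps together, here is a sketch of the full loop with random restarts; initializing the medians by sampling $k$ observations and keeping the run with the lowest total Manhattan cost are assumptions for illustration, not prescribed above:</p>
<pre><code>def k_medians(X, k, n_restarts=10, max_iter=100, seed=0):
    """Run k-medians from several random initializations and keep
    the clustering with the lowest total Manhattan cost."""
    rng = np.random.default_rng(seed)
    best_cost, best_labels, best_medians = np.inf, None, None

    for _ in range(n_restarts):
        # Initialize the medians by sampling k distinct observations.
        medians = X[rng.choice(len(X), size=k, replace=False)]
        labels = assign_to_medians(X, medians)

        for _ in range(max_iter):
            medians = update_medians(X, labels, k)
            new_labels = assign_to_medians(X, medians)
            if np.array_equal(new_labels, labels):  # assignments unchanged: converged
                break
            labels = new_labels

        cost = np.abs(X - medians[labels]).sum()  # total Manhattan error
        if cost < best_cost:
            best_cost, best_labels, best_medians = cost, labels, medians

    return best_labels, best_medians
</code></pre>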