<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta name="author" content="Fredrik Danielsson, http://lostkeys.se">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="robots" content="noindex" />
<title>Brandon Rozek</title>
<link rel="stylesheet" href="themes/bitsandpieces/styles/main.css" type="text/css" />
<link rel="stylesheet" href="themes/bitsandpieces/styles/highlightjs-github.css" type="text/css" />
</head>
<body>
<aside class="main-nav">
<nav>
<ul>
<li class="menuitem"><a href="index.html?index.html" data-shortcut="">Home</a></li>
<li class="menuitem"><a href="index.html?courses.html" data-shortcut="">Courses</a></li>
<li class="menuitem"><a href="index.html?labaide.html" data-shortcut="">Lab Aide</a></li>
<li class="menuitem"><a href="index.html?presentations.html" data-shortcut="">Presentations</a></li>
<li class="menuitem"><a href="index.html?research.html" data-shortcut="">Research</a></li>
<li class="menuitem"><a href="index.html?transcript.html" data-shortcut="">Transcript</a></li>
</ul>
</nav>
</aside>
<main class="main-content">
<article class="article">
<h1>Centroid-based Clustering</h1>
<p>In centroid-based clustering, clusters are represented by a central vector, which may or may not be a member of the dataset. In practice, the number of clusters is fixed to $k$ and the goal is to solve an optimization problem: find the $k$ cluster centers and assign each observation to the nearest one so that the distances from observations to their centers are minimized.</p>
<p>The similarity of two clusters is defined as the similarity of their centroids.</p>
<p>This optimization problem is computationally difficult (NP-hard), so efficient heuristic algorithms are commonly employed instead. These usually converge quickly to a local optimum.</p>
<h2>K-means clustering</h2>
<p>K-means clustering aims to partition $n$ observations into $k$ clusters in which each observation belongs to the cluster with the nearest mean, the mean serving as the centroid of the cluster.</p>
<p>This technique partitions the data space into Voronoi cells.</p>
<h3>Description</h3>
<p>Given a set of observations $(x_1, x_2, \ldots, x_n)$, k-means clustering aims to partition the $n$ observations into $k$ sets $S = \{S_1, S_2, \ldots, S_k\}$ so as to minimize the within-cluster sum of squares (i.e. variance). More formally, the objective is to find
$$
\underset{S}{\operatorname{argmin}} \sum_{i = 1}^{k} \sum_{x \in S_i} \|x - \mu_i\|^2 = \underset{S}{\operatorname{argmin}} \sum_{i = 1}^{k} |S_i| \operatorname{Var}(S_i)
$$
where $\mu_i$ is the mean of the points in $S_i$. This is equivalent to minimizing the pairwise squared deviations of points in the same cluster:
$$
\underset{S}{\operatorname{argmin}} \sum_{i = 1}^{k} \frac{1}{2|S_i|} \sum_{x, y \in S_i} \|x - y\|^2
$$</p>
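<p>To make the objective concrete, the following is a minimal sketch (assuming NumPy; the helper name <code>within_cluster_ss</code> is illustrative, not from any library) that evaluates the within-cluster sum of squares for a given partition:</p>
<pre><code class="python">import numpy as np

def within_cluster_ss(clusters):
    """clusters: a list of (n_i, d) arrays, one array per set S_i."""
    total = 0.0
    for S_i in clusters:
        mu_i = S_i.mean(axis=0)             # centroid of S_i
        total += ((S_i - mu_i) ** 2).sum()  # sum of ||x - mu_i||^2 over S_i
    return total
</code></pre>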
<h3>Algorithm</h3>
<p>Given an initial set of $k$ means, the algorithm proceeds by alternating between two steps.</p>
<p><strong>Assignment step</strong>: Assign each observation to the cluster whose mean has the least squared Euclidean distance.</p>
<ul>
<li>Intuitively, this is finding the nearest mean</li>
<li>Mathematically, this means partitioning the observations according to the Voronoi diagram generated by the means</li>
</ul>
<p><strong>Update step</strong>: Calculate the new means to be the centroids of the observations in the new clusters.</p>
<p>The algorithm has converged when the assignments no longer change. There is no guarantee that it finds the global optimum.</p>
<p>The result depends on the initial clusters, so it is common to run the algorithm multiple times with different starting conditions.</p>
<p>Using a distance function other than the squared Euclidean distance may prevent the algorithm from converging.</p>
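<p>The assignment and update steps above translate directly into code. Below is a minimal sketch of the algorithm (assuming NumPy; the function and argument names are illustrative, not from any library), with a fixed iteration cap as a safeguard:</p>
<pre><code class="python">import numpy as np

def k_means(X, initial_means, max_iter=100):
    """X: (n, d) data matrix; initial_means: (k, d) starting means."""
    means = initial_means.copy()
    for _ in range(max_iter):
        # Assignment step: squared Euclidean distance from every
        # observation to every mean, then pick the nearest mean.
        dists = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each mean becomes the centroid of its cluster
        # (this sketch assumes no cluster ends up empty).
        new_means = np.array([X[labels == i].mean(axis=0)
                              for i in range(len(means))])
        # Converged once the means (equivalently, the assignments)
        # stop changing.
        if np.allclose(new_means, means):
            break
        means = new_means
    return labels, means
</code></pre>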
<h3>Initialization methods</h3>
<p>Commonly used initialization methods are Forgy and Random Partition.</p>
<p><strong>Forgy Method</strong>: This method randomly chooses $k$ observations from the dataset and uses these as the initial means.</p>
<p>The Forgy method tends to spread the initial means out.</p>
<p><strong>Random Partition Method</strong>: This method first randomly assigns a cluster to each observation and then proceeds to the update step.</p>
<p>The Random Partition method tends to place most of the initial means close to the center of the dataset.</p>
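<p>As a sketch (again assuming NumPy; the function names are illustrative), the two initialization methods might look like this:</p>
<pre><code class="python">import numpy as np

def forgy_init(X, k, rng):
    # Forgy: choose k observations from the dataset at random
    # and use them as the initial means.
    idx = rng.choice(len(X), size=k, replace=False)
    return X[idx]

def random_partition_init(X, k, rng):
    # Random Partition: randomly assign each observation to a cluster,
    # then run the update step once to obtain the initial means
    # (assumes every cluster receives at least one observation).
    labels = rng.integers(k, size=len(X))
    return np.array([X[labels == i].mean(axis=0) for i in range(k)])
</code></pre>
<p>Either result can be passed as <code>initial_means</code> to the sketch above, e.g. <code>k_means(X, forgy_init(X, 3, np.random.default_rng(0)))</code>; running several such restarts and keeping the partition with the lowest within-cluster sum of squares is the usual way to handle the dependence on starting conditions.</p>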
</article>
</main>
<script src="themes/bitsandpieces/scripts/highlight.js"></script>
<script src="themes/bitsandpieces/scripts/mousetrap.min.js"></script>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
  tex2jax: {
    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
    processEscapes: true
  }
});
</script>
<script type="text/javascript"
  src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
<script>
hljs.initHighlightingOnLoad();

document.querySelectorAll('.menuitem a').forEach(function(el) {
  if (el.getAttribute('data-shortcut').length > 0) {
    Mousetrap.bind(el.getAttribute('data-shortcut'), function() {
      location.assign(el.getAttribute('href'));
    });
  }
});
</script>
</body>
</html>