Lesson 1

K-Means Clustering

Learn how K-Means Clustering groups similar data points into meaningful clusters. This guide covers the algorithm’s workflow, distance metrics, choosing optimal K, and practical applications in customer segmentation and pattern discovery.

10 min read

Introduction to K-Means Clustering

K-Means clustering is a fundamental unsupervised machine learning algorithm that automatically groups similar data points together without requiring labeled training data. Unlike supervised learning, where you train models on labeled examples, K-Means discovers hidden patterns and natural groupings within your dataset.

The algorithm gets its name from its goal: partitioning data into K clusters, where each cluster is represented by its mean (centroid). K-Means clustering is widely used across industries for customer segmentation, image compression, document clustering, and anomaly detection.
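Formally, K-Means seeks the assignment of points to clusters that minimizes the within-cluster sum of squared distances (this is the quantity scikit-learn reports as inertia):

```latex
J = \sum_{j=1}^{K} \sum_{x_i \in C_j} \lVert x_i - \mu_j \rVert^2
```

where \(C_j\) is the set of points in cluster \(j\) and \(\mu_j\) is that cluster's centroid (mean).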


How K-Means Clustering Works

The K-Means algorithm follows an iterative process to find the optimal cluster assignments. Understanding this process is essential for applying the algorithm effectively.

The K-Means Algorithm Steps

  1. Initialize centroids: Randomly select K data points as initial cluster centers
  2. Assign points to clusters: Calculate the distance from each data point to all centroids and assign each point to the nearest centroid
  3. Update centroids: Recalculate the centroid of each cluster as the mean of all points assigned to it
  4. Repeat: Continue steps 2 and 3 until centroids no longer change significantly or a maximum number of iterations is reached
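The four steps above can be sketched directly in NumPy. This is a minimal illustration for learning purposes, not a replacement for scikit-learn's optimized implementation (the function name and variable names here are our own, and the sketch ignores edge cases like empty clusters):

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: pick K random data points as the initial centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Step 2: assign each point to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: recompute each centroid as the mean of its assigned points
        # (assumes no cluster ends up empty)
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Step 4: stop once the centroids no longer move
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids
```

On well-separated data this converges in a handful of iterations; in practice you would also run it several times from different random starts, which is exactly what scikit-learn's `n_init` parameter does.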

Visual Understanding

Imagine you have a room full of people, and you want to divide them into three groups based on their height and weight. K-Means would:

  • Start by randomly picking three people as group representatives
  • Assign everyone to the representative they're most similar to
  • Find the "average person" in each group (new representative)
  • Reassign people based on the new representatives
  • Repeat until groups stabilize

Real-World Applications of K-Means

K-Means clustering solves numerous practical problems across various domains:

  • Customer Segmentation: Retailers group customers by purchasing behavior to create targeted marketing campaigns
  • Image Compression: Reducing the number of colors in an image by clustering similar pixel values
  • Document Clustering: Organizing news articles or research papers into topic-based groups
  • Geographic Analysis: Identifying optimal locations for stores or service centers
  • Fraud Detection: Grouping transactions to identify unusual patterns
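To make the image-compression use case concrete: each pixel's RGB value is treated as a point in 3D color space, the colors are clustered, and every pixel is replaced by its cluster's centroid color. Here is a sketch on synthetic pixel data (a real workflow would load pixels from an actual image with an imaging library):

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic "image": 100x100 pixels with random RGB values in [0, 255]
rng = np.random.default_rng(42)
pixels = rng.integers(0, 256, size=(100, 100, 3)).reshape(-1, 3)

# Cluster the pixel colors into a small palette of 8 colors
km = KMeans(n_clusters=8, random_state=42, n_init=10).fit(pixels)

# Replace each pixel with its centroid color; only 8 distinct colors remain
quantized = km.cluster_centers_[km.labels_].astype(np.uint8)
print(f"Distinct colors before: {len(np.unique(pixels, axis=0))}")
print(f"Distinct colors after:  {len(np.unique(quantized, axis=0))}")
```

Storing a palette of 8 colors plus one small index per pixel takes far less space than a full 24-bit color per pixel, which is the essence of the compression.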

Implementing K-Means in Python

Let's implement K-Means clustering using scikit-learn with a practical example.

Setting Up the Environment

First, import the necessary libraries for our clustering task:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

This code imports NumPy for numerical operations, Matplotlib for visualization, and scikit-learn's KMeans implementation along with a function to generate sample data.

Creating Sample Data

# Generate sample data with 3 natural clusters
X, y_true = make_blobs(n_samples=300, centers=3, 
                        cluster_std=0.60, random_state=42)

print(f"Dataset shape: {X.shape}")

The make_blobs function creates 300 data points distributed around 3 cluster centers. This simulates real-world data where natural groupings exist.

Training the K-Means Model

# Create and fit the K-Means model
kmeans = KMeans(n_clusters=3, random_state=42, n_init=10)
kmeans.fit(X)

# Get cluster assignments and centroids
labels = kmeans.labels_
centroids = kmeans.cluster_centers_

print(f"Cluster labels: {np.unique(labels)}")
print(f"Centroid positions:\n{centroids}")

This code creates a K-Means model with 3 clusters, fits it to our data, and extracts the cluster assignments for each point along with the final centroid positions.

Making Predictions on New Data

# Predict cluster for new data points
new_points = np.array([[0, 0], [4, 4], [-3, 2]])
predictions = kmeans.predict(new_points)

print(f"New points belong to clusters: {predictions}")

Once trained, the K-Means model can assign new data points to the most appropriate cluster based on their proximity to cluster centroids.


Choosing the Optimal Number of Clusters

One of the most critical decisions in K-Means clustering is selecting the right value for K. Two popular methods help determine the optimal number of clusters.

The Elbow Method

The Elbow Method plots the within-cluster sum of squares (inertia) against different values of K:

inertias = []
k_range = range(1, 11)

for k in k_range:
    kmeans = KMeans(n_clusters=k, random_state=42, n_init=10)
    kmeans.fit(X)
    inertias.append(kmeans.inertia_)

# The "elbow" point suggests optimal K
print("Inertia values:", inertias[:5])

The optimal K is typically found at the "elbow" point where adding more clusters provides diminishing returns in reducing inertia.
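One way to read the elbow without eyeballing a plot is to look at the relative inertia reduction from each additional cluster; the elbow is where this drop falls off sharply. This is a heuristic sketch, not a formal criterion:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Same synthetic data as in the earlier examples
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.60, random_state=42)

inertias = []
for k in range(1, 11):
    inertias.append(KMeans(n_clusters=k, random_state=42, n_init=10).fit(X).inertia_)

# Relative improvement from adding one more cluster; the sharp falloff
# after K=3 marks the elbow for this dataset
drops = [(inertias[i] - inertias[i + 1]) / inertias[i]
         for i in range(len(inertias) - 1)]
for k, d in zip(range(2, 11), drops):
    print(f"K={k}: inertia reduced by {d:.1%}")
```

For data with three well-separated blobs, moving from K=2 to K=3 cuts inertia dramatically, while K=4 and beyond yield only marginal gains.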

Silhouette Score

The Silhouette Score measures how similar points are to their own cluster compared to other clusters:

from sklearn.metrics import silhouette_score

for k in range(2, 7):
    kmeans = KMeans(n_clusters=k, random_state=42, n_init=10)
    labels = kmeans.fit_predict(X)
    score = silhouette_score(X, labels)
    print(f"K={k}, Silhouette Score: {score:.3f}")

Silhouette scores range from -1 to 1, where higher values indicate better-defined clusters. This metric helps validate your choice of K.


Limitations and Best Practices

Limitations of K-Means

  • Requires specifying K: You must decide the number of clusters beforehand
  • Sensitive to initialization: Random starting points can lead to different results
  • Assumes spherical clusters: Struggles with elongated or irregular cluster shapes
  • Sensitive to outliers: Extreme values can significantly affect centroid positions

Best Practices

  • Scale your features: Always standardize data before applying K-Means
  • Run multiple initializations: Use n_init parameter to run the algorithm multiple times
  • Validate results: Use both the Elbow Method and Silhouette Score
  • Consider alternatives: For non-spherical clusters, explore DBSCAN or hierarchical clustering

Scaling features is straightforward with scikit-learn's StandardScaler:

from sklearn.preprocessing import StandardScaler

# Always scale features before clustering
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

Feature scaling ensures that all variables contribute equally to the distance calculations, preventing features with larger ranges from dominating the clustering process.
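To see why this matters, consider two features on very different scales, such as income in dollars and age in years. The numbers below are synthetic, purely for illustration:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Columns: income (dollars), age (years)
data = np.array([[30_000, 25],
                 [32_000, 60],
                 [90_000, 27],
                 [91_000, 58]], dtype=float)

# Unscaled: the income column dominates every distance calculation,
# even though these two people differ far more meaningfully in age
d_unscaled = np.linalg.norm(data[0] - data[1])

scaled = StandardScaler().fit_transform(data)
d_scaled = np.linalg.norm(scaled[0] - scaled[1])

print(f"Unscaled distance: {d_unscaled:.1f}")  # ~2000, driven almost entirely by income
print(f"Scaled distance:   {d_scaled:.2f}")    # the 35-year age gap now contributes
```

Without scaling, K-Means would group these people almost entirely by income; after standardization, both features contribute comparably to cluster assignments.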


Key Takeaways

  • K-Means clustering is an unsupervised algorithm that groups similar data points into K clusters
  • The algorithm iteratively assigns points to clusters and updates centroids until convergence
  • Choosing the right K value is crucial—use the Elbow Method and Silhouette Score
  • Always scale your features and run multiple initializations for reliable results
  • K-Means works best with spherical, well-separated clusters of similar sizes