A Min Heap is a complete binary tree where every parent node has a value less than or equal to its children. The smallest element always resides at the root, providing O(1) access to the minimum. Heaps were introduced by J.W.J. Williams in 1964 as part of the Heap Sort algorithm, and Robert W. Floyd subsequently improved the heap construction algorithm to run in O(n) time. Today, heaps are one of the most widely used data structures in computer science, serving as the standard implementation of priority queues.
Insertion (bubble up / sift up): the new element is appended at the next open leaf position (the end of the array), then repeatedly swapped with its parent as long as it is smaller than that parent. The element rises at most one level per swap, so insertion costs O(log n).
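The insertion step above can be sketched as follows. This is a minimal, self-contained sketch; the function names `sift_up` and `heap_insert` are illustrative, not from any particular library.

```python
def sift_up(heap, i):
    """Bubble the element at index i up while it is smaller than its parent."""
    while i > 0:
        parent = (i - 1) // 2
        if heap[i] < heap[parent]:
            heap[i], heap[parent] = heap[parent], heap[i]
            i = parent
        else:
            break  # heap property restored

def heap_insert(heap, value):
    """Append at the next open leaf position, then bubble up."""
    heap.append(value)
    sift_up(heap, len(heap) - 1)
```

After each insertion the smallest element inserted so far sits at index 0.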
Extraction (bubble down / sift down): the root (the minimum) is removed and replaced by the last element in the array, which is then repeatedly swapped with its smaller child until it is no larger than either child. Like insertion, this takes at most O(log n) swaps, one per level.
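The extraction step can be sketched like this (again a self-contained illustration; the names `sift_down` and `extract_min` are assumptions):

```python
def sift_down(heap, i):
    """Push the element at index i down until it is <= both children."""
    size = len(heap)
    while True:
        smallest = i
        left, right = 2 * i + 1, 2 * i + 2
        if left < size and heap[left] < heap[smallest]:
            smallest = left
        if right < size and heap[right] < heap[smallest]:
            smallest = right
        if smallest == i:
            break  # heap property restored
        heap[i], heap[smallest] = heap[smallest], heap[i]
        i = smallest

def extract_min(heap):
    """Remove and return the root, refilling the hole with the last element."""
    root = heap[0]
    last = heap.pop()
    if heap:
        heap[0] = last
        sift_down(heap, 0)
    return root
```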
Building a heap from an array: Rather than inserting elements one at a time (which takes O(n log n)), Floyd's algorithm builds a heap in O(n) time by calling sift-down on each non-leaf node, starting from the last non-leaf and working backward to the root. This is more efficient because most nodes are near the bottom and require very few swaps.
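Floyd's bottom-up construction can be sketched as below, assuming a zero-indexed array where the last non-leaf sits at index n // 2 - 1 (a sift-down helper is inlined here to keep the sketch self-contained):

```python
def _sift_down(arr, i, size):
    """Push arr[i] down within arr[:size] until it is <= both children."""
    while True:
        smallest = i
        left, right = 2 * i + 1, 2 * i + 2
        if left < size and arr[left] < arr[smallest]:
            smallest = left
        if right < size and arr[right] < arr[smallest]:
            smallest = right
        if smallest == i:
            return
        arr[i], arr[smallest] = arr[smallest], arr[i]
        i = smallest

def build_heap(arr):
    """Floyd's O(n) heap construction: sift down each non-leaf,
    from the last non-leaf back to the root."""
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):
        _sift_down(arr, i, n)
```

Leaves (the second half of the array) are already valid one-element heaps, which is why the loop starts at n // 2 - 1.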
A key insight is that a complete binary tree maps perfectly to an array with no wasted space. For a node at index i (zero-indexed), its left child is at 2i + 1, its right child is at 2i + 2, and its parent is at floor((i - 1) / 2). This eliminates the need for pointers, making heap operations cache-friendly and memory-efficient. The array representation is one reason heaps are so fast in practice.
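The index arithmetic above is compact enough to show directly (the helper names are illustrative):

```python
def left(i):
    """Index of the left child of node i (zero-indexed)."""
    return 2 * i + 1

def right(i):
    """Index of the right child of node i."""
    return 2 * i + 2

def parent(i):
    """Index of the parent of node i."""
    return (i - 1) // 2

# The array [1, 3, 2, 5, 4] encodes this tree with no pointers:
#
#            1          index 0
#          /   \
#         3     2       indices 1, 2
#        / \
#       5   4           indices 3, 4
```

Note that parent(left(i)) == i and parent(right(i)) == i for every i, so the two directions of traversal are consistent.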
| Operation | Time |
|---|---|
| Insert | O(log n) |
| Extract min | O(log n) |
| Peek min | O(1) |
| Build heap | O(n) |
| Search | O(n) |
The logarithmic operations arise because the tree height is always O(log n) due to the complete tree property. Searching for an arbitrary element requires O(n) because the heap property only constrains parent-child relationships, not sibling or cousin relationships.
A Max Heap is the mirror image: every parent is greater than or equal to its children, and the root holds the maximum. The algorithms are identical except comparisons are reversed. In many languages, the standard library provides a min heap (like Python's heapq module), and a max heap is simulated by negating values.
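With Python's heapq, the negation trick looks like this:

```python
import heapq

nums = [5, 1, 9, 3]

# Negate values so the min heap's smallest element is the original maximum.
max_heap = [-x for x in nums]
heapq.heapify(max_heap)

largest = -heapq.heappop(max_heap)   # 9
second = -heapq.heappop(max_heap)    # 5
```

The negation happens twice: once going into the heap and once coming out, so callers only ever see the original values.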
Min heaps power priority queues, which are ubiquitous in computer science. Dijkstra's shortest path algorithm uses a min heap to always expand the closest unvisited vertex. The A* search algorithm, widely used in game AI and robotics for pathfinding, relies on a min heap to prioritize nodes by their estimated total cost. Huffman coding, a foundational compression algorithm, repeatedly extracts the two smallest-frequency nodes from a min heap to build an optimal prefix code. Operating system schedulers use heaps to manage process priorities. Event-driven simulations use heaps to process events in chronological order. The "top K" problem -- finding the K largest elements from a stream -- is efficiently solved with a min heap of size K: each new element replaces the root if it is larger, maintaining a running set of the K largest values in O(n log K) time.
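The top-K technique described above can be sketched with heapq (the function name `top_k` is illustrative): the heap holds the K largest values seen so far, and its root, the smallest of those K, is the threshold a new element must beat.

```python
import heapq

def top_k(stream, k):
    """Return the k largest elements of an iterable, largest first,
    using a min heap of size k."""
    heap = []
    for x in stream:
        if len(heap) < k:
            heapq.heappush(heap, x)
        elif x > heap[0]:
            # New element beats the smallest of the current top k:
            # pop the root and push x in one O(log k) operation.
            heapq.heapreplace(heap, x)
    return sorted(heap, reverse=True)
```

For one-off use, heapq.nlargest(k, stream) does the same job; the explicit loop is worth writing when elements arrive incrementally and the running top-K must stay queryable.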
Use a min heap whenever you need fast access to the smallest element and efficient insertion. It is the ideal data structure for priority queues, scheduling systems, graph algorithms that process nodes by cost, and streaming problems that track extremes. If you need fast lookup of arbitrary elements, consider a balanced BST or hash table instead.