This blog discusses the design, implementation and time complexity analysis of bubble, selection and insertion sort algorithms. These are fundamental sorting algorithms for learning problem-solving using an incremental approach.

**Input:** An array X[] of n integers

**Output:** A permutation of input such that X[0] <= X[1] <= X[2] .... <= X[n-2] <= X[n-1]

Bubble sort is a simple and inefficient sorting algorithm. It is generally one of the basic algorithms taught in programming to develop intuition about the working of algorithms. Sometimes bubble sort is also referred to as "Sinking sort"!

If we look at elements in the sorted array: Largest element is at (n - 1)th index, 2nd largest is at (n - 2)th index, and so on. So one basic idea would be to traverse the array and place largest element at (n - 1)th index. Again, traverse the array and place second largest element at (n - 2)th index and...so on.

- After 1st iteration, largest element bubbles up towards (n - 1)th index. Similarly, after 2nd iteration, 2nd largest element bubbles up towards (n - 2)th index. And so on!
- In general, at the ith iteration, we place the **ith** maximum element at the **(n - i)th** index. This process goes on until the whole array becomes sorted.

We run two nested loops:

- At each stage of the outer loop, we place one input value to its correct position in the sorted output. So outer loop will run n times from i = 0 to n - 1.
- After the ith iteration of the outer loop, the i maximum elements occupy indices (n - i) to (n - 1). At the (i + 1)th iteration, we place the (i + 1)th maximum element at index (n - i - 1). So the inner loop needs to run (n - i - 1) times.

```
for (int i = 0; i < n; i = i + 1)
{
    for (int j = 0; j < n - i - 1; j = j + 1)
    {
        ........
    }
}
```

Now the critical question is: What should be the logic inside the inner loop? The idea is to start from the first index, comparing adjacent pairs of elements from j = 0 to n - i - 2 and swapping them if they are in the wrong order. In other words, at each step of the inner loop, if X[j] > X[j + 1], we swap X[j] and X[j + 1].

```
for (int j = 0; j < n - i - 1; j = j + 1)
{
    if (X[j] > X[j + 1])
    {
        temp = X[j]
        X[j] = X[j + 1]
        X[j + 1] = temp
    }
}
```

```
void bubbleSort(int X[], int n)
{
    for (int i = 0; i < n; i = i + 1)
    {
        for (int j = 0; j < n - i - 1; j = j + 1)
        {
            if (X[j] > X[j + 1])
            {
                int temp = X[j]
                X[j] = X[j + 1]
                X[j + 1] = temp
            }
        }
    }
}
```

**Critical question:** Does this algorithm still work correctly if we run the outer loop only n - 1 times, i.e. from i = 0 to n - 2? Think!

We run two nested loops where comparison and swapping are the key operations. Regardless of the input, the comparison operation executes every time. On the other hand, swapping depends on the order of the input, i.e., a swap happens only when the comparison X[j] > X[j + 1] is true. So comparison is the critical operation that decides the time complexity in the worst case.

At ith iteration of outer loop, inner loop will run (n - i - 1) times. So total count of nested loop iterations = Sum of (n - i - 1) from i = 0 to n - 1 = (n - 1) + (n - 2) + . . . + 2 + 1 = n(n - 1)/2 = O(n^2)

**Bubble sort worst case time complexity**

When input is sorted in decreasing order, comparison X[j] > X[j + 1] becomes true for each iteration of the inner loop, and swapping operation will execute every time. So, reverse sorted array will be the scenario of worst-case input in bubble sort.

- Total count of comparison operations = Total count of nested loop iterations = O(n^2)
- Total count of swapping operations = Total count of comparison operations = O(n^2)
- Time complexity of bubble sort in the worst case = O(n^2) + O(n^2) = O(n^2)

**Bubble sort best case time complexity**

When input is already sorted in increasing order, then comparison X[j] > X[j + 1] becomes false for each iteration of the inner loop, and swapping operation will not execute at all. So, sorted array is the scenario of best case input in bubble sort.

- Total count of comparison operations = Total count of nested loop iterations = O(n^2)
- Total count of swapping operations = 0
- Time complexity of bubble sort in the best case = O(n^2)

In both worst and best cases, bubble sort runs in O(n^2) time complexity. We use constant extra space, so space complexity of bubble sort = O(1).

Even if array is sorted, above algorithm always works in **O(n^2)** time. From another perspective: If array gets sorted in the middle of nested loop, there is no mechanism to stop the loop, and loop will continue to run unnecessarily. We can use this insight to optimize bubble sort further. The idea would be to stop the algorithm if inner loop didn't cause any swap! The critical question is: How do we do it? Let's think!

If no element gets swapped inside the inner loop, it means the array is sorted, and we should stop the bubble sort algorithm. We can detect such a scenario using a boolean variable.

- Before starting the inner loop, we declare a flag **elementSwapped** and set it to false.
- When any swap happens inside the inner loop, we set elementSwapped to true.
- By the end of the inner loop, if elementSwapped is still false, it means no swap happened inside the inner loop and the given array is already sorted. So we stop the sorting process by breaking the loop. Otherwise, if elementSwapped is true, we go to the next iteration of the outer loop.

```
void optimizedBubbleSort(int X[], int n)
{
    bool elementSwapped
    for (int i = 0; i < n; i = i + 1)
    {
        elementSwapped = false
        for (int j = 0; j < n - i - 1; j = j + 1)
        {
            if (X[j] > X[j + 1])
            {
                int temp = X[j]
                X[j] = X[j + 1]
                X[j + 1] = temp
                elementSwapped = true
            }
        }
        if (elementSwapped == false)
            break
    }
}
```

In the above implementation, if array is already sorted, bubble sort makes no swaps, and algorithm will terminate after a single pass. So, O(n) is the best-case running time for optimized version of bubble sort. But there is no improvement in worst-case time complexity which is still O(n^2) for reverse sorted array.

- What would be the average case time complexity of bubble sort?
- What is the idea of "stability" in the case of sorting algorithms? Is bubble sort a stable sorting algorithm?
- Can we modify above pseudo-code to sort elements in decreasing order?
- Can we think to implement bubble sort recursively?
- How can we sort linked lists and strings using bubble sort algorithm?

The idea of selection sort is simple: We traverse the whole array to find the smallest element. Once we find it, we swap the smallest element with the first element of the array. Then we again look for the smallest element in the remaining array (excluding the first element) and swap it with the second element. This process goes on until the whole array gets sorted.

We run two nested loops: Outer loop from **i = 0 to n - 2** and inner loop from **j = i to n - 1**. In other words, at each iteration of outer loop, we run inner loop to find index of smallest element in the unsorted subarray and swap it with value at index i.

So selection sort maintains two subarrays at the ith iteration of the outer loop:

- The sorted subarray X[0…i - 1].
- The unsorted subarray X[i…n - 1].

We build partially sorted array incrementally by finding min value from the unsorted part and adding it to the sorted part. At each step of iteration, sorted subarray size will grow by 1, and unsorted subarray size will decrease by 1.

```
void selectionSort(int X[], int n)
{
    for (int i = 0; i < n - 1; i = i + 1)
    {
        int minIndex = i
        for (int j = i; j < n; j = j + 1)
        {
            if (X[j] < X[minIndex])
                minIndex = j
        }
        swap(X[minIndex], X[i])
    }
}
```

We run two nested loops where comparison, swapping, and update of minIndex are key operations. Regardless of the input, comparison and swapping operations are going to execute every time, but update of minIndex depends on the order of input values. Think!

At the ith iteration of the outer loop, the inner loop runs (n - i) times. So total count of nested loop iterations = Sum of (n - i) from i = 0 to n - 2 = n + (n - 1) + (n - 2) + . . . + 2 = n(n + 1)/2 - 1 = O(n^2)

**Selection sort worst case time complexity**

When input is sorted in decreasing order, comparison X[j] < X[minIndex] becomes true every time, and value of minIndex will get updated every time. So reverse sorted array is a worst-case input for selection sort.

- Count of comparison operations = Total count of nested loop iterations = O(n^2)
- We perform one swap per iteration of the outer loop. Total count of swapping operations = (n - 1) * O(1) = O(n)
- Total update operation of minIndex = Total count of loop operations = O(n^2)
- Time complexity of selection sort in the worst case = O(n^2) + O(n) + O(n^2) = O(n^2)

**Selection sort best case time complexity**

When input is already sorted, comparison X[j] < X[minIndex] becomes false every time, and value of minIndex will not get updated. So sorted array is a best-case input for selection sort.

- Count of comparison operations = Count of nested loop iterations = O(n^2)
- Total count of swap operations = O(n)
- Total update operation of minIndex = O(1)
- Time complexity of selection sort in the best case = O(n^2) + O(n) + O(1) = O(n^2)

So in both worst and best cases, selection sort works in O(n^2) time complexity. We use constant extra space, so space complexity = O(1).

- What would be the average case time complexity of selection sort?
- Is selection sort a stable sorting algorithm?
- The above pseudo-code sorts the array in increasing order. How would you modify it to sort elements in decreasing order?
- How do we implement the selection sort recursively?
- How can we sort linked lists and strings using selection sort?

Insertion sort works the way we sort a hand of playing cards: We start with an empty left hand and all remaining cards on the table. Then we remove one card at a time from the table [unsorted array] and insert it into the correct position in the left hand [sorted array]. To find the correct place for the card, we compare it with each card already in hand, from right to left.

The core idea is: If the first few array elements are sorted, we can easily insert the new element into its correct position in the sorted part. We need to compare the new element with each element in the sorted part from right to left.

So we run two nested loops: Outer loop from i = 1 to n - 1 to pick element X[i] and inner loop from j = i - 1 to 0 to find the index to insert X[i] in the partially sorted array X[0…i - 1]. During this process, we move elements greater than X[i] to one position ahead of their current position. Note: We are starting from i = 1 because i = 0 is a case of a single element that is already sorted.

In general, insertion sort maintains two subarrays at the ith iteration of outer loop: sorted subarray X[0…i - 1] and unsorted subarray X[i…n - 1]. We incrementally build the partial solution by choosing first value X[i] from unsorted part and inserting it into the sorted part X[0…i - 1]. In other words, sorted subarray size at each outer loop iteration will increase by 1, and the unsorted subarray size will decrease by 1.

We run an outer loop from i = 1 to n - 1. At each step of outer loop:

- We store X[i] in a variable **currValue**.
- Now we run the inner loop from j = i - 1 to find the correct index to insert currValue in the partially sorted array X[0…i - 1]. For this, whenever X[j] > currValue, we shift X[j] one position right to index j + 1 and decrement j by 1. In simple words, the inner loop runs while j >= 0 && X[j] > currValue.

```
int currValue = X[i]
int j = i - 1
while (j >= 0 && X[j] > currValue)
{
    X[j + 1] = X[j]
    j = j - 1
}
```

- Whenever the inner loop exits (because X[j] <= currValue or j < 0), j + 1 is the correct position for currValue. So we insert currValue at index (j + 1), i.e., X[j + 1] = currValue.
- Now we move to the next iteration of the outer loop and continue a similar process to insert the remaining elements from the unsorted part into the partially sorted part. After completing the outer loop, X[] contains the sorted output.

```
void insertionSort(int X[], int n)
{
    for (int i = 1; i < n; i = i + 1)
    {
        int currValue = X[i]
        int j = i - 1
        while (j >= 0 && X[j] > currValue)
        {
            X[j + 1] = X[j]
            j = j - 1
        }
        X[j + 1] = currValue
    }
}
```


We are running two nested loops where comparison and shifting are critical operations. Regardless of the input, outer loop will run n - 1 times. But inner loop iteration count depends on the comparison X[j] > currValue. So critical question is: What would be the worst and best case scenario? Let's think!

**Insertion sort worst case time complexity**

This is the situation of a reverse sorted array, where the outer loop runs n - 1 times, and at the ith iteration of the outer loop, the inner loop runs i times. Why? Because the condition X[j] > currValue is true for every value from j = i - 1 to 0. In another way: To insert the ith value at its correct position, we shift all values from j = i - 1 to 0 one position right.

So at ith iteration of outer loop: Count of comparison operations = i and Count of shifting operations = i

- Total comparison operations = 1 + 2 + 3 + ….+ n - 2 + n - 1 = n(n - 1)/2 = O(n²)
- Total shifting operations = 1 + 2 + 3 + ….+ n - 2 + n - 1 = n(n - 1)/2 = O(n²)
- So worst-case time complexity of insertion sort = O(n²) + O(n²) = O(n²)

**Insertion sort best case time complexity**

This is the situation of a sorted array, where the outer loop runs n - 1 times, and the inner loop performs only one comparison per iteration. Why? Because every time we enter the inner loop, the condition X[j] > currValue is false and we exit immediately. In another way: The ith value is already at its correct position. So we need to perform only one comparison at each iteration of the outer loop.

- Total comparison operations = n - 1 = O(n)
- Total shifting operations = 0 = O(1)
- So best case time complexity of insertion sort = O(n) + O(1) = O(n)

- Due to the best-case scenario, insertion sort is one of the fastest sorting algorithms for the partially or almost sorted array, i.e., the time complexity is O(kn), where each element is no more than k places away from its sorted position.
- Insertion sort is one of the fastest sorting algorithms for small input sizes.
- Insertion sort is a stable sorting algorithm.
- We can optimize insertion sort further using binary search.
- Insertion sort is a more efficient sorting algorithm than selection and bubble sort.
- The average case time complexity of the insertion sort is closer to the worst-case time complexity, i.e. O(n²).
- The above pseudo-code sort the array in increasing order. How do we modify the above pseudo-code to sort the elements in decreasing order?
- How can we implement the Insertion sort recursively?
- How can we sort linked lists and strings using insertion sort?

- Concept of stability in sorting algorithms
- What is comparison-based sorting? Explore the design, code, and analysis of merge sort, quicksort, heap sort, shell sort, BST sort, etc.
- Comparison of various sorting algorithms
- Linear time sorting algorithms: Counting sort, Radix sort, Bucket sort
- Problem-solving using sorting
- Real-life applications of sorting

- Sort an array of 0s, 1s, and 2s
- Sort an array in a waveform
- Sort an array according to the order defined by another array
- Sort the even positioned elements in the ascending order and the odd positioned elements in the descending order.
- Count swaps required to sort an array using insertion sort
- Merge two sorted arrays, find the intersection of two sorted array

- Algorithms by CLRS
- The Algorithm Design Manual by Steven Skiena

**Enjoy learning, Enjoy algorithms!**

Subscribe to get weekly content on data structures and algorithms, machine learning, system design and OOP.