Why do we study the design, code, and analysis of sorting algorithms? One critical reason is that comparing sorting algorithms is one of the best ways to learn how to analyze and choose between algorithms in general. This blog will analyze and compare popular sorting algorithms based on parameters like time complexity, space complexity, in-place behaviour, stability, online vs. offline processing, and problem-solving approach.
In comparison-based sorting algorithms, we compare pairs of elements to determine the order of elements in the final sorted output. Any comparison-based sorting algorithm must perform Ω(n log n) comparisons in the worst case to sort an array of n elements. Why? Such an algorithm can be modelled as a decision tree with at least n! leaves, one for each possible ordering of the input, so its height (the worst-case number of comparisons) is at least log2(n!) = Θ(n log n).
As we have seen, the worst-case time complexities of the above sorting algorithms fall into two groups: O(n^2) and O(n log n).
In addition to comparisons, these sorting algorithms also perform other operations, such as swaps and assignments. But the count of these operations is at most proportional to the count of comparisons. That's why the comparison operation is the deciding factor in the time complexity.
There are sorting algorithms that run faster than O(n log n), but they require special assumptions about the input sequence to determine the sorted order of elements. These sorting algorithms use operations other than comparisons and work in O(n) time. So the Ω(n log n) lower bound does not apply to them.
Examples of sorting algorithms that run in linear time are counting sort, radix sort, and bucket sort. Counting sort and radix sort assume that the input consists of integers in a small range, while bucket sort assumes that the input is generated by a random process that distributes elements uniformly over an interval such as [0, 1).
Special conditions for linear-time sorting algorithms
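As an illustration, here is a minimal sketch of counting sort in Python, assuming every input element is a non-negative integer no larger than a known max_value (the function name and parameters are ours, chosen for the example):

```python
def counting_sort(arr, max_value):
    # Assumes every element is an integer in the range [0, max_value].
    count = [0] * (max_value + 1)
    for value in arr:
        count[value] += 1

    # Prefix sums: count[v] becomes the number of elements <= v,
    # which marks the boundary of value v's block in the output.
    for v in range(1, max_value + 1):
        count[v] += count[v - 1]

    # Place elements into their final positions; scanning the input
    # right to left keeps equal elements in their original order (stable).
    output = [0] * len(arr)
    for value in reversed(arr):
        count[value] -= 1
        output[count[value]] = value
    return output

print(counting_sort([4, 2, 2, 8, 3, 3, 1], max_value=8))
# [1, 2, 2, 3, 3, 4, 8]
```

Notice that the code never compares two elements: it runs in O(n + k) time, where k is the size of the value range, which is linear whenever k = O(n).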
Here is a comparison of the time and space complexities of some popular sorting algorithms:

- Bubble sort: O(n^2) worst-case time, O(1) extra space
- Selection sort: O(n^2) worst-case time, O(1) extra space
- Insertion sort: O(n^2) worst-case time, O(1) extra space
- Merge sort: O(n log n) worst-case time, O(n) extra space
- Quick sort: O(n log n) average time, O(n^2) worst-case time, O(log n) average extra space for recursion
- Heap sort: O(n log n) worst-case time, O(1) extra space
- Counting sort: O(n + k) time, O(n + k) extra space, where k is the range of input values
A sorting algorithm is in-place if it rearranges the elements within the input array itself, using only a constant amount of extra space (or at most a small logarithmic amount for recursion). In other words, a sorting algorithm sorts in place if only a constant number of input array elements are ever stored outside the array. For example, insertion sort and heap sort are in-place, while standard merge sort is not, because it needs an O(n) auxiliary array.
A sorting algorithm is stable if it preserves the relative order of elements with equal values.
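For example, Python's built-in sorted() is stable, so ties keep their input order. Here is a small illustration with made-up records:

```python
# (name, marks) records: Rohan and Riya have the same marks.
students = [("Rohan", 82), ("Aman", 75), ("Riya", 82), ("Tara", 91)]

# Sort by marks only. Because the sort is stable, Rohan stays
# before Riya: their relative order from the input is preserved.
print(sorted(students, key=lambda s: s[1]))
# [('Aman', 75), ('Rohan', 82), ('Riya', 82), ('Tara', 91)]
```

Stability matters when records carry extra data: if the list above were already alphabetical, a stable sort by marks would keep each group of equal marks alphabetical. Among the algorithms above, merge sort, insertion sort, and counting sort (as written) are stable, while heap sort and the usual quick sort are not.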
An algorithm that can accept new elements while the sorting process is going on is called an online sorting algorithm: it processes its input piece by piece in serial order. In other words, an online sorting algorithm can sort the data without having the entire input available from the beginning.
For example, insertion sort considers one input element per iteration and produces a partially sorted solution without considering future elements. By maintaining a sorted prefix, we can place each input item in its proper place as we receive it. So insertion sort is an online sorting algorithm.
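Here is a minimal sketch of this online behaviour in Python, using bisect.insort to place each arriving item into a sorted list (the "stream" is just a list for illustration):

```python
import bisect

def online_sort(stream):
    sorted_so_far = []
    for item in stream:
        # Insert the arriving item into its proper place. After every
        # step, sorted_so_far correctly sorts the items received so far,
        # without needing any future element.
        bisect.insort(sorted_so_far, item)
        print(f"received {item}, sorted so far: {sorted_so_far}")
    return sorted_so_far

online_sort([5, 1, 4, 2, 3])
```

Like the inner loop of insertion sort, each insertion shifts larger elements one place to the right, so the overall cost is O(n^2) in the worst case.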
In contrast, an offline algorithm requires the complete input data in memory before sorting can begin. For example, the selection sort algorithm sorts an array by repeatedly finding the minimum element in the unsorted part and putting it at the beginning. Finding that minimum always requires access to the entire input, so selection sort is an offline algorithm.
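A minimal selection sort sketch in Python makes this dependence on the full input visible:

```python
def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # Finding the minimum of arr[i..n-1] must scan every remaining
        # unsorted element, so the whole input has to be in memory.
        min_index = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_index]:
                min_index = j
        # Move the minimum to the front of the unsorted part.
        arr[i], arr[min_index] = arr[min_index], arr[i]
    return arr

print(selection_sort([29, 10, 14, 37, 13]))  # [10, 13, 14, 29, 37]
```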
Enjoy learning, Enjoy sorting!