The drawback of arrays is that we cannot adjust their size during code execution: if we have allocated an array of size n, our code will fail to add the (n + 1)-th element. We can compensate by allocating a huge array, but this could waste a significant amount of space. So what would be an appropriate solution to this problem? Think!
To solve this problem, we can use the idea of a dynamic array, where we increase the array size dynamically as needed. Suppose we start with an array of size 1 and double its size whenever we run out of space. This process involves allocating a new array of double the size, copying the data of the old array into the lower half of the new one, and deleting the space used by the old array. The extra cost of this process is recopying the old data on each expansion.
The crucial question is: how many times does an element need to be recopied after n insertions? Let's visualize.
This process of doubling continues every time we run out of space. In general, after i doublings, the array size is 2^i, and the array holds 2^(i-1) + 1 elements. So there is no need to resize the array from the (2^(i-1) + 1)-th to the (2^i)-th insertion. But after the 2^i-th insertion the array is full, so the (2^i + 1)-th insertion forces us to double the size to 2^(i+1) and copy all 2^i existing elements into the new array.
From the above observations, if there are n elements in the array after i doublings, then 2^i ≈ n, i.e., i ≈ log n. So for n insertion operations, we double the array size about log n times.
Total number of copy operations after log n doublings
= 1 + 2 + 4 + ..... + n/4 + n/2 + n
= n + n/2 + n/4 + ..... + 4 + 2 + 1
= n (1 + 1/2 + 1/4 + ..... + 4/n + 2/n + 1/n)
< n (1 + 1/2 + 1/4 + ..... till infinity)
= n * 1/(1 - 1/2) = 2n
Note: In the above calculation, we have used the sum of an infinite geometric series: 1 + r + r^2 + r^3 + ... till infinity = 1/(1 - r), for |r| < 1 (here r = 1/2).
The primary thing lost with dynamic arrays is the guarantee that each insertion takes constant time in the worst case. All operations are fast except the few insertions that trigger array doubling. But one idea is clear: expanding the array by a constant factor of 2 ensures that inserting n elements takes O(n) time overall, meaning each insertion takes amortized constant time.
For the implementation of the dynamic array in C, C++, Java, or some other programming language, let's define three things: the underlying array A, the number of stored elements currSize, and the allocated size capacity.
Resizing is required when, after some sequence of insertions, the allocated array space gets full. So we perform this operation during insertion, when currSize == capacity.
void reSizeArray()
{
    if (currSize == capacity)
    {
        int *temp = new int[2 * capacity]
        for (int i = 0; i < capacity; i = i + 1)
            temp[i] = A[i]
        delete[] A
        A = temp
        capacity = 2 * capacity
    }
}
void insertAtEnd(int x)
{
    if (currSize == capacity)
        reSizeArray()
    A[currSize] = x
    currSize = currSize + 1
}
void insertAtMiddle(int index, int x)
{
    if (currSize == capacity)
        reSizeArray()
    for (int i = currSize; i > index; i = i - 1)
        A[i] = A[i - 1]
    A[index] = x
    currSize = currSize + 1
}
After some sequence of deletions, the currSize of the array may become significantly less than the current capacity. So a dynamic array can also deallocate some of the space if its size drops below a certain threshold, such as 1/4th of the current capacity. This helps reduce the wastage of extra space. We perform this operation during deletion, when currSize == capacity/4.
void shrinkArray()
{
    if (currSize == capacity/4)
    {
        int *temp = new int[capacity/2]
        for (int i = 0; i < currSize; i = i + 1)
            temp[i] = A[i]
        delete[] A
        A = temp
        capacity = capacity/2
    }
}
We need to check the current size of the array. If currSize == 0, the array is empty and there is no element to delete. If currSize > 0, we remove the element at index currSize - 1 and reduce currSize by 1.
void deleteAtEnd()
{
    if (currSize == 0)
        print("dynamic array is empty!")
    else
    {
        A[currSize - 1] = INT_MIN
        currSize = currSize - 1
        if (currSize == capacity/4)
            shrinkArray()
    }
}
void deleteAtMiddle(int index)
{
    if (currSize == 0)
        print("dynamic array is empty!")
    else
    {
        for (int i = index; i < currSize - 1; i = i + 1)
            A[i] = A[i + 1]
        A[currSize - 1] = INT_MIN
        currSize = currSize - 1
        if (currSize == capacity/4)
            shrinkArray()
    }
}
class DynamicArray
{
    int *A
    int currSize
    int capacity

public:
    DynamicArray()
    {
        A = new int[1]
        currSize = 0
        capacity = 1
    }

    void insertAtEnd(int x)
    {
        if (currSize == capacity)
            reSizeArray()
        A[currSize] = x
        currSize = currSize + 1
    }

    void reSizeArray()
    {
        if (currSize == capacity)
        {
            int *temp = new int[2 * capacity]
            for (int i = 0; i < capacity; i = i + 1)
                temp[i] = A[i]
            delete[] A
            A = temp
            capacity = 2 * capacity
        }
    }

    void shrinkArray()
    {
        if (currSize == capacity/4)
        {
            int *temp = new int[capacity/2]
            for (int i = 0; i < currSize; i = i + 1)
                temp[i] = A[i]
            delete[] A
            A = temp
            capacity = capacity/2
        }
    }

    void insertAtMiddle(int index, int x)
    {
        if (currSize == capacity)
            reSizeArray()
        for (int i = currSize; i > index; i = i - 1)
            A[i] = A[i - 1]
        A[index] = x
        currSize = currSize + 1
    }

    void deleteAtEnd()
    {
        if (currSize == 0)
            print("dynamic array is empty!")
        else
        {
            A[currSize - 1] = INT_MIN
            currSize = currSize - 1
            if (currSize == capacity/4)
                shrinkArray()
        }
    }

    void deleteAtMiddle(int index)
    {
        if (currSize == 0)
            print("dynamic array is empty!")
        else
        {
            for (int i = index; i < currSize - 1; i = i + 1)
                A[i] = A[i + 1]
            A[currSize - 1] = INT_MIN
            currSize = currSize - 1
            if (currSize == capacity/4)
                shrinkArray()
        }
    }
}
Analyze and compare the time complexities of the above dynamic array operations with those of a normal array.
A dynamic array is supplied with the standard libraries of many modern programming languages. In C++, it is called a vector (std::vector), and in Java, it has two implementations: ArrayList and Vector.
Enjoy learning, Enjoy algorithms!