The drawback of regular arrays is that we cannot adjust their size during code execution. In other words, if we allocate an array of size n, it will fail to accommodate the (n + 1)th element. One idea would be to allocate a very large array, but that could waste a significant amount of memory. So what is an appropriate solution to this problem? Think! We solve it using the idea of the dynamic array, where we can **increase the array size dynamically** as needed.

Suppose we start with an array of size 1 and double its size whenever we run out of space. This process involves **allocating a new array of double the size, copying the data of the old array to the lower half of the new one, and deleting the space used by the old array.** The extra overhead in this process is recopying the old data into the new array on each doubling. The crucial question is: how many times does an element need to be recopied over n insertions? Let's visualize.

- After the first insertion, we double the array size and copy the first element into the new array. Now the array size is 2, and there is one element in the array. So after the second insertion, we need to double the array size to 4 and copy the two elements into the new array.
- Now the array size is 4, and there are two elements in the array, so there is no need to resize the array after the 3rd insertion. But after the 4th insertion, the array will be full, and we need to double the array size to 8 and copy the four elements into the new array.

This doubling process continues when we insert the **8th**, **16th**, **32nd**, ... elements. In general, after the **i-th** doubling, the new array size will be **2^i**, and there will be **2^(i-1)** elements in the array. So there is no need to resize the array from the **(2^(i-1) + 1)th** to the **(2^i - 1)th** insertion. After the **2^i th** insertion, the array will get full, and we need to double the array size to **2^(i + 1)** and copy the **2^i** elements into the new array.

From the above observations: if the array holds n elements after i doublings, then 2^i = n, which gives i = logn. So after **n** insertion operations, we need to double the array size **logn** times. Think! So the total number of copy operations over these **logn** doublings = 1 + 2 + 4 + ..... + n/4 + n/2 + n

= n + n/2 + n/4 + ..... + 4 + 2 + 1

= n (1 + 1/2 + 1/4 + ..... + 4/n + 2/n + 1/n)

≤ n (1 + 1/2 + 1/4 + ..... + 4/n + 2/n + 1/n + .... till infinity)

≤ n * 1/(1 - 1/2) = 2n

**Note:** The sum of the infinite geometric series 1 + r + r^2 + r^3 + ... till infinity = 1/(1 - r), where 0 < r < 1. Here r = 1/2, so the sum is 1/(1 - 1/2) = 2.

So for the **logn** insertions that trigger a doubling, we need to perform at most 2n copy operations in total. On the other side, there is no need to perform copy operations for the remaining **n - logn** insertions. Copy operations per insertion = 2n / n = 2. So on average, each of the n elements is copied only about two times, which is constant.

- Expanding the array size by a factor of 2 ensures that inserting n elements takes O(n) time overall, meaning that each insertion takes O(1) time on average, almost equivalent to the usual array situation.
- Almost all insertion queries will be fast, except for the **logn** queries that trigger array doubling. In other words, the frequency of the best-case scenario is very high (n - logn), and the frequency of the worst case is very low (logn). That's why the average case lies closer to the best case. Think!
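As a quick sanity check of this analysis, here is a short Python sketch (illustrative only, not part of the implementation below) that counts how many element copies the doubling strategy performs while appending n elements:

```python
def total_copies(n):
    """Count element copies performed while appending n elements,
    doubling capacity whenever the array is full."""
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            copies += size      # recopy existing elements to the new array
            capacity *= 2
        size += 1
    return copies
```

For example, total_copies(8) returns 7 (= 1 + 2 + 4), and the ratio copies/n never exceeds 2, matching the amortized bound above.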

For implementing a dynamic array as an **abstract data type** in C++ or Java, or some other programming language, we need to define these things:

- **A[]**: Pointer to an array.
- **currSize**: Number of elements in the dynamic array A[]. It is initialized with the value 0 at the start.
- **capacity**: Total number of elements that the dynamic array A[] can hold before resizing. It is initialized with the value 1 at the start.
- **Dynamic array constructor**: This gets called when we create an object of a dynamic array in our code. It initializes the value of array pointer A[], currSize, and capacity.

Now we define and implement dynamic array operations to work on the data. We can also define some other operations based on need.

- **void insertAtEnd(int x)**: Insert an element at the end index.
- **void resizeArray()**: Double the array capacity.
- **void shrinkArray()**: Halve the array capacity.
- **void insertAtMiddle(int index, int x)**: Insert an element at some middle index.
- **void deleteAtEnd()**: Delete the element at the end index.
- **void deleteAtMiddle(int index)**: Delete the element at some middle index.

To follow OOP principles, we declare the array pointer A, currSize, and capacity as **private** members of the class. This is encapsulation, where we hide the internal details of an object. Similarly, we declare the constructor and operations as **public** members of the class. This is abstraction, where we expose only relevant details to the outside world.

Note: For better abstraction, we can declare the resizeArray() and shrinkArray() methods as private members of the class, because both operations are used internally by the insertion and deletion methods. So there is no need to provide public access to them. Think!

```
public class DynamicArray
{
    private:
    int A[]
    int currSize
    int capacity

    void resizeArray()
    {
        // Implementation code
    }

    void shrinkArray()
    {
        // Implementation code
    }

    public:
    DynamicArray()
    {
        A = new int[1]
        currSize = 0
        capacity = 1
    }

    void insertAtEnd(int x)
    {
        // Implementation code
    }

    void insertAtMiddle(int index, int x)
    {
        // Implementation code
    }

    void deleteAtEnd()
    {
        // Implementation code
    }

    void deleteAtMiddle(int index)
    {
        // Implementation code
    }

    ... // More dynamic array operations
}
```

The resizeArray() operation is required when the allocated array space gets full, i.e., currSize reaches the defined **capacity** after some sequence of insertions. So we call this method when **currSize == capacity** during insertion.

- We declare a new array temp[] of size 2 * capacity.
- We run a loop from i = 0 to capacity - 1 and copy all the elements from the original array A[] to the new array temp[].
- We free the space used by A[] and initialize it with temp[].
- Finally, we double the value of the capacity.

**Pseudocode implementation**

```
void resizeArray()
{
    if (currSize == capacity)
    {
        int temp[] = new int[2 * capacity]
        for (int i = 0; i < capacity; i = i + 1)
            temp[i] = A[i]
        delete[] A
        A = temp
        capacity = 2 * capacity
    }
}
```
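The same steps can be sketched in runnable Python (a hypothetical helper that uses a Python list as the fixed-size storage and returns the updated state instead of mutating private members; Python has no explicit free, so the old array is reclaimed by the garbage collector once unreferenced):

```python
def resize_array(A, curr_size, capacity):
    """Double the capacity when the array is full.
    Returns the (possibly new) array and the updated capacity."""
    if curr_size == capacity:
        temp = [None] * (2 * capacity)   # new array of double size
        for i in range(capacity):        # copy old elements to the lower half
            temp[i] = A[i]
        return temp, 2 * capacity        # old array becomes garbage
    return A, capacity
```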

**If (currSize < capacity)**, we simply insert the element at index currSize (A[currSize] = x) and increase currSize by 1. Otherwise, the array is full to its capacity, so we call the resizeArray() operation to double its size, add element x at index currSize, and increase currSize by 1.

**Pseudocode implementation**

```
void insertAtEnd(int x)
{
    if (currSize == capacity)
        resizeArray()
    A[currSize] = x
    currSize = currSize + 1
}
```
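A runnable Python sketch of the same logic (hypothetical helper names; state is passed and returned explicitly rather than stored in a class):

```python
def insert_at_end(A, curr_size, capacity, x):
    """Append x at index curr_size, doubling capacity first if full.
    Returns the updated (A, curr_size, capacity)."""
    if curr_size == capacity:            # array is full: resize first
        temp = [None] * (2 * capacity)
        for i in range(curr_size):
            temp[i] = A[i]
        A, capacity = temp, 2 * capacity
    A[curr_size] = x                     # insert at the end index
    return A, curr_size + 1, capacity
```

Starting from A, size, cap = [None], 0, 1 and appending 1 through 5 leaves the capacity at 8, since doublings happen at the 2nd, 3rd, and 5th insertions.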

**If (currSize == capacity)**, we first resize the array to double its capacity. Then we move the elements from index to currSize - 1 one step upward, which creates space for the newly arriving element x. Now we insert element x at A[index] and increase currSize by 1. Note: If (currSize < capacity), we don't need to resize the array before shifting.

**Pseudocode implementation**

```
void insertAtMiddle(int index, int x)
{
    if (currSize == capacity)
        resizeArray()
    for (int i = currSize; i > index; i = i - 1)
        A[i] = A[i - 1]
    A[index] = x
    currSize = currSize + 1
}
```
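A runnable Python sketch of this operation (hypothetical helper, same state-passing convention as above):

```python
def insert_at_middle(A, curr_size, capacity, index, x):
    """Insert x at a middle index, shifting later elements one step up.
    Doubles capacity first if the array is full."""
    if curr_size == capacity:              # resize before shifting
        temp = [None] * (2 * capacity)
        for i in range(curr_size):
            temp[i] = A[i]
        A, capacity = temp, 2 * capacity
    for i in range(curr_size, index, -1):  # shift A[index..currSize-1] up
        A[i] = A[i - 1]
    A[index] = x
    return A, curr_size + 1, capacity
```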

After some sequence of deletions, the **currSize** of the array may become significantly smaller than the current **capacity**. So a dynamic array can also deallocate some of the space if its size drops below a certain threshold, such as 1/4 of the current capacity. This helps reduce the wastage of extra space.

We perform this operation when **currSize == capacity/4** during the deletion.

- We declare a new array temp[] of size capacity/2.
- Now we run a loop from i = 0 to currSize - 1 and copy all the elements from the original array A[] to the temp[] array.
- We update the new capacity equal to capacity/2 and free the space used by A[].
- Finally, we initialize A[] with the new array temp[].

**Pseudocode implementation**

```
void shrinkArray()
{
    if (currSize == capacity/4)
    {
        int temp[] = new int[capacity/2]
        for (int i = 0; i < currSize; i = i + 1)
            temp[i] = A[i]
        capacity = capacity/2
        delete[] A
        A = temp
    }
}
```
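A runnable Python sketch of the shrink step (hypothetical helper; it also guards against shrinking below capacity 1, an edge case the pseudocode leaves implicit):

```python
def shrink_array(A, curr_size, capacity):
    """Halve the capacity when only a quarter of the slots are in use.
    Returns the (possibly new) array and the updated capacity."""
    if curr_size == capacity // 4 and capacity > 1:
        temp = [None] * (capacity // 2)  # new array of half size
        for i in range(curr_size):       # copy the remaining elements
            temp[i] = A[i]
        return temp, capacity // 2
    return A, capacity
```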

We first check the current size of the array. **If (currSize == 0)**, the array is empty, i.e., there is no element to delete. Otherwise, we remove the element at index currSize - 1 and reduce currSize by 1. During this process, **if (currSize == capacity/4)**, we call the shrinkArray() method to reduce the dynamic array capacity by half.

**Pseudocode implementation**

```
void deleteAtEnd()
{
    if (currSize == 0)
        print("dynamic array is empty!")
    else
    {
        A[currSize - 1] = INT_MIN
        currSize = currSize - 1
        if (currSize == capacity/4)
            shrinkArray()
    }
}
```
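A runnable Python sketch (hypothetical helper; None plays the role of INT_MIN as the "cleared slot" marker, and the shrink step is inlined with a guard against shrinking below capacity 1):

```python
def delete_at_end(A, curr_size, capacity):
    """Remove the last element; shrink when size drops to capacity/4.
    Returns the updated (A, curr_size, capacity)."""
    if curr_size == 0:
        print("dynamic array is empty!")
        return A, curr_size, capacity
    A[curr_size - 1] = None              # clear the last slot
    curr_size = curr_size - 1
    if curr_size == capacity // 4 and capacity > 1:
        temp = [None] * (capacity // 2)  # too much unused space: shrink
        for i in range(curr_size):
            temp[i] = A[i]
        A, capacity = temp, capacity // 2
    return A, curr_size, capacity
```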

We first check the current size of the array. **If (currSize == 0)**, the array is empty, i.e., there is no element to delete. Otherwise, we move all the elements from index + 1 to currSize - 1 one step downward, set the value at A[currSize - 1] to INT_MIN, and decrement currSize by 1. During this process, **if (currSize == capacity/4)**, we call the shrinkArray() method to reduce the dynamic array capacity by half.

**Pseudocode implementation**

```
void deleteAtMiddle(int index)
{
    if (currSize == 0)
        print("dynamic array is empty!")
    else
    {
        for (int i = index; i < currSize - 1; i = i + 1)
            A[i] = A[i + 1]
        A[currSize - 1] = INT_MIN
        currSize = currSize - 1
        if (currSize == capacity/4)
            shrinkArray()
    }
}
```
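A runnable Python sketch of this operation (hypothetical helper, same conventions as the previous sketches):

```python
def delete_at_middle(A, curr_size, capacity, index):
    """Remove the element at a middle index, shifting later elements down.
    Shrinks capacity by half when size drops to capacity/4."""
    if curr_size == 0:
        print("dynamic array is empty!")
        return A, curr_size, capacity
    for i in range(index, curr_size - 1):  # shift A[index+1..currSize-1] down
        A[i] = A[i + 1]
    A[curr_size - 1] = None                # clear the now-unused last slot
    curr_size = curr_size - 1
    if curr_size == capacity // 4 and capacity > 1:
        temp = [None] * (capacity // 2)
        for i in range(curr_size):
            temp[i] = A[i]
        A, capacity = temp, capacity // 2
    return A, curr_size, capacity
```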

- Analyze and compare the time complexities of the above dynamic array operations with those of a normal array.
- Dynamic arrays are supplied with the standard libraries of many modern programming languages. In C++, it is called a vector, and in Java, it has two implementations: ArrayList and Vector.
- The growth factor for a dynamic array depends on several things, like the time-memory trade-off, the algorithms used inside the memory allocator, etc.
- For growth factor k, the average time per insertion operation is about k/(k − 1), while the maximum wasted space is (k − 1)n.
- For simplicity of analysis, we have used the growth factor k = 2. But if the memory allocator uses a first-fit allocation algorithm, a growth factor such as k = 2 can cause dynamic array expansion to run out of memory, even though a significant amount of memory may still be available.
- There have been several discussions on the ideal value of the growth factor. Some experts suggest the golden ratio as a growth factor, but library implementations of different programming languages use different values. Explore and think!
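The growth-factor trade-off can be explored with a small Python experiment (illustrative only; it simplifies the allocator and counts only element copies, ignoring memory reuse):

```python
def copies_with_growth(n, k):
    """Count element copies while appending n elements,
    expanding a full array by growth factor k."""
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            copies += size                           # recopy existing elements
            capacity = max(capacity + 1, int(capacity * k))
        size += 1
    return copies
```

Larger growth factors trigger fewer resizes, so the total copy count drops, at the cost of more unused capacity.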

**Enjoy learning, Enjoy algorithms!**