Codecademy
codecademy.com › article › time-complexity-of-merge-sort
Time Complexity of Merge Sort: A Detailed Analysis | Codecademy
The table highlights how Merge Sort consistently delivers strong performance with a time complexity of O(n log n) in all cases. However, its higher space complexity of O(n) can be a drawback compared to in-place sorting algorithms like Quick ...
worst-case optimal stable divide and conquer comparison sorting algorithm
Merge-sort-example-300px.gif
In computer science, merge sort (also commonly spelled as mergesort or merge-sort) is an efficient and general purpose comparison-based sorting algorithm. Most implementations of merge sort are stable, which means that the … Wikipedia
Factsheet
Data structure Array
Worst-case performance O(n log n)
Wikipedia
en.wikipedia.org › wiki › Merge_sort
Merge sort - Wikipedia
2 weeks ago - These numbers are equal to or slightly smaller than (n ⌈lg n⌉ − 2^⌈lg n⌉ + 1), which is between (n lg n − n + 1) and (n lg n + n + O(lg n)). Merge sort's best case takes about half as many iterations as its worst case. For large n and a randomly ordered input list, merge sort's expected ...
Discussions

algorithm - Question about Time Complexity analysis of Merge Sort - Stack Overflow
I have tried simply solving the ... worst, best and average time complexity of this complexity. ... The fun thing about merge sort is that for a given size input it always does the same sequence of comparisons. The initial order is irrelevant. Best case, worst case, and average ... More on stackoverflow.com
๐ŸŒ stackoverflow.com
algorithm - Merge sort time and space complexity - Stack Overflow
Space complexity: O(N). Hint: Big O(x) time means, x is the smallest time for which we can surely say with proof that it will never exceed x in average case ... a) Yes, of course, parallelizing merge sort can be very beneficial. More on stackoverflow.com
๐ŸŒ stackoverflow.com
Merge sort vs Quicksort Applications
Hey Ronav, Great post, I agree with a lot of your thoughts. I wanted to add on to this post and discuss insertion sort briefly -- even though it's not ideal (it has an average time complexity of O(n^2), and best case time complexity of O(n)), insertion sort is very stable and preserves the relative order of elements. Also, if you have an array that is mostly sorted, insertion sort is quite efficient and we don't need to wait for all the data to come in before sorting it. However, on average, when it comes to larger or more complex datasets, quicksort and merge sort are certainly better. More on reddit.com
๐ŸŒ r/cs2c
April 3, 2024
Why is the worst case space complexity of Quicksort O(n)?
Quicksort is a recursive algorithm in its nature. The space complexity is not coming from needing an additional array (the array can get partitioned in-place), but from the call stack of each recursive function call. In the worst case, that is, the array is already sorted in reverse order, the intuitive implementation of quicksort will call itself n times, thereby needing an O(n) large call stack. Note that there is a method that can achieve O(log n) space complexity even in the worst case, but the reason for the space complexity is the same -- it comes from the call stack, and that method uses a clever trick to limit the number of recursion calls. More on reddit.com
๐ŸŒ r/ECE
January 27, 2024
Scaler
scaler.com › home › topics › what is the time complexity of merge sort?
What is the Time Complexity of Merge Sort? - Scaler Topics
April 20, 2024 - This happens when all elements of the first array are less than the elements of the second array. The best case time complexity of merge sort is ... The average case scenario occurs when the elements are jumbled (neither in ascending nor descending order). This depends on the number of comparisons.
GeeksforGeeks
geeksforgeeks.org › dsa › merge-sort
Merge Sort - GeeksforGeeks
O(n) represents the time taken to merge the two sorted halves ... Best Case: O(n log n), when the array is already sorted or nearly sorted. Average Case: O(n log n), when the array is randomly ordered.
Published October 3, 2025
Alma Better
almabetter.com › bytes › articles › merge-sort-time-complexity
What is the Time Complexity of Merge Sort Algorithm?
June 12, 2024 - Despite the array being sorted, merge sort still performs the division and merging steps, which means the best case time complexity remains O(nlogn). The average case time complexity of merge sort reflects its performance for a random array.
Stack Overflow
stackoverflow.com › questions › 76444702 › question-about-time-complexity-analysis-of-merge-sort
algorithm - Question about Time Complexity analysis of Merge Sort - Stack Overflow
... The fun thing about merge sort is that for a given size input it always does the same sequence of comparisons. The initial order is irrelevant. Best case, worst case, and average case are all O(n log n).
Top answer
1 of 8

MergeSort's time complexity is O(n lg n), which is fundamental knowledge. Merge sort's space complexity will always be O(n), including with arrays. If you draw the space tree out, it will seem as though the space complexity is O(n lg n). However, as the code is depth-first, you will only ever be expanding along one branch of the tree; therefore, the total space required will always be bounded by O(3n) = O(n).

2023 October 24th update: there was a question about how I came up with the 3n upper bound, so my explanation from the comments is re-pasted here. The mathematical argument for 3n is very similar to the proof that buildHeap on an unsorted array is upper-bounded by 2n swaps, which takes O(2n) = O(n) time. In this case, there is always only one additional branch alive, so think of it as doing the buildHeap sum again for one extra branch: it is bounded by another n, giving a total upper bound of 3n, which is O(3n) = O(n). Note that we are reusing the mathematics from the buildHeap(inputArray) time-complexity analysis to prove the space complexity of single-threaded mergeSort. I can write up a full rigorous proof for this when I have time.

  • How can building a heap be O(n) time complexity?

For example, if you draw the space tree out, it seems like the space complexity is O(n lg n):

                             16                                 | 16
                            /  \                              
                           /    \
                          /      \
                         /        \
                        8          8                            | 16
                       / \        / \
                      /   \      /   \
                     4     4    4     4                         | 16
                    / \   / \  / \   / \
                   2   2 2   2.....................             | 16
                  / \  /\ ........................
                 1  1  1 1 1 1 1 1 1 1 1 1 1 1 1 1              | 16

where the height of the tree is O(log n) => space complexity is O(n log n + n) = O(n log n). However, this is not the case in the actual code, as it does not execute in parallel. For example, in the case where n = 16, this is how the code for mergesort executes:

                           16
                          /
                         8
                        /
                       4
                     /
                    2
                   / \
                  1   1

notice how the amount of space used is 32 = 2n = 2*16 < 3n

Then it merges upwards

                           16
                          /
                         8
                        /
                       4
                     /  \
                    2    2
                        / \                
                       1   1

which is 34 < 3n. Then it merges upwards

                           16
                          /
                         8
                        / \
                       4   4
                          /
                         2
                        / \ 
                       1   1

36 < 16 * 3 = 48

then it merges upwards

                           16
                          / \
                         8  8
                           / \
                          4   4
                             / \
                            2   2
                                /\
                               1  1

16 + 8 + 8 + 4 + 4 + 2 + 2 + 1 + 1 = 46 < 3n = 48

in a larger case, n = 64

                     64
                    /  \
                   32  32
                       / \
                      16  16
                          / \
                         8  8
                           / \
                          4   4
                             / \
                            2   2
                                /\
                               1  1

which sums to 64 + 32 + 32 + 16 + 16 + 8 + 8 + 4 + 4 + 2 + 2 + 1 + 1 = 190 <= 3n = 3*64 = 192

You can prove this by induction for the general case.

Therefore, the space complexity is always bounded by O(3n) = O(n), even if you implement with arrays, as long as you clean up used space after merging and execute the code sequentially rather than in parallel.

Example of my implementation is given below:

template<class X>
void mergesort(X a[], int n) // X is a template type parameter
{
    if (n==1)
    {
        return;
    }
    int q, p;
    q = n/2;
    p = n/2;
    //if(n % 2 == 1) p++; // increment by 1
    if(n & 0x1) p++; // increment by 1 when n is odd
        // note: the bitwise AND is cheaper in hardware than computing the mod (%)
    X b[q]; // note: a variable-length array is a GCC/Clang extension, not standard C++

    int i = 0;
    for (i = 0; i < q; i++)
    {
        b[i] = a[i];
    }
    mergesort(b, i);
    // do mergesort here to save space
    // http://stackoverflow.com/questions/10342890/merge-sort-time-and-space-complexity/28641693#28641693
    // After returning from previous mergesort only do you create the next array.
    X c[p];
    int k = 0;
    for (int j = q; j < n; j++)
    {
        c[k] = a[j];
        k++;
    }
    mergesort(c, k);
    int r, s, t;
    t = 0; r = 0; s = 0;
    while( (r!= q) && (s != p))
    {
        if (b[r] <= c[s])
        {
            a[t] = b[r];
            r++;
        }
        else
        {
            a[t] = c[s];
            s++;
        }
        t++;
    }
    if (r==q)
    {
        while(s!=p)
        {
            a[t] = c[s];
            s++;
            t++;
        }
    }
    else
    {
        while(r != q)
        {
            a[t] = b[r];
            r++;
            t++;
        }
    }
    return;
}
2 of 8

a) Yes - in a perfect world you'd have to do log n merges of size n, n/2, n/4 ... (or, better said, of size 1, 2, 4, ..., n/4, n/2, n - they can't be parallelized), which gives O(n). The total work is still O(n log n). In the not-so-perfect world you don't have an infinite number of processors, and context switching and synchronization offset any potential gains.

b) Space complexity is always Ω(n), as you have to store the elements somewhere. Additional space complexity can be O(n) in an implementation using arrays and O(1) in linked-list implementations. In practice, implementations using lists need additional space for list pointers, so unless you already have the list in memory it shouldn't matter.

Edit: if you count stack frames, then it's O(n) + O(log n), so still O(n) in the case of arrays. In the case of lists it's O(log n) additional memory.

c) Lists only need some pointers changed during the merge process. That requires constant additional memory.

d) That's why in merge-sort complexity analysis people mention 'additional space requirement' or things like that. It's obvious that you have to store the elements somewhere, but it's always better to mention 'additional memory' to keep purists at bay.

Quora
quora.com › What-is-best-average-worst-case-time-complexities-of-merge-and-quick-sorts
What is best, average, worst case time complexities of merge and quick sorts? - Quora
Answer (1 of 4): Merge Sort : Worst, Average and Best Case - O(n*log(n)) Quick Sort : Worst case - O(n^2) Average, Best Case - O(n*log(n)) In terms of space complexity, quick sort is space constant, merge depends on the structure provided as input.
W3Schools
w3schools.com › dsa › dsa_timecomplexity_mergesort.php
DSA Merge Sort Time Complexity
The number of splitting operations \((n-1)\) can be removed from the Big O calculation above because \( n \cdot \log_{2}n\) will dominate for large \( n\), and because of how we calculate time complexity for algorithms. The figure below shows how the time increases when running Merge Sort on an array with \(n\) values. The difference between best and worst case scenarios for Merge Sort is not as big as for many other sorting algorithms.
Quora
quora.com › What-is-the-time-complexity-of-Merge-Sort-and-why-does-it-have-this-complexity
What is the time complexity of Merge Sort and why does it have this complexity? - Quora
Answer (1 of 2): The split step of Merge Sort will take O(n) instead of O(log(n)). If we have the runtime function of split step: T(n) = 2T(n/2) + O(1) with T(n) is the runtime for input size n, 2 is the number of new problems and n/2 is the size of each new problem, O(1) is the constant time t...
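The recurrence quoted in the snippet can be unrolled directly; a standard derivation (assuming n is a power of two) of why the total splitting work is linear:

```latex
T(n) = 2T(n/2) + c
     = 4T(n/4) + 3c
     = \dots
     = 2^{k}\,T(n/2^{k}) + (2^{k} - 1)\,c
```

With k = log2 n this gives T(n) = n·T(1) + (n − 1)c = O(n): each level of the recursion tree doubles the number of subproblems, but each split costs only O(1), so the splitting work is dominated by the n leaves.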
NVIDIA Developer
developer.nvidia.com › blog › merge-sort-explained-a-data-scientists-algorithm-guide
Merge Sort Explained: A Data Scientist's Algorithm Guide | NVIDIA Technical Blog
June 12, 2023 - The time complexity of the merge sort algorithm remains O(n log n) for best, worst, and average scenarios, making it suitable for sorting large lists and linked lists where stability is important.
Scaler
scaler.in › home › what is the time complexity of merge sort?
What is the Time Complexity of Merge Sort? - Scaler Blog
October 7, 2024 - This depends on the number of comparisons. The average case time complexity of merge sort is $O(n*logn)$.
Programiz PRO
programiz.pro › resources › dsa-merge-sort-complexity
Exploring time and space complexity of Merge sort
December 19, 2024 - Merge Sort is a comparison-based divide-and-conquer sorting algorithm that works by recursively dividing the array into halves, sorting each half, and then merging them back together. It consistently performs with a time complexity of O(n log n) in the best, worst, and average cases.
Youcademy
youcademy.org › merge-sort-time-space-complexity
Time and Space Complexity of Merge Sort
Merge Sort has a time complexity of O(n log n) in all cases: best, average, and worst. This makes it highly efficient compared to algorithms like Bubble Sort (O(n²)) for large datasets.
DigitalOcean
digitalocean.com › community › tutorials › merge-sort-algorithm-java-c-python
Merge Sort Algorithm - Java, C, and Python Implementation | DigitalOcean
August 3, 2022 - The list of size N is divided into a max of log N parts, and the merging of all sublists into a single list takes O(N) time, so the worst-case run time of this algorithm is O(N log N). Best Case Time Complexity: O(n*log n) Worst Case Time Complexity: O(n*log n) Average Time Complexity: O(n*log n) ...
Hero Vired
herovired.com › learning-hub › blogs › space-complexity-of-merge-sort
Time and Space Complexity of Merge Sort - Hero Vired
June 27, 2024 - The time complexity is still O(n log n) in this typical situation. Splitting and merging are done in each phase, along with an equal number of swaps and comparisons. When the array is sorted in reverse order, the worst-case situation occurs.
HappyCoders.eu
happycoders.eu › algorithms › merge-sort
Merge Sort โ€“ Algorithm, Source Code, Time Complexity
June 12, 2025 - We have now executed the merge ... of previously O(n). The total complexity of the sorting algorithm is, therefore, O(n² log n) – instead of O(n log n)....
guvi.in
studytonight.com › data-structures › merge-sort
merge sort time complexity
Time complexity of Merge Sort is O(n*Log n) in all the 3 cases (worst, average and best) as merge sort always divides the array in two halves and takes linear time to merge two halves.
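The one-line summary above corresponds to the standard divide-and-conquer recurrence; sketching its solution (assuming n is a power of two):

```latex
T(n) = 2T(n/2) + cn, \qquad T(1) = c'
```

The recursion tree has log2 n levels, every level does cn total merging work across its subproblems, and the n leaves contribute c'n, so T(n) = cn·log2 n + c'n = O(n log n) regardless of the input order, which is why best, average, and worst cases coincide.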