🌐
Codecademy
codecademy.com › article › time-complexity-of-merge-sort
Time Complexity of Merge Sort: A Detailed Analysis | Codecademy
The table highlights how Merge Sort consistently delivers strong performance with a time complexity of O(n log n) in all cases. However, its higher space complexity of O(n) can be a drawback compared to in-place sorting algorithms like Quick ...
🌐
GeeksforGeeks
geeksforgeeks.org › dsa › time-and-space-complexity-analysis-of-merge-sort
Time and Space Complexity Analysis of Merge Sort - GeeksforGeeks
March 14, 2024 - T(N) = 2^k * T(N/2^k) + k * N * constant · The array can be divided at most until one element is left, so N/2^k = 1, i.e. k = log2(N) · T(N) = N * T(1) + N * log2(N) * constant · Therefore the time complexity is O(N * log2(N)).
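The unrolling in this snippet can be written out as a short derivation (the standard top-down merge sort recurrence; c stands for the per-element divide-and-merge constant):

```latex
\begin{align*}
T(N) &= 2\,T(N/2) + cN \\
     &= 2^k\,T\!\left(N/2^k\right) + k\,cN \qquad \text{(after $k$ levels of unrolling)} \\
\intertext{Halving stops when $N/2^k = 1$, i.e.\ $k = \log_2 N$, so}
T(N) &= N\,T(1) + cN\log_2 N = O(N \log N).
\end{align*}
```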
Discussions

algorithm - Merge sort time and space complexity - Stack Overflow
Merge sort works on divide ... single elements are merged in sorted order. For this dividing and merging it has a time complexity of O(nlogn), where n is the number of elements in the array. The complexity of all three cases (best, worst and average) is nlo... More on stackoverflow.com
🌐 stackoverflow.com
What is the worst case Time Complexity for only the Divide Portion of the Merge Sort Algorithm? - Software Engineering Stack Exchange
Please, consider the below merge sort algorithm. Here we start with a divide portion which splits the array into halves and we do recursive operation on each half separately. I have ignored the merge portion of the algorithm to reduce complexity... More on softwareengineering.stackexchange.com
🌐 softwareengineering.stackexchange.com
May 17, 2020
algorithm - Best Case For Merge Sort - Stack Overflow
If the comparison yields <= then you can skip the merge phase for this pair of slices. With this modification, a fully sorted array will sort much faster, with a linear complexity, making it the best case, and a partially sorted array will behave better as well. More on stackoverflow.com
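The modification described in this answer can be sketched as follows (an illustrative rewrite, not the poster's actual code): after sorting the two halves, compare the last element of the left half with the first element of the right half, and skip the merge when they are already in order. On fully sorted input every merge is skipped, which is what makes the best case linear.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <iterator>
#include <vector>

// Top-down merge sort over a[lo, hi) with the early-exit check:
// if the two sorted halves are already in order, the merge is skipped.
void merge_sort(std::vector<int>& a, std::size_t lo, std::size_t hi) {
    if (hi - lo <= 1) return;
    std::size_t mid = lo + (hi - lo) / 2;
    merge_sort(a, lo, mid);
    merge_sort(a, mid, hi);
    if (a[mid - 1] <= a[mid]) return;   // halves already ordered: skip merge
    std::vector<int> tmp;
    tmp.reserve(hi - lo);
    std::merge(a.begin() + lo, a.begin() + mid,
               a.begin() + mid, a.begin() + hi,
               std::back_inserter(tmp));
    std::copy(tmp.begin(), tmp.end(), a.begin() + lo);
}
```

On a sorted array the check fires at every node of the recursion tree, leaving only O(n) constant-time calls; on random input it almost never fires and the usual O(n log n) behavior is unchanged.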
🌐 stackoverflow.com
Understanding merge sort Big O complexity
Splitting the array doesn't meaningfully change the runtime. To be specific, "splitting the array" is actually the act of determining the midpoint and then making the two recursive calls... More on reddit.com
🌐 r/learnprogramming
June 26, 2022
worst-case optimal stable divide and conquer comparison sorting algorithm
[Animation: Merge-sort-example-300px.gif]
In computer science, merge sort (also commonly spelled as mergesort or merge-sort) is an efficient and general purpose comparison-based sorting algorithm. Most implementations of merge sort are stable, which means that the … Wikipedia
Factsheet
Data structure: Array
Worst-case performance: O(n log n)
🌐
Wikipedia
en.wikipedia.org › wiki › Merge_sort
Merge sort - Wikipedia
2 weeks ago - These numbers are equal to or slightly smaller than (n ⌈lg n⌉ − 2^⌈lg n⌉ + 1), which is between (n lg n − n + 1) and (n lg n + n + O(lg n)). Merge sort's best case takes about half as many iterations as its worst case. For large n and a randomly ordered input list, merge sort's expected ...
🌐
Scaler
scaler.com › home › topics › what is the time complexity of merge sort?
What is the Time Complexity of Merge Sort? - Scaler Topics
April 20, 2024 - Merge Sort time complexity is calculated using time per division stage. Since the merge process has linear time complexity, for n elements there will be ... The best case scenario occurs when the elements are already sorted in ascending order. If two sorted arrays of size n need to be merged, the minimum number of comparisons will be n.
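The "minimum number of comparisons" claim is easy to check with an instrumented merge (an illustrative counter, not Scaler's code): when every element of the first sorted array is smaller than every element of the second, the merge performs exactly n comparisons, one per element of the first array, and the rest is copied without comparing.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Merge two sorted vectors while counting element comparisons.
std::vector<int> merge_counted(const std::vector<int>& a,
                               const std::vector<int>& b,
                               std::size_t& comparisons) {
    std::vector<int> out;
    out.reserve(a.size() + b.size());
    std::size_t i = 0, j = 0;
    while (i < a.size() && j < b.size()) {
        ++comparisons;
        if (a[i] <= b[j]) out.push_back(a[i++]);
        else              out.push_back(b[j++]);
    }
    while (i < a.size()) out.push_back(a[i++]); // leftovers: no comparisons
    while (j < b.size()) out.push_back(b[j++]);
    return out;
}
```

Merging {1,2,3,4} with {5,6,7,8} costs 4 comparisons (the best case, n); fully interleaved inputs like {1,3,5,7} and {2,4,6,8} cost 2n−1 = 7 (the worst case).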
🌐
W3Schools
w3schools.com › dsa › dsa_timecomplexity_mergesort.php
DSA Merge Sort Time Complexity
The number of splitting operations (n−1) can be removed from the Big O calculation above because n·log2(n) will dominate for large n, and because of how we calculate time complexity for algorithms. The figure below shows how the time increases when running Merge Sort on an array with n values. The difference between best and worst case scenarios for Merge Sort is not as big as for many other sorting algorithms.
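The n−1 figure can be verified by counting: each split turns one range into two, so producing n single-element ranges from one n-element range always takes exactly n−1 splits, however the midpoints fall. A small recursive counter (an illustrative sketch, not W3Schools' code):

```cpp
#include <cassert>
#include <cstddef>

// Count the split operations merge sort performs on a range of length n.
// Every range longer than one element is split exactly once, then each
// half is counted recursively; the total is always n - 1.
std::size_t count_splits(std::size_t n) {
    if (n <= 1) return 0;
    std::size_t half = n / 2;
    return 1 + count_splits(half) + count_splits(n - half);
}
```

A one-line induction confirms it: 1 + (half − 1) + (n − half − 1) = n − 1 for every n ≥ 2.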
🌐
Alma Better
almabetter.com › bytes › articles › merge-sort-time-complexity
What is the Time Complexity of Merge Sort Algorithm?
June 12, 2024 - In Merge Sort, the best, average, and worst-case time complexities are all O(n log n).
🌐
GeeksforGeeks
geeksforgeeks.org › dsa › merge-sort
Merge Sort - GeeksforGeeks
Guaranteed worst-case performance: Merge sort has a worst-case time complexity of O(N logN) , which means it performs well even on large datasets.
Published October 3, 2025
Find elsewhere
🌐
DigitalOcean
digitalocean.com › community › tutorials › merge-sort-algorithm-java-c-python
Merge Sort Algorithm - Java, C, and Python Implementation | DigitalOcean
August 3, 2022 - The list of size N is divided across at most log N levels, and merging all sublists back into a single list takes O(N) time per level, so the worst-case run time of this algorithm is O(N log N). Best Case Time Complexity: O(n*log n) Worst Case Time Complexity: O(n*log n) Average Time Complexity: O(n*log n) ...
🌐
Enjoy Algorithms
enjoyalgorithms.com › blog › merge-sort-algorithm
Merge Sort Algorithm
Merge sort is one of the fastest comparison-based sorting algorithms, which works on the idea of a divide and conquer approach. The worst and best-case time complexity of the merge sort is O(nlogn), and the space complexity is O(n). It is also one of the best algorithms for sorting linked lists ...
🌐
Quora
quora.com › What-is-best-average-worst-case-time-complexities-of-merge-and-quick-sorts
What is best, average, worst case time complexities of merge and quick sorts? - Quora
Answer (1 of 4): Merge Sort : Worst, Average and Best Case - O(n*log(n)) Quick Sort : Worst case - O(n^2) Average, Best Case - O(n*log(n)) In terms of space complexity, quick sort is space constant, merge depends on the structure provided as input.
Top answer
1 of 8
121

MergeSort time complexity is O(nlgn), which is fundamental knowledge. Merge Sort space complexity will always be O(n), including with arrays. If you draw the space tree out, it will seem as though the space complexity is O(nlgn). However, as the code executes depth-first, you will only ever be expanding along one branch of the tree, therefore the total space usage required will always be bounded by O(3n) = O(n).

2023 October 24th update: There's a question about how I came up with the 3n upper bound; my explanation from the comments is re-pasted here. The mathematical argument for 3n is very similar to why the time complexity of buildHeap on an unsorted array is upper bounded by 2n swaps, which takes O(2n) = O(n) time. In this case, there is always only one additional branch alive. Hence, think of it as doing the buildHeap accounting again for that one additional branch: it is bounded by another n, giving a total upper bound of 3n, which is O(3n) = O(n). Note that here we are reusing the mathematics from the buildHeap(inputArray) time-complexity argument to bound the space complexity of single-threaded mergeSort, not its time complexity. I can write up a fully rigorous proof when I have time.

  • How can building a heap be O(n) time complexity?

For example, if you draw the space tree out, it seems like it is O(nlgn)

                             16                                 | 16
                            /  \                              
                           /    \
                          /      \
                         /        \
                        8          8                            | 16
                       / \        / \
                      /   \      /   \
                     4     4    4     4                         | 16
                    / \   / \  / \   / \
                   2   2 2   2.....................             | 16
                  / \  /\ ........................
                 1  1  1 1 1 1 1 1 1 1 1 1 1 1 1 1              | 16

where the height of the tree is O(logn) => space complexity is O(nlogn + n) = O(nlogn). However, this is not the case in the actual code, as it does not execute in parallel. For example, this is how the code for mergesort executes in the case where N = 16.

                           16
                          /
                         8
                        /
                       4
                     /
                    2
                   / \
                  1   1

notice how the amount of space used is 32 = 2*16 = 2n < 3n

Then it merges upwards

                           16
                          /
                         8
                        /
                       4
                     /  \
                    2    2
                        / \                
                       1   1

which is 34 < 3n. Then it merges upwards

                           16
                          /
                         8
                        / \
                       4   4
                          /
                         2
                        / \ 
                       1   1

36 < 16 * 3 = 48

then it merges upwards

                           16
                          / \
                         8  8
                           / \
                          4   4
                             / \
                            2   2
                                /\
                               1  1

16 + 16 + 14 = 46 < 3*n = 48

in a larger case, n = 64

                     64
                    /  \
                   32  32
                       / \
                      16  16
                          / \
                         8  8
                           / \
                          4   4
                             / \
                            2   2
                                /\
                               1  1

which is 190 <= 3n = 3*64 = 192

You can prove this by induction for the general case.

Therefore, space complexity is always bounded by O(3n) = O(n), even if you implement with arrays, as long as you clean up used space after merging and execute the code sequentially rather than in parallel.
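The bound can also be checked empirically with an instrumented sequential merge sort (a sketch using std::vector instead of the answer's stack arrays; the `live` and `peak` counters are illustrative, not part of the original answer). Because the recursion is depth-first and mirrors the answer's order (allocate and sort the left half, then allocate the right half), the temporaries alive at once are the halves along a single root-to-leaf path, which sum to 2n − 2, well under 3n.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

static std::size_t live = 0;  // temporary elements currently allocated
static std::size_t peak = 0;  // maximum ever allocated at once

void merge_sort(std::vector<int>& a) {
    std::size_t n = a.size();
    if (n <= 1) return;
    std::size_t mid = n / 2;
    std::vector<int> left(a.begin(), a.begin() + mid);
    live += left.size(); peak = std::max(peak, live);
    merge_sort(left);                    // depth-first: finish left before right
    std::vector<int> right(a.begin() + mid, a.end());
    live += right.size(); peak = std::max(peak, live);
    merge_sort(right);
    std::merge(left.begin(), left.end(),
               right.begin(), right.end(), a.begin());
    live -= left.size() + right.size();  // both temporaries die on return
}
```

Sorting 64 reversed elements this way reaches a peak of 2·64 − 2 = 126 live temporary elements, inside the 3n = 192 bound claimed above.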

Example of my implementation is given below:

template<class X> 
void mergesort(X a[], int n) // X is a type using templates
{
    if (n <= 1)
    {
        return;
    }
    int q, p;
    q = n/2;
    p = n/2;
    //if(n % 2 == 1) p++; // increment by 1
    if(n & 0x1) p++; // increment by 1 when n is odd
        // note: the bitwise AND avoids a division; modern compilers
        // generate the same code for n % 2 anyway
    X b[q]; // variable-length array: a GCC/Clang extension, not standard C++

    int i = 0;
    for (i = 0; i < q; i++)
    {
        b[i] = a[i];
    }
    mergesort(b, i);
    // do mergesort here to save space
    // http://stackoverflow.com/questions/10342890/merge-sort-time-and-space-complexity/28641693#28641693
    // After returning from previous mergesort only do you create the next array.
    X c[p];
    int k = 0;
    for (int j = q; j < n; j++)
    {
        c[k] = a[j];
        k++;
    }
    mergesort(c, k);
    int r, s, t;
    t = 0; r = 0; s = 0;
    while( (r!= q) && (s != p))
    {
        if (b[r] <= c[s])
        {
            a[t] = b[r];
            r++;
        }
        else
        {
            a[t] = c[s];
            s++;
        }
        t++;
    }
    if (r==q)
    {
        while(s!=p)
        {
            a[t] = c[s];
            s++;
            t++;
        }
    }
    else
    {
        while(r != q)
        {
            a[t] = b[r];
            r++;
            t++;
        }
    }
    return;
}
2 of 8
39

a) Yes - in a perfect world you'd have to do log n merges of sizes n, n/2, n/4 ... (or better said of sizes 1, 2, 4, ... n/4, n/2, n; the levels can't be parallelized), which gives O(n). The total work is still O(n log n). In a not-so-perfect world you don't have an infinite number of processors, and context-switching and synchronization offset any potential gains.

b) Space complexity is always Ω(n) as you have to store the elements somewhere. Additional space complexity can be O(n) in an implementation using arrays and O(1) in linked list implementations. In practice implementations using lists need additional space for list pointers, so unless you already have the list in memory it shouldn't matter.

edit: if you count stack frames, then it's O(n) + O(log n), so still O(n) in the case of arrays. In the case of lists it's O(log n) additional memory.

c) Lists only need some pointers changed during the merge process. That requires constant additional memory.

d) That's why in merge-sort complexity analysis people mention 'additional space requirement' or things like that. It's obvious that you have to store the elements somewhere, but it's always better to mention 'additional memory' to keep purists at bay.

🌐
Hero Vired
herovired.com › learning-hub › blogs › space-complexity-of-merge-sort
Time and Space Complexity of Merge Sort - Hero Vired
June 27, 2024 - Even in the best case, it has a time complexity of O(n log n). Thus, Merge Sort is preferable for larger data collections due to its O(n log n) time complexity.
🌐
Stack Exchange
softwareengineering.stackexchange.com › questions › 410234 › what-is-the-worst-case-time-complexity-for-only-the-divide-portion-of-the-merge
What is the worst case Time Complexity for only the Divide Portion of the Merge Sort Algorithm? - Software Engineering Stack Exchange
May 17, 2020 - function mergeSort(unsortedArray) ... ignores half of the array in every iteration the answer would easily be arrived at as log2 n. Now I would like to calculate the Worst Case Time Complexity for only the portion which splits ...
🌐
NVIDIA Developer
developer.nvidia.com › blog › merge-sort-explained-a-data-scientists-algorithm-guide
Merge Sort Explained: A Data Scientist’s Algorithm Guide | NVIDIA Technical Blog
June 12, 2023 - The time complexity of the merge sort algorithm remains O(n log n) for best, worst, and average scenarios, making it suitable for sorting large lists and linked lists where stability is important.
🌐
Reddit
reddit.com › r/learnprogramming › understanding merge sort big o complexity
r/learnprogramming on Reddit: Understanding merge sort Big O complexity
June 26, 2022 -

I'm going to be referring to this image. So the Big O of merge sort is nlogn.

So in the example I posted above, n = 8 so the Big O should be 8log(8) = 16. I think it's because in the first green level, we go through 8 items then merge and we do the same thing for the second green level so 8+8 = 16. But then I thought when we split the initial array(the purple steps) doesn't that add to the time complexity as well?

Top answer
1 of 3
1
Splitting the array doesn't meaningfully change the runtime. To be specific, "splitting the array" is actually the act of determining the midpoint and then making the two recursive calls. It takes only constant time to determine where the middle of the array is ((low + mid) / 2). This doesn't change mergesort's runtime. A simple proof is that every "split" of the array is accompanied by a corresponding "merge" operation (since if you split the array into two pieces, you'll have to merge it back). So as an accounting trick, you can just book the cost of the split into the corresponding merge ==> this effectively makes splitting the array free, while making the merge operation slightly more expensive. (However, the merge operation will still be O(n), which is why it doesn't affect anything.)
2 of 3
1
Big O notation does not care about the specific constant factor. This allows you to ignore a lot of implementation details when describing an algorithm and still be able to determine its Big O complexity. As the other comment said, you can split an array by just changing the starting and ending points, which is constant time. But even if you literally copy the entire array into 2 arrays (doing it the inefficient way), the Big O complexity is still O(nlog(n)). If you are still confused, this is a fully rigorous proof of the time complexity of merge sort: The recurrence relation for Big O in merge sort is T(2^k) <= 2T(2^(k-1)) + C·2^k. That means to sort 2^k items, you need to sort 2^(k-1) items twice, then add in additional time for pre-processing and post-processing, which is at most a constant multiple of 2^k (that constant is C; we don't care about the specific value of C except that it's >= 1, and by "constant" we mean it's independent of k). Set U(2^k) = 2U(2^(k-1)) + C·2^k and U(2^0) = 1. Then T(2^k) <= U(2^k), so U is an upper bound of T. And you can compute U exactly: U(2^k) = C·2^k + 2U(2^(k-1)) = C·2^k + 2^1·C·2^(k-1) + 2^2·U(2^(k-2)) = ... = C·2^k + 2^1·C·2^(k-1) + ... + 2^(k-1)·C·2^1 + 2^k·U(2^0) = kC·2^k + 2^k, so T(n) <= kC·2^k + 2^k if n = 2^k. If n is not a power of 2, you can round up to the next power of 2 (so that 2^(k-1)
🌐
Quora
quora.com › What-is-the-time-complexity-of-Merge-Sort-and-why-does-it-have-this-complexity
What is the time complexity of Merge Sort and why does it have this complexity? - Quora
Answer (1 of 2): The split step of Merge Sort will take O(n) in total, not O(log(n)). If we have the runtime function of the split step: T(n) = 2T(n/2) + O(1), where T(n) is the runtime for input size n, 2 is the number of subproblems and n/2 is the ...
🌐
HappyCoders.eu
happycoders.eu › algorithms › merge-sort
Merge Sort – Algorithm, Source Code, Time Complexity
June 12, 2025 - Merge Sort has the advantage over Quicksort that, even in the worst case, the time complexity O(n log n) is not exceeded. It is also stable. These advantages come at the cost of somewhat lower constant-factor performance and an additional space requirement on the order of ...
🌐
guvi.in
studytonight.com › data-structures › merge-sort
merge sort time complexity
Time complexity of Merge Sort is O(n*Log n) in all three cases (worst, average and best), as merge sort always divides the array into two halves and takes linear time to merge them.