A sorting algorithm that, at each iteration, inserts the current input element into its proper position among the already sorted elements.
Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time by comparisons. It is much less efficient on large lists than more … Wikipedia
Factsheet
Data structure: Array
Worst-case performance: O(n²)
Wikipedia
en.wikipedia.org › wiki › Insertion_sort
Insertion sort - Wikipedia
1 month ago - The set of all worst case inputs consists of all arrays where each element is the smallest or second-smallest of the elements before it. In these cases every iteration of the inner loop will scan and shift the entire sorted subsection of the array before inserting the next element. This gives insertion sort a quadratic running time (i.e., O(n²)).
Discussions

sorting - Time Complexity of Insertion Sort - Stack Overflow
Could anyone explain why insertion sort has a time complexity of Θ(n²)? I'm fairly certain that I understand time complexity as a concept, but I don't really understand how to apply it to this sor... More on stackoverflow.com
stackoverflow.com
algorithm - Why best case for insertion sort is O(n) & not O(n^2)? - Stack Overflow
Here in such scenario, the condition ... O(n) time complexity. ... Here's one way to implement insertion sort. Take an input list and an initially-empty output list. Iterate through the input list and place each item to its appropriate position on the output list. Find the appropriate position by walking through the output list, starting at the first element. Now, if your input is already sorted, then the insertion point will always be at the beginning or end of the output ... More on stackoverflow.com
๐ŸŒ stackoverflow.com
algorithm - Why is insertion sort Θ(n²) in the average case? - Stack Overflow
Insertion sort has a runtime that is Ω(n) (when the input is sorted) and O(n²) (when the input is reverse sorted). On average, it runs in Θ(n²) time. Why is this? Why isn't the average More on stackoverflow.com
stackoverflow.com
probability - Average time complexity of insertion sort in Rosen's Discrete Mathematics and Its Applications - Mathematics Stack Exchange
I came across the following average-case time complexity analysis for the insertion sort algorithm on page 483 of "Discrete Mathematics and its Application" by Kenneth Rosen: Average-Case More on math.stackexchange.com
๐ŸŒ math.stackexchange.com
June 3, 2023
NVIDIA Developer
developer.nvidia.com › blog › insertion-sort-explained-a-data-scientists-algorithm-guide
Insertion Sort Explained – A Data Scientist's Algorithm Guide | NVIDIA Technical Blog
August 21, 2022 - The best-case time complexity of insertion sort algorithm is O(n) time complexity. Meaning that the time taken to sort a list is proportional to the number of elements in the list; this is the case when the list is already in the correct order.
W3Schools
w3schools.com › dsa › dsa_timecomplexity_insertionsort.php
DSA Insertion Sort Time Complexity
As you can see, the time used by Insertion Sort increases quickly as the number of values \(n\) increases. Use the simulation below to see how the theoretical time complexity \(O(n^2)\) (red line) compares with the number of operations in actual Insertion Sort runs.
GeeksforGeeks
geeksforgeeks.org › dsa › insertion-sort-algorithm
Insertion Sort Algorithm - GeeksforGeeks
Please refer Complexity Analysis of Insertion Sort for details. ... Simple and easy to implement. Stable sorting algorithm. Efficient for small lists and nearly sorted lists. Space-efficient as it is an in-place algorithm. Adaptive: the number of inversions is directly proportional to the number of swaps. For example, no swapping happens for a sorted array, and it takes only O(n) time.
Published February 24, 2026
Find elsewhere
Top answer
1 of 6
8

At first, this does not seem to be proper code for insertion sort; it looks like bubble sort code used in reverse.
In insertion sort you don't swap the small element past every larger element that falls before it. Instead, you skim through the larger elements that fall before it, and only when no elements are left, or no larger element remains, do you place the small element at that position and shift all the succeeding elements.

As for the O(n) part:
Let's consider an array of five already sorted elements - arr[11,13,15,17,19]. We move from the first position to the last.
Step 1: Take element 11; as it is the first element, we keep it as it is.
Step 2: Take element 13 and look at the element before it (element 11); as 13 > 11, there is no need to look at any elements before 11.
Step 3: Take element 15 and look at the element before it (element 13); as 15 > 13, there is no need to look at any elements before 13.
Step 4: Take element 17 and look at the element before it (element 15); as 17 > 15, there is no need to look at any elements before 15.
Step 5: Take element 19 and look at the element before it (element 17); as 19 > 17, there is no need to look at any elements before 17.

As we can see, for five already sorted elements we needed only 4 comparisons (n - 1 in general), so for n sorted elements we need only O(n) comparisons.

I hope above example clarifies your doubt.
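The early-exit behavior described above can be made concrete. Here is a minimal Python sketch (the function name is mine) that counts key comparisons and confirms that an already sorted array needs only n - 1 of them:

```python
def insertion_sort_comparisons(arr):
    """Insertion sort that also counts key comparisons (illustrative sketch)."""
    a = list(arr)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Scan left only while elements are larger than the key.
        while j >= 0:
            comparisons += 1
            if a[j] <= key:
                break          # stop early: everything to the left is <= key
            a[j + 1] = a[j]    # shift the larger element one slot right
            j -= 1
        a[j + 1] = key
    return a, comparisons

# Already sorted input: one comparison per element after the first.
print(insertion_sort_comparisons([11, 13, 15, 17, 19]))  # -> ([11, 13, 15, 17, 19], 4)
```

On the sorted five-element array from the example, the inner loop stops after a single comparison each time, giving the 4 comparisons (n - 1) discussed above.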

2 of 6
6

Your code always runs in O(n^2). You have to break out of the inner for loop as soon as you have found the place where the element belongs.

Programiz PRO
programiz.pro › resources › dsa-insertion-sort-complexity
Exploring Time and Space Complexities of Insertion Sort
December 19, 2024 - Insertion sort is a comparison-based sorting algorithm that builds the sorted array one element at a time. It has a time complexity of O(n^2) in the worst and average cases, but O(n) in the best case.
Medium
medium.com › @YodgorbekKomilo › insertion-sort-in-java-an-in-depth-look-at-the-algorithm-and-its-time-complexity-0a9bd9f44811
Insertion Sort in Java: An In-Depth Look at the Algorithm and Its Time Complexity | by Yodgorbek Komilov | Medium
September 22, 2024 - Insertion sort is a straightforward algorithm that can be efficient for small data sets or nearly sorted arrays. While its O(n²) time complexity may not make it suitable for large data sets, understanding its mechanics is crucial for grasping ...
freeCodeCamp
freecodecamp.org › news › most-asked-questions-about-insertion-sort-algorithm
Insertion Sort Algorithm - Most Asked Questions About Insertion Sort
November 7, 2024 - The best-case running time of an insertion sort algorithm is O(n). This occurs when the input array is sorted, and no elements must be moved. This results in n-1 comparisons, which is approximately equal to n. Therefore, the time complexity is O(n).
Medium
jtkyaw.medium.com › calculate-worst-case-runtime-complexity-for-insertion-sort-aeee0b54c20a
Calculate Worst-Case Runtime Complexity for Insertion Sort | by Justin Thein Kyaw | Medium
January 14, 2025 - For example, the assignment key = A[j] is executed n - 1 times, so the total time for that step is c₂ × (n - 1). Let's write down a general formula that describes the runtime of the Insertion Sort algorithm.
HappyCoders.eu
happycoders.eu › algorithms › insertion-sort
Insertion Sort โ€“ Algorithm, Source Code, Time Complexity
June 12, 2025 - With n elements, and thus n-1 steps (since we start with the second element), we arrive at n-1 comparison operations. Therefore, the best-case time complexity of Insertion Sort is O(n).
Top answer
1 of 4
62

To answer this question, let's first determine how we can evaluate the runtime of insertion sort. If we can find a nice mathematical expression for the runtime, we can then manipulate that expression to determine the average runtime.

The key observation we need to have is that the runtime of insertion sort is closely related to the number of inversions in the input array. An inversion in an array is a pair of elements A[i] and A[j] that are in the wrong relative order - that is, i < j, but A[j] < A[i]. For example, in this array:

0 1 3 2 4 5

There is one inversion: the 3 and 2 should be switched. In this array:

4 1 0 3 2

There are 6 inversions:

  • 4 and 1
  • 4 and 0
  • 4 and 3
  • 4 and 2
  • 1 and 0
  • 3 and 2

One important property of inversions is that a sorted array has no inversions in it, since every element should be smaller than everything coming after it and larger than everything coming before it.

The reason this is significant is that there is a direct link between the amount of work done in insertion sort and the number of inversions in the original array. To see this, let's review some quick pseudocode for insertion sort:

  • For i = 2 .. n: (Assuming 1-indexing)
    • Set j = i - 1.
    • While j ≥ 1 and A[j] > A[j + 1]:
      • Swap A[j] and A[j + 1].
      • Set j = j - 1.

Normally, when determining the total amount of work done by a function like this, we could determine the maximum amount of work done by the inner loop, then multiply it by the number of iterations of the outer loop. This will give an upper bound, but not necessarily a tight bound. A better way to account for the total work done is to recognize that there are two different sources of work:

  • The outer loop, which counts 2, 3, ..., n, and
  • The inner loop, which performs swaps.

That outer loop always does ฮ˜(n) work. The inner loop, however, does an amount of work that's proportional to the total number of swaps made across the entire runtime of the algorithm. To see how much work that loop will do, we will need to determine how many total swaps are made across all iterations of the algorithm.

This is where inversions come in. Notice that when insertion sort runs, it always swaps adjacent elements in the array, and it only swaps the two elements if they form an inversion. So what happens to the total number of inversions in the array after we perform a swap? Well, graphically, we have this:

 [---- X ----] A[j] A[j+1] [---- Y ----]

Here, X is the part of the array coming before the swapped pair and Y is the part of the array coming after the swapped pair.

Let's suppose that we swap A[j] and A[j+1]. What happens to the number of inversions? Well, let's consider some arbitrary inversion between two elements. There are three cases:

  • Both elements are in X, or both elements are in Y, or one element is in X and one element is in Y. Then the inversion is still there, since we didn't move any of those elements.
  • One element is in X or Y and the other is either A[j] or A[j+1]. Then the inversion is still there, since the relative orderings of the elements haven't changed, even though their absolute positions might have.
  • One element is A[j] and the other A[j+1]. Then the inversion is removed after the swap.

This means that after performing a swap, we decrease the number of inversions by exactly one, because only the inversion of the adjacent pair has disappeared. This is hugely important for the following reason: If we start off with I inversions, each swap will decrease the number by exactly one. Once no inversions are left, no more swaps are performed. Therefore, the number of swaps equals the number of inversions!

Given this, we can accurately express the runtime of insertion sort as Θ(n + I), where I is the number of inversions of the original array. This matches our original runtime bounds - in a sorted array, there are 0 inversions, and the runtime is Θ(n + 0) = Θ(n), and in a reverse-sorted array, there are n(n - 1)/2 inversions, and the runtime is Θ(n + n(n - 1)/2) = Θ(n²). Nifty!
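The swaps-equal-inversions claim is easy to check empirically. A small Python sketch (helper names are mine) compares a brute-force inversion count against the sort's swap count:

```python
from itertools import combinations
import random

def count_inversions(a):
    # Brute-force O(n^2) count of pairs (i, j) with i < j and a[j] < a[i].
    return sum(1 for i, j in combinations(range(len(a)), 2) if a[j] < a[i])

def count_swaps(arr):
    # Swap-based insertion sort that returns its total number of swaps.
    a, swaps = list(arr), 0
    for i in range(1, len(a)):
        j = i - 1
        while j >= 0 and a[j] > a[j + 1]:
            a[j], a[j + 1] = a[j + 1], a[j]
            swaps += 1
            j -= 1
    return swaps

# The array from the example above has 6 inversions, so sorting it takes 6 swaps.
print(count_inversions([4, 1, 0, 3, 2]), count_swaps([4, 1, 0, 3, 2]))  # -> 6 6

# Spot-check on random arrays: swaps == inversions every time.
random.seed(1)
for _ in range(100):
    arr = [random.randrange(50) for _ in range(12)]
    assert count_swaps(arr) == count_inversions(arr)
```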

So now we have a super precise way of analyzing the runtime of insertion sort given a particular array. Let's see how we can analyze its average runtime. To do this, we'll need to make an assumption about the distribution of the inputs. Since insertion sort is a comparison-based sorting algorithm, the actual values of the input array don't actually matter; only their relative ordering actually matters. In what follows, I'm going to assume that all the array elements are distinct, though if this isn't the case the analysis doesn't change all that much. I'll point out where things go off-script when we get there.

To solve this problem, we're going to introduce a bunch of indicator variables of the form Xij, where Xij is a random variable that is 1 if A[i] and A[j] form an inversion and 0 otherwise. There will be n(n - 1)/2 of these variables, one for each distinct pair of elements. Note that these variables account for each possible inversion in the array.

Given these X's, we can define a new random variable I that is equal to the total number of inversions in the array. This will be given by the sum of the X's:

I = Σ Xij

We're interested in E[I], the expected number of inversions in the array. Using linearity of expectation, this is

E[I] = E[Σ Xij] = Σ E[Xij]

So now if we can get the value of E[Xij], we can determine the expected number of inversions and, therefore, the expected runtime!

Fortunately, since all the Xij's are binary indicator variables, we have that

E[Xij] = Pr[Xij = 1] = Pr[A[i] and A[j] are an inversion]

So what's the probability, given a random input array with no duplicates, that A[i] and A[j] are an inversion? Well, half the time, A[i] will be less than A[j], and the other half of the time A[i] will be greater than A[j]. (If duplicates are allowed, there's a sneaky extra term to handle duplicates, but we'll ignore that for now). Consequently, the probability that there's an inversion between A[i] and A[j] is 1 / 2. Therefore:

E[I] = Σ E[Xij] = Σ (1 / 2)

Since there are n(n - 1)/2 terms in the sum, this works out to

E[I] = n(n - 1) / 4 = Θ(n²)

And so, on expectation, there will be Θ(n²) inversions, so on expectation the runtime will be Θ(n² + n) = Θ(n²). This explains why the average-case behavior of insertion sort is Θ(n²).
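The n(n - 1)/4 figure can be sanity-checked by simulation. A short Python sketch (trial count and seed are arbitrary) averages the inversion count over random permutations:

```python
import random
from itertools import combinations

def count_inversions(a):
    # Brute-force O(n^2) count of pairs (i, j) with i < j and a[j] < a[i].
    return sum(1 for i, j in combinations(range(len(a)), 2) if a[j] < a[i])

# Estimate E[I] over random permutations and compare with n(n - 1)/4.
random.seed(42)
n, trials = 10, 20000
total = 0
for _ in range(trials):
    a = list(range(n))
    random.shuffle(a)
    total += count_inversions(a)

print(total / trials, n * (n - 1) / 4)  # sample mean is close to 22.5
```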

Hope this helps!

2 of 4
2

For fun I wrote a program which ran through all data combinations for a vector of size n counting comparisons and found that the best case is n-1 (all sorted) and the worst is (n*(n-1))/2.

Some results for different n:

  n min     ave     max ave/(min+max) ave/max

  2   1     1         1        0.5000
  3   2     2.667     3        0.5334
  4   3     4.917     6        0.5463
  5   4     7.717    10        0.5512
  6   5    11.050    15        0.5525
  7   6    14.907    21        0.5521
  8   7    19.282    28        0.5509
  9   8    24.171    36        0.5493
 10   9    29.571    45        0.5476
 11  10    35.480    55        0.5458
 12  11    41.897    66        0.5441

It seems the average value follows min closer than it does max.

EDIT: some additional values

 13  12    48.820    78        0.5424        
 14  13    56.248    91        0.5408

EDIT: value for 15

 15  14    64.182   105        0.5393

EDIT: selected higher values

 16  15    72.619   120        -       0.6052
 32  31   275.942   496        -       0.5563
 64  63  1034.772  1953        -       0.5294
128 127  4186.567  8128        -       0.5151
256 255 16569.876 32640        -       0.5077

I recently wrote a program to compute the average number of comparisons for insertion sort for higher values of n. From these I have drawn the conclusion that as n approaches infinity the average case approaches the worst case divided by two.
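An exhaustive experiment like the one described is straightforward to reproduce. A Python sketch (my own comparison-counting convention: one comparison per key test, with early exit) regenerates the first rows of the table:

```python
from itertools import permutations

def comparisons(arr):
    # Key comparisons made by insertion sort with an early-exit inner loop.
    a, c = list(arr), 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            c += 1
            if a[j] <= key:
                break          # key is in place; stop comparing
            a[j + 1] = a[j]    # shift larger element right
            j -= 1
        a[j + 1] = key
    return c

# Enumerate all permutations of n distinct values and report min/ave/max.
for n in range(2, 7):
    counts = [comparisons(p) for p in permutations(range(n))]
    print(n, min(counts), round(sum(counts) / len(counts), 3), max(counts))
```

For n = 2 through 6 this reproduces the min, average, and max columns shown above (e.g. 4, 7.717, 10 for n = 5).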

Brilliant
brilliant.org › wiki › insertion
Insertion Sort | Brilliant Math & Science Wiki
Insertion sort runs in \(O(n)\) time in its best case and runs in \(O(n^2)\) in its worst and average cases. Best Case Analysis: Insertion sort performs two operations: it scans through the list, comparing each pair of elements, and it swaps elements if they are out of order.
Quora
quora.com › What-is-the-worst-case-running-time-of-insertion-sort
What is the worst-case running time of insertion sort? - Quora
Answer (1 of 3): The worst-case complexity of InsertionSort, an in-place sorting algorithm requiring only 1 extra instance of the data type being sorted, is O(n^2). The worst arrangement for InsertionSort is reverse-ordered, where n(n-1)/2 comparisons ...
Programiz
programiz.com › dsa › insertion-sort
Insertion Sort (With Code in Python/C++/Java/C)
Best Case Complexity: O(n) When the array is already sorted, the outer loop runs for n number of times whereas the inner loop does not run at all. So, there are only n number of comparisons.
University of Maryland
math.umd.edu › ~immortal › CMSC351 › notes › insertionsort.pdf
CMSC 351: InsertionSort Justin Wyss-Gallifent July 9, 2024 1
time complexity what is the worst-case O time complexity of your new pseudocode? 6. Another way to think of InsertionSort is that for each index i we want to know where to the left to insert it. Note that for each i the sublist to the left is always sorted as the algorithm progresses.