Is there a sorting algorithm with linear time complexity and O(1) auxiliary space complexity? - Stack Overflow
If we are sorting only integers, we can use the in-situ variant of counting sort, which has O(k) space complexity, independent of the variable n. In other words, when we treat k as a constant, the space complexity is O(1).
Alternatively, we can use in-place radix sort with lg k phases of binary partitioning, which has O(lg k) space complexity (due to recursion), or even fewer phases by using counting sort to determine the bucket boundaries for an n-way partition. These solutions have O(lg k * n) time complexity, which is O(n) when expressed only in terms of the variable n (i.e., when k is considered constant).
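To make the binary-partition idea concrete, here is a sketch (names and details mine, not from any reference implementation) of in-place MSD binary radix sort: partition the range on the current bit, zeros to the left, ones to the right, then recurse on each half with the next lower bit.

```javascript
// In-place MSD binary radix sort sketch. Assumes non-negative integers below
// 2 ** 31; the only extra space is the recursion stack, one level per bit.
let binaryRadixSort = (arr, bit = 30, lo = 0, hi = arr.length) => {
  if (bit < 0 || hi - lo < 2) return arr;
  let i = lo, j = hi;
  while (i < j) {
    if ((arr[i] >> bit) & 1) {
      j--;
      [arr[i], arr[j]] = [arr[j], arr[i]]; // move a one-bit item to the right
    } else {
      i++; // zero-bit item is already on the left
    }
  }
  binaryRadixSort(arr, bit - 1, lo, i); // items with the bit clear
  binaryRadixSort(arr, bit - 1, i, hi); // items with the bit set
  return arr;
};
```

In practice one would start `bit` at lg k rather than 30; with lg k levels of O(n) partitioning this is the O(lg k * n) step complexity and O(lg k) stack space mentioned above.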
Another possible approach to obtain O(n) step complexity and O(1) space complexity, when k is considered constant, is to use something that could be called subtraction sort, as described by the OP in their own answer, or elsewhere. It has step complexity O(sum(input)), which is better than O(kn) (and for certain specific inputs it is even better than binary radix sort's O(lg k * n), e.g. for all inputs of the form [k, 0, 0, ... 0]), and space complexity O(1).
Yet another solution is to use bingo sort, which has step complexity O(vn), where v <= k is the number of unique values in the input, and space complexity O(1).
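For reference, bingo sort can be sketched like this (my transcription of the classic algorithm: each pass finds the smallest remaining value, then swaps every element equal to it into the sorted prefix, so there is one pass per distinct value):

```javascript
// Bingo sort sketch: O(vn) steps, O(1) extra space (a few scalars).
let bingoSort = arr => {
  let next = 0; // first unsorted position
  while (next < arr.length) {
    // Find the smallest value in the unsorted suffix.
    let bingo = arr[next];
    for (let i = next + 1; i < arr.length; i++) {
      if (arr[i] < bingo) bingo = arr[i];
    }
    // Swap every occurrence of that value to the front of the suffix.
    for (let i = next; i < arr.length; i++) {
      if (arr[i] === bingo) {
        [arr[i], arr[next]] = [arr[next], arr[i]];
        next++;
      }
    }
  }
  return arr;
};
```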
Note that none of these sorting solutions is stable, which matters if we sort something more than just integers (i.e., arbitrary objects with integer keys).
There is also a cutting-edge stable partition algorithm, described in this paper, with O(1) space complexity. Combining it with radix sort, one may construct a stable linear-time sorting algorithm with constant space: O(lg k * n) step complexity and O(1) space complexity.
EDIT:
As per the request in the comments, I've tried to find a source for the "in-situ" variant of counting sort, but haven't found anything of good quality to link to (it's really strange that there is no readily available description of such a basic algorithm). Therefore, I'm posting the algorithm here:
The regular counting sort (from Wikipedia)
count = array of k+1 zeros
for x in input do
    count[key(x)] += 1

total = 0
for i in 0, 1, ... k do
    count[i], total = total, count[i] + total

output = array of the same length as input
for x in input do
    output[count[key(x)]] = x
    count[key(x)] += 1
return output
It assumes that the input consists of some objects which can be identified by an integer key in the range 0 to k - 1. It uses O(n + k) extra space.
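For the curious, a direct JavaScript transcription of the pseudocode above (mine; the `key` accessor is passed in as a parameter):

```javascript
// Regular counting sort: stable, O(n + k) extra space.
let countingSort = (input, k, key) => {
  let count = new Array(k + 1).fill(0);
  for (let x of input) count[key(x)] += 1;
  let total = 0;
  for (let i = 0; i <= k; i++) {
    // Prefix sums: count[i] becomes the first output index for key i.
    [count[i], total] = [total, count[i] + total];
  }
  let output = new Array(input.length);
  for (let x of input) {
    output[count[key(x)]] = x;
    count[key(x)] += 1;
  }
  return output;
};
```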
The trivial in-situ variant for integers
This variant requires the input to be pure integers, not arbitrary objects with integer keys. It simply reconstructs the input array from the count array.
count = array of k zeros
for x in input do
    count[x] += 1

i = 0
for x in 0, 1, ... k - 1 do
    for j in 1, 2, ... count[x] do
        input[i], i = x, i + 1
return input
It uses O(k) extra space.
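The same variant in JavaScript (my transcription; for convenience it infers k as max + 1, which assumes a non-empty array of small non-negative integers):

```javascript
// Trivial in-situ counting sort for pure integers: O(k) extra space.
let countingSortInts = arr => {
  if (arr.length === 0) return arr;
  let k = Math.max(...arr) + 1;
  let count = new Array(k).fill(0);
  for (let x of arr) count[x] += 1;
  // Rewrite the input from the counts: count[x] copies of each value x.
  let i = 0;
  for (let x = 0; x < k; x++) {
    for (let j = 0; j < count[x]; j++) arr[i++] = x;
  }
  return arr;
};
```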
The complete in-situ variant for arbitrary objects with integer keys
This variant accepts arbitrary objects, similarly to the regular variant. It uses swaps to place objects in their appropriate positions. After computing the count array in the first two loops, it leaves that array immutable and uses another array, called done, to keep track of how many objects with a given key have already been placed in the right position.
count = array of k+1 zeros
for x in input do
    count[key(x)] += 1

total = 0
for i in 0, 1, ... k do
    count[i], total = total, count[i] + total

done = array of k zeros
for i in 0, 1, ... k - 1 do
    current = count[i] + done[i]
    while done[i] < count[i + 1] - count[i] do
        x = input[current]
        destination = count[key(x)] + done[key(x)]
        if destination = current then
            current += 1
        else
            swap(input[current], input[destination])
        done[key(x)] += 1
return input
This variant is not stable, so it cannot be used as a subroutine in radix sort. It uses O(2k) = O(k) extra space.
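A JavaScript transcription of this variant (mine; the `key` accessor is passed in as a parameter, and keys must be integers in 0 to k - 1):

```javascript
// Complete in-situ counting sort for objects with integer keys:
// O(k) extra space, not stable.
let countingSortInSitu = (input, k, key) => {
  let count = new Array(k + 1).fill(0);
  for (let x of input) count[key(x)] += 1;
  let total = 0;
  for (let i = 0; i <= k; i++) {
    [count[i], total] = [total, count[i] + total]; // count[i] = first slot for key i
  }
  let done = new Array(k).fill(0); // how many items with each key are placed
  for (let i = 0; i < k; i++) {
    let current = count[i] + done[i];
    while (done[i] < count[i + 1] - count[i]) {
      let x = input[current];
      let destination = count[key(x)] + done[key(x)];
      if (destination === current) {
        current += 1; // already in its final position
      } else {
        [input[current], input[destination]] = [input[destination], input[current]];
      }
      done[key(x)] += 1;
    }
  }
  return input;
};
```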
I wanted to include here an algorithm which is an improvement on Mathphile's first answer. There, the idea was to shave 1 off each number in the unsorted suffix of the input (while swapping sorted numbers into the prefix). Whenever a number in the unsorted suffix hits 0, it is no larger than any other number in the unsorted suffix (because all numbers are being reduced at the same rate).
There is a major improvement possible: with no change to time complexity, we can subtract numbers much bigger than 1; in fact, we can subtract a number equal to the smallest remaining unsorted item. This allows the sort to function well regardless of the numeric sizes of the array items, and even on floating-point values. A JavaScript implementation:
let subtractSort = arr => {
  let sortedLen = 0; // length of the sorted prefix
  let lastMin = 0;   // amount to subtract this pass; could also start as `Math.min(...arr)`
  let total = 0;     // total subtracted so far == original value of any item now at 0
  while (sortedLen < arr.length) {
    let min = arr[sortedLen];
    for (let i = sortedLen; i < arr.length; i++) {
      if (arr[i]) {
        arr[i] -= lastMin;
        if (arr[i]) min = Math.min(min, arr[i]);
      } else {
        // This item hit 0 on the previous pass: restore its original
        // value and move it into the sorted prefix.
        arr[i] = arr[sortedLen];
        arr[sortedLen] = total;
        sortedLen++;
      }
    }
    total += lastMin;
    lastMin = min;
  }
  return arr;
};
let examples = [
  [ 3, 2, 5, 4, 8, 5, 7, 1 ],
  [ 3000, 2000, 5000, 4000, 8000, 5000, 7000, 1000 ],
  [ 0.3, 0.2, 0.5, 0.4, 0.8, 0.5, 0.7, 0.1 ],
  [ 26573726573, 678687, 3490, 465684586 ]
];
for (let example of examples) {
  console.log(`Unsorted: ${example.join(', ')}`);
  console.log(`Sorted: ${subtractSort(example).join(', ')}`);
  console.log('');
}
Note that this sort only works with positive numbers. To handle negative numbers, we would need to find the most negative item, subtract that negative value from every item in the array, sort the array, and finally add the most negative value back to every item; overall this doesn't increase the time complexity.
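That shift trick can be sketched generically: given any sort that only handles positive numbers (the `subtractSort` above fits), shift the items, sort, and shift back. `sortWithNegatives` and its `positiveSort` parameter are names introduced here, and I shift by min - 1 so every item ends up strictly positive rather than merely non-negative:

```javascript
// Wrapper that extends a positive-only sort to arbitrary numbers.
// The two extra passes are O(n), so the time complexity is unchanged.
let sortWithNegatives = (arr, positiveSort) => {
  if (arr.length === 0) return arr;
  let shift = Math.min(...arr) - 1; // after subtracting, the smallest item is 1
  if (shift >= 0) return positiveSort(arr); // everything is already positive
  for (let i = 0; i < arr.length; i++) arr[i] -= shift;
  positiveSort(arr);
  for (let i = 0; i < arr.length; i++) arr[i] += shift; // undo the shift
  return arr;
};
```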