🌐
GeeksforGeeks
geeksforgeeks.org › dsa › understanding-time-complexity-simple-examples
Time Complexity with Simple Examples - GeeksforGeeks
The Time Complexity is not equal ... executes. For example: Write code in C/C++ or any other language to find the maximum between N numbers, where N varies from 10, 100, 1000, and 10000....
Published   February 26, 2026
🌐
GeeksforGeeks
geeksforgeeks.org › dsa › time-complexity-and-space-complexity
Time and Space Complexity - GeeksforGeeks
July 31, 2025 - In order to calculate time complexity ... to understand the process of calculation: Suppose a problem is to find whether a pair (X, Y) exists in an array, A of N elements whose sum is Z....
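The pair-sum problem in that snippet can be sketched with a hash set, which brings the check down to O(N) time; names A and Z follow the snippet, and the function name is my own:

```python
# Does array A contain a pair whose sum is Z? A hash set gives O(N)
# time versus the O(N^2) brute force that checks every pair.

def has_pair_with_sum(A, Z):
    seen = set()
    for x in A:
        if Z - x in seen:   # O(1) average-case lookup
            return True
        seen.add(x)
    return False
```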
Discussions

time complexity - Examples of Algorithms which has O(1), O(n log n) and O(log n) complexities - Stack Overflow
What are some algorithms which we use daily that have O(1), O(n log n) and O(log n) complexities? More on stackoverflow.com
🌐 stackoverflow.com
Time Complexity
Let's start with some basics. You have some input X = {x0, x1, ..., xn} and some algorithm F which, when run on X, produces some output F(X). You want to know how long it takes to run. The obvious answer is to grab a stopwatch and time it. This will tell you how long it takes to run for a specific input. You can run it with many different inputs and write down the results, and for most algorithms you'll likely notice that the running time mainly depends on the number of items in X. So you can imagine there being a function t = T(F, n) that returns the running time for the algorithm and input size.

These calculations are likely to be finicky, because they have to account for all the particulars of the implementation. For example, suppose you analyze a bubble sort algorithm, calculate the running time of each line, figure out how many times each line is executed, and come up with T(bubblesort, n) = 0.00037n(0.00173n + 0.000074) + 0.000731. You look at several other sorting algorithms and come up with even more complex functions. And of course the constants are only valid for the particular computer you ran or analyzed the timings on. These functions are interesting, but not particularly handy for a programmer or mathematician who wants to be able to talk in a convenient way about the running times of algorithms in general.

To make things easier, we make two simplifying assumptions: first, that the main thing we're interested in is how these T-functions behave when n is large, and second, that we're interested in how they behave relative to each other (i.e., not on any particular computer). This brings us to big-O notation. A big-O term like O(n^2) represents a set of T-functions. For the first assumption above, we choose specifically those for which the term n^2 dominates in the limit as n goes to infinity. Recall from calculus that in the limit of a polynomial, the highest power term dominates. And for the second assumption, we eliminate all multiplicative constants.
This brings us to the formal definition:

O(g(n)) = { f(n) : ∃c ∃k ∀n, n > k ⇒ 0 ≤ f(n) ≤ c·g(n) }

This says that when we write f(n) ∈ O(g(n)), we mean that we can choose constants c and k such that for all n > k, f(n) is always less than or equal to c·g(n). Less formally, for large n, f(n) differs from g(n) by at most a multiplicative constant.

What does this actually mean when we make claims about algorithms? Suppose we know some algorithm - let's call it slowsort - is O(n^2). This means that if you went to the trouble of measuring and defining a T-function for slowsort on your computer, that T-function would be in the set of functions which, for large n, differ from n^2 only by a multiplicative constant. Importantly, this says very little about small values of n. If you're going to spend all your time sorting 10-item lists, then the big-O classification isn't useful to you. But it does tell you something about the behavior as you scale up the input size.

Suppose we also know that another algorithm - let's say fastersort - is O(n log n). It's important to understand that this does not tell us that fastersort is actually faster or slower for any specific value of n we might have in mind. Perhaps fastersort takes an hour to organize its working memory before it even gets started. But in the limit, as n goes to infinity, there will eventually be some value of n above which the O(n log n) algorithm always dominates the O(n^2) algorithm.

Hopefully this is reasonably clear. There are more complex situations, like when the running time doesn't just depend on a single value for the input size and we see things like O(m*n). But even then, the key points are that we're talking about what happens for arbitrarily large values, and that we're making a statement about what category the algorithm's T-function falls into.

(Note that my use of set theory here is at odds with many standard CS textbooks.
Big-O notation was first used by Paul Bachmann in 1894, making it older than modern set theory. The original big-O notation would write f(n)=O(n^(2)) instead of f(n)∈O(n^(2)). In the original notation, the equals sign does not mean algebraic equality, but rather establishes a relationship of classification between the left and right sides. This leads to endless confusion in a modern context since people can't help reading = as algebraic equality.) More on reddit.com
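The fastersort-vs-slowsort point above can be checked numerically. The cost functions here are made up for illustration: even if the O(n log n) algorithm carries a 1000x constant factor, there is some n beyond which it always beats the O(n^2) one.

```python
import math

def slowsort_cost(n):
    return n * n                      # O(n^2), constant factor 1

def fastersort_cost(n):
    return 1000 * n * math.log2(n)    # O(n log n), constant factor 1000

# First input size at which the O(n log n) algorithm wins despite
# its much larger constant.
crossover = next(n for n in range(2, 10**7)
                 if fastersort_cost(n) < slowsort_cost(n))
```

For these made-up constants the crossover lands in the low tens of thousands; below that, slowsort is actually faster, which is exactly why big-O says nothing about small n.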
🌐 r/AskComputerScience
6
8
August 7, 2023
Confusion Regarding Determining Time Complexity/BigO
In my mind, the coefficient would be kept as it does have a significant effect even as n grows, however I recall hearing that you are not meant to include it?

Keep in mind that while people like to give intuitive explanations of big O, it has a specific formal definition: f(n) is O(g(n)) if lim f(n)/g(n) < ∞ as n → ∞. If f(n) = 3g(n) and g(n) ≠ 0, then f(n)/g(n) = 3g(n)/g(n) = 3, which is less than infinity, and therefore f(n) is O(g(n)) despite the constant.

If you want to develop some intuition for why things are defined this way, think about it in terms of scaling. Say you have 100,000 users and need to do some sort of operation on all of them. Doing 3 times as many operations isn't ideal, but it's manageable, and the tradeoff doesn't really change if you go to 1,000,000 users - you're still just doing 3 times as many operations. However, if you're instead doing n times as many operations, the characteristics change a lot when you 10x your user base - you go from a working solution to an infeasible one, rather than from a working solution to a slightly more expensive but still working one.

Are you meant to add the time complexities of the functions? Do you simply use the greatest order time complexity?

(Finite) addition and taking the max are the exact same operation when it comes to big O. If you're working on the math side of big O, this isn't difficult to prove and is a good exercise.

My reasoning is that I'm pretty sure n(n-1)/2 is the number of edges in a complete graph, however I can't help but feel that it is incorrect.

Your reasoning is totally correct, assuming you represent your edges as pairs of vertices and actually put together a data structure. Of course, if your implementation is more abstract (e.g. it contains a set of vertices and then just returns true if asked whether any given pair is connected) then it might not take that long to construct. More on reddit.com
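The ratio argument above can be echoed numerically (the functions f, g, h here are made-up examples): with f(n) = 3·g(n) the ratio stays at 3 no matter how large n gets, so f is O(g), while a ratio that itself grows with n is what actually changes the complexity class.

```python
def f(n): return 3 * n * n    # differs from g only by the constant 3
def g(n): return n * n
def h(n): return n * n * n    # differs from g by a factor of n

# f/g stays bounded (constant 3); h/g grows without bound.
ratios_fg = [f(n) / g(n) for n in (10, 1000, 100000)]
ratios_hg = [h(n) / g(n) for n in (10, 1000, 100000)]
```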
🌐 r/learnprogramming
7
1
July 17, 2023
Big O is really frustrating. How to determine time complexity accurately
Big O isn’t super complicated. Effectively you are trying to characterize your algorithm and determine its worst possible run-time complexity. This is typically determined by how many loops you use, and a big part of optimizing code is trying to reduce the number of loops in your codebase.

Something with O(1) is called “constant” time and takes a fixed number of operations to return — like looking something up in a hash map, for example.

Something with O(n) is called “linear” time — i.e. it takes n steps to compute. This might be iterating over a list of numbers to get a sum or an average, for example.

Something with O(n^2) is called “quadratic” time — i.e. it takes n^2 steps to process an input. This might be iterating over each object in your array and then iterating over a child list in each parent object, for example.

When interviewers ask about this, it’s mostly to make sure you are loosely familiar with the concept of algorithmic complexity. More on reddit.com
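The three classes mentioned in that answer can be sketched as tiny functions (names and data are my own examples, not from the answer):

```python
def constant_lookup(d, key):
    # O(1): hash-map lookup, cost independent of len(d)
    return d.get(key)

def linear_sum(nums):
    # O(n): one pass over the list
    total = 0
    for x in nums:
        total += x
    return total

def quadratic_pairs(nums):
    # O(n^2): nested loop, body runs len(nums)**2 times
    count = 0
    for a in nums:
        for b in nums:
            count += 1
    return count
```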
🌐 r/learnprogramming
15
5
May 1, 2023
estimate of time taken for running an algorithm
In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the … Wikipedia
🌐
Wikipedia
en.wikipedia.org › wiki › Time_complexity
Time complexity - Wikipedia
4 days ago - For example, the Adleman–Pomerance–Rumely primality test runs for nO(log log n) time on n-bit inputs; this grows faster than any polynomial for large enough n, but the input size must become impractically large before it cannot be dominated by a polynomial with small degree.
🌐
Crio
crio.do › blog › time-complexity-explained
Time Complexity Examples - Simplified 10 Min Guide
September 8, 2022 - When using divide and conquer algorithms, such as binary search, the time complexity is O(log n). Another example is quicksort, in which we partition the array into two sections and find a pivot element in O(n) time each time.
🌐
Great Learning
mygreatlearning.com › blog › data science and analytics › what is time complexity and why is it essential?
What is Time Complexity? Examples and Algorithms
September 25, 2024 - The time complexity of this approach is typically better than O(n) because the search is guided by the spatial structure, which eliminates the need to compare distances with all drivers. It could be closer to O(log n) or even better, depending on the specifics of the spatial index. In this example, the difference in time complexity between the linear search and the spatial indexing approach showcases how algorithmic choices can significantly impact the real-time performance of a critical operation in a ride-sharing app.
🌐
USACO
usaco.guide › home › bronze › time complexity
Time Complexity · USACO Guide
Another example is that although binary search on an array and insertion into an ordered set are both ... Constant factor is entirely ignored in Big O notation. This is fine most of the time, but if the time limit is particularly tight, you may receive time limit exceeded (TLE) with the intended complexity.
🌐
W3Schools
w3schools.com › dsa › dsa_timecomplexity_theory.php
DSA Time Complexity
When talking about "operations" here, "one operation" might take one or several CPU cycles, and it really is just a word helping us to abstract, so that we can understand what time complexity is, and so that we can find the time complexity for different algorithms. One operation in an algorithm can be understood as something we do in each iteration of the algorithm, or for each piece of data, that takes constant time. For example: Comparing two array elements, and swapping them if one is bigger than the other, like the Bubble sort algorithm does, can be understood as one operation.
🌐
Encyclopedia Britannica
britannica.com › science › mathematics
Time complexity | Definition, Examples, & Facts | Britannica
March 7, 2023 - For example, the Quicksort sorting algorithm has an average time complexity of O(n log n), but in a worst-case scenario it can have O(n^2) complexity.
🌐
Adrian Mejia
adrianmejia.com › most-popular-algorithms-time-complexity-every-programmer-should-know-free-online-tutorial-course
8 time complexities that every programmer should know | Adrian Mejia Blog
September 19, 2019 - Let’s see another example. We want to sort the elements in an array. One way to do this is using bubble sort as follows: You might also notice that for a very big n, the time it takes to solve the problem increases a lot. Can you spot the relationship between nested loops and the running time? When a function has a single loop, it usually translates into a running time complexity of O(n).
🌐
freeCodeCamp
freecodecamp.org › news › big-o-cheat-sheet-time-complexity-chart
Big O Cheat Sheet – Time Complexity Chart
November 7, 2024 - When you perform nested iteration, meaning having a loop in a loop, the time complexity is quadratic, which is horrible. A perfect way to explain this: if you have an array with n items, the outer loop will run n times, and the inner loop will run n times for each iteration of the outer loop, giving n^2 prints in total. If the array has ten items, it will print 100 times (10^2). Here is an example by Jared Nielsen, where you compare each element in an array to output the index when two elements are similar:
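A sketch in the spirit of that nested-loop example (this is my own version, not Jared Nielsen's code): compare every element against every element and record the index pairs that hold equal values. With n items the inner body runs exactly n^2 times.

```python
def matching_index_pairs(arr):
    checks = 0
    pairs = []
    for i in range(len(arr)):          # outer loop: n iterations
        for j in range(len(arr)):      # inner loop: n per outer pass
            checks += 1
            if i != j and arr[i] == arr[j]:
                pairs.append((i, j))
    return pairs, checks
```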
🌐
BtechSmartClass
btechsmartclass.com › data_structures › time-complexity.html
Data Structures Tutorials - Time Complexity with examples
Calculating the time complexity of an algorithm based on the system configuration is a very difficult task, because the configuration changes from one system to another. To solve this problem, we must assume a model machine with a specific configuration, so that we are able to calculate a generalized time complexity according to that model machine.
🌐
WsCube Tech
wscubetech.com › resources › dsa › time-complexity
Time Complexity in Data Structure: All Types With Examples
February 25, 2026 - Learn about Time Complexity in DSA, including types, examples & more, in this tutorial. Understand how it affects performance and efficiency in coding.
🌐
Tekolio
tekolio.com › home › dsa › time complexity › time complexity of algorithms explained with examples
Time Complexity of Algorithms Explained with Examples | Tekolio
July 9, 2023 - ... The above code is quadratic because there are two loops and each one will execute the algorithm n times – n*n or n^2. Other examples of quadratic time complexity include bubble sort, selection sort, and insertion sort.
🌐
Simplilearn
simplilearn.com › home › resources › software development › data structure tutorial for beginners › time & space complexity in data structure [2026]
Time and Space Complexity in Data Structures Explained
December 8, 2025 - Understand time and space complexity in data structures. Learn how to optimize performance and enhance your coding efficiency with practical examples and insights.
🌐
Adrian Mejia
adrianmejia.com › how-to-find-time-complexity-of-an-algorithm-code-big-o-notation
How to find time complexity of an algorithm? | Adrian Mejia Blog
October 3, 2020 - Since n log n has a higher order than n, we can express the time complexity as O(n log n). Another prevalent scenario is loops like for-loops or while-loops. For any loop, we find out the runtime of the block inside them and multiply it by the ...
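The multiply-the-runtimes rule in that snippet can be illustrated with a toy counter (names are mine): the total work of a loop is its iteration count times the cost of its body, so a body that itself costs m units, run n times, does n*m units of work.

```python
def count_work(n, m):
    work = 0
    for _ in range(n):        # loop runs n times
        for _ in range(m):    # body costs m units each time
            work += 1
    return work               # total: n * m
```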
Top answer
1 of 12
353

If you want examples of Algorithms/Group of Statements with Time complexity as given in the question, here is a small list -

O(1) time

  • Accessing an array index (int a = ARR[5];)
  • Inserting a node in a linked list (given a reference to the position)
  • Pushing and popping on a stack
  • Insertion and removal from a queue
  • Finding the parent or left/right child of a node in a tree stored in an array
  • Jumping to the next/previous element in a doubly linked list
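A few of the O(1) items above, sketched in Python; each touches a fixed number of memory cells, so the cost does not depend on how much data the structure already holds. The array-stored tree indexing assumes a 0-based heap layout.

```python
arr = [10, 20, 30, 40, 50]
a = arr[3]                  # array indexing: one address computation

stack = []
stack.append(99)            # push: O(1)
top = stack.pop()           # pop: O(1)

# Parent / children of node i in an array-stored binary tree (0-based):
i = 4
parent = (i - 1) // 2
left, right = 2 * i + 1, 2 * i + 2
```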

O(n) time

In a nutshell, brute-force algorithms that require a single linear pass over the data are based on O(n) time complexity

  • Traversing an array
  • Traversing a linked list
  • Linear Search
  • Deletion of a specific element in an (unsorted) linked list
  • Comparing two strings
  • Checking for a palindrome
  • Counting/Bucket Sort (and you can find many more such examples)
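Counting sort from the list above, sketched in Python: with values bounded by k, it runs in O(n + k) rather than comparing elements pairwise.

```python
def counting_sort(nums, k):
    counts = [0] * (k + 1)        # O(k) space for the value range 0..k
    for x in nums:                # O(n): tally each value
        counts[x] += 1
    out = []
    for value, c in enumerate(counts):   # O(n + k): emit in order
        out.extend([value] * c)
    return out
```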

O(log n) time

  • Binary Search
  • Finding the largest/smallest number in a balanced binary search tree
  • Certain Divide and Conquer Algorithms based on Linear functionality
  • Calculating Fibonacci numbers by the best method (fast doubling / matrix exponentiation)

The basic premise here is NOT using the complete data, but reducing the problem size with every iteration
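Binary search, the first item in that list, sketched with a probe counter: every iteration halves the remaining range, so a sorted array of n elements needs at most about log2(n) + 1 probes.

```python
def binary_search(sorted_arr, target):
    lo, hi = 0, len(sorted_arr) - 1
    probes = 0
    while lo <= hi:
        probes += 1
        mid = (lo + hi) // 2           # range halves every iteration
        if sorted_arr[mid] == target:
            return mid, probes
        elif sorted_arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, probes
```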

O(n log n) time

The factor of log n is introduced by divide and conquer. Some of these algorithms are among the best-optimized and most frequently used.

  • Merge Sort
  • Heap Sort
  • Quick Sort (O(n log n) on average; O(n^2) in the worst case)
  • Certain Divide and Conquer Algorithms based on optimizing O(n^2) algorithms
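Merge sort, first in the list above, sketched in Python: the array is halved about log n times (the divide), and each level does O(n) merging work (the conquer), giving O(n log n) overall.

```python
def merge_sort(arr):
    if len(arr) <= 1:
        return list(arr)
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])       # divide: halve the problem
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # conquer: O(n) merge
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```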

O(n^2) time

These are generally the less efficient choices when O(n log n) counterparts exist; they typically arise from brute force.

  • Bubble Sort
  • Insertion Sort
  • Selection Sort
  • Traversing a simple 2D array
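Selection sort from the list above, sketched in Python: the outer loop picks a slot, the inner loop scans the rest for the minimum, totaling n*(n-1)/2 comparisons regardless of the input order, hence O(n^2).

```python
def selection_sort(arr):
    a = list(arr)
    for i in range(len(a)):                 # outer loop: pick slot i
        smallest = i
        for j in range(i + 1, len(a)):      # inner loop: scan the rest
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]
    return a
```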

O(n!) time

  • Solving the travelling salesman problem via brute-force search
  • Generating all unrestricted permutations of a partially ordered set
  • Finding the determinant with Laplace expansion
  • Enumerating all partitions of a set
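The brute-force travelling salesman approach from the O(n!) list above can be sketched by trying every ordering of the cities; the distance matrix here is a made-up example.

```python
from itertools import permutations

DIST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

def tsp_bruteforce(dist):
    n = len(dist)
    best = None
    for order in permutations(range(1, n)):   # (n-1)! candidate tours
        tour = (0,) + order + (0,)            # start and end at city 0
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if best is None or cost < best:
            best = cost
    return best
```

Even at 15 cities this is already 14! ≈ 8.7 * 10^10 tours, which is why factorial-time algorithms stop being practical almost immediately.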
2 of 12
40

O(1) - most cooking procedures are O(1): it takes a constant amount of time even if there are more people to cook for (to a degree, because you could run out of space in your pots/pans and need to split up the cooking)

O(log n) - finding something in your telephone book. Think binary search.

O(n) - reading a book, where n is the number of pages. It is the minimum amount of time it takes to read a book.

O(n log n) - can't immediately think of something one might do every day that is n log n... unless you sort cards by doing a merge or quick sort!

🌐
Medium
medium.com › @n20 › a-beginners-guide-to-time-complexity-in-algorithms-o-1-o-n-o-n²-and-beyond-c15500c81583
A Beginner’s Guide to Time Complexity in Algorithms: O(1), O(n), O(n²), and Beyond | by MobileDev - NK | Medium
October 14, 2024 - In simple terms, time complexity tells us how the running time of an algorithm grows as the size of the input (usually called n) increases. For example, if you have an algorithm that processes an array, the size of the array is n. How will the ...
🌐
Studytonight
studytonight.com › data-structures › time-complexity-of-algorithms
Time Complexity of Algorithms | Studytonight
Time complexity is most commonly estimated by counting the number of elementary steps performed by an algorithm to finish execution. As in the example above, for the first code the loop will run n times, so the time complexity will be at least n, and as the value of n increases, the time taken will also increase.