ScienceDirect
Quadratic Time Complexity - an overview | ScienceDirect Topics
Quadratic-time complexity, denoted as O(n²), refers to algorithms whose running time grows quadratically with the input size, meaning the number of computational steps required is proportional to the square of the input size n.
GitHub
traversy-js-challenges/05-complexity/05-quadratic-time-complexity/readme.md at main · bradtraversy/traversy-js-challenges
Quadratic time complexity is when the runtime scales quadratically with the input. As the input size increases, the runtime of the algorithm also increases in a quadratic fashion (i.e. the runtime is proportional to the square of the input size).
Author   bradtraversy

python - Linear time v.s. Quadratic time - Stack Overflow
Big-O is about how things scale … a quadratic algorithm is much faster than a linear one. But, if you increase N, you know at some point this will reverse and linear will have the upper hand. Whether the user will want to increase N enough or not depends on the particular use case.
stackoverflow.com
optimization - Time complexity of quadratic programming - Mathematics Stack Exchange
I am using the Matlab built-in quadprog to solve a quadratic program with linear constraints. I vaguely recalled from school that the time complexity of quadratic programming should be $O(n^3)$, an…
math.stackexchange.com
October 1, 2012
Medium
An Easy-To-Use Guide to Big-O Time Complexity | by Ariel Salem | Medium
March 1, 2017 - O(N²) — Quadratic Time: Quadratic Time Complexity represents an algorithm whose performance is directly proportional to the squared size of the input data set (think of Linear, but squared).
estimate of time taken for running an algorithm
In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the … (Wikipedia)
Wikipedia
Time complexity - Wikipedia
January 18, 2026 - No general-purpose sorts run in linear time, but the change from quadratic to sub-quadratic is of great practical importance. An algorithm is said to be of polynomial time if its running time is upper bounded by a polynomial expression in the size of the input for the algorithm, that is, T(n) = O(nᵏ) for some positive constant k. Problems for which a deterministic polynomial-time algorithm exists belong to the complexity class P, which is central in the field of computational complexity theory.
Medium
Quadratic Time Complexity. In the previous chapter, we explained… | by Nikita Shpilevoy | Medium
September 28, 2024 - So, how many times do we expect to run it in the worst-case scenario? The correct answer is 20. You can visualize the solution using a simple geometric approach — grab a piece of paper and draw a 4x5 grid, then count the number of cells. This mirrors how the code goes through the two-dimensional array.

| 1 | 2 | 3 | 4 | 5 |
| 2 | 3 | 4 | 5 | 6 |
| 3 | 4 | 5 | 6 | 7 |
| 4 | 5 | 6 | 7 | 8 |

A nested loop is the easiest example of quadratic complexity, also known as O(n²).
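The cell-counting argument above can be reproduced in a few lines of Python (a minimal sketch; the 4x5 dimensions come from the grid in the snippet):

```python
# Count how many times a nested loop visits the cells of a 4x5 grid,
# mirroring the pen-and-paper counting argument above.
rows, cols = 4, 5
count = 0
for r in range(rows):
    for c in range(cols):
        count += 1  # one visit per cell
print(count)  # 20, the worst-case number of inner-loop runs
```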
JavaScript in Plain English
O(n²) — Quadratic Time Complexity: An Overview | by Josh Harris | JavaScript in Plain English
January 6, 2023 - Learn about O(n^2) time complexity, a measure of running time that increases quadratically with the size of the input. Find out how O(n^2) algorithms like bubble sort, insertion sort, and selection sort can process data in time proportional to the size of the input squared.
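Of the three sorts the snippet names, bubble sort is the shortest to sketch; its two nested loops make the O(n²) comparison count explicit (an illustrative implementation, not taken from the article):

```python
def bubble_sort(items):
    """Bubble sort: O(n²) comparisons from the two nested loops."""
    a = list(items)  # work on a copy
    n = len(a)
    for i in range(n):
        # each pass bubbles the largest unsorted element toward the end
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```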
Fiveable
Quadratic time complexity Definition - Data Structures Key Term | Fiveable
Quadratic time complexity refers to an algorithm whose running time grows proportionally to the square of the size of the input data. This means that as the number of elements increases, the time taken to execute the algorithm increases at a rate that is the square of that number.
BigBinary Academy
Quadratic Time Complexity - Learn DSA using JavaScript | BigBinary Academy
O(n^2) - Quadratic Time Complexity. Description: Quadratic time complexity, denoted as `O(n^2)`, occurs when the time it takes to complete an operation is proportional to the square of the input size.
Top answer
1 of 4
66

A method is linear when the time it takes increases linearly with the number of elements involved. For example, a for loop which prints the elements of an array is roughly linear:

for x in range(10):
    print(x)

because if we print range(100) instead of range(10), the time it takes to run is 10 times longer. You will very often see that written as O(N), meaning that the time or computational effort to run the algorithm is proportional to N.

Now, let's say we want to print the elements of two for loops:

for x in range(10):
    for y in range(10):
        print(x, y)

For every x, the inner loop runs through 10 values of y. For this reason, the whole thing goes through 10x10=100 prints (you can see them just by running the code). If instead of 10 I use 100, the method will now do 100x100=10000. In other words, the method goes as O(N*N) or O(N²), because every time you increase the number of elements, the computational effort or time increases as the square of that number.
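The 10x10 = 100 and 100x100 = 10000 counts in the answer can be checked by replacing the prints with a counter (a small sketch):

```python
def count_inner_runs(n):
    """Count how many times the body of the nested loop executes."""
    ops = 0
    for x in range(n):
        for y in range(n):
            ops += 1
    return ops

print(count_inner_runs(10))   # 100
print(count_inner_runs(100))  # 10000
```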

2 of 4
34

They must be referring to run-time complexity also known as Big O notation. This is an extremely large topic to tackle. I would start with the article on wikipedia: https://en.wikipedia.org/wiki/Big_O_notation

When I was researching this topic one of the things I learned to do is graph the runtime of my algorithm with different size sets of data. When you graph the results you will notice that the line or curve can be classified into one of several orders of growth.

Understanding how to classify the runtime complexity of an algorithm will give you a framework to understanding how your algorithm will scale in terms of time or memory. It will give you the power to compare and classify algorithms loosely with each other.

I'm no expert but this helped me get started down the rabbit hole.

Here are some typical orders of growth:

  • O(1) - constant time
  • O(log n) - logarithmic
  • O(n) - linear time
  • O(n^2) - quadratic
  • O(2^n) - exponential
  • O(n!) - factorial

If the wikipedia article is difficult to swallow, I highly recommend watching some lectures on the subject on iTunes University and looking into the topics of algorithm analysis, big-O notation, data structures and even operation counting.

Good luck!
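The graphing exercise this answer describes can be started with a few timed runs; plotting the measured points against the curves listed above shows which order of growth the algorithm follows (a sketch; the toy workload and sizes are arbitrary choices, not from the answer):

```python
import time

def quadratic_work(n):
    """Deliberately O(n²): touch every ordered pair (i, j)."""
    total = 0
    for i in range(n):
        for j in range(n):
            total += i ^ j
    return total

# Time the function at doubling sizes; with O(n²) growth, each doubling
# of n should roughly quadruple the elapsed time.
for n in (200, 400, 800):
    start = time.perf_counter()
    quadratic_work(n)
    print(f"n={n:4d}  t={time.perf_counter() - start:.4f}s")
```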

Reddit
r/compsci on Reddit: So what the hell is O(x) Time?
April 30, 2024 -

I have been learning programming in my own time for years now and I'm coming up on this topic when I've gone back to school. I just can't seem to understand what it asks of me. Can anyone clarify it? I'm a very visual learner, things like a stack, queues, dequeues, etc come easily, but this just slips out of my mind.

Top answer
1 of 23
176
O(1) is constant time, which means it doesn't take longer as the input size increases. For example, referencing an item in an array takes O(1) time. O(log n) is logarithmic time, which means as the input size increases it takes only a logarithmically small amount more time. For example, binary searching a sorted list is O(log n). O(n) is linear time, which means the time taken is proportional to the size of the input. For example, iterating over every element of an array 5 times is O(n).

O(n log n) is linear times logarithmic time, which is between linear and quadratic, and is a common goal for optimizing many algorithms. O(n²) is quadratic time, which means as the input size grows, you have to do on the order of the square of the number of things. So for instance, iterating over every element of an array, but each time you have to go through the entire array again. An example is brute-force sorting an array. O(n³), O(n⁴), etc. are like O(n²) but each gets more cumbersome to work with. All of these are "polynomial" times.

Then there are exponential times like O(2ⁿ), which grow very fast. Then we've got O(n!) and similar, which is super-exponential time. This means you have to do a factorial number of things for the input size. An example is brute-forcing the travelling salesman problem. And that's a summary. I'm assuming O(x) = O(n).
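The quadratic case in this answer ("iterating over every element of an array, but each time you have to do the entire array again") can be sketched as a brute-force duplicate check (a hypothetical example, not from the thread):

```python
def has_duplicate(items):
    """O(n²): for each element, rescan the entire list for a match."""
    for i in range(len(items)):
        for j in range(len(items)):
            if i != j and items[i] == items[j]:
                return True
    return False

print(has_duplicate([3, 1, 4, 1, 5]))  # True  (1 appears twice)
print(has_duplicate([3, 1, 4]))        # False
```

A set-based check would do the same job in O(n), which is exactly the kind of improvement big-O analysis is meant to surface.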
2 of 23
20
O(*) stands for an entire class of functions whose growth is "bounded" by another function. We say a function is asymptotically bounded by another when its values eventually become smaller than the other function, allowing for a constant factor. For example, the function f(n) = 2n is in O(n), because 2n ≤ c·n for some constant c (e.g., c = 3). As another example, the function f(n) = n·sqrt(n) is not in O(n), because no matter how big we make the constant c, eventually sqrt(n) will grow larger than c, and n·sqrt(n) > c·n. However, n·sqrt(n) is in O(n²). We use this notation in complexity theory to classify algorithms by how their run time is affected by the size of the input. An algorithm with O(n) complexity has a run time that grows at most linearly with the input size.
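Both bound claims in the answer can be spot-checked numerically with the same constant c = 3 (a sketch, not a proof — big-O statements are asymptotic):

```python
import math

c = 3  # the constant from the answer

# f(n) = 2n is in O(n): 2n <= c*n holds for every n checked.
print(all(2 * n <= c * n for n in range(1, 1001)))  # True

# g(n) = n*sqrt(n) is not in O(n): it exceeds c*n once sqrt(n) > c,
# i.e. for any n > 9.
print(100 * math.sqrt(100) > c * 100)  # True (1000 > 300)
```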
GeeksforGeeks
Big O Notation - GeeksforGeeks
Common examples of algorithms with polynomial time complexity include linear time complexity O(n), quadratic time complexity O(n²), and cubic time complexity O(n³).
Published   1 month ago
Reddit
r/learnmath on Reddit: Is this an example of Quadratic Time Complexity when it comes to type of big O notation
May 8, 2025 -

Below is the output generated by AI while exploring big O notation:

……...…...........

O(n²) – Quadratic Time Complexity
Definition: The runtime increases quadratically as the input size grows. Doubling the input quadruples the runtime.
Characteristics: Typically slower and less efficient; not suitable for large inputs.
Examples: Nested loops for comparing each customer to every other customer.
Business Case: Small-scale market basket analysis for cross-selling products.

....,.............

My query is whether it is of the same type as discussed in this MITx Online Differential Calculus course, except with x³ replaced by n².

https://www.reddit.com/r/MathHelp/s/XivJaFxhkz

Top answer
1 of 2
2
They are similar in the sense that we are considering the limiting behavior, but in different contexts. In the computer science example we are looking at how long a program will take to run. The given example, comparing all of the items in a list, is O(n²). Imagine if we also had to count the number of things in that list, which is O(n). As we increase n, the counting process has a negligible impact on the runtime (i.e., a million counts vs. a million squared comparisons), so we only use the fastest-growing part in our big-O notation. The limiting behavior is n going to infinity.

In numerical analysis we are concerned with how large our error is. Let's say we are trying to approximate e^0.1 with e^x ≈ 1 + x. I know from the Taylor expansion that the error is going to be x²/2 + x³/6 + … But 0.1² is much bigger than 0.1³ (and all of the larger terms), so the approximation error is O(x²). The limiting behavior here is x → 0.
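The numerical-analysis half of the answer checks out directly in Python: at x = 0.1, the error of the linearization 1 + x is dominated by the first neglected Taylor term x²/2 (a quick sketch):

```python
import math

x = 0.1
approx = 1 + x                # linearization of e^x
error = math.exp(x) - approx  # actual truncation error
leading_term = x**2 / 2       # first neglected Taylor term

print(error)         # ≈ 0.00517, close to the leading term
print(leading_term)  # ≈ 0.005
```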
AlgoCademy
Understanding Algorithms: Time Complexity Explained
October 12, 2024 - Quadratic time complexity, denoted as O(n²), occurs when an algorithm’s running time increases with the square of the input size. This typically happens in algorithms that involve nested loops.
Fiveable
Quadratic Time - (Data Structures) - Vocab, Definition, ...
Quadratic time refers to a specific class of algorithmic complexity where the time required to complete a task grows proportionally to the square of the size of the input data. This means that if the input size doubles, the time taken to execute the algorithm increases by a factor of four.
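The "input doubles, time quadruples" claim can be demonstrated by counting loop steps rather than wall-clock time (a minimal sketch):

```python
def steps(n):
    """Inner-loop executions for a doubly nested loop over n items."""
    return sum(1 for i in range(n) for j in range(n))

print(steps(10))               # 100
print(steps(20))               # 400
print(steps(20) // steps(10))  # 4: doubling the input quadruples the work
```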
Top answer
1 of 2
10

Time complexity of Quadratic Programming.

It was proved by Vavasis in 1991 that the general quadratic program is NP-hard, i.e. it takes more than polynomial time to be solved "exactly" (in reality, it's impossible to find an exact solution due to the finite-precision arithmetic of the computer). If your QP is convex, there are polynomial-time interior-point algorithms (e.g. Ye and Tse in 1989 published an extension of Karmarkar's projective algorithm to convex quadratic programs). Also, there are approximation algorithms that return local solutions of nonconvex QPs in polynomial running time (e.g. Ye, 1998).

About quadprog's inner implementation and time-complexity.

Quadprog runs an active-set method (of exponential time complexity for worst-case input instances) for a general QP. For a really hard QP (indefinite, with a badly scaled Q matrix), it may not converge to a solution. If your input QP is convex, quadprog runs an interior-point method.

I haven't checked quadprog's time complexity in practice, but you can't draw an inference just by observing the running times as you scale the dimension of a given QP. It's better to generate a test set consisting of convex and non-convex QPs, manually choose the algorithm you want quadprog to run for each of them, and observe the running times.

2 of 2
5

I am not familiar with the details of the quadprog function, but I think the issues may be more universal. The cubic time complexity is an asymptotic worst-case bound. It does not mean that growing any problem by 10 times will increase running time by 1000 times. It will often be less, but may be that bad for some problem data.

There is often constant time "overhead" for many algorithms. The overhead may be so large that the running time is essentially independent of the problem size up to a certain point. The asymptotic bound only becomes relevant when comparing large problems with very large problems. In your example this will be hard to observe because quadratic programs with thousands or tens of thousands of variables can take hours or days to solve on a typical personal computer. Memory can become an issue since the problem data alone grows quadratically in the number of variables.

There is a whole industry devoted to these issues. Commercial solvers can cost tens of thousands of dollars. For specific applications, specialized solvers that exploit problem structure are often utilized. Sparsity of the input matrices is the most basic source of running time improvements.

Asymptotic worst-case bounds are often a poor guide for practical problem solving. They are certainly hard to verify using simulations. I suggest you move over to stackexchange for messy, detailed discussions of particular software implementations of algorithms.

arXiv
[2507.04515] A Quadratic Programming Algorithm with $O(n^3)$ Time Complexity
July 6, 2025 - Abstract: Solving linear systems and quadratic programming (QP) problems are both ubiquitous tasks in the engineering and computing fields. Direct methods for solving systems, such as Cholesky, LU, and QR factorizations, exhibit data-independent time complexity of O(n³). This raises a natural question: could there exist algorithms for solving QPs that also achieve data-independent time complexity of O(n³)? This is critical for offering an execution-time certificate for real-time optimization-based applications such as model predictive control.
freeCodeCamp
Big O Cheat Sheet – Time Complexity Chart
November 7, 2024 - When you have a single loop within your algorithm, it is linear time complexity (O(n)). When you have nested loops within your algorithm, meaning a loop in a loop, it is quadratic time complexity (O(n^2)).
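The freeCodeCamp rule of thumb maps directly onto code; counting the steps of a single loop versus a nested loop shows the O(n) / O(n²) split (an illustrative sketch):

```python
def single_loop_steps(items):
    """One loop: O(n) — one step per element."""
    ops = 0
    for _ in items:
        ops += 1
    return ops

def nested_loop_steps(items):
    """A loop in a loop: O(n²) — n steps for each of n elements."""
    ops = 0
    for _ in items:
        for _ in items:
            ops += 1
    return ops

data = list(range(50))
print(single_loop_steps(data))  # 50
print(nested_loop_steps(data))  # 2500
```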