🌐
MIT
web.mit.edu › 16.070 › www › lecture › big_o.pdf pdf
Big O notation (with a capital letter O, not a zero), also ...
executes M times. As a result, the statements in the inner loop execute a total of N * M ... When a loop is involved, the same rule applies. For example:
🌐
Illinois
mfleck.cs.illinois.edu › building-blocks › version-1.2 › big-o.pdf pdf
Chapter 14 Big-O
To take a more complex example, let's show that 3x² + 7x + 2 is O(x²). If we pick c = 3, then our equation would look like 3x² + 7x + 2 ≤ 3x². This ... So let's try c = 4. Then we need to find a lower bound on x that makes 3x² + 7x + 2 ≤ 4x² true. To do this, we need to force 7x + 2 ≤ x².
🌐
Carnegie Mellon University
stat.cmu.edu › ~cshalizi › uADA › 13 › lectures › app-b.pdf pdf
Appendix B: Big O and Little o Notation (18 January 2013)
Big-O means "is of the same order as". The corresponding little-o means "is ultimately smaller than": f(n) = o(1) means that f(n)/c → 0 for any constant c. Recursively, g(n) = o(f(n)) means g(n)/f(n) = o(1), or g(n)/f(n) → 0. We also read g(n) = o(f(n)) as "g(n) is ultimately ...
🌐
Stanford
web.stanford.edu › class › archive › cs › cs106b › cs106b.1176 › handouts › midterm › 5-BigO.pdf pdf
CS106B Handout Big O Complexity
The strategy for computing Big-O depends on whether or not your program is recursive. For the case of iterative solutions, we try and count the number of executions that are performed. For the case of recursive solutions, we first try and compute the number of recursive calls that are performed. ... then 3 times until n times. In order to calculate the Big-O for code that follows this format we use the ...
🌐
Stanford
web.stanford.edu › class › archive › cs › cs106b › cs106b.1218 › lectures › 07-bigo › Lecture7Slides.pdf pdf
Big-O Notation and Algorithmic Analysis
Big-O notation is a way of quantifying the rate at which some quantity grows. ... A square of side length r has area O(r²).
🌐
Medium
medium.com › @princemeghani › big-o-notation-a-simple-explanation-with-examples-1ef0356825a7
Big O Notation: A Simple Explanation With Examples | Medium
April 21, 2024 - Explore Big-O Notation with this simple guide. Understand algorithm performance through practical examples. A must-read for developers.
🌐
Runestone Academy
runestone.academy › ns › books › published › pythonds › AlgorithmAnalysis › BigONotation.html
3.3. Big-O Notation — Problem Solving with Algorithms and Data Structures
Order of magnitude is often called Big-O notation (for “order”) and written as \(O(f(n))\). It provides a useful approximation to the actual number of steps in the computation. The function \(f(n)\) provides a simple representation of the dominant part of the original \(T(n)\). In the above example, \(T(n)=1+n\).
🌐
EdX
courses.edx.org › c4x › MITx › 6.00.1x_5 › asset › handouts_Big_O_Notes.pdf pdf
April 13, 2011 6.00 Notes On Big-O Notation Sarina Canelake
So, if we let n = |a_str|, this function is O(n). ... This code looks very similar to the function count_ts, but it is actually very different! The conditional checks if char in b_str - this check requires us, in the worst case, to check every single character in b_str!
🌐
Carnegie Mellon University
cs.cmu.edu › ~15110-s20 › slides › week7-2-bigo.pdf pdf
Runtime and Big-O Notation 15-110 - Wednesday 2/26
February 26, 2020 - Activity: predict the Big-O runtime of the following piece of code.

def sumEvens(lst):  # n = len(lst)
    result = 0
    for i in range(len(lst)):
        if lst[i] % 2 == 0:
            result = result + lst[i]
    return result

Submit your answer to Piazza when you're done.
🌐
Milwaukee School of Engineering
faculty-web.msoe.edu › johnsontimoj › ELE1601 › files1601 › big_o.pdf pdf
Big O Notation
January 30, 2023 - This is my tenth year as an MSOE faculty member. Prior to joining MSOE I spent 3 years as a professor at Northern Illinois University. Before joining the academic ranks I enjoyed a thirty year career in the advanced technology industry spanning the military, industrial, commercial and consumer ...
🌐
Scribd
scribd.com › document › 351140853 › 2-3-Big-O-Notation-Problem-Solving-With-Algorithms-and-Data-Structures
2.3. Big-O Notation - Problem Solving With Algorithms and Data Structures | PDF | Summation | Logarithm
2.3. Big-O Notation — Problem Solving With Algorithms and Data Structures - Free download as PDF File (.pdf), Text File (.txt) or read online for free. This document introduces Big-O notation, which is used to characterize an algorithm's efficiency by evaluating how its runtime scales with the size of its input.
🌐
SlideShare
slideshare.net › home › data & analytics › 2 big o notation.pdf heheheheheheheheheh
2 BIG O NOTATION.pdf heheheheheheheheheh | PDF
The document explains Big O notation, which is used to describe the time and space complexity of algorithms in relation to input size. It outlines various complexity classifications such as constant (O(1)), linear (O(n)), logarithmic (O(log n)), and factorial (O(n!)). Additionally, the document provides rules for finding Big O and examples of time complexity analysis for different types of algorithms.
Top answer (1 of 16 · 1,562 votes)

I'll do my best to explain it here in simple terms, but be warned that this topic takes my students a couple of months to finally grasp. You can find more information in Chapter 2 of the Data Structures and Algorithms in Java book.


There is no mechanical procedure that can be used to get the BigOh.

As a "cookbook", to obtain the BigOh from a piece of code you first need to realize that you are creating a math formula to count how many steps of computations get executed given an input of some size.

The purpose is simple: to compare algorithms from a theoretical point of view, without the need to execute the code. The lesser the number of steps, the faster the algorithm.

For example, let's say you have this piece of code:

int sum(int* data, int N) {
    int result = 0;               // 1

    for (int i = 0; i < N; i++) { // 2
        result += data[i];        // 3
    }

    return result;                // 4
}

This function returns the sum of all the elements of the array, and we want to create a formula to count the computational complexity of that function:

Number_Of_Steps = f(N)

So we have f(N), a function to count the number of computational steps. The input of the function is the size of the structure to process. It means that the function is called like this:

Number_Of_Steps = f(data.length)

The parameter N takes the data.length value. Now we need the actual definition of the function f(). This is done from the source code, in which each interesting line is numbered from 1 to 4.

There are many ways to calculate the BigOh. From this point forward we are going to assume that every sentence that doesn't depend on the size of the input data takes a constant number C of computational steps.

We are going to add up the individual number of steps of the function, and neither the local variable declaration nor the return statement depends on the size of the data array.

That means that lines 1 and 4 take C steps each, and the function looks somewhat like this:

f(N) = C + ??? + C

The next part is to define the value of the for statement. Remember that we are counting the number of computational steps, meaning that the body of the for statement gets executed N times. That's the same as adding C, N times:

f(N) = C + (C + C + ... + C) + C = C + N * C + C

There is no mechanical rule to count how many times the body of the for gets executed; you need to count it by looking at what the code does. To simplify the calculations, we are ignoring the variable initialization, condition and increment parts of the for statement.

To get the actual BigOh we need the Asymptotic analysis of the function. This is roughly done like this:

  1. Take away all the constants C.
  2. From f() get the polynomial in its standard form.
  3. Separate the terms of the polynomial and sort them by their rate of growth.
  4. Keep the one that grows bigger when N approaches infinity.

Our f() has two terms:

f(N) = 2 * C * N ^ 0 + 1 * C * N ^ 1

Taking away all the C constants and redundant parts:

f(N) = 1 + N ^ 1

Since the last term is the one which grows bigger when N approaches infinity (think of limits), this is the BigOh argument, and the sum() function has a BigOh of:

O(N)
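The counting argument above can be sketched as code. This is a minimal sketch under the same assumption that every constant-time sentence costs C (here C = 1); step_count_sum is a hypothetical helper added for illustration, not part of the original function:

```c
#include <assert.h>

/* Sketch with C = 1: count the steps of sum() exactly the way the
   formula f(N) = C + N*C + C does. Hypothetical helper for
   illustration only. */
long step_count_sum(long N) {
    long steps = 0;
    steps += 1;                  /* line 1: result = 0        */
    for (long i = 0; i < N; i++)
        steps += 1;              /* line 3: result += data[i] */
    steps += 1;                  /* line 4: return result     */
    return steps;                /* = N + 2, which is O(N)    */
}
```

Doubling N adds exactly N more steps, the signature of linear growth.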

There is a trick that solves some otherwise tricky cases: use summations whenever you can.

As an example, this code can be easily solved using summations:

for (i = 0; i < 2*n; i += 2) {  // 1
    for (j=n; j > i; j--) {     // 2
        foo();                  // 3
    }
}

The first thing you need to ask is the order of execution of foo(). While it is usually O(1), you need to ask your professors about it. O(1) means (almost, mostly) constant C, independent of the size N.

The for statement on sentence number one is tricky. While the index ends at 2 * N, the increment is done by two. That means that the first for gets executed only N times, and we need to divide the count by two.

f(N) = Summation(i from 1 to 2 * N / 2)( ... ) = 
     = Summation(i from 1 to N)( ... )

Sentence number two is even trickier, since it depends on the value of i. Take a look: the index i takes the values 0, 2, 4, 6, 8, ..., 2 * N, and the second for gets executed: N times on the first pass, N - 2 on the second, N - 4 on the third... up to the N / 2 stage, after which the second for never gets executed.

On formula, that means:

f(N) = Summation(i from 1 to N)( Summation(j = ???)(  ) )

Again, we are counting the number of steps. And by definition, every summation should always start at one, and end at a number greater than or equal to one.

f(N) = Summation(i from 1 to N)( Summation(j = 1 to (N - (i - 1) * 2))( C ) )

(We are assuming that foo() is O(1) and takes C steps.)

We have a problem here: when i takes the value N / 2 + 1 upwards, the inner Summation ends at a negative number! That's impossible and wrong. We need to split the summation in two, being the pivotal point the moment i takes N / 2 + 1.

f(N) = Summation(i from 1 to N / 2)( Summation(j = 1 to (N - (i - 1) * 2))( C ) ) + Summation(i from N / 2 + 1 to N)( C )

Past the pivotal moment (i > N / 2), the inner for doesn't get executed, and we are assuming a constant C execution complexity on its body.

Now the summations can be simplified using some identity rules:

  1. Summation(w from 1 to N)( C ) = N * C
  2. Summation(w from 1 to N)( A (+/-) B ) = Summation(w from 1 to N)( A ) (+/-) Summation(w from 1 to N)( B )
  3. Summation(w from 1 to N)( w * C ) = C * Summation(w from 1 to N)( w ) (C is a constant, independent of w)
  4. Summation(w from 1 to N)( w ) = (N * (N + 1)) / 2
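Identity 4 (the Gauss sum) does the heavy lifting below. A quick numeric spot-check, with brute_sum as a hypothetical helper:

```c
#include <assert.h>

/* Spot-check identity 4: Summation(w from 1 to N)( w ) = N * (N + 1) / 2. */
long brute_sum(long N) {
    long s = 0;
    for (long w = 1; w <= N; w++)
        s += w;                  /* add each w literally */
    return s;
}
```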

Applying some algebra:

f(N) = Summation(i from 1 to N / 2)( (N - (i - 1) * 2) * ( C ) ) + (N / 2)( C )

f(N) = C * Summation(i from 1 to N / 2)( (N - (i - 1) * 2)) + (N / 2)( C )

f(N) = C * (Summation(i from 1 to N / 2)( N ) - Summation(i from 1 to N / 2)( (i - 1) * 2)) + (N / 2)( C )

f(N) = C * (( N ^ 2 / 2 ) - 2 * Summation(i from 1 to N / 2)( i - 1 )) + (N / 2)( C )

=> Summation(i from 1 to N / 2)( i - 1 ) = Summation(i from 1 to N / 2 - 1)( i )

f(N) = C * (( N ^ 2 / 2 ) - 2 * Summation(i from 1 to N / 2 - 1)( i )) + (N / 2)( C )

f(N) = C * (( N ^ 2 / 2 ) - 2 * ( (N / 2 - 1) * (N / 2 - 1 + 1) / 2) ) + (N / 2)( C )

=> (N / 2 - 1) * (N / 2 - 1 + 1) / 2 = 

   (N / 2 - 1) * (N / 2) / 2 = 

   ((N ^ 2 / 4) - (N / 2)) / 2 = 

   (N ^ 2 / 8) - (N / 4)

f(N) = C * (( N ^ 2 / 2 ) - 2 * ( (N ^ 2 / 8) - (N / 4) )) + (N / 2)( C )

f(N) = C * (( N ^ 2 / 2 ) - ( (N ^ 2 / 4) - (N / 2) )) + (N / 2)( C )

f(N) = C * (( N ^ 2 / 2 ) - (N ^ 2 / 4) + (N / 2)) + (N / 2)( C )

f(N) = C * ( N ^ 2 / 4 ) + C * (N / 2) + C * (N / 2)

f(N) = C * ( N ^ 2 / 4 ) + 2 * C * (N / 2)

f(N) = C * ( N ^ 2 / 4 ) + C * N

f(N) = C * 1/4 * N ^ 2 + C * N

And the BigOh is:

O(N²)
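The derivation can be sanity-checked empirically. Assuming foo() is O(1), this sketch counts how many times the inner body actually runs; for even n the exact count is n²/4 + n/2, whose leading term n²/4 matches the C * 1/4 * N ^ 2 term derived above, confirming O(N²). count_foo_calls is a hypothetical helper name:

```c
#include <assert.h>

/* Count inner-body executions of:
       for (i = 0; i < 2*n; i += 2)
           for (j = n; j > i; j--)
               foo();
   For even n the exact count is n*n/4 + n/2; the leading term n^2/4
   matches the derivation, so the code is O(N^2). */
long count_foo_calls(long n) {
    long calls = 0;
    for (long i = 0; i < 2 * n; i += 2)
        for (long j = n; j > i; j--)
            calls++;             /* stands in for foo() */
    return calls;
}
```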
Answer 2 of 16 (216 votes)

Big O gives the upper bound for time complexity of an algorithm. It is usually used in conjunction with processing data sets (lists) but can be used elsewhere.

A few examples of how it's used in C code.

Say we have an array of n elements

int array[n];

If we wanted to access the first element of the array this would be O(1) since it doesn't matter how big the array is, it always takes the same constant time to get the first item.

x = array[0];

If we wanted to find a number in the list:

for(int i = 0; i < n; i++){
    if(array[i] == numToFind){ return i; }
}

This would be O(n) since at most we would have to look through the entire list to find our number. The Big-O is still O(n) even though we might find our number the first try and run through the loop once because Big-O describes the upper bound for an algorithm (omega is for lower bound and theta is for tight bound).
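To see the best-case/worst-case gap concretely, this sketch counts comparisons instead of searching a real array; linear_search_steps is a hypothetical helper, with target_index = -1 meaning the number is absent:

```c
#include <assert.h>

/* Count the comparisons a linear search makes when the match sits at
   target_index (pass -1 for "not present"). Best case: 1 comparison.
   Worst case: n comparisons -- hence the O(n) upper bound. */
long linear_search_steps(long n, long target_index) {
    long comparisons = 0;
    for (long i = 0; i < n; i++) {
        comparisons++;           /* models array[i] == numToFind */
        if (i == target_index)
            break;
    }
    return comparisons;
}
```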

When we get to nested loops:

for(int i = 0; i < n; i++){
    for(int j = i; j < n; j++){
        array[j] += 2;
    }
}

This is O(n^2), since for each pass of the outer loop ( O(n) ) the inner loop does up to n more steps, so the n's multiply, leaving us with n squared. (Strictly, the inner loop starts at i, so the total is n + (n - 1) + ... + 1 = n(n + 1)/2, which is still O(n^2).)
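Counting the actual updates confirms this: the inner loop runs n - i times for each i, so the total is n + (n - 1) + ... + 1 = n(n + 1)/2, which grows as n². nested_updates is a hypothetical counter mirroring the loops above:

```c
#include <assert.h>

/* Count how many times array[j] += 2 would run in the nested loops.
   Total = n + (n-1) + ... + 1 = n * (n + 1) / 2, i.e. O(n^2). */
long nested_updates(long n) {
    long count = 0;
    for (long i = 0; i < n; i++)
        for (long j = i; j < n; j++)
            count++;             /* stands in for array[j] += 2 */
    return count;
}
```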

This is barely scratching the surface, but when you get to analyzing more complex algorithms, complex math involving proofs comes into play. I hope this at least familiarizes you with the basics.

🌐
GeeksforGeeks
geeksforgeeks.org › dsa › analysis-algorithms-big-o-analysis
Big O Notation - GeeksforGeeks
Below table categorizes algorithms based on their runtime complexity and provides examples for each type. Below are the classes of algorithms and their number of operations, assuming that there are no constants. Below is a table comparing Big O notation, Ω (Omega) notation, and θ (Theta) notation: ... C, C1, and C2 are constants. n0 is the minimum input size beyond which the inequality holds. These notations are used to analyze algorithms based on their worst-case (Big O), best-case (Ω) and average-case (θ) scenarios.
🌐
HackerNoon
hackernoon.com › big-o-for-beginners-622a64760e2
Big O for Beginners | HackerNoon
November 26, 2018 - Big O Notation allows us to measure the time and space complexity of our code.
🌐
University of Washington
courses.cs.washington.edu › courses › cse373 › 19su › files › lectures › slides › lecture04.pdf pdf
Lecture 4: Formal Big-O, Omega and Theta CSE 373: Data Structures and
Find a model f(n) for the running ... This is why we have definitions! ... No! Choose your value of c. I can find a prime ... And f(k) = k > c·1, so the definition isn't met! ... Our prime finding code is O(n). But so is, for example, printing all ...
🌐
Rice
stat.rice.edu › ~dobelman › notes_papers › math › big_O.little_o.pdf pdf
Chapter 2 Asymptotic notations 2.1 The “oh” notations Terminology Notation
called "big oh" (O) and "small-oh" (o) notations, and their variants. These notations are in widespread use and are often used without further explanation. However, in order to properly apply these notations and avoid mistakes resulting from careless use, it is important to be aware of their precise definitions. ... Example 2.1.
🌐
Uga
cobweb.cs.uga.edu › ~potter › dismath › Feb26-1009b.pdf
Big-O Notation