Your translation is correct. The intuition behind big-oh notation is that $f(x)$ is $O(g(x))$ if $g(x)$ grows as fast or faster than $f(x)$ as $x \to \infty$. This is used in computer science whenever studying the time complexity of an algorithm. Specifically, if we let $f(n)$ be the run-time (number of steps) that an algorithm takes on an $n$-bit input to give an output, then it may be useful to say something like "$f(n)$ is $O(n^2)$", so we know that the algorithm is relatively fast for large inputs $n$. On the other hand, if all we knew was "$f(n)$ is $O(2^n)$", then the algorithm might run too slowly for large inputs.

Note I say "might" here, because big-oh only gives you an upper bound, so $n^2$ is $O(2^n)$ but $2^n$ is not $O(n^2)$.
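As a spot-check of this one-way relationship, here is a small Python sketch (my own illustration, not from the answer): a finite numerical check that one function stays below a constant multiple of another, applied to $n^2$ versus $2^n$.

```python
# Sketch: a finite spot-check (not a proof) that f(n) <= C * g(n)
# for all n from k up to n_max, illustrating that big-oh is only
# a one-way upper bound.

def dominated(f, g, C, k, n_max=60):
    """Return True if f(n) <= C * g(n) for every k <= n <= n_max."""
    return all(f(n) <= C * g(n) for n in range(k, n_max + 1))

square = lambda n: n ** 2
exp2 = lambda n: 2 ** n

print(dominated(square, exp2, C=1, k=4))     # n^2 <= 2^n holds for n >= 4
print(dominated(exp2, square, C=1000, k=4))  # 2^n <= 1000*n^2 fails for large n
```

The asymmetry in the two printed results is exactly the "upper bound only" point: the exponential dominates the polynomial, never the other way around.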
Yes, your verbose translation is correct.

The definition essentially indicates that Big-Oh notation is a tool to denote an upper bound for a function. That is, if $f(x)$ is $O(g(x))$, that means that, beyond some point, a constant multiple of $g(x)$ will always be bigger than $f(x)$.

This is used in computer science to indicate that, in the long run, we can just assume the algorithm takes about $g(n)$ operations (because it's close enough).
Suppose we have two algorithms, each of which sorts an array of length $n$. Suppose the first takes $f(n) = n^2$ steps and the second takes $g(n) = 1000n \log(n) + 200n$ steps. Which is a better (faster) algorithm?

Well, for small values of $n$ the first one is faster, while for large values of $n$, which is what matters in a lot of computing situations, the second one will be faster.

This is the essence of what the "big-O" notation conveys.

Other answers may connect this idea to the formal definition. (I might later today.)
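The crossover this answer describes can be seen numerically. In the sketch below, $f(n) = n^2$ is an assumed stand-in for the first algorithm's cost (the rendered formula was lost), while $g(n)$ is the second algorithm's cost as given; the log base is a choice (natural log here) and only shifts where the crossover happens, not the conclusion.

```python
import math

# f is an assumed stand-in for the first algorithm's step count;
# g is the second algorithm's step count as given in the answer.
f = lambda n: n ** 2
g = lambda n: 1000 * n * math.log(n) + 200 * n

for n in (10, 100, 100_000, 1_000_000):
    faster = "first" if f(n) < g(n) else "second"
    print(f"n = {n:>9}: the {faster} algorithm is faster")
```

For small $n$ the quadratic wins; once $n$ is large enough, the $n \log n$ algorithm pulls ahead for good.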
Broadly speaking, $f(n)$ is the runtime of some algorithm as a function of the "size" of the input $n$. You're interested in a brief, approximate description of the behavior of $f(n)$ for large $n$ (because in computing we're frequently dealing with large problems). Big O notation provides such a description, by saying that $f(n)$ is no more than $C g(n)$ if $n$ is large, for some constant $C$.
For your specific questions:

$f$ and $g$ are arbitrary functions. In the algorithm context, $n$ is some descriptor of the size of the input to the algorithm, $f$ is the runtime of your algorithm as a function of the size of the input, and $g$ is a reference function that you're comparing $f$ to.

$C$ and $k$ are arbitrary constants. The role of $k$ intuitively is to restrict attention to "large $n$". The role of $C$ is to allow for a hidden constant factor, so that for example $2n$ is $O(n)$.
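The formal definition can be made concrete with explicit witnesses. The sketch below (my own example, with assumed functions $f(n) = 2n + 3$ and $g(n) = n$) exhibits one valid choice of the constants $C$ and $k$ and spot-checks the inequality over a finite range.

```python
# Formal definition: f(n) is O(g(n)) if there exist constants C and k
# such that f(n) <= C * g(n) for all n >= k.
# Example functions (assumed, for illustration): f(n) = 2n + 3, g(n) = n.
f = lambda n: 2 * n + 3
g = lambda n: n

C, k = 3, 3  # one valid pair of witnesses: 2n + 3 <= 3n whenever n >= 3

# Finite spot-check, not a proof:
assert all(f(n) <= C * g(n) for n in range(k, 10_000))
print("f(n) <= C*g(n) for all checked n >= k")
```

Note that the witnesses are far from unique: any larger $C$ or $k$ works too, which is why the definition only asks that *some* pair exists.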
I don't really know what this means, so I can't help you there.
As for the solution:

Big O notation is all about comparing two functions in some limit (in your context the limit is as $n \to \infty$). Accordingly, small values of the input (less than some fixed constant) may be ignored.

The distinction between $<$ and $\le$ in this context does not really matter, but indeed if $f(n) < C g(n)$ then $f(n) \le C g(n)$.

All the terms in $f(n)$ are positive if $n$ is positive.

$f(n)$ is the exact number of steps required for some unspecified algorithm to run. The point of Big O notation is to give a more approximate expression (usually because such an exact expression is not available).
I've been trying to figure this out for the past hour and I'm having trouble starting the problem. The examples my professor has been teaching us are way simpler than this. :\

I'm trying to find the big O estimate for the function $f(x)$, $x \in \mathbb{R}$, where
$$f(x) = \frac{7x^4 - 5x^3 + 4x - 10}{4x^3 - \log x^2 - 3x}.$$

Any hints would be much appreciated.
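One hedged way to start (a sketch of the dominant-term reasoning, not a full solution): for large $x$, the numerator is driven by its leading term and the denominator by its leading term, since the remaining terms are of lower order,
$$f(x) = \frac{7x^4 - 5x^3 + 4x - 10}{4x^3 - \log x^2 - 3x} \approx \frac{7x^4}{4x^3} = \frac{7}{4}x,$$
which suggests $f(x)$ is $O(x)$. To make this rigorous, bound the numerator above by $C_1 x^4$ and the denominator below by $C_2 x^3$ for all sufficiently large $x$, then combine the two bounds.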
In part A, you need to find an upper bound for the number of min ops.

To do so, note that the above algorithm performs fewer min ops than the following:

for i=1 to n
  for j=1 to n //bigger range than your algorithm
    for k=1 to n //bigger range than your algorithm
      (something with min)

The above has exactly n^3 min ops, thus your algorithm performs fewer than n^3 min ops.

From this we can conclude: #minOps <= 1 * n^3 (for each n > 10, where 10 is arbitrary).

By definition of Big-O, this means the algorithm is O(n^3).

You said you can figure out B alone, so I'll let you try it :)

Hint: the middle loop has more iterations than for j=i+1 to n/2.
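The bounding argument can be spot-checked in code. Since the original part-A algorithm isn't shown here, the sketch below only counts the relaxed triple loop from the answer and confirms it performs exactly n^3 min ops; any algorithm whose loops cover smaller ranges performs at most that many.

```python
# Count the operations of the relaxed (full-range) triple loop.
# Each innermost iteration stands in for one "min op".
def relaxed_min_ops(n):
    count = 0
    for i in range(1, n + 1):
        for j in range(1, n + 1):      # bigger range than the original loops
            for k in range(1, n + 1):  # bigger range than the original loops
                count += 1
    return count

print(relaxed_min_ops(10))  # 1000, i.e. exactly 10**3
```

This is the standard trick for upper bounds: replace every loop by one with a range at least as large, count the relaxed version exactly, and conclude the original is bounded by it.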
For each iteration of the outer loop, the two inner nested loops cost at most n^2 (the worst case, reached when i == n). The outer loop runs for i = 1 to n, so the total cost is the series 1^2 + 2^2 + 3^2 + 4^2 + ... + n^2. This summation equals n(n+1)(2n+1)/6; ignoring the lower-order terms, the order is O(n^3).
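The closed form for the sum of squares is easy to spot-check; this short sketch compares the brute-force sum against n(n+1)(2n+1)/6 for a few values.

```python
# Spot-check: 1^2 + 2^2 + ... + n^2 == n(n+1)(2n+1)/6
def square_sum(n):
    return sum(i * i for i in range(1, n + 1))

for n in (1, 10, 100):
    assert square_sum(n) == n * (n + 1) * (2 * n + 1) // 6
print("closed form checks out")
```

Since the closed form's leading term is n^3 / 3, dropping the constant factor and the lower-order terms gives the O(n^3) bound claimed above.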