I'm really confused about the differences between big O, big Omega, and big Theta notation.
I understand that big O is the upper bound and big Omega is the lower bound, but what exactly does big Θ (theta) represent?
I have read that it means tight bound, but what does that mean?
~ Asked on 2012-04-29 22:56:19
It means that the algorithm is both big-O and big-Omega in the given function.
For example, if it is Θ(n), then there is some constant k such that your function (run-time, whatever) is larger than n*k for sufficiently large n, and some other constant K such that your function is smaller than n*K for sufficiently large n.

In other words, for sufficiently large n, it is sandwiched between two linear functions: for k < K and n sufficiently large,

n*k < f(n) < n*K
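A quick numeric sketch of this sandwich (the function f and the constants k, K here are hypothetical choices, not from the question): f(n) = 3n + 5 is Θ(n), since 3n <= 3n + 5 <= 4n once n >= 5.

```python
# Hypothetical example: f(n) = 3*n + 5 is Theta(n).
# With k = 3 and K = 4, k*n <= f(n) <= K*n holds for every n >= 5.
def f(n):
    return 3 * n + 5

k, K = 3, 4
for n in range(5, 10_000):
    assert k * n <= f(n) <= K * n  # the "sandwich" between two linear functions
print("3n <= 3n+5 <= 4n held for all tested n >= 5")
```

This is only a spot check over a finite range, of course; the definition requires the inequality for all sufficiently large n.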
~ Answered on 2012-04-29 23:03:05
First let's understand what big O, big Theta and big Omega are. They are all sets of functions.
Big O gives an asymptotic upper bound, while big Omega gives a lower bound. Big Theta gives both.
Everything that is Θ(f(n)) is also O(f(n)), but not the other way around.

T(n) is said to be in Θ(f(n)) if it is both in O(f(n)) and in Omega(f(n)).

In set terminology, Θ(f(n)) is the intersection of O(f(n)) and Omega(f(n)).
For example, merge sort's worst case is both O(n*log(n)) and Omega(n*log(n)) - and thus is also Θ(n*log(n)). But it is also O(n^2), since n^2 is asymptotically "bigger" than it. However, it is not Θ(n^2), since the algorithm is not Omega(n^2).
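To make the merge sort example concrete, here is a small sketch that counts comparisons and checks them against n*log2(n). The exact count depends on the input, so this only illustrates the scale, not a proof of the bound.

```python
import math
import random

def merge_sort(a, counter):
    """Sort a list, tallying element comparisons in counter[0]."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid], counter)
    right = merge_sort(a[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1  # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

n = 1024
counter = [0]
merge_sort([random.random() for _ in range(n)], counter)
print(counter[0], "comparisons vs n*log2(n) =", int(n * math.log2(n)))
```

The comparison count stays below n*log2(n) here, consistent with merge sort being O(n*log(n)) - while n^2 (about a million for n = 1024) wildly overestimates it.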
O(f(n)) is an asymptotic upper bound. If T(n) is in O(f(n)), it means that from a certain n0 on, there is a constant C such that T(n) <= C * f(n). On the other hand, big-Omega says there is a constant C2 such that T(n) >= C2 * f(n).
Not to be confused with worst, best and average case analysis: the three notations (Omega, O, Theta) are not tied to any particular case. Each notation can be applied to each kind of analysis.
We usually use it to analyze the complexity of algorithms (like the merge sort example above). When we say "Algorithm A is O(f(n))", what we really mean is "the algorithm's complexity under the worst¹ case analysis is O(f(n))" - meaning it scales "similarly to" (or formally, not worse than) the function f(n).
Well, there are many reasons for ignoring constant factors, but I believe the most important is how the functions compare at scale. To demonstrate this, have a look at the following graphs (not reproduced here) of f(n)=n, f(n)=2*n, f(n)=log(n) and f(n)=n^2:
It is clear that f(n) = 2*n is "worse" than f(n) = n, but the difference is not nearly as drastic as the gap to the other functions: f(n) = log(n) quickly drops far below both, and f(n) = n^2 quickly climbs far above them.
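Since the graphs themselves are missing, a small table of the same four functions makes the point just as well: the gap between n and 2*n stays a fixed factor, while log(n) and n^2 pull away without bound.

```python
import math

# Tabulate the four functions from the discussion to watch the gaps grow.
print(f"{'n':>7} {'log n':>8} {'2n':>9} {'n^2':>13}")
for n in (10, 100, 1000, 10_000):
    print(f"{n:>7} {math.log2(n):>8.1f} {2 * n:>9} {n * n:>13}")
```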
So - because of the reasons above, we "ignore" the constant factors (2* in the graphs example), and take only the big-O notation.
In the above example, f(n)=n and f(n)=2*n will both be in O(n) and in Omega(n) - and thus will also be in Θ(n).
On the other hand - f(n)=log(n) will be in O(n) (it is "better" than f(n)=n), but will NOT be in Omega(n) - and thus will also NOT be in Θ(n).
f(n)=n^2 will be in Omega(n), but NOT in O(n) - and thus is also NOT in Θ(n).
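One rough numeric way to see these memberships (a heuristic, not the formal definition): look at the ratio f(n)/n as n grows. If it stays bounded above, f is O(n); if it stays above some positive constant, f is Omega(n); if both, f is Θ(n).

```python
import math

# Watch f(n)/n for the four functions from the example above.
functions = [
    ("n",     lambda n: n),
    ("2n",    lambda n: 2 * n),
    ("log n", lambda n: math.log2(n)),
    ("n^2",   lambda n: n * n),
]
for name, f in functions:
    ratios = [f(n) / n for n in (10, 1000, 100_000)]
    print(f"{name:>6}: " + "  ".join(f"{r:.4f}" for r in ratios))
# n and 2n: constant ratio        -> Theta(n)
# log n:    ratio shrinks to 0    -> O(n) but not Omega(n)
# n^2:      ratio grows unbounded -> Omega(n) but not O(n)
```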
¹ Usually, though not always. When the analysis class (worst, average or best) is missing, we really mean the worst case.
~ Answered on 2012-09-09 12:09:32