[algorithm] Big-oh vs big-theta

Possible Duplicate:
What is the difference between Θ(n) and O(n)?

It seems to me like when people talk about algorithm complexity informally, they talk about big-oh. But in formal situations, I often see big-theta with the occasional big-oh thrown in. I know mathematically what the difference is between the two, but in English, in what situation would using big-oh when you mean big-theta be incorrect, or vice versa (an example algorithm would be appreciated)?

Bonus: why do people seemingly always use big-oh when talking informally?

Tags: algorithm, complexity-theory

The answer is


Because there are algorithms whose best case is quicker than their worst case, and thus the bound is technically a big O, not a big Theta.

Big O is an upper bound, big Theta is an equivalence relation.


There are a lot of good answers here, but I noticed something was missing. Most answers seem to imply that the reason people use Big O over Big Theta is difficulty, and in some cases that may be true: a proof that leads to a Big Theta result is often far more involved than one that results in Big O. But I do not believe that is the main reason for using one analysis over the other.

When talking about complexity we can say many things. Big O time complexity just tells us an upper bound: the time an algorithm is guaranteed to run within. Big Omega is far less often discussed and tells us the minimum time an algorithm is guaranteed to take, a lower bound. Big Theta tells us that both of these bounds are in fact the same for a given analysis. This tells us that the algorithm has a very strict run time, which can only deviate by an amount asymptotically smaller than our complexity. Many algorithms simply do not have upper and lower bounds that happen to be asymptotically equivalent.
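For reference, the formal definitions behind those three bounds (standard textbook formulations, nothing specific to this answer) can be stated as:

    f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 : f(n) \le c\, g(n) \text{ for all } n \ge n_0   \quad \text{(upper bound)}
    f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \ge c\, g(n) \text{ for all } n \ge n_0   \quad \text{(lower bound)}
    f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))   \quad \text{(tight bound)}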

So as to your question: using Big O in place of Big Theta is technically always valid, while using Big Theta in place of Big O is only valid when Big O and Big Omega happen to be equal. For instance, insertion sort has a time complexity of Big O of n^2, but its best-case scenario puts its Big Omega at n. In this case it would not be correct to say that its time complexity is Big Theta of n or of n^2, as they are two different bounds and should be treated as such.
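To make the insertion sort example concrete, here is a short illustrative Python sketch (my own code, not taken from the answer above). On already-sorted input the inner while loop never executes, which is where the Omega(n) lower bound comes from; on reverse-sorted input it runs i times per element, which gives the O(n^2) upper bound.

    def insertion_sort(a):
        """Sort the list a in place."""
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            # Shift larger elements to the right. On sorted input this loop
            # exits immediately (about n steps in total); on reverse-sorted
            # input it runs i times for each i (about n^2 steps in total).
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key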


I'm a mathematician and I have seen and needed big-O O(n), big-Theta Θ(n), and big-Omega Ω(n) notation time and again, and not just for complexity of algorithms. As people said, big-Theta is a two-sided bound. Strictly speaking, you should use it when you want to explain that that is how well an algorithm can do, and that either that algorithm can't do better or that no algorithm can do better. For instance, if you say "Sorting requires Θ(n log n) comparisons for worst-case input", then you're explaining that there is a sorting algorithm that uses O(n log n) comparisons for any input; and that for every sorting algorithm, there is an input that forces it to make Ω(n log n) comparisons.

Now, one narrow reason that people use O instead of Θ is to drop disclaimers about worst or average cases. If you say "sorting requires O(n log n) comparisons", then the statement still holds true for favorable input. Another narrow reason is that even if one algorithm to do X takes time Θ(f(n)), another algorithm might do better, so you can only say that the complexity of X itself is O(f(n)).

However, there is a broader reason that people informally use O. At a human level, it's a pain to always make two-sided statements when the converse side is "obvious" from context. Since I'm a mathematician, I would ideally always be careful to say "I will take an umbrella if and only if it rains" or "I can juggle 4 balls but not 5", instead of "I will take an umbrella if it rains" or "I can juggle 4 balls". But the other halves of such statements are often obviously intended or obviously not intended. It's just human nature to be sloppy about the obvious. It's confusing to split hairs.

Unfortunately, in a rigorous area such as math or theory of algorithms, it's also confusing not to split hairs. People will inevitably say O when they should have said Ω or Θ. Skipping details because they're "obvious" always leads to misunderstandings. There is no solution for that.


Because my keyboard has an O key.
It does not have a Θ or an Ω key.

I suspect most people are similarly lazy and use O when they mean Θ because it's easier to type.


Bonus: why do people seemingly always use big-oh when talking informally?

Because in big-oh, this loop:

    for i in range(1, n + 1):
        pass  # something in O(1) that doesn't change n or i and doesn't break out of the loop

is O(n), O(n^2), O(n^3), O(n^1423424). big-oh is just an upper bound, which makes it easier to calculate because you don't have to find a tight bound.

The above loop is only big-theta(n) however.

What's the complexity of the sieve of Eratosthenes? If you said O(n log n) you wouldn't be wrong, but it wouldn't be the best answer either (the tight bound is big-theta(n log log n)). If you said big-theta(n log n), you would be wrong.
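For what it's worth, here's a plain Python sketch of the sieve (my own illustrative code, not from the answer). The crossing-off work costs roughly n/2 + n/3 + n/5 + ... steps summed over the primes, which is where the n log log n bound comes from.

    def primes_up_to(n):
        """Sieve of Eratosthenes: return all primes <= n."""
        is_prime = [True] * (n + 1)
        is_prime[0] = is_prime[1] = False
        for p in range(2, int(n ** 0.5) + 1):
            if is_prime[p]:
                # Crossing off the multiples of p costs about n/p steps;
                # summed over all primes p <= n this is Theta(n log log n).
                for multiple in range(p * p, n + 1, p):
                    is_prime[multiple] = False
        return [i for i, prime in enumerate(is_prime) if prime]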


I have seen Big Theta, and I'm pretty sure I was taught the difference in school. I had to look it up though. This is what Wikipedia says:

Big O is the most commonly used asymptotic notation for comparing functions, although in many cases Big O may be replaced with Big Theta Θ for asymptotically tighter bounds.

Source: Big O Notation#Related asymptotic notation

I don't know why people use Big-O when talking formally. Maybe it's because most people are more familiar with Big-O than Big-Theta? I had forgotten that Big-Theta even existed until you reminded me. Although now that my memory is refreshed, I may end up using it in conversation. :)


One reason why big O gets used so much is kind of because it gets used so much. A lot of people see the notation and think they know what it means, then use it (wrongly) themselves. This happens a lot with programmers whose formal education only went so far - I was once guilty myself.

Another is because it's easier to type a big O on most non-Greek keyboards than a big theta.

But I think a lot is because of a kind of paranoia. I worked in defence-related programming for a bit (and knew very little about algorithm analysis at the time). In that scenario, the worst-case performance is always what people are interested in, because that worst case might just happen at the wrong time. It doesn't matter if the actual probability of that happening is, e.g., far less than the probability of all members of a ship's crew suffering a sudden fluke heart attack at the same moment - it could still happen.

Though of course a lot of algorithms have their worst case in very common circumstances - the classic example being inserting in-order into a binary tree to get what's effectively a singly-linked list. A "real" assessment of average performance needs to take into account the relative frequency of different kinds of input.
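As a small illustration of that classic example (again my own sketch, not code from the answer): inserting keys in sorted order into a plain, unbalanced binary search tree sends every key down the right spine, so the tree degenerates into a chain and each insert or lookup costs O(n) rather than the O(log n) you would see on typical input.

    class Node:
        def __init__(self, key):
            self.key = key
            self.left = None
            self.right = None

    def insert(root, key):
        """Plain, unbalanced BST insert."""
        if root is None:
            return Node(key)
        if key < root.key:
            root.left = insert(root.left, key)
        else:
            root.right = insert(root.right, key)
        return root

    # In-order insertion: each key is larger than the last, so the "tree"
    # becomes a right-leaning chain of depth n -- effectively a linked list.
    root = None
    for key in range(1, 11):
        root = insert(root, key)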