[algorithm] Difference between Big-O and Little-O Notation

What is the difference between Big-O notation O(n) and Little-O notation o(n)?



I find that when I can't conceptually grasp something, thinking about why one would use X is helpful to understand X. (Not to say you haven't tried that, I'm just setting the stage.)

[stuff you know]A common way to classify algorithms is by runtime, and by citing the big-Oh complexity of an algorithm, you can get a pretty good estimate of which one is "better" -- whichever has the "smaller" function in the O! Even in the real world, O(N) is "better" than O(N²), barring silly things like super-massive constants and the like.[/stuff you know]

Let's say there's some algorithm that runs in O(N). Pretty good, huh? But let's say you (you brilliant person, you) come up with an algorithm that runs in O(N/loglogloglogN). YAY! It's faster! But you'd feel silly writing that over and over again when you're writing your thesis. So you write it once, and you can say "In this paper, I have proven that algorithm X, previously computable in time O(N), is in fact computable in o(N)."

Thus, everyone knows that your algorithm is faster -- by how much is unclear, but they know it's faster. Theoretically. :)
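To back that claim up with the limit definition of little-o (a standard check, spelled out here for the example function above):

    \lim_{N \to \infty} \frac{N / \log\log\log\log N}{N}
      = \lim_{N \to \infty} \frac{1}{\log\log\log\log N} = 0
    \quad\Longrightarrow\quad
    N / \log\log\log\log N \in o(N)

The ratio tends to 0, which is exactly what little-o demands; big-O would only demand that the ratio stays bounded.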


Big-O is to little-o as ≤ is to <. Big-O is an inclusive upper bound, while little-o is a strict upper bound.

For example, the function f(n) = 3n is:

  • in O(n²), o(n²), and O(n)
  • not in O(lg n), o(lg n), or o(n)

Analogously, the number 1 is:

  • ≤ 2, < 2, and ≤ 1
  • not ≤ 0, < 0, or < 1
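To see those memberships numerically, here's a minimal Python sketch (my own illustration, not from the answer itself): it prints f(n)/g(n) for growing n. The ratio tends to 0 when f is in o(g), stays bounded when f is in O(g), and grows without bound when g isn't an upper bound at all.

    import math

    def f(n):
        return 3 * n

    # Candidate bounds g(n), labeled with the membership claimed above.
    bounds = [
        ("n^2  (3n in o(n^2) and O(n^2))", lambda n: n * n),
        ("n    (3n in O(n), not o(n))",    lambda n: n),
        ("lg n (3n not in O(lg n))",       lambda n: math.log2(n)),
    ]

    for label, g in bounds:
        # Ratio -> 0: little-o.  Ratio bounded: big-O.  Ratio -> inf: no bound.
        ratios = ["%.3g" % (f(n) / g(n)) for n in (10, 10**3, 10**6, 10**9)]
        print(label, ratios)

Of course, this only suggests the limit behavior; the actual claims follow from the definitions.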

Here's a table, showing the general idea:

    Notation          Analogy    Limit of f(n)/g(n) as n → ∞
    f(n) ∈ o(g(n))    a < b      lim = 0
    f(n) ∈ O(g(n))    a ≤ b      lim < ∞
    f(n) ∈ Θ(g(n))    a = b      0 < lim < ∞
    f(n) ∈ Ω(g(n))    a ≥ b      lim > 0
    f(n) ∈ ω(g(n))    a > b      lim = ∞

(Note: the table is a good guide, but its limit definitions should use the limit superior (lim sup) rather than the ordinary limit. For example, 3 + (n mod 2) oscillates between 3 and 4 forever. It's in O(1) despite not having an ordinary limit, because it still has a lim sup: 4.)
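For reference, the corrected rows written with lim sup / lim inf are the standard definitions (the o and ω rows keep the ordinary limit):

    f(n) \in O(g(n)) \iff \limsup_{n \to \infty} \frac{f(n)}{g(n)} < \infty

    f(n) \in \Omega(g(n)) \iff \liminf_{n \to \infty} \frac{f(n)}{g(n)} > 0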

I recommend memorizing how the Big-O notation converts to asymptotic comparisons. The comparisons are easier to remember, but less flexible because you can't say things like n^O(1) = P.


In general

Asymptotic notation is something you can understand as: how do functions compare when zooming out? (A good way to test this is simply to use a tool like Desmos and play with your mouse wheel). In particular:

  • f(n) ∈ o(n) means: at some point, the more you zoom out, the more f(n) will be dominated by n (it will progressively diverge from it).
  • g(n) ∈ Θ(n) means: at some point, zooming out will not change how g(n) compares to n (if we removed the ticks from the axes, you couldn't tell the zoom level).

Finally h(n) ∈ O(n) means that function h can be in either of these two categories. It can either look a lot like n, or it can become smaller and smaller relative to n as n increases. Basically, both f(n) and g(n) are also in O(n).
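Here's the same "zooming out" idea as a numerical sketch in Python (the sample functions are my own picks for illustration):

    # f(n) in o(n): the ratio f(n)/n vanishes as n grows.
    # g(n) in Theta(n): the ratio g(n)/n settles on a nonzero constant (2 here).
    f = lambda n: n ** 0.5   # sqrt(n) is eventually dominated by n
    g = lambda n: 2 * n + 7  # from far away this just looks like a line of slope 2

    for n in (10, 10**4, 10**8, 10**12):
        print(n, f(n) / n, g(n) / n)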

In computer science

In computer science, people will usually prove that a given algorithm admits both an upper bound O and a lower bound Ω. When both bounds meet, that means we have found an asymptotically optimal algorithm to solve that particular problem.

For example, if we prove that the complexity of an algorithm is both in O(n) and Ω(n), it implies that its complexity is in Θ(n). That's the definition of Θ, and it more or less translates to "asymptotically equal". Which also means that no algorithm can solve the given problem in o(n). Again, roughly saying "this problem can't be solved in less than n steps".

An upper bound of O(n) simply means that even in the worst case, the algorithm will terminate in at most n steps (ignoring all constant factors, both multiplicative and additive). A lower bound of Ω(n) means, on the contrary, that we built some examples where the problem solved by this algorithm couldn't be solved in less than n steps (again, ignoring multiplicative and additive constants). The number of steps is at most n and at least n, so this problem's complexity is "exactly n". Instead of saying "ignoring constant multiplicative/additive factors" every time, we just write Θ(n) for short.
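Written out with the constants, the standard definitions behind that paragraph look like this:

    T(n) \in O(n)      \iff \exists\, c_2 > 0,\ n_0 : T(n) \le c_2 n \ \text{for all}\ n \ge n_0
    T(n) \in \Omega(n) \iff \exists\, c_1 > 0,\ n_0 : T(n) \ge c_1 n \ \text{for all}\ n \ge n_0

    \text{Both together: } c_1 n \le T(n) \le c_2 n \ \text{for all}\ n \ge n_0 \iff T(n) \in \Theta(n)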


The big-O notation has a companion called small-o notation. The big-O notation says that one function is asymptotically no more than another. To say that one function is asymptotically less than another, we use small-o notation. The difference between the big-O and small-o notations is analogous to the difference between ≤ (less than or equal to) and < (less than).

