Definition
Big O notation describes an asymptotic upper bound on an algorithm's running time or space requirement as a function of input size $n$. It captures the growth rate while ignoring constant factors and lower-order terms; in practice it is most often applied to worst-case behavior, though the notation itself simply bounds a function.
Formally, a function $f(n)$ is $O(g(n))$ if there exist positive constants $c$ and $n_0$ such that:
$$ 0 \leq f(n) \leq c \cdot g(n) \quad \text{for all} \quad n \geq n_0 $$
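The definition can be spot-checked numerically. The sketch below (the helper name `satisfies_big_o` is ours, not standard) tests whether $0 \leq f(n) \leq c \cdot g(n)$ holds over a sample of $n \geq n_0$; note that a finite check can only refute a claimed bound, never prove it:

```python
def satisfies_big_o(f, g, c, n0, test_range=range(1, 10_000)):
    """Spot-check the Big O definition: 0 <= f(n) <= c * g(n)
    for every sampled n >= n0. Refutes, but cannot prove, the bound."""
    return all(0 <= f(n) <= c * g(n) for n in test_range if n >= n0)

# Example: 2n + 3 is O(n), witnessed by c = 5, n0 = 1,
# since 2n + 3 <= 5n whenever n >= 1.
print(satisfies_big_o(lambda n: 2 * n + 3, lambda n: n, c=5, n0=1))
```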
Worked Example
Suppose an algorithm has a running time:
$$ f(n) = 3n^2 + 5n + 7 $$
To express $f(n)$ in Big O notation:
- Identify the dominant term: $n^2$ grows faster than $n$ or constants as $n$ increases.
- Ignore lower-order terms and constants: Only the highest-degree term matters.
So,
$$ f(n) = 3n^2 + 5n + 7 = O(n^2) $$
Justification:
For $n \geq 1$, we have $n \leq n^2$ and $1 \leq n^2$, so each lower-order term can be bounded by a multiple of $n^2$:
$$ 3n^2 + 5n + 7 \leq 3n^2 + 5n^2 + 7n^2 = 15n^2 $$
Thus, $f(n) \leq 15n^2$ for $n \geq 1$, so $c = 15$, $n_0 = 1$.
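These constants can be sanity-checked by brute force. The sketch below verifies $3n^2 + 5n + 7 \leq 15n^2$ over a range of $n \geq 1$:

```python
# f(n) = 3n^2 + 5n + 7, claimed O(n^2) with witnesses c = 15, n0 = 1.
f = lambda n: 3 * n * n + 5 * n + 7

# Check f(n) <= 15 * n^2 for sampled n >= n0.
assert all(f(n) <= 15 * n * n for n in range(1, 100_000))
print("bound f(n) <= 15 n^2 holds for all tested n >= 1")
```

The inequality is tight at $n = 1$, where both sides equal $15$, which is why no smaller $n_0$ works for this choice of $c$.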
Key Takeaways
- Big O notation expresses the upper bound of an algorithm's growth rate as input size increases.
- Only the fastest-growing term and its degree matter; constants and lower-order terms are ignored.
- Common Big O classes: $O(1)$ (constant), $O(\log n)$ (logarithmic), $O(n)$ (linear), $O(n^2)$ (quadratic), $O(2^n)$ (exponential).
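The gap between these classes becomes dramatic even at modest input sizes; a short table of values, sketched below, makes the point:

```python
import math

# Representative functions for the common complexity classes.
growth = {
    "O(1)":     lambda n: 1,
    "O(log n)": lambda n: math.log2(n),
    "O(n)":     lambda n: n,
    "O(n^2)":   lambda n: n ** 2,
    "O(2^n)":   lambda n: 2 ** n,
}

# Print one row per input size, one column per class.
for n in (10, 20, 30):
    row = ", ".join(f"{name}={f(n):,.0f}" for name, f in growth.items())
    print(f"n={n}: {row}")
```

At $n = 30$, the quadratic term is $900$ while the exponential term already exceeds a billion, which is why exponential-time algorithms are impractical beyond tiny inputs.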