Amortized time complexity
Amortized analysis is used for algorithms in which an occasional operation is expensive, but most operations are cheap.
Amortized complexity analysis is most commonly used with data structures, which have state that persists between operations. The basic idea is that an expensive operation can alter the state so that the worst case cannot occur again for a long time, thus amortizing its cost.
Definition: Let T1, T2, …, Tk be the complexities of a sequence of operations on a data structure. The amortized complexity of a single operation in this sequence is (T1 + T2 + … + Tk) / k.
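As a toy illustration of the definition (the costs below are invented for this example, not taken from the text): a sequence of four operations costing 1, 1, 1 and 8 steps has amortized cost (1 + 1 + 1 + 8) / 4.

```python
# Toy illustration of the definition: the amortized cost of an
# operation is the average cost over the whole sequence.
costs = [1, 1, 1, 8]  # hypothetical per-operation costs T1..Tk
amortized = sum(costs) / len(costs)
print(amortized)  # → 2.75
```

Even though one operation costs 8, the amortized cost per operation is under 3.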
As an example, consider a dynamic array that doubles its capacity when it runs out of room:

Algorithm append(arr, x)
    if arr.size == arr.capacity
        arr.capacity ← 2 * arr.capacity
        resize arr
    arr[arr.size] ← x
    arr.size ← arr.size + 1
Let's start by looking at the worst case:
- if the array is full, the algorithm allocates a new array of length 2n,
- and then copies the n elements from the old array into the new one.

Hence, in the worst case a single append operation takes Θ(n) time.
Clearly, this result is overly pessimistic. The following n append operations will be much cheaper: each of them will run in constant time since the newly allocated array has room for all of the new elements.
An amortized time analysis gives a better understanding of the algorithm:
- There will be a total of n constant-time assignment and increment operations.
- The resizing will happen only at operations 1, 2, 4, …, 2^k, for a total of 1 + 2 + 4 + … + 2^k = 2·2^k − 1 constant-time element copy operations. Since 2^k ≤ n, this is at most 2n − 1.

Adding these up, a sequence of n append operations performs at most n + 2n − 1 = 3n − 1 constant-time operations, so the amortized complexity of a single append operation is O(1).
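The copy-count bound can be checked empirically. The sketch below (the function name is this example's own) simulates n doubling appends and counts only the element copies caused by resizing:

```python
def total_copies(n):
    """Count element copies performed by n appends to a doubling array."""
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            copies += size   # resizing copies every existing element
            capacity *= 2
        size += 1            # the append itself is constant time
    return copies

# The total number of copies stays below 2n - 1, as the analysis predicts.
for n in (1, 10, 100, 1000):
    assert total_copies(n) <= 2 * n - 1
```

For n = 10, for example, resizes occur at sizes 1, 2, 4 and 8, for 1 + 2 + 4 + 8 = 15 copies, which is below the bound 2·10 − 1 = 19.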