Hash tables are O(1) average and amortized case complexity; however, they suffer from O(n) worst-case time complexity. [And I think this is where your confusion is]
Hash tables suffer from O(n) worst-case time complexity for two reasons:

1. If too many elements were hashed into the same key, looking inside this key may take O(n) time (see the sketch after this list).
2. Once the hash table has passed its load factor, it has to rehash [create a new, bigger table, and re-insert each element into the new table].
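For illustration, here is a minimal sketch of the collision case; the `BadHash` class is made up for this example. Its hash is constant, so every key lands in the same bucket and each lookup has to scan the whole chain:

```python
# Minimal sketch of the collision worst case.  BadHash is a made-up key
# class whose hash is constant, so every key lands in the same bucket.
import time


class BadHash:
    """Key with a constant hash: every instance collides with every other."""

    def __init__(self, value: int):
        self.value = value

    def __hash__(self) -> int:
        return 42  # constant hash: all keys collide

    def __eq__(self, other) -> bool:
        return isinstance(other, BadHash) and self.value == other.value


def lookup_time(n: int) -> float:
    table = {BadHash(i): i for i in range(n)}  # building is already O(n^2)
    probe = BadHash(n - 1)  # sits at the end of the collision chain
    start = time.perf_counter()
    for _ in range(100):
        _ = table[probe]  # each lookup scans the whole chain: O(n)
    return time.perf_counter() - start


for n in (1_000, 2_000, 4_000):
    print(n, lookup_time(n))  # time grows roughly linearly with n
```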
However, it is said to be O(1)
average and amortized case because:
1. It is very rare that many items will be hashed into the same key [if you chose a good hash function and don't have too big a load factor].
2. The rehash operation, which is O(n), can at most happen after n/2 ops, which are all assumed O(1). Thus when you sum the average time per op, you get (n*O(1) + O(n)) / n = O(1) (see the counting sketch after this list).
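To see that sum concretely, here is a small counting sketch; the `ToyTable` class and `LOAD_FACTOR` constant are made up for this example. Each insert costs one unit, each rehash costs one unit per re-inserted element, and the total divided by n stays a small constant:

```python
# Counting sketch of the amortized argument.  ToyTable and LOAD_FACTOR
# are made up for this example; only the bookkeeping matters here.
LOAD_FACTOR = 0.5  # rehash once the table is half full


class ToyTable:
    def __init__(self):
        self.capacity = 8
        self.size = 0
        self.moves = 0  # elements re-inserted by rehashes so far

    def insert(self):
        self.size += 1  # the insert itself: one O(1) unit of work
        if self.size > self.capacity * LOAD_FACTOR:
            self.moves += self.size  # rehash re-inserts every element: O(n)
            self.capacity *= 2


t = ToyTable()
n = 1_000_000
for _ in range(n):
    t.insert()

# (n * O(1) inserts + O(n) total rehash work) / n ops stays constant:
print((n + t.moves) / n)  # roughly 2, i.e. O(1) amortized per op
```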
Note that because of the rehashing issue, real-time applications and applications that need low latency should not use a hash table as their data structure.
EDIT: Another issue with hash tables: cache
Another place where you might see a performance loss in large hash tables is cache performance. Hash tables suffer from bad cache locality, so for a large collection the access time might take longer, since the relevant part of the table has to be reloaded from memory back into the cache.
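If you want to observe this yourself, here is a rough micro-benchmark sketch (the `avg_lookup_ns` helper is made up; exact numbers depend on your machine, and CPython's interpreter overhead masks part of the effect). Per-lookup time tends to creep up once the table no longer fits in the CPU caches:

```python
# Rough micro-benchmark sketch of the cache effect; exact numbers depend
# on your machine, and CPython's interpreter overhead hides part of it.
import random
import time


def avg_lookup_ns(n: int, probes: int = 200_000) -> float:
    table = {i: i for i in range(n)}
    keys = [random.randrange(n) for _ in range(probes)]  # random probes defeat locality
    start = time.perf_counter()
    for k in keys:
        _ = table[k]
    return (time.perf_counter() - start) / probes * 1e9


for n in (1_000, 100_000, 5_000_000):
    # per-lookup time creeps up once the table outgrows the CPU caches
    print(f"{n:>10,} entries: {avg_lookup_ns(n):6.1f} ns/lookup")
```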