Google's (bad) approach, from the HN thread: store RLE-style run counts.
Your initial data structure is '99999999:0' (all zeros; no numbers seen yet). Say you then see the number 3,866,344: the structure becomes '3866343:0,1:1,96133655:0'. The runs always alternate between counts of 0-bits and counts of 1-bits, so you can drop the bit values entirely and let the odd-positioned counts stand for 0-runs and the even-positioned counts for 1-runs. This becomes (3866343, 1, 96133655).
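A minimal Python sketch of the idea (the function name and representation details are my own, not from the thread):

```python
UNIVERSE = 99_999_999          # values 1..99,999,999, per the '99999999:0' start

runs = [UNIVERSE]              # one big run of zeros; odd positions (1st, 3rd, ...)
                               # are 0-runs, even positions are 1-runs

def insert(runs, n):
    """Set bit n by splitting the 0-run that contains it. O(len(runs))."""
    out, pos = [], 0           # pos = how many values the runs so far cover
    for i, count in enumerate(runs):
        if i % 2 == 0 and pos < n <= pos + count:   # n falls in this 0-run
            before = n - pos - 1
            out += [before, 1, count - before - 1]  # 0-run, 1-run, 0-run
            # zero-length runs are kept so position parity still marks
            # 0- vs 1-runs; a real version would merge neighbors
        else:
            out.append(count)  # unaffected run (a duplicate n is a no-op)
        pos += count
    return out

print(insert(runs, 3_866_344))  # [3866343, 1, 96133655]
```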
Their problem doesn't seem to cover duplicates, but let's say a zero-length run like '0:1' is used to mark them.
Big problem #1: insertions for 1M integers would take ages. Each insert has to scan the run list to find and split the right run, and the list grows by two runs per insert, so 1M insertions is quadratic work.
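A rough feel for the cost, reusing the insert sketch above (my benchmark, not anything from the thread):

```python
import random, time

runs = [99_999_999]
t0 = time.time()
for v in random.sample(range(1, 100_000_000), 5_000):  # only 0.5% of 1M
    runs = insert(runs, v)     # scans and rebuilds the whole run list
print(len(runs), "runs in", round(time.time() - t0, 1), "s")
# The list grows by 2 runs per insert, so k inserts cost O(k^2):
# even 5k inserts rebuild lists of up to 10k entries; at 1M inserts
# you are rebuilding ~2M-entry lists a million times.
```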
Big problem #2: like all plain delta-encoding schemes, some distributions simply don't fit. Take 1M integers spaced exactly 99 apart (+99 each): the counts are all identical, fine. Now make the gap random in the range 0..99: the run counts become incompressible noise and you're stuck storing ~2M of them. (Note: 99,999,999 / 1,000,000 ≈ 100, so gaps around 100 are exactly what a uniform spread of 1M values produces.)
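Back-of-envelope for that bad case (my arithmetic, not from the thread):

```python
values = 1_000_000
n_runs = 2 * values            # a 0-run and a 1-run per stored value

# With random gaps in 0..99 the counts can't be compressed as a pattern.
# Even packed as one byte each (every gap fits in 7 bits) the run list
# is ~2 MB; the decimal '99:0,1:1,...' text form is ~9 bytes per value.
print(n_runs / 1e6, "MB at 1 byte/run")               # 2.0 MB
print(values * len("99:0,1:1,") / 1e6, "MB as text")  # ~9 MB
```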
Google's approach is both slow and incorrect. In their defense, the problem they were actually solving may have been slightly different.