Sorting solutions have been proposed, but sorting is a little too much: We don't need order; we just need equality groups.
So hashing would be enough (and faster).
This kind of recursive hash partitioning is actually what SQL Server does when it needs to hash join or hash aggregate over huge data sets. It distributes its build input stream into many independent partitions. This scheme scales linearly to arbitrary amounts of data and to multiple CPUs.
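A minimal sketch of that recursive scheme in Python, assuming each sock is a hashable tuple of attributes; FANOUT and BUCKET_LIMIT are made-up illustrative parameters, not SQL Server's actual values:

```python
import hashlib

FANOUT = 8        # buckets per partitioning pass (illustrative)
BUCKET_LIMIT = 4  # "small enough to match directly" threshold (illustrative)

def match_directly(socks):
    # Brute-force matching; only ever called on small buckets.
    pairs, singles = [], []
    for sock in socks:
        if sock in singles:
            singles.remove(sock)
            pairs.append((sock, sock))
        else:
            singles.append(sock)
    return pairs

def bucket_index(sock, level):
    # Salt the hash with the recursion level so each pass re-shuffles.
    digest = hashlib.md5(repr((level, sock)).encode()).digest()
    return digest[0] % FANOUT

def partition_and_match(socks, level=0):
    if len(socks) <= BUCKET_LIMIT:
        return match_directly(socks)
    buckets = [[] for _ in range(FANOUT)]
    for sock in socks:
        buckets[bucket_index(sock, level)].append(sock)
    if max(len(b) for b in buckets) == len(socks):
        return match_directly(socks)  # no split possible: all socks identical
    return [pair
            for bucket in buckets
            for pair in partition_and_match(bucket, level + 1)]

print(partition_and_match([("red", "dots")] * 2 + [("blue", "plain")] * 4))
```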
You don't need recursive partitioning if you can find a distribution key (hash key) that provides enough buckets that each bucket is small enough to be processed very quickly. Unfortunately, I don't think socks have such a property.
If each sock had an integer called "PairID", one could easily distribute them into 10 buckets according to PairID % 10 (the last digit).
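In code, that toy scheme looks like this (pair_id is the hypothetical attribute that real socks unfortunately lack):

```python
# Distribute socks into 10 buckets by the last decimal digit of their PairID.
def distribute_by_pair_id(socks):
    buckets = [[] for _ in range(10)]
    for sock in socks:
        buckets[sock["pair_id"] % 10].append(sock)
    return buckets

socks = [{"pair_id": 17}, {"pair_id": 42}, {"pair_id": 17}, {"pair_id": 42}]
for digit, bucket in enumerate(distribute_by_pair_id(socks)):
    if bucket:
        print(digit, bucket)
```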
The best real-world partitioning I can think of is creating a rectangle of piles: one dimension is color, the other is pattern. Why a rectangle? Because we need O(1) random access to piles. (A 3D cuboid would also work, but that is not very practical.)
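As a sketch (the color and pattern values are invented for illustration), the rectangle is just a 2D grid indexed by the two attributes, so dropping a sock onto its pile takes O(1):

```python
COLORS = {"black": 0, "white": 1, "blue": 2, "red": 3}
PATTERNS = {"plain": 0, "striped": 1, "dotted": 2}

# The physical rectangle: one pile per (color, pattern) cell.
piles = [[[] for _ in PATTERNS] for _ in COLORS]

def drop_sock(sock):
    # Two constant-time lookups give the pile coordinates.
    piles[COLORS[sock["color"]]][PATTERNS[sock["pattern"]]].append(sock)

drop_sock({"color": "blue", "pattern": "striped"})
drop_sock({"color": "blue", "pattern": "striped"})
print(piles[COLORS["blue"]][PATTERNS["striped"]])  # the matched pair's pile
```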
Update:
What about parallelism? Can multiple humans match the socks faster?
What about the element distinctness problem? As the article states, the element distinctness problem can be solved in O(N). The same holds for the socks problem: it is also O(N) if you need only one distribution step. (I proposed multiple steps only because humans are bad at calculations; one step is enough if you distribute on md5(color, length, pattern, ...), i.e. a perfect hash of all attributes.)
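A sketch of that single distribution step; md5 here just stands in for "a good hash of all attributes" (it is not literally perfect, but collisions between distinct socks are vanishingly unlikely):

```python
import hashlib
from collections import defaultdict

def one_step_piles(socks):
    piles = defaultdict(list)
    for sock in socks:
        # One pass: socks with identical attributes land on the same pile.
        key = hashlib.md5(repr(sock).encode()).hexdigest()
        piles[key].append(sock)
    return piles

socks = [("red", 40, "dots"), ("blue", 38, "plain"),
         ("red", 40, "dots"), ("blue", 38, "plain")]
for pile in one_step_piles(socks).values():
    print(pile)  # each pile holds exactly one matching pair
```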
Clearly, one cannot go faster than O(N), so we have reached the optimal lower bound.
Although the outputs are not exactly the same (in one case just a boolean; in the other, the pairs of socks), the asymptotic complexities are the same.