The "big iron" answer is generally one thread per limited resource -- processor (CPU bound), arm (I/O bound), etc -- but that only works if you can route the work to the correct thread for the resource to be accessed.
Where that's not possible, consider that you have fungible resources (CPUs) and non-fungible resources (arms). For CPUs it's not critical to assign each thread to a specific CPU (though it helps with cache management), but for arms, if you can't dedicate a thread to an arm, you get into queuing theory and the question of what number of threads keeps the arms busy. Generally, if you can't route requests based on the arm used, then having 2-3 threads per arm is going to be about right.
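In that fallback case the sizing itself is trivial -- one shared pool at 2-3 threads per arm. A sketch, again with made-up device counts:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the fallback when requests can't be routed by arm:
// a single shared pool sized at roughly 2-3 threads per arm, so
// there's usually a runnable thread waiting whenever an arm frees up.
// DISK_ARMS and THREADS_PER_ARM are illustrative assumptions.
public class SharedPoolSizing {
    static final int DISK_ARMS = 4;        // hypothetical device count
    static final int THREADS_PER_ARM = 3;  // the 2-3x rule of thumb

    public static void main(String[] args) {
        ExecutorService ioPool =
            Executors.newFixedThreadPool(DISK_ARMS * THREADS_PER_ARM);
        for (int req = 0; req < 20; req++) {
            final int r = req;
            ioPool.execute(() -> {
                // stand-in for a seek + read against whichever arm
                // the OS happens to dispatch this request to
                System.out.println("request " + r + " on " +
                                   Thread.currentThread().getName());
            });
        }
        ioPool.shutdown();
    }
}
```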
A complication comes about when the unit of work passed to a thread isn't reasonably atomic: the thread may access the disk at one point and wait on the network at another. This increases the number of "cracks" where additional threads can get in and do useful work, but it also increases the opportunity for those threads to pollute each other's caches and otherwise bog the system down.
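Where the workload allows it, one way to keep each unit of work closer to atomic is to split the mixed task into per-resource stages, so each pool only ever blocks on one kind of resource. A sketch; readFromDisk and sendOverNetwork are hypothetical stand-ins for the real I/O calls:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of splitting a mixed disk+network task into per-resource
// stages, each running on a pool sized for that resource.
public class StagedWork {
    static final ExecutorService diskPool = Executors.newFixedThreadPool(4);
    static final ExecutorService netPool  = Executors.newFixedThreadPool(8);

    // Hypothetical stand-ins for the real blocking I/O.
    static String readFromDisk(int id)      { return "block-" + id; }
    static void   sendOverNetwork(String s) { System.out.println("sent " + s); }

    public static void main(String[] args) {
        CompletableFuture
            .supplyAsync(() -> readFromDisk(42), diskPool)          // disk stage
            .thenAcceptAsync(StagedWork::sendOverNetwork, netPool)  // network stage
            .join();
        diskPool.shutdown();
        netPool.shutdown();
    }
}
```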
Of course, you must weigh all this against the "weight" of a thread. Unfortunately, most systems have very heavyweight threads (and what they call "lightweight threads" often aren't threads at all), so it's better to err on the low side.
What I've seen in practice is that very subtle differences can make an enormous difference in how many threads are optimal. In particular, cache contention and lock conflicts can greatly limit the amount of practical concurrency.
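Given that sensitivity, the only reliable way to pick a number is to measure. A crude sweep like the one below, with a trivial stand-in workload (substitute something representative of your actual mix), will usually show where throughput stops improving:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Run the same batch of work at several pool sizes and compare wall time.
// busyWork is a trivial CPU-only stand-in; a real sweep should use a
// workload representative of your disk/network/CPU mix.
public class ThreadCountSweep {
    static void busyWork() {
        double x = 0;
        for (int i = 0; i < 100_000; i++) x += Math.sqrt(i);
        if (x < 0) System.out.println(x); // keep the JIT from eliding the loop
    }

    public static void main(String[] args) throws InterruptedException {
        final int tasks = 2_000;
        for (int threads = 1; threads <= 16; threads *= 2) {
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            long start = System.nanoTime();
            for (int i = 0; i < tasks; i++) pool.execute(ThreadCountSweep::busyWork);
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.MINUTES);
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("%2d threads: %d ms%n", threads, ms);
        }
    }
}
```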