In UTF-8:
1 byte: 0 - 7F (ASCII)
2 bytes: 80 - 7FF (all European scripts plus some Middle Eastern ones)
3 bytes: 800 - FFFF (the rest of the Basic Multilingual Plane, including the private-use area and the topmost code points; the surrogate range D800 - DFFF is never encoded in any UTF)
4 bytes: 10000 - 10FFFF
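
As a sanity check, here is a minimal Python sketch of that mapping; the helper name utf8_bytes is mine, not from any library (Python's own str.encode does the real work):

    def utf8_bytes(cp: int) -> bytes:
        # Each range below matches a row of the table above.
        if cp <= 0x7F:                     # 1 byte: 0xxxxxxx (plain ASCII)
            return bytes([cp])
        if cp <= 0x7FF:                    # 2 bytes: 110xxxxx 10xxxxxx
            return bytes([0xC0 | (cp >> 6), 0x80 | (cp & 0x3F)])
        if cp <= 0xFFFF:                   # 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx
            return bytes([0xE0 | (cp >> 12), 0x80 | ((cp >> 6) & 0x3F), 0x80 | (cp & 0x3F)])
        if cp <= 0x10FFFF:                 # 4 bytes: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
            return bytes([0xF0 | (cp >> 18), 0x80 | ((cp >> 12) & 0x3F),
                          0x80 | ((cp >> 6) & 0x3F), 0x80 | (cp & 0x3F)])
        raise ValueError("beyond 10FFFF")

    assert utf8_bytes(0x20AC) == "\u20ac".encode("utf-8")       # 3 bytes for the euro sign
    assert utf8_bytes(0x1F600) == "\U0001F600".encode("utf-8")  # 4 bytes for an emoji
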
In UTF-16:
2 bytes: 0 - D7FF and E000 - FFFF (the whole Basic Multilingual Plane except the surrogate range D800 - DFFF)
4 bytes: 10000 - 10FFFF (encoded as pairs of surrogate code units taken from D800 - DFFF)
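
The 4-byte case works by subtracting 10000 and splitting the remaining 20 bits across two reserved 16-bit code units; a minimal sketch (utf16_units is my own name, not a library call):

    def utf16_units(cp: int) -> list[int]:
        if 0xD800 <= cp <= 0xDFFF or cp > 0x10FFFF:
            raise ValueError("not a Unicode scalar value")
        if cp <= 0xFFFF:                 # BMP: a single 16-bit unit
            return [cp]
        cp -= 0x10000                    # 20 bits remain, split 10/10
        return [0xD800 | (cp >> 10),     # high (lead) surrogate
                0xDC00 | (cp & 0x3FF)]   # low (trail) surrogate

    assert utf16_units(0x1F600) == [0xD83D, 0xDE00]   # one emoji, two code units
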
In UTF-32:
4 bytes: 0 - 10FFFF
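
UTF-32 applies no transformation at all: the code point is simply stored as one 32-bit integer. In Python terms (utf-32-be is the big-endian flavour):

    import struct
    assert struct.pack(">I", 0x1F600) == "\U0001F600".encode("utf-32-be")
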
10FFFF is the last Unicode code point by definition, and it is defined that way because it is UTF-16's technical limit: a surrogate pair cannot address anything higher.
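To make that limit concrete: a surrogate pair carries 10 + 10 payload bits on top of an offset of 10000, so the highest value it can reach is

    assert 0x10000 + ((0x3FF << 10) | 0x3FF) == 0x10FFFF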
It also fits comfortably in UTF-8's 4-byte form (which could structurally reach 1FFFFF, but encoding beyond 10FFFF is forbidden), and the idea behind UTF-8's encoding works just as well for 5- and 6-byte sequences, which would cover code points up to 7FFFFFFF, i.e. half of what UTF-32 can.
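
Under that original scheme the lead byte of an n-byte sequence (n >= 2) keeps 7 - n payload bits and every continuation byte adds 6 more, which gives these maxima (the 5- and 6-byte forms were later forbidden by RFC 3629, so this is only a what-if sketch):

    def utf8_max(n: int) -> int:
        # Largest code point an n-byte sequence could structurally carry.
        return 0x7F if n == 1 else (1 << (7 - n + 6 * (n - 1))) - 1

    assert [hex(utf8_max(n)) for n in range(1, 7)] == \
        ["0x7f", "0x7ff", "0xffff", "0x1fffff", "0x3ffffff", "0x7fffffff"]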