ASCII - Software allocates a single 8-bit byte for a given character (ASCII itself only defines 7 bits, the values 0 to 127). It works well for plain English text, because every English letter, digit and punctuation mark has a decimal value below 128. It does not cover accented characters such as the 'ç' in the loanword façade. An example C program is shown below.
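As a minimal sketch (any C99 compiler will do), the program below prints the decimal value that a single one-byte char holds for 'A':

```c
#include <stdio.h>

int main(void) {
    char c = 'A';                                   /* one byte is enough for an ASCII character */
    printf("'%c' is stored as decimal %d\n", c, c); /* prints 65 */
    printf("it occupies %zu byte\n", sizeof c);     /* prints 1 */
    return 0;
}
```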
UTF-8 - Software allocates 1 to 4 8-bit bytes for a given character, so the length is variable. What does 'variable' mean here? Say you are sending the character 'A' through your HTML pages in the browser (HTML defaults to UTF-8). The decimal value of 'A' is 65; converted to binary it becomes 01000001, which fits in 1 byte, so only 1 byte is allocated, exactly as in ASCII. However, accented European characters such as the 'ç' in façade fall outside ASCII and require 2 bytes, so you need UTF-8 for them. Asian characters typically require 3 bytes, and anything outside the Basic Multilingual Plane requires 4. Similarly, emoji require 3 to 4 bytes (most of them 4). UTF-8 will cover all of these needs.
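A quick way to see these byte counts, as a sketch that assumes the source file is saved as UTF-8 so the string literals below are UTF-8 encoded (the default for GCC and Clang):

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Each literal holds exactly one character; strlen counts the bytes of its UTF-8 encoding. */
    printf("A  -> %zu byte(s)\n", strlen("A"));    /* 1 byte  */
    printf("ç  -> %zu byte(s)\n", strlen("ç"));    /* 2 bytes */
    printf("中 -> %zu byte(s)\n", strlen("中"));   /* 3 bytes */
    printf("😀 -> %zu byte(s)\n", strlen("😀"));   /* 4 bytes */
    return 0;
}
```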
UTF-16 allocates a minimum of 2 bytes and a maximum of 4 bytes per character; it never allocates 1 or 3 bytes. Each character is represented in either 16 bits or 32 bits, i.e. one or two 16-bit code units.
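The same characters can be inspected as UTF-16, sketched here with C11 u"" literals (each char16_t is one 16-bit code unit; on common compilers these literals are encoded as UTF-16):

```c
#include <stdio.h>
#include <uchar.h>   /* char16_t (C11) */

int main(void) {
    char16_t a[]     = u"A";   /* 1 code unit  -> 2 bytes */
    char16_t han[]   = u"中";  /* 1 code unit  -> 2 bytes */
    char16_t emoji[] = u"😀";  /* 2 code units -> 4 bytes (a surrogate pair) */

    /* sizeof includes the terminating u'\0', so subtract one code unit. */
    printf("A  -> %zu bytes\n", sizeof a     - sizeof(char16_t));
    printf("中 -> %zu bytes\n", sizeof han   - sizeof(char16_t));
    printf("😀 -> %zu bytes\n", sizeof emoji - sizeof(char16_t));
    return 0;
}
```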
Then why does UTF-16 exist? Because originally Unicode was a fixed 16-bit code (UCS-2), not an 8-bit one. Java adopted that original 16-bit form, which later became UTF-16.
In a nutshell, you don't need UTF-16 anywhere unless it has already been adopted by the language or platform you are working on.
A Java program invoked by a web browser works with UTF-16 strings internally, but the web browser sends and receives the characters as UTF-8.