Let us assume the database character set is AL32UTF8 (Oracle's UTF-8), which is the recommended setting in recent versions of Oracle. In this case, some characters take more than 1 byte to store in the database.
If you define the field as
VARCHAR2(11 BYTE), Oracle can use up to 11 bytes for storage, but you may not actually be able to store 11 characters in the field, because some of them take more than one byte to store, e.g. non-English characters.
By defining the field as
VARCHAR2(11 CHAR) you tell Oracle it can use enough space to store 11 characters, no matter how many bytes it takes to store each one. A single character may require up to 4 bytes.
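A minimal sketch of the difference, assuming an AL32UTF8 (UTF-8) database; the table name t and its column names are made up for illustration:

    CREATE TABLE t (
      b VARCHAR2(11 BYTE),
      c VARCHAR2(11 CHAR)
    );

    -- 'ééééééééééé' is 11 characters but 22 bytes in UTF-8 ('é' = 2 bytes)
    INSERT INTO t (c) VALUES ('ééééééééééé');  -- fits: 11 characters allowed
    INSERT INTO t (b) VALUES ('ééééééééééé');  -- fails: ORA-12899 value too large for column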
One has space for exactly 11 bytes, the other for exactly 11 characters. Multi-byte character sets such as the Unicode variants may use more than one byte per character, so the 11-byte field might hold fewer than 11 characters, depending on the encoding.
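One way to see this from SQL is with Oracle's LENGTH (characters) and LENGTHB (bytes) functions, e.g. in a UTF-8 database:

    -- 'été' is 3 characters but 5 bytes in UTF-8 (each 'é' takes 2 bytes)
    SELECT LENGTH('été')  AS char_count,
           LENGTHB('été') AS byte_count
      FROM dual;
    -- char_count = 3, byte_count = 5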
Depending on the system configuration, the size of a CHAR measured in bytes can vary. In your examples:
I am not sure, since I am not an Oracle user, but I assume the difference arises when you use multi-byte character sets such as Unicode (UTF-16/32). In that case, 11 bytes could account for fewer than 11 characters.
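If in doubt, you can ask the database itself; a quick check, assuming you can read the NLS_DATABASE_PARAMETERS view:

    -- NLS_CHARACTERSET: the database character set (e.g. AL32UTF8)
    -- NLS_LENGTH_SEMANTICS: the default (BYTE or CHAR) used when neither is specified
    SELECT parameter, value
      FROM nls_database_parameters
     WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_LENGTH_SEMANTICS');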
Also, those field types might be treated differently with regard to accented characters or case: for example, 'binaryField(ete) = "été"' will not match, while 'charField(ete) = "été"' might (again, I am not sure about Oracle).
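For what it's worth, in Oracle accent and case sensitivity in comparisons are governed by the NLS_COMP and NLS_SORT session settings, not by the BYTE/CHAR qualifier; a sketch, reusing the hypothetical table t from above:

    -- Default binary comparison: accents are significant
    SELECT * FROM t WHERE c = 'ete';         -- does not match 'été'

    -- Switch the session to linguistic, accent-insensitive comparison
    ALTER SESSION SET NLS_COMP = LINGUISTIC;
    ALTER SESSION SET NLS_SORT = BINARY_AI;  -- _AI = accent- (and case-) insensitive

    SELECT * FROM t WHERE c = 'ete';         -- now matches 'été'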