There are two ways to interpret the input data: either -128 is the lowest value and 127 the highest (i.e. truly signed data), or 0 is the lowest value, 127 sits somewhere in the middle, the next "higher" number is -128, and -1 is the "highest" value (that is, the most significant bit has already been misinterpreted as a sign bit in two's complement notation).
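To make the second interpretation concrete, here is a small sketch (assuming 8-bit chars and two's-complement representation, as on essentially every current platform) that prints how the same bit patterns read under both views; the byte values are chosen purely for illustration:

#include <stdio.h>
#include <string.h>

int main(void) {
    /* the same bit patterns, read as unsigned char vs. signed char */
    unsigned char patterns[] = { 0x00, 0x01, 0x7F, 0x80, 0xFF };
    for (size_t i = 0; i < sizeof patterns; i++) {
        signed char s;
        memcpy(&s, &patterns[i], 1);   /* reinterpret the byte, no conversion */
        printf("bits 0x%02X: unsigned %3u, signed %4d\n",
               patterns[i], patterns[i], s);
    }
    return 0;
}

The 0x80 and 0xFF rows show the wraparound described above: 128 and 255 in the unsigned view become -128 and -1 in the signed one.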
Assuming you mean the latter, the formally correct way is
signed char in = ...;
/* map the wrapped negative values back into the 0..255 range */
unsigned char out = (in < 0) ? (in + 256) : in;
which at least gcc properly recognizes as a no-op.
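For completeness, a self-contained sketch of that conversion applied to a small buffer; the sample values are made up purely for illustration:

#include <stdio.h>

int main(void) {
    /* hypothetical samples that were read as signed but really hold 0..255 data */
    signed char in[] = { 0, 64, 127, -128, -64, -1 };
    unsigned char out[sizeof in];

    for (size_t i = 0; i < sizeof in; i++)
        out[i] = (in[i] < 0) ? (in[i] + 256) : in[i];

    for (size_t i = 0; i < sizeof in; i++)
        printf("%4d -> %3u\n", in[i], out[i]);
    /* expected: 0 -> 0, 64 -> 64, 127 -> 127, -128 -> 128, -64 -> 192, -1 -> 255 */
    return 0;
}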