[c] 1 = false and 0 = true?

I came across an is_equals() function in a C API at work that returned 1 for non-equal SQL tables (false) and 0 for equal ones (true). I only realized this after running test cases against my code, one for the positive case and one for the negative, and both failed, which at first made little sense. The API code itself does not have a bug; its documentation records this behavior correctly.

My questions: are there upside-down worlds / parallel universes / programming languages where this logical NOTing is normal? Isn't 1 usually true? Did the author of the API make an error?




This upside-down world is common with process exit statuses. The shell variable $? reports the exit status of the last program run from the shell, so it is easy to tell whether a program succeeded or failed:

$ false ; echo $?
1
$ true ; echo $?
0

This was chosen because there is only a single way for a program to succeed, but there can be dozens of reasons why it fails; by allowing many distinct failure codes, one program can determine why another program failed without having to parse its output.
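
For instance, a C program can report the single success case with 0 and reserve a distinct nonzero code for each failure mode. A minimal sketch (the file name and the code number 2 are made up for illustration):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        FILE *f = fopen("input.txt", "r"); /* hypothetical input file */
        if (f == NULL) {
            /* One of many possible failure modes: give it its own
               code so a caller (or the shell's $?) can tell it apart
               from other failures. */
            fprintf(stderr, "cannot open input.txt\n");
            return 2;
        }
        fclose(f);
        return EXIT_SUCCESS; /* 0: the single way to succeed */
    }

Running this and then echoing $? prints 2 when input.txt is missing and 0 otherwise.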

A concrete example is the aa-status program supplied with the AppArmor mandatory access control tool:

   Upon exiting, aa-status will set its return value to the
   following values:

   0   if apparmor is enabled and policy is loaded.

   1   if apparmor is not enabled/loaded.

   2   if apparmor is enabled but no policy is loaded.

   3   if the apparmor control files aren't available under
       /sys/kernel/security/.

   4   if the user running the script doesn't have enough
       privileges to read the apparmor control files.

(I'm sure there are more widespread programs with this behavior, but this is one I know well. :)
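
To show how a caller benefits, here is a POSIX C sketch that runs aa-status via system() and dispatches on the documented codes above (it assumes aa-status is installed and on the PATH):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>

    int main(void) {
        /* system() returns a wait status; -1 means the shell itself
           could not be run. */
        int status = system("aa-status > /dev/null 2>&1");
        if (status == -1) {
            perror("system");
            return EXIT_FAILURE;
        }
        if (WIFEXITED(status)) {
            switch (WEXITSTATUS(status)) { /* the documented codes */
            case 0:  puts("enabled, policy loaded");        break;
            case 1:  puts("not enabled/loaded");            break;
            case 2:  puts("enabled, but no policy loaded"); break;
            case 3:  puts("control files not available");   break;
            case 4:  puts("insufficient privileges");       break;
            default: puts("undocumented exit code");        break;
            }
        }
        return EXIT_SUCCESS;
    }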


There's no good reason for 1 to be true and 0 to be false; that's just the way things have always been notated. So from a logical perspective, the function in your API isn't "wrong", per se.

That said, it's normally inadvisable to work against the idioms of whatever language or framework you're using without a damn good reason. So whoever wrote this function was probably being bone-headed, assuming it's not simply a bug.
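
If you're stuck with such an API, one option is to hide the status-style result behind a conventionally named boolean wrapper. A minimal sketch; the stub stands in for the is_equals() from the question, and the wrapper name is invented:

    #include <stdbool.h>
    #include <stdio.h>

    /* Stub standing in for the API's is_equals(): 0 means the tables
       ARE equal, nonzero means they differ (status-style result). */
    static int is_equals(int table_a, int table_b) {
        return (table_a == table_b) ? 0 : 1;
    }

    /* Wrapper restoring the usual C convention: true means "equal". */
    static bool tables_are_equal(int table_a, int table_b) {
        return is_equals(table_a, table_b) == 0;
    }

    int main(void) {
        printf("%d\n", tables_are_equal(1, 1)); /* 1 (true)  */
        printf("%d\n", tables_are_equal(1, 2)); /* 0 (false) */
        return 0;
    }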


It may very well be a mistake on the part of the original author; however, the notion that 1 is true and 0 is false is not universal. In shell scripting, 0 is returned for success and any other number for failure. In other languages, such as Ruby, only nil and false are considered false and any other value is considered true, so in Ruby both 1 and 0 are truthy.


I suspect it's just following the Unix / Linux convention of returning 0 on success.

Does the documentation really say that 1 means false and 0 means true?


I'm not sure if I'm answering the question right, but here's a familiar example:

The return value of GetLastError() on Windows is nonzero if there was an error, and zero (ERROR_SUCCESS) otherwise. The reverse usually holds for the return value of the Win32 function you actually called: nonzero means success, zero means failure.
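
A sketch of that pattern, assuming Windows and <windows.h> (the file name is made up; DeleteFileA returns nonzero on success and zero on failure):

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* DeleteFileA returns nonzero (TRUE) on success, zero on failure. */
        if (!DeleteFileA("no_such_file.tmp")) {
            /* After a failure, GetLastError() holds a nonzero code,
               e.g. ERROR_FILE_NOT_FOUND (2). */
            printf("failed, GetLastError() = %lu\n",
                   (unsigned long)GetLastError());
        } else {
            printf("succeeded\n");
        }
        return 0;
    }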