[this post references my knowledge of mathematical logic, not philosophy style logic or programming style logic]
No, what he defined is the "if and only if" function. So basically "A iff B" is true when either both A and B are false, or both are true.
This stems from "A -> B", read as "A implies B". If a statement A implies a statement B, then we know that if A is true, B must be true. The truth table for that is:
A|B|A->B
F|F|T
F|T|T
T|F|F
T|T|T
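If it helps to see that table outside of logic notation, here's a minimal Python sketch (the helper name `implies` is just mine, not anything standard) that prints the same four rows:

    from itertools import product

    def implies(a, b):
        # "A -> B" is false only when A is true and B is false
        return (not a) or b

    # print the truth table for A -> B
    for a, b in product([False, True], repeat=2):
        print(a, b, implies(a, b))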
If you think about it this way: if we say "1=2 -> 3=4" or "1=2 -> 3=3", we know 1 is not equal to 2, and since the premise is false, the implication (A->B as a whole) is true either way.
If A is true and B is false, then obviously A does not imply B, so the implication is false. If both are true, again, the implication is true.
This definition is kind of confusing, but think of it this way. The implication "x=1 -> 2x=2" is obviously true. If x=4, then since x!=1, the premise is false, and we still want the implication itself to count as true regardless of whether 2x=2 happens to hold. If we didn't use the above truth table, the implication would only be true for some values of x, which isn't what we want.
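To make that concrete, here's a quick (made-up) check in Python: with the truth table above, "x=1 -> 2x=2" comes out true for every value of x we try, including ones like x=4 where the premise is false:

    def implies(a, b):
        # "A -> B" is false only when A is true and B is false
        return (not a) or b

    # "x=1 -> 2x=2" should hold no matter what x is
    print(all(implies(x == 1, 2 * x == 2) for x in range(-10, 11)))  # True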
A iff B is sometimes written A<->B, because what we are saying is that "A implies B and B implies A", so (A->B) and (B->A). We can see that the truth table of that is the same as your "noxor" command:
A|B|A->B|B->A|(A->B) and (B->A)
F|F|T|T|T
F|T|T|F|F
T|F|F|T|F
T|T|T|T|T
So that noxor thing is actually pretty useful in logic because, for example, it's transitive (if A<->B and B<->C, then A<->C) and symmetric (A<->B is the same as B<->A, just like A noxor B = B noxor A).
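If you want to sanity-check the noxor claim and those two properties yourself, here's a small Python sketch (I'm assuming "noxor" means XNOR, i.e. true exactly when the two inputs match):

    from itertools import product

    def implies(a, b):
        return (not a) or b

    def iff(a, b):
        # A <-> B means (A -> B) and (B -> A)
        return implies(a, b) and implies(b, a)

    def noxor(a, b):
        # assumed meaning of "noxor": XNOR, true when a and b agree
        return not (a ^ b)

    combos2 = list(product([False, True], repeat=2))

    # A <-> B matches noxor on every row of the truth table
    print(all(iff(a, b) == noxor(a, b) for a, b in combos2))

    # symmetry: A <-> B is the same as B <-> A
    print(all(iff(a, b) == iff(b, a) for a, b in combos2))

    # transitivity: if A <-> B and B <-> C, then A <-> C
    print(all(
        implies(iff(a, b) and iff(b, c), iff(a, c))
        for a, b, c in product([False, True], repeat=3)
    ))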
In summary: NO, CHARLIE SHEEN IS NOT WINNING.