```c
#include <stdio.h>
#include <string.h>

void printlen(char *s, char *t) {
    int c = 0;
    int len = ((strlen(s) - strlen(t) > c) ? strlen(s) : strlen(t));
    printf("%d", len);
}

int main(void) {
    char *x = "abc";
    char *y = "defgh";
    printlen(x, y);
}
```
`strlen(s)` is 3 and `strlen(t)` is 5, so why is `strlen(s) - strlen(t) > c` evaluating to true?
The `strlen` function returns a value of type `size_t`, which is unsigned. Subtracting unsigned 5 from unsigned 3 therefore wraps around to a very large unsigned number. That number is greater than 0, so the condition is true, causing `strlen(s)` to be evaluated and assigned to `len`.
The result of the subtraction should be cast to `int` to properly store a signed value:
```c
int len = (((int)(strlen(s) - strlen(t)) > c) ? strlen(s) : strlen(t));
```
Better yet, cast the result of each `strlen` call before subtracting, to avoid an out-of-range conversion from unsigned to signed, which is implementation-defined:
```c
int len = (((int)strlen(s) - (int)strlen(t)) > c) ? strlen(s) : strlen(t);
```