Since the eventual aim of mutation testing is to detect faults in programs by finding test cases that would reveal those faults (by changing parts of the program and then verifying the output), it seems that a fault related to a mutant can be detected only if the mutant's output differs from the original's.
However, if the developer is not sure what the program's output should be, or if different test cases are indeed expected to produce different values, how can he or she tell whether a mutant has revealed a fault (except, of course, when the mutation causes compile-time errors)?
EDIT: Is it correct to say that a mutant is killed simply if the original and the mutant produce different outputs, without verifying whether the original program's output is right?
It depends on what type of mutation testing you are talking about.
In weak mutation testing, a mutant is considered killed if it causes a change in the internal state of the program - the change does not even need to be externally visible.
In firm mutation testing, the mutant is considered killed if the change propagates some distance from its origin.
In strong mutation testing, the change must propagate all the way to the program's output and be detected by a failing assertion in a test case.
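To make the difference between these criteria concrete, here is a small sketch (hypothetical functions, written for this answer rather than taken from any tool) in which one input kills a mutant under the weak criterion but can never kill it under the strong one:

```python
# Hypothetical example: the mutant changes an internal value but not the output.

def abs_diff(a, b):
    d = a - b            # intermediate (internal) state
    return d if d >= 0 else -d

def abs_diff_mutant(a, b):
    d = b - a            # mutant: operands swapped
    return d if d >= 0 else -d

# For the input (5, 3) the internal value d differs (2 vs. -2), so this input
# kills the mutant under the weak criterion.
# The returned value is identical for every input, so no assertion on the
# output can ever fail: the mutant survives under the strong criterion.
assert abs_diff(5, 3) == abs_diff_mutant(5, 3) == 2
```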
So the answer to
"Is it correct to say that a mutant is killed simply if the original and the mutant produce different outputs, without verifying whether the original program's output is right?"
is: it depends on which of those criteria you apply. Under weak and firm mutation, a difference alone kills the mutant; under strong mutation, the difference must make a test assertion fail. None of these criteria independently verify that the original program's output is right - the original program (and, in the strong case, the tests that pass on it) simply serves as the baseline.
The popular open source mutation testing tools are mainly (all?) strong mutation testing systems.
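Under the strong criterion those tools apply, a difference in output is therefore not enough on its own: some test has to exercise an input where the outputs diverge and assert on the result. A minimal sketch (hypothetical function and tests, not taken from any specific tool):

```python
# The mutant's output differs from the original's at total == 100, but it only
# counts as killed once a test assertion actually fails on it.

def price(total):
    discount = 0.10 if total > 100 else 0.0      # original: discount strictly above 100
    return round(total * (1 - discount), 2)

def price_mutant(total):
    discount = 0.10 if total >= 100 else 0.0     # mutant: '>' changed to '>='
    return round(total * (1 - discount), 2)

def suite(impl):
    """Run the tests against one implementation; True if every assertion passes."""
    try:
        assert impl(150) == 135.0    # well above the threshold
        assert impl(50) == 50.0      # well below the threshold
        return True
    except AssertionError:
        return False

# The outputs do differ at the boundary ...
print(price(100), price_mutant(100))                        # 100.0 90.0

# ... but this suite never exercises the boundary, so the mutant survives.
print("killed:", suite(price) and not suite(price_mutant))  # killed: False

# A boundary test observes the difference and kills the mutant.
def suite_with_boundary(impl):
    return suite(impl) and impl(100) == 100.0

print("killed:", suite_with_boundary(price) and not suite_with_boundary(price_mutant))  # killed: True
```

Note that nothing here checks price against a specification: the only baseline is that the suite passes on the original, which is the sense in which mutation testing never verifies the original program's output.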