Tags: machine-learning, decision-tree, entropy

Why am I getting a negative information gain?


[SOLVED]

My mistake was that I did not realise that entropy is 0 if all examples are of one type: if all are positive, entropy is 0, and if all are negative it is 0 as well. Entropy is 1 if equal amounts are positive and negative.
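
Those boundary values are easy to sanity-check in a few lines of Python (a minimal sketch; the entropy helper is just for illustration):

```python
import math

def entropy(p):
    """Binary entropy of a set in which a fraction p of the examples is positive."""
    if p in (0.0, 1.0):
        return 0.0  # a pure set carries no uncertainty
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(entropy(1.0))  # all positive -> 0.0
print(entropy(0.0))  # all negative -> 0.0
print(entropy(0.5))  # even split   -> 1.0
```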

It does not make sense that one would get negative information gain.

However, based on this example, I am getting a negative information gain.

Here is the data:

[image: four training examples; the target EnjoySport is Yes for three of them and No for one; Humidity is Normal for one example (a Yes) and High for the other three (two Yes, one No)]

And if I calculate the information gain on the Humidity attribute I get this:

[image: my spreadsheet calculation of the information gain for Humidity, which comes out negative]

Obviously I am missing something here.

EDIT: To clarify how I understand it.

Entropy of the whole system is defined as:

Entropy(S) = -p₊·log2(p₊) - p₋·log2(p₋)

where p₊ and p₋ are the proportions of positive and negative examples in S.

Which in this case then is:

Entropy(S) = -(3/4)·log2(3/4) - (1/4)·log2(1/4) ≈ 0.8113
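
As a quick check, the same number can be computed with a short Python sketch (the counts 3 and 1 are the positive and negative examples from the data above):

```python
import math

def entropy(pos, neg):
    """Entropy of a set with pos positive and neg negative examples."""
    total = pos + neg
    return -sum(c / total * math.log2(c / total) for c in (pos, neg) if c)

print(entropy(3, 1))  # ~0.8113
```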

And the information gain per attribute is defined as:

Gain(S, A) = Entropy(S) - Σ_(v ∈ Values(A)) (|S_v| / |S|)·Entropy(S_v)

where S_v is the subset of examples for which attribute A has value v.

Which for humidity I calculate to:

Entropy(S) - (1/4)·Entropy(S_Normal) - (3/4)·Entropy(S_High)

As per this LibreOffice Calc sheet:

[image: LibreOffice Calc spreadsheet with the calculation]
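
For reference, the same formula evaluated in a minimal Python sketch (the subset counts are read off the data above):

```python
import math

def entropy(pos, neg):
    """Entropy of a set with pos positive and neg negative examples."""
    total = pos + neg
    return -sum(c / total * math.log2(c / total) for c in (pos, neg) if c)

# Entropy(S) - (1/4)*Entropy(S_Normal) - (3/4)*Entropy(S_High)
gain = entropy(3, 1) - (1/4) * entropy(1, 0) - (3/4) * entropy(2, 1)
print(gain)  # ~0.1226, i.e. positive
```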

Or is my understanding of the formula for information gain for an attribute incorrect?


Solution

  • To begin with, I'm assuming your S variable is EnjoySport. (I think you could phrase the text more clearly, BTW.)

    So the entropy of S is 0.8113, but that's the last part with which I agree.

    The entropy of S given Normal is 0, as it is deterministic.

    The entropy of S given High is 0.91829583405448945, but you need to multiply that by 0.75, because that is the probability of High. So that gives you 0.68872187554086706.

    The difference, 0.8113 - 0.6887 ≈ 0.1226, is non-negative, as expected.


    Note that the information gain is the expected reduction in entropy, and the expectation needs to take into account the probability of each conditioning event.
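
For completeness, here is a minimal Python sketch that reproduces these numbers end to end, assuming the four examples from the question (only the Humidity and EnjoySport columns matter; the function names are illustrative):

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy of a list of class labels."""
    total = len(labels)
    return -sum(n / total * math.log2(n / total) for n in Counter(labels).values())

def information_gain(rows, attr, target):
    """Expected reduction in entropy of target when rows are split on attr."""
    base = entropy([r[target] for r in rows])
    remainder = 0.0
    for value in {r[attr] for r in rows}:
        subset = [r[target] for r in rows if r[attr] == value]
        # weight each branch's entropy by the probability of that attribute value
        remainder += len(subset) / len(rows) * entropy(subset)
    return base - remainder

data = [
    {"Humidity": "Normal", "EnjoySport": "Yes"},
    {"Humidity": "High",   "EnjoySport": "Yes"},
    {"Humidity": "High",   "EnjoySport": "No"},
    {"Humidity": "High",   "EnjoySport": "Yes"},
]

print(information_gain(data, "Humidity", "EnjoySport"))  # ~0.1226 (non-negative)
```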