My total loss function has three terms:
L = λ1*L1 + λ2*L2 + λ3*L3
All of the λ values are set via loss_weights={"λ1": 1, "λ2": 1, "λ3": 1}
when I call model.compile.
Now I want to remove the L1 term.
Is it OK if I just set its weight to zero, i.e. loss_weights={"λ1": 0, "λ2": 1, "λ3": 1},
instead of removing the output that produces L1 from my model?
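Concretely, my setup looks roughly like this (the model below is a simplified stand-in, and "out1"/"out2"/"out3" are placeholder output names; in Keras the loss_weights keys have to match the model's output names):

    from tensorflow import keras

    # Toy stand-in for my real model: three named outputs.
    inputs = keras.Input(shape=(16,))
    h = keras.layers.Dense(32, activation="relu")(inputs)
    out1 = keras.layers.Dense(1, name="out1")(h)
    out2 = keras.layers.Dense(1, name="out2")(h)
    out3 = keras.layers.Dense(1, name="out3")(h)
    model = keras.Model(inputs, [out1, out2, out3])

    # Current compile call: every loss term gets weight 1.
    model.compile(
        optimizer="adam",
        loss={"out1": "mse", "out2": "mse", "out3": "mse"},
        loss_weights={"out1": 1.0, "out2": 1.0, "out3": 1.0},
    )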
Yes, that should be fine: with a weight of zero, L1 contributes nothing to the total loss, so no gradients flow from it into the model. Note that Keras will typically still compute (and log) that loss during training, it just won't affect the weight updates. This trick is commonly used in object detection losses, so it is known to work in practice.
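As a minimal sketch, reusing the toy model from the snippet in the question (output names are placeholders), the change only touches the compile call:

    # Same model, same losses; only the weight for the first term is zeroed.
    # The zero-weighted loss is typically still computed and shown in the
    # training logs, but it contributes nothing to the total loss and
    # therefore produces no gradient.
    model.compile(
        optimizer="adam",
        loss={"out1": "mse", "out2": "mse", "out3": "mse"},
        loss_weights={"out1": 0.0, "out2": 1.0, "out3": 1.0},
    )

If you later want to stop paying the cost of computing L1 at all, you can drop that entry from the loss dict (or remove the output), but for simply disabling its influence on training the zero weight is enough.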