From what I know, a loss function in the machine learning context is usually a function of two variables: the prediction and the ground truth. Is there such a thing as a loss function that does not depend on the ground truth? For a simple example, if I'm predicting a real-valued variable, I can use the mean squared error as the loss function. But if I know, on physical grounds, that the output describes some positive quantity, I can define a custom loss function that is mostly the MSE but has an additional term that penalizes the model every time it makes a negative prediction. This additional term need not take the value of the ground truth into account. Is this kind of idea prevalent? Does it have a name?
A loss function does not have to depend on the ground truth. It is literally anything that you minimize: it can be a per-point comparison to the ground truth, it can enforce some property of the predictions (like what you describe), it can be a regularization term (e.g. the norm of the activations), or it can depend on overall statistics (e.g. a loss encouraging the predictions for two different points to differ). There are no "rules" here; anything you want to minimize is, by definition, a loss function.
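To make this concrete, here is a minimal sketch of the loss you describe, written with PyTorch (that framework choice, the function name `mse_with_negativity_penalty`, and the weight `lam` are my own illustrative assumptions, not anything from your post; any autodiff framework works the same way):

```python
import torch

def mse_with_negativity_penalty(pred, target, lam=1.0):
    # Standard MSE term: depends on the ground truth.
    mse = torch.mean((pred - target) ** 2)
    # Penalty term: depends only on the predictions.
    # torch.relu(-pred) is zero for non-negative predictions and
    # grows with the magnitude of negative ones; squaring keeps it smooth.
    penalty = torch.mean(torch.relu(-pred) ** 2)
    # lam (assumed hyperparameter) trades off fit against the constraint.
    return mse + lam * penalty
```

The gradient flows through both terms, so the optimizer sees the penalty exactly like any other part of the loss; `lam` just controls how strongly negativity is discouraged relative to the data fit.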