Tags: python, pytorch, neural-network

How to make heat equation dimensionless for neural network in pytorch


I am trying to use PyTorch for making a Physics Informed Neural Network for the heat equation in 1D:

[Image: the 1D heat equation, ∂u/∂t = α ∂²u/∂x²]

I tried the following code to make a loss function for PDE residual:

def lossPDE(self, x_PDE):
    g = x_PDE.clone()
    g.requires_grad = True  # enable differentiation w.r.t. the inputs
    f = self.forward(g)
    # first derivatives w.r.t. both inputs (columns: x, t)
    f_x_t = torch.autograd.grad(f, g, torch.ones([g.shape[0], 1]).to(device),
                                retain_graph=True, create_graph=True)[0]
    # second derivatives w.r.t. both inputs
    f_xx_tt = torch.autograd.grad(f_x_t, g, torch.ones(g.shape).to(device),
                                  create_graph=True)[0]
    f_t = f_x_t[:, [1]]     # du/dt
    f_xx = f_xx_tt[:, [0]]  # d2u/dx2
    residual = f_t - alpha * f_xx
    return self.loss_function(residual, f_hat)  # f_hat is a tensor of zeros, so minimizing drives the residual to zero

I am running a one-day simulation (86,400 seconds) for a bar with a length of 1 meter. Now I want to make my PDE dimensionless. How can I do that in PyTorch, and which part of my code should change? The unit of alpha is already m²/s. I could share my whole code, but it would be hundreds of lines. I very much appreciate any help in advance.
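For context, a common way to nondimensionalize is to rescale x by the bar length L and t by the total simulation time T; the scaled equation ∂u/∂t̃ = (αT/L²) ∂²u/∂x̃² then has a single dimensionless coefficient αT/L². A minimal sketch of that rescaling (the scale names and the value of alpha below are illustrative placeholders, not from the original code):

```python
import torch

# Illustrative scales for this problem: a 1 m bar and a 1-day simulation
L_scale = 1.0       # length scale [m]
T_scale = 86400.0   # time scale [s]
alpha = 1e-5        # example thermal diffusivity [m^2/s] (placeholder value)

# Dimensionless diffusivity: the only coefficient left in the scaled PDE
alpha_nd = alpha * T_scale / L_scale**2

def to_dimensionless(x_PDE):
    """Map physical (x [m], t [s]) collocation points to (x/L, t/T)."""
    x_nd = x_PDE.clone()
    x_nd[:, 0] = x_nd[:, 0] / L_scale  # column 0: x
    x_nd[:, 1] = x_nd[:, 1] / T_scale  # column 1: t
    return x_nd

# The corner point (x = 1 m, t = 86400 s) maps to (1, 1)
pts = torch.tensor([[1.0, 86400.0], [0.5, 43200.0]])
print(to_dimensionless(pts))
```

The network would then be trained on the scaled inputs, with `alpha_nd` replacing `alpha` in the PDE residual.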


Solution

  • Extending my previous comment into an answer.

    As long as you use PyTorch operators (which take care of gradients for you), you can define a generic Python function that takes the ground-truth value y and the predicted value yhat as input and returns a scalar.

    Very broadly:

    def my_basic_loss(y, yhat):
        # mean relative absolute error
        loss = torch.mean(torch.abs(y - yhat) / torch.abs(y))
        return loss
    

    Alternatively, you can use quadratic (squared) errors, which is usually a better choice for optimization.
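A quadratic variant of the loss above might look like this (the function name is illustrative):

```python
import torch

def my_quadratic_loss(y, yhat):
    # mean relative squared error; penalizes large deviations more
    # strongly than the absolute-value version
    return torch.mean(((y - yhat) / y) ** 2)

y = torch.tensor([1.0, 2.0, 4.0])
yhat = torch.tensor([1.1, 1.8, 4.0])
print(my_quadratic_loss(y, yhat))
```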

    No need to manually call autograd.

    For operations that cannot be expressed with built-in operators, you should extend PyTorch instead. See this post for extra details.
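As a minimal sketch of what "extending PyTorch" means here, a toy `torch.autograd.Function` with a hand-written backward pass (the class name and operation are illustrative):

```python
import torch

class SquareFn(torch.autograd.Function):
    """Toy example: x^2 with an explicitly defined backward pass."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)  # stash x for use in backward
        return x * x

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * 2 * x  # d(x^2)/dx = 2x

x = torch.tensor([3.0], requires_grad=True)
y = SquareFn.apply(x)
y.backward()
print(x.grad)  # gradient of x^2 at x = 3 is 6
```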