Tags: machine-learning, bfloat16, intel-pytorch, intel-ai-analytics

How do you specify the bfloat16 mixed precision with the Intel Extension for PyTorch?


I would like to know how to use mixed precision with PyTorch and Intel Extension for PyTorch.

I have tried looking at the documentation on their GitHub, but I can't find anything that specifies how to go from fp32 to bfloat16.


Solution

  • The IPEX GitHub repository might not be the best place to look for API documentation. Try the PyTorch IPEX documentation page instead, which includes examples of how to apply the API.

    In both cases you pass your model and optimizer to `ipex.optimize` and select the precision with the `dtype` keyword (note that `optimizer` is a keyword argument, not the second positional one). For fp32:

    import torch
    import intel_extension_for_pytorch as ipex

    model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.float32)

    For bfloat16:

    model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)
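As a fuller sketch, `ipex.optimize` is typically combined with PyTorch's CPU autocast context so that eligible ops in the forward pass actually run in bfloat16. The model, optimizer, and tensor shapes below are illustrative stand-ins, and the IPEX import is guarded so the autocast pattern itself is plain PyTorch:

```python
import torch

# IPEX is optional in this sketch: the autocast pattern is plain PyTorch,
# and ipex.optimize() is applied only when the extension is installed.
try:
    import intel_extension_for_pytorch as ipex
    HAVE_IPEX = True
except ImportError:
    HAVE_IPEX = False

# Toy model and optimizer as stand-ins for your own
model = torch.nn.Linear(8, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

if HAVE_IPEX:
    # Prepare model and optimizer for bfloat16 mixed precision
    model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

x = torch.randn(4, 8)
# Run the forward pass under CPU autocast so eligible ops execute in bfloat16
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)
    loss = out.sum()

loss.backward()   # gradients flow back through the mixed-precision graph
optimizer.step()
print(out.dtype)  # torch.bfloat16 under autocast
```

Keeping `loss.backward()` outside the autocast block is the usual pattern: only the forward pass needs the reduced-precision context.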