I see TensorFlow offers the use of fp16 in training and testing. Is it safe to use, or will it have an adverse effect on the final result?
It can affect the output while training, because float32 provides extra numerical precision that fp16 lacks. After training, however, you can quantize the weights and operations in your network to float16 for faster inference, provided your hardware supports float16 natively. If the hardware does not support float16, you will more likely see a slowdown than a speedup.
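As a concrete example of the "quantize after training" approach, TensorFlow Lite supports post-training float16 quantization. Here is a minimal sketch, assuming you already trained in float32 and exported the model as a SavedModel (the paths are placeholders):

```python
import tensorflow as tf

# Load the trained float32 model; "saved_model_dir" is a placeholder
# for wherever your SavedModel was exported after training.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")

# Enable default optimizations and restrict supported types to float16,
# so weights are stored in half precision.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]

# Convert; at inference time, ops run in float16 where the hardware
# supports it and fall back to float32 otherwise (the fallback is
# where the slowdown mentioned above can come from).
tflite_model = converter.convert()

with open("model_fp16.tflite", "wb") as f:
    f.write(tflite_model)
```

This keeps the full float32 precision during training, where it matters most for gradient computation, and only trades precision for speed and model size at deployment time.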