Tags: tensorflow, tensorboard

TensorBoard - What does 'trace inputs' mean/do?


What does the 'Trace inputs' option in TensorBoard's Graph tab do? I can see that it highlights different parts of the graph, but I don't know how to read the result. I was hoping to use it to see more clearly what is connected to what in my graph, but I am having a hard time.



Solution

  • If you highlight a given node in TensorBoard, you can toggle "Trace inputs" to see the upstream dependencies of that node. For example, in a deep neural network you can see all the layers feeding into the layer you clicked on (a minimal graph to try this on is sketched after this answer).

    It can be useful if you have a very large, complex graph and want to filter out the visual noise created by dozens or hundreds of auxiliary nodes, so you can isolate only the operations relevant to a particular node that's behaving unexpectedly. Depending on your network architecture and how your graph ends up rendered in TensorBoard, the nodes feeding strictly into a given node in the computation graph may not be readily apparent at first glance.

    It's also incredibly useful if you're trying to present a network architecture to someone step by step: just click on the nodes in sequence to avoid overwhelming people or expecting them to know which arrows you're talking about.
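
    A minimal sketch for trying this out, assuming TensorFlow 2.x with Keras (the layer names and the `logs/trace_demo` directory below are arbitrary choices for illustration): it writes a small three-layer model's graph to a log directory so you can open it in TensorBoard and trace the inputs of any layer.

    ```python
    # A tiny model whose graph you can explore with "Trace inputs".
    # Assumptions: TensorFlow 2.x with Keras; layer names and the log
    # directory ("logs/trace_demo") are arbitrary choices for this demo.
    import numpy as np
    import tensorflow as tf

    # Stack a few named layers so the Graph tab has something to trace through.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(16,)),
        tf.keras.layers.Dense(32, activation="relu", name="hidden_1"),
        tf.keras.layers.Dense(16, activation="relu", name="hidden_2"),
        tf.keras.layers.Dense(1, name="output"),
    ])
    model.compile(optimizer="adam", loss="mse")

    # write_graph=True makes the callback export the graph definition.
    tb_callback = tf.keras.callbacks.TensorBoard(log_dir="logs/trace_demo",
                                                 write_graph=True)

    # One short epoch on random data is enough to emit the graph to the log dir.
    x = np.random.rand(64, 16).astype("float32")
    y = np.random.rand(64, 1).astype("float32")
    model.fit(x, y, epochs=1, callbacks=[tb_callback], verbose=0)
    ```

    Then run `tensorboard --logdir logs/trace_demo`, open the Graph tab, click the `output` node, and toggle "Trace inputs": `hidden_2`, `hidden_1`, and their inputs stay highlighted while the rest of the graph is dimmed.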