Tags: python, pytorch, tensorboard, pytorch-lightning

How to plot multiple scalars in Tensorboard in the same figure without spamming the experiment list?


This is not an answer, just a workaround.

This too is not an answer (the images below are taken from there).

I am looking for an answer with code in pytorch-lightning.

This could also be phrased as: how to plot training and validation losses on the same graph in TensorBoard with PyTorch Lightning, without spamming TensorBoard?


I want to create graphs like this one:

[Image: several scalar series plotted together on a single TensorBoard chart]

but without causing spam like this:

[Image: the experiment list cluttered with one run entry per scalar]

All I could find was this answer, which explains only how to either plot such a multi-scalar graph with the spam, or avoid the spam by splitting the graphs.

How can I just get multiple scalars into a single graph?

Code (using PyTorch Lightning):

tb = self.logger.experiment  # This is a SummaryWriter

label_ind_by_names = {
    "A": 0,
    "B": 1,
    "C": 2,
    "D": 3,
    "E": 4,
    "F": 5,
    "G": 6,
    "H": 7,
    "I": 8,
    "J": 9,
}

import numpy as np

computed_confusion = np.random.randint(low=0, high=100, size=(10, 10))
# Per-class accuracy: diagonal (correct counts) over row sums (true-class totals);
# eps avoids division by zero for empty classes.
per_class_accuracy = computed_confusion.diagonal() / (computed_confusion.sum(1) + np.finfo(float).eps)

# Split classes into detection-rate vs. false-alarm groups by key name.
# (The original `any("..." in n for n in names)` iterated over the *characters*
# of each key, so it could never match; a plain substring check is the intended
# logic. With the placeholder keys "A".."J" these filters match nothing; the
# real label names contain these substrings.)
drs = {
    name: per_class_accuracy[label_ind_by_names[name]]
    for name in label_ind_by_names if "is_damaged_True" in name
}
fas = {
    name: 1.0 - per_class_accuracy[label_ind_by_names[name]]
    for name in label_ind_by_names if "is_damaged_False" in name
}
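
As a quick sanity check of the per-class accuracy step above, here is a made-up 2x2 confusion matrix (values are hypothetical):

import numpy as np

cm = np.array([[8, 2],
               [1, 9]])  # rows = true class, columns = predicted class
print(cm.diagonal() / (cm.sum(1) + np.finfo(float).eps))  # ~[0.8, 0.9]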

Code for the separate graphs:

for name, dr in drs.items():
    tb.add_scalar(f"dr/{name}", dr, self.current_epoch)
for name, fa in fas.items():
    tb.add_scalar(f"fa/{name}", fa, self.current_epoch)

And the code for the united graphs, which clutters the experiment list:

tb.add_scalars("DR", drs, self.current_epoch)
tb.add_scalars("FA", fas, self.current_epoch)
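
For context, the spam comes from how add_scalars works: under the hood it opens a separate FileWriter (i.e. a separate run directory) per dictionary key. A minimal standalone sketch, with a made-up log dir:

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/demo")  # hypothetical log dir
for step in range(3):
    writer.add_scalars("DR", {"A": 0.1 * step, "B": 0.2 * step}, step)
writer.close()
# TensorBoard now lists child runs such as runs/demo/DR_A and runs/demo/DR_B,
# one per key -- this is exactly the experiment-list spam.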

Solution

  • I finally found a sufficient answer here. Here is the doc.

    Here is an adaptation to PyTorch Lightning:

    def on_fit_start(self):
        tb = self.logger.experiment  # noqa

        # These tags must match what self.log() writes: with on_step=True and
        # on_epoch=True, Lightning logs "<name>_step" and "<name>_epoch".
        layout_scalar_names_losses = ["train_loss_epoch", "val_loss_epoch"]
        layout = {
            "metrics": {
                "loss_epoch": ["Multiline", layout_scalar_names_losses],
            }
        }

        tb.add_custom_scalars(layout)
    

    and

    def _common_log(self, loss, stage: str):
        assert stage in ("train", "val", "test")
        # on_step=True / on_epoch=True makes Lightning log both
        # f"{stage}_loss_step" and f"{stage}_loss_epoch".
        self.log(f"{stage}_loss", loss, on_step=True, on_epoch=True)
    
    def training_step(self, batch, batch_nb):
        stage = "train"
        augmented_image, outputs, labels, loss = self._common_step(batch, batch_nb, stage=stage)
        self._common_log(loss, stage=stage)
        return {"loss": loss, "outputs": outputs, "labels": labels}
    
    def validation_step(self, batch, batch_nb):
        stage = "val"
        augmented_image, outputs, labels, loss = self._common_step(batch, batch_nb, stage=stage)
        self._common_log(loss, stage=stage)
        return {"loss": loss, "outputs": outputs, "labels": labels}
    
    def _common_step(self, batch, batch_nb, stage: str):
        assert stage in ("train", "val", "test")
        augmented_image, labels = batch
        # The model returns (main outputs, auxiliary outputs), Inception-style.
        outputs, aux_outputs = self(augmented_image)
        loss = self._criterion(outputs, labels)

        return augmented_image, outputs, labels, loss
    

    This shows the following under the "Custom Scalars" tab in TensorBoard:

    [Images: train_loss_epoch and val_loss_epoch drawn together on a single chart under the Custom Scalars tab]

    I was still unable to do this for scalars other than the losses, but will update this answer when I do, as I am certain this is the way to go.
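
    For reference, the same mechanism can be exercised with a bare SummaryWriter outside Lightning; a minimal sketch (the log dir and loss values are made up):

    from torch.utils.tensorboard import SummaryWriter

    writer = SummaryWriter("runs/custom_scalars_demo")  # hypothetical log dir

    # Declare the layout once; "Multiline" takes a list of tag regexes to merge
    # into one chart under the "Custom Scalars" tab.
    writer.add_custom_scalars({
        "metrics": {
            "loss_epoch": ["Multiline", ["train_loss_epoch", "val_loss_epoch"]],
        }
    })

    # Log each scalar under its own tag -- no extra runs are created.
    for epoch in range(5):
        writer.add_scalar("train_loss_epoch", 1.0 / (epoch + 1), epoch)
        writer.add_scalar("val_loss_epoch", 1.2 / (epoch + 1), epoch)
    writer.close()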