
Shap & PyTorch Lightning - Problem with Tensor size


I am trying to use shap to explain the outputs of a PyTorch (Lightning) model. Here is the code:

train_size = int(0.7 * len(dataset))
val_size = int(0.1 * len(dataset))
test_size = len(dataset) - train_size - val_size

train_dataset, val_dataset, test_dataset = torch.utils.data.random_split(
    dataset, [train_size, val_size, test_size]
)

train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=256, shuffle=True)
val_dataloader = torch.utils.data.DataLoader(val_dataset, batch_size=256, shuffle=True)
test_dataloader = torch.utils.data.DataLoader(test_dataset, batch_size=256, shuffle=True)

model = Model.load_from_checkpoint("path")

batch = next(iter(test_dataloader))
x, _, _ = batch

background = x[:100].to(model.device)
test_points = x[100:180].to(model.device)

# model(test_points)  # NOTE: this line runs with no warning/error

e = shap.DeepExplainer(model, background)
shap_values = e.shap_values(test_points)

The last line of the code raises the following error:

Traceback (most recent call last):
  File "shap_computation.py", line 40, in <module>
    main()
  File "shap_computation.py", line 35, in main
    shap_values = e.shap_values(test_points)
  File "virtualenv/lib/python3.9/site-packages/shap/explainers/_deep/__init__.py", line 124, in shap_values
    return self.explainer.shap_values(X, ranked_outputs, output_rank_order, check_additivity=check_additivity)
  File "virtualenv/lib/python3.9/site-packages/shap/explainers/_deep/deep_pytorch.py", line 185, in shap_values
    sample_phis = self.gradient(feature_ind, joint_x)
  File "virtualenv/lib/python3.9/site-packages/shap/explainers/_deep/deep_pytorch.py", line 121, in gradient
    grad = torch.autograd.grad(selected, x,
  File "virtualenv/lib/python3.9/site-packages/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "virtualenv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 62, in __call__
    return self.hook(module, *args, **kwargs)
  File "virtualenv/lib/python3.9/site-packages/shap/explainers/_deep/deep_pytorch.py", line 226, in deeplift_grad
    return op_handler[module_type](module, grad_input, grad_output)
  File "virtualenv/lib/python3.9/site-packages/shap/explainers/_deep/deep_pytorch.py", line 358, in nonlinear_1d
    grad_output[0] * (delta_out / delta_in).repeat(dup0))
RuntimeError: The size of tensor a (50) must match the size of tensor b (25) at non-singleton dimension 1

Is there anyone who can help?


Solution

  • The original model was something like

    fc1 = nn.Linear(...) 
    fc2 = nn.Linear(...)
    

    and so on, with the layers wired together manually in forward(). Inspired by a discussion on GitHub, I found that rewriting the model with nn.Sequential fixes the issue. After that change, the code posted in the question works without problems.
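To make the refactor concrete, here is a minimal sketch of the before/after shapes of such a model. The layer sizes and class names are hypothetical, since the original post elides the actual dimensions; the point is only that the same layers are wrapped in an nn.Sequential container, which SHAP's DeepExplainer tends to traverse more reliably than a hand-written forward():

```python
import torch
import torch.nn as nn

# Hypothetical sizes; the original post does not give the real dimensions.
IN_FEATURES, HIDDEN, OUT_FEATURES = 8, 16, 2

# Before (sketch): separate layers, wired together manually in forward()
class ModelBefore(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(IN_FEATURES, HIDDEN)
        self.fc2 = nn.Linear(HIDDEN, OUT_FEATURES)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

# After: the same layers collected in an nn.Sequential container
class ModelAfter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IN_FEATURES, HIDDEN),
            nn.ReLU(),
            nn.Linear(HIDDEN, OUT_FEATURES),
        )

    def forward(self, x):
        return self.net(x)

# Quick shape check on a dummy batch
x = torch.randn(4, IN_FEATURES)
out = ModelAfter()(x)
print(out.shape)  # torch.Size([4, 2])
```

In a LightningModule the change is the same: keep the layers, but register them through a single nn.Sequential attribute and call it in forward().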