I'd like to use SageMaker Clarify explainability in combination with HPO (hyperparameter tuning). I saw the SageMaker example using `experiments.Run` ([docs link][1]), but I could not figure out how to adapt it for an HPO task. Thank you all for any help.
Short docs example using XGBoost:
from sagemaker.experiments.run import Run

with Run(
    experiment_name=experiment_name,
    run_name="combined-report",
    sagemaker_session=sagemaker_session,
) as run:
    # model training and explainability report created in the same experiment run
    xgb.fit({"train": train_input}, logs=False)
    clarify_processor.run_explainability(
        data_config=explainability_data_config,
        model_config=model_config,
        explainability_config=shap_config,
    )
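For context, that example assumes the usual Clarify setup from the linked notebook. A minimal sketch of those pieces, with hypothetical S3 paths, column names, and baseline (none of these values come from the docs):

from sagemaker import clarify

clarify_processor = clarify.SageMakerClarifyProcessor(
    role=role,  # your SageMaker execution role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=sagemaker_session,
)

explainability_data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/train.csv",   # hypothetical input path
    s3_output_path="s3://my-bucket/clarify-output",  # hypothetical output path
    label="target",          # hypothetical label column
    headers=feature_names,   # hypothetical list of column names
    dataset_type="text/csv",
)

model_config = clarify.ModelConfig(
    model_name=model_name,  # name of an already-created SageMaker model
    instance_type="ml.m5.xlarge",
    instance_count=1,
    accept_type="text/csv",
)

shap_config = clarify.SHAPConfig(
    baseline=[baseline_row],  # hypothetical baseline record
    num_samples=100,
    agg_method="mean_abs",
)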
Instead, I would like to use HPO:

clarify_processor.run_explainability(params)  # combine it with the HPO below

optimizer = sagemaker.tuner.HyperparameterTuner(
    container,
    hyperparameter_ranges=hp_ranges,
    strategy="Random",
    objective_type="Maximize",
    objective_metric_name="val:auc",
    metric_definitions=metric_definitions,
    max_jobs=10,
    max_parallel_jobs=2,
)
optimizer.fit(data_channels, wait=True)
[1]: https://sagemaker-examples.readthedocs.io/en/latest/sagemaker-experiments/sagemaker_clarify_integration/tracking_bias_explainability.html
I found the `experiment_config` parameter of `run_explainability` in the docs. It lets me attach the explainability result to the main HPO job: I run HPO first, then pass the finished tuning job's name to `run_explainability`. In short:
clarify_processor.run_explainability(
    ...,
    ...,
    experiment_config={"ExperimentName": optimizer.latest_tuning_job.name},
)
Note: this attaches the explainability report to the main tuning job, not to each individual HPO training job.
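Putting it together, here is a minimal sketch of the full flow. The model-registration step and names like `role` and `model_name` are my assumptions, not from the linked example; `best_estimator()` returns the estimator attached to the tuner's best training job:

# Run HPO first, then hand the best model to Clarify.
optimizer.fit(data_channels, wait=True)

# Re-create a SageMaker model from the tuner's best training job so
# Clarify can spin up a shadow endpoint for it.
best_model = optimizer.best_estimator().create_model()
container_def = best_model.prepare_container_def(instance_type="ml.m5.xlarge")
model_name = "hpo-best-model"  # hypothetical model name
sagemaker_session.create_model(model_name, role, container_def)

model_config = clarify.ModelConfig(
    model_name=model_name,
    instance_type="ml.m5.xlarge",
    instance_count=1,
    accept_type="text/csv",
)

clarify_processor.run_explainability(
    data_config=explainability_data_config,
    model_config=model_config,
    explainability_config=shap_config,
    # attach the report to the tuning job instead of a separate run
    experiment_config={"ExperimentName": optimizer.latest_tuning_job.name},
)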