Tags: azure, azure-data-factory, azure-databricks

Is there a way for an Azure Data Factory pipeline step to automatically retry if that step fails?


By "step" I mean each individual activity, such as a copy, a filter, a column mapping, or a sink. I'm having a hard time figuring this out because I only know how to re-run the whole pipeline, not the individual steps inside it. Re-running each step would help me identify intermittent errors.


Solution

  • You can use Activity Policies, which are available only for execution activities (data movement and data transformation activities).

    The policy property includes timeout and retry behavior.


    For more information, see the official Microsoft documentation on Activity policy.
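As a sketch, the policy is set per activity in the pipeline's JSON definition. The example below shows a Copy activity with a retry policy; the dataset names and interval values are placeholders for illustration:

```json
{
    "name": "CopySourceToSink",
    "type": "Copy",
    "policy": {
        "timeout": "0.02:00:00",
        "retry": 3,
        "retryIntervalInSeconds": 30,
        "secureInput": false,
        "secureOutput": false
    },
    "inputs": [ { "referenceName": "SourceDataset", "type": "DatasetReference" } ],
    "outputs": [ { "referenceName": "SinkDataset", "type": "DatasetReference" } ],
    "typeProperties": {
        "source": { "type": "DelimitedTextSource" },
        "sink": { "type": "DelimitedTextSink" }
    }
}
```

With this policy, the activity is retried up to 3 times, waiting 30 seconds between attempts, before the run is marked as failed; in the portal UI the same settings appear on the activity's General tab (Retry and Retry interval).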