Manage Pipelines

In ADOC, pipeline management lets you do more than observe pipeline runs. It also lets you compare current performance to historical behavior, define monitoring policies for failures and SLA breaches, and trigger automated data reliability actions when key pipeline events occur. These controls help keep pipelines reliable and efficient.

Configure a Comparison Baseline

A comparison baseline helps you evaluate new runs against past performance. ADOC compares each new run to the baseline to detect meaningful changes, such as increased execution time. The baseline you configure also affects how pipeline-level aggregate metrics are displayed in ADOC. For example, the percentage change shown next to average run time is calculated by comparing the most recent results to the baseline you set.

To configure the baseline:

  1. In the Compared to average of last field, enter the number of recent runs to include in the baseline calculation, such as 10, 50, or 100.
  2. Select Only include successful runs to prevent failed or cancelled runs from skewing the average. This is the recommended setting for a stable performance baseline.

Example: If you set the baseline to the last 10 successful runs, ADOC compares the latest run against that average when calculating the percentage change in run time shown on dashboards.
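The baseline arithmetic described above can be sketched as follows. This is an illustrative model only: the run records and field names (status, duration_s) are assumptions, not the actual ADOC data model.

```python
# Hypothetical sketch of the baseline comparison: average the last n
# successful runs, then compute the percentage change for the latest run.

def baseline_average(runs, n, successful_only=True):
    """Average duration of the last n runs (optionally successful only)."""
    pool = [r for r in runs if r["status"] == "SUCCESS"] if successful_only else runs
    window = pool[-n:]  # most recent n qualifying runs
    return sum(r["duration_s"] for r in window) / len(window)

def percent_change(latest_duration_s, baseline_s):
    """Signed percentage change of the latest run versus the baseline."""
    return (latest_duration_s - baseline_s) / baseline_s * 100

runs = [
    {"status": "SUCCESS", "duration_s": 100},
    {"status": "FAILED",  "duration_s": 400},  # excluded from the baseline
    {"status": "SUCCESS", "duration_s": 110},
    {"status": "SUCCESS", "duration_s": 120},
]
base = baseline_average(runs, n=10)         # (100 + 110 + 120) / 3 = 110
print(round(percent_change(132, base), 1))  # a 132 s run is 20.0% slower
```

Note how excluding the failed 400-second run keeps the baseline representative of normal performance, which is why Only include successful runs is the recommended setting.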

Configure a Monitoring Policy

Monitoring policies help enforce pipeline SLAs and trigger alerts when performance or reliability thresholds are breached. The monitoring policy view has two main areas: Pipeline Execution Failure Settings and the Monitoring Policies List.

Pipeline Execution Failure Settings

This is a default alert that triggers when a pipeline fails for any reason not already covered by a more specific rule.

To configure it:

  1. Click Edit.
  2. Review the Alert Name, which is system-generated.
  3. Set the Severity level.
  4. Select the Notification Channels.
  5. Click Save.

Use case: If you want immediate notification whenever a critical daily pipeline fails, configure this default failure alert to notify the appropriate team channel.

Monitoring Policies List

The Monitoring Policies List displays all custom monitoring policies, grouped by the entity type they apply to:

  • Pipeline
  • Job
  • Span
  • Event

Expand each entity to view the specific rules, along with their severity and conditions. To manage an existing rule, expand the relevant entity, click the ellipsis icon, and edit or delete the rule.

Create a New Monitoring Policy

To create a new monitoring policy:

  1. Click Add Monitoring Policy. The Define Metrics screen opens.
  2. Select the monitoring level: Pipeline, Job, Span, or Event.
  3. Add one or more rules as needed. Use Add Another Metric to include multiple rules in a single policy.
  4. Use the Enabled switch to turn each rule on or off.

The available configuration fields are dynamic and change based on the selected Metric Type and Comparison Method.

For time-based or numerical metrics, ADOC supports two comparison methods:

  • User Threshold: Triggers an alert when a fixed value that you define is crossed, such as duration greater than a specific time
  • Previous Executions: Triggers an alert by comparing the current run to a historical baseline, where you define both the baseline period and the threshold for deviation
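The difference between the two comparison methods can be sketched as below. The function names and evaluation logic are assumptions for illustration, not ADOC internals.

```python
# Illustrative-only sketch of the two comparison methods.

def breaches_user_threshold(value, threshold):
    """User Threshold: alert when a fixed, user-defined limit is crossed."""
    return value > threshold

def breaches_previous_executions(value, history, max_deviation_pct):
    """Previous Executions: alert when the current value deviates from
    the historical baseline by more than the allowed percentage."""
    baseline = sum(history) / len(history)
    deviation_pct = (value - baseline) / baseline * 100
    return deviation_pct > max_deviation_pct

# A 700 s run against a fixed 600 s limit, and against a ~500 s
# historical baseline with a 30% allowed deviation:
print(breaches_user_threshold(700, 600))                       # True
print(breaches_previous_executions(700, [480, 500, 520], 30))  # 40% > 30% -> True
```

User Threshold suits hard SLA limits; Previous Executions suits pipelines whose normal run time drifts over time, because the baseline moves with recent history.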

For each rule, you also set:

  • Severity: Critical, High, Medium, or Low
  • Notification Channel: email, Slack, PagerDuty, Microsoft Teams, or webhooks

Example: Time-Based Rule for Pipeline Duration

When you create a rule for Pipeline Duration, configure the following:

  1. Select Pipeline Duration as the metric.
  2. Choose User Threshold or Previous Executions as the comparison method.
  3. Define the threshold condition, such as Greater Than 00:10:00.
  4. Set the alert severity.
  5. Select the notification channel.

Use case: If a pipeline is expected to complete within a business SLA, create a duration-based rule so the team is alerted when execution time exceeds the allowed limit.
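The Greater Than 00:10:00 condition from the steps above amounts to a simple duration check. The parsing helper here is a hypothetical sketch; ADOC evaluates the condition internally.

```python
# Sketch of a duration rule with a fixed HH:MM:SS threshold (assumption:
# this mirrors, but is not, the actual evaluation code).

def hms_to_seconds(hms):
    """Convert an HH:MM:SS string to seconds."""
    h, m, s = (int(part) for part in hms.split(":"))
    return h * 3600 + m * 60 + s

def duration_rule_breached(run_seconds, threshold_hms="00:10:00"):
    """Greater Than comparison against the configured threshold."""
    return run_seconds > hms_to_seconds(threshold_hms)

print(duration_rule_breached(540))  # 9 min run  -> False
print(duration_rule_breached(660))  # 11 min run -> True, alert fires
```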

Example: Event Metadata Rule

For Event-level monitoring, you can:

  1. Optionally associate the event with a specific span.
  2. Select the metadata key to evaluate.
  3. Define the alert logic, such as Equals, Not Equals, or In.
  4. Set the alert severity.
  5. Select the notification channel.
  6. Save the policy.

This type of rule is useful when you want to monitor a specific value reported in event metadata rather than a time-based or numerical metric.
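The alert logic for an event metadata rule can be sketched as a lookup over the three operators. The operator names mirror the UI; the metadata keys and values are illustrative assumptions.

```python
# Hedged sketch of evaluating an event-metadata rule.

OPERATORS = {
    "Equals":     lambda actual, expected: actual == expected,
    "Not Equals": lambda actual, expected: actual != expected,
    "In":         lambda actual, expected: actual in expected,
}

def metadata_rule_matches(event_metadata, key, operator, expected):
    """Return True when the rule fires for the given event metadata."""
    return OPERATORS[operator](event_metadata.get(key), expected)

metadata = {"load_status": "PARTIAL", "region": "us-east-1"}
print(metadata_rule_matches(metadata, "load_status", "Equals", "PARTIAL"))        # True
print(metadata_rule_matches(metadata, "region", "In", {"eu-west-1", "eu-central-1"}))  # False
```

For example, a rule on the hypothetical load_status key with Equals "PARTIAL" would alert whenever a pipeline event reports an incomplete load, even if the run itself finished successfully.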

Automate Data Reliability

You can automate data quality checks, profiling, or reconciliation policies that are triggered by pipeline, job, or span outcomes. This helps ensure that data reliability tasks run at important points in the data lifecycle. The main view also displays all existing automations and their status, including type, targeted data asset, and triggering pipeline event.

Create a New Automation

To create an automation:

  1. Click Add Automation. The Setup Data Reliability Automation form opens.

  2. Configure the trigger:

    • Select the trigger entity: Pipeline, Job, or Span
    • Select the trigger status: Success, Aborted, or Failed
    • Select the specific pipeline, job, or span name to monitor
  3. Define the action:

    • Profiling: Runs a data profiling job on the selected asset
    • Data Quality: Executes predefined data quality rules
    • Reconciliation: Performs a reconciliation check between two datasets
  4. Select the data asset.

  5. Choose the execution type:

    • Full
    • Incremental
  6. Click Save.
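The trigger-to-action mapping configured in the steps above can be sketched as follows. The field names follow the form fields; the pipeline name, data asset, and matching function are hypothetical illustrations, not ADOC's stored representation.

```python
# Minimal sketch of an automation and its trigger match (assumptions:
# "daily_sales_load" and "warehouse.sales_fact" are made-up names).

automation = {
    "trigger": {"entity": "Pipeline", "status": "Failed", "name": "daily_sales_load"},
    "action": "Data Quality",        # or "Profiling" / "Reconciliation"
    "data_asset": "warehouse.sales_fact",
    "execution_type": "Incremental",  # or "Full"
}

def should_fire(automation, event):
    """Fire only when the event matches the configured trigger exactly."""
    t = automation["trigger"]
    return (event["entity"] == t["entity"]
            and event["status"] == t["status"]
            and event["name"] == t["name"])

event = {"entity": "Pipeline", "status": "Failed", "name": "daily_sales_load"}
print(should_fire(automation, event))  # True: run the Data Quality action
```

Matching on all three trigger fields ensures the action runs only for the exact pipeline, job, or span outcome you configured, not for every event in the system.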

Manage Existing Automations

To edit or delete an existing automation:

  1. Go to the Automated Data Reliability list.
  2. Use the Edit or Delete icons on the right side of the row.

What’s Next

To investigate a run that triggered an alert, see Pipeline Run Details. To configure monitoring policies that apply across multiple pipelines from the Pipelines page, see Bulk Monitoring Policy.
