Release Notes
Acceldata Data Observability Cloud
Version 26.4.0
Date: 11th April 2026
New Features and Enhancements
This section describes the new features and enhancements introduced in this release.
- Pipeline Co-Pilot: ADOC introduces Pipeline Co-Pilot, an AI-powered assistant for investigating data pipeline issues using natural language. It combines pipeline lineage, execution metadata, and anomaly detection to help you identify root causes and understand the downstream impact of issues across complex pipelines. In an upcoming v2 release, Co-Pilot will add interactive pipeline views with anomaly highlighting and deeper context integration.
- dbt Cloud Connector Integration: ADOC now supports dbt Cloud as a generally available data source. The connector automatically discovers dbt Cloud jobs as pipelines and tracks each job execution as a pipeline run. You can onboard specific dbt Cloud projects, view end-to-end lineage across dbt models, snapshots, seeds, sources, and tests, and inspect the compiled SQL query for each resource execution directly within ADOC. For more information, see dbt Cloud.
- SnapLogic Integration — Snowflake Lineage and Nested Pipeline Support: ADOC now supports lineage extraction for Snowflake assets used within SnapLogic pipelines and tracks dependencies across nested SnapLogic pipelines. For asset stitching to function correctly, the hostname configured in your SnapLogic account must match the URL used when registering the data source in ADOC. For more information, see SnapLogic.
- Enhanced Pipeline Detail View: ADOC introduces enhancements to the Pipeline Detail View, improving navigation and visibility in complex pipeline environments. The update adds node search, alert-based filtering, and visual highlighting to quickly identify relevant nodes and issues. It also introduces inline visibility of related pipelines with one-level stitching, enabling you to trace upstream and downstream dependencies without leaving the view. For more information, see Pipeline Run Details.
- Schema-Level Lineage: ADOC now allows you to view consolidated lineage at the schema level. The schema lineage view lists all assets within the schema that have lineage, along with their upstream and downstream sources. A toggle displays the lineage diagram across all assets in the schema, visualizing upstream and downstream connections in a single view.
- Microsoft Teams Notification — Workflow-Based Webhook Support: ADOC now supports Microsoft Teams notifications using the Teams Workflows (Power Automate) integration, replacing the legacy Incoming Webhook connector, which Microsoft deprecated in December 2025. Existing Teams notification configurations must be updated to use a Workflow-generated webhook URL; a verification sketch follows this list. For more information, see Configure Microsoft Teams Webhooks.
- Rule-Level Naming for Data Reliability Policies: ADOC now assigns each rule within a policy a stable, unique name and an optional display name. Rule names remain consistent when a policy is edited, making it easier to track individual rules across executions and policy versions. This applies to Data Quality, Reconciliation, Data Drift, Data Freshness, and Profile Anomaly policies. After upgrading the dataplane to 26.4.0, column labels in execution output records will reflect the new rule names. Teams consuming these records in downstream systems must update their column references after the upgrade. For more information, see Rule Identifiers.
- Rule Sets — Policy Schedule and Notification Configuration: ADOC now allows you to configure a policy execution schedule and notification channels when applying or scheduling a rule set. All policies generated when a rule set is applied inherit the schedule and notification settings specified at the time of application. For more information, see Rules and Rule Sets.
- Cadence and Freshness Policy Support for SingleStore (MemSQL): ADOC now supports Cadence and Freshness policies for SingleStore (MemSQL). You can track data ingestion frequency, monitor data freshness, and detect delays or staleness in SingleStore pipelines. For more information, see Data Freshness Policy.
- Configurable Data Cadence Controls: ADOC now allows you to manage cadence job scheduling at the data source level. You can define custom schedules using cron expressions (see the example after this list), enable or disable cadence monitoring per data source, and update schedules dynamically, with changes reflected immediately in execution orchestration. Previously, cadence jobs ran on a fixed hourly schedule with no user configuration.
- Treat Zero Rows as Success for Data Quality and Reconciliation Policies: ADOC introduces an option to treat zero rows as a successful policy evaluation. When enabled, policies that return zero rows or files are marked as successful instead of failed, reducing false alerts in scenarios where no data is expected.
- Kubernetes Pod Mapping API: ADOC introduces a Kubernetes Pod Mapping API that maps Data Quality and Reconciliation policy executions to their underlying driver and executor pods in real time, enabling faster debugging in Kubernetes environments. Refer to the API documentation for usage details; an illustrative request sketch follows this list.
- Customizable Column Visibility in Tables: ADOC now allows you to show or hide columns in tables across the platform, including Asset Discovery, Policy listing, and other table views. Column selections are retained within the session.
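The following sketch shows one way to verify a Workflow-generated webhook URL before updating a Teams notification channel in ADOC. It assumes your Workflow uses Microsoft's standard "Post to a channel when a webhook request is received" template, which accepts an Adaptive Card payload; the URL and message text are placeholders.

```python
# Minimal sketch: POST a test Adaptive Card to a Teams Workflows webhook.
# Replace WEBHOOK_URL with the URL generated by your Power Automate Workflow.
import requests

WEBHOOK_URL = "https://prod-00.westus.logic.azure.com/workflows/..."  # placeholder

payload = {
    "type": "message",
    "attachments": [
        {
            "contentType": "application/vnd.microsoft.card.adaptive",
            "content": {
                "type": "AdaptiveCard",
                "version": "1.4",
                "body": [
                    {"type": "TextBlock", "text": "ADOC webhook test", "wrap": True}
                ],
            },
        }
    ],
}

response = requests.post(WEBHOOK_URL, json=payload, timeout=10)
response.raise_for_status()  # a 2xx status confirms the Workflow accepted the request
print("Webhook accepted:", response.status_code)
```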
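To sanity-check a cadence schedule before applying it at the data source level, you can preview the run times a cron expression produces. This sketch assumes ADOC accepts standard five-field cron expressions and uses the third-party croniter package purely for local verification.

```python
# Preview the next few run times for a candidate cadence schedule.
# Requires: pip install croniter
from datetime import datetime
from croniter import croniter

schedule = "0 */6 * * *"  # every 6 hours, on the hour
runs = croniter(schedule, datetime(2026, 4, 11, 0, 0))

for _ in range(3):
    print(runs.get_next(datetime))
# 2026-04-11 06:00:00
# 2026-04-11 12:00:00
# 2026-04-11 18:00:00
```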
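The sketch below illustrates the shape of a call to the Kubernetes Pod Mapping API. The endpoint path, query parameter, and response fields are assumptions made for illustration only; consult the ADOC API documentation for the actual contract and authentication scheme.

```python
# Hypothetical sketch of querying the Kubernetes Pod Mapping API.
# Endpoint path, parameter, and response shape are assumed, not documented here.
import requests

ADOC_BASE_URL = "https://your-adoc-instance.example.com"  # placeholder
API_TOKEN = "..."  # placeholder credential

resp = requests.get(
    f"{ADOC_BASE_URL}/api/pod-mappings",       # assumed path
    params={"executionId": "12345"},           # assumed parameter
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
for pod in resp.json().get("pods", []):        # assumed response field
    print(pod)  # e.g., driver/executor pod names to inspect with kubectl
```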
The following features and connectors have transitioned from Preview to General Availability:
- Unified Left-Hand Navigation
- Dataplane V3
- dbt Cloud Connector
- Export and Import Manager
- Service Users
Resolved Issues
This section lists the issues that have been resolved in this release.
- Resolved an issue where policies manually unarchived after an asset became available again did not resume scheduled execution. Policies now execute at their next scheduled time following manual unarchival, consistent with the behavior of newly created policies.
- Resolved an issue where the Last Updated timestamp for Data Quality and Reconciliation policies was incorrectly updated each time a policy was executed. The Last Updated field now reflects only changes made to the policy configuration via the UI or API, and the Last Executed field is updated independently upon policy execution.
- Resolved an issue where Data Quality policies failed for SAP HANA assets when the table name contained a period (.) character. Policies now execute successfully regardless of whether the table name contains a period, for both Spark and Pushdown execution modes.
- Resolved an issue where Data Quality policies and profile scans failed for BigQuery assets containing BIGNUMERIC columns. BIGNUMERIC columns with precision exceeding 38 digits are now automatically cast to FLOAT64 during Spark execution.
Note: For higher precision requirements, use a custom SQL query with an explicit cast (see the example after this list).
- Resolved an issue where the next execution time displayed on the policy scheduling screen was incorrect when the selected timezone differed from the browser's timezone. The execution schedule now reflects the correct next run time based on the selected timezone.
- Resolved an issue where clicking an alert URL redirected SAML-authenticated users to the home screen instead of the intended execution details page. Users authenticating via SAML single sign-on (SSO) are now correctly redirected to the relevant execution details page after login.
- Resolved an issue where the Trino crawler failed to crawl catalogs with hyphens in their names. Metadata discovery now works correctly for Trino catalogs regardless of special characters in the catalog name.
- Resolved an issue where report filters were not visible to users who had view or modify permissions but lacked the create:report permission. Report filters are now correctly displayed based on view and modify permissions, without requiring create:report permission.
- Resolved an issue where the List Data Quality Policies API did not return policies that had no rules configured, resulting in a discrepancy between the policy count shown in the UI and the count returned by the API. The API now returns all non-archived policies, including those without rules.
- Resolved an issue where SQL View creation failed for Iceberg tables. SQL Views can now be created successfully for Iceberg tables.
- Resolved an issue where valid cron expressions configured via API were not displayed correctly on the Policy View and Edit pages. Policy schedules now accurately reflect the configured hour and minute values across all policy types.
- Resolved an issue where Autosys pipeline discovery failed for data sources with a large number of configured jobs due to the API request URL exceeding the allowed length limit. Job requests are now split into smaller batches and aggregated internally, allowing pipeline discovery to complete successfully regardless of the number of jobs configured.
- Resolved an issue where Autosys pipeline synchronization failed entirely when one or more configured jobs no longer existed in Autosys. Missing jobs are now skipped, and synchronization continues for all remaining valid jobs.
- Resolved an issue where Data Quality and profiling jobs for Redshift data sources configured with username and password authentication were incorrectly using IRSA permissions instead, causing execution failures. Jobs now use the authentication method configured on the data source.
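The following sketch illustrates the custom SQL approach mentioned in the BIGNUMERIC note above. The project, dataset, table, and column names are hypothetical placeholders; casting to STRING is one way to carry values whose precision exceeds FLOAT64 through Spark execution without rounding.

```python
# Illustrative custom SQL for an ADOC policy on a BigQuery asset with a
# BIGNUMERIC column. Table and column names are hypothetical placeholders.
# Casting to STRING preserves every digit, sidestepping the automatic
# BIGNUMERIC -> FLOAT64 cast applied during Spark execution.
CUSTOM_SQL = """
SELECT
  id,
  CAST(amount AS STRING) AS amount_str  -- full precision retained
FROM `my_project.my_dataset.transactions`
"""
```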