Version 4.10.0

Date: 6th December 2025

This section describes the new features and enhancements introduced in this release.

  • Pushdown Support for More Data Sources: ADOC expands its Pushdown Data Engine capabilities to support Oracle, MSSQL, Redshift, MySQL, MemSQL, ClickHouse, and Postgres. This enhancement enables users to execute reliability tasks on these data sources using the Pushdown Engine, reducing dependency on Spark and improving performance. For more information, see Pushdown Data Engine.
  • Ad Hoc Execution Support for Scheduled Policies (Including Incremental Policies): ADOC now allows users to manually run Data Quality or Reconciliation policies even if they are already set to run automatically on a schedule. For more information, see Executing a Data Quality Policy.
  • MemSQL Credential File Support for SingleStore: ADOC now allows users to authenticate using a MemSQL credentials file when registering a SingleStore data source. For more information, see SingleStore (memSQL).
  • Service Users for Data Plane: ADOC now supports using dedicated service users for dataplane authentication, eliminating reliance on personal user credentials. This ensures more stable connectivity, secure key management, and seamless credential rotation. For more information, see Configure Data Plane.
  • Redshift Partition Configuration Support for Parallel Execution: ADOC now supports partition configuration for Redshift assets, allowing the system to process large datasets in parallel. This improvement speeds up profiling and data quality checks on large Redshift tables and reduces the risk of job failures caused by slow, single-threaded processing (see the first sketch after this list). For more information, see Amazon Redshift.
  • BigQuery Workload Identity Support for Pushdown: ADOC now supports Workload Identity for BigQuery when using the Pushdown Engine. This enhancement improves security and enables seamless, credential-free authentication for Pushdown operations on BigQuery data sources (see the second sketch after this list). For more information, see Google BigQuery.
  • Failure Notifications for Policy Executions: ADOC now sends real-time notifications when a policy run fails or is aborted, giving users immediate visibility into execution issues without needing to manually check the Jobs page. This improves operational awareness and helps teams respond faster to data quality failures.
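
The Redshift partition configuration corresponds to the standard partitioned-JDBC-read pattern, in which a large table is split on a column and scanned by several concurrent tasks. Below is a minimal PySpark sketch of that general pattern; the URL, credentials, table, and bounds are hypothetical illustrations, not ADOC configuration keys.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("redshift-partitioned-read").getOrCreate()

# Read one large table in parallel by splitting it on a numeric column.
# Spark issues numPartitions concurrent queries, each scanning a stride of
# the partitionColumn range, instead of one slow single-threaded scan.
# Assumes the Redshift JDBC driver is on the classpath.
df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:redshift://example-cluster:5439/dev")  # hypothetical
    .option("driver", "com.amazon.redshift.jdbc42.Driver")
    .option("dbtable", "public.big_table")                      # hypothetical
    .option("user", "adoc_reader")
    .option("password", "...")
    .option("partitionColumn", "order_id")  # numeric, date, or timestamp column
    .option("lowerBound", "1")
    .option("upperBound", "10000000")
    .option("numPartitions", "8")
    .load()
)

print(df.rdd.getNumPartitions())  # 8 parallel read tasks
```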
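
Workload Identity removes the need to distribute service-account key files: the client library resolves Application Default Credentials from the runtime environment. A minimal sketch with Google's standard BigQuery Python client follows; the query is illustrative.

```python
from google.cloud import bigquery

# Under Workload Identity, Application Default Credentials are resolved
# from the environment (for example, the Kubernetes service account bound
# to the pod), so no JSON key file is mounted or passed in.
client = bigquery.Client()

rows = client.query("SELECT 1 AS ok").result()
for row in rows:
    print(row.ok)  # -> 1
```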

This section lists the issues that have been resolved in this release.

  • Resolved an issue where profiling, sample data retrieval, and Data Quality policies failed on Trino tables containing timestamp with time zone columns; the system now properly handles Trino's timestamp data types with timezone information for both Spark and pushdown execution modes (see the first sketch after this list).
  • Addressed an issue where SQL view creation on BigQuery failed to complete; queries would execute successfully on BigQuery, but results were not posted back to the standalone service, preventing users from saving SQL views and requiring workarounds for view-based workflows.
  • Fixed an issue where changes to Bulk Rules were not reflected in policies after updating SQL expressions in rulesets; modifications made to rules during migration or updates now propagate correctly to all associated policies, ensuring data quality validations remain consistent.
  • Resolved an issue where Data Quality and Data Freshness policies failed with "driver container failed with ExitCode: 143" errors; improved container stability and error handling to prevent premature termination during policy execution.
  • Fixed an issue where the Reconciliation results download page failed to populate data, causing downloaded CSV files to contain Java exception errors instead of good and bad records; this occurred despite records being visible in the UI and present in the backend storage, preventing users from exporting reconciliation results for analysis.
  • Addressed an issue where blank notification emails were triggered alongside proper email alerts for Data Freshness policies and Snowflake pushdown Data Quality policies; duplicate notifications with empty content have been eliminated while preserving valid alert emails.
  • Resolved an issue where large Autosys pipeline jobs (specifically BDDMZ_156PDB_CPS001_MB000_ODM_DAILY_LOAD) failed to load on the UI, causing the application to hang with a stuck blue loading bar; users previously had to close and reopen the application to regain functionality.
  • Fixed an issue where Data Freshness policies with Anomaly Detection triggered false positive alerts on the first run immediately after data refresh, impacting policy scores even when data was refreshed within the configured SLA window; the anomaly detection logic now accounts for expected data refresh patterns.
  • Addressed an issue where results.json files for SQL-based Data Quality policies were missing the SQL statement that was executed; this limited debugging capabilities and audit trails for understanding policy execution and results.
  • Resolved an issue where Anomaly strength and model sensitivity threshold settings were not retained during import/export operations of Data Anomaly policies; users had to manually reconfigure these parameters after importing policies.
  • Fixed an issue where users were unable to provide feedback on detected Anomalies through the UI; the feedback mechanism has been restored to enable users to mark detections as true positives or false positives.
  • Addressed an issue where PowerBI assets could not be added to Asset Groups; resolved asset type compatibility to support PowerBI datasets and reports in group management workflows.
  • Resolved an issue where manual and scheduled policies failed to run after changing the dataplane assignment when the original resource strategy was no longer available; improved dataplane migration handling and resource validation during policy execution.
  • Fixed an issue where some Snowflake assets displayed duplicate columns in the asset schema, causing errors during policy processing and preventing successful policy execution; column metadata synchronization has been corrected to eliminate duplicates.
  • Addressed an issue where the Fetch Mapping feature only auto-mapped a partial set of matching columns between source and target tables, failing to identify all columns with identical names; the feature also overwrote previously configured mappings on each fetch, requiring users to manually remap columns repeatedly.
  • Resolved an issue where the new Import Configuration feature introduced in version 4.8 failed specifically for Databricks assets; the import functionality now properly handles Databricks datasource types and configurations.
  • Fixed an issue where Azure MSSQL crawling failed with SSL handshake errors (Failed to validate the server name in a certificate during SSL initialization) when connecting to Azure SQL servers configured with private link endpoints; SSL certificate validation has been updated to support private endpoint connections.
  • Addressed an issue where Lookup Data Quality policies incorrectly marked valid values as bad records when SQL filters were applied on Databricks datasources; the lookup validation logic now properly evaluates reference data against filtered datasets.
  • Fixed an issue where exact distinct count calculation did not work on Snowflake pushdown despite enabling the 'Use exact distinct count calculation' toggle in Asset Settings; the system was generating approximate count queries instead of exact count queries, affecting data quality measurement accuracy (see the second sketch after this list).
  • Resolved a regression issue where UDT-Validations (User-Defined Type Validations) pulled asset UDT-variables at policy authoring time instead of at run-time; this caused policies to use stale or incorrect variable values during execution, leading to validation errors.
  • Fixed an issue where validation of UDF-Validation rules (User-Defined Function Validations) failed in SQL-based policies running in Spark mode because the validation rules could not access projected columns from the SQL query; column visibility for UDF validations has been corrected.
  • Addressed an issue where the ADOC UI failed to list database schemas that contained only views without any tables; users previously had to add a dummy table to make view-only schemas visible, preventing proper view cataloging and governance.
  • Resolved an issue where reliability reports failed to generate and deliver to recipients due to Selenium Chrome driver session errors; concurrent session management in the report generation service has been improved to prevent conflicts.
  • Fixed an issue where policies did not execute on their configured schedule after being enabled through the UI; scheduler synchronization has been corrected to ensure UI-enabled policies trigger as expected.
  • Addressed an issue where test connection failed for Databricks datasources using Service Principal authentication configured for Serverless Cluster compute; authentication handling now properly supports Service Principal credentials with serverless environments.
  • Resolved an issue where S3 Freshness policies remained stuck in "Waiting" status for more than 24 hours even though the underlying job execution had failed; status transition logic has been corrected to properly reflect failed executions and prevent indefinite waiting states.
  • Fixed an issue where Data Freshness policies with configured SLAs were being incorrectly evaluated, resulting in inaccurate policy scores and false SLA breach notifications being sent to users; SLA evaluation logic for freshness policies has been improved for accuracy.
  • Addressed an issue where freshness policies configured on S3 assets failed incorrectly, marking data as stale when it was actually fresh; this has been consolidated with related freshness policy corrections.
  • Fixed an issue where new columns were not automatically detected for ADLS datasources when the "Evolving Schema" option was enabled in datasource configuration; schema changes during crawling now properly trigger column discovery and updates.
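
Regarding the Trino timestamp fix above: the affected type is Trino's timestamp with time zone. A minimal sketch that reads such a value with the standard trino Python client, where the host and session details are hypothetical:

```python
import trino

conn = trino.dbapi.connect(
    host="trino.example.com",  # hypothetical coordinator
    port=8080,
    user="adoc",
    catalog="hive",
    schema="default",
)
cur = conn.cursor()
# Columns of this type previously failed profiling and sample-data reads.
cur.execute(
    "SELECT CAST('2025-12-06 10:00:00 UTC' AS timestamp with time zone)"
)
print(cur.fetchone())
```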
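
Regarding the Snowflake exact distinct count fix above: the two query shapes differ as in this sketch, which uses the standard snowflake-connector-python client; the connection parameters, table, and column are hypothetical.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="myaccount",   # hypothetical
    user="adoc_reader",
    password="...",
    warehouse="ANALYTICS_WH",
    database="SALES",
    schema="PUBLIC",
)
cur = conn.cursor()

# Approximate (HyperLogLog-based): what the engine was generating
# despite the toggle being enabled.
cur.execute("SELECT APPROX_COUNT_DISTINCT(customer_id) FROM orders")
print("approx:", cur.fetchone()[0])

# Exact: what 'Use exact distinct count calculation' is meant to produce.
cur.execute("SELECT COUNT(DISTINCT customer_id) FROM orders")
print("exact:", cur.fetchone()[0])
```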