Release Notes 4.1.X
ADOC V4.1.1
Date: 12 March, 2025
This section describes the new features and enhancements introduced in this release.
Data Reliability
- Column-Level Insights in Discover Assets: The Discover Assets page now includes column-level insights for a more detailed analysis of data quality. Selecting a table from the Navigator panel displays a column-level breakdown in the main panel, showing key policy details such as column names, average policy scores, and the number of applied rules. For more information, see Column-Level Insights.
- Refined Column Selection Placement: The common column selection option has been removed from the top section of the DQ Policy configuration. Instead, it is now available within the SQL Filter, UDT, SQL Rule, UDF, and Persistence Path sections, ensuring consistency with the pre-KPI changes. For more information, see Data Quality Policy.
- Support for 100% Threshold in DQ Policies: Users can now set both the lower and upper thresholds to 100% while configuring rules in a Data Quality (DQ) policy. This enhancement ensures that even the slightest deviation (e.g., 99.99%) results in rule failure, allowing users to enforce stricter validation criteria for critical policies (illustrated in the second sketch after this list).
- EKS Pod Identity-Based Authentication: EKS Pod Identity-based authentication is now available, allowing seamless and secure authentication for workloads running on Amazon EKS. This enhances security by eliminating the need for static credentials, enabling workloads to authenticate using IAM roles assigned to Kubernetes pods. For more information, see EKS Pod Identity-Based Authentication.
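To illustrate what Pod Identity-based authentication means in practice, here is a minimal sketch: once an EKS Pod Identity association links the pod's service account to an IAM role, the AWS SDK's default credential chain obtains temporary credentials with no keys in code or configuration. The bucket name is a placeholder, not anything from ADOC.

```python
import boto3

# No static credentials anywhere: inside an EKS pod with a Pod Identity
# association, boto3's default credential chain automatically obtains
# temporary credentials for the IAM role assigned to the pod.
s3 = boto3.client("s3")

# Placeholder bucket name for illustration.
response = s3.list_objects_v2(Bucket="example-data-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"])
```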
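The strict 100% threshold behavior described above can be pictured with a small sketch. The function below is illustrative only, not the ADOC implementation; it simply shows why a 99.99% pass rate fails once both bounds are set to 100%.

```python
def rule_passes(pass_rate: float, lower: float, upper: float) -> bool:
    # A rule passes only when the measured pass rate falls within [lower, upper].
    return lower <= pass_rate <= upper

print(rule_passes(100.0, lower=100.0, upper=100.0))  # True: an exact 100% passes
print(rule_passes(99.99, lower=100.0, upper=100.0))  # False: the slightest deviation fails
```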
This section lists the issues that have been resolved in this release.
- Fixed the issue where users were prompted multiple times for the same input token when using an SQL Template to import an SQL Rule in a Data Quality (DQ) policy. Now, users are only prompted once per unique input token, ensuring a smoother and more intuitive rule configuration experience.
- Fixed the issue where User-Defined Functions (UDFs) failed when passing a value instead of a column in Data Quality (DQ) policies. The issue has been resolved, and UDFs now correctly handle fixed input values without misinterpreting them as columns, ensuring successful policy execution.
- Fixed the issue where the Quality Score always displayed as 0 when segmentation was used with Custom SQL, even when the number of failed rows was lower than the number of passed rows. Now, the Quality Score is accurately calculated based on passed and failed rows, ensuring consistent results regardless of segmentation and Custom SQL usage.
- Fixed the issue where enabling Keep empty and null values as good records did not work correctly in pushdown mode, causing these values to be treated as bad records. Now, empty and null values are correctly classified as good records in pushdown mode, ensuring consistent behavior across execution methods.
- Fixed the issue where users without the View Complete Data permission could still see value frequency and pattern samples in the column profile details. Additionally, users with View Complete Data but without View Protected Data permissions could access value frequency and pattern samples for protected data. Now, data visibility correctly adheres to assigned permissions, ensuring restricted data remains inaccessible as intended.
- Fixed the issue where alerts sent to Teams or Slack channels displayed start and end times in epoch format, requiring users to manually convert timestamps. Now, alerts use a human-readable date-time format (e.g., YYYY-MM-DD HH:mm:ss), improving readability and streamlining alert consumption.
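Conceptually, the fix converts the epoch value into a formatted timestamp before it reaches the channel message. A generic sketch of that conversion follows; the millisecond input and UTC output are assumptions for illustration, not the exact ADOC behavior.

```python
from datetime import datetime, timezone

def to_readable(epoch_ms: int) -> str:
    # Convert an epoch timestamp in milliseconds to 'YYYY-MM-DD HH:mm:ss' in UTC.
    return datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

print(to_readable(1741780800000))  # 2025-03-12 12:00:00
```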
ADOC V4.1.0
Date: 27 February, 2025
This section describes the new features and enhancements introduced in this release.
Data Reliability
- Enhanced Lineage Visualizations UI: The lineage page UI has been enhanced for a more interactive experience. Users can now search for specific columns at the lineage level, while hovering over a column highlights its entire lineage flow for better traceability. Additionally, table colors have been updated for improved readability. For more information, see Asset Lineage.
- Automated Data Lineage for Tableau: ADOC now supports automated data lineage for Tableau, enabling seamless tracking and visualization of data flow from external sources to Tableau dashboards. This feature ensures greater accuracy, reliability, and transparency in analytics. For more information, see Data Lineage for Tableau.
- Anomaly Detection in Data Quality Policy: Users can now set anomaly detection sensitivity to low, medium, or high for each rule when creating a Data Quality policy, allowing for more precise monitoring and control. For more information, see Create Data Quality Policy.
- Option to Disable Warning Notifications for Data Quality Alerts: Users can now disable warning notifications independently when configuring alerts in Data Quality policies, ensuring notifications are only received when necessary. For more information, see Create Data Quality Policy.
- Configurable Annotation Support for Crawler Pods: Crawler pods now receive annotations set via the KUBERNETES_POD_ANNOTATIONS config variable, ensuring consistent configuration and preventing failures (see the sketch after this list).
- Switched from Klaxon to Jackson for JSON Processing: ADOC now uses Jackson instead of Klaxon for JSON processing, improving performance, memory efficiency, and serialization and deserialization throughput. This change ensures better scalability and processing speed.
- Upgraded Ktor for Performance and Security: ADOC has upgraded Ktor to version 2.3.7, bringing improvements in performance, security, and backend stability for a smoother user experience.
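The value format of KUBERNETES_POD_ANNOTATIONS is not spelled out here, so the sketch below assumes a JSON map of annotation keys to values and shows how a launcher could parse the variable and attach the annotations to a pod's metadata. All names in the snippet are illustrative, not ADOC internals.

```python
import json
import os

# Assumed format (an assumption, not documented behavior), e.g.:
#   KUBERNETES_POD_ANNOTATIONS='{"team": "data-platform", "sidecar.istio.io/inject": "false"}'
annotations = json.loads(os.environ.get("KUBERNETES_POD_ANNOTATIONS", "{}"))

# Illustrative pod manifest fragment; a real launcher would merge these
# annotations into the pod spec before submitting it to Kubernetes.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "crawler-pod", "annotations": annotations},
}
print(json.dumps(pod, indent=2))
```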
Compute
- Data Ingestion Monitoring Enhancements: A new Data Ingestion section has been introduced, featuring enhanced widgets and dashboards to improve visibility into Snowflake ingestion workflows using Snowpipe. Previously scattered under the Costs and Performance section, Snowpipe monitoring is now consolidated, allowing platform admins to track data loads, detect failures, and optimize resource usage for pipeline reliability. For more information, see Snowflake Data Ingestion.
- Optimized Query Studio Performance: The Query Studio view has been refined by adding and removing specific columns to enhance usability. A new Snowflake Cost field now displays query costs attributed to Snowflake, alongside key metrics like query hash and parameterized hash, improving visibility into query execution costs. For more information, see Snowflake Query Studio.
- SQL Warehouses Insights for Databricks: A new SQL Warehouses section has been added under Compute, providing enhanced visibility into costs, queries, tables, and warehouse details for Databricks data sources. This allows users to monitor and optimize SQL warehouse usage effectively. For more information, see Databricks SQL Warehouses.
- More Accurate Snowflake Storage Cost Calculations: Storage cost savings will no longer be estimated for Iceberg tables in Snowflake’s Unused Tables recommendations, as they do not incur storage costs. This ensures more accurate cost insights for platform admins managing Snowflake environments.
UI/UX
- Filter Widgets by Tags and Type in Widget Library: ADOC has enhanced the widget selection experience by introducing filters for tags and widget types in the Widget Library. Users can now easily refine their search and find the right widgets faster when creating dashboards. For more information, see Dashboard.
- Refined Widget Descriptions and Enhanced Metadata: Finding the right widget is now easier with streamlined descriptions and enhanced metadata in the Widget Library. Each widget now includes clearer descriptions and additional tags, allowing for better categorization and more precise filtering, making dashboard creation more efficient. For more information, see Dashboard.
- Pre-Configured Dashboard Templates for Quick Setup: Users can now select from pre-built dashboard templates when creating a new dashboard, making it faster to get started. Additionally, dashboard templates now include thumbnails, improving visibility and selection. For more information, see Dashboard.
- Enhanced Policy Execution Widgets for Better Visibility: The Policy Execution widget has been expanded to include additional execution summary widgets for Data Drift, Data Freshness, Data Quality, Profile Anomaly, Reconciliation, and Schema Drift policies. These new widgets provide a clearer breakdown of execution metrics at the tenant level, improving monitoring and analysis of policy performance. For more information, see Dashboard.
- White-Labeled Reliability Reports for Seamless Branding: Enterprises can now customize the emailed Reliability Reports by replacing the default Acceldata logo with their own. This enhancement ensures brand consistency, improves stakeholder trust, and eliminates confusion by aligning reports with the recipient company’s identity. For more information, see Report Configuration.
This section lists the issues that have been resolved in this release.
Data Reliability
- Fixed the issue where the quality score incorrectly showed 0 when using segmentation with Custom SQL.
- Fixed the issue that prevented users from adding descriptions to a discovered asset due to the description input box being disabled.
- Fixed the issue where the get_policy() method in the Acceldata Python SDK did not include the Dynamic SQL Filter and its mapping in the response. The method now correctly returns the filter, sparkFilterSelectedColumns, and sparkSQLDynamicFilterVariableMapping fields when defined in the policy (see the sketch at the end of this list).
- Fixed the issue where, in some cases, the source and sink assets were interchanged in the exported JSON for reconciliation policies. The export now correctly maintains the intended source and target asset order.
- Fixed the issue where selecting a 0% threshold for a Data Drift policy resulted in an error. Users can now set a 0% threshold to ensure no data drift occurs.
- Fixed the issue that prevented users from successfully configuring an Alation data source.
- Fixed the issue where the Databricks Service Principal Client Secret was displayed in plain text during data source creation and editing. The client secret is now masked as a password field and securely stored like other data source credentials.
- Fixed the issue where, in Ruleset Rules, leaving the upper or lower limit blank defaulted to 0, causing unintended behavior. Now, blank limits persist correctly, similar to DQ Policy Rules, ensuring proper range validation.
- Fixed bugs in incremental file processing via SQS for AWS S3 data sources.
- Fixed an issue where the execution score and status for Data Quality policies were incorrect.
- Fixed the issue where changing the timestamp filter in data reliability reports did not update the results in the Policies tab.
- Resolved an issue where copying and pasting a policy name into the Manage Policies table search bar did not return any results due to an unintended leading space.
- Resolved the issue where Data Freshness policies were not being executed.
- Resolved an issue where the Total Records Processed metric showed 0 for Freshness, Anomaly, and Schema Drift policies, causing ambiguity. It now displays 'N/A' to indicate that this metric is not applicable to these policies.
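To picture the get_policy() fix above: after the change, a policy that defines a dynamic SQL filter carries the three filter-related fields in its response. The structure below is a hypothetical example response, not the actual SDK schema; only the three field names come from the fix itself.

```python
# Hypothetical shape of a get_policy() response after the fix; only the
# three filter-related field names are taken from the release note.
policy = {
    "id": 1234,
    "name": "orders_dq_policy",
    "filter": "order_date >= '${start_date}'",
    "sparkFilterSelectedColumns": ["order_date"],
    "sparkSQLDynamicFilterVariableMapping": {"start_date": "2025-01-01"},
}

for field in ("filter", "sparkFilterSelectedColumns", "sparkSQLDynamicFilterVariableMapping"):
    print(field, "->", policy.get(field))
```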
Compute
- Fixed an issue where loading the Top 50 Most Expensive Queries chart either took an excessive amount of time or resulted in a 500 API error. The chart now loads faster and displays data as expected.
- Service principal credentials are now encrypted in the backend and masked in the UI for improved security.
UI/UX
- Fixed an issue where the search bar in Data Reliability Reports did not return results when searching by asset name. Users can now search and filter reports based on asset names as expected.
This section lists the known limitations in this release.
- In the Freshness Trend chart on the Data Cadence tab, the upper bound, lower bound, and change in row count metrics are available only for the Today and Yesterday time filters and for hourly time frames.
- Anomaly detection is not supported for nested columns of an asset.
- When the refresh token expires for the ServiceNow OAuth integration, the status remains Configured instead of changing to Expired.
- Dataproc is only supported on GCP as the cloud provider for data plane installation.
- The Glossary window fails to load data due to Dataproc API failures.
- The Smart tag feature is currently available only for Kubernetes-driven Spark deployments.
- When 2FA is activated for a credential, the credential does not work.
- User-specific usage data for a specific day may not be accurately displayed or retrieved.
- There is a known issue with GCP test connections.
- Profiling in BigQuery fails for tables when the partition filter is enabled.
- DQ policies sometimes fail due to an inability to verify column names that contain spaces or special characters.
- Unable to pick a reference asset on a Data Policy template when defining a Lookup Rule.
- The data lineage view for job runs is not visible on Databricks' Job Run Details page.
- If all values are null for primitive string and complex data types, profiling will occur, but the data type will be unknown.
- Not all failed events are captured on the Audit Logs page.
- When performing Incremental Data Quality (DQ) checks with Datetime2 and DateOffset formats in Synapse, if no new records are added, the system processes the last record instead of skipping processing.