Release Notes 4.3.X
ADOC V4.3.1
Date: 15 May, 2025
This section lists the issues that have been resolved in this release.
Data Reliability
- Resolved an issue where the child node displayed as null when the Autosys Pipeline was in an activated state.
- Resolved the issue of jobs getting stuck in the running state due to a conditional job API exception.
- Fixed the UI issue where not all job nodes were displayed for nested box type pipelines.
- Improved the reliability score display in the asset search feature. The score now accurately reflects the asset's actual reliability, ensuring you have the correct information at your fingertips.
- Resolved an issue that prevented users from editing manual tags. You can now easily update your tags as needed for better organization.
- Renamed Overall Reliability to Data Reliability Score on the asset overview page for consistent nomenclature across the platform.
- Resolved an issue where some transaction logs were not displaying for queries executed on Google BigQuery data sources, leading to incomplete query lineage. Users can now view the complete history of all queries associated with Google BigQuery data sources for improved tracking and analysis.
- Resolved an issue that caused errors when viewing specific pipeline runs after the 4.3 upgrade. Users can now access pipeline run details without encountering any errors.
- Fixed an issue in Schema-Drift/Freshness Recommendations where recommendations were not being generated for a few data sources in a tenant.
- Resolved an issue where column-level lineage was not displayed when viewing sub-level lineage. Users can now easily interact with all columns for improved navigation and usability.
- Improved the Crawler API to ensure that when users apply the assets filter in the start crawling request, only the specified assets are crawled successfully. This enhancement provides a more focused and efficient crawling experience.
- Resolved an issue where importing policies would fail without any error message if the zip file contained unknown or invalid JSON. Users will now receive a clear error message in the Import Policies pop-up window if the import job encounters unknown or invalid JSON, ensuring a smoother troubleshooting experience.
- Fixed an issue in Recon Policies where the fetch mappings API frequently timed out for tables with a large number of columns, causing delays and UI timeouts. Fetch mapping in the create flow is now performed on demand.
- Fixed an issue where the UI showed multiple SQL rules as validating, even though only one rule can be validated at a time, with the next rule being validated only after the current one finishes.
- Fixed an issue in DQ Policies where SQL rule validation failed for BigQuery partitioned tables due to missing partition filters on the partition column (see the example after this list).
- Fixed an issue where APIs did not enforce domain permissions correctly, allowing unauthorized access to domain-level resources.
- Fixed an issue in Pipeline Alerts where start and end times were displayed in epoch format instead of a human-readable format.
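To illustrate the partition-filter requirement behind the BigQuery fix above: BigQuery rejects queries against tables created with the require_partition_filter option unless they filter on the partition column. The table and column names below are hypothetical.

```sql
-- Fails on a table that requires a partition filter:
-- "Cannot query over table ... without a filter over column(s) ..."
SELECT COUNT(*) FROM analytics.page_views;

-- Succeeds: the WHERE clause filters on the partition column (event_date).
SELECT COUNT(*)
FROM analytics.page_views
WHERE event_date >= '2025-05-01';
```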
UI/UX
- Fixed an issue on the homepage where the selector overlapped with the dashboard title, causing text to appear incorrectly.
- Resolved an issue where tags created after the definition of a custom dashboard did not appear in the dashboard's filter list.
ADOC V4.3.0
Date: 28 April, 2025
This section consists of the new features and enhancements introduced in this release.
Data Reliability
- Atlan Integration Support: ADOC now allows users to integrate with Atlan's metadata framework, seamlessly pushing data quality scores and policy information for enhanced data governance and proactive quality optimization. For more information, see Atlan.
- Native Incremental Processing Support for Custom SQL Queries: ADOC now supports WHERE clauses with {{{lower_bound}}} and {{{upper_bound}}} placeholders in SQL queries for Data Quality policies. During incremental or selective runs, these are replaced with the specified offset or timestamp values (see the example after this list). For more information, see Create Data Quality Policy.
- Option to Disable Auto-Classification Tags: Users can now disable auto-classification tags from the Tags page, streamlining tag filters across Manage Policies and Discover Assets views. For more information, see Tags.
- Enhanced Tags Page UI for Improved Usability: ADOC has refined the Tags listing page to improve clarity and user experience by standardizing data formats, ensuring consistent action visibility, and organizing key details into separate columns. For more information, see Tags.
- Support for Non-English Language Data in Reliability Policies: ADOC now supports profiling and data quality checks on non-English language columns (e.g., Japanese, Latin), enabling accurate monitoring of multilingual datasets.
- Centralized Spark Reader Configuration and Query Construction: Reader option management and query logic are now centralized in the control plane, reducing reliance on the data plane. This enhances flexibility, enables dynamic query construction, and simplifies support for diverse data sources. By default, upgrading to ADOC v4.3.0 will also upgrade the data plane to support this architecture change.
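As a minimal sketch of the placeholder syntax described in the Native Incremental Processing item above (the table and column names are hypothetical), a custom SQL query for a Data Quality policy might look like this; during an incremental or selective run, ADOC substitutes the tracked offset or timestamp values for the placeholders:

```sql
-- Hypothetical incremental DQ query; sales.orders and updated_at are illustrative names.
SELECT order_id, order_amount, updated_at
FROM sales.orders
WHERE updated_at BETWEEN {{{lower_bound}}} AND {{{upper_bound}}}
```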
Compute
- Support for Fetching Databricks Cost from System Tables for Azure Databricks Data Source: You can now fetch Databricks cost data from system tables for Azure Databricks data sources, in addition to the existing API-based approach. This allows you to choose the preferred method—system table or API—for retrieving cost information during onboarding. For more information, see Cost Retrieval via System Table Method.
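For context, Databricks exposes billing data through system tables such as system.billing.usage. A query of roughly the following shape returns per-SKU DBU consumption; this is an illustrative sketch, not necessarily the query ADOC issues:

```sql
-- Illustrative cost query against the Databricks system billing table.
SELECT
  usage_date,
  sku_name,
  SUM(usage_quantity) AS total_dbus
FROM system.billing.usage
GROUP BY usage_date, sku_name
ORDER BY usage_date;
```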
UI/UX
- Filter Policy Summary Dashboard by Tags or Assets: The Policy Summary Dashboard template now supports filtering by Tags or Assets, enabling users to view graphs and insights specific to selected categories for more focused analysis. This enhancement mirrors the filtering capabilities of the Data Reliability Dashboard, allowing users to set context-relevant views and streamline their workflows. By making it easier to locate relevant policy information, this feature improves usability and boosts efficiency in monitoring and analysis tasks. For more information, see Create a Dashboard.
This section lists the issues that have been resolved in this release.
Data Reliability
- Fixed an issue where numeric columns (e.g., INT, FLOAT, DECIMAL) were not appearing in the Metric Check configuration dropdown. Users can now select all valid numeric types when setting up Sum, Avg, Min, or Max checks in the Data Quality module.
- Fixed the issue where Schedule details were not being displayed while viewing the details of a reconciliation policy.
- Resolved an issue where Data Quality policies using the SUM metric check with relative decrease conditions (e.g., drop by 0 in the last run) did not correctly calculate policy scores.
- Fixed the issue where the policy score chart data was inaccurate relative to the selected date-time stamp.
- Fixed an issue where pipeline alerts appeared to be false positives due to unclear messaging. The alert description now shows the actual metric value instead of only the drift, providing accurate context for threshold breaches.
- Resolved an issue that allowed users to run profiles and to create and execute Data Quality policies on archived datasets. These actions are now blocked, and users receive an appropriate error message.
- Fixed the issue where the Policy Name and Description text boxes on the Composite Policy page were too small and non-expandable, making it difficult to enter details. Also resolved an error that occurred when renaming and saving composite policies, allowing successful updates without unknown errors.
- Fixed the issue where backward lineage for Power BI assets incorrectly displayed a connection to an unrelated Acceldata asset. The backward lineage now accurately reflects only the actual source assets from which the Power BI data is derived.
- Resolved the issue where SAP HANA data sources were incorrectly represented by the Oracle icon across multiple areas, including Discover Assets, Add Policy, and Resource Group setup. SAP HANA data sources now display the appropriate icon consistently throughout the platform.
- Resolved an issue where importing Data Quality policies caused Anomaly-Based rules to be incorrectly recognized as Absolute-Based Rowcheck rules, leading to misconfigured policies and missing anomaly detection checks. The import process now correctly retains the original rule types, ensuring accurate policy configuration.
- Addressed the gap where the policy evaluation strategy—whether weightage-based or rule-based—was not indicated in the policy execution UI or captured in the results.json file. The evaluation strategy and rule thresholds are now clearly displayed and recorded, providing full visibility into how policy outcomes are determined during execution.
- Removed the duplicate Add Policy button on the Asset/Policies page, ensuring only a single action button is displayed for a cleaner and more intuitive user experience.
- Resolved the issue where the Create SQL View and Query Lineage options were not visible for some users. These permissions are now correctly linked to the Tenant Role → Reliability → Data Reliability Settings, allowing admins to enable or disable these options by modifying the relevant settings.
- Resolved the issue where the initial timestamp for profiling jobs was not updating correctly when using the incremental strategy with a datetime column. The profiler job now correctly picks up the start datetime provided on the profiler incremental strategy page, instead of defaulting to the last successful run's offset.
- Resolved the issue where the execution details page attached to Data Quality policy alert notifications was missing key information, including the policy name, execution summary, and quality summary. The notification now includes all relevant details, such as rule success rates, which were previously visible on the policy summary page.
- Resolved the issue where policy imports were failing, even though the data platform version, underlying assets, schema, and database were the same. The import process now completes successfully without errors.
- Fixed the issue where freshness policy recommendations were incorrectly shown for views, even though freshness policies cannot be created on them. The recommendations page now correctly excludes options to enable freshness policies on views.
- Updated the sorting behavior for sub-assets in the Navigator panel on the Discover Assets page. Sub-level assets (schemas, databases, tables) are now listed in ascending (ASC) alphanumeric order, rather than the previous descending (DESC) order.
- Resolved the issue where the Data Freshness policy displayed as "Not Yet Executed" despite being enabled for a Snowflake asset and not showing the Data Freshness score. The policy now correctly displays execution details and the freshness score after configuration.
- Fixed the issue where, with RBAM enabled, the Navigator window failed to expand properly when a domain/resource group contained only a subset of the data source assets. Users can now view and expand permitted data sources in the Navigator panel on the Discover Assets page, without encountering errors related to missing permissions for root objects.
- Resolved the issue where asset lineage was not showing the correct auto-derived relationships to source tables/views. The lineage now correctly displays connections between the asset and its source, even when the source schema has already been crawled.
- Resolved the issue where Data Cadence metrics were not populating correctly for certain data sources. Metrics for row counts and size are now correctly displayed for sources with regularly updated data.
- Fixed the issue where crawling an S3 bucket with no files caused the crawler to fail. The crawler now provides feedback that the source contains no data, rather than failing outright, ensuring smoother processing when encountering empty buckets.
- Resolved UI issues in the Visual View creation process, including misalignment of the view name input box, lack of a scrollbar for assets with more than 20 columns, and difficulties in selecting join columns for assets with large column sets due to the absence of search/sort functionality. The input box text is now left-aligned, and the scrollbar is properly displayed within the pop-up frame for large assets.
Compute
- Fixed the issue where the warehouse filter was not being applied in Query Studio after selecting a warehouse in the Snowflake Cost → Compute → Warehouse Insights flow. Now, both the warehouse and query type filters are automatically applied in Query Studio as expected.
UI/UX
- Resolved an issue where applying the 'Add Filters' condition for Snowflake Database and Warehouse resulted in a NIL value being displayed in dashboards.
Common Services
- Fixed an issue where pipeline alert emails contained invalid links that led to 404 errors on ADOC. This occurred specifically in notifications triggered by pipeline duration breaches.
This section lists the known limitations in this release.
Oracle's TIMESTAMP WITH LOCAL TIME ZONE data type is not supported in Spark and may result in errors such as: org.apache.spark.SparkSQLException: Unrecognized SQL type -102
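A common workaround (an assumption, not an ADOC-documented fix) is to cast the column to a plain TIMESTAMP in a SQL view or custom query so that the Spark JDBC reader receives a supported type; the schema, table, and column names below are hypothetical:

```sql
-- Cast the unsupported column so Spark sees a plain TIMESTAMP.
SELECT CAST(event_time AS TIMESTAMP) AS event_time_ts
FROM my_schema.my_events;
```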
Native Incremental Processing Support for Custom SQL Queries Limitations:
- This feature only supports SQL-based policies. It does not apply to alias-based filtering (commonly used in Spark type configurations).
- The placeholders for bounds must strictly be {{{lower_bound}}} and {{{upper_bound}}}. Custom placeholder names are not supported.
- When using a custom SQL view with incremental execution, one additional row may be included due to the inclusive nature of the BETWEEN clause (see the example after this list).
- Full policy execution is currently not supported for custom SQL queries that use bound placeholders ({{{lower_bound}}}, {{{upper_bound}}}).
- Values entered for SQL validation during policy creation are not stored and are used only for testing the query.
- In Selective execution mode, the bounds defined in the incremental configuration window are passed to the SQL query.
- Queries using NOT BETWEEN conditions will not work with incremental execution. For example: SELECT * FROM table WHERE date_column NOT BETWEEN {{{lower_bound}}} AND {{{upper_bound}}}
- This feature is not compatible with older Dataplane versions. Customers must upgrade to Dataplane 4.3.0 or later to use native incremental SQL queries.
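To make the boundary behavior noted above concrete, consider this sketch with illustrative bound values substituted for the placeholders; because BETWEEN is inclusive on both ends, a row whose timestamp equals the upper bound is included in this window and can be picked up again when that value becomes the lower bound of the next window:

```sql
-- Both boundary rows match because BETWEEN is inclusive on both ends.
SELECT *
FROM events
WHERE created_at BETWEEN '2025-05-01 00:00:00'   -- {{{lower_bound}}}
                     AND '2025-05-02 00:00:00';  -- {{{upper_bound}}}
```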
In the Freshness Trend chart on the Data Cadence tab, the upper bound, lower bound, and change-in-row-count metrics are available only for time filters such as Today, Yesterday, and hourly time frames.
Anomaly detection is not supported for nested columns of an asset.
When the refresh token expires for the ServiceNow OAuth integration, the status remains Configured instead of changing to Expired.
Dataproc is only supported on GCP as the cloud provider for data plane installation.
The Glossary window fails to load data due to Dataproc API failures.
The Smart tag feature is currently available only for Kubernetes-driven Spark deployments.
When a 2FA credential is activated, the credential does not work.
User-specific usage data for a specific day may not be accurately displayed or retrieved.
Issue with GCP test connections.
Profiling in BigQuery fails for tables when the partition filter is enabled.
DQ policies sometimes fail because columns whose names contain spaces or special characters cannot be verified.
Unable to pick a reference asset on a Data Policy template when defining a Lookup Rule.
The data lineage view for job runs is not visible on Databricks' Job Run Details page.
If all values in a column of a primitive string or complex data type are null, profiling still runs, but the data type is reported as unknown.
Not all failed events are captured on the Audit Logs page.
When performing Incremental Data Quality (DQ) checks with Datetime2 and DateOffset formats in Synapse, if no new records are added, the system processes the last record instead of skipping processing.
This section lists important links to help you get started with ADOC.