Manage Data Plane
After you have successfully installed and deployed your Data Plane, you can manage, monitor, and maintain it using several key features available in the ADOC UI and CLI.
1. View Data Plane List
The Data Plane List View shows all registered Data Planes in your ADOC environment in a tabular format. Each row includes details such as the Data Plane’s name, current status (e.g., Installing or Running), number of associated data sources, cloud provider (AWS, GCP, or Azure), deployment region, description, installation URL, version, and last updated timestamp. Available actions include Delete, View Logs, and Upgrade Pipeline.
2. View Data Plane Logs
To view logs for a Data Plane:
- Navigate to the Data Plane list view.
- Click the vertical ellipsis (⋮) next to the Data Plane.
- Select View Logs.
Available Log Types
Crawler Logs: For data sources like Snowflake or Databricks, crawler logs are generated by the analysis service on the Data Plane. To view:
- Navigate to the relevant data source.
- Click the ⋮ icon, then select View Crawler Logs.
- Monitor the output in real-time to diagnose pipeline issues.
Node Details: Kubernetes collects details about each node's CPU, memory, storage, OS, and usage.
Ensure that the Kubernetes monitoring service is running and that you have appropriate RBAC permissions to view this data.
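Node-level usage can also be pulled from the command line with `kubectl top nodes` (this requires metrics-server and the same RBAC permissions). As a rough illustration, the small Python sketch below parses that command's tabular output; the node names and thresholds are made up for the example:

```python
import subprocess

def get_node_usage(raw=None):
    """Parse `kubectl top nodes` output into a list of per-node usage dicts.

    If `raw` is None, shell out to kubectl (requires metrics-server and
    RBAC permission to read node metrics).
    """
    if raw is None:
        raw = subprocess.check_output(["kubectl", "top", "nodes"], text=True)
    nodes = []
    for line in raw.strip().splitlines()[1:]:  # skip the header row
        name, cpu, cpu_pct, mem, mem_pct = line.split()
        nodes.append({
            "name": name,
            "cpu": cpu,             # e.g. "250m"
            "cpu_pct": cpu_pct,     # e.g. "12%"
            "memory": mem,          # e.g. "1024Mi"
            "memory_pct": mem_pct,  # e.g. "35%"
        })
    return nodes

# Example with captured output (avoids needing a live cluster);
# node names here are hypothetical:
sample = """NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
dp-node-1       250m         12%    1024Mi          35%
dp-node-2       900m         45%    3072Mi          80%"""
for node in get_node_usage(sample):
    if int(node["memory_pct"].rstrip("%")) > 75:
        print(f"{node['name']} is under memory pressure")
# prints: dp-node-2 is under memory pressure
```

A parser like this is handy for wiring node metrics into your own alerting; in an interactive session, `kubectl top nodes` alone is usually enough.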
3. Delete a Data Plane
You can delete a Data Plane from the ADOC UI. This action only removes the Data Plane from the ADOC Control Plane — the actual cloud resources must be manually deleted from your cloud provider.
To delete a Data Plane:
- In the left navigation, click Register, then select the Data Planes tab.
- Find the Data Plane you want to delete and click the delete icon next to it.
- In the confirmation pop-up, click Confirm to proceed with the deletion.
Next Step: Manually delete the associated infrastructure on your cloud platform to complete the cleanup.
For AWS:
- Delete the Load Balancer
- Delete the CloudFormation Stack
- Remove the Data Plane YAML and any related files from your S3 bucket
For Azure:
- Delete resources from the Resource Group
- Remove the Data Plane YAML and related files from the Azure Storage container
4. Data Plane Health Monitor
Starting with ADOC v3.0, you can access a powerful Data Plane Health Monitor that gives real-time visibility into the operational health of your deployed Data Planes. When you select Data Plane Observability in the ADOC interface, a new window opens showing a live dashboard with key insights into:
- Spark History Server activity
- Client analysis performance
- Job activity and resource usage
These dashboards help you monitor how well your Data Plane is running, identify slowdowns or failures quickly, and ensure resources are being used efficiently.
For a complete walkthrough of the features and what each metric means, refer to the Data Plane Health Monitor documentation.
5. Data Plane Configuration Utility (Switch Data Plane)
The Data Plane Configuration Utility allows you to reassign a data source from one Data Plane to another — ideal for disaster recovery or planned maintenance. This supports Business Continuity Planning (BCP) by enabling fast, automated transitions between primary and backup Data Planes without manual reconfiguration.
Key Benefits
- Minimizes downtime with quick failover
- Reduces manual effort: no need to hand-edit JDBC URLs or configuration files
- Maintains continuity of data observability across Data Planes
5.1 How It Works
The utility takes a JSON input file that defines:
- The data source
- Current and target Data Plane
- Updated JDBC/config details
Sample Input JSON
    {
      "datasources": {
        "<dataSourceName>": {
          "dataplane_name_from": "<from_dataplane>",
          "dataplane_name_to": "<to_dataplane>",
          "config": {
            "connectionConfig": {
              "properties": [
                {"key": "jdbc.url", "value": "<new_jdbc_url>"},
                {"key": "jdbc.user", "value": "<username>"},
                {"key": "jdbc.password", "value": "<password>"}
              ]
            },
            "dataObservabilityConfig": {
              "properties": [
                {"key": "data.freshness.monitoring", "value": true},
                {"key": "schema.drift.monitoring", "value": true}
              ]
            }
          }
        }
      }
    }

Tip: Validate the JSON file before running the utility to avoid runtime errors.
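One way to validate the file is a short pre-flight script. The sketch below checks JSON syntax and the per-data-source fields shown in the sample above; the `validate_input` helper is illustrative, not part of the utility itself:

```python
import json

# Fields every entry under "datasources" is expected to carry,
# per the sample input JSON above.
REQUIRED_FIELDS = ("dataplane_name_from", "dataplane_name_to", "config")

def validate_input(path):
    """Fail fast on malformed JSON or missing per-data-source fields."""
    with open(path) as f:
        doc = json.load(f)  # raises json.JSONDecodeError on a syntax error
    problems = []
    if "datasources" not in doc:
        problems.append("top-level 'datasources' key is missing")
    for name, entry in doc.get("datasources", {}).items():
        for field in REQUIRED_FIELDS:
            if field not in entry:
                problems.append(f"{name}: missing '{field}'")
    return problems
```

Run it before invoking the switch script and only proceed when the returned list of problems is empty. Even simpler, `python -m json.tool input.json` catches pure syntax errors.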
5.2 Execute the Switch Script
Run the following command:
    python datasource_dataplane_switch.py <url> <access_key> <secret_key> <input.json path> <continue_on_fail> <connection_check>

Example:

    python datasource_dataplane_switch.py https://example.url.com/ ABCD1234 XYZ7890 /path/to/input.json true true

5.3 Logs and Rollback
The utility logs a success or failure message for each data source.
If a switch fails, an automatic rollback is triggered to preserve data integrity, and the data source remains tied to its previous Data Plane.
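The `continue_on_fail` argument controls whether the utility stops at the first failed data source or keeps going. The control flow can be sketched roughly as below; this is illustrative pseudologic, not the utility's actual source, and `switch_one` and `rollback` are hypothetical stand-ins:

```python
def run_switch(datasources, continue_on_fail, switch_one, rollback):
    """Illustrative control flow: per-source switch, rollback on failure."""
    results = {}
    for name, entry in datasources.items():
        try:
            switch_one(name, entry)   # attempt the reassignment
            results[name] = "switched"
        except Exception as exc:
            rollback(name, entry)     # source stays on the previous Data Plane
            results[name] = f"rolled back ({exc})"
            if not continue_on_fail:
                break                 # stop at the first failure
    return results
```

With `continue_on_fail` set to true, one bad entry does not block the remaining data sources; with it set to false, processing halts at the first failure.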
5.4 Example Configuration for Azure MSSQL
    "connectionConfig": {
      "properties": [
        {"key": "mssql.url", "value": ""},
        {"key": "mssql.user", "value": ""},
        {"key": "mssql.use.msi", "value": false},
        {"key": "use.service.principals", "value": false},
        {"key": "enable.secret.manager", "value": true},
        {"key": "secret.configuration.name", "value": "SECRETS_MANAGER"},
        {"key": "secret.key", "value": "secretKey"}
      ]
    },
    "dataObservabilityConfig": {
      "properties": [
        {"key": "mssql.db.0", "value": "dbName"},
        {"key": "query.analysis.service", "value": false},
        {"key": "timeZone", "value": "UTC"},
        {"key": "schema.drift.monitoring", "value": false},
        {"key": "slots.enabled", "value": false}
      ]
    }

You can follow a similar format for:
- Kafka
- HDFS
- MongoDB
- BigQuery
- Hive
- GCS
Best Practices
What to Do After Installing a Data Plane: A Maintenance & Operations Guide
Once your Data Plane is installed and running, the focus shifts to ongoing operations and lifecycle management. This involves monitoring health, resolving issues, upgrading safely, and preparing for failover or disaster recovery. Here’s what you need to know:
- Understand & Monitor Data Plane Health
Why?
A healthy Data Plane is critical to ensure your pipelines run smoothly, data sources are reachable, and observability metrics flow without disruption.
What to Do: Use the Data Plane Health Dashboard (Data Plane Observability) to track:
- Spark History Server metrics (if Spark is enabled)
- Resource usage: CPU, memory, disk
- Analysis client performance
- Running jobs, failures, retries
- Enable Kubernetes monitoring and ensure you have permissions to view node-level metrics.
- Set up alerting (external to ADOC if needed) for threshold breaches (e.g., memory pressure, node failures).
- Tackle Issues and Troubleshoot Effectively
Why?
Failures in crawlers, pipelines, or the environment can affect data freshness, schema drift detection, or ingestion.
What to Do:
Use View Logs from the Data Plane list for:
- General DP activity logs
- Analysis service and crawler logs (per data source)
- If Spark on Kubernetes is used, collect logs from the Spark job UI via the History Server.
Check:
- If pods are restarting or crashing
- If nodes are under-provisioned or unschedulable
- If config changes (JDBC, secret keys) are out of sync
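For the pod-level checks, `kubectl get pods -n <namespace> -o json` exposes per-container restart counts. The sketch below flags pods restarting repeatedly; the namespace, pod names, and threshold are illustrative:

```python
import json
import subprocess

def flag_restarting_pods(pods_json=None, threshold=3, namespace="default"):
    """Return (pod name, restart count) pairs above `threshold` restarts.

    With pods_json=None this shells out to kubectl, which needs cluster
    access; pass a parsed `kubectl get pods -o json` document to run offline.
    """
    if pods_json is None:
        out = subprocess.check_output(
            ["kubectl", "get", "pods", "-n", namespace, "-o", "json"],
            text=True)
        pods_json = json.loads(out)
    flagged = []
    for pod in pods_json.get("items", []):
        restarts = sum(cs.get("restartCount", 0)
                       for cs in pod["status"].get("containerStatuses", []))
        if restarts > threshold:
            flagged.append((pod["metadata"]["name"], restarts))
    return flagged
```

Anything this flags is a candidate for `kubectl describe pod` and a look at the container logs before digging into Data Plane configuration.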
- Test and Validate After Install
Why?
Verifying that the installation succeeded avoids surprises during production use.
What to Do:
Add a test data source (or existing non-critical one) to validate:
- Successful crawler run
- Logs are generated
- Schema and freshness checks are working
- Confirm that analysis jobs run without manual intervention
- Monitor the DP from the Health Dashboard for a few cycles before moving to production load
- Upgrade Data Plane Components
Why?
To get the latest performance, security, and functionality improvements.
What to Do:
- Use the Upgrade Pipeline action from the UI
- Validate your existing data sources after upgrade — especially connectors, JDBC configs, and secret integrations
- Backup your YAMLs and config files before triggering upgrades
- Ensure your team is aware of the version compatibility matrix (e.g., DP version vs ADOC version)
- Business Continuity Planning (BCP)
Why?
In case of region failure, misconfiguration, or security issues, you need a quick fallback.
What to Do:
- Set up a secondary (standby) Data Plane in a different region or cloud provider
- Use the Data Plane Configuration Utility (aka Switch Data Plane) to:
- Reassign data sources from one DP to another
- Swap JDBC/config and observability settings automatically
- Test the failover at least once per quarter
- Keep both primary and backup DPs in sync in terms of version and configuration
- Housekeeping and Maintenance Tips
Keep in mind:
- Periodically clean up old, unused Data Planes
- Always remove cloud resources manually after deleting a Data Plane in the UI
- Rotate credentials used in the Data Plane configuration regularly
- Review storage utilization for logs and output datasets (especially if using cloud buckets or volumes)
- Maintain an internal runbook or checklist for DP lifecycle activities (install, test, maintain, upgrade, delete)
