Troubleshooting ODP
This section helps you resolve common issues related to component installation and configuration in ODP.
Configure /tmp
Some components require /tmp to be mounted as exec, but compliance requirements may necessitate mounting it as noexec. In that case, follow the steps below on every node to create an alternative tmp directory:
- Create an alternative tmp directory:
sudo mkdir -p /hadoop/tmp
sudo chown root:hadoop /hadoop/tmp
sudo chmod 1770 /hadoop/tmp
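Mode 1770 sets the sticky bit on a group-writable directory, so members of the hadoop group can create files but remove only their own. A minimal sketch of verifying the mode (demonstrated on a scratch directory, since creating /hadoop/tmp itself requires root):

```shell
# Demonstrate the 1770 permission pattern on a scratch directory;
# on cluster nodes the real target is /hadoop/tmp.
d=$(mktemp -d)
chmod 1770 "$d"
mode=$(stat -c '%a' "$d")   # GNU stat: octal mode including the sticky bit
echo "$mode"                # prints 1770
rm -rf "$d"
```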
- Update the Ambari configuration for the respective services:
- For HBase, YARN-HBase (Timeline Reader), and Ambari Metrics (AMS): in Ambari, update the yarn-hbase-env (YARN), ams-hbase-env (AMS), and hbase-env (HBase) configurations by adding:
export HBASE_OPTS="$HBASE_OPTS -Dorg.apache.hbase.thirdparty.io.netty.native.workdir=/hadoop/tmp"
- For Knox: directly update gateway.sh and knoxcli.sh in the file system on all Knox hosts by appending:
-Djava.io.tmpdir=/hadoop/tmp -D*jna*.tmpdir=/hadoop/tmp
- For HDFS/YARN/MR: override HADOOP_OPTS and append:
-Djava.io.tmpdir=/hadoop/tmp
The above adjustments ensure compatibility and functionality despite the noexec mount constraint.
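To confirm on a node whether /tmp is actually mounted noexec before applying the overrides, you can inspect /proc/mounts; a small sketch (Linux only):

```shell
# Report whether /tmp is a separate mount carrying the noexec option.
tmp_opts=$(awk '$2 == "/tmp" {print $4}' /proc/mounts)
case ",$tmp_opts," in
  *,noexec,*) echo "/tmp is mounted noexec; services should use /hadoop/tmp" ;;
  *)          echo "/tmp allows exec (or is not a separate mount)" ;;
esac
```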
Impala
- Update or add the following configurations for a smooth Impala installation:
Pre-installation > In impala-env, set the following if not already set:
is_coordinator = true
is_executor = true
Post-installation > Add the following custom properties in the Hadoop core-site:
hadoop.proxyuser.impala.groups=*
hadoop.proxyuser.impala.hosts=*
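If you want to check the resulting file directly, the two custom properties above correspond to this core-site.xml fragment (a sketch; values exactly as listed above):

```xml
<!-- Allow the impala user to impersonate users from any group and any host -->
<property>
  <name>hadoop.proxyuser.impala.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.impala.hosts</name>
  <value>*</value>
</property>
```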
- If the Impala service fails to register Ranger authorization:
E0524 21:09:47.710661 104717 catalog.cc:87] InternalException: Unable to instantiate authorization provider: org.apache.impala.authorization.ranger.RangerAuthorizationFactory
CAUSED BY: InvocationTargetException: null
CAUSED BY: IllegalArgumentException: bound must be positive
Ensure Ranger is enabled in the Impala configs. If the issue still persists, the current workaround requires handlers to manage and update the Hive clients and Ranger policies:
- Create the directory /etc/ranger/<cluster_name>_hive/policycache and copy the respective files from HiveServer2 to the Impala components.
- Copy the files /etc/hive/3.2.2.0-2/0/ranger-hive-audit.xml and ranger-hive-security.xml from HiveServer2 to the Impala components.
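The workaround can be sketched as follows. "hs2-host", the cluster name, and the /etc/hive/conf destination on the Impala side are placeholders (adjust them to your environment); a scratch root stands in for / so the sketch is safe to dry-run:

```shell
# Recreate the Hive policy-cache directory as done on the Impala hosts.
# On a real node use ROOT=/ and CLUSTER=<your cluster name>; scratch values here.
ROOT=${ROOT:-$(mktemp -d)}
CLUSTER=${CLUSTER:-mycluster}
mkdir -p "$ROOT/etc/ranger/${CLUSTER}_hive/policycache"
echo "created: $ROOT/etc/ranger/${CLUSTER}_hive/policycache"
# On a real Impala node, then pull the files from the HiveServer2 host, e.g.:
#   scp hs2-host:/etc/ranger/${CLUSTER}_hive/policycache/* /etc/ranger/${CLUSTER}_hive/policycache/
#   scp hs2-host:/etc/hive/3.2.2.0-2/0/ranger-hive-audit.xml \
#       hs2-host:/etc/hive/3.2.2.0-2/0/ranger-hive-security.xml /etc/hive/conf/
```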
Ranger
- If the KMS Ranger policy fails to be created, perform the following steps:
- Remove the empty KMS policy folder from the CLI.
- Restart Ranger from the Ambari UI.
- Restart Ranger KMS from the Ambari UI.
- If service repo creation fails because the rangerlookup user is missing, create the rangerlookup user manually from the Ranger UI.
Spark2
During the installation of Spark2, if the user interface displays "installing Livy3" instead of "Livy2", this is a typographical error in the UI and can be disregarded.
MySQL 8.x
- If service setup or start fails with the error "Host 'host' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'", follow these steps:
# Unblock the host as the error suggests, then raise the connection-error threshold
mysqladmin -u root -p flush-hosts
# In the MySQL shell
SET GLOBAL max_connect_errors=100000;
- If the MySQL connection fails with the error "SQLException: SQL state: 08001 java.sql.SQLNonTransientConnectionException: Public Key Retrieval is not allowed ErrorCode: 0", this is caused by the change of the default authentication plugin from mysql_native_password to caching_sha2_password in MySQL 8.0. See MySQL 8.0 Reference Manual :: 3.5 Changes in MySQL 8.0.
To fix this issue, add ?allowPublicKeyRetrieval=true&useSSL=false to the MySQL JDBC connection string. For example: "jdbc:mysql://hostname:3306/ranger?allowPublicKeyRetrieval=true&useSSL=false".
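A tiny helper sketch for appending these parameters to an existing JDBC URL, using '?' or '&' depending on whether the URL already carries parameters (the function name is illustrative):

```shell
# Append the MySQL 8 public-key-retrieval/SSL parameters to a JDBC URL.
append_jdbc_params() {
  case "$1" in
    *\?*) printf '%s&allowPublicKeyRetrieval=true&useSSL=false\n' "$1" ;;
    *)    printf '%s?allowPublicKeyRetrieval=true&useSSL=false\n' "$1" ;;
  esac
}
append_jdbc_params "jdbc:mysql://hostname:3306/ranger"
# prints jdbc:mysql://hostname:3306/ranger?allowPublicKeyRetrieval=true&useSSL=false
```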
Ozone
- The Ozone service check fails when Ranger is enabled. To resolve this issue, add a new Ranger policy granting the ambari-qa user permissions to operate on the ambarismokevolume volume.
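Such a policy could be sketched as Ranger policy JSON; the service repo name "mycluster_ozone", the Ranger endpoint, and the exact set of access types are assumptions to verify against your Ranger instance:

```shell
# Write a candidate Ranger policy granting ambari-qa access to the smoke-test volume.
cat > /tmp/ozone-smoke-policy.json <<'EOF'
{
  "service": "mycluster_ozone",
  "name": "ambari-qa smoke test",
  "resources": {
    "volume": {"values": ["ambarismokevolume"]},
    "bucket": {"values": ["*"]},
    "key":    {"values": ["*"]}
  },
  "policyItems": [{
    "users": ["ambari-qa"],
    "accesses": [
      {"type": "read",   "isAllowed": true},
      {"type": "write",  "isAllowed": true},
      {"type": "create", "isAllowed": true},
      {"type": "list",   "isAllowed": true}
    ]
  }]
}
EOF
# On a cluster node, POST it via Ranger's public v2 REST API (endpoint assumed):
#   curl -u admin -X POST -H 'Content-Type: application/json' \
#     http://ranger-host:6080/service/public/v2/api/policy \
#     -d @/tmp/ozone-smoke-policy.json
```

Alternatively, create the equivalent policy manually in the Ranger admin UI under the Ozone service repo.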

If Ozone Manager start fails with "process cannot set priority":
- Check Ozone Manager Java Heap size and re-configure the value according to the available heap memory through the Ambari UI.
- Restart the Ozone Manager service after making these changes.
If queries fail with the following error due to impersonation restrictions when using end-user permissions in Hive: "org.apache.hadoop.security.authorize.AuthorizationException: User: hive is not allowed to impersonate ..."
Resolve this issue by following these steps:
- Access the Ambari UI.
- Navigate to Ozone > Configurations.
- Locate Custom Core-site configurations.
- Add the following settings:
- hadoop.proxyuser.hive.groups=*
- hadoop.proxyuser.hive.hosts=*
- hadoop.proxyuser.hive.users=*
- Restart the affected services after applying these configurations.
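For reference, the three custom properties above map to this core-site.xml fragment (a sketch; values exactly as listed in the steps):

```xml
<!-- Allow the hive user to impersonate any group, host, and user -->
<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.users</name>
  <value>*</value>
</property>
```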