Known Limitations
This section lists the known limitations that we are aware of in this release.
Oozie
Issue description: SSL certificates with multiple SAN entries are not supported on 3.3.6.2-1.
The Oozie service fails to start if the keystore contains such a certificate.
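To confirm whether a keystore actually contains multiple certificates (the condition that triggers this failure), you can count the entries in its `keytool -list` output. The following is a minimal sketch; the keystore path and password shown in the usage comment are placeholders for your cluster's values, and `count_entries` is a hypothetical helper, not part of ODP:

```shell
# Hedged sketch: count certificate entries in a keystore listing.
# Pipe in the output of `keytool -list`; each key or trusted-certificate
# entry appears on its own line containing PrivateKeyEntry or trustedCertEntry.
count_entries() {
  grep -cE "PrivateKeyEntry|trustedCertEntry"
}

# Usage on the Oozie host (placeholder path and password):
#   keytool -list -keystore /etc/security/keystores/oozie.jks -storepass <password> | count_entries
```

A result greater than 1 indicates the multi-certificate condition described above.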
```
Could not start EmbeddedOozieServer! Error message:
KeyStores with multiple certificates are not supported on the base class
org.eclipse.jetty.util.ssl.SslContextFactory.
(Use org.eclipse.jetty.util.ssl.SslContextFactory$Server or
org.eclipse.jetty.util.ssl.SslContextFactory$Client instead)
```
Workaround
```
# Step 1: Take a backup of the existing jar
cp /usr/odp/3.3.6.2-1/oozie/embedded-oozie-server/oozie-server-5.2.1.3.3.6.2-1.jar \
   /tmp/oozie-server-5.2.1.3.3.6.2-1.jar.bak

# Step 2: Download the patched jar
curl -o /usr/odp/3.3.6.2-1/oozie/embedded-oozie-server/oozie-server-5.2.1.3.3.6.2-1.jar \
   https://ad-odp.s3.us-west-1.amazonaws.com/ODP_Patches/3.3.6.2-1/oozie/oozie-server-5.2.1.3.3.6.2-1.jar

# Step 3: Verify ownership and permissions (optional but recommended)
chown oozie:hadoop /usr/odp/3.3.6.2-1/oozie/embedded-oozie-server/oozie-server-5.2.1.3.3.6.2-1.jar
chmod 644 /usr/odp/3.3.6.2-1/oozie/embedded-oozie-server/oozie-server-5.2.1.3.3.6.2-1.jar
```
Hive
Issue description: Hive Compaction Failure
Hive compaction jobs fail due to a version conflict with the protobuf-java library.
- This typically occurs during a minor or major compaction operation on ORC-backed Hive tables.
- This can be resolved by updating the MapReduce classpath.
- The issue is identified in ODP versions 3.3.6.0-1 and 3.3.6.1-1.
- The issue will be fixed in ODP version 3.3.6.2-1.
When executing the compaction command, the jobs fail with the following error message in the logs.
```
ALTER TABLE employee COMPACT 'minor';

16:06:50.650 [main] ERROR org.apache.hadoop.mapred.YarnChild - Error running child :
java.lang.NoSuchMethodError: 'com.google.protobuf.LazyStringList com.google.protobuf.LazyStringList.getUnmodifiableView()'
    at org.apache.orc.OrcProto$Type$Builder.buildPartial(OrcProto.java:20430)
    at org.apache.orc.OrcProto$Type$Builder.build(OrcProto.java:20408)
    at org.apache.orc.OrcUtils.appendOrcTypes(OrcUtils.java:203)
    at org.apache.orc.OrcUtils.getOrcTypes(OrcUtils.java:110)
    at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.<init>(OrcRawRecordMerger.java:1031)
    at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRawReader(OrcInputFormat.java:2459)
    at org.apache.hadoop.hive.ql.txn.compactor.MRCompactor$CompactorMap.map(MRCompactor.java:823)
    at org.apache.hadoop.hive.ql.txn.compactor.MRCompactor$CompactorMap.map(MRCompactor.java:799)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:466)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:350)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:178)
    at java.base/java.security.AccessController.doPrivileged(Native Method)
    at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:172)
```
This indicates a version mismatch in the protobuf-java library used during MapReduce execution, resulting in a NoSuchMethodError.
Root Cause
The Hive client uses a newer version of the protobuf-java library that includes the getUnmodifiableView() method. However, during compaction, the MapReduce job loads an older version of protobuf-java from the cluster's classpath, which lacks this method, leading to a runtime error.
Workaround
To ensure the correct protobuf-java version is used during MapReduce jobs, you need to explicitly update the MapReduce classpath to include the Hive client’s protobuf-java jar before other entries.
The steps to fix via Ambari are as follows:
Prerequisite: Ensure that the Tez client is installed on all NodeManager hosts.
- Log into the Ambari UI.
- Navigate to MapReduce2 → Configs → Advanced → Advanced mapred-site.
- Locate the mapreduce.application.classpath property.
- Prepend the following path to the existing value (do not overwrite the current classpath):
```
/usr/odp/current/tez-client/lib/protobuf-java-3.21.1.jar:
```
- Confirm that this protobuf-java version exists on the cluster; if it does not, use the version that is available.
The final value should look similar to the following:
```
/usr/odp/current/tez-client/lib/protobuf-java-3.21.1.jar:$PWD/mr-framework/hadoop/share... (rest of the classpath)
```
- Save the changes.
- Restart the necessary services (typically MapReduce and Hive components) to apply the new configuration.
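The prepend operation above can be illustrated in shell. This is a minimal sketch only: in practice the value is edited in Ambari, and `existing` here is an illustrative stand-in for the classpath value shown in the UI:

```shell
# Minimal sketch of prepending (not overwriting) a classpath value.
# Single quotes keep $PWD literal, since it must remain unexpanded
# in the mapreduce.application.classpath property.
existing='$PWD/mr-framework/hadoop/share'
prepend='/usr/odp/current/tez-client/lib/protobuf-java-3.21.1.jar'
updated="${prepend}:${existing}"
echo "$updated"
```

The key point is that the new jar comes first, so it wins over the older protobuf-java already on the classpath.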
Verify
Use the following steps to verify the fix.
- Re-run the following command.
```
ALTER TABLE employee COMPACT 'minor';
```
- Monitor the YARN application logs for successful completion.
- Confirm that the error is no longer present and the compaction completes as expected.
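The log check can be scripted. The following is a hedged sketch: it assumes you have first saved the application log to a file (for example with `yarn logs -applicationId <app_id> > app.log`), and `check_log` is a hypothetical helper, not an ODP tool:

```shell
# Hedged sketch: scan a saved YARN application log for the compaction error.
# Takes one argument: the path to the saved log file.
check_log() {
  if grep -q "NoSuchMethodError" "$1"; then
    echo "error still present"
  else
    echo "compaction log clean"
  fi
}

# Usage:
#   check_log app.log
```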
HBase
Issue description: HBase Master fails to start when the Ranger-HBase plugin is enabled and no Tez client is installed on the HBase Master host.
When the Ranger-HBase plugin is enabled, the HBase Master fails to start due to a NoClassDefFoundError for org.apache.commons.lang.StringUtils.
- The issue is identified in ODP versions 3.3.6.0-1 and 3.3.6.1-1.
- The issue will be fixed in ODP version 3.3.6.2-1.
```
ERROR [master/hbase:16000:becomeActiveMaster] coprocessor.CoprocessorHost: The coprocessor org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor threw 'java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils'
java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils
    at org.apache.ranger.authorization.hadoop.config.RangerConfiguration.getFileLocation(RangerConfiguration.java:76) ~[?:?]
    at org.apache.ranger.authorization.hadoop.config.RangerConfiguration.addResourceIfReadable(RangerConfiguration.java:48) ~[?:?]
    at org.apache.ranger.authorization.hadoop.config.RangerPluginConfig.addResourcesForServiceType(RangerPluginConfig.java:287) ~[?:?]
    at org.apache.ranger.authorization.hadoop.config.RangerPluginConfig.<init>(RangerPluginConfig.java:73) ~[?:?]
```
Workaround:
Depending on the cluster setup, proceed with either of the following workarounds.
- If the cluster includes Tez components, ensure the Tez client is installed on the HBase Master host.
- Execute the following command on the HBase Master host to add the required JAR file; this resolves the HBase Master startup issue.
```
cp /usr/odp/3.3.6.1-1/hadoop/lib/ranger-hdfs-plugin-impl/commons-lang-2.6.jar \
   /usr/odp/3.3.6.1-1/ranger-hbase-plugin/lib/ranger-hbase-plugin-impl/
```
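The copy above can be made idempotent so re-running it is harmless. The following is a hedged sketch; `ensure_jar` is a hypothetical helper (not part of ODP), and the paths in the usage comment are the ones from the workaround command:

```shell
# Hedged sketch: copy a jar into a plugin lib directory only if it is not
# already present. Arguments: source jar path, destination directory.
ensure_jar() {
  src="$1"; dest_dir="$2"
  jar_name=$(basename "$src")
  if [ -f "${dest_dir}/${jar_name}" ]; then
    echo "already present: ${dest_dir}/${jar_name}"
  else
    cp "$src" "${dest_dir}/" && echo "copied: ${dest_dir}/${jar_name}"
  fi
}

# Usage on the HBase Master host, with the paths from the workaround:
#   ensure_jar /usr/odp/3.3.6.1-1/hadoop/lib/ranger-hdfs-plugin-impl/commons-lang-2.6.jar \
#              /usr/odp/3.3.6.1-1/ranger-hbase-plugin/lib/ranger-hbase-plugin-impl
```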