Known issues in 7.1.9 SP1 CHF 5

You must be aware of the known issues and limitations, the areas of impact, and the workarounds in Cloudera Runtime 7.1.9 SP1 CHF 5.

The list of known issues for the Runtime release 7.1.9 SP1 CHF 5 includes the known issues from Runtime release 7.1.9 SP1. For more information, see Known issues in Cloudera Runtime 7.1.9 SP1.

CDPD-49323: INSERT statement does not respect Ranger policies for HDFS
In a cluster with Ranger authorization (and with legacy catalog mode), inserting into a table whose HDFS POSIX permissions exclude access for the impala user fails, even if the cm_hdfs -> all-path policy grants RWX to the impala user. The INSERT statement fails with: AnalysisException: Unable to INSERT into target table (default.t1) because Impala does not have WRITE access to HDFS location: hdfs://XXXXXXXXXXXX

Apache Jira: IMPALA-11871

CDPD-78069: Oozie action configuration's Java options are not applied due to CDPD-60551
Java options configured in the workflow XML's action configuration, such as yarn.app.mapreduce.am.command-opts and mapreduce.map.java.opts, are not applied to the Oozie Launcher AM's JVM.
  1. Go to Cloudera Manager > Oozie > Configuration.
  2. Search for Oozie Server Advanced Configuration Snippet (Safety Valve) for oozie-site.xml.
  3. In the Oozie Server Advanced Configuration Snippet (Safety Valve) for oozie-site.xml, add the following property:
    <property>
      <name>oozie.LauncherConfigurationInjector.hadoop.search.properties</name>
      <value> </value>
    </property>
  4. Restart Oozie.
CDPD-77054: Exporting shell entities can cause the import to fail
For Apache Hive tables created with Apache Spark, there may be shell entities created in Apache Atlas. If these entities are in the export zip file before they are resolved, the import will fail.
Workaround: None
CDPD-75742: Upgrade commons-lang3 to 3.13.0
Commons Text 1.11.0 in Spark calls the Range.of(..) method of commons-lang3, which is not available in version 3.12.0, resulting in Oozie Spark job failures.
Workaround: Upgrade to commons-lang3 version 3.13.0, which contains the required method.
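If the job bundles its own copy of commons-lang3 (for example, through a Maven build), pinning the fixed version might look like the following sketch; the coordinates are the standard Maven Central ones for commons-lang3 and are shown for illustration only:

```xml
<!-- Pin commons-lang3 to 3.13.0, which provides the Range.of(..) method
     that Commons Text 1.11.0 requires -->
<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-lang3</artifactId>
  <version>3.13.0</version>
</dependency>
```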
CDPD-77399: HBase fails to register the servlet metrics and throws ClassNotFoundException: org.apache.hadoop.metrics.MetricsServlet
The MetricsServlet class is a Hadoop 2-based metrics servlet that is unavailable in Hadoop 3 deployments.
Workaround: Ignore this WARN log message during HBase Master and RegionServer startup.
Apache Jira: HBASE-28315
CDPD-70450: Impala SQL queries that include the “WITH” clause should populate lineage in Atlas
Impala SQL queries that use the WITH clause do not show lineage in Apache Atlas, while queries without the WITH clause do. Lineage for Impala SQL queries using the WITH clause is not supported.
CDPD-76035: Resource lookup for Atlas service is failing
Once the Atlas configuration property atlas.authentication.method.file is enabled and a classification is created, the classification does not synchronize correctly to the Type Category resource field in Apache Ranger. The newly created classification cannot be selected as the Type Name.
CDPD-77738: Atlas hook authorization issue causing HiveCreateSysDb timeout
An Atlas hook authorization error causes the HiveCreateSysDb command to time out due to repeated retries.
Workaround: None
CDPD-79160: NPE while deleting BusinessMetadata
If business metadata is created without adding any applicable types, a NullPointerException occurs when you try to delete that business metadata.
Workaround: None

Apache Jira: ATLAS-4863

On FIPS clusters, the Knox-Impala connection fails with SSL error code 5
The Knox-Impala connection fails with SSL error code 5 on FIPS clusters. This prevents access to Impala through Knox.
CDPD-78656: Health test for Knox fails if gateway.client.auth.needed = true is set
The health test for Knox Gateway fails if the gateway.client.auth.needed parameter is set to true. Environments where the health test relies on a curl call are impacted: the curl call from Cloudera Manager does not specify any client certificate (store), while Knox is configured to require one.
Workaround: To avoid the startup failure, use gateway.client.auth.wanted = true instead.
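Assuming the parameter is managed in the Knox gateway-site configuration (the usual Hadoop-style XML format; the exact safety-valve location may vary by deployment), the workaround property would look something like this:

```xml
<property>
  <name>gateway.client.auth.wanted</name>
  <value>true</value>
  <!-- Client certificates are requested but not required, so the
       Cloudera Manager health-test curl call can still connect -->
</property>
```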
CDPD-77911: Missing Log4j Redactor dependency
The missing org.cloudera.logredactor dependency can result in Log4j errors during startup. Additionally, it can lead to unwanted data leaks because of incorrect log redaction.
CDPD-80921: Without permission for one glossary, /glossary call throws exception
When you do not have permission for a glossary, the /glossary call results in an exception, blocking authorization for all glossaries. The error message is the following:
{
  "errorCode": "ATLAS-403-00-001",
  "errorMessage": "hrt_qa is not authorized to perform read entity: guid=*********-****-****-****-*******"
}
CDPD-82056: UI: when the server returns date fields as '0', the UI shows the current time
If an API response contains an invalid date value (such as 0) for a field displayed on the user interface, the current system date is shown instead. This issue specifically affects the Entity Detail page, where the create time and modified time are displayed.
Apache Jira: ATLAS-5015
CDPD-69140: Incremental export: when an entity with a propagated tag is exported, the tag is not propagated in the export
When you create tables with multiple levels of lineage depth, a tag applied to the first table is propagated along the lineage. After exporting the last table, the exported entity does not have the propagated tag.
CDPD-69213: Export/Import, Incremental export: when an exported entity has a tag propagated from a deleted entity, the tag is not propagated to it at the target
When you create tables with multiple levels of lineage depth, a tag applied to the first table is propagated along the lineage. After dropping the first table and exporting the entire lineage, the child tables do not have the propagated tag.