Cloudera Manager 7.13.1.200 Cumulative hotfix 2
Learn more about the Cloudera Manager 7.13.1.200 cumulative hotfix 2.
This cumulative hotfix was released on April 24, 2025.
- Floodix Daemon support for Cloudera Manager 7.13.1.200 CHF2
- Starting with the Cloudera Manager 7.13.1.200 CHF2 release, the Floodix daemon (which uses the Anacrolix/torrent client library, written in Go) replaces the old Flood daemon (which uses the Libtorrent protocol and libraries) on all hosts managed by Cloudera Manager. Both daemons serve the same purpose: efficient distribution of Cloudera Runtime parcels.
With the new Floodix daemon running, the Cloudera Manager Server acts as a seeder for all the parcels that are managed across different clusters. If you want to disable the Cloudera Manager Server acting as a seeder, perform the steps from How to disable the Cloudera Manager Server acting as a seeder for all the managed parcels.
- Rocky Linux 9.4 support for Cloudera Manager
- Starting with the Cloudera Manager 7.13.1.200 CHF2 release, Cloudera Manager supports Rocky Linux 9.4, offering greater flexibility and platform options.
Note that Rocky Linux 9.4 supports only Python 3.9 in the Cloudera Manager 7.13.1.200 CHF2 release.
- Apply java options for each Ozone role separately
- Previously, Java options added through the Ozone Java Options configuration applied to all the Ozone roles together; you could not add Java options to each Ozone role individually.
Now, new configurations such as Ozone Manager Java Options and Ozone SCM Java Options allow you to add Java options to each Ozone role separately. Common Java options added to the Ozone Java Options configuration still apply to all the Ozone roles, as illustrated in the sketch below.
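A minimal sketch with hypothetical values, assuming the common and role-specific options are combined on the role's JVM command line:
```
Ozone Java Options:          -Xms4g -Xmx8g   (hypothetical; applies to every Ozone role)
Ozone Manager Java Options:  -XX:+UseG1GC    (hypothetical; applies only to the Ozone Manager role)

Effective Ozone Manager JVM options (assumed combination):
-Xms4g -Xmx8g -XX:+UseG1GC
```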
- OPSAPS-72398: Hive Metastore connection pool metrics in Cloudera Runtime Charts
- Starting with Cloudera Runtime 7.3.1.200 SP1, new Hive Metastore charts in Cloudera Manager display connection pool metrics. These charts provide visibility into the connection pools (such as objectstore and txnhandler) through the following pool metrics:
- objectstore.pool.ActiveConnections
- objectstore.pool.IdleConnections
- objectstore.pool.PendingConnections
- objectstore.pool.TotalConnections
This enhancement helps you monitor and optimize Hive Metastore performance.
- OPSAPS-68733: New KMS Tomcat metrics are available in Cloudera Manager
- New Tomcat container metrics are added in Cloudera Manager for monitoring Tomcat operations. These metrics are available in the Chart Builder and can be used to create new charts as required (see the sketch after this list). The new metrics are as follows:
- ranger_kms_max_connections
- ranger_kms_active_connections
- ranger_kms_accept_connections
- ranger_kms_connection_timeout
- ranger_kms_connection_keepalive_timeout
- ranger_kms_max_worker_thread_count
- ranger_kms_min_worker_thread_count
- ranger_kms_active_worker_thread_count
- ranger_kms_total_worker_thread_count
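Because Chart Builder charts are backed by tsquery, you can also fetch these metrics through the Cloudera Manager time-series REST API. The following Python sketch is illustrative only: the host, port, API version, credentials, and the WHERE predicate are placeholder assumptions to adapt to your deployment.
```python
# Minimal sketch (not an official client): run a tsquery for one of the
# new Ranger KMS Tomcat metrics via the Cloudera Manager time-series API.
import requests

CM_BASE = "https://cm-host.example.com:7183/api/v54"  # hypothetical host and API version
AUTH = ("admin", "admin-password")                    # placeholder credentials

# The same query can be pasted into the Chart Builder; the WHERE predicate
# is an assumption -- adjust it to match your Ranger KMS service name.
TSQUERY = 'SELECT ranger_kms_active_connections WHERE serviceName = "ranger_kms"'

resp = requests.get(f"{CM_BASE}/timeseries", params={"query": TSQUERY}, auth=AUTH)
resp.raise_for_status()
for item in resp.json().get("items", []):
    for series in item.get("timeSeries", []):
        meta = series["metadata"]
        print(meta.get("metricName"), meta.get("entityName"), len(series.get("data", [])))
```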
- OPSAPS-73546: Service Monitor fails to perform Canary tests on HMS / HBASE / ZooKeeper due to missing dependencies
- Due to a missing dependency caused by an incomplete build and packaging on certain OS releases, the HMS (Hive Metastore) Canary health test fails, logging a ClassNotFoundException in the Service Monitor log. This problem affects all deployments that run Cloudera Runtime cluster version 7.1.x or 7.2.x with Cloudera Manager version 7.13.1.x on any OS other than RHEL 8.
- OPSAPS-73225: Cloudera Manager Agent reporting inactive/failed processes in Heartbeat request
- As part of introducing Cloudera Manager 7.13.x, changes were made to the Cloudera Manager logging that cause the Cloudera Manager Agent to report inactive or stale processes in Heartbeat requests.
As a result, the Cloudera Manager Server logs fill up rapidly with these notifications, although they have no impact on the service.
In addition, adding support for the Cloudera Observability feature introduced additional messages in the server logging. If you did not purchase the Observability feature, or telemetry monitoring is not in use, these messages (which appear as "TELEMETRY_ALTUS_ACCOUNT is not configured for Otelcol") fill the server logs and prevent proper follow-up on server activities.
This will be fixed in a later release by moving these log notifications to the DEBUG level so that they do not appear in the Cloudera Manager Server logs. Until that fix is available, filter out these messages as a workaround.
- OPSAPS-73211: Cloudera Manager 7.13.1 does not clean up Python Path impacting Hue to start
- When you upgrade from Cloudera Manager 7.7.1 or lower versions to Cloudera Manager 7.13.1 or higher versions with CDP Private Cloud Base 7.1.7.x, Hue does not start because Cloudera Manager forces Hue to start with Python 3.8, while Hue needs Python 2.7.
This happens because Cloudera Manager does not clean up the Python path at any point, so when Hue tries to start, the Python path points to 3.8, which Hue does not support on CDP Private Cloud Base 7.1.7.x.
- OPSAPS-72756: The runOzoneCommand API endpoint fails during the Ozone replication policy run
- The /clusters/{clusterName}/runOzoneCommand Cloudera Manager API endpoint fails when the API is called with the getOzoneBucketInfo command (see the sketch after this list). In this scenario, the Ozone replication policy runs also fail if the following conditions are true:
- The source Cloudera Manager version is 7.11.3 CHF11 or 7.11.3 CHF12.
- The target Cloudera Manager version is 7.11.3 through 7.11.3 CHF10, or 7.13.0.0 or later where the API_OZONE_REPLICATION_USING_PROXY_USER feature flag is disabled.
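For reference, the following is a minimal, hypothetical sketch of calling the affected endpoint. The HTTP method, the command query parameter, and the host, API version, and credentials are assumptions; consult the Cloudera Manager API reference for the exact request contract.
```python
# Hypothetical sketch of invoking the runOzoneCommand endpoint described
# above; the POST method and the "command" query parameter are assumptions.
import requests

CM_BASE = "https://cm-host.example.com:7183/api/v54"  # hypothetical host and API version
AUTH = ("admin", "admin-password")                    # placeholder credentials

resp = requests.post(
    f"{CM_BASE}/clusters/Cluster1/runOzoneCommand",  # Cluster1 is a placeholder
    params={"command": "getOzoneBucketInfo"},        # the command that triggers the failure
    auth=AUTH,
)
print(resp.status_code, resp.text)  # fails on the affected version combinations
```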
- OPSAPS-65377: Cloudera Manager - Host Inspector does not find Psycopg2 on Ubuntu 20 or Red Hat 8.x when Psycopg2 version 2.9.3 is installed
- Host Inspector fails with a Psycopg2 version error while upgrading to Cloudera Manager 7.13.1.x versions. When you run the Host Inspector, it reports that Psycopg2 was not found, even though it is installed on all hosts.
- OPSAPS-72447, CDPD-76705: Ozone incremental replication fails to copy renamed directory
- Ozone incremental replication using Ozone replication policies succeeds but might fail to sync nested renames for FSO buckets.
- OPSAPS-68340: Zeppelin paragraph execution fails with the User not allowed to impersonate error.
- Starting from Cloudera Manager 7.11.3, Cloudera Manager auto-configures the livy_admin_users configuration when Livy is run for the first time. If you add Zeppelin or Knox services later to the existing cluster and do not manually update the service user, the User not allowed to impersonate error is displayed.
- OPSAPS-69847: Replication policies might fail if source and target use different Kerberos encryption types
- Replication policies might fail if the source and target Cloudera Manager instances use different Kerberos encryption types because of different Java versions. For example, Java 11 and higher versions might use the aes256-cts encryption type, while versions lower than Java 11 might use the rc4-hmac encryption type.
- OPSAPS-69342: Access issues identified in MariaDB 10.6 were causing discrepancies in High Availability (HA) mode
- MariaDB 10.6, by default, includes the property require_secure_transport=ON in the configuration file (/etc/my.cnf), which is absent in MariaDB 10.4. This setting prohibits non-TLS connections, leading to access issues. This problem is observed in High Availability (HA) mode, where certain operations may not be using the same connection.
- OPSAPS-70771: Running Ozone replication policy does not show performance reports
- During an Ozone replication policy run, the "A server error has occurred. See Cloudera Manager server log for details" error message appears when you click:
- the performance report options on the Replication Policies page, or
- Download CSV on the Replication History page to download any report.
- CDPD-53185: Clear REPL_TXN_MAP table on target cluster when deleting a Hive ACID replication policy
- The entry in the REPL_TXN_MAP table on the target cluster is retained when the following conditions are true:
- A Hive ACID replication policy is replicating a transaction that requires multiple replication cycles to complete.
- The replication policy and databases used in it get deleted on the source and target cluster even before the transaction is completely replicated.
In this scenario, if you create a database using the same name as the deleted database on the source cluster, and then use the same name for the new Hive ACID replication policy to replicate the database, the replicated database on the target cluster is tagged as ‘database incompatible’. This happens after the housekeeper thread process (that runs every 11 days for an entry) deletes the retained entry.
- OPSAPS-71592: Replication Manager does not read the default value of “ozone_replication_core_site_safety_valve” during Ozone replication policy run
- During the Ozone replication policy run, Replication Manager does not read the value in the ozone_replication_core_site_safety_valve advanced configuration snippet if it is configured with the default value.
- OPSAPS-71897: Finalize Upgrade command fails after upgrading the cluster with Custom Kerberos setup, causing INTERNAL_ERROR with EC writes.
- After the UI FinalizeCommand fails, you must manually run the finalize commands through the Ozone Admin CLI:
- kinit with the SCM custom Kerberos principal
- ozone admin scm finalizeupgrade
- ozone admin scm finalizationstatus
- OPSAPS-72204: HMS compaction configuration not updated through Cloudera Manager UI
- The hive.compactor.initiator.on checkbox in the Cloudera Manager UI for Hive Metastore (HMS) does not reflect the actual configuration value in cloud deployments. The default value is false, causing the compactor to not run.
- OPSAPS-70702: Ranger replication policies fail because of the truststore file location
- Ranger replication policies fail during the Exporting services, policies and roles from Ranger remote step.
- OPSAPS-71424: The configuration sanity check step ignores the replication advanced configuration snippet values during the Ozone replication policy job run
- The OBS-to-OBS Ozone replication policy jobs fail if the S3 property values for fs.s3a.endpoint, fs.s3a.secret.key, and fs.s3a.access.key are empty in the Ozone Service Advanced Configuration Snippet (Safety Valve) for ozone-conf/ozone-site.xml, even though you defined the properties in the Ozone Replication Advanced Configuration Snippet (Safety Valve) for core-site.xml.
- OPSAPS-71403: Ozone replication policy creation wizard shows "Listing Type" field in source Cloudera Private Cloud Base versions lower than 7.1.9
- When the source Cloudera Private Cloud Base cluster version is lower than 7.1.9 and the Cloudera Manager version is 7.11.3, the Ozone replication policy creation wizard shows the Listing Type field and its options. These options are not available in Cloudera Private Cloud Base 7.1.8.x versions.
- OPSAPS-71659: Ranger replication policy fails because of incorrect source to destination service name mapping
- Ranger replication policy fails because of incorrect source to destination service name mapping format during the transform step.
- OPSAPS-69782: HBase COD-COD replication from 7.3.1 to 7.2.18 fails during the "create adhoc snapshot" step
- An HBase replication policy replicating from a 7.3.1 COD cluster to a 7.2.18 COD cluster with "Perform Initial Snapshot" enabled fails during the snapshot creation step in Cloudera Replication Manager.
- OPSAPS-71414: Permission denied for Ozone replication policy jobs if the source and target bucket names are identical
- The OBS-to-OBS Ozone replication policy job fails with the com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden or Permission denied error when the bucket names on the source and target clusters are identical and the job uses S3 delegation tokens. Note that the Ozone replication jobs use the delegation tokens when the S3 connector service is enabled in the cluster.
- OPSAPS-71256: The “Create Ranger replication policy” action shows 'TypeError' if no peer exists
- When you click the Create Ranger replication policy action and no peer exists, the TypeError: Cannot read properties of undefined error appears.
- OPSAPS-71067: Wrong interval sent from the Replication Manager UI after Ozone replication policy submit or edit process.
- When you edit the existing Ozone replication policies, the schedule frequency changes unexpectedly.
- OPSAPS-70848: Hive external table replication policies fail if the source cluster is using Dell EMC Isilon storage
- During the Hive external table replication policy run, the replication policy fails at the Hive Replication Export step. This issue is resolved.
- OPSAPS-71005: RemoteCmdWork uses a single-threaded executor
- Replication Manager runs the remote commands for a replication policy through a single-thread executor.
- OPSAPS-59553: SMM's bootstrap server config should be updated based on Kafka's listeners
- SMM does not show any metrics for Kafka or Kafka Connect when multiple listeners are set in Kafka.
- OPSAPS-69317: Kafka Connect Rolling Restart Check fails if SSL Client authentication is required
- The rolling restart action does not work in Kafka Connect when the ssl.client.auth option is set to required. The health check fails with a timeout, which blocks restarting the subsequent Kafka Connect instances.
- OPSAPS-70971: Schema Registry does not have permissions to use Atlas after an upgrade
- Following an upgrade, Schema Registry might not have the required permissions in Ranger to access Atlas. As a result, Schema Registry's integration with Atlas might not function in secure clusters where Ranger authorization is enabled.
- OPSAPS-59597: SMM UI logs are not supported by Cloudera Manager
- Cloudera Manager does not display a Log Files menu for the SMM UI role (and SMM UI logs cannot be displayed in the Cloudera Manager UI) because the logging type used by SMM UI is not supported by Cloudera Manager.
- OPSAPS-72298: Impala metadata replication is mandatory and UDF function parameters are not mapped to the destination
- Impala metadata replication is enabled by default, but the legacy Impala C/C++ UDFs (user-defined functions) are not replicated as expected during the Hive external table replication policy run.
- OPSAPS-70713: Error appears when running Atlas replication policy if source or target clusters use Dell EMC Isilon storage
- You cannot create an Atlas replication policy between clusters if one or both the clusters use Dell EMC Isilon storage.
- OPSAPS-72468: Subsequent Ozone OBS-to-OBS replication policy runs do not skip replicated files during replication
- The first Ozone replication policy run is a bootstrap run. Sometimes, the subsequent runs might also be bootstrap jobs if the incremental replication fails and the job falls back to bootstrap replication. In this scenario, the bootstrap replication jobs might replicate files that were already replicated, because the modification time of a file differs between the source and the target cluster.
- OPSAPS-72470: Hive ACID replication policies fail when target cluster uses Dell EMC Isilon storage and supports JDK17
- Hive ACID replication policies fail if the target cluster is deployed with Dell EMC Isilon storage and also supports JDK17.
- OPSAPS-72809: Ranger policy script for Knox fails due to double quotation marks
- The Ranger policy script for Knox (setupRanger.sh) fails because the CSD_JAVA_OPTS parameters are enclosed in double quotation marks in the script. The issue is fixed now.
- OPSAPS-72795: Do not allow multiple Ozone services in a cluster
- Previously, it was possible to configure multiple Ozone services in a single cluster, which could cause irreversible damage to a running cluster. With this fix, you can install only one Ozone service in a cluster.
- OPSAPS-72767: Install Oozie ShareLib Cloudera Manager command fails on FIPS and FedRAMP clusters
- The Install Oozie ShareLib command using Cloudera Manager fails to execute on FIPS and FedRAMP clusters. This issue is fixed now.
- OPSAPS-72323: Cloudera Manager UI is down with bootstrap failure due to ConfigGenExecutor throwing exception
- This issue is fixed now.
- OPSAPS-71566: The polling logic of RemoteCmdWork goes down if the remote Cloudera Manager goes down
- When the remote Cloudera Manager goes down or when there are network failures, RemoteCmdWork stops polling. To ensure that the daemon continues to poll even when there are network failures or the remote Cloudera Manager goes down, you can set the remote_cmd_network_failure_max_poll_count=[***ENTER REMOTE EXECUTOR MAX POLL COUNT***] parameter on the page. Note that the actual timeout between polls is given by a piecewise constant function (step function) whose breakpoints are: polls 1 through 11 use 5 seconds, 12 through 17 use 1 minute, 18 through 35 use 2 minutes, 36 through 53 use 5 minutes, 54 through 74 use 8 minutes, 75 through 104 use 15 minutes, and so on. Therefore, when you enter 1, the polling continues for 5 seconds after the Cloudera Manager goes down or after a network failure. Similarly, when you set it to 75, the polling continues for 15 minutes.
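The following Python sketch reconstructs the step function exactly as described above; the function name is illustrative, and the behavior past poll 104 ("and so on") is assumed to keep the last documented interval.
```python
import bisect

# Poll-interval step function from the breakpoints above:
# polls 1-11 -> 5 s, 12-17 -> 1 min, 18-35 -> 2 min,
# 36-53 -> 5 min, 54-74 -> 8 min, 75-104 -> 15 min.
_STEPS = [(1, 5), (12, 60), (18, 120), (36, 300), (54, 480), (75, 900)]
_STARTS = [start for start, _ in _STEPS]

def poll_interval_seconds(poll_number: int) -> int:
    """Return the wait, in seconds, before the given (1-based) poll attempt."""
    idx = bisect.bisect_right(_STARTS, poll_number) - 1
    return _STEPS[max(idx, 0)][1]

# With remote_cmd_network_failure_max_poll_count=1, the single poll waits
# 5 seconds; by poll 75 the interval has grown to 15 minutes (900 s).
print(poll_interval_seconds(1))   # 5
print(poll_interval_seconds(75))  # 900
```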
- OPSAPS-67197: Ranger RMS server shows as healthy without service being accessible
- Being a web service, Ranger RMS might fail to initialize due to other issues, making RMS inaccessible. However, the Ranger RMS service was still shown as healthy, because Cloudera Manager only monitors the Process Identification Number (PID).
This issue is fixed now. Health status canary support is added for the Ranger RMS service; the canary connects to RMS at specific intervals and shows an alert on the Cloudera Manager UI if RMS is not reachable.
- OPSAPS-71933: Telemetry Publisher is unable to publish Spark event logs to Cloudera Observability when multiple History Servers are set up in the Spark service.
- This issue is now resolved by adding support for multiple Spark History Server deployments in Telemetry Publisher.
- OPSAPS-71623: Some Spark jobs are missing from the Workload XM interface. In the Telemetry Publisher logs for these Spark jobs, the error message java.lang.IllegalArgumentException: Wrong FS for Datahub cluster is displayed.
- The issue is resolved by addressing Telemetry Publisher failures during the processing of Yarn logs.
- Fixed Common Vulnerabilities and Exposures
- For information about Common Vulnerabilities and Exposures (CVE) that are fixed in Cloudera Manager 7.13.1 cumulative hotfix 2, see Fixed Common Vulnerabilities and Exposures in Cloudera Manager 7.13.1 and Cloudera Manager 7.13.1 cumulative hotfixes.
Cloudera Manager 7.13.1.200 CHF 2 download information
The repositories for Cloudera Manager 7.13.1.200-CHF 2 are listed in the following table:
| Repository Type | Repository Location |
| --- | --- |
| RHEL 9 Compatible | Repository: Repository File: |
| RHEL 8 Compatible | Repository: Repository File: |
| SLES 15 | Repository: Repository File: |
| Ubuntu 22 | Repository: Repository File: |
| Ubuntu 20 | Repository: Repository File: |