Known Issues in Kafka
Learn about the known issues in Kafka, the impact or changes to the functionality, and the workaround in Cloudera Runtime 7.1.9 SP1 CHF 9.
Known issues identified in Cloudera Runtime 7.1.9 SP1 CHF 9
There are no new known issues identified in this release.
Known issues identified before Cloudera Runtime 7.1.9 SP1 CHF 9
- CDPD-60862: Rolling restart fails during ZDU when DDL operations are in progress
- During a Zero Downtime Upgrade (ZDU), the rolling restart of services that support Data Definition Language (DDL) statements might fail if DDL operations are in progress during the upgrade. Therefore, ensure that you do not run DDL statements during a ZDU.
The following services support DDL statements:
- Impala
- Hive – using HiveQL
- Spark – using SparkSQL
- HBase
- Phoenix
- Kafka
Data Manipulation Language (DML) statements are not impacted and can be used during ZDU. Following the successful upgrade, you can resume running DDL statements. A short sketch after this entry illustrates the DDL/DML distinction.
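To make the distinction concrete, the following minimal sketch runs one DDL and one DML statement over JDBC. The HiveServer2 URL, table name, and HiveQL statements are illustrative assumptions, not part of this advisory; only the DDL statement would need to be deferred during a ZDU.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DdlVsDmlExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical HiveServer2 JDBC URL; replace with your own endpoint.
        try (Connection conn = DriverManager.getConnection("jdbc:hive2://hs2-host:10000/default");
             Statement stmt = conn.createStatement()) {

            // DDL: changes schema/metadata -- avoid while a ZDU rolling restart is in progress.
            stmt.execute("CREATE TABLE IF NOT EXISTS events (id INT, payload STRING)");

            // DML: manipulates data only -- safe to run during a ZDU.
            stmt.execute("INSERT INTO events VALUES (1, 'hello')");
        }
    }
}
```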
- OPSAPS-59553: SMM's bootstrap server config should be updated based on Kafka's listeners
- SMM does not show any metrics for Kafka or Kafka Connect when multiple listeners are set in Kafka.
- The offsets.topic.replication.factor property must be less than or equal to the number of live brokers
- The offsets.topic.replication.factor broker configuration is now enforced upon auto topic creation. Internal auto topic creation will fail with a GROUP_COORDINATOR_NOT_AVAILABLE error until the cluster size meets this replication factor requirement. A sketch that compares the two values follows this entry.
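As a quick way to verify whether a cluster is affected, the following sketch compares the live broker count with the configured offsets.topic.replication.factor using the Kafka AdminClient. The bootstrap address is an assumption; adapt it to your environment.

```java
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.config.ConfigResource;

public class OffsetsTopicRfCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker-1:9092"); // hypothetical address
        try (AdminClient admin = AdminClient.create(props)) {
            Collection<Node> brokers = admin.describeCluster().nodes().get();
            // Read offsets.topic.replication.factor from the first broker's config.
            ConfigResource res = new ConfigResource(
                    ConfigResource.Type.BROKER, String.valueOf(brokers.iterator().next().id()));
            Config config = admin.describeConfigs(Collections.singleton(res)).all().get().get(res);
            int rf = Integer.parseInt(config.get("offsets.topic.replication.factor").value());
            System.out.printf("live brokers=%d, offsets.topic.replication.factor=%d%n",
                    brokers.size(), rf);
            if (rf > brokers.size()) {
                System.out.println("Offsets topic cannot be auto-created until more brokers are live.");
            }
        }
    }
}
```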
- Requests fail when sending to a nonexistent topic with auto.create.topics.enable set to true
- The first few produce requests fail when sending to a nonexistent topic with auto.create.topics.enable set to true. One way to avoid the initial failures is to create the topic explicitly before producing, as sketched after this entry.
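A minimal sketch of that workaround, assuming a hypothetical topic name and broker address: the topic is created with the AdminClient before the first produce request, so no request has to rely on auto-creation.

```java
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.TopicExistsException;

public class CreateBeforeProduce {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker-1:9092"); // hypothetical address
        // Create the topic up front instead of relying on auto-creation.
        try (AdminClient admin = AdminClient.create(props)) {
            try {
                admin.createTopics(Collections.singleton(
                        new NewTopic("events", 3, (short) 3))).all().get();
            } catch (ExecutionException e) {
                if (!(e.getCause() instanceof TopicExistsException)) {
                    throw e; // ignore "already exists", surface anything else
                }
            }
        }
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("events", "key", "value")).get();
        }
    }
}
```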
- Performance degradation when SSL is enabled
- In some configuration scenarios, significant performance degradation can occur when SSL is enabled. The impact varies depending on your CPU, JVM version, Kafka configuration, and message size. Consumers are typically more affected than producers.
- OPSAPS-43236: Kafka garbage collection logs are written to the process directory
- By default, Kafka garbage collection logs are written to the agent process directory. Changing the default path for these log files is currently unsupported.
- RANGER-3809: Idempotent Kafka producer fails to initialize due to an authorization failure
- Kafka producers that have idempotence enabled require the Idempotent Write permission to be set on the cluster resource in Ranger. If the permission is not given, the client fails to initialize and an error similar to the following is thrown:

```
org.apache.kafka.common.KafkaException: Cannot execute transactional method because we are in an error state
	at org.apache.kafka.clients.producer.internals.TransactionManager.maybeFailWithError(TransactionManager.java:1125)
	at org.apache.kafka.clients.producer.internals.TransactionManager.maybeAddPartition(TransactionManager.java:442)
	at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:1000)
	at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:914)
	at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:800)
	...
Caused by: org.apache.kafka.common.errors.ClusterAuthorizationException: Cluster authorization failed.
```

Idempotence is enabled by default for clients in Kafka 3.0.1, 3.1.1, and any version after 3.1.1. This means that any client updated to 3.0.1, 3.1.1, or a later version is affected by this issue. If granting the permission is not possible right away, idempotence can be disabled explicitly on the client, as sketched after this entry.
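A minimal sketch of that client-side stopgap, assuming the recommended fix of granting the Idempotent Write permission in Ranger is not immediately possible. Note that disabling idempotence weakens delivery guarantees (duplicates become possible on retries), so treat this as temporary.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class NonIdempotentProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092"); // hypothetical address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // Explicitly opt out of idempotence so the producer does not require
        // the Idempotent Write permission on the cluster resource in Ranger.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "false");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // producer.send(...) as usual
        }
    }
}
```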
- CDPD-49304: AvroConverter does not support composite default values
- AvroConverter cannot handle schemas containing a STRUCT type default value. The sketch after this entry shows such a schema.
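For illustration, a hypothetical Avro schema of the kind that triggers the issue: the address field has a record (STRUCT) type and carries a default value. The record and field names are made up for this sketch.

```java
import org.apache.avro.Schema;

public class StructDefaultSchema {
    public static void main(String[] args) {
        // A record-typed field with a default value -- the shape AvroConverter rejects.
        String json = "{"
                + "\"type\":\"record\",\"name\":\"User\",\"fields\":["
                + "  {\"name\":\"address\",\"type\":{"
                + "     \"type\":\"record\",\"name\":\"Address\",\"fields\":["
                + "       {\"name\":\"city\",\"type\":\"string\"}]},"
                + "   \"default\":{\"city\":\"unknown\"}}"
                + "]}";
        Schema schema = new Schema.Parser().parse(json);
        System.out.println(schema.toString(true));
    }
}
```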
- DBZ-4990: The Debezium Db2 Source connector does not support schema evolution
- The Debezium Db2 Source connector does not support the evolution (updates) of schemas. In addition, schema change events are not emitted to the schema change topic if there is a change in the schema of a table that is in capture mode. For more information, see DBZ-4990.
- CFM-3532: The Stateless NiFi Source, Stateless NiFi Sink, and HDFS Stateless Sink connectors cannot use Snappy compression
- This issue only affects the Stateless NiFi Source and Sink connectors if the connector is running a dataflow that uses a processor that uses Hadoop libraries and is configured to use Snappy compression. The HDFS Stateless Sink connector is only affected if the Compression Codec or Compression Codec for Parquet properties are set to SNAPPY. If you are affected by this issue, errors similar to the following will be present in the logs:

```
Failed to write to HDFS due to java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()
Failed to write to HDFS due to java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.
```

A way to check whether the Hadoop native library on a host supports Snappy is sketched after this entry.
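As a diagnostic aid, the following sketch calls the same Hadoop API that appears in the first error above to check whether the loaded native Hadoop library was built with Snappy support. It assumes the Hadoop libraries are on the classpath of the host where the connector runs.

```java
import org.apache.hadoop.util.NativeCodeLoader;

public class SnappySupportCheck {
    public static void main(String[] args) {
        // True only if libhadoop itself was loaded.
        boolean nativeLoaded = NativeCodeLoader.isNativeCodeLoaded();
        System.out.println("native hadoop library loaded: " + nativeLoaded);
        if (nativeLoaded) {
            // The same call that fails with UnsatisfiedLinkError in the connector logs.
            System.out.println("built with snappy support: "
                    + NativeCodeLoader.buildSupportsSnappy());
        }
    }
}
```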
- OPSAPS-69317: Kafka Connect Rolling Restart Check fails if SSL Client authentication is required
- The rolling restart action does not work in Kafka Connect when the ssl.client.auth option is set to required. The health check fails with a timeout, which blocks the restart of subsequent Kafka Connect instances.
