Deploying Kafka Connect clusters
You can deploy a Kafka Connect cluster by creating a KafkaConnect resource. The Kafka Connect workers are automatically configured to run in distributed mode. Each worker runs as a separate pod, and you can configure the number of workers.
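For illustration, a minimal KafkaConnect resource might look like the following sketch. The cluster name, bootstrap address, worker configuration values, and pull secret reference shown here are placeholders and assumptions rather than values prescribed by this guide; adjust them for your environment.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  namespace: [***NAMESPACE***]
spec:
  # Number of Kafka Connect workers; each worker runs in its own pod.
  replicas: 3
  # Bootstrap address of the Kafka cluster the workers connect to.
  # The Kafka cluster does not need to be managed by Strimzi.
  bootstrapServers: [***BOOTSTRAP SERVERS***]
  # Worker configuration; the group ID and storage topic names are illustrative.
  config:
    group.id: my-connect-cluster
    offset.storage.topic: my-connect-cluster-offsets
    config.storage.topic: my-connect-cluster-configs
    status.storage.topic: my-connect-cluster-status
  template:
    pod:
      # Pull secret for the registry hosting Cloudera Streams Messaging -
      # Kubernetes Operator artifacts, created as described in the prerequisites.
      imagePullSecrets:
        - name: [***CREDENTIALS SECRET***]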
- Ensure that the Strimzi Cluster Operator is installed and running. See Installation.
- Ensure that you have a working Kafka cluster. The Kafka cluster does not need to be managed by Strimzi, and it does not need to run on Kubernetes.
- Ensure that a namespace is available where you can deploy your cluster. If not, create one.
  kubectl create namespace [***NAMESPACE***]
- Ensure that the Secret containing credentials for the Docker registry where Cloudera Streams Messaging - Kubernetes Operator artifacts are hosted is available in the namespace where you plan on deploying your cluster. If the Secret is not available, create it.
  kubectl create secret docker-registry [***CREDENTIALS SECRET***] \
    --from-file=[***PATH TO CREDENTIALS JSON***] \
    --namespace=[***NAMESPACE***]
  - [***CREDENTIALS SECRET***] must be the same as the name of the Secret containing registry credentials that you created during Strimzi installation.
  - [***PATH TO CREDENTIALS JSON***] is the path to a Docker configuration JSON file that includes the registry hostname where artifacts are available as well as credentials providing access to the registry. For more information, see Installing Strimzi with Helm.
- The following steps walk you through a basic cluster deployment example. If you want to deploy a Kafka Connect cluster that has third-party connectors or other types of plugins installed, see Installing Kafka Connect connector plugins.
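As an example of those steps, assuming the resource sketched earlier is saved as kafka-connect.yaml (a hypothetical file name), you could create the cluster and check its status as follows.
kubectl apply -f kafka-connect.yaml --namespace [***NAMESPACE***]
kubectl get kafkaconnect --namespace [***NAMESPACE***]
The cluster is successfully deployed when the KafkaConnect resource reports True in the READY column, similar to the following output.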
NAME                 DESIRED REPLICAS   READY
my-connect-cluster   3                  True
- Learn more about configuring your Kafka Connect cluster. See Configuring Kafka Connect clusters.
- Install third-party connectors. See Installing Kafka Connect connector plugins.
- Deploy connectors. See Deploying connectors.