Removing cluster hosts and Kubernetes nodes

Follow these instructions to remove hosts from the cluster and delete their corresponding Kubernetes nodes.

Before removing the nodes, shut them down by following the instructions in Performing maintenance of a single host in the Embedded Container Service cluster.
If you want to repurpose nodes from Cloudera Private Cloud Base, you must use a different procedure from the one described below. For more information, see Deleting hosts in Cloudera Private Cloud.
  1. In Cloudera Manager, go to CDSW service > Status, select the names of the hosts you want to remove, and click Actions > Stop Roles on Hosts.
    After you click Confirm, a success message appears.
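    Optionally, you can confirm from the master host that the stopped hosts are no longer healthy in Kubernetes. After the roles stop and a short grace period passes, the affected nodes typically report a NotReady status. A quick check, assuming kubectl access on the master:
    kubectl get nodes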
  2. In Instances, select the Docker Daemon and Worker roles assigned to ageorge-cdsw-4.com and ageorge-cdsw-5.com, and click Actions > Delete.
    In Instances, ageorge-cdsw-4.com and ageorge-cdsw-5.com are no longer listed under Hostname.
  3. On the master host (ageorge-cdsw-1.com in this example), use kubectl to delete the nodes from Kubernetes, and then verify that the nodes are removed.
    [root@ageorge-cdsw-1 ~]# kubectl delete node ageorge-cdsw-4.com
    node "ageorge-cdsw-4.com" deleted
    [root@ageorge-cdsw-1 ~]# kubectl delete node ageorge-cdsw-5.com
    node "ageorge-cdsw-5.com" deleted
                        
                        
    [root@ageorge-cdsw-1 ~]# kubectl get nodes
    NAME                 STATUS   ROLES    AGE   VERSION
    ageorge-cdsw-1.com   Ready    master   61m   v1.19.15
    ageorge-cdsw-2.com   Ready    <none>   61m   v1.19.15
    ageorge-cdsw-3.com   Ready    <none>   61m   v1.19.15              
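    Optionally, you can drain each node before deleting it so that any remaining pods are evicted gracefully. A brief sketch; the --delete-local-data flag matches the kubectl v1.19 release shown in this example (later releases rename it --delete-emptydir-data):
    kubectl drain ageorge-cdsw-4.com --ignore-daemonsets --delete-local-data
    kubectl drain ageorge-cdsw-5.com --ignore-daemonsets --delete-local-data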
  4. Log in to each host, ageorge-cdsw-4.com and ageorge-cdsw-5.com, and reset Kubernetes.
    kubeadm reset
    Expect a similar output:
    [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
    [reset] Are you sure you want to proceed? 
  5. Respond yes to the prompt, and follow the instructions in the resulting output.
    Expect a similar output:
    [preflight] Running pre-flight checks
    ...
    The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
                            
    The reset process does not reset or clean up iptables rules or IPVS tables.
    If you wish to reset iptables, you must do so manually by using the "iptables" command.
                            
    If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
    to reset your system's IPVS tables.
                            
    The reset process does not clean your kubeconfig files and you must remove them manually.
    Please, check the contents of the $HOME/.kube/config file.                
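    To complete the cleanup that kubeadm reset leaves to you, remove the leftover files and rules manually on each reset host. A minimal sketch, run as root, assuming the default paths named in the output above:
    # Remove the CNI configuration left behind by the reset.
    rm -rf /etc/cni/net.d
    # Flush iptables rules; run ipvsadm --clear only if your cluster used IPVS.
    iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
    ipvsadm --clear
    # Remove the leftover kubeconfig file.
    rm -f $HOME/.kube/config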
  6. On the master host, check that the Kubernetes nodes are removed.
    kubectl get nodes
    Expect a similar output:
    NAME                 STATUS   ROLES    AGE   VERSION
    ageorge-cdsw-1.com   Ready    master   65m   v1.19.15
    ageorge-cdsw-2.com   Ready    <none>   64m   v1.19.15
    ageorge-cdsw-3.com   Ready    <none>   64m   v1.19.15
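    To double-check that no workloads still reference the removed hosts, you can list pods across all namespaces and filter on the removed host names; an empty result is expected. A quick sketch:
    kubectl get pods --all-namespaces -o wide | grep -E 'ageorge-cdsw-(4|5)\.com'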
  7. If the Spark Gateway role is active on the hosts, go to the SPARK ON YARN service > Instances, select each Gateway role assigned to the hosts, and click Actions > Delete.
    In this example, the Gateway roles for the ageorge-cdsw-4.com and ageorge-cdsw-5.com hosts are selected.

    Click Delete. The roles assigned to the ageorge-cdsw-4.com and ageorge-cdsw-5.com hosts are deleted.

  8. Delete any other active roles from the hosts.
  9. In All Hosts, select the ageorge-cdsw-4.com and ageorge-cdsw-5.com hosts, and click Actions > Remove from Cluster.
    You are prompted to confirm removing the hosts from the cluster:

    Click Confirm. A success message appears when the operation completes.
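
Optionally, you can confirm the removal with the Cloudera Manager API by listing the hosts that remain in the deployment. A hedged sketch; the cm-host name, port, credentials, and API version (v41 here) are placeholders that depend on your Cloudera Manager release:
    curl -s -u admin:admin "http://cm-host:7180/api/v41/hosts" | grep hostname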