Additional resource requirements for Cloudera AI
Standard resource mode requirements for standalone Cloudera AI. Node count should not be a limiting factor, provided the memory and CPU minimums are met.
Component | Minimum | Recommended |
---|---|---|
Node Count | 1 | 1 per workspace + additional nodes depending on expected user workloads |
CPU | 32 cores per workspace + additional cores depending on expected user workloads | 48 cores per workspace + additional cores depending on expected user workloads |
Memory | 128 GB per workspace + additional memory depending on the expected workloads | 256 GB per workspace + additional memory depending on the expected workloads |
Storage | Set up ECS/Longhorn with SSDs and the recommended cumulative 2600 GB of block storage. For production environments, it is strongly recommended to set up an external NFS environment with at least 1000 GB of NFS storage, plus additional block storage based on project file sizing. The total (not per node) storage needed only for Cloudera AI on Cloudera Embedded Container Service without disaster recovery (DRS) is 1300 Gi per workbench with external NFS; if the workbench uses internal NFS, the total minimum is 3300 Gi per workbench. With DRS and a single backup of the workbench, the total is 1300 Gi * 2 = 2600 Gi with external NFS, or 6600 Gi with internal NFS. See the sizing sketch after this table. |
Network Bandwidth | 1 GB/s to all nodes and the base cluster | 1 GB/s to all nodes and the base cluster |
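
To make the storage arithmetic concrete, the following is a minimal sizing sketch in Python. The per-workbench baselines (1300 Gi with external NFS, 3300 Gi with internal NFS) and the "one backup doubles the footprint" rule come from the Storage row above; the function name, its signature, and the example workbench count are illustrative assumptions, not part of the product.

```python
# Minimal storage-sizing sketch based on the figures in the table above.
# The per-workbench baselines and the backup multiplier come from the Storage
# row; the workbench count used in the example is a hypothetical value.

EXTERNAL_NFS_GI_PER_WORKBENCH = 1300  # Cloudera AI on ECS, external NFS, no DRS backup
INTERNAL_NFS_GI_PER_WORKBENCH = 3300  # Cloudera AI on ECS, internal NFS, no DRS backup

def total_block_storage_gi(workbenches: int, internal_nfs: bool = False,
                           drs_backups: int = 0) -> int:
    """Total (cluster-wide, not per node) block storage in Gi for Cloudera AI.

    Each DRS backup adds another full copy of the per-workbench footprint,
    so a single backup doubles the requirement (1300 Gi * 2 = 2600 Gi).
    """
    per_workbench = (INTERNAL_NFS_GI_PER_WORKBENCH if internal_nfs
                     else EXTERNAL_NFS_GI_PER_WORKBENCH)
    return workbenches * per_workbench * (1 + drs_backups)

# Example: two workbenches on external NFS, each keeping a single DRS backup.
print(total_block_storage_gi(2, internal_nfs=False, drs_backups=1))  # 5200 Gi
```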
Additional Resources for User Workloads:
Component | Minimum | Recommended |
---|---|---|
CPU | 1 Core per concurrent workload | 2–16 cores per concurrent workload (dependent on use cases) |
Memory | 2 GB per concurrent workload | 4–64 GB per concurrent workload (dependent on use cases) |
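
As a rough illustration of how the per-workspace baseline and the per-workload increments combine, the sketch below estimates total capacity for the recommended profile. The 48-core and 256 GB per-workspace figures and the 2-core / 4 GB per-workload low end come from the tables above; the function itself and the example workload count are hypothetical.

```python
# Illustrative capacity estimate combining the recommended workspace baseline
# (48 cores, 256 GB) with the per-workload increments from the table above.
# The function and the example values are assumptions for illustration only.

def recommended_capacity(workspaces: int, concurrent_workloads: int,
                         cores_per_workload: int = 2, gb_per_workload: int = 4):
    """Return (total cores, total memory in GB) for the recommended profile."""
    cores = workspaces * 48 + concurrent_workloads * cores_per_workload
    memory_gb = workspaces * 256 + concurrent_workloads * gb_per_workload
    return cores, memory_gb

# Example: one workspace with 20 concurrent workloads at the low end of the
# recommended per-workload range (2 cores, 4 GB each).
print(recommended_capacity(1, 20))  # (88, 336)
```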