Creating an On-Premise Cluster
Prerequisites
Node Requirements
- If you downloaded a single-architecture installation package from Download Installation Package, ensure your node machines have the same architecture as the package. Otherwise, nodes won't start due to missing architecture-specific images.
- Verify that your node operating system and kernel are supported. See Supported OS and Kernels for details.
- Perform availability checks on node machines. For specific check items, refer to Node Preprocessing > Node Checks.
- If node machine IPs cannot be directly accessed via SSH, provide a SOCKS5 proxy for the nodes. The global cluster will access the nodes through this proxy service. A quick way to verify proxy reachability is sketched after this list.
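If you want to confirm that the proxy path works before registering nodes, a minimal check from any machine that can reach the proxy may look like the following. The proxy address, user, and node IP are placeholders; adjust them for your environment.

```bash
# Hedged sketch: verify a node is reachable over SSH through the
# SOCKS5 proxy. Requires OpenBSD netcat (nc) for the -X/-x options.
# proxy.example.com:1080 and 192.168.1.10 are placeholder values.
ssh -o ProxyCommand='nc -X 5 -x proxy.example.com:1080 %h %p' \
    root@192.168.1.10 'echo ok'
```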
Load Balancing
For production environments, a load balancer is required for cluster control plane nodes to ensure high availability.
You can provide your own hardware load balancer or enable Self-built VIP, which provides software load balancing using haproxy + keepalived.
We recommend using a hardware load balancer because:
- Better Performance: A hardware load balancer typically delivers higher throughput and lower latency than software load balancing.
- Lower Complexity: If you're unfamiliar with keepalived, misconfigurations could make the cluster unavailable, leading to lengthy troubleshooting and seriously affecting cluster reliability.
When using your own hardware load balancer, you can use the load balancer's VIP as the IP Address / Domain parameter. If you have a domain name that resolves to the load balancer's VIP, you can use that domain as the IP Address / Domain parameter.
Note:
- The load balancer must correctly forward traffic to ports 6443, 11780, and 11781 on all control plane nodes in the cluster. A quick connectivity check is sketched after this note.
- If your cluster has only one control plane node and you use that node's IP as the IP Address / Domain parameter, the cluster cannot be scaled from a single node to a highly available multi-node setup later. Therefore, we recommend providing a load balancer even for single-node clusters.
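To sanity-check the forwarding rules, you can probe each required port through the load balancer from a machine on the node network. The VIP below is a placeholder for your load balancer address or domain.

```bash
# Probe the required control plane ports through the load balancer.
# 203.0.113.50 stands in for your load balancer VIP or domain.
for port in 6443 11780 11781; do
  nc -vz 203.0.113.50 "$port"
done
```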
When enabling Self-built VIP, you need to prepare:
- An available VRID (a way to spot VRIDs already in use is sketched after this list)
- A host network that supports the VRRP protocol
- All control plane nodes and the VIP must be on the same subnet, and the VIP must be different from any node IP.
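Because VRRP advertisements are multicast on the local segment and carry the sender's VRID, listening for them is one way to find a free VRID. This is a sketch; eth0 is a placeholder for the interface on the control plane subnet, and the command needs root privileges.

```bash
# Listen for VRRP advertisements (IP protocol 112, multicast group
# 224.0.0.18) to see which VRIDs are already taken on this segment.
# eth0 is a placeholder interface name.
tcpdump -i eth0 -nn 'host 224.0.0.18 and ip proto 112'
```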
Connecting global Cluster and Workload Cluster
The platform requires mutual access between the global cluster and workload clusters. If they're not on the same network, you need to:
- Provide External Access for the workload cluster so that the global cluster can reach it. The network must allow global to access ports 6443, 11780, and 11781 on all control plane nodes.
- Add an additional address to global that the workload cluster can access. When creating the workload cluster, add this address to the cluster's annotations with the key `cpaas.io/platform-url` and the value set to the public access address of global. An example is sketched after this list.
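For illustration, applying the annotation to an existing cluster resource might look like the following. The resource kind, cluster name, and URL are placeholders (the platform normally sets this annotation when you fill in the field during creation); verify the exact resource for your installation.

```bash
# Hypothetical sketch: record the public address of global on the
# workload cluster's annotations. "clusters", the cluster name, and
# the URL are placeholder values.
kubectl annotate clusters my-workload-cluster \
  cpaas.io/platform-url='https://203.0.113.10' --overwrite
```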
Image Registry
Cluster images support Platform Built-in, Private Repository, and Public Repository options.
- Platform Built-in: Uses the image registry provided by the global cluster. If the cluster cannot access global, see Add External Address for Built-in Registry.
- Private Repository: Uses your own image registry. For details on pushing the required images to your registry, contact technical support. A quick reachability check is sketched after this list.
- Public Repository: Uses the platform's public image registry. Before using it, complete Updating Public Repository Credentials.
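Before creating the cluster, you may want to confirm that a private registry is reachable and that its credentials are accepted. A minimal probe of the Docker Registry HTTP API v2, with placeholder URL and credentials:

```bash
# A 200 response from the v2 endpoint means the credentials were
# accepted. URL, user, and password are placeholders.
curl -fsS -u admin:changeme https://registry.example.com/v2/ \
  && echo "registry reachable"
```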
Container Networking
If you plan to use Kube-OVN's Underlay for your cluster, refer to Preparing Kube-OVN Underlay Physical Network.
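For orientation, an underlay setup in Kube-OVN is typically expressed as a ProviderNetwork bound to a physical NIC, a Vlan, and a Subnet, roughly as sketched below. All names, the interface, VLAN ID, and CIDR are placeholders; follow Preparing Kube-OVN Underlay Physical Network for the authoritative steps.

```bash
# Sketch of Kube-OVN underlay objects (placeholder values throughout).
kubectl apply -f - <<'EOF'
apiVersion: kubeovn.io/v1
kind: ProviderNetwork
metadata:
  name: net1
spec:
  defaultInterface: eth1        # physical NIC carrying underlay traffic
---
apiVersion: kubeovn.io/v1
kind: Vlan
metadata:
  name: vlan100
spec:
  provider: net1
  id: 100                       # VLAN ID on the physical network
---
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: underlay-subnet
spec:
  protocol: IPv4
  cidrBlock: 172.17.0.0/16      # must match the physical segment
  gateway: 172.17.0.1
  vlan: vlan100
EOF
```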
Creation Procedure
- Enter the Administrator view, and click Clusters/Clusters in the left navigation bar.
- Click Create Cluster.
- Configure the following sections according to the instructions below: Basic Info, Container Network, Node Settings, and Extended Parameters.
Basic Info
Container Network
Kube-OVN: an enterprise-grade cloud-native Kubernetes container network orchestration system developed by Alauda. It brings mature networking capabilities from the OpenStack domain into Kubernetes, supporting cross-cloud network management, interconnection with traditional network architectures and infrastructure, and edge cluster deployment scenarios, while greatly enhancing Kubernetes container network security, management efficiency, and performance.
Node Settings
Node Addition Parameters
Extended Parameters
Note:
- Apart from required configurations, we do not recommend setting extended parameters: incorrect settings may make the cluster unavailable, and extended parameters cannot be modified after the cluster is created.
- If an entered Key duplicates a default parameter Key, it will override the default configuration.
Procedure
- Click Extended Parameters to expand the extended parameter configuration area, where you can optionally set extended parameters for the cluster.
- Click Create. You'll return to the cluster list page where the cluster will be in the Creating state.
Post-Creation Steps
Viewing Creation Progress
On the cluster list page, you can view the list of created clusters. For clusters in the Creating state, you can check the execution progress.
Procedure
- Click the View Execution Progress icon to the right of the cluster status.
- In the execution progress dialog that appears, view the cluster's execution progress (status.conditions).
Tip: When a condition type is in progress, or failed with a reason, hover your cursor over the reason (shown in blue text) to view detailed information about it (status.conditions.reason). A command-line alternative is sketched below.
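If you prefer the command line, the same information can usually be read from the cluster resource's status. The resource kind and cluster name below are placeholders; verify them for your installation.

```bash
# Read creation progress (status.conditions) directly. "clusters" and
# the cluster name are placeholder values.
kubectl get clusters my-workload-cluster \
  -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'
```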
Associating with Projects
After the cluster is created, you can add it to projects in the project management view.