Create a Cluster
Introduction
This section describes the creation procedure and the configuration options of a Switch Cloud Kubernetes (SCK) cluster. Where it makes sense, we refer to the official project documentation, where you can explore each topic in more depth. You can create a cluster from scratch or from a predefined template. To start creating a cluster from scratch, use one of the following actions:
- Navigating to the Project Overview tab and selecting Cluster from the Create Resource dropdown button.
- Navigating to the Resources > Cluster tab and clicking Create Cluster.
To create a cluster from a predefined template you can do one of the following:
- Navigate to the Project Overview tab and select Cluster from Template from the Create Resource dropdown button.
- Navigate to the Resources > Cluster tab and click the Create Clusters From Template button.
- Navigate to the Cluster Templates tab and click the Create Cluster from Template button to the right of a relevant template row.
Note
SCK actually consists of several Kubernetes clusters with different purposes. In this guide, the term cluster always refers to what is called a user cluster (UC) in proper terminology — the clusters you, the user, create and manage in your project. In fact, the control plane of all user clusters is fully managed by SCK and runs as Pods on our so-called seed clusters, which are in turn managed by a master cluster! Please keep this in mind when consulting other documentation.
Provider
Select the cloud provider and the datacenter. Currently, all clusters are deployed on Switch Cloud Compute, which is powered by OpenStack.
Cluster
Enter a name for your cluster or generate one using the Generate name button located to the right of the Name field.
Network Configuration
SCK supports three CNI (Container Network Interface) plugin types:
- Cilium: Default and recommended option. Offers advanced networking features, including eBPF-based networking, network policies, and visibility tooling.
- Canal: A combination of the Flannel and Calico CNIs, using Flannel for Pod networking and Calico for network policy management. It works well in most environments but may not be sufficient for some large-scale use cases.
- None: No CNI plugin will be installed out of the box. The cluster will remain non-functional for Pod networking until a CNI is manually deployed by a cluster administrator. This option is recommended only for advanced users with specific customization needs.
Important
CNI type cannot be changed after cluster creation.
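Both Cilium and Canal enforce standard Kubernetes NetworkPolicy objects. As an illustrative sketch only (the namespace and label names are placeholders), a minimal policy that allows ingress to backend Pods exclusively from frontend Pods could look like this:

```yaml
# Hypothetical example: allow ingress to "backend" Pods only from "frontend" Pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo              # placeholder namespace
spec:
  podSelector:
    matchLabels:
      app: backend             # placeholder label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend    # placeholder label
```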
You can also choose between:
- IPv4: Use IPv4 addressing only.
- IPv4 and IPv6 (Dual Stack): Enables IPv6 support in addition to IPv4.
Warning
The assigned IPv4 addresses are private, but public IPv4 addresses can be added by assigning floating IPs to the worker nodes later. However, please be aware that the assigned IPv6 addresses are public by default.
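If you select the dual-stack option, workloads can request both address families through standard Kubernetes Service fields. A minimal sketch, with placeholder names, of a Service that prefers dual-stack addressing:

```yaml
# Hypothetical example: a Service requesting both IPv4 and IPv6 ClusterIPs where available.
apiVersion: v1
kind: Service
metadata:
  name: demo-service           # placeholder name
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: demo                  # placeholder label
  ports:
    - port: 80
      targetPort: 8080
```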
Advanced Network Configuration
Use this section to customize your cluster's networking settings. Defaults are suitable for most users, but advanced scenarios may require specific configuration.
- Proxy Mode: Configures the kube-proxy mode used to route Kubernetes Services traffic. We recommend ebpf for Cilium, or ipvs for Canal. In both cases iptables is available as a fallback mode for legacy compatibility.
- Expose Strategy: Specifies how the Kubernetes API server is exposed externally:
  - LoadBalancer: Uses a dedicated OpenStack Octavia load balancer to expose the API. Default and highly recommended option.
  - NodePort and Tunneling: Use an OpenStack Octavia load balancer that is shared between all customers to expose the API. Should be avoided. Please contact the Switch Cloud support team before using either of these options.
Info
Expose strategy affects how kubectl, CI/CD tools, and end-users connect to the cluster's API server.
- Allowed IP Ranges for API Server: Specify the CIDR blocks that are allowed to access the Kubernetes API server. If left empty, access is allowed from all IP addresses. This feature is available only for clusters with the LoadBalancer Expose Strategy. In addition to the desired IP ranges, several IP ranges that depend on both our and your setup need to be allowed for cluster-internal communication as well. Please contact our support before configuring this option.
Warning
For security reasons, we highly recommend restricting access to the Kubernetes API server to trusted IP ranges only.
- Allowed IP Ranges for NodePorts: Specify the CIDR blocks that are allowed to access the whole NodePort range on all your worker nodes from the internet. If left empty, access is blocked from all IP addresses. Only open access if you understand the security implications. Note that this setting has no practical effect if the cluster uses IPv4-only networking and the worker nodes are not assigned floating IPs.
- Pods CIDR: Specifies the IP range from which Pod IPs will be allocated.
- Services CIDR: Defines the IP range for internal Kubernetes services.
- Node CIDR Mask Size: Specifies the subnet mask size assigned per Node for Pod IPs. It must be larger than the Pods CIDR prefix length. For example, with a Pods CIDR of 172.25.0.0/16 and a mask size of 24, each worker node receives a /24 subnet with up to 256 Pod IPs.
- Node Local DNS Cache: If enabled, deploys NodeLocal DNSCache, a local DNS caching agent on each worker node. It reduces DNS lookup latency and improves reliability by serving DNS queries locally. Recommended for most setups.
- Ingress (Cilium only): Enables Cilium's native ingress controller.
Warning
Cilium Ingress is not compatible with cert-manager's HTTP01 implementation due to a known issue and might have other limitations.
- Edit CNI Values (Cilium only): Allows you to override the default Helm chart values for Cilium by providing custom YAML input. This is intended for advanced users who need to fine-tune their network configuration.
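For instance, when using Edit CNI Values you might enable Hubble observability by overriding the corresponding values of the Cilium Helm chart. The snippet below is only a sketch; always verify the keys against the values supported by the Cilium chart version deployed in your cluster:

```yaml
# Sketch of custom Cilium Helm values enabling Hubble observability.
# Verify these keys against the Cilium chart version used by your cluster.
hubble:
  enabled: true
  relay:
    enabled: true
  ui:
    enabled: true
```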
SSH Keys
Select SSH keys that will be injected into all worker nodes at the cluster creation to allow secure remote access.
Use the SSH Keys dropdown menu to choose from the list of public keys that have already been added to your project. You can also add a new SSH key using the Add SSH Key button. SSH keys can also be managed globally in the Access > SSH Keys tab of the left-hand menu.
You can also add SSH keys after the cluster creation from the Resources > Cluster tab. However, we strongly advise adding them during cluster creation, as this step is often forgotten later and you may no longer be able to add them when SSH access is needed to debug worker node issues.
Important
In SCK, customers are responsible for maintaining the operating system on worker nodes. Make sure SSH access is configured for this purpose: without it, it can be difficult to patch, secure, or manage your worker nodes at the OS level.
Specification
Control Plane Version: Select the desired Kubernetes version for the control plane from the Control Plane Version dropdown. We recommend sticking with the default, which is usually set to the latest available, unless you have a specific need to choose a different version.
Info
Please keep in mind that we regularly deprecate and remove the oldest minor version as it reaches its end of life.
Container Runtime: The container runtime is currently set to containerd by default and cannot be modified.
Admission Plugins
Use the Admission Plugins dropdown to enable optional admission control plugins that intercept and validate requests to the Kubernetes API server before they are persisted.
- Event Rate Limit: Enables rate limiting of event creation to avoid overwhelming the API server or log sinks with excessive event traffic.
- Pod Node Selector: Adds default node selectors to Pods at admission time based on the Namespace they are created in. This restricts which worker nodes Pods can be scheduled on, based on node labels. Useful for enforcing multi-tenancy and node placement policies across namespaces.
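To illustrate the Pod Node Selector plugin, the default node selector for a Namespace is typically supplied via the scheduler.alpha.kubernetes.io/node-selector annotation; Pods created in that Namespace are then restricted to nodes carrying the matching label. A hedged sketch, with placeholder names and labels:

```yaml
# Hypothetical example: Pods created in "team-a" are restricted to nodes labelled "pool=general".
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                 # placeholder namespace
  annotations:
    scheduler.alpha.kubernetes.io/node-selector: "pool=general"
```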
Additional Features
Below the Admission Plugins dropdown, you can enable additional optional components using checkboxes:
- Audit Logging: Logs all Kubernetes API server requests for audit and compliance purposes.
- Disable CSI Driver: Disables the default Container Storage Interface (CSI) driver, which provides dynamic volume provisioning. Do not use this option unless you plan to configure storage manually or use a custom CSI driver. Most users should leave this unchecked.
- Kubernetes Dashboard: Deploys the Kubernetes Dashboard for managing the cluster (enabled by default).
- User SSH Key Agent: Automatically adds or removes SSH keys on all worker nodes when SSH keys are added to or removed from the cluster. We strongly recommend enabling this option.
Labels and Annotations
Optionally, you can define metadata for your cluster using labels and annotations:
- Labels: Key-value pairs used for categorization, selection, and automation.
- Annotations: Additional metadata used by tooling or scripts, without impacting label-based selection.
Click the trash icon to remove a key-value pair if needed.
Settings
Instead of providing your personal credentials, we strongly suggest using the provided Provider Preset. This will use a service account to deploy the cluster into the Switch Cloud Compute project with which the Switch Cloud Kubernetes project is linked in the Switch Cloud Portal. The primary benefit, other than the convenience, is that the service account is not tied to a specific person.
- Enable Ingress Hostname: When enabled, this feature uses static DNS hostnames instead of IP addresses for all public IPs. This feature is required for a workaround which is not necessary in our setup. We suggest leaving it disabled, although there should be no harm in enabling it.
If you leave the Ingress Hostname Suffix field empty, the default suffix nip.io will be used for the static DNS hostnames mentioned above. For example, 203.0.113.10.nip.io resolves to 203.0.113.10. Any service that provides <ip>.<service_domain> to <ip> DNS resolution can be used.
Warning
Please avoid changing other settings unless you are confident in their purpose and impact.
Initial Nodes
In SCK, worker nodes are managed in groups called MachineDeployments, which control how they are created, scaled, and updated. Use this section to configure the initial MachineDeployment, which provisions the worker nodes during cluster creation.
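Under the hood, each group of worker nodes is represented by a MachineDeployment object inside the cluster. The trimmed sketch below only illustrates the general shape of such an object as you might see it with kubectl; the exact fields, especially the provider-specific parts, are managed for you by the dashboard, and the name and version shown here are placeholders:

```yaml
# Trimmed, illustrative sketch of a MachineDeployment as surfaced in a user cluster.
# Provider-specific configuration (flavor, image, etc.) is omitted here.
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineDeployment
metadata:
  name: my-cluster-worker      # placeholder name
  namespace: kube-system
spec:
  replicas: 3                  # number of worker nodes in this group
  template:
    spec:
      versions:
        kubelet: 1.30.0        # placeholder Kubernetes version
```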
Basic Settings
- Name: Optional name for the initial MachineDeployment. Use the Generate name button located to the right of the Name field, or leave it empty to generate a name automatically.
- Replicas: Defines the number of worker nodes to be created in the initial MachineDeployment.
You can choose the operating system used for worker nodes by selecting a relevant tile with the desired OS name.
- Upgrade system on first boot: If enabled, the OS package manager will update all packages during the first boot. This is recommended for ensuring a secure and up-to-date base system.
Operating System Profile: An Operating System Profile (OSP) is a collection of configuration files and scripts used to prepare virtual machines to run as worker nodes. We recommend sticking with the default profile, unless you have a specific reason to choose otherwise.
Cluster Autoscaling
- Enable Cluster Autoscaler Application: Enable this to deploy the Cluster Autoscaler Application, which automatically scales the number of worker nodes based on workload demand.
- Min Replicas: The minimum number of worker nodes for autoscaling. The Cluster Autoscaler will not scale the MachineDeployment below this number.
- Max Replicas: The maximum number of worker nodes for autoscaling. The Cluster Autoscaler will not scale the MachineDeployment above this number.
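For reference, the Cluster Autoscaler's Cluster API integration typically reads these bounds from annotations on the MachineDeployment; the dashboard manages them for you when you set Min and Max Replicas. A hedged sketch of what that fragment of the MachineDeployment metadata usually looks like (values are placeholders):

```yaml
# Sketch: autoscaling bounds expressed as MachineDeployment annotations.
# Set via the dashboard; shown here for reference only.
metadata:
  annotations:
    cluster.k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
```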
Node Customization
- Node Labels: Attach labels to all worker nodes in this MachineDeployment. These can be used for scheduling, topology, or grouping purposes.
- Node Annotations: Optional key-value metadata that can be used by cluster tools or automation. Not used for selection.
- Node Taints: Define taints that affect Pod scheduling. Only Pods with matching tolerations will be scheduled on tainted nodes. You must specify a Key, Value, and Effect.
- Machine Deployment Annotations: Metadata attached to the MachineDeployment resource itself (not to worker nodes). Useful for tooling integration.
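To illustrate how node labels and taints interact with scheduling, the sketch below shows a Pod that tolerates a hypothetical dedicated=gpu:NoSchedule taint and additionally selects nodes carrying a matching label. All names, labels, and the image are placeholders:

```yaml
# Hypothetical example: schedule only onto nodes labelled "dedicated: gpu"
# and tolerate the corresponding "dedicated=gpu:NoSchedule" taint.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload           # placeholder name
spec:
  nodeSelector:
    dedicated: gpu             # matches the node label
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "gpu"
      effect: "NoSchedule"
  containers:
    - name: app
      image: registry.example.org/app:latest   # placeholder image
```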
OpenStack Settings
These settings define how your worker nodes are provisioned on the Switch Cloud Compute platform (OpenStack).
- Flavor: Select the desired virtual machine type. This defines the size and performance of each worker node.
- Disk Size in GB: Enter the disk size allocated to each worker node. The minimum required by all OS images provided by Switch Cloud Compute is 20 GB.
Warning
Although the UI permits disk sizes smaller than 20 GB, provisioning will fail during worker node creation unless you are using a custom OS image that supports a lower minimum disk size.
- Image: Specify the name of an OS image that exists in the OpenStack project. This can be either a standard image provided by Switch Cloud Compute, or a custom image uploaded by a user. The image must be compatible with the selected operating system and fit within the resource limits of the chosen flavor. For guidance on custom images, see the Custom Image Templates section in the SCC documentation.
- Availability Zone: Typically left empty. Modify only with a specific need.
- Allocate Floating IP: If checked, a public IPv4 address will be assigned to each worker node. This is required for direct SSH access in an IPv4-only cluster.
Node Readiness Checks
Instance Ready Check Period: Defines how often (in seconds) the platform checks whether a worker node is ready after provisioning begins. A lower value means more frequent checks, which may speed up readiness detection.
Instance Ready Check Timeout: Sets the maximum time (in seconds) the platform will wait for a worker node to become ready. If this time elapses and the worker node is not ready, it will be replaced with a new one.
Provider Tags
- Provider Tags: Optional key-value metadata applied directly to the underlying worker node instances in OpenStack.
Applications
You can add third-party applications into your SCK cluster. If you enabled the Enable Cluster Autoscaler Application feature in the previous step, you will see the Cluster autoscaler app on this screen.
Important
These third-party applications are provided for your convenience only and are not covered by SCK support.
Summary
This screen presents a read-only overview of all cluster settings configured in the previous steps. You can use the Save Cluster Template button to store the current configuration as a reusable template. Click Create Cluster to start provisioning.