Create and Deploy Managed K8s

Objective

This document provides instructions on how to create a managed K8s cluster and deploy it on a VoltStack site. A managed K8s cluster is similar in principle to regular K8s, and you can use tools such as kubectl to perform operations that are common to regular K8s. Volterra provides a mechanism to easily deploy applications using managed K8s across VoltStack sites forming DC clusters. For more information on deploying a VoltStack site, see Create VoltStack Site.

Using the instructions provided in this guide, you can create a managed K8s cluster, associate it with a VoltStack site, and deploy applications using its Kubeconfig.

Note: Managed K8s is also known as physical K8s.


Virtual K8s and Managed K8s

You can use both Volterra Virtual K8s (vK8s) and managed K8s for your applications. However, the major difference between the two is that managed K8s can be created only on VoltStack sites, while vK8s can be created on all types of Volterra sites, including Volterra Regional Edge (RE) sites. Also, managed K8s spans all namespaces of a VoltStack site, and you manage its K8s operations using a single kubeconfig, whereas vK8s is created per namespace per site. While you can use vK8s and managed K8s in the same site, operations on the same namespace using both is not supported. See Restrictions for more information.


Reference Architecture

The following image shows the reference architecture for CI/CD jobs for app deployments to production and development environments. The production and development environments are established in different VoltStack sites. Each site has CI/CD jobs and apps deployed in separate namespaces. The git-ops-prod and git-ops-dev namespaces have CI/CD jobs such as ArgoCD, Concourse CI, Harbor, etc. These are integrated using the in-cluster service account. Services such as the ArgoCD UI can be advertised on the site local network, and users can access them for monitoring the CD dashboard.

ci cd pk8s
Figure: Reference Deployment Using Managed K8s

The following image shows different types of service advertisements and communication paths for services deployed in different namespaces in the sites.

pk8s comms
Figure: Intra Namespace and Remote Communication for Managed K8s

Note: You can disable communication between services of different namespaces.


Prerequisites


Restrictions

The following restrictions apply:

  • Using vK8s and managed K8s in the same site is supported. However, if a namespace is already used locally by a managed K8s cluster, then any object creation in that namespace using vK8s is not supported for that site. Conversely, if a namespace is already used by vK8s, then operations on it using the local managed K8s cluster are not supported.
  • Managed K8s is supported only for VoltStack sites and not for other site types.
  • Managed K8s can be enabled only by applying the K8s cluster object before provisioning the VoltStack site. Enabling it by updating an existing VoltStack site is not supported.
  • Managed K8s cannot be disabled once it is enabled.
  • For managed K8s, Role and RoleBinding operations are supported via kubectl. However, ClusterRoleBinding, PodSecurityPolicy, and ClusterRole are not supported via kubectl; these can be configured only through VoltConsole.

Note: As BGP advertisement of VIPs is included by default in Volterra managed K8s, the NodePort service type is not required. Therefore, the NodePort service type is not supported for Volterra managed K8s.


Configuration

Enabling a managed K8s cluster on a VoltStack site requires you to first create the K8s cluster object and apply it during VoltStack site creation. You can also create a new managed K8s cluster as part of VoltStack site creation. This example shows creating the K8s cluster separately and attaching it to the VoltStack site during creation.

Create Managed K8s Cluster

Perform the following steps to create a managed K8s cluster:

Step 1: Log into VoltConsole and start K8s cluster object creation.

Navigate to Manage -> Site Management in the system namespace and select K8s Clusters in the page groups. Click Add K8s Cluster.

add pk8s
Figure: K8s Cluster Creation

Step 2: Configure metadata and access sections.
  • Enter a name in the metadata section. Optionally, set labels and add a description.
  • Go to the Access section and select the Enable Site Local API Access option for the Site Local Access field. This enables local access to the K8s cluster.
  • Enter a local domain name for the K8s cluster in the <sitename>.<localdomain> format. The local K8s API server becomes accessible via this domain name.
  • Optionally, select Custom K8s Port for the Port for K8s API Server field and enter a port value in the Custom K8s Port field. This example uses the default K8s port option.

pk8s access
Figure: Access Section Configuration

  • Select Enable VoltConsole API Access option for the VoltConsole Access field.

Note: You can download global kubeconfig for the managed K8s cluster only when you enable VoltConsole API access. Also, if you do not enable the API access, monitoring of the cluster is done via metrics.

Step 3: Configure security section.

The security configuration is enabled with default settings for pod security policies, K8s cluster roles, and K8s cluster role bindings. Optionally, you can enable custom settings for these fields. Perform the following steps:

Step 3.1: Configure custom pod security policies.

Select the Custom Pod Security Policies option for the POD Security Policies field. Click on the Pod Security Policy List field and add a policy from the list of displayed options, or create and attach a new policy. This example shows creating a new policy.

Note: The default pod security policy allows all of your workloads. If you configure a custom policy, everything is disabled and you must explicitly configure the pod security policy rules to allow workloads.

Create a new policy as per the following guidelines:

  • Click Create new pod security policy in the Pod Security Policy List field to open new policy form. Enter a name in the metadata section.
  • Click Configure under the Pod Security Policy Specification field and do the following:
Step 3.1.1: Optionally, configure the privileges and capabilities.

Configure the Privilege and Capabilities section as per the following guidelines:

  • Enable the Privileged, Allow Privilege Escalation, and Default Allow Privilege Escalation fields.
  • Select Custom Default Capabilities option for the Change Default Capabilities field, Allowed Add Capabilities option for the Allowed Add Capabilities field, and Drop Capabilities for the Drop from K8s Default Capabilities field.
  • For the custom default capabilities, allowed add capabilities, and drop capabilities fields, click on their respective Capability List fields and select the See Common Choices option to expand the choices. Select an option from the list. You can add more choices using the Add item option.
Step 3.1.2: Optionally, configure the volumes and mounts.

Configure the Volumes and Mounts section as per the following guidelines:

  • Click Add item under the Volumes, Allowed Flex Volumes, Allowed Host Paths, and Allowed Proc Mounts fields. Enter the values for those fields; you can add multiple entries using the Add item option for each of these fields.

Note: Leaving the Volumes field empty disables all volumes. For the rest of the fields, the default values are applied. For Host Path Prefix, you can select the Read Only checkbox to mount the volume as read-only.

  • Enable Read Only Root Filesystem so that containers run with read-only root file system.
Step 3.1.3: Optionally, configure host access and sysctl.

Configure the Host Access and Sysctl section as per the following guidelines:

  • Enable the Host Network, Host IPC, and Host PID fields to allow the use of host network, host IPC, and host PID in the pod spec.
  • Enter port ranges in the Host Ports Ranges field to expose those host ports.
Step 3.1.4: Optionally, configure security context.

Configure the Security Context section as per the following guidelines:

  • Select Run As User, Run As Group, Supplemental Groups Allowed, and FS Groups Allowed for the Select Runs As User, Select Runs As Group, Select Supplemental Groups, and Select FS Groups fields respectively.
  • For each of the above fields, enter the following configuration:

    • Enter ID values in the Starting ID and Ending ID fields. You can add more ranges using the Add item option.
    • Click on the Rules field and select the See Common Choices option to expand the choices. Select one of the MustRunAs, MayRunAs, or RunAsAny choices.

Click Apply and then click Continue to create and apply the pod security policy to the K8s cluster.

Note: You can add more pod security policies using the Add item option.
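The fields in Steps 3.1.1 through 3.1.4 map onto a standard K8s PodSecurityPolicy object. The following is a hypothetical sketch of such an object; all names and values are illustrative assumptions, not the console's output. Because managed K8s accepts pod security policies only through VoltConsole (see Restrictions), this YAML is for reference only and is not meant to be applied with kubectl.

```shell
# Illustrative sketch only: a PodSecurityPolicy roughly matching the
# console fields described above. Names and values are assumptions.
cat <<'EOF' > restrictive-psp.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example-psp                   # name entered in the metadata section
spec:
  privileged: false                   # Privileged toggle (Step 3.1.1)
  allowPrivilegeEscalation: false     # Allow Privilege Escalation toggle
  requiredDropCapabilities: ["ALL"]   # Drop from K8s Default Capabilities
  volumes: ["configMap", "secret", "emptyDir"]  # Step 3.1.2; an empty list disables volumes
  readOnlyRootFilesystem: true        # Read Only Root Filesystem toggle
  hostNetwork: false                  # Host Access fields (Step 3.1.3)
  hostIPC: false
  hostPID: false
  runAsUser:                          # Security Context (Step 3.1.4)
    rule: MustRunAs
    ranges:
    - min: 1000                       # Starting ID
      max: 65535                      # Ending ID
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
EOF
```

The generated file is a reference for comparing your console configuration against plain K8s semantics; the actual policy is created and attached in VoltConsole.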

Step 3.2: Configure K8s cluster role.

Select Custom K8s Cluster Roles option for the K8s Cluster Roles field and click on the Cluster Role List field. Select a role from the displayed list or click Create new cluster role to create and attach it. This example shows creating a new cluster role. Configure the cluster role object as per the following guidelines:

  • Enter a name in the metadata section.
  • Go to Cluster Role section and select Policy Rule List or Aggregate Rule for the Rule Type field.

    • For Policy Rule List option, select List of Resources or List of Non Resource URL(s) options.
    • For the List of Resources option, do the following:

      • Enter list of API groups in the API Groups field. You can add more than one entry using the Add item option.
      • Enter list of resource types in the Resource Types field. You can add more than one entry using the Add item option.
      • Enter list of resource instances in the Resource Instances field. You can add more than one entry using the Add item option.
      • Enter allowed list of operations in the Allowed Verbs field. You can add more than one entry using the Add item option. Alternatively, you can enter * to allow all operations on the resources.
    • For List of Non Resource URL(s) option, do the following:

      • Enter URLs that do not represent K8s resources in the Non Resource URL(s) field. You can add more than one entry using the Add item option.
      • Enter allowed list of operations in the Allowed Verbs field. You can add more than one entry using the Add item option. Alternatively, you can enter * to allow all operations on the resources.

Note: You can add more than one list of resources in case of Policy Rule List option.

  • For Aggregate Rule option, click on the Selector Expression field and set label expression by doing the following:

    • Select a key or type a custom key and click Assign Custom Key option.
    • Select an operator and select a value or type a custom value and click Assign Custom Value option.

Note: You can add more than one label expression for the aggregate rule. This aggregates all rules in the roles selected by the label expressions.

  • Click Continue to create and assign the K8s cluster role.

Note: You can add more cluster roles using the Add item option.
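The fields in Step 3.2 correspond to a standard K8s ClusterRole. The following is a hypothetical sketch with illustrative names and values; since managed K8s accepts ClusterRole objects only through VoltConsole (see Restrictions), it is for reference rather than for applying with kubectl.

```shell
# Illustrative sketch only: a ClusterRole matching the Policy Rule List
# fields described above. Names and values are assumptions.
cat <<'EOF' > example-cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-role
rules:
- apiGroups: ["apps"]              # API Groups field
  resources: ["deployments"]       # Resource Types field
  resourceNames: ["frontend"]      # Resource Instances field
  verbs: ["get", "list", "watch"]  # Allowed Verbs; "*" allows all operations
- nonResourceURLs: ["/healthz"]    # List of Non Resource URL(s) option
  verbs: ["get"]
EOF
# For the Aggregate Rule option, a ClusterRole instead carries an
# aggregationRule with clusterRoleSelectors (the label expressions),
# and its rules are composed from the roles those selectors match.
```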

Step 3.3: Configure K8s cluster role bindings.

Select K8s Cluster Role Bindings option for the K8s Cluster Role Bindings field and click on the Cluster Role Binding List field. Select a role binding from the displayed list or click Create new cluster role binding to create and attach it. This example shows creating a new cluster role binding. Configure the cluster role binding as per the following guidelines:

  • Enter a name in the metadata section.
  • Click on the K8s Cluster Role field and select the role you created in the previous step.
  • Go to Subjects section and select one of the following options for the Select Subject field:

    • Select User and enter a user in the User field.
    • Select Service Account. Enter a namespace and service account name in the Namespace and Name fields respectively.
    • Select Group and enter a group in the Group field.

Note: You can add more subjects using the Add item option.

  • Click Continue to create and assign the K8s cluster role binding.

Note: You can add more cluster role bindings using the Add item option.
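The fields in Step 3.3 correspond to a standard K8s ClusterRoleBinding. The following hypothetical sketch shows one subject of each supported kind; all names are illustrative assumptions, and (per Restrictions) the object itself is configured only through VoltConsole.

```shell
# Illustrative sketch only: a ClusterRoleBinding matching the fields
# described above. Names are assumptions.
cat <<'EOF' > example-crb.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: example-role                  # role selected in the K8s Cluster Role field
subjects:
- kind: User                          # User option
  apiGroup: rbac.authorization.k8s.io
  name: jane@example.com
- kind: ServiceAccount                # Service Account option
  namespace: test
  name: default
- kind: Group                         # Group option
  apiGroup: rbac.authorization.k8s.io
  name: dev-team
EOF
```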

Step 4: Complete creating the K8s cluster.

Click Save and Exit to complete creating the K8s cluster object.


Attach K8s Cluster to VoltStack Site

Attaching a K8s cluster is possible only at the time of VoltStack site creation. Perform the following steps:

Step 1: Log into VoltConsole and start creating VoltStack site.

Navigate to Manage -> Site Management and select VoltStack Sites. Click Add VoltStack Site.

Step 2: Attach K8s cluster.
  • Go to Advanced Configuration and enable Show Advanced Fields option.
  • Select Enable Site Local K8s API access for the Site Local K8s API access field.
  • Click on the Enable Site Local K8s API access field and select the K8s cluster created in the previous step.

Note: This example does not show all steps required for VoltStack site creation for brevity. For complete set of steps, see Create VoltStack Site.

Step 3: Complete creating VoltStack site.

Click Save and Exit to complete creating VoltStack site. Install nodes and complete registration for the VoltStack site. For more information, see Perform Registration chapter of the Create VoltStack Site document.

Step 4: Download the kubeconfig for the K8s cluster.
  • Navigate to Sites -> Site List. Click ... for your VoltStack site enabled with managed K8s and do one of the following:

    • Choose Download Local Kubeconfig to manage your cluster locally when the cluster is on an isolated network.
    • Choose Download Global Kubeconfig to manage your cluster remotely from anywhere.

Note: The Download Global Kubeconfig option is enabled only when you enable VoltConsole API access.

  • Save the kubeconfig to your local machine.

You can use this kubeconfig to perform operations on the local K8s cluster. This is similar to regular K8s operations using tools like kubectl.

Note: You may have to manage name resolution for your domain for K8s API access.

Step 5: Deploy applications to the managed K8s cluster.
  • Prepare a deployment manifest for your application and deploy using the kubeconfig downloaded in the previous step.
kubectl apply -f k8s-app-manifest.yaml --kubeconfig k8s-kubecfg.yaml
  • Verify deployment status.
kubectl get pods --kubeconfig k8s-kubecfg.yaml
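The manifest referenced in the commands above could look like the following minimal sketch. The application name, namespace, and image are illustrative assumptions, not values from this guide.

```shell
# Hypothetical minimal deployment manifest for the kubectl apply command
# shown above. All names and the image are illustrative.
cat <<'EOF' > k8s-app-manifest.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: nginx:1.21        # illustrative image
        ports:
        - containerPort: 80
EOF
# kubectl apply -f k8s-app-manifest.yaml --kubeconfig k8s-kubecfg.yaml
```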

Note: In case you are using the local kubeconfig to manage the cluster, ensure that you resolve the domain name of the cluster to the IP address of the cluster. You can obtain the domain name from the kubeconfig.
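One way to find the name to resolve is to read the server entry in the downloaded kubeconfig. The sketch below assumes the usual https://<host>:<port> shape for that entry; the URL shown is illustrative, so substitute the value from your own kubeconfig, and use your cluster's actual IP address in the hosts entry.

```shell
# Hedged sketch: derive the API server host name from the kubeconfig's
# server: line. The URL below is an illustrative placeholder.
API_SERVER="https://mysite.example.internal:6443"
API_HOST=$(echo "$API_SERVER" | sed -e 's|https://||' -e 's|:.*||')
echo "$API_HOST"    # the name that must resolve to the cluster IP
# If you do not manage DNS for this domain, map it locally, for example:
# echo "<cluster-IP> $API_HOST" | sudo tee -a /etc/hosts
```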


Example: CI/CD Using In-Cluster Service Account

This example shows how to set up CI/CD jobs for your app deployments using an in-cluster service account to access the local K8s API of managed K8s clusters on VoltStack sites.

Perform the following to set up the GitLab runner for your apps on VoltStack sites using managed K8s:

Step 1: Start creating a K8s cluster object.
  • Navigate to Manage -> Site Management in the system namespace and select K8s Clusters in the page groups. Click Add K8s Cluster. Enter a name in the metadata section.
  • Go to the Access section and select the Enable Site Local API Access option for the Site Local Access field.
  • Enter local domain name for the K8s cluster in the <sitename>.<localdomain> format.
  • Select Enable VoltConsole API Access option for the VoltConsole Access field.

cluster meta
Figure: Enable Local K8s and VoltConsole Access

Step 2: Add role and role binding for your service account.

First create a role with policy rules granting full permissions to all resources. Then create a role binding that binds this role to your service account.

Step 2.1: Create K8s cluster role.

Select Custom K8s Cluster Roles option for the K8s Cluster Roles field and click on the Cluster Role List field. Click Create new cluster role to create and attach a role. Configure the cluster role object as per the following guidelines:

  • Enter a name for your cluster role object in the metadata section.
  • Set policy rules in the cluster role sections allowing access to all resources as shown in the following image:

role rules
Figure: K8s Cluster Role Rules

  • Click Continue.
Step 2.2: Create role binding.

Create a role binding and attach your role to it for the service account, specified in the system:serviceaccount:$RUNNER_NAMESPACE:default format. This example uses the test namespace.
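The subject string in that format can be composed as follows; a trivial sketch where RUNNER_NAMESPACE is the namespace your runner uses (test in this example):

```shell
# Compose the service account subject string for the role binding.
RUNNER_NAMESPACE=test
SUBJECT="system:serviceaccount:${RUNNER_NAMESPACE}:default"
echo "$SUBJECT"    # prints system:serviceaccount:test:default
```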

  • Select K8s Cluster Role Bindings option for the K8s Cluster Role Bindings field and click on the Cluster Role Binding List field. Click Create new cluster role binding to create and attach it.
  • Enter a name in the metadata section.
  • Click on the K8s Cluster Role field and select the role you created in the previous step.
  • Go to Subjects section and select Service Account. Enter a namespace and service account name in the Namespace and Name fields respectively. This example sets test namespace and system:serviceaccount:test:default as the service account name.

role binding
Figure: K8s Cluster Role Binding with Service Account as Subject

  • Click Continue to create and assign the K8s cluster role binding.
Step 3: Complete cluster creation.

psp crb
Figure: K8s Cluster with Role and Role Binding

Verify that the K8s cluster role and role binding are applied to the cluster configuration and click Save and Exit.

Step 4: Create a VoltStack site attaching the cluster created in the previous step.
  • Go to Manage -> Site Management -> VoltStack Sites. Click Add VoltStack Site to start creating VoltStack Site.
  • Configure VoltStack site sections till Storage Configuration section as per the guidelines provided in the Create VoltStack Site guide.
  • Go to Advanced Configuration and enable the Show Advanced Fields option. Select Enable Site Local K8s API access for the Site Local K8s API access field.
  • Click on the Enable Site Local K8s API access field and select the cluster you created in the previous step from the list of clusters displayed.

vstacksite cluster
Figure: VoltStack Site Enabled with Site Local K8s Access

  • Click Save and Exit.
Step 5: Register your VoltStack site and download kubeconfig.
  • Deploy a site matching the name and hardware device you defined in the VoltStack site. Go to Manage -> Site Management -> Registrations and approve registration. Check that your site shows up in the Sites -> Site List view.

sitelist
Figure: VoltStack Site in Site List

  • Click ... -> Kubeconfig to download the kubeconfig for your K8s cluster.
  • Ensure that the domain name you specified in K8s cluster is resolved.

Note: For example, you can add an entry in the /etc/hosts file for your domain with the VIP of the K8s cluster object. You can obtain the FQDN and IP address from the kubeconfig file and VoltConsole (node IP address from your VoltStack site Nodes view) respectively. However, it is recommended that you manage DNS resolution for your domains.

Step 6: Deploy a GitLab runner onto your K8s cluster.
  • Download the default values file of the GitLab runner Helm chart.
curl https://gitlab.com/gitlab-org/charts/gitlab-runner/-/raw/master/values.yaml > values.yaml
  • Enter your GitLab URL and runner registration token to the values.yaml file.
echo "gitlabUrl: https://gitlab.com/" >> values.yaml

echo "runnerRegistrationToken: foobar" >> values.yaml

Note: Replace foobar with your registration token.

  • Set the KUBECONFIG environment variable to the downloaded Kubeconfig file.
export KUBECONFIG=<PK8S-Kubeconfig>
  • Deploy GitLab runners onto your K8s cluster using the kubeconfig and the values.yaml file. The commands depend on the Helm version.

For Helm 2:

helm install --namespace <PK8_NAMESPACE> --name gitlab-runner -f values.yaml gitlab/gitlab-runner

For Helm 3:

helm install --namespace <PK8_NAMESPACE> gitlab-runner -f values.yaml gitlab/gitlab-runner

Note: You can create a namespace in your K8s cluster using kubectl.
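The deployment steps above can be collected into a small helper script. This is a hypothetical sketch assuming Helm 3 and an illustrative namespace name "gitlab"; review and adapt it before running it against your cluster.

```shell
# Hypothetical helper script for the steps above. Assumes Helm 3; the
# "gitlab" namespace name is an illustrative choice.
cat <<'EOF' > deploy-runner.sh
#!/bin/sh
set -e
export KUBECONFIG="$1"            # path to the downloaded kubeconfig
kubectl create namespace gitlab   # namespaces can be created with kubectl
helm repo add gitlab https://charts.gitlab.io
helm repo update
helm install --namespace gitlab gitlab-runner -f values.yaml gitlab/gitlab-runner
EOF
chmod +x deploy-runner.sh
# Usage: ./deploy-runner.sh <PK8S-Kubeconfig>
```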

Step 7: Verify that the runners are operational.
  • Check that the pods are started in VoltConsole. Navigate to Sites -> Site List and click ... -> Monitor K8s Cluster for your VoltStack site. Select Pods tab to check that the runner pods are started.

cluster pods
Figure: Runner Pods in K8s Cluster

  • Go to your GitLab CI/CD page and verify that the same runners are created there. Go to your project in GitLab and navigate to Settings -> CI/CD. Click Expand in the Runners section. Check that your runners appear under Available Specific Runners.

runners
Figure: GitLab Runners

Note: The names of the runners are the same as the runner pod names in VoltConsole.

