Local cluster setup

EKS Anywhere Docker provider deployments

EKS Anywhere supports a Docker provider for development and testing use cases only. This allows you to try EKS Anywhere on your local system before deploying to a supported provider.

To install the EKS Anywhere binaries and see system requirements, please follow the installation guide.
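
Before you start, you can confirm the required tools are available on your PATH. This is a minimal sanity check and assumes the Docker daemon is already running locally:

    eksctl anywhere version
    docker --version
    kubectl version --client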

Steps

  1. Generate a cluster config

    CLUSTER_NAME=dev-cluster
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider docker > $CLUSTER_NAME.yaml
    

    The command above creates a file named dev-cluster.yaml ($CLUSTER_NAME.yaml) in the path where it is executed, with the contents below. The configuration specification is divided into two sections:

    • Cluster
    • DockerDatacenterConfig
    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: dev-cluster
    spec:
      clusterNetwork:
        cniConfig:
          cilium: {}
        pods:
          cidrBlocks:
          - 192.168.0.0/16
        services:
          cidrBlocks:
          - 10.96.0.0/12
      controlPlaneConfiguration:
        count: 1
      datacenterRef:
        kind: DockerDatacenterConfig
        name: dev-cluster
      externalEtcdConfiguration:
        count: 1
      kubernetesVersion: "1.21"
      managementCluster:
        name: dev-cluster
      workerNodeGroupConfigurations:
      - count: 1
        name: md-0
    ---
    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: DockerDatacenterConfig
    metadata:
      name: dev-cluster
    spec: {}
    
    • Apart from the base configuration, you can add additional optional configuration to enable supported features. You can also customize the generated file before creating the cluster, as sketched below.
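
    As a minimal illustration, to get three worker nodes instead of one you could edit the workerNodeGroupConfigurations block of dev-cluster.yaml before running the create command (the rest of the file stays unchanged):

    workerNodeGroupConfigurations:
    - count: 3   # raised from the default of 1
      name: md-0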
  2. Create Cluster: Create your cluster either with or without curated packages:

    • Cluster creation without curated packages installation

      eksctl anywhere create cluster -f $CLUSTER_NAME.yaml
      

      Example command output

      Performing setup and validations
      ✅ validation succeeded {"validation": "docker Provider setup is valid"}
      Creating new bootstrap cluster
      Installing cluster-api providers on bootstrap cluster
      Provider specific setup
      Creating new workload cluster
      Installing networking on workload cluster
      Installing cluster-api providers on workload cluster
      Moving cluster management from bootstrap to workload cluster
      Installing EKS-A custom components (CRD and controller) on workload cluster
      Creating EKS-A CRDs instances on workload cluster
      Installing AddonManager and GitOps Toolkit on workload cluster
      GitOps field not specified, bootstrap flux skipped
      Deleting bootstrap cluster
      🎉 Cluster created!
      
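      Because the Docker provider runs each cluster node as a local container, you can inspect the machines behind the new cluster with ordinary Docker commands; the name filter below assumes the cluster name used above:

      docker ps --filter "name=dev-cluster"
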
    • Cluster creation with optional curated packages

    • It is optional to install curated packages as part of the cluster creation.
    • The eksctl anywhere version should be later than v0.9.0.
    • If including curated packages during cluster creation, please set the environment variable export CURATED_PACKAGES_SUPPORT=true (see the sketch after this list).
    • Post-creation installation and detailed package configurations can be found here.
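
    As a sketch, preparing your shell for the curated-packages flow might look like this (the version command just confirms you meet the v0.9.0 requirement):

      export CURATED_PACKAGES_SUPPORT=true
      eksctl anywhere version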
  • Discover curated-packages to install

    eksctl anywhere list packages --source registry --kube-version 1.21
    

    Example command output

    Package                 Version(s)                                       
    -------                 ----------                                       
    harbor                  2.5.0-4324383d8c5383bded5f7378efb98b4d50af827b
    
  • Generate a curated-packages config

    The example shows how to install the harbor package from the curated package list.

    eksctl anywhere generate package harbor --source registry --kube-version 1.21 > packages.yaml
    
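    The generated packages.yaml contains a Package resource for harbor. Exact fields vary by CLI version; a rough sketch of the shape, using the my-harbor name that appears in the creation output below (the apiVersion suffix and namespace here are assumptions):

    apiVersion: packages.eks.amazonaws.com/v1alpha1   # version suffix is an assumption
    kind: Package
    metadata:
      name: my-harbor
      namespace: eksa-packages   # typical packages namespace; may differ
    spec:
      packageName: harbor
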
  • Create a cluster

    # Create a cluster with curated packages installation
    eksctl anywhere create cluster -f $CLUSTER_NAME.yaml --install-packages packages.yaml
    

    Example command output

    Performing setup and validations
    ✅ validation succeeded {"validation": "docker Provider setup is valid"}
    Creating new bootstrap cluster
    Installing cluster-api providers on bootstrap cluster
    Provider specific setup
    Creating new workload cluster
    Installing networking on workload cluster
    Installing cluster-api providers on workload cluster
    Moving cluster management from bootstrap to workload cluster
    Installing EKS-A custom components (CRD and controller) on workload cluster
    Creating EKS-A CRDs instances on workload cluster
    Installing AddonManager and GitOps Toolkit on workload cluster
    GitOps field not specified, bootstrap flux skipped
    Deleting bootstrap cluster
    🎉 Cluster created!
    ----------------------------------------------------------------------------------------------------------------
    The EKS Anywhere package controller and the EKS Anywhere Curated Packages
    (referred to as “features”) are provided as “preview features” subject to the AWS Service Terms,
    (including Section 2 (Betas and Previews)) of the same. During the EKS Anywhere Curated Packages Public Preview,
    the AWS Service Terms are extended to provide customers access to these features free of charge.
    These features will be subject to a service charge and fee structure at ”General Availability“ of the features.
    ----------------------------------------------------------------------------------------------------------------
    Installing curated packages controller on workload cluster
    package.packages.eks.amazonaws.com/my-harbor created
    
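    Once your kubeconfig is set up (next step), the installed package can be queried like any other Kubernetes object, since it is backed by the Package CRD shown in the output above:

    kubectl get packages -A
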
  • Use the cluster

    Once the cluster is created, you can use it with the generated KUBECONFIG file in your local directory:

    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    kubectl get ns
    

    Example command output

    NAME                                STATUS   AGE
    capd-system                         Active   21m
    capi-kubeadm-bootstrap-system       Active   21m
    capi-kubeadm-control-plane-system   Active   21m
    capi-system                         Active   21m
    capi-webhook-system                 Active   21m
    cert-manager                        Active   22m
    default                             Active   23m
    eksa-system                         Active   20m
    kube-node-lease                     Active   23m
    kube-public                         Active   23m
    kube-system                         Active   23m
    
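    Alternatively, if you prefer not to export KUBECONFIG into your shell session, the same file can be passed per command:

    kubectl --kubeconfig ${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig get ns
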

    You can now use the cluster like you would any Kubernetes cluster. Deploy the test application with:

    kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
    

    Verify the test application in the deploy test application section.
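
    For a quick check from the command line, you can also wait for the rollout to finish; the deployment name hello-eks-a is an assumption based on the manifest name, so adjust it if yours differs:

    kubectl rollout status deployment/hello-eks-a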

  • Next steps:

    • See the Cluster management section for more information on common operational tasks like scaling and deleting the cluster.

    • See the Package management section for more information on post-creation curated packages installation.

    To verify that a cluster control plane is up and running, use the kubectl command to show that the control plane pods are all running.

    kubectl get po -A -l control-plane=controller-manager
    NAMESPACE                           NAME                                                             READY   STATUS    RESTARTS   AGE
    capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-57b99f579f-sd85g       2/2     Running   0          47m
    capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-79cdf98fb8-ll498   2/2     Running   0          47m
    capi-system                         capi-controller-manager-59f4547955-2ks8t                         2/2     Running   0          47m
    capi-webhook-system                 capi-controller-manager-bb4dc9878-2j8mg                          2/2     Running   0          47m
    capi-webhook-system                 capi-kubeadm-bootstrap-controller-manager-6b4cb6f656-qfppd       2/2     Running   0          47m
    capi-webhook-system                 capi-kubeadm-control-plane-controller-manager-bf7878ffc-rgsm8    2/2     Running   0          47m
    capi-webhook-system                 capv-controller-manager-5668dbcd5-v5szb                          2/2     Running   0          47m
    capv-system                         capv-controller-manager-584886b7bd-f66hs                         2/2     Running   0          47m
    

    You may also check the status of the cluster control plane resource directly. This can be especially useful to verify clusters with multiple control plane nodes after an upgrade.

    kubectl get kubeadmcontrolplanes.controlplane.cluster.x-k8s.io
    NAME                       INITIALIZED   API SERVER AVAILABLE   VERSION              REPLICAS   READY   UPDATED   UNAVAILABLE
    supportbundletestcluster   true          true                   v1.20.7-eks-1-20-6   1          1       1
    
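    You can also list the Cluster API machine objects that back the nodes; these are standard Cluster API resources rather than anything EKS Anywhere-specific:

    kubectl get machines -A
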

    To verify that the expected number of cluster worker nodes are up and running, use the kubectl command to show that nodes are Ready. Worker nodes are named using the cluster name followed by the worker node group name (example: my-cluster-md-0).

    kubectl get nodes
    NAME                                           STATUS   ROLES                  AGE    VERSION
    supportbundletestcluster-md-0-55bb5ccd-mrcf9   Ready    <none>                 4m   v1.20.7-eks-1-20-6
    supportbundletestcluster-md-0-55bb5ccd-zrh97   Ready    <none>                 4m   v1.20.7-eks-1-20-6
    supportbundletestcluster-mdrwf                 Ready    control-plane,master   5m   v1.20.7-eks-1-20-6
    

    To test a workload in your cluster, you can try deploying the hello-eks-anywhere test application, as shown below.
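
    This uses the same manifest shown in the earlier step:

    kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"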