Workload management
- 1: Deploy test workload
- 2: Add an external load balancer
- 2.1: RECOMMENDED: Kube-Vip for Service-type Load Balancer
- 2.1.1: Kube-Vip ARP Mode
- 2.1.2: Kube-Vip BGP Mode
- 2.2: Alternative: MetalLB Service-type Load Balancer
- 3: Add an ingress controller
- 4: Secure connectivity with CNI and Network Policy
1 - Deploy test workload
We’ve created a simple test application for you to verify your cluster is working properly. You can deploy it with the following command:
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
To see the new pod running in your cluster, type:
kubectl get pods -l app=hello-eks-a
Example output:
NAME READY STATUS RESTARTS AGE
hello-eks-a-745bfcd586-6zx6b 1/1 Running 0 22m
To check the logs of the container to make sure it started successfully, type:
kubectl logs -l app=hello-eks-a
There is also a default web page being served from the container. You can forward the deployment port to your local machine with
kubectl port-forward deploy/hello-eks-a 8000:80
Now you should be able to open your browser, or use curl against http://localhost:8000, to view the example application page.
curl localhost:8000
Example output:
⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢
Thank you for using
███████╗██╗ ██╗███████╗
██╔════╝██║ ██╔╝██╔════╝
█████╗ █████╔╝ ███████╗
██╔══╝ ██╔═██╗ ╚════██║
███████╗██║ ██╗███████║
╚══════╝╚═╝ ╚═╝╚══════╝
█████╗ ███╗ ██╗██╗ ██╗██╗ ██╗██╗ ██╗███████╗██████╗ ███████╗
██╔══██╗████╗ ██║╚██╗ ██╔╝██║ ██║██║ ██║██╔════╝██╔══██╗██╔════╝
███████║██╔██╗ ██║ ╚████╔╝ ██║ █╗ ██║███████║█████╗ ██████╔╝█████╗
██╔══██║██║╚██╗██║ ╚██╔╝ ██║███╗██║██╔══██║██╔══╝ ██╔══██╗██╔══╝
██║ ██║██║ ╚████║ ██║ ╚███╔███╔╝██║ ██║███████╗██║ ██║███████╗
╚═╝ ╚═╝╚═╝ ╚═══╝ ╚═╝ ╚══╝╚══╝ ╚═╝ ╚═╝╚══════╝╚═╝ ╚═╝╚══════╝
You have successfully deployed the hello-eks-a pod hello-eks-a-c5b9bc9d8-qp6bg
For more information check out
https://anywhere.eks.amazonaws.com
⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢
If you would like to expose your applications with an external load balancer or an ingress controller, you can follow the steps in Add an external load balancer.
2 - Add an external load balancer
A production-quality Kubernetes cluster requires planning and preparation for various networking features.
The purpose of this document is to walk you through getting set up with a recommended Kubernetes load balancer for EKS Anywhere. Load balancing is essential for maximizing availability and scalability; it distributes incoming network traffic efficiently among multiple backend services.
2.1 - RECOMMENDED: Kube-Vip for Service-type Load Balancer
We recommend using the Kube-Vip cloud controller to expose your services as type LoadBalancer. Detailed information about Kube-Vip can be found here.
There are two ways Kube-Vip can manage virtual IP addresses on your network. Please see the following guides for ARP or BGP mode depending on your on-prem networking preferences.
Setting up Kube-Vip for Service-type Load Balancer
Kube-Vip Service-type Load Balancer can be set up in either ARP mode or BGP mode.
2.1.1 - Kube-Vip ARP Mode
In ARP mode, kube-vip will perform leader election and assign the Virtual IP to the leader. This node will inherit the VIP and become the load-balancing leader within the cluster.
Setting up Kube-Vip for Service-type Load Balancer in ARP mode
1. Enable strict ARP in kube-proxy, as it is required for kube-vip:

kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system
2. Create a configMap to specify the IP range for the load balancer. You can use either a CIDR block or an IP range.

CIDR block:

CIDR=192.168.0.0/24 # Use your CIDR range here
kubectl create configmap --namespace kube-system kubevip --from-literal cidr-global=${CIDR}

IP range:

IP_START=192.168.0.0 # Use the starting IP in your range
IP_END=192.168.0.255 # Use the ending IP in your range
kubectl create configmap --namespace kube-system kubevip --from-literal range-global=${IP_START}-${IP_END}
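For reference, the CIDR variant above produces a ConfigMap roughly like this (with range-global as the data key instead if you used the IP range variant):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip
  namespace: kube-system
data:
  cidr-global: 192.168.0.0/24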
3. Deploy the kube-vip cloud provider:

kubectl apply -f https://kube-vip.io/manifests/controller.yaml
4. Create the ClusterRoles and RoleBindings for the kube-vip Daemonset:

kubectl apply -f https://kube-vip.io/manifests/rbac.yaml
5. Create the kube-vip Daemonset:

An example manifest is included at the end of this document, which you can use in place of this step.

alias kube-vip="docker run --network host --rm plndr/kube-vip:v0.3.5"
kube-vip manifest daemonset --services --inCluster --arp --interface eth0 | kubectl apply -f -
6. Deploy the Hello EKS Anywhere test application:

kubectl apply -f https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml
7. Expose the hello-eks-a deployment as a service of type LoadBalancer:

kubectl expose deployment hello-eks-a --port=80 --type=LoadBalancer --name=hello-eks-a-lb
8. Get the external IP of the service. It will be an address from the CIDR block or IP range you specified in step 2:

EXTERNAL_IP=$(kubectl get svc hello-eks-a-lb -o jsonpath='{.spec.loadBalancerIP}')
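If that field is empty in your environment, the assigned address should also appear in the standard service status field:

EXTERNAL_IP=$(kubectl get svc hello-eks-a-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')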
9. Verify the load balancer is working by curling the IP you got in step 8:

curl ${EXTERNAL_IP}
You should see something like this in the output
⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡
Thank you for using
███████╗██╗ ██╗███████╗
██╔════╝██║ ██╔╝██╔════╝
█████╗ █████╔╝ ███████╗
██╔══╝ ██╔═██╗ ╚════██║
███████╗██║ ██╗███████║
╚══════╝╚═╝ ╚═╝╚══════╝
█████╗ ███╗ ██╗██╗ ██╗██╗ ██╗██╗ ██╗███████╗██████╗ ███████╗
██╔══██╗████╗ ██║╚██╗ ██╔╝██║ ██║██║ ██║██╔════╝██╔══██╗██╔════╝
███████║██╔██╗ ██║ ╚████╔╝ ██║ █╗ ██║███████║█████╗ ██████╔╝█████╗
██╔══██║██║╚██╗██║ ╚██╔╝ ██║███╗██║██╔══██║██╔══╝ ██╔══██╗██╔══╝
██║ ██║██║ ╚████║ ██║ ╚███╔███╔╝██║ ██║███████╗██║ ██║███████╗
╚═╝ ╚═╝╚═╝ ╚═══╝ ╚═╝ ╚══╝╚══╝ ╚═╝ ╚═╝╚══════╝╚═╝ ╚═╝╚══════╝
You have successfully deployed the hello-eks-a pod hello-eks-a-c5b9bc9d8-fx2fr
For more information check out
https://anywhere.eks.amazonaws.com
⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡
Here is the example kube-vip manifest referenced in step 5. It is also available here.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  creationTimestamp: null
  name: kube-vip-ds
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: kube-vip-ds
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: kube-vip-ds
    spec:
      containers:
      - args:
        - manager
        env:
        - name: vip_arp
          value: "true"
        - name: vip_interface
          value: eth0
        - name: port
          value: "6443"
        - name: vip_cidr
          value: "32"
        - name: svc_enable
          value: "true"
        - name: vip_startleader
          value: "false"
        - name: vip_addpeerstolb
          value: "true"
        - name: vip_localpeer
          value: ip-172-20-40-207:172.20.40.207:10000
        - name: vip_address
        image: plndr/kube-vip:v0.3.5
        imagePullPolicy: Always
        name: kube-vip
        resources: {}
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
            - SYS_TIME
      hostNetwork: true
      serviceAccountName: kube-vip
  updateStrategy: {}
status:
  currentNumberScheduled: 0
  desiredNumberScheduled: 0
  numberMisscheduled: 0
  numberReady: 0
2.1.2 - Kube-Vip BGP Mode
In BGP mode, kube-vip will assign the Virtual IP to all running Pods. All nodes, therefore, will advertise the VIP address.
Prerequisites
- BGP-capable network switch connected to EKS-A cluster
- Vendor-specific BGP configuration on switch
The required BGP settings on network vendor equipment are described in the BGP Configuration on Network Switch Side section below.
Setting up Kube-Vip for Service-type Load Balancer in BGP mode
1. Create a configMap to specify the IP range for the load balancer. You can use either a CIDR block or an IP range.

CIDR block:

CIDR=192.168.0.0/24 # Use your CIDR range here
kubectl create configmap --namespace kube-system kubevip --from-literal cidr-global=${CIDR}

IP range:

IP_START=192.168.0.0 # Use the starting IP in your range
IP_END=192.168.0.255 # Use the ending IP in your range
kubectl create configmap --namespace kube-system kubevip --from-literal range-global=${IP_START}-${IP_END}
2. Deploy the kube-vip cloud provider:

kubectl apply -f https://kube-vip.io/manifests/controller.yaml
3. Create the ClusterRoles and RoleBindings for the kube-vip Daemonset:

kubectl apply -f https://kube-vip.io/manifests/rbac.yaml
4. Create the kube-vip Daemonset:

alias kube-vip="docker run --network host --rm plndr/kube-vip:latest"
kube-vip manifest daemonset \
--interface lo \
--localAS <AS#> \
--sourceIF <src interface> \
--services \
--inCluster \
--bgp \
--bgppeers <bgp-peer1>:<peerAS>::<bgp-multihop-true-false>,<bgp-peer2>:<peerAS>::<bgp-multihop-true-false> | kubectl apply -f -
Explanation of the options provided above to kube-vip for manifest generation:

--interface — This interface needs to be set to the loopback in order to suppress ARP responses from worker nodes that get the LoadBalancer VIP assigned
--localAS — Local Autonomous System ID
--sourceIF — Source interface on the worker node which will be used to communicate BGP with the switch
--services — Configure kube-vip for Service-type LoadBalancer (not control plane)
--inCluster — Defaults to looking inside the Pod for the token
--bgp — Enables BGP peering from kube-vip
--bgppeers — Comma-separated list of BGP peers in the format <address:AS:password:multihop>
Below is an example Daemonset creation command:

kube-vip manifest daemonset \
--interface $INTERFACE \
--localAS 65200 \
--sourceIF eth0 \
--services \
--inCluster \
--bgp \
--bgppeers 10.69.20.2:65000::false,10.69.20.3:65000::false
Below is the manifest generated with these example values:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  creationTimestamp: null
  name: kube-vip-ds
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: kube-vip-ds
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: kube-vip-ds
    spec:
      containers:
      - args:
        - manager
        env:
        - name: vip_arp
          value: "false"
        - name: vip_interface
          value: lo
        - name: port
          value: "6443"
        - name: vip_cidr
          value: "32"
        - name: svc_enable
          value: "true"
        - name: cp_enable
          value: "false"
        - name: vip_startleader
          value: "false"
        - name: vip_addpeerstolb
          value: "true"
        - name: vip_localpeer
          value: docker-desktop:192.168.65.3:10000
        - name: bgp_enable
          value: "true"
        - name: bgp_routerid
        - name: bgp_source_if
          value: eth0
        - name: bgp_as
          value: "65200"
        - name: bgp_peeraddress
        - name: bgp_peerpass
        - name: bgp_peeras
          value: "65000"
        - name: bgp_peers
          value: 10.69.20.2:65000::false,10.69.20.3:65000::false
        - name: bgp_routerinterface
          value: eth0
        - name: vip_address
        image: ghcr.io/kube-vip/kube-vip:v0.3.7
        imagePullPolicy: Always
        name: kube-vip
        resources: {}
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
            - SYS_TIME
      hostNetwork: true
      serviceAccountName: kube-vip
  updateStrategy: {}
status:
  currentNumberScheduled: 0
  desiredNumberScheduled: 0
  numberMisscheduled: 0
  numberReady: 0
5. Manually add the following to the manifest file, as shown in the example above:

- name: bgp_routerinterface
  value: eth0
6. Deploy the Hello EKS Anywhere test application:

kubectl apply -f https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml
7. Expose the hello-eks-a deployment as a service of type LoadBalancer:

kubectl expose deployment hello-eks-a --port=80 --type=LoadBalancer --name=hello-eks-a-lb
8. Get the external IP of the service. It will be an address from the CIDR block or IP range you specified in step 1:

EXTERNAL_IP=$(kubectl get svc hello-eks-a-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
9. Verify the load balancer is working by curling the IP you got in step 8:

curl ${EXTERNAL_IP}
You should see something like this in the output
⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢
Thank you for using
███████╗██╗ ██╗███████╗
██╔════╝██║ ██╔╝██╔════╝
█████╗ █████╔╝ ███████╗
██╔══╝ ██╔═██╗ ╚════██║
███████╗██║ ██╗███████║
╚══════╝╚═╝ ╚═╝╚══════╝
█████╗ ███╗ ██╗██╗ ██╗██╗ ██╗██╗ ██╗███████╗██████╗ ███████╗
██╔══██╗████╗ ██║╚██╗ ██╔╝██║ ██║██║ ██║██╔════╝██╔══██╗██╔════╝
███████║██╔██╗ ██║ ╚████╔╝ ██║ █╗ ██║███████║█████╗ ██████╔╝█████╗
██╔══██║██║╚██╗██║ ╚██╔╝ ██║███╗██║██╔══██║██╔══╝ ██╔══██╗██╔══╝
██║ ██║██║ ╚████║ ██║ ╚███╔███╔╝██║ ██║███████╗██║ ██║███████╗
╚═╝ ╚═╝╚═╝ ╚═══╝ ╚═╝ ╚══╝╚══╝ ╚═╝ ╚═╝╚══════╝╚═╝ ╚═╝╚══════╝
You have successfully deployed the hello-eks-a pod hello-eks-a-c5b9bc9d8-fx2fr
For more information check out
https://anywhere.eks.amazonaws.com
⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢
BGP Configuration on Network Switch Side
BGP configuration will vary depending upon network vendor equipment and local network environment. Listed below are the basic conceptual configuration steps for BGP operation. Included with each step is a sample configuration from a Cisco Switch (Cisco Nexus 9000) running in NX-OS mode. You will need to find similar steps in your network vendor equipment’s manual for BGP configuration on your specific switch.
1. Configure BGP local AS, router ID, and timers

router bgp 65000
  router-id 10.69.5.1
  timers bgp 15 45
  log-neighbor-changes
2. Configure BGP neighbors

BGP neighbors can be configured individually or as a subnet.

a. Individual BGP neighbors

Determine the IP addresses of each of the EKS-A nodes via the VMware console or DHCP server allocation. In the example below, the node IP addresses are 10.69.20.165, 10.69.20.167, and 10.69.20.170. Note that remote-as is the AS used as the bgp_as value in the generated example manifest above.

neighbor 10.69.20.165
  remote-as 65200
  address-family ipv4 unicast
    soft-reconfiguration inbound always
neighbor 10.69.20.167
  remote-as 65200
  address-family ipv4 unicast
    soft-reconfiguration inbound always
neighbor 10.69.20.170
  remote-as 65200
  address-family ipv4 unicast
    soft-reconfiguration inbound always

b. Subnet-based BGP neighbors

Determine the subnet address and netmask of the EKS-A nodes. In this example the EKS-A nodes are on the 10.69.20.0/24 subnet. Note that remote-as is the AS used as the bgp_as value in the generated example manifest above.

neighbor 10.69.20.0/24
  remote-as 65200
  address-family ipv4 unicast
    soft-reconfiguration inbound always
3. Verify BGP neighbors are established with each node

switch% show ip bgp summary
BGP summary information for VRF default, address family IPv4 Unicast
BGP router identifier 10.69.5.1, local AS number 65000
BGP table version is 181, IPv4 Unicast config peers 7, capable peers 7
32 network entries and 63 paths using 11528 bytes of memory
BGP attribute entries [16/2752], BGP AS path entries [6/48]
BGP community entries [0/0], BGP clusterlist entries [0/0]
3 received paths for inbound soft reconfiguration
3 identical, 0 modified, 0 filtered received paths using 0 bytes

Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
10.69.20.165    4 65200   34283   34276      181    0    0 5d20h    1
10.69.20.167    4 65200   34543   34531      181    0    0 5d20h    1
10.69.20.170    4 65200   34542   34530      181    0    0 5d20h    1
4. Verify that routes learned from the EKS-A cluster match the external IP address assigned by the kube-vip LoadBalancer configuration

In the example below, 10.35.10.13 is the external kube-vip LoadBalancer IP.

switch% show ip bgp neighbors 10.69.20.165 received-routes

Peer 10.69.20.165 routes for address family IPv4 Unicast:
BGP table version is 181, Local Router ID is 10.69.5.1
Status: s-suppressed, x-deleted, S-stale, d-dampened, h-history, *-valid, >-best
Path type: i-internal, e-external, c-confed, l-local, a-aggregate, r-redist, I-injected
Origin codes: i - IGP, e - EGP, ? - incomplete, | - multipath, & - backup, 2 - best2

   Network            Next Hop            Metric     LocPrf     Weight Path
*>e10.35.10.13/32     10.69.20.165                                   0 65200 i
2.2 - Alternative: MetalLB Service-type Load Balancer
The purpose of this document is to walk you through setting up the MetalLB Kubernetes load balancer for your cluster. This is suggested as an alternative if your networking requirements do not allow you to use Kube-Vip.
MetalLB is a native Kubernetes load-balancing solution for bare-metal Kubernetes clusters. Detailed information about MetalLB can be found here.
Prerequisites
You will need Helm installed on your system, as this is the easiest way to deploy MetalLB. Helm can be installed from here. MetalLB installation is described here.
Steps
1. Enable strict ARP in kube-proxy, as it is required for MetalLB:

kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system
2. Add the Helm repo for MetalLB:

helm repo add metallb https://metallb.github.io/metallb
3. Create an override file to specify the load balancer IP range.

<LB-IP-range> can be a CIDR block like 198.18.210.0/24 or an IP range like 198.18.210.0-198.18.210.10.

cat << 'EOF' > values.yaml
configInline:
  address-pools:
  - name: default
    protocol: layer2
    addresses:
    - <LB-IP-range>
EOF
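For example, with the 198.18.210.0-198.18.210.10 range mentioned above, the resulting values.yaml would contain:

configInline:
  address-pools:
  - name: default
    protocol: layer2
    addresses:
    - 198.18.210.0-198.18.210.10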
4. Install MetalLB on your cluster:

helm install metallb metallb/metallb -f values.yaml
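You can confirm the release installed before moving on:

helm status metallb
kubectl get pods   # the MetalLB controller and speaker pods should reach Running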
5. Deploy the Hello EKS Anywhere test application:

kubectl apply -f https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml
6. Expose the hello-eks-a deployment:

kubectl expose deployment hello-eks-a --port=80 --type=LoadBalancer --name=hello-eks-a-lb
7. Get the load balancer external IP:

EXTERNAL_IP=$(kubectl get svc hello-eks-a-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
8. Verify the load balancer is working by curling the external IP:

curl ${EXTERNAL_IP}
3 - Add an ingress controller
A production-quality Kubernetes cluster requires planning and preparation for various networking features.
The purpose of this document is to walk you through getting set up with a recommended Kubernetes ingress controller for EKS Anywhere. An ingress controller is essential in order to have routing rules that decide how external users access services running in a Kubernetes cluster. It enables efficient distribution of incoming network traffic among multiple backend services.
Current Recommendation: Emissary-ingress
We currently recommend using the Emissary-ingress Kubernetes ingress controller by Ambassador. Emissary-ingress allows you to route and secure traffic to your cluster with an open-source, Kubernetes-native API gateway. Detailed information about Emissary-ingress can be found here.
Setting up Emissary-ingress for Ingress Controller
1. Deploy the Hello EKS Anywhere test application:

kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
2. Set up a kube-vip service of type LoadBalancer in your cluster by following the instructions here. Alternatively, you can set up the MetalLB load balancer by following the instructions here.
3. Install the Ambassador CRDs, ClusterRoles, and RoleBindings:

kubectl apply -f "https://www.getambassador.io/yaml/ambassador/ambassador-crds.yaml"
kubectl apply -f "https://www.getambassador.io/yaml/ambassador/ambassador-rbac.yaml"
4. Create the Ambassador Service with type LoadBalancer:

kubectl apply -f - <<EOF
---
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - port: 80
    targetPort: 8080
  selector:
    service: ambassador
EOF
5. Create a Mapping on your cluster. This Mapping tells Emissary-ingress to route all traffic inbound to the /backend/ path to the hello-eks-a Service:

kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: hello-backend
spec:
  prefix: /backend/
  service: hello-eks-a
EOF
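You can confirm the Mapping was created by querying the CRD installed in step 3 (this assumes the getambassador.io CRDs registered successfully and use the plural name mappings):

kubectl get mappings.getambassador.io hello-backend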
6. Store the Emissary-ingress load balancer IP address in a local environment variable. You will use this variable to test accessing your service:

export EMISSARY_LB_ENDPOINT=$(kubectl get svc ambassador -o "go-template={{range .status.loadBalancer.ingress}}{{or .ip .hostname}}{{end}}")
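You can confirm the variable was populated before testing:

echo ${EMISSARY_LB_ENDPOINT}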
7. Test the configuration by accessing the service through the Emissary-ingress load balancer:

curl -Lk http://$EMISSARY_LB_ENDPOINT/backend/

NOTE: The URL base path needs to match what is specified in the prefix exactly, including the trailing '/'.
You should see something like this in the output

⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢
Thank you for using
███████╗██╗ ██╗███████╗
██╔════╝██║ ██╔╝██╔════╝
█████╗ █████╔╝ ███████╗
██╔══╝ ██╔═██╗ ╚════██║
███████╗██║ ██╗███████║
╚══════╝╚═╝ ╚═╝╚══════╝
█████╗ ███╗ ██╗██╗ ██╗██╗ ██╗██╗ ██╗███████╗██████╗ ███████╗
██╔══██╗████╗ ██║╚██╗ ██╔╝██║ ██║██║ ██║██╔════╝██╔══██╗██╔════╝
███████║██╔██╗ ██║ ╚████╔╝ ██║ █╗ ██║███████║█████╗ ██████╔╝█████╗
██╔══██║██║╚██╗██║ ╚██╔╝ ██║███╗██║██╔══██║██╔══╝ ██╔══██╗██╔══╝
██║ ██║██║ ╚████║ ██║ ╚███╔███╔╝██║ ██║███████╗██║ ██║███████╗
╚═╝ ╚═╝╚═╝ ╚═══╝ ╚═╝ ╚══╝╚══╝ ╚═╝ ╚═╝╚══════╝╚═╝ ╚═╝╚══════╝
You have successfully deployed the hello-eks-a pod hello-eks-a-c5b9bc9d8-fx2fr
For more information check out
https://anywhere.eks.amazonaws.com
⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢
4 - Secure connectivity with CNI and Network Policy
EKS Anywhere uses Cilium for pod networking and security.
Cilium is installed by default as a Kubernetes CNI plugin and so is already running in your EKS Anywhere cluster.
This section provides information about:
- Understanding Cilium components and requirements
- Validating your Cilium networking setup
- Using Cilium to secure workload connectivity using Kubernetes Network Policy
Cilium Components
The primary Cilium Agent runs as a DaemonSet on each Kubernetes node. Each cluster also includes a Cilium Operator Deployment to handle certain cluster-wide operations. For EKS Anywhere, Cilium is configured to use the Kubernetes API server as the identity store, so no etcd cluster connectivity is required.
In a properly working environment, each Kubernetes node should have a Cilium Agent pod (cilium-WXYZ) in “Running” and ready (1/1) state. By default there will be two Cilium Operator pods (cilium-operator-123456-WXYZ) in “Running” and ready (1/1) state on different Kubernetes nodes for high availability.
Run the following command to ensure that all Cilium-related pods are in a healthy state.
kubectl get pods -n kube-system | grep cilium
Example output for this command in a 3 node environment is:
kube-system cilium-fsjmd 1/1 Running 0 4m
kube-system cilium-nqpkv 1/1 Running 0 4m
kube-system cilium-operator-58ff67b8cd-jd7rf 1/1 Running 0 4m
kube-system cilium-operator-58ff67b8cd-kn6ss 1/1 Running 0 4m
kube-system cilium-zz4mt 1/1 Running 0 4m
Network Connectivity Requirements
To provide pod connectivity within an on-premises environment, the Cilium agent implements an overlay network using the GENEVE tunneling protocol. As a result, UDP port 6081 connectivity MUST be allowed by any firewall running between Kubernetes nodes running the Cilium agent.
Allowing ICMP Ping (type = 8, code = 0) as well as TCP port 4240 is also recommended in order for Cilium Agents to validate node-to-node connectivity as part of internal status reporting.
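As an illustration only, the following iptables rules would admit this traffic on a node. This is a sketch that assumes iptables manages the node firewall and that 10.0.0.0/24 is your node subnet; adapt it to your own firewall tooling:

# GENEVE overlay traffic between Cilium agents
iptables -A INPUT -p udp --dport 6081 -s 10.0.0.0/24 -j ACCEPT
# Cilium agent node-to-node health checks
iptables -A INPUT -p tcp --dport 4240 -s 10.0.0.0/24 -j ACCEPT
# ICMP echo requests used for connectivity validation
iptables -A INPUT -p icmp --icmp-type echo-request -s 10.0.0.0/24 -j ACCEPT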
Validating Connectivity
Cilium includes a connectivity check YAML that can be deployed into a test namespace in order to validate proper installation and connectivity within a Kubernetes cluster. If the connectivity check passes, all pods created by the YAML manifest will reach “Running” and ready (1/1) state. We recommend running this test only once you have multiple worker nodes in your environment to ensure you are validating cross-node connectivity.
It is important that this test is run in a dedicated namespace, with no existing network policy. For example:
kubectl create ns cilium-test
kubectl apply -n cilium-test -f https://docs.isovalent.com/v1.10/public/connectivity-check-eksa.yaml
Once all pods have started, simply checking the status of pods in this namespace will indicate whether the tests have passed:
kubectl get pods -n cilium-test
Successful test output will show all pods in a “Running” and ready (1/1) state:
NAME READY STATUS RESTARTS AGE
echo-a-d576c5f8b-zlfsk 1/1 Running 0 59s
echo-b-787dc99778-sxlcc 1/1 Running 0 59s
echo-b-host-675cd8cfff-qvvv8 1/1 Running 0 59s
host-to-b-multi-node-clusterip-6fd884bcf7-pvj5d 1/1 Running 0 58s
host-to-b-multi-node-headless-79f7df47b9-8mzbp 1/1 Running 0 58s
pod-to-a-57695cc7ff-6tqpv 1/1 Running 0 59s
pod-to-a-allowed-cnp-7b6d5ff99f-4rhrs 1/1 Running 0 59s
pod-to-a-denied-cnp-6887b57579-zbs2t 1/1 Running 0 59s
pod-to-b-intra-node-hostport-7d656d7bb9-6zjrl 1/1 Running 0 57s
pod-to-b-intra-node-nodeport-569d7c647-76gn5 1/1 Running 0 58s
pod-to-b-multi-node-clusterip-fdf45bbbc-8l4zz 1/1 Running 0 59s
pod-to-b-multi-node-headless-64b6cbdd49-9hcqg 1/1 Running 0 59s
pod-to-b-multi-node-hostport-57fc8854f5-9d8m8 1/1 Running 0 58s
pod-to-b-multi-node-nodeport-54446bdbb9-5xhfd 1/1 Running 0 58s
pod-to-external-1111-56548587dc-rmj9f 1/1 Running 0 59s
pod-to-external-fqdn-allow-google-cnp-5ff4986c89-z4h9j 1/1 Running 0 59s
Afterward, simply delete the namespace to clean up the connectivity test:
kubectl delete ns cilium-test
Kubernetes Network Policy
By default, all Kubernetes workloads within a cluster can talk to any other workloads in the cluster, as well as any workloads outside the cluster. To enable a stronger security posture, Cilium implements the Kubernetes Network Policy specification to provide identity-aware firewalling / segmentation of Kubernetes workloads.
Network policies are defined as Kubernetes YAML specifications that are applied to a particular namespace to describe which connections should be allowed to or from a given set of pods. These network policies are “identity-aware” in that they describe workloads within the cluster using Kubernetes metadata like namespace and labels, rather than by IP address.
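For illustration, here is a minimal policy sketch (the access=allowed label is hypothetical) that allows ingress to the hello-eks-a pods only from pods carrying that label in the same namespace, and denies all other ingress to them:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: hello-eks-a-allow-labeled
spec:
  podSelector:
    matchLabels:
      app: hello-eks-a
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: allowed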
Basic network policies are validated as part of the above Cilium connectivity check test.
For next steps on leveraging Network Policy, we encourage you to explore:
- A hands-on Network Policy Intro Tutorial
- The visual Network Policy Editor
- The #networkpolicy channel on Cilium Slack
- Other resources on networkpolicy.io
Additional Cilium Features
Many advanced features of Cilium are not yet enabled as part of EKS Anywhere, including: Hubble observability, DNS-aware and HTTP-Aware Network Policy, Multi-cluster Routing, Transparent Encryption, and Advanced Load-balancing.
Please contact the EKS Anywhere team if you are interested in leveraging these advanced features along with EKS Anywhere.