Contents

CKA Notes [7,8,9]: Security, Storage, Networking

Section 7 Security

TLS Basics

Ensure that communication between the user and the server is encrypted.

Symmetric Encryption: the same key is used for both encryption and decryption. The key must be exchanged between sender and receiver, so the risk of a hacker decrypting the data is higher.

Asymmetric Encryption:

  • Private Key: used to decrypt
  • Public Key (the course's "public lock" mnemonic): used to encrypt
    ssh-keygen
    id_rsa  id_rsa.pub
    

PKI - Public Key Infrastructure
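
A minimal sketch of the PKI flow with openssl (file names such as ca.key and admin.key are illustrative, not from the labs):

openssl genrsa -out ca.key 2048
openssl req -new -x509 -key ca.key -subj "/CN=KUBERNETES-CA" -out ca.crt
openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -subj "/CN=kube-admin" -out admin.csr
# sign the user's CSR with the CA key
openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out admin.crt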

Solution Certificates API [Lab]

pwd
ls
cat akshay.csr
cat akshay.csr | base64 
# add -w 0 to print the encoding as a single line
cat akshay.csr | base64 -w 0
# create certificate signing request
cat > akshay.yaml
vi akshay.csr
vi akshay.yaml

kubectl create -f akshay.yaml
  certificatesigningrequest.certificates.k8s.io/akshay created
kubectl get csr
kubectl certificate approve akshay
kubectl get csr agent-smith -o yaml
kubectl certificate deny agent-smith
kubectl get csr
kubectl delete csr agent-smith

Create certificate signing request

  • akshay.yaml
    apiVersion: certificates.k8s.io/v1
    kind: CertificateSigningRequest
    metadata:
      name: akshay
    spec:
      request: # paste the base64-encoded CSR as a single line
      signerName: kubernetes.io/kube-apiserver-client
      # expirationSeconds: 86400 # one day
      usages:
      - client auth
    

Kubeconfig

kubectl get pods
    --server my-kube-playground:6443
    --client-key admin.key
    --client-certificate admin.crt
    --certificate-authority ca.crt
  • Move above information into a kubeconfig file

  • $HOME/.kube/config

  • format:

    • Clusters
      • Development
      • Production
      • Google
      • MyKubePlayground
    • Contexts
      • Admin@Production
      • Dev@Google
      • MyKubeAdmin@MyKubePlayground
    • Users
      • Admin
      • Dev User
      • Prod User
      • MyKubeAdmin
  • example:

    apiVersion: v1
    kind: Config
    
    current-context: dev-user@google
    
    clusters:
    - name: my-kube-playground
      cluster:
        certificate-authority: ca.crt
        server: https://my-kube-playground:6443
    - name: development
    - name: production
    - name: google
    
    contexts:
    - name: my-kube-admin@my-kube-playground
      context:
        cluster: my-kube-playground
        user: my-kube-admin
        namespace: finance
    - name: dev-user@google
    - name: prod-user@production
    
    users:
    - name: my-kube-admin
      user:
        client-certificate: admin.crt
        client-key: admin.key
    - name: admin
    - name: dev-user
    - name: prod-user
    

    a context ties a cluster to a user; current-context selects which context kubectl uses

  • commands:

    kubectl config view
    kubectl config view --kubeconfig=my-kube-config
    # switch context
    kubectl config use-context prod-user@production
    kubectl config -h
    
  • a namespace can be specified per context

  • use certificate-authority-data (embedding the content) as an alternative to certificate-authority (specifying a path)

    • cat ca.crt | base64
    • the content to paste must be base64-encoded
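
  • a hedged sketch of the embedded form (cluster name and server reused from the example above):

    clusters:
    - name: my-kube-playground
      cluster:
        certificate-authority-data: <output of cat ca.crt | base64 -w 0>
        server: https://my-kube-playground:6443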

Solution Kubeconfig

echo $HOME
ls .kube/
ls .kube/config
kubectl config view
kubectl get nodes

API Groups

curl https://kube-master:6443/version     # version
curl https://kube-master:6443/api/v1/pods # api
  • Multiple groups
    • /metrics
    • /healthz
    • /version
    • /api core group
      • /api/v1/{sub-group}, examples of sub-groups: namespaces, pods, rc, events, endpoints, nodes, bindings, PV, PVC, configmaps, secrets, services
    • /apis named group
      • API Groups: apps, extensions, networking.k8s.io, storage.k8s.io, authentication.k8s.io, certificates.k8s.io
        • /apis/apps/{version}/{resources}:
          • /deployments
            • list of actions associated with it, such as list, get, create, delete, update, watch (verb)
          • /replicasets
          • /statefulsets
        • /apis/extensions/{version}/{resources}:
          • /networkpolicies
        • /apis/networking.k8s.io/{version}/{resources}:
        • /apis/storage.k8s.io/{version}/{resources}:
        • /apis/authentication.k8s.io/{version}/{resources}:
        • /apis/certificates.k8s.io/{version}/{resources}:
          • /certificatesigningrequests
    • /logs
# to list all the available paths
curl https://localhost:6443 -k

curl https://localhost:6443/apis -k | grep "name"

curl https://localhost:6443 -k \
    --key admin.key \
    --cert admin.crt \
    --cacert ca.crt

Launch kubectl proxy on 127.0.0.1:8001

  • Kube proxy
    • to enable connectivity between pods and services across different nodes in the cluster
  • Kubectl proxy
    • An HTTP proxy service created by kubectl that reuses the kubeconfig credentials
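
A quick sketch of using it (8001 is the default port):

kubectl proxy
# in another terminal; the proxy forwards requests with the kubeconfig credentials
curl http://localhost:8001/version
curl http://localhost:8001/api/v1/pods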

Authorization

Authorization mechanism:

  1. Node
    • Node authorizer
  2. ABAC (attribute-based access control)
    • can view/can create/can delete
      {
          "kind": "Policy",
          "spec": {
              "user": "dev-user",
              "namespace": "*",
              "resource": "pods",
              "apiGroup": "*"
          }
      }
  3. RBAC (role-based access control)
    • Instead of directly associating a user or group with a set of permissions, we define a role with the set of permissions developers need, then associate all dev-users with that role.
    • A more standard approach to managing access within a K8S cluster
  4. Webhook
    • Outsource all mechanism
    • Open Policy Agent
  5. AlwaysAllow
  6. AlwaysDeny
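
The active modes are configured with the --authorization-mode flag on kube-apiserver (comma-separated; each module is tried in order). A sketch of where it lives in a kubeadm cluster:

# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC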

RBAC

Role:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "get", "create", "update", "delete"]
- apiGroups: [""]
  resources: ["ConfigMap"]
  verbs: ["create"]

RoleBinding:

  • links a user object to a role
  • kubectl create -f devuser-developer-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: devuser-developer-binding
subjects:
- kind: User
  name: dev-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io

View RBAC

kubectl get roles
kubectl get rolebinding
kubectl describe rolebinding devuser-developer-binding

Check Access

  • use the kubectl auth can-i command to check whether you can perform certain actions
    kubectl auth can-i create deployments
    kubectl auth can-i delete nodes
    
  • if you are an administrator, you can impersonate:
     kubectl auth can-i create deployments --as dev-user
     yes
     kubectl auth can-i create pods --as dev-user --namespace test
     no
    
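  • kubectl auth can-i also supports --list, which shows everything a user is permitted to do:

     kubectl auth can-i --list --as dev-user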

Solution Role based access controls

commands

cat /etc/kubernetes/manifests/kube-apiserver.yaml
ps -aux | grep authorization

k get roles
k get roles -A 
k get roles -A --no-headers
# wordcount
k get roles -A --no-headers | wc -l

k describe role kube-proxy -n kube-system

k get rolebindings -n kube-system
k describe rolebindings kube-proxy -n kube-system

k config view
k get pods --as dev-user 
k create role --help
k create role developer \
    --verb=list,create,delete \
    --resource=pods
k describe role developer

k create rolebinding --help
k create rolebinding dev-user-binding \
    --role=developer \
    --user=dev-user
k describe rolebinding dev-user-binding

k --as dev-user get pod dark-blue-app -n blue
pod "dark-blue-app" is forbidden
k get roles -n blue
k get rolebindings -n blue
k describe role developer -n blue

k edit role developer -n blue
# set resourceNames to dark-blue-app
k --as dev-user get pod dark-blue-app -n blue

k --as dev-user create deployment nginx --image=nginx -n blue
k edit role developer -n blue
# add another rule: apiGroups: ["apps"], resources: ["deployments"]

Cluster Roles and cluster rolebindings

namespaced vs. cluster-scoped (a kubectl api-resources sketch follows the lists below)

  • namespaced
    • if no namespace is specified, the default namespace is used
    • resources:
      • pods
      • replicasets
      • jobs
      • deployments
      • services
      • secrets
      • roles
      • rolebindings
      • configmaps
      • PVC
  • cluster scoped
    • resources:
      • nodes
      • PV (persistent volumes)
      • clusterroles
      • clusterrolebindings
      • certificatesigningrequests
      • namespaces
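
A quick way to check the split on a live cluster (kubectl api-resources also appears in the lab below):

kubectl api-resources --namespaced=true    # namespaced resources
kubectl api-resources --namespaced=false   # cluster-scoped resources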

clusterRole -> clusterRoleBinding

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-administrator
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list", "get", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-role-binding
subjects:
- kind: User
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-administrator
  apiGroup: rbac.authorization.k8s.io

Solution cluster roles

commands used

k get clusterroles
k get clusterroles --no-headers | wc -l  # count lines

k get clusterrolebindings --no-headers | wc -l 
k get clusterrolebindings | grep cluster-admin
k describe clusterrolebindings cluster-admin

k describe clusterrole cluster-admin
# *.* stands for any and all resources

k get nodes --as michelle
k create clusterrole --help
k create clusterrole michelle-role \
    --verb=get,list,watch \
    --resource=nodes
    
k create clusterrolebinding --help
k create clusterrolebinding michelle-role-binding \
    --clusterrole=michelle-role \
    --user=michelle 
    
k describe clusterrole michelle-role
k describe clusterrolebinding michelle-role-binding

kubectl api-resources
# lists all the resources as well as their short names

k create clusterrole storage-admin \
    --resource=persistentvolumes,storageclasses \
    --verb=list,create,get,watch
k describe clusterrole storage-admin

k get clusterrole storage-admin -o yaml
k create clusterrolebinding michelle-storage-admin \
    --user=michelle \
    --clusterrole=storage-admin
    
k describe clusterrolebinding michelle-storage-admin
k --as michelle get storageclass

Service Accounts

  • Two types of accounts: user account, service account

    • user account: human
    • service account: used by applications to interact with K8s, such as Prometheus and Jenkins
  • When a service account is created, a service account token is created automatically (behavior of clusters before v1.24; newer clusters issue tokens on demand, e.g. kubectl create token <sa-name>)

k create serviceaccount dashboard-sa
k get serviceaccount
k describe serviceaccount dashboard-sa
...
Tokens:    dashboard-sa-token-kbbdm
  • For every namespace in kubernetes, a service account named default is automatically created.
  • Whenever a pod is created, the default service account and its token are automatically mounted to that pod as a volume mount.
kubectl exec -it my-kubernetes-dashboard -- ls /var/run/secrets/kubernetes.io/serviceaccount

kubectl exec -it my-kubernetes-dashboard -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
  • If you’d like to use a different service account such as the one we just created, modify the pod definition file to include a service account field:
    apiVersion: v1
    kind: Pod
    metadata:
      name: my-kubernetes-dashboard
    spec:
      containers:
        - name: my-kubernetes-dashboard
          image: my-kubernetes-dashboard
      serviceAccountName: dashboard-sa
    
  • You cannot modify the service account of an existing pod. You must delete and recreate the pod.
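  • For pods managed by a Deployment, editing the pod template is enough; the Deployment rolls out new pods automatically. A sketch (the deployment name web-dashboard is illustrative):

    kubectl set serviceaccount deployment/web-dashboard dashboard-sa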

Solution Security

commands

kubectl create secret
kubectl get deploy
kubectl describe deploy web
k edit deploy web
# modify image and save
k describe deploy web
k get pods

k create secret docker-registry -h
k create secret docker-registry private-reg-cred \
    --docker-server=myprivateregistry.com:5000 \
    --docker-username=dock_user \
    --docker-password=dock_password \
    --docker-email=dock_user@myprivateregistry.com

k get po

Docker Security

Docker implements Linux capabilities; see /usr/include/linux/capability.h for the full list.

docker run --privileged ubuntu          # grant all capabilities
docker run --cap-add MAC_ADMIN ubuntu   # add a specific capability
docker run --cap-drop KILL ubuntu       # drop a specific capability

Security Context

  • In K8S, containers are encapsulated in pods.
  • If you configure a setting on both the container and the pod, the container-level setting overrides the pod-level setting.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  # pod level
  securityContext:
    runAsUser: 1000
  containers:
  - name: ubuntu
    image: ubuntu
    command: ["sleep", "3600"]
    # container level
    securityContext:
      runAsUser: 1000
      capabilities:
        add: ["MAC_ADMIN"]

Solution

command

whoami

k get pod
k exec ubuntu-sleeper -- whoami
kubectl get pod ubuntu-sleeper -o yaml > ubuntu-sleeper.yaml
vi ubuntu-sleeper.yaml
kubectl delete pod ubuntu-sleeper --force
k apply -f ubuntu-sleeper.yaml
k get pod

Network Policies

  • Traffic
    • ingress: incoming traffic, e.g. from users
    • egress: outgoing requests, e.g. to a database server
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy

spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          name: api-pod
    ports:
    - protocol: TCP
      port: 3306

Note:

  • Solutions that support network policies:
    • Kube-router
    • Calico
    • Romana
    • Weave-net
  • Solutions that do not support network policies:
    • Flannel

Developing network policies

  • Pod selector: to select pods by labels
  • namespace selector: to select namespaces by labels
  • ipBlock selector: to select ip ranges
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
spec: 
  podSelector:
    matchLabels:
      role: db 
  policyTypes:
  - Ingress
  
  ingress:
  - from:
    # Traffic coming from a matching pod/namespace or from certain IPs is allowed through.
    # Within the first rule below there are two selectors, and traffic must match both.
    # Adding a dash (-) before namespaceSelector would make it a separate rule, giving three rules in total.
    - podSelector:
          matchLabels:
            name: api-pod
      namespaceSelector:
          matchLabels:
            name: prod
    - ipBlock:
          cidr: 192.168.5.10/32 
    ports:
    - protocol: TCP
      port: 3306
      
  egress:
  - to:
    - ipBlock:
          cidr: 192.168.5.10/32
    ports: 
    - protocol: TCP
      port: 80

Solution network policies

commands

alias k=kubectl
k get pods
k get service
k get networkpolicies
k get netpol
k describe netpol payroll-policy
vi internalpolicy.yaml

internalpolicy.yaml

  • Policy name: internal-policy
  • Policy Type: Egress
  • Egress Allow: payroll
  • Payroll Port: 8080
  • Egress Allow: mysql
  • MySQL Port: 3306
    # reconstructed to match the spec above; pod labels (internal/payroll/mysql) follow the lab
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: internal-policy
    spec:
      podSelector:
        matchLabels:
          name: internal
      policyTypes:
      - Egress
      egress:
      - to:
        - podSelector:
            matchLabels:
              name: payroll
        ports:
        - protocol: TCP
          port: 8080
      - to:
        - podSelector:
            matchLabels:
              name: mysql
        ports:
        - protocol: TCP
          port: 3306

Section 8 Storage

This chapter covers storage topics such as persistent volumes, persistent volume claims, configuring apps with persistent storage, access modes for volumes, Kubernetes storage objects, etc.

Two concepts of storages in Docker: Storage driver, Volume driver

Storage in Docker

  • File system

    • /var/lib/docker
      • aufs
      • containers
      • image
      • volumes
  • Layered architecture

    • When Docker builds an image, it builds it as a layered architecture
    • Dockerfile example
      # layer 1. base ubuntu layer
      FROM Ubuntu
      # layer 2. changes in apt packages
      RUN apt-get update && apt-get -y install python
      # layer 3. changes in pip packages
      RUN pip install flask flask-mysql
      # layer 4. source code
      COPY . /opt/source-code
      # layer 5. update entrypoint
      ENTRYPOINT FLASK_APP=/opt/source-code/app.py flask run
      
    • docker build -t wycwyz/custom-app .
    • If the first few lines (same base OS and dependencies) are unchanged, Docker reuses the cached layers, which shortens builds and saves disk space (see the docker history sketch at the end of this section)
    • Layers 1-5 above are read-only (image layers)
    • Layer 6 (the container layer) is read-write
  • Copy-on-Write mechanism

    • If you connect to a container and modify its files, a copy is first made from the image layer into the container layer; the original image is never modified unless the image is rebuilt with docker build
    • When the container is deleted, its temporary container layer is deleted along with it
  • Volumes

    • preserve data created by the container
    • /var/lib/docker/volumes/data_volume
      docker run -v data_volume:/var/lib/mysql mysql
      docker run -v data_volume2:/var/lib/mysql mysql
      docker run -v /data/mysql:/var/lib/mysql mysql
      
    • Two types of mounting:
      1. Volume mounting
        • mounts a volume from the volumes directory
      2. Bind mounting
        • mounts a directory from any location on the Docker host
    • -v is the old flag; newer versions use the more explicit --mount (preferred)
      docker run \
        --mount type=bind,source=/data/mysql,target=/var/lib/mysql mysql
      
  • Storage Driver:

    • responsibilities:
      • maintain the layered architecture
      • create the writable layer
      • move files across layers to implement copy-on-write
    • common storage drivers:
      • AUFS
      • ZFS
      • BTRFS
      • Device Mapper
      • Overlay
      • Overlay2
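
  • To inspect the layers of a built image and see which steps were served from cache (see the build example above), docker history can be used:

      docker history wycwyz/custom-app
      # one row per layer: the instruction that created it and its size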

Storage Drivers

  • Storage drivers do not handle volumes; volumes are handled by volume drivers

    • common storage drivers: AUFS, ZFS, BTRFS, DEVICE MAPPER, OVERLAY
  • The default volume driver is local, and there are other options

    • common volume drivers: Local, Azure File Storage, Convoy, DigitalOcean Block Storage, Flocker, gce-docker, GlusterFS, NetApp, RexRay, Portworx, VMware vSphere Storage
      • RexRay: works with AWS EBS, EMC, Google Persistent Disk, OpenStack Cinder
  • Specify the volume driver when running the container to persist data in the AWS cloud; when the container exits, the data remains stored in the cloud

    docker run -it \
        --name mysql \
        --volume-driver rexray/ebs \
        --mount src=ebs-vol,target=/var/lib/mysql \
        mysql
    

Container Storage Interface (CSI)

Previously, the only container runtime engine in K8s was Docker, and all the code for interacting with Docker was baked into the Kubernetes source.

As other container runtimes (e.g., rkt, cri-o) emerged, the same embedded approach no longer fit, which led to the Container Runtime Interface (CRI).

  • Container Networking Interface (CNI)
    • flannel, cilium, weaveworks
  • Container Storage Interface (CSI)
    • portworx, Amazon EBS, DELL EMC, GlusterFS
    • you can build your own driver to make your storage work with K8s

Volumes

  • Docker containers are meant to be transient in nature.
  • To persist data, we attach a volume to containers when they are created.
  • Pods created in K8s are likewise transient; data generated by a pod is stored in a volume.
apiVersion: v1
kind: Pod
metadata:
  name: random-number-generator
spec:
  containers:
  - image: alpine
    name: alpine
    command: ["/bin/sh", "-c"]
    args: ["shut -i 0-100 -n 1 >> /opt/number.out"]
    volumeMounts:
    - mountPath: /opt
      name: data-volume
    
  volumes:
  - name: data-volume
    hostPath:
      path: /data
      type: Directory
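
A minimal way to exercise the hostPath volume above, assuming the definition is saved as pod.yaml:

kubectl create -f pod.yaml
kubectl exec random-number-generator -- cat /opt/number.out
# on the node, the same file appears under the hostPath directory
cat /data/number.out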

Persistent Volumes

  • In the volumes discussed earlier, the volume is defined inside each pod definition file. With many PODS, every pod needs the same configuration, and any change means editing all the pod definition files. For centralized management, use Persistent Volumes:
    [Pod1]  [Pod2]  [Pod3]  [Pod4]
      |       |       |       |
    [PVC]   [PVC]   [PVC]   [PVC]
      |       |       |       |
    [ [PV]   [PV]    [PV]    [PV]  [] [] [] ]
             Persistent Volumes (PVs)
  • Base Template pv-definition.yaml
    • three accessModes: ReadOnlyMany, ReadWriteOnce, ReadWriteMany
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv-vol1
      spec:
        accessModes:
            - ReadWriteOnce  
        capacity:
            storage: 1Gi
        hostPath:
            path: /tmp/data
      
    • kubectl create -f pv-definition.yaml
    • kubectl get persistentvolume

Persistent volume claims

  • Persistent Volumes and Persistent Volume Claims are two separate objects in the K8s namespace. An administrator creates a pool of persistent volumes, and users create persistent volume claims to use that storage
  • Once a persistent volume claim is created, K8S binds a PV to the PVC based on the request and the properties set on the volume
  • Each PVC binds to a single PV
  • Labels and selectors can also be used as binding criteria (see the sketch at the end of this section)
  • Example of pvc-definition.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes:
         - ReadWriteOnce
    
      resources:
         requests:
           storage: 500Mi
    
  • Example of pv-definition.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-vol1
    spec:
      accessModes:
          - ReadWriteOnce  
      capacity:
          storage: 1Gi
      awsElasticBlockStore:
        volumeID: <volume-id>
        fsType: ext4
    
  • Deleting PVCs
    kubectl delete persistentvolumeclaims myclaim

    • After deletion, the PV defaults to Retain: the persistent volume remains until the administrator deletes it manually
      persistentVolumeReclaimPolicy: Retain

    • It can instead be configured so the persistent volume is deleted automatically when the PVC is deleted, freeing up storage
      persistentVolumeReclaimPolicy: Delete

    • A third option, Recycle, scrubs the data on the volume before making it available to other claims
      persistentVolumeReclaimPolicy: Recycle
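
  • A hedged sketch of binding by label and selector (the label name: my-pv is illustrative):

      # in pv-definition.yaml
      metadata:
        name: pv-vol1
        labels:
          name: my-pv

      # in pvc-definition.yaml
      spec:
        selector:
          matchLabels:
            name: my-pv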
      

Solution PV and PVC

commands

k get pods
k exec webapp -- cat /log/app.log   #exec <pod-name> 
k get pods
k describe pod webapp 
ls /var/log/webapp
# set up volumes
k edit pod webapp
  • webapp-pod-definition.yaml
    ...
    spec:
      containers:
      - env:
        ...
        volumeMounts:
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: kube-api-access-lmntb
          readOnly: true
        # mount log
        - mountPath: /log
          name: log-volume
      ...
      volumes:
      - name: log-volume # add log volume
        hostPath:
           path: /var/log/webapp
      - name: kube-api-access-lmntb
        ...
    

commands

cd ~
vi pv-definition.yaml
k create -f pv.yaml
k get pv
  • pv-definition.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-log
    spec:
      capacity:
        storage: 100Mi ✨Ref
      # volumeMode: Filesystem  (not needed; leave commented out)
      accessModes:
        - ReadWriteMany 🎉Ref
      persistentVolumeReclaimPolicy: Retain
      hostPath:
        path: /pv/log
    

commands

vi pvc.yaml
k create -f pvc.yaml
k get pvc #check status
  • pvc.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: claim-log-1 🎆Ref
    spec:
      accessModes:
        - ReadWriteMany 🎉Ref
      resources:
        requests:
          storage: 100Mi ✨Ref
    

command

# To change the access mode on the PVC to ReadWriteMany
vi pvc.yaml
# delete and replace
k replace --force -f pvc.yaml
k get pv
k get pvc
# replace the hostPath volume with the PVC
ls /pv/log
k edit pod webapp
k replace --force -f /tmp/kubectl-edit-56385166.yaml
ls /pv/log 
k get pv pv-log
k delete pvc claim-log-1
# stuck in Terminating state while the pod still uses the PVC
k describe pvc claim-log-1
k delete pod webapp
k get pvc
  • webapp-pod-definition.yaml
      ...
      volumes:
      - persistentVolumeClaim:
            claimName: claim-log-1 🎆Ref
    

Storage Class

  • Static Provisioning
    • To create a PV from Google Cloud, the disk must first be created manually on Google Cloud before the persistent volume can be created manually
      gcloud beta compute disks create pd-disk \
          --size 1GB --region us-east1
      
  • Dynamic Provisioning
    • Create the following template, sc-definition.yaml
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
         name: google-storage 🎁Ref
      provisioner: kubernetes.io/gce-pd
      
    • The persistent volume definition can be dropped; then update pvc.yaml and pod.yaml
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: 
      
    • The PV is created automatically when the StorageClass provisions storage, so pv.yaml is no longer needed
    • Updated settings in pvc-definition.yaml
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: claim-log-1 🎆Ref
      spec:
        accessModes:
           - ReadWriteOnce
        storageClassName: google-storage 🎁Ref
        # so the PVC knows which StorageClass to use
        resources:
          requests:
            storage: 500Mi
      
    • The next time a PVC is created, its associated StorageClass uses the defined provisioner to provision a new disk of the required size, then creates a persistent volume and binds the PVC to that volume
    • The StorageClass therefore still creates a PV; it is just no longer created manually
    • Other provisioners exist, such as AWS EBS (Elastic Block Store), AzureFile, AzureDisk, CephFS, Portworx, ScaleIO, etc.
    • Each provisioner accepts additional parameters, such as the type of disk to provision, the replication type, and so on. GCE as an example:
      # standard class
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
         name: silver
      provisioner: kubernetes.io/gce-pd
      
      parameters:
        type: pd-standard
      ---
      # gold class: with ssd
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
         name: gold
      provisioner: kubernetes.io/gce-pd
      
      parameters:
        type: pd-ssd
        replication-type: none
      ---
      # platinum class: with replication
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
         name: platinum
      provisioner: kubernetes.io/gce-pd
      
      parameters:
        type: pd-ssd
        replication-type: regional-pd
      

Solution Storage Class

command

k get storageclass
k get sc
# check volumebindingmode
k get pv
# RWO (accessmodes)
k get pvc
vi pvc.yaml
# paste the pvc.yaml below
k create -f pvc.yaml
k get pvc
k describe pvc local-pvc
k run nginx --image=nginx:alpine --dry-run=client -o yaml
k run nginx --image=nginx:alpine --dry-run=client -o yaml > nginx.yaml
vi nginx.yaml
# paste below nginx configuration
k create -f nginx.yaml
k get pod
k get pvc
vi delayed-volume-sc.yaml
# paste below delayed-volume-sc.yaml
k create -f delayed-volume-sc.yaml

k get sc

example of pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  storageClassName: local-storage

regarding the nginx.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx:alpine
    name: nginx
    resources: {}
    volumeMounts:
      - mountPath: "/var/www/html"
        name: local-pvc-volume
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
  - name: local-pvc-volume
    persistentVolumeClaim:
        claimName: local-pvc
status: {}

regarding the delayed-volume-sc.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: delayed-volume-sc
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Section 9 Networking

This section focuses on the networking concepts.

  • Pre-requisites:
    • Switching, Routing
    • CoreDNS
    • Tools
    • DNS
    • CNI
    • Networking in Docker
  • Concepts:
    • Networking configuration on cluster nodes
    • POD networking concepts
    • Service networking
    • Cluster DNS
    • Network loadbalancer
    • Ingress

Linux Networking Basics: Switching, Routing

  • Networking pre-requisites

    • Switching and Routing
      • Switching
      • Routing
      • Default Gateway
    • DNS
      • DNS Configurations on Linux
      • CoreDNS Introduction
    • Network Namespaces
    • Docker Networking
  • Switching

    ip link
    ip addr add <own_ip_on_this_network>/24 dev eth0
    ping <ip_of_the_other_host>
    
  • Routing

    • A router helps connect two networks together
    • Think of it as another server with many network ports
  • Gateway

    • When system B wants to send a packet to system C, how does it know where the router is on the network, so it knows where to forward the packet? There may be many such devices on the network, so we configure the system with a gateway (route)
    • If a network is a room, the gateway is the door to the outside world, leading to other networks or the internet

View the system's existing routing configuration with route. To add a route to network 2.0:

ip route add <network_2.0>/24 via <ip_of_host_B_of_1st_network>

Add a routing entry telling host_A that the gateway to network_2 is through host_B:

ip route add <ip_address_to_network_2>/24 via <ip_from_A_to_host_B>

host_C also needs a path for return traffic, so add a route on it as well, reaching host_A through host_B:

ip route add <ip_address_to_1st_network>/24 via <ip_from_C_to_host_B>
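
A concrete sketch with sample addresses (all values illustrative; host_B sits on both networks, e.g. 192.168.1.6 and 192.168.2.6):

# on host_A (192.168.1.5): reach the 192.168.2.0/24 network via host_B
ip route add 192.168.2.0/24 via 192.168.1.6
# on host_C (192.168.2.5): return path via host_B's interface on its network
ip route add 192.168.1.0/24 via 192.168.2.6
# send all other traffic to a default gateway (e.g. the router to the internet)
ip route add default via 192.168.1.1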

Takeaways

  1. ip link lists and modifies interfaces on the host
  2. ip addr shows the IP addresses assigned to those interfaces
  3. ip addr add <ip_address>/24 dev eth0 assigns an IP address to an interface