CKA Exam Questions

Question 1. Set configuration context: $ kubectl config use-context k8s. Monitor the logs of Pod foobar and

extract log lines corresponding to the error file-not-found, writing them to /opt/KULM00201/foobar. Question weight: 5%

kubectl config use-context k8s

kubectl logs foobar | grep file-not-found > /opt/KULM00201/foobar

Question 2. Set configuration context: $ kubectl config use-context k8s

List all PVs sorted by name, saving the full kubectl output to /opt/KUCC0010/my_volumes. Use kubectl's own functionality for sorting the output, and do not manipulate it any further.

 kubectl get pv --sort-by=.metadata.name > /opt/KUCC0010/my_volumes

Question 3. Set configuration context: $ kubectl config use-context k8s

Ensure a single instance of Pod nginx is running on each node of the Kubernetes cluster, where nginx also represents the image name which has to be used. Do not override any taints currently in place.

Use a DaemonSet to complete this task and use ds.kusc00201 as the DaemonSet name. Question weight: 3%

Reference: official docs, DaemonSet

kubectl config use-context k8s

$ vim daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds.kusc00201
spec:
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
kubectl apply -f daemonset.yaml
kubectl get pod -o wide   # confirm that every node is running a 'ds.kusc00201' pod

Question 4. Set configuration context: $ kubectl config use-context k8s. Perform the following tasks:

Add an init container to lumpy--koala (which has been defined in the spec file /opt/kucc00100/pod-spec-KUCC00100.yaml). The init container should create an empty file named /workdir/calm.txt. If /workdir/calm.txt is not detected, the Pod should exit. Once the spec file has been updated with the init container definition, the Pod should be created. Question weight: 7%. Reference: official docs, init containers and emptyDir.

cp /opt/kucc00100/pod-spec-KUCC00100.yaml ./   # note: do not modify the provided yaml file in place

kubectl config use-context k8s

apiVersion: v1
kind: Pod
metadata:
  name: lumpy--koala
  labels:
    app: myapp
spec:
  volumes:
  - name: workdir
    emptyDir: {}
  containers:
  - name: myapp-containers
    image: radial/busyboxplus
    volumeMounts:
    - name: workdir
      mountPath: /workdir
    command: ['sh', '-c', 'if [ -f /workdir/calm.txt ] ; then sleep 3600000 ; fi']
  initContainers:
  - name: init-containers
    image: radial/busyboxplus
    volumeMounts:
    - name: workdir
      mountPath: /workdir
    command: ['sh', '-c', 'touch /workdir/calm.txt']
kubectl apply -f inicon.yaml
kubectl get pods
kubectl exec -ti lumpy--koala -c myapp-containers -- sh
ls /workdir/calm.txt

Question 5

Set configuration context: $ kubectl config use-context k8s. Create a pod named kucc4 with a single container for each of the following images running inside (there may be between 1 and 4 images specified): nginx + redis + memcached + consul

Question weight: 4%

Reference: official docs, Pod

kubectl config use-context k8s

apiVersion: v1
kind: Pod
metadata:
  name: kucc4
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
  - name: memcached
    image: memcached
  - name: consul
    image: consul
kubectl apply -f manypod.yaml
kubectl get pods 
kubectl describe pods kucc4

Question 6. Set configuration context: $ kubectl config use-context k8s. Schedule a Pod as follows:

Name: nginx-kusc00101. Image: nginx. Node selector: disk=ssd. Question weight: 2%

Reference: official docs, nodeSelector

kubectl config use-context k8s

apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00101
spec:
  containers:
  - name: nginx-kusc00101
    image: nginx
  nodeSelector:
    disk: ssd
kubectl apply -f selector-pod.yaml
kubectl get pods -o wide
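
Note: the pod will stay Pending unless some node actually carries the disk=ssd label. In the exam a node is already labelled; in a practice environment you may need to add the label yourself (the node name below is a placeholder):

kubectl get nodes --show-labels | grep disk=ssd
kubectl label nodes <node-name> disk=ssd   # only if no node has the label yet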

Question 7. Set configuration context: $ kubectl config use-context k8s. Create a deployment as follows:

Name: nginx-app. Using container nginx with version 1.10.2-alpine. The deployment should contain 3 replicas. Next, deploy the app with new version 1.13.0-alpine by performing a rolling update, and record that update.

Finally, roll back that update to the previous version 1.10.2-alpine.

Question weight: 4%

Reference: official docs, Deployment

$ vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10.2-alpine
kubectl apply -f nginx-deployment.yaml   # create the deployment
kubectl set image deployment/nginx-app nginx=nginx:1.13.0-alpine --record   # rolling update to the new version, recorded
kubectl rollout status deployment/nginx-app   # watch the rollout (optional; deployment.v1.apps/nginx-app is an equivalent form)
kubectl get deployments
kubectl get rs   # check the replica sets
kubectl get pods --show-labels   # check pod status
kubectl rollout history deployment/nginx-app   # view the update history
kubectl rollout history deployment/nginx-app --revision=2   # (optional) view details of a specific revision
kubectl rollout undo deployment/nginx-app   # roll back to the previous version (deployment.v1.apps/nginx-app also works)
kubectl get pod <nginx-app-pod-name> -o yaml   # confirm the pod image has rolled back to 1.10.2-alpine

Question 8. Set configuration context: $ kubectl config use-context k8s

Create and configure the service front-end-service so it’s accessible through NodePort and routes to the existing pod named front-end

Question weight: 4%. Create a test pod front-end [not needed in the exam; the pod front-end already exists]. Reference: official docs, Pod

$ cat nginx-pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: front-end
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx

Command method:

kubectl expose pod front-end --name=front-end-service --port=80 --type=NodePort

YAML method: reference: official docs, Service
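
A minimal Service manifest equivalent to the expose command above (a sketch; the selector assumes the front-end pod carries the label app: nginx, as in the sample pod spec shown earlier — adjust it to the pod's real labels):

apiVersion: v1
kind: Service
metadata:
  name: front-end-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
kubectl apply -f front-end-service.yaml
kubectl get svc front-end-service   # note the allocated NodePort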

Question 9. Set configuration context: $ kubectl config use-context k8s. Create a Pod as follows:

Name: jenkins. Using image: jenkins. In a new Kubernetes namespace named website-frontend. Question weight: 3%

Reference: official docs, Pod

kubectl create namespace website-frontend
kubectl get ns
vim jenkins-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: jenkins
  namespace: website-frontend
spec:
  containers:
  - name: jenkins
    image: jenkins
kubectl apply -f jenkins-pod.yaml
kubectl get pod -n website-frontend -w
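
Alternatively, on recent kubectl versions the same pod can be created imperatively (a sketch; verify on the exam cluster that the generated object is a plain Pod):

kubectl run jenkins --image=jenkins -n website-frontend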

Question 10. Set configuration context: $ kubectl config use-context k8s. Create a deployment spec file that will:

Launch 7 replicas of the redis image with the label app_env_stage=dev. Deployment name: kual00201. Save a copy of this spec file to /opt/KUAL00201/deploy_spec.yaml (or .json).

When you are done, clean up (delete) any new k8s API objects that you produced during this task

Question weight: 3%. Command method:

kubectl create deployment kual00201 --image=redis --dry-run=client -o yaml > /opt/KUAL00201/deploy_spec.yaml   # use --dry-run on older kubectl; then edit the file to set replicas: 7 and the app_env_stage: dev labels

YAML method: reference: official docs, Deployment

vim redis-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name:  kual00201
  labels:
    app_env_stage: dev 
spec:
  replicas: 7
  selector:
    matchLabels:
      app_env_stage: dev
  template:
    metadata:
      labels:
        app_env_stage: dev
    spec:
      containers:
      - name: redis
        image: redis
cp redis-deployment.yaml /opt/KUAL00201/deploy_spec.yaml   # save a copy of the spec file
kubectl apply -f redis-deployment.yaml
kubectl get deployment kual00201
kubectl delete -f redis-deployment.yaml   # clean up the objects created for this task

Question 11. Set configuration context: $ kubectl config use-context k8s

Create a file /opt/KUCC00302/kucc00302.txt that lists all pods that implement Service foo in Namespace production.

The format of the file should be one pod name per line.

Question weight: 3%

kubectl config use-context k8s

kubectl get pods -n production --show-labels
kubectl --namespace=production describe service foo
kubectl --namespace=production get pod -l 'run=foo' | awk '{print $1}' | grep -v NAME > /opt/KUCC00302/kucc00302.txt   # use the selector reported by 'describe service foo'; run=foo is only an example
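
An alternative that avoids guessing the label selector is to read the service's Endpoints object and match the listed IPs back to pod names (a sketch):

kubectl -n production get endpoints foo   # pod IPs backing the service
kubectl -n production get pods -o wide    # map those IPs to pod names, then write one name per line to /opt/KUCC00302/kucc00302.txt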

Question 12. Set configuration context: $ kubectl config use-context k8s. Create a Kubernetes Secret as follows:

Name: super-secret. Credential: alice or username: bob. Create a Pod named pod-secrets-via-file using the redis image which mounts a secret named super-secret at /secrets.

Create a second Pod named pod-secrets-via-env using the redis image, which exports credential as TOPSECRET

Question weight: 9%. Reference: official docs, Secret

kubectl config use-context k8s
 
root@master01:~# echo -n 'bob' | base64      
Ym9i
 
root@master01:~# vim usersecret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: super-secret
data:
  username: Ym9i
kubectl apply -f usersecret.yaml

vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-via-file
spec:
  containers:
    - name: pod-secrets-via-file
      image: redis
      volumeMounts:
        - name: foo
          mountPath: /secrets
  volumes:
    - name: foo
      secret:
        secretName: super-secret
 
$ kubectl apply -f pod1.yaml
$ kubectl get pods
$ kubectl exec -ti pod-secrets-via-file -- sh
$ cat /secrets/username    # should print bob
 
vim pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-via-env
spec:
  containers:
    - name: pod-secrets-via-env
      image: redis
      env:
      - name: TOPSECRET
        valueFrom:
          secretKeyRef:
            name: super-secret
            key: username
$ kubectl apply -f pod2.yaml
$ kubectl get pods
$ kubectl exec -ti pod-secrets-via-env -- sh
$ env | grep TOPSECRET    # should show TOPSECRET=bob
Alternative answer (the flag syntax may error on some kubectl versions, e.g. v1.18.0):
kubectl create secret generic super-secret --from-literal=username=bob --dry-run=client -o yaml > super-secret.yml   # --from-literal takes the plaintext value; kubectl base64-encodes it for you

Question 13. Set configuration context: $ kubectl config use-context k8s. Create a pod as follows:

Name: non-persistent-redis. Container image: redis. Named volume with name: cache-control. Mount path: /data/redis. It should launch in the pre-prod namespace and the volume MUST NOT be persistent.

Question weight: 4%. Reference: official docs, volumes (emptyDir)

kubectl config use-context k8s
 
apiVersion: v1
kind: Pod
metadata:
  name: non-persistent-redis
  namespace: pre-prod
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: cache-control
      mountPath: /data/redis
  volumes:
  - name: cache-control
    emptyDir: {}
$ kubectl apply -f pv-pod.yaml
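
To confirm the pod is running in pre-prod and the volume is mounted (a quick check, not required by the task):

kubectl get pod non-persistent-redis -n pre-prod
kubectl -n pre-prod exec non-persistent-redis -- ls -d /data/redis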

Question 14. Set configuration context: $ kubectl config use-context k8s. Scale the deployment webserver to 6 pods.

Question weight: 1%

kubectl config use-context k8s

kubectl scale deployment webserver --replicas=6
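
To confirm the change took effect:

kubectl get deployment webserver   # should report 6/6 ready replicas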

Question 15. Set configuration context: $ kubectl config use-context k8s

Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/nodenum

Question weight: 2%

kubectl config use-context k8s

kubectl get nodes   # note which nodes are Ready
kubectl describe nodes $(kubectl get nodes --no-headers | grep -w Ready | awk '{print $1}') | grep Taints | grep -vc NoSchedule > /opt/nodenum   # count Ready nodes whose Taints line does not contain NoSchedule

Question 16. Set configuration context: $ kubectl config use-context k8s

From the Pod label name=cpu-utilizer, find pods running high CPU workloads and write the name of the Pod consuming most CPU to the file /opt/cpu.txt (which already exists)

Question weight: 2%

kubectl config use-context k8s

kubectl top pod -l name=cpu-utilizer --all-namespaces --sort-by=cpu   # the highest consumer is listed first
echo <podname> > /opt/cpu.txt   # write only the name of that pod to the (already existing) file

Question 17. Set configuration context: $ kubectl config use-context k8s. Create a deployment as follows:

Name: nginx-dns. Exposed via a service: nginx-dns. Ensure that the service & pod are accessible via their respective DNS records. The container(s) within any Pod(s) running as a part of this deployment should use the nginx image. Next, use the utility nslookup to look up the DNS records of the service & pod and write the output to /opt/service.dns and /opt/pod.dns respectively.

Ensure you use the busybox:1.28 image (or earlier) for any testing, as the latest release has an upstream bug which impacts the use of nslookup.

Question weight: 7%

kubectl config use-context k8s

kubectl create deployment nginx-dns --image=nginx
kubectl expose deployment nginx-dns --target-port=80 --port=80
kubectl  run test --image=busybox:1.28  --command -- sleep 3600000
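
Before running the nslookup commands below, look up the actual names and IPs in your environment; the pod name and IP addresses shown are examples from one run:

kubectl get svc nginx-dns    # note the ClusterIP of the service
kubectl get pods -o wide     # note the test pod's name and the nginx-dns pod's IP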
kubectl exec -it test-7785f75955-j6rsp -- nslookup 10.96.24.23 > /opt/service.dns    # substitute your test pod name and the nginx-dns service ClusterIP
kubectl exec -it test-7785f75955-j6rsp -- nslookup 192.168.196.168 > /opt/pod.dns    # substitute the nginx-dns pod IP

Alternative method: create the deployment and service from the command line

kubectl create deployment nginx-dns --image=nginx --dry-run=client -o yaml > nginx-dns.yml
kubectl expose deployment nginx-dns --target-port=80 --port=80 --dry-run=client -o yaml > nginx-svc.yml

Question 18. No configuration context change required for this item.

Create a snapshot of the etcd instance running at https://127.0.0.1:2379 saving the snapshot to the file path /data/backup/etcd-snapshot.db

The etcd instance is running etcd version 3.1.10

The following TLS certificates/key are supplied for connecting to the server with etcdctl

CA certificate: /opt/KUCM00302/ca.crt. Client certificate: /opt/KUCM00302/etcd-client.crt. Client key: /opt/KUCM00302/etcd-client.key. Question weight: 7%

Reference: official docs, etcdctl

Environment preparation (only if etcdctl is not already on the PATH):
kubectl -n kube-system cp etcd-master01:/usr/local/bin/etcdctl /usr/local/bin/etcdctl
or
find / -name etcdctl   # then run it by its absolute path
Answer:
ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 \
--cacert=/opt/KUCM00302/ca.crt \
--cert=/opt/KUCM00302/etcd-client.crt \
--key=/opt/KUCM00302/etcd-client.key \
snapshot save /data/backup/etcd-snapshot.db

root@master:~/pods/13$ ETCDCTL_API=3 /var/lib/docker/overlay2/92f166baeeda8792cdccc209d472b83e164af5629bac4f3411331969126eb739/merged/usr/local/bin/etcdctl --endpoints https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key snapshot save /data/backup/etcd-snapshot.db
{"level":"info","ts":1596987691.210552,"caller":"snapshot/v3_snapshot.go:110","msg":"created temporary db file","path":"/data/backup/etcd-snapshot.db.part"}
{"level":"warn","ts":"2020-08-09T08:41:31.384-0700","caller":"clientv3/retry_interceptor.go:116","msg":"retry stream intercept"}
{"level":"info","ts":1596987691.3850393,"caller":"snapshot/v3_snapshot.go:121","msg":"fetching snapshot","endpoint":"https://127.0.0.1:2379"}
{"level":"info","ts":1596987692.1225362,"caller":"snapshot/v3_snapshot.go:134","msg":"fetched snapshot","endpoint":"https://127.0.0.1:2379","took":0.911907615}
{"level":"info","ts":1596987692.1228416,"caller":"snapshot/v3_snapshot.go:143","msg":"saved","path":"/data/backup/etcd-snapshot.db"}
Snapshot saved at /data/backup/etcd-snapshot.db
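
To sanity-check the backup afterwards (snapshot status only reads the local file, so no TLS flags are needed):

ETCDCTL_API=3 etcdctl snapshot status /data/backup/etcd-snapshot.db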

Question 19. Set configuration context: $ kubectl config use-context ek8s

Set the node labelled with name=ek8s-node-1 as unavailable and reschedule all the pods running on it.

Question weight: 4%

kubectl config use-context ek8s

kubectl get nodes --show-labels | grep name=ek8s-node-1   # find the node name, e.g. ek8s-node-1
kubectl cordon ek8s-node-1
kubectl drain ek8s-node-1 --ignore-daemonsets --force   # add --delete-local-data (or --delete-emptydir-data on newer kubectl) if pods use emptyDir volumes
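
To verify the node is cordoned and the workloads were rescheduled (only DaemonSet pods, if any, should remain on it):

kubectl get nodes   # the node should show SchedulingDisabled
kubectl get pods -o wide --all-namespaces | grep ek8s-node-1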

Question 20. Set configuration context: $ kubectl config use-context wk8s

A Kubernetes worker node, labelled with name=wk8s-node-0, is in state NotReady. Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent.

Hints:

You can ssh to the failed node using $ ssh wk8s-node-0. You can assume elevated privileges on the node with the following command: $ sudo -i. Question weight: 4%

kubectl config use-context wk8s

ssh wk8s-node-0
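
Before restarting, it helps to confirm why the node is NotReady; these are the usual diagnostic steps, not part of the required answer:

sudo -i
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 50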
systemctl restart kubelet.service
systemctl enable kubelet.service

Question 21. Set configuration context: $ kubectl config use-context wk8s

Configure the kubelet systemd managed service, on the node labelled with name=wk8s-node-1, to launch a Pod containing a single container of image nginx named myservice automatically. Any spec files required should be placed in the /etc/kubernetes/manifests directory on the node.

Hints:

You can ssh to the node using $ ssh wk8s-node-1. You can assume elevated privileges on the node with the following command: $ sudo -i. Question weight: 4%

Reference: official docs, static Pod

kubectl config use-context wk8s
$ ssh wk8s-node-1
$ sudo -i
$ systemctl status kubelet    # locate the kubelet unit and its config files
$ cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf    # check that the config or env file sets --pod-manifest-path=/etc/kubernetes/manifests/ (or staticPodPath in the kubelet config); add it if missing
$ vim  /etc/kubernetes/manifests/static-pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: myservice
spec:
  containers:
  - name: myservice
    image: nginx
systemctl restart kubelet
kubectl get pods   # from a node with kubectl access; the static pod appears with the node name appended, e.g. myservice-wk8s-node-1

Question 22. Set configuration context: $ kubectl config use-context ik8s

In this task, you will configure a new Node, ik8s-node-0, to join a Kubernetes cluster as follows:

Configure kubelet for automatic certificate rotation and ensure that both server and client CSRs are automatically approved and signed as appropriate via the use of RBAC. Ensure that the appropriate cluster-info ConfigMap is created and configured appropriately in the correct namespace so that future Nodes can easily join the cluster. Your bootstrap kubeconfig should be created on the new Node at /etc/kubernetes/bootstrap-kubelet.conf (do not remove this file once your Node has successfully joined the cluster). The appropriate cluster-wide CA certificate is located on the Node at /etc/kubernetes/pki/ca.crt. You should ensure that any automatically issued certificates are installed to the node at /var/lib/kubelet/pki and that the kubeconfig file for kubelet will be rendered at /etc/kubernetes/kubelet.conf upon successful bootstrapping. Use an additional group for bootstrapping Nodes attempting to join the cluster, which should be called system:bootstrappers:cka:default-node-token. The solution should start automatically on boot, with the systemd service unit file for kubelet available at /etc/systemd/system/kubelet.service. To test your solution, create the appropriate resources from the spec file located at /opt/…/kube-flannel.yaml. This will create the necessary supporting resources as well as the kube-flannel-ds DaemonSet. You should ensure that this DaemonSet is correctly deployed to the single node in the cluster.

Hints:

kubelet is not configured or running on ik8s-master-0 for this task, and you should not attempt to configure it. You will make use of TLS bootstrapping to complete this task. You can obtain the IP address of the Kubernetes API server via the following command: $ ssh ik8s-node-0 getent hosts ik8s-master-0. The API server is listening on the usual port, 6443/tcp, and will only serve TLS requests. The kubelet binary is already installed on ik8s-node-0 at /usr/bin/kubelet. You will not need to deploy kube-proxy to the cluster during this task. You can ssh to the new worker node using $ ssh ik8s-node-0. You can ssh to the master node with the following command: $ ssh ik8s-master-0. No further configuration of control plane services running on ik8s-master-0 is required. You can assume elevated privileges on both nodes with the following command: $ sudo -i. Docker is already installed and running on ik8s-node-0. Question weight: 8%. Answer: skipped.

Question 23. Set configuration context: $ kubectl config use-context bk8s

Given a partially-functioning Kubernetes cluster, identify symptoms of failure on the cluster. Determine the node, the failing service, and take actions to bring up the failed service and restore the health of the cluster. Ensure that any changes are made permanently.

The worker node in this cluster is labelled with name=bk8s-node-0. Hints:

You can ssh to the relevant nodes using $ ssh $(NODE), where $(NODE) is one of bk8s-master-0 or bk8s-node-0. You can assume elevated privileges on any node in the cluster with the following command: $ sudo -i. Question weight: 4%

Troubleshooting approach: 1. kubectl get; 2. restart the failing service; 3. enable it; 4. check that the paths referenced in its configuration exist.

$ kubectl get nodes
$ systemctl restart kube-apiserver   # only if the control plane components run as systemd services
$ kubectl get cs
$ systemctl restart kube-controller-manager   # likewise, only for a systemd-managed control plane
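
Note: in a kubeadm-built cluster the control-plane components run as static pods managed by kubelet rather than as systemd units, so if the restart commands above report no such unit, a more likely sequence is (a sketch):

ssh bk8s-master-0
sudo -i
systemctl restart kubelet && systemctl enable kubelet
ls /etc/kubernetes/manifests/    # confirm the control-plane static pod manifests are present and valid
docker ps | grep -E 'apiserver|controller-manager|scheduler'   # or crictl ps, depending on the container runtime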

Question 24. Set configuration context: $ kubectl config use-context hk8s

Create a persistent volume with name app-config, of capacity 1Gi and access mode ReadWriteOnce. The type of volume is hostPath and its location is /srv/app-config.

Question weight: 3%

Reference: official docs, persistent volumes

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /srv/app-config
kubectl apply -f pv-config.yaml
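
To confirm the PV was created as requested:

kubectl get pv app-config   # capacity 1Gi, access mode RWO, status Available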
