CKS Exam

1. Requirements

Exam format: online, proctored
Duration: 2 hours
Certification validity: 2 years
Software version: Kubernetes v1.19
OS: Ubuntu 18.04
Eligibility: the exam must be taken within 12 months of purchase
Retake policy: one retake allowed
Experience level: intermediate

2. Exam Domains

2.1 Cluster Setup: 10%

  • Use network security policies to restrict cluster-level access
  • Use CIS benchmarks to review the security configuration of Kubernetes components (etcd, kubelet, kubedns, kubeapi)
  • Properly set up Ingress objects with security controls
  • Protect node metadata and endpoints
  • Minimize use of, and access to, GUI elements
  • Verify platform binaries before deploying

2.2 Cluster Hardening: 15%

  • Restrict access to the Kubernetes API
  • Use Role Based Access Control to minimize exposure
  • Exercise caution with service accounts, e.g. disable defaults, minimize permissions on newly created accounts
  • Update Kubernetes frequently

2.3 System Hardening: 15%

  • Minimize the host OS footprint (reduce attack surface)
  • Minimize IAM roles
  • Minimize external access to the network
  • Appropriately use kernel hardening tools such as AppArmor and seccomp

2.4 Minimize Microservice Vulnerabilities: 20%

  • Set up appropriate OS-level security domains, e.g. using PSP, OPA, security contexts
  • Manage Kubernetes secrets
  • Use container runtime sandboxes in multi-tenant environments (e.g. gVisor, Kata Containers)
  • Implement pod-to-pod encryption with mTLS

2.5 Supply Chain Security: 20%

  • Minimize base image footprint
  • Secure your supply chain: whitelist allowed registries, sign and validate images
  • Use static analysis of user workloads (e.g. Kubernetes resources, Dockerfiles)
  • Scan images for known vulnerabilities

2.6 Monitoring, Logging and Runtime Security: 20%

  • Perform behavioral analytics of syscall process and file activities at the host and container level to detect malicious activities
  • Detect threats within physical infrastructure, apps, networks, data, users and workloads
  • Detect all phases of attack regardless of where it occurs and how it spreads
  • Perform deep analytical investigation and identification of bad actors within the environment
  • Ensure immutability of containers at runtime
  • Use audit logs to monitor access

3. Topics to Master

3.1 Cluster Setup 10%

1. Use network security policies to restrict cluster-level access
2. Use CIS benchmarks to review the security configuration of Kubernetes components (etcd, kubelet, kubedns, kubeapi)
3. Configure Ingress objects securely
4. Protect node metadata
5. Minimize use of, and access to, the dashboard
6. Verify Kubernetes binaries before deploying

3.2 Cluster Hardening 15%

1. Restrict access to the Kubernetes API
2. Use RBAC to minimize resource exposure
3. Secure service accounts, e.g. disable defaults, minimize permissions on newly created service accounts
4. Update Kubernetes

3.3 System Hardening 15%

1. Harden the host OS configuration
2. Minimize IAM roles
3. Minimize external network access
4. Appropriately use kernel hardening tools such as AppArmor and seccomp

3.4 Minimize Microservice Vulnerabilities 20%

1. Improve security with PSP, OPA and security contexts
2. Manage Kubernetes Secrets
3. Run containers in sandboxes in multi-tenant environments (e.g. gVisor, Kata Containers)
4. Implement pod-to-pod encryption with mTLS

3.5 Supply Chain Security 20%

1. Reduce image size
2. Secure the supply chain: whitelist allowed image registries, sign and validate images
3. Analyze files and images for security risks (e.g. Kubernetes YAML files, Dockerfiles)
4. Scan images for known vulnerabilities

3.6 Monitoring, Auditing and Runtime Security 20%

1. Analyze container syscalls to detect malicious processes
2. Detect threats within physical infrastructure, apps, networks, data, users and workloads
3. Detect all phases of attack regardless of where it occurs and how it spreads
4. Kubernetes auditing
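On managing Kubernetes Secrets: it is worth remembering that Secret values are only base64-encoded, not encrypted. A minimal local sketch (the password value is made up for illustration):

```shell
# Kubernetes stores Secret values base64-encoded, which is NOT encryption:
# anyone allowed to read the Secret object can recover the plaintext.
plain='s3cr3t-password'                       # hypothetical secret value
encoded=$(printf '%s' "$plain" | base64)      # what appears in the Secret's data field
decoded=$(printf '%s' "$encoded" | base64 -d) # trivially reversed
echo "encoded: $encoded"
echo "decoded: $decoded"
```

This is why RBAC on Secrets and etcd encryption-at-rest matter: base64 alone protects nothing.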
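Static analysis of user workloads can be as simple as pattern checks. A toy sketch for two common Dockerfile problems; `Dockerfile.demo` is an invented example, not from the course:

```shell
# Toy static checks: an unpinned ':latest' base image and a missing
# USER instruction (so the container runs as root).
cat > Dockerfile.demo <<'EOF'
FROM nginx:latest
COPY index.html /usr/share/nginx/html/
EOF

grep -q ':latest' Dockerfile.demo && echo "WARN: base image not pinned to a digest/tag"
grep -q '^USER ' Dockerfile.demo || echo "WARN: no USER instruction, container runs as root"
```

Real scanners (e.g. hadolint for Dockerfiles, kubesec for manifests) perform far deeper checks, but the principle is the same: flag risky patterns before anything is deployed.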
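For Kubernetes auditing, the API server emits audit events as JSON lines, so even plain text tools can surface sensitive access. A sketch with a hand-written sample event (not real apiserver output):

```shell
# Two sample audit events in the JSON-lines shape the apiserver writes.
cat > audit-sample.log <<'EOF'
{"kind":"Event","verb":"get","user":{"username":"jane"},"objectRef":{"resource":"secrets","namespace":"red","name":"db-pass"}}
{"kind":"Event","verb":"list","user":{"username":"system:serviceaccount:kube-system:replicaset-controller"},"objectRef":{"resource":"pods"}}
EOF

# Who touched Secrets?
grep '"resource":"secrets"' audit-sample.log
```

In practice an audit Policy object controls which events are recorded and at what level (Metadata, Request, RequestResponse).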

4. Exam Resources

Related reading:

5. Exam Commands

NetworkPolicy

k run frontend --image=nginx
k run backend --image=nginx
k expose pod frontend --port 80
k expose pod backend --port 80
k get pods,svc
k exec frontend -- curl backend
k exec backend -- curl frontend

vim default-deny.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  - Ingress

vim frontend.yaml
# allows frontend pods to communicate with backend pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          run: backend

vim backend.yaml
# allows backend pods to receive incoming traffic from frontend pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: frontend

k exec frontend -- curl 192.168.104.27
k exec backend -- curl 192.168.166.179

kubectl create ns cassandra
kubectl edit ns cassandra
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2021-04-20T07:19:22Z"
  name: cassandra
  resourceVersion: "533198"
  uid: 766ae069-4dc9-4acd-a4db-ce852c293cc6
  labels:          # add
    ns: cassandra  # add
spec:
  finalizers:
  - kubernetes
status:
  phase: Active

k -n cassandra run cassandra --image=nginx
k -n cassandra get pod -owide
k exec backend -- curl 192.168.104.26
vim backend.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: frontend
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          ns: cassandra

k exec backend -- curl 192.168.104.26

cat cassandra-deny.yaml
# deny all incoming and outgoing traffic for all pods in namespace cassandra
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cassandra-deny
  namespace: cassandra
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

k exec backend -- curl 192.168.104.26
(works)

cat cassandra-deny.yaml
# deny all incoming and outgoing traffic for all pods in namespace cassandra
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cassandra-deny
  namespace: cassandra
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

k exec backend -- curl 192.168.104.26
(denied)

vim cassandra.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cassandra
  namespace: cassandra
spec:
  podSelector:
    matchLabels:
      run: cassandra
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          ns: default

k edit ns default
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2021-01-19T03:27:58Z"
  labels:        # add
    ns: default  # add
  name: default
  resourceVersion: "541475"
  uid: 2d566715-f0a4-49b3-b590-dfa7df30d0ba
spec:
  finalizers:
  - kubernetes
status:
  phase: Active

k exec backend -- curl 192.168.104.26
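During the exam it can be faster to write manifests like the deny-all policy above straight from the shell with a heredoc. A sketch (the apply step is commented out since it needs a running cluster):

```shell
# Write the default deny-all NetworkPolicy from the walkthrough to a file.
cat > default-deny.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny
  namespace: default
spec:
  podSelector: {}   # empty selector = every pod in the namespace
  policyTypes:
  - Egress
  - Ingress
EOF

# kubectl apply -f default-deny.yaml   # requires a cluster
grep -c 'policyTypes' default-deny.yaml   # quick sanity check the file was written
```

The empty `podSelector: {}` is what makes the policy apply to all pods; listing both `Egress` and `Ingress` under `policyTypes` is what makes it deny in both directions.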

Dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created

k -n kubernetes-dashboard get pod,svc

k -n kubernetes-dashboard edit deploy kubernetes-dashboard
...
      containers:
      - args:
        - --auto-generate-certificates
        - --namespace=kubernetes-dashboard
        image: kubernetesui/dashboard:v2.1.0
        imagePullPolicy: Always
...
change to:
    spec:
      containers:
      - args:
        - --namespace=kubernetes-dashboard
        - --insecure-port=9090
        image: kubernetesui/dashboard:v2.1.0

k -n kubernetes-dashboard get pod,svc

k -n kubernetes-dashboard edit svc kubernetes-dashboard
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  creationTimestamp: "2021-04-21T02:55:03Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  resourceVersion: "557996"
  uid: bd515d85-4dc6-4ac0-9890-ca2a711a7b26
spec:
  clusterIP: 10.99.150.161
  clusterIPs:
  - 10.99.150.161
  ports:
  - port: 9090        # changed from 443
    protocol: TCP
    targetPort: 9090  # changed from 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort      # changed from ClusterIP
status:
  loadBalancer: {}

k -n kubernetes-dashboard get svc

# RBAC for the Dashboard

k -n kubernetes-dashboard get sa
k get clusterroles | grep view
k -n kubernetes-dashboard create rolebinding insecure --serviceaccount kubernetes-dashboard:kubernetes-dashboard --clusterrole view

k create clusterrolebinding insecure --serviceaccount kubernetes-dashboard:kubernetes-dashboard --clusterrole view

Secure Ingress

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.40.2/deploy/static/provider/baremetal/deploy.yaml

k get pod,svc -n ingress-nginx

cat secure-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /service1
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
      - path: /service2
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80

k create -f secure-ingress.yaml
k get ing
k run pod1 --image=nginx
k run pod2 --image=httpd
k expose pod pod1 --port 80 --name service1
k expose pod pod2 --port 80 --name service2
curl http://192.168.211.40:31459/service1
curl http://192.168.211.40:31459/service2

curl https://192.168.211.40:32300/service1 -kv
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
k create secret tls secure-ingress --cert=cert.pem --key=key.pem
k get secret

vim secure-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - secure-ingress.com
    secretName: secure-ingress
  rules:
  - host: secure-ingress.com
    http:
      paths:
      - path: /service1
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
      - path: /service2
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80

k apply -f secure-ingress.yaml
curl https://secure-ingress.com:32300/service2 -kv --resolve secure-ingress.com:32300:192.168.211.41
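Before wiring a certificate into the Ingress TLS secret, it helps to inspect what it actually contains. A sketch using the same self-signed approach as above (smaller key here for speed, CN set to match the Ingress host):

```shell
# Generate a throwaway self-signed certificate and inspect it.
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -days 365 -nodes -subj "/CN=secure-ingress.com"

openssl x509 -in cert.pem -noout -subject   # CN should match the Ingress host rule
openssl x509 -in cert.pem -noout -enddate   # when the cert expires
```

If the CN (or SAN) does not match the `host:` in the Ingress rule, clients will see certificate errors even though TLS itself works.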

Node Metadata

curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/" -H "Metadata-Flavor: Google"
curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/0/" -H "Metadata-Flavor: Google"
k run nginx --image=nginx
k get pods
k exec -ti nginx -- bash
curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/" -H "Metadata-Flavor: Google"
cat deny.yaml
# all pods in the namespace cannot access the metadata endpoint
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cloud-metadata-deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32

k create -f deny.yaml
k exec -ti nginx -- bash
curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/" -H "Metadata-Flavor: Google"  # hangs, blocked

cat allow.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cloud-metadata-allow
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: metadata-accessor
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 169.254.169.254/32

k create -f allow.yaml
k label pod nginx role=metadata-accessor
k get pods nginx --show-labels
k exec -ti nginx -- bash
curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/" -H "Metadata-Flavor: Google"  # works

k edit pod nginx
metadata:
  annotations:
    cni.projectcalico.org/podIP: 192.168.104.31/32
  creationTimestamp: "2021-04-22T03:17:45Z"
  labels:
    role: metadata-accessor  # delete this line
    run: nginx
  name: nginx
  namespace: default

k exec -ti nginx -- bash
curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/" -H "Metadata-Flavor: Google"  # hangs, blocked again

CIS Benchmarks

kubectl get nodes
docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro -t aquasec/kube-bench:latest master --version 1.20

useradd etcd
chown etcd:etcd /var/lib/etcd
docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro -t aquasec/kube-bench:latest master --version 1.20

docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro -t aquasec/kube-bench:latest node --version 1.20

./kube-bench --config-dir `pwd`/cfg --config `pwd`/cfg/config.yaml master
kube-bench --config-dir /data/software/kube-bench/cfg --config /data/software/kube-bench/cfg/config.yaml node

Verify

sha512sum kubernetes-server-linux-arm64.tar.gz > compare
sha512sum kubernetes/server/bin/kube-apiserver
k -n kube-system get pod | grep api
k -n kube-system get pod kube-apiserver-master -o yaml | grep image
docker cp 0fb5321dfd57:/ container-fs
find container-fs/ | grep kube-apiserver
sha512sum container-fs/usr/local/bin/kube-apiserver
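The same verification idea, runnable locally with a dummy file: record a checksum, then confirm the file still matches it. `kube-apiserver.bin` here is a stand-in, not a real binary:

```shell
# Record a checksum for a (stand-in) binary, then verify against it.
echo 'pretend-binary-content' > kube-apiserver.bin
sha512sum kube-apiserver.bin > SHA512SUMS

sha512sum -c SHA512SUMS     # prints: kube-apiserver.bin: OK

echo 'tampered' >> kube-apiserver.bin
sha512sum -c SHA512SUMS || echo "checksum mismatch - do not deploy"
```

This is exactly what comparing the downloaded release checksum against the running binary (as above) accomplishes: any modification changes the hash.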

Restrict API Access

curl https://localhost:6443
curl https://localhost:6443 -k
vim /etc/kubernetes/manifests/kube-apiserver.yaml
...
    - kube-apiserver
    - --advertise-address=192.168.211.40
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=8080   # changed from 0 to 8080 (demo only; never do this in production)
...

curl http://localhost:8080
curl https://192.168.211.40:6443 --cacert ca --cert ca.crt --key ca.key

ServiceAccounts

k get sa,secrets
k describe sa default
k create sa accessor
k describe secret accessor-token-bnd4s
k run accessor --image=nginx --dry-run=client -oyaml > accessor.yaml
cat accessor.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: accessor
  name: accessor
spec:
  serviceAccountName: accessor  # add this line
  containers:
  - image: nginx
    name: accessor
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

k create -f accessor.yaml
k exec -ti accessor -- bash
mount | grep sec
cd /run/secrets/kubernetes.io/serviceaccount
cat token
curl https://kubernetes
curl https://kubernetes -k

cat accessor.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: accessor
  name: accessor
spec:
  serviceAccountName: accessor
  automountServiceAccountToken: false  # add this line
  containers:
  - image: nginx
    name: accessor
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

k -f accessor.yaml replace --force
k exec -ti accessor -- bash
mount | grep ser
k get pod
k auth can-i delete secrets --as system:serviceaccount:default:accessor
k create clusterrolebinding accessor --clusterrole edit --serviceaccount default:accessor
k auth can-i delete secrets --as system:serviceaccount:default:accessor
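The mounted `token` above is a JWT: three base64url parts joined by dots, with the identity in the payload. A local sketch with a hand-built dummy token of the same shape (real tokens use unpadded base64url and a real signature):

```shell
# Build a dummy JWT-shaped token and decode its payload, the way you
# might inspect a real service account token found in a pod.
payload='{"sub":"system:serviceaccount:default:accessor"}'
b64=$(printf '%s' "$payload" | base64 -w0)
token="header.$b64.signature"   # dummy: header and signature are placeholders

# The second dot-separated field is the payload.
printf '%s' "$token" | cut -d. -f2 | base64 -d; echo
```

Seeing `system:serviceaccount:<namespace>:<name>` in the `sub` claim is how you confirm which identity a leaked token grants, and why limiting service account permissions matters.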

RBAC

k create ns red
k create ns blue
k -n red create role secret-manager --verb=get --resource=secrets -oyaml --dry-run=client
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: secret-manager
  namespace: red
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get

k -n red create rolebinding secret-manager --role=secret-manager --user=jane

k -n red auth can-i get secrets --as jane
k -n red auth can-i get secrets --as tom
k -n red auth can-i delete secrets --as jane
k -n red auth can-i list secrets --as jane
k -n blue auth can-i list secrets --as jane
k -n blue auth can-i get secrets --as jane
k -n blue auth can-i get pods --as jane

k create clusterrole deploy-deleter --verb delete --resource deployments

k create clusterrolebinding deploy-deleter --user jane --clusterrole deploy-deleter

k -n red create rolebinding deploy-deleter --user jim --clusterrole deploy-deleter
k auth can-i delete deployments --as jane
k auth can-i delete deployments --as jane -n default
k auth can-i delete deployments --as jane -n red
k auth can-i delete pods --as jane -n red
k auth can-i delete deployments --as jim -n default
k auth can-i delete deployments --as jim -A
k auth can-i delete deployments --as jim -n red
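What `auth can-i` evaluates can be modeled simply: access is granted only if some rule in a bound role matches both the verb and the resource. A toy sketch mirroring the secret-manager Role above (verbs: get, resources: secrets):

```shell
# Toy model of RBAC rule matching: a single hard-coded rule, as in the
# secret-manager Role. Returns success only if verb AND resource match.
allows() {  # usage: allows <verb> <resource>
  rule_verbs="get"
  rule_resources="secrets"
  case " $rule_verbs " in *" $1 "*) ;; *) return 1 ;; esac
  case " $rule_resources " in *" $2 "*) ;; *) return 1 ;; esac
  return 0
}

allows get secrets  && echo "jane can get secrets"      # matches the rule
allows list secrets || echo "jane cannot list secrets"  # verb not in the rule
allows get pods     || echo "jane cannot get pods"      # resource not in the rule
```

This is why `can-i get secrets` succeeds while `can-i list secrets` fails in the walkthrough: `get` and `list` are distinct verbs and must each be granted explicitly.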

Upgrade Kubernetes

k drain master --ignore-daemonsets
k get nodes
apt-cache show kubeadm | grep 1.20
apt-get install kubeadm=1.20.2-00 kubectl=1.20.2-00 kubelet=1.20.2-00
kubeadm upgrade plan
kubeadm upgrade apply v1.20.6
k get nodes

k drain node1 --ignore-daemonsets
kubeadm version
apt-cache show kubeadm | grep -e '1.20'
apt-get install kubeadm=1.20.2-00 kubectl=1.20.2-00 kubelet=1.20.2-00
kubeadm version
kubectl version
kubelet --version
k uncordon node1
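kubeadm only supports upgrading one minor version at a time, and kubelet may trail the API server by at most one minor version. A quick local skew check (the version strings are examples, not read from a cluster):

```shell
# Extract the minor version and check the upgrade crosses at most one minor.
minor() { echo "$1" | cut -d. -f2; }

current=1.19.0   # example: version before the upgrade
target=1.20.6    # example: version being applied
skew=$(( $(minor "$target") - $(minor "$current") ))
if [ "$skew" -le 1 ]; then
  echo "ok: $current -> $target is a supported single-minor upgrade"
else
  echo "unsupported: upgrade one minor version at a time"
fi
```

Going 1.19 to 1.21 directly would require two passes of `kubeadm upgrade apply`, one per minor version.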

securityContext and PodSecurityPolicies

k run pod --image=busybox --command -oyaml --dry-run=client > pod.yaml -- sh -c 'sleep 1d'
cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod
  name: pod
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
  containers:
  - command:
    - sh
    - -c
    - sleep 1d
    image: busybox
    name: pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

k exec -ti pod -- sh
/ $ id
uid=1000 gid=3000
/ $ touch test
touch: test: Permission denied
/ $ cd /tmp
/tmp $ touch test
/tmp $ ls -lh
total 0
-rw-r--r--  1 1000  3000  0 May 15 15:00 test
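The pod above only sets `runAsUser`/`runAsGroup`. A more locked-down securityContext (a sketch; all field names are standard Pod API fields) also forbids root, blocks privilege escalation, makes the root filesystem read-only, and drops all capabilities:

```shell
# Write a pod manifest with a hardened container securityContext.
cat > hardened.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hardened
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 1d"]
    securityContext:
      runAsNonRoot: true                 # refuse to start as uid 0
      allowPrivilegeEscalation: false    # no setuid/sudo escalation
      readOnlyRootFilesystem: true       # immutable container filesystem
      capabilities:
        drop: ["ALL"]                    # drop every Linux capability
EOF
grep -c 'allowPrivilegeEscalation: false' hardened.yaml
```

`readOnlyRootFilesystem: true` also supports the "immutable containers at runtime" requirement from domain 2.6: the process cannot modify its own filesystem.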

seccomp and apparmor

$ cat /etc/apparmor.d/docker-nginx
$ apparmor_parser /etc/apparmor.d/docker-nginx
$ aa-status
$ docker run nginx
$ docker run --security-opt apparmor=docker-default nginx
$ docker run --security-opt apparmor=docker-nginx nginx
/docker-entrypoint.sh: 13: /docker-entrypoint.sh: cannot create /dev/null: Permission denied
/docker-entrypoint.sh: No files found in /docker-entrypoint.d/, skipping configuration

$ docker run --security-opt apparmor=docker-nginx -d nginx
$ docker exec -ti f608a4a126e2e2b145dcf094b41c29bea1f7b8beeb38871178e0ea0ae8eab061 bash
$ touch /root/test
touch: cannot touch '/root/test': Permission denied
$ sh
bash: /bin/sh: Permission denied
$ touch /test

$ apparmor_parser /etc/apparmor.d/docker-nginx
$ aa-status
$ k run secure --image=nginx -oyaml --dry-run=client > pod.yaml
$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  annotations:                                                              # add this line
    container.apparmor.security.beta.kubernetes.io/secure: localhost/hello  # add this line
  labels:
    run: secure
  name: secure
spec:
  containers:
  - image: nginx
    name: secure
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

$ k create -f pod.yaml
$ k get pods secure
NAME     READY   STATUS    RESTARTS   AGE
secure   0/1     Blocked   0          6s
$ k describe pod secure
Annotations:  container.apparmor.security.beta.kubernetes.io/secure: localhost/hello
Status:       Pending
Reason:       AppArmor
Message:      Cannot enforce AppArmor: profile "hello" is not loaded

$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  annotations:
    container.apparmor.security.beta.kubernetes.io/secure: localhost/docker-nginx  # change this line
  labels:
    run: secure
  name: secure
spec:
  containers:
  - image: nginx
    name: secure
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

$ k create -f pod.yaml
$ k get pod secure
NAME     READY   STATUS    RESTARTS   AGE
secure   1/1     Running   0          10s