1. ImagePolicyWebhook
Switch cluster: kubectl config use-context k8s
Context
A container image scanner is set up on the cluster, but it's not yet fully integrated into the cluster's configuration. When complete, the container image scanner shall scan for and reject the use of vulnerable images.
Task:
You have to complete the entire task on the cluster's master node, where all services and files have been prepared and placed.
Given an incomplete configuration in directory /etc/kubernetes/aa and a functional container image scanner with HTTPS endpoint http://192.168.26.60:1323/image_policy:
1. enable the necessary plugins to create an image policy
2. validate the control configuration and change it to an implicit deny
3. edit the configuration to point to the provided HTTPS endpoint correctly
Finally, test if the configuration is working by trying to deploy the vulnerable resource /cks/1/web1.yaml.
Solution approach
Keywords: image_policy, deny (ImagePolicyWebhook)
1. Switch cluster, find the master node, and ssh to it.
2. ls /etc/kubernetes/xxx
3. vi /etc/kubernetes/xxx/xxx.yaml: change defaultAllow from true to false;
   vi /etc/kubernetes/xxx/xxx.yaml: fix the https address;
   the config directory must be mounted into the apiserver Pod as a volume.
4. Enable ImagePolicyWebhook and add --admission-control-config-file=
5. systemctl restart kubelet
6. kubectl run pod1 --image=nginx
$ ls /etc/kubernetes/aa/
admission_config.yaml apiserver-client-cert.pem apiserver-client-key.pem external-cert.pem external-key.pem kubeconf
$ cd /etc/kubernetes/aa
$ cat kubeconf
apiVersion: v1
kind: Config
# clusters refers to the remote service.
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/aa/external-cert.pem  # CA for verifying the remote service.
    server: http://192.168.26.60:1323/image_policy  # URL of remote service to query. Must use 'https'.
  name: image-checker
contexts:
- context:
    cluster: image-checker
    user: api-server
  name: image-checker
current-context: image-checker
preferences: {}
# users refers to the API server's webhook configuration.
users:
- name: api-server
  user:
    client-certificate: /etc/kubernetes/aa/apiserver-client-cert.pem  # cert for the webhook admission controller to use
    client-key: /etc/kubernetes/aa/apiserver-client-key.pem  # key matching the cert
$ cat admission_config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: /etc/kubernetes/aa/kubeconf
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: false
# Modify the kube-apiserver configuration
$ cat /etc/kubernetes/manifests/kube-apiserver.yaml
...
  - command:
    - kube-apiserver
    - --admission-control-config-file=/etc/kubernetes/aa/admission_config.yaml  # add this line
    - --advertise-address=192.168.211.40
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook  # modify this line
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
...
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/aa  # add this line
      name: k8s-admission  # add this line
      readOnly: true  # add this line
...
  - hostPath:  # add this line
      path: /etc/kubernetes/aa  # add this line
      type: DirectoryOrCreate  # add this line
    name: k8s-admission  # add this line
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
$ k get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 9d v1.20.1
node1 Ready <none> 9d v1.20.1
node2 Ready <none> 9d v1.20.1
# Pod creation fails (defaultAllow is false and the webhook endpoint is unreachable)
$ k run test --image=nginx
Error from server (Forbidden): pods "test" is forbidden: Post "https://external-service:1234/check-image?timeout=30s": dial tcp: lookup external-service on 8.8.8.8:53: no such host
# To verify that the implicit deny is what blocks Pod creation, temporarily set defaultAllow to true (set it back to false for the task)
$ vim /etc/kubernetes/aa/admission_config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: /etc/kubernetes/aa/kubeconf
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: true  # change this line to true
# Restart the kube-apiserver by moving the static-pod manifest out of /etc/kubernetes/manifests and back
$ ps -ef |grep api
root 78871 39023 0 20:17 pts/3 00:00:00 grep --color=auto api
$ mv ../kube-apiserver.yaml .
# Pod creation now succeeds
$ k run test --image=nginx
pod/test created
2. Detect anomalous Pod processes with sysdig
Switch cluster: kubectl config use-context k8s
You may use your browser to open one additional tab to access sysdig's documentation or Falco's documentation.
Task:
Use runtime detection tools to detect anomalous processes spawning and executing frequently in the single container belonging to Pod redis.
Two tools are available to use:
sysdig
falco
The tools are pre-installed on the cluster's worker node only; they are not available on the base system or the master node.
Using the tool of your choice (including any non-pre-installed tool), analyse the container's behaviour for at least 30 seconds, using filters that detect newly spawning and executing processes. Store an incident file at /opt/2/report, containing the detected incidents, one per line, in the following format:
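A minimal sketch using sysdig, assuming the required line format is timestamp,user,processName (adjust the -p fields to the format the exam specifies):
$ ssh node1   # worker node where the tools live
$ docker ps | grep redis   # or crictl ps; find the redis container's id
$ sysdig -M 30 -p "%evt.time,%user.name,%proc.name" container.id=<container-id> and evt.type=execve > /opt/2/report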
3. RBAC: reduce Role permissions
Switch cluster: kubectl config use-context k8s
Context
A Role bound to a Pod's serviceAccount grants overly permissive permissions.
Complete the following tasks to reduce the set of permissions.
Task
Given an existing Pod named web-pod running in the namespace monitoring, edit the Role bound to the Pod's serviceAccount sa-dev-1 to only allow performing list operations, only on resources of type Endpoints.
Create a new Role named role-2 in the namespace monitoring which only allows performing update operations, only on resources of type persistentvolumeclaims.
Create a new RoleBinding named role-2-binding, binding the newly created Role to the Pod's serviceAccount. (See the sketch below.)
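No walkthrough was recorded here; a minimal sketch, assuming the existing Role can be located through the namespace's RoleBindings:
$ kubectl -n monitoring get rolebindings -o wide   # find the Role bound to sa-dev-1
$ kubectl -n monitoring edit role <role-name>      # keep only verbs: ["list"], resources: ["endpoints"]
$ kubectl -n monitoring create role role-2 --verb=update --resource=persistentvolumeclaims
$ kubectl -n monitoring create rolebinding role-2-binding --role=role-2 --serviceaccount=monitoring:sa-dev-1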
4. AppArmor
Switch cluster: kubectl config use-context k8s
Context
AppArmor is enabled on the cluster's worker node. An AppArmor profile is prepared, but not enforced yet. You may use your browser to open one additional tab to access the AppArmor documentation.
Task
On the cluster's worker node, enforce the prepared AppArmor profile located at /etc/apparmor.d/nginx_apparmor. Edit the prepared manifest file located at /cks/4/pod1.yaml to apply the AppArmor profile. Finally, apply the manifest file and create the Pod specified in it. (See the sketch below.)
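A minimal sketch, assuming the profile declared inside nginx_apparmor is named nginx-profile and the Pod's container is named pod1 (check both names in the real files):
$ ssh node1   # the worker node
$ apparmor_parser -q /etc/apparmor.d/nginx_apparmor   # load the profile in enforce mode
$ aa-status | grep nginx   # verify it is loaded
$ vim /cks/4/pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  annotations:
    # format: container.apparmor.security.beta.kubernetes.io/<container-name>: localhost/<profile-name>
    container.apparmor.security.beta.kubernetes.io/pod1: localhost/nginx-profile
spec:
  containers:
  - name: pod1
    image: nginx
$ kubectl apply -f /cks/4/pod1.yaml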
5. PodSecurityPolicy
Switch cluster: kubectl config use-context k8s
Context
A PodSecurityPolicy shall prevent the creation of privileged Pods in a specific namespace.
Task
Create a new PodSecurityPolicy named prevent-psp-policy, which prevents the creation of privileged Pods.
Create a new ClusterRole named restrict-access-role, which uses the newly created PodSecurityPolicy prevent-psp-policy.
Create a new serviceAccount named pspdenial-sa in the existing namespace development.
Finally, create a new clusterRoleBinding named dany-access-bind, which binds the newly created ClusterRole restrict-access-role to the newly created serviceAccount. (See the sketch below.)
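No walkthrough was recorded; a minimal sketch following the PSP docs (the PodSecurityPolicy admission plugin must be enabled in the apiserver for the policy to take effect):
$ cat psp.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: prevent-psp-policy
spec:
  privileged: false  # the field this task cares about
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
$ kubectl create -f psp.yaml
$ kubectl create clusterrole restrict-access-role --verb=use --resource=podsecuritypolicies --resource-name=prevent-psp-policy
$ kubectl create serviceaccount pspdenial-sa -n development
$ kubectl create clusterrolebinding dany-access-bind --clusterrole=restrict-access-role --serviceaccount=development:pspdenial-sa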
6. NetworkPolicy: restrict Pod access
Switch cluster: kubectl config use-context k8s
Task
Create a NetworkPolicy named pod-access to restrict access to Pod products-service running in namespace development. Only allow the following Pods to connect to Pod products-service:
Pods in the namespace testing
Pods with label environment: staging, in any namespace
Make sure to apply the NetworkPolicy. You can find a skeleton manifest file at /cks/6/p1.yaml. (See the sketch below.)
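A sketch assuming the target Pod carries an app: products-service label and the testing namespace can be selected by a name label (check the real labels with kubectl get pods/ns --show-labels; on older clusters you may have to label the namespace yourself):
$ vim /cks/6/p1.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: pod-access
  namespace: development
spec:
  podSelector:
    matchLabels:
      app: products-service  # assumed label on the target Pod
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: testing  # assumed label on the testing namespace
    - namespaceSelector: {}  # any namespace ...
      podSelector:
        matchLabels:
          environment: staging  # ... but only Pods with this label
$ kubectl apply -f /cks/6/p1.yaml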
7. Fix Dockerfile and Deployment security issues
Switch cluster: kubectl config use-context k8s
Task
Analyze and edit the given Dockerfile (based on the ubuntu:16.04 image) /cks/7/Dockerfile, fixing two instructions present in the file that are prominent security/best-practice issues.
Analyze and edit the given manifest file /cks/7/deployment.yaml, fixing two fields present in the file that are prominent security/best-practice issues. (Notes on likely fixes follow the Dockerfile.)
# build container stage 1
FROM ubuntu:20.04
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y golang-go=2:1.13~1ubuntu2
COPY app.go .
RUN pwd
RUN CGO_ENABLED=0 go build app.go
# app container stage 2
FROM alpine:3.12.0
RUN addgroup -S appgroup && adduser -S appuser -G appgroup -h /home/appuser
RUN rm -rf /bin/*
COPY --from=0 /app /home/appuser/
USER appuser
CMD ["/home/appuser/app"]
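The notes don't record which two instructions/fields were changed. Plausible fixes for the file above (assumptions, not confirmed by the notes): pin the build stage to the ubuntu:16.04 base the task names instead of ubuntu:20.04, and drop the destructive RUN rm -rf /bin/* layer. In deployment.yaml the usual offenders are container securityContext fields like these:
securityContext:
  privileged: false  # was: true
  readOnlyRootFilesystem: true  # was: false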
8. Pod security
Switch cluster: kubectl config use-context k8s
Context
It is best practice to design containers to be stateless and immutable.
Task
Inspect Pods running in namespace testing and delete any Pod that is either not stateless or not immutable. Use the following strict interpretation of stateless and immutable:
Pods being able to store data inside containers must be treated as not stateless. (You don't have to worry whether data is actually stored inside containers or not already.)
Pods being configured to be privileged in any way must be treated as potentially not stateless and not immutable.
Solution approach
Keywords: stateless, immutable
1. Get all Pods in the namespace.
2. Check each Pod for privileged settings (privi*).
3. Check each Pod for volumes.
4. Delete every Pod that is privileged or has a volume.
$ kubectl get pod pod1 -n testing -o jsonpath={.spec.volumes} | jq
$ kubectl get pod sso -n testing -o yaml |grep "privi.*: true"
$ kubectl delete pod xxxxx -n testing
9. Create a ServiceAccount
Switch cluster: kubectl config use-context k8s
Context
A Pod fails to run because of an incorrectly specified ServiceAccount.
Task
Create a new ServiceAccount named frontend-sa in the existing namespace qa, which must not have access to any secrets. Inspect the Pod named frontend running in the namespace qa. Edit the Pod to use the newly created serviceAccount.
Keywords: ServiceAccount, "must not have access to any secrets"
1. Get an SA template:
$ kubectl create serviceaccount frontend-sa -n qa --dry-run=client -o yaml
2. Look up automatic token mounting (automountServiceAccountToken) in the official docs:
$ k edit pod frontend -n qa
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: frontend
  name: frontend
spec:
  serviceAccountName: frontend-sa  # add this line
  automountServiceAccountToken: false  # add this line
  containers:
  - image: nginx
    name: frontend
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
3. Set serviceAccountName in the Pod spec.
4. Recreate the Pod and delete the other, unused sa.
10. Scan images with trivy
Switch cluster: kubectl config use-context k8s
Task
Use the Trivy open-source container scanner to detect images with severe vulnerabilities used by Pods in the namespace yavin. Look for images with High or Critical severity vulnerabilities, and delete the Pods that use those images. Trivy is pre-installed on the cluster's master node only; it is not available on the base system or the worker nodes. You'll have to connect to the cluster's master node to use Trivy.
Keywords: Trivy, scanner, High or Critical
1. Switch cluster and ssh to the corresponding master.
2. Get the Pods and scan every image they use; none may have High or Critical vulnerabilities.
$ kubectl get pods --namespace yavin --output=custom-columns="NAME:.metadata.name,IMAGE:.spec.containers[*].image"
# Scan an image with trivy
$ trivy image nginx:latest | grep -E 'High|Critical'
or
$ trivy image -s HIGH,CRITICAL nginx:1.14.2
3. Delete the Pods that use vulnerable images.
$ kubectl delete pod xxx -n yavin --force
11. Create a secret
Switch cluster: kubectl config use-context k8s
Task
Retrieve the content of the existing secret named db1-test in the istio-system namespace. Store the username field in a file named /cks/11/old-username.txt, and the password field in a file named /cks/11/old-pass.txt. You must create both files; they don't exist yet. Do not use/modify the created files in the following steps; create new temporary files if needed. Create a new secret named test-workflow in the istio-system namespace, with the following content:
username: thanos
password: hahahaha
Finally, create a new Pod that has access to the secret test-workflow via a volume. (See the sketch below.)
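No walkthrough was recorded; a minimal sketch (the Pod name and image are assumptions; the exam specifies them):
$ kubectl get secret db1-test -n istio-system -o jsonpath='{.data.username}' | base64 -d > /cks/11/old-username.txt
$ kubectl get secret db1-test -n istio-system -o jsonpath='{.data.password}' | base64 -d > /cks/11/old-pass.txt
$ kubectl create secret generic test-workflow -n istio-system --from-literal=username=thanos --from-literal=password=hahahaha
$ cat secret-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-pod  # assumed name
  namespace: istio-system
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: test-workflow
$ kubectl create -f secret-pod.yaml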
12. Fix CIS benchmark findings (kube-bench)
Switch cluster: kubectl config use-context k8s
Context
A CIS Benchmark tool was run against the kubeadm-created cluster and found multiple issues that must be addressed immediately.
Task
Fix all issues via configuration and restart the affected components to ensure the new settings take effect.
Fix all of the following violations that were found against the API server:
1.2.7  Ensure that the --authorization-mode argument is not set to AlwaysAllow (FAIL)
1.2.8  Ensure that the --authorization-mode argument includes Node (FAIL)
1.2.9  Ensure that the --authorization-mode argument includes RBAC (FAIL)
1.2.18 Ensure that the --insecure-bind-address argument is not set (FAIL)
1.2.19 Ensure that the --insecure-port argument is set to 0 (FAIL)
Fix all of the following violations that were found against the kubelet:
4.2.1 Ensure that the anonymous-auth argument is set to false (FAIL)
4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (FAIL)
$ ps -ef | grep kube-apiserver
$ cd /etc/kubernetes/manifests/
$ vim kube-apiserver.yaml
--authorization-mode=Node,RBAC  # add or modify
--insecure-bind-address  # delete this flag
--insecure-port=0  # add or modify
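# Move the manifest out of the static-pod directory and back; the kubelet then recreates the apiserver Pod with the new flags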
$ mv kube-apiserver.yaml ../
$ mv ../kube-apiserver.yaml .
$ systemctl status kubelet
# Configure --anonymous-auth and --authorization-mode for the kubelet
$ vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_SYSTEM_PODS_ARGS=--anonymous-auth=false"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_AUTHZ_ARGS
$ systemctl daemon-reload
$ systemctl restart kubelet
$ systemctl status kubelet
$ ps -ef |grep kubelet |grep -v api
root 95185 1 13 19:44 ? 00:00:00 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2 --anonymous-auth=false --authorization-mode=Webhook
root 95621 80455 0 19:44 pts/0 00:00:00 grep --color=auto kubelet
$ docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro -t aquasec/kube-bench:latest master --version 1.20
...
[FAIL] 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
...
13. gVisor RuntimeClass
Switch cluster: kubectl config use-context k8s
Context
This cluster uses containerd as CRI runtime. Containerd's default runtime handler is runc. Containerd has been prepared to support an additional runtime handler, runsc (gVisor).
Task:
Create a RuntimeClass named untrusted using the prepared runtime handler named runsc. Update all Pods in the namespace client to run on gVisor, unless they are already running on a non-default runtime handler. You can find a skeleton manifest file at /cks/13/rc.yaml.
Keywords: gVisor
1. Switch cluster and create a RuntimeClass from the official docs:
$ vim rc.yaml
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: untrusted
handler: runsc
$ k -f rc.yaml create
2. Then, per the task, update the Pods in namespace client to use this runtime:
$ k edit pod mypod -n client
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: client
spec:
  runtimeClassName: untrusted  # add this line
...
14. Audit logging
Switch cluster: kubectl config use-context k8s
Task
Enable audit logs in the cluster. To do so, enable the log backend, and ensure that:
logs are stored at /var/log/kubernetes/audit-logs.txt
log files are retained for 5 days at maximum; a number of 10 audit log files are retained
A basic policy is provided at /etc/kubernetes/logpolicy/sample-policy.yaml; it only specifies what not to log. The base policy is located on the cluster's master node. Edit and extend the basic policy to log:
namespaces changes at RequestResponse level
the request body of pods changes in the namespace front-apps
configMap and secret changes in all namespaces at the Metadata level
Also, add a catch-all rule to log all other requests at the Metadata level. Don't forget to apply the modified policy. (See the sketch below.)
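No walkthrough was recorded; a minimal sketch following the audit-logging docs. The rules below are appended after the provided "what not to log" rules, and /etc/kubernetes/logpolicy plus /var/log/kubernetes must be mounted into the apiserver Pod if they aren't already:
$ vim /etc/kubernetes/logpolicy/sample-policy.yaml
# (keep the existing rules, then append:)
- level: RequestResponse
  resources:
  - group: ""
    resources: ["namespaces"]
- level: Request
  resources:
  - group: ""
    resources: ["pods"]
  namespaces: ["front-apps"]
- level: Metadata
  resources:
  - group: ""
    resources: ["configmaps", "secrets"]
- level: Metadata  # catch-all
$ vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --audit-policy-file=/etc/kubernetes/logpolicy/sample-policy.yaml
- --audit-log-path=/var/log/kubernetes/audit-logs.txt
- --audit-log-maxage=5
- --audit-log-maxbackup=10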
15. Default-deny NetworkPolicy
Switch cluster: kubectl config use-context k8s
Context
A default-deny NetworkPolicy avoids accidentally exposing a Pod in a namespace that doesn't have any other NetworkPolicy defined.
Task
Create a new default-deny NetworkPolicy named denynetwork in the namespace development for all traffic of type Ingress. The new NetworkPolicy must deny all Ingress traffic in the namespace development. Apply the newly created default-deny NetworkPolicy to all Pods running in namespace development. You can find a skeleton manifest file.
Keywords: NetworkPolicy, default deny
1. Check carefully whether the task wants a default deny-all or something more specific, then write the YAML from the official docs accordingly:
$ cat denynetwork.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: denynetwork
  namespace: development
spec:
  podSelector: {}
  policyTypes:
  - Ingress
$ k create -f denynetwork.yaml
16. falco: change the detection output format
$ ssh node1
$ systemctl stop falco
$ falco
$ cd /etc/falco/
$ ls
falco_rules.local.yaml falco_rules.yaml falco.yaml k8s_audit_rules.yaml rules.available rules.d
$ grep -r "A shell was spawned in a container with an attached terminal" *
falco_rules.yaml: A shell was spawned in a container with an attached terminal (user=%user.name user_loginuid=%user.loginuid %container.info
# Update the rule via a local override
root@node1:/etc/falco# cat falco_rules.local.yaml
- rule: Terminal shell in container
  desc: A shell was used as the entrypoint/exec point into a container with an attached terminal.
  condition: >
    spawned_process and container
    and shell_procs and proc.tty != 0
    and container_entrypoint
    and not user_expected_terminal_shell_in_container_conditions
  output: >
    %evt.time,%user.name,%container.name,%container.id
    shell=%proc.name parent=%proc.pname cmdline=%proc.cmdline terminal=%proc.tty container_id=%container.id image=%container.image.repository)
  priority: WARNING
  tags: [container, shell, mitre_execution]
$ falco
Mon May 24 00:07:13 2021: Falco version 0.28.1 (driver version 5c0b863ddade7a45568c0ac97d037422c9efb750)
Mon May 24 00:07:13 2021: Falco initialized with configuration file /etc/falco/falco.yaml
Mon May 24 00:07:13 2021: Loading rules from file /etc/falco/falco_rules.yaml:
Mon May 24 00:07:13 2021: Loading rules from file /etc/falco/falco_rules.local.yaml: # the override is loaded
Mon May 24 00:07:13 2021: Loading rules from file /etc/falco/k8s_audit_rules.yaml:
Mon May 24 00:07:14 2021: Starting internal webserver, listening on port 8765
00:07:30.297671117: Warning Shell history had been deleted or renamed (user=root user_loginuid=-1 type=openat command=bash fd.name=/root/.bash_history name=/root/.bash_history path=<NA> oldpath=<NA> k8s_apache_apache_default_3ece2efb-fe49-4111-899f-10d38a61bab6_0 (id=84dd6fe8a9ad))
# The output format has changed:
00:07:33.763063865: Warning 00:07:33.763063865,root,k8s_apache_apache_default_3ece2efb-fe49-4111-899f-10d38a61bab6_0,84dd6fe8a9ad shell=bash parent=runc cmdline=bash terminal=34816 container_id=84dd6fe8a9ad image=httpd)