- Pod in Detail
  - Pod Configuration
    - Minimal configuration
    - Adding an image pull policy
    - Adding environment variables and a startup command
    - Adding port settings
    - Adding container resource limits
  - Pod lifecycle
  - Container probes
  - Restart policy
  - Pod scheduling
    - Automatic scheduling
    - Targeted scheduling with nodeName
    - Targeted scheduling with nodeSelector
    - Affinity-based scheduling
    - Taints and tolerations
  - Pod controllers
    - ReplicaSet (RS)
    - Deployment (Deploy)
      - Creating a deployment
      - Scaling a deploy
      - Deployment update strategies
      - Version rollback and updates
    - HPA autoscaling
    - DaemonSet (DS)
    - Job
    - CronJob (CJ)
  - Service
    - ClusterIP Service
    - Headless Service
    - NodePort Service
    - LoadBalancer Service
    - ExternalName Service
    - Accessing a Service across namespaces by Service name
  - Ingress and IngressController
- Pod in Detail
# The Pause container is the root container of every pod: it is used to assess the pod's health and provides the pod-internal network namespace
# Read the following commands alongside a yaml file to understand it
kubectl explain pod # list a resource's top-level fields
kubectl explain pod.spec
kubectl explain pod.metadata # list the sub-fields under a top-level field, per Resource
kubectl explain pod.metadata.labels
kubectl explain pod.spec.containers
# ******* understand the yaml file layer by layer *******
kubectl api-versions # list all API versions
kubectl api-resources # list all API resources
# ******* In Kubernetes, the top-level fields of almost every resource are the same; there are 5 main parts ******
1 apiVersion <string> API version
2 kind <string> resource type
3 metadata <Object> metadata identifying and describing the resource; commonly name, namespace, labels
4 spec <Object> specification; the most important part of the config, holding the detailed description of the resource
5 status <Object> # status information; not set by the user, generated automatically by Kubernetes
# Of these 5 fields, [spec] is the key one; its common sub-fields are:
1 containers <[]Object> container list, defining each container in detail
2 nodeName <string> schedule the pod onto the Node with this name
3 nodeSelector <map[string]string> schedule the pod onto nodes carrying all of these labels
4 hostNetwork <boolean> whether to use the host network; defaults to false; true means the pod shares the host's network
5 volumes <[]Object> volumes, defining the storage mounted into the pod
6 restartPolicy <string> restart policy, i.e. how the pod reacts when a container fails
- Pod Configuration
- Minimal configuration
apiVersion: v1 # API version
kind: Pod # resource type
metadata: # metadata
  name: pod-test # pod name
  namespace: devlop # namespace
  labels: # labels
    user: "aikey"
spec: # specification
  containers:
  - name: nginx220 # container name
    image: nginx:latest # image name
# Show the pod's details
kubectl describe pod pod-test -n devlop
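To run the example end to end, a minimal sequence looks like this (assuming the manifest above is saved as pod-test.yml and the devlop namespace does not exist yet; both names come from the example):
kubectl create namespace devlop # create the namespace used throughout these examples
kubectl apply -f pod-test.yml # create the pod from the manifest
kubectl get pods -n devlop -o wide # confirm the pod reaches Running and see its node and IP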
- Adding an image pull policy
apiVersion: v1
kind: Pod
metadata:
  name: pod-imagepullpolicy
  namespace: devlop
  labels:
    pullimage: always
spec:
  containers:
  - name: nginx2022
    image: nginx:latest
    imagePullPolicy: IfNotPresent # three pull policies: Always, IfNotPresent, Never
  - name: tomcat2021
    image: tomcat:latest
    imagePullPolicy: IfNotPresent
# Default values
# Always: pull the image every time the Pod is created
# IfNotPresent: pull only when the image is not already present on the host
# Never: never pull the image, only use local images
# If the image tag is a specific version, the default policy is IfNotPresent
# If the image tag is latest, the default policy is Always
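To confirm which policy ended up on each container, a jsonpath query works (pod name as above):
kubectl get pod pod-imagepullpolicy -n devlop -o jsonpath='{.spec.containers[*].imagePullPolicy}'
# expected output: IfNotPresent IfNotPresent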
- Adding environment variables and a startup command
apiVersion: v1
kind: Pod
metadata:
  name: pod-command
  namespace: devlop
  labels:
    type: command-test
spec:
  containers:
  - name: nginx2024
    image: nginx:1.8
    imagePullPolicy: IfNotPresent
  - name: centos2021
    image: centos:7.9.2009
    imagePullPolicy: IfNotPresent
    command: ["/bin/bash", "-ce", "tail -f /dev/null"] # keep the container from exiting
    env: # environment variable list
    - name: "username"
      value: "admin"
    - name: "password"
      value: "123456"
# Enter a container inside the pod
kubectl exec <pod-name> -n <namespace> -it -c <container-name> -- <command to run inside the container>
# kubectl exec pod-command -n devlop -it -c centos2021 -- /bin/bash
kubectl exec -it pod/pod-command -c centos2021 -n devlop -- /bin/bash # the env vars are defined on centos2021, so exec into that container
echo $username # prints the environment variable: admin
echo $password # prints the environment variable: 123456
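Besides literal values, env entries can also pull data from the pod itself via the downward API; a small fragment to drop into a container's env list (the POD_NAME and POD_IP variable names are illustrative):
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name # injects the pod's own name
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP # injects the pod's IP address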
- Adding port settings
apiVersion: v1
kind: Pod
metadata:
  name: pod-port
  namespace: devlop
  labels:
    type: command-test
spec:
  containers: # container settings
  - name: nginx2026
    image: nginx:1.8
    imagePullPolicy: IfNotPresent
  - name: centos2026
    image: centos:7.9.2009
    imagePullPolicy: IfNotPresent
    command: ["/bin/bash","-ce", "tail -f /dev/null"] # keep the container from exiting
    env:
    - name: "username"
      value: "admin"
    - name: "password"
      value: "123456"
    ports: # list of exposed container ports  ports <[]Object>
    - name: pod-port # port name; if specified, it must be unique within the pod
      containerPort: 80 # the port the container listens on
      protocol: TCP # port protocol
      # hostPort: # usually omitted; setting it makes the pod claim that host port exclusively
      # hostIP: # usually omitted; the host IP to bind the external port to
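A quick way to reach the container port without creating a Service is kubectl port-forward (the local port 8080 is arbitrary):
kubectl port-forward pod/pod-port 8080:80 -n devlop # forward local port 8080 to port 80 in the pod
curl http://127.0.0.1:8080 # in a second terminal: should return the nginx welcome page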
- Adding container resource limits
apiVersion: v1
kind: Pod
metadata:
  name: pod-resources
  namespace: devlop
  labels:
    type: command-type
spec:
  containers:
  - name: nginx2026
    image: nginx:1.8
  - name: centos2026
    image: centos:7.9.2009
    imagePullPolicy: IfNotPresent
    command: ["/bin/bash", "-ce", "tail -f /dev/null"]
    env:
    - name: "username"
      value: "admin"
    - name: "password"
      value: "123456"
    ports:
    - name: pod-port1 # port name; if specified, it must be unique within the pod
      containerPort: 80 # the port the container listens on
      protocol: TCP # must be UDP, TCP or SCTP; defaults to TCP
    resources: # resource quota
      limits: # upper bound; a container that exceeds its memory limit is killed and restarted
        cpu: "2" # CPU limit, in cores
        memory: "10Gi" # memory limit
      requests: # lower bound; if no node has this much free capacity, the pod stays Pending
        cpu: "1" # CPU request, in cores
        memory: "6Gi" # memory request
- Pod lifecycle
# A pod's phase has 5 possible values:
# Pending - accepted by the cluster, but not yet scheduled or still pulling images
# Running - bound to a node, with at least one container running
# Succeeded - all containers terminated successfully and will not be restarted
# Failed - all containers terminated and at least one exited with a non-zero code
# Unknown - the pod's state cannot be determined, typically because its node is unreachable
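The current phase can be queried directly, for example:
kubectl get pod pod-test -n devlop -o jsonpath='{.status.phase}'
# expected output: Running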
- Container probes
# liveness probe: checks whether the container is still healthy; if not, Kubernetes restarts the container
# readiness probe: checks whether the container can accept requests; if not, Kubernetes stops forwarding traffic to it
# livenessProbe decides whether to restart the container; readinessProbe decides whether to route traffic to it (a readinessProbe sketch follows the three examples below)
# Probe method 1: exec
apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness
  namespace: devlop
  labels:
    type: portstart-test
spec:
  containers:
  - name: nginx2032
    image: nginx:1.8
    ports:
    - name: pod-port1
      containerPort: 80
      protocol: TCP
    livenessProbe:
      exec:
        command:
        - "/bin/cat"
        - "/tmp/hello.txt"
---
# Probe method 2: tcpSocket
apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness
  namespace: devlop
  labels:
    type: portstart-test
spec:
  containers:
  - name: nginx2032
    image: nginx:1.8
    ports:
    - name: pod-port1
      containerPort: 80
    livenessProbe:
      tcpSocket:
        port: 8080 # probing 8080 fails here, since nginx listens on 80; the container keeps restarting
---
# Probe method 3: httpGet
apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness
  namespace: devlop
  labels:
    type: portstart-test
spec:
  containers:
  - name: nginx2032
    image: nginx:1.8
    ports:
    - name: pod-port1
      containerPort: 80
    livenessProbe:
      httpGet:
        scheme: HTTP
        host: 127.0.0.1
        port: 80
        path: /index2.html
      initialDelaySeconds: 10
      timeoutSeconds: 20
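The examples above all use livenessProbe; a readinessProbe takes exactly the same probe methods. A minimal httpGet sketch (the path and timings are illustrative):
    readinessProbe: # the pod only receives Service traffic while this probe succeeds
      httpGet:
        scheme: HTTP
        port: 80
        path: /index.html
      initialDelaySeconds: 5
      periodSeconds: 10 # how often to probe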
- Pod restart policy
# restartPolicy values: Always (default), OnFailure (restart only when a container exits with a non-zero code), Never (never restart)
# Restart back-off: 10s, 20s, 40s, 80s, 160s, 300s; 300s is the maximum delay between restarts
apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness
  namespace: devlop
  labels:
    type: portstart-test
spec:
  containers: # container list  containers <[]Object>
  - name: nginx2032
    image: nginx:1.8
    ports: # list of exposed container ports  ports <[]Object>
    - name: pod-port1 # port name; if specified, it must be unique within the pod
      containerPort: 80 # the port the container listens on
    livenessProbe: # liveness probe
      httpGet:
        scheme: HTTP # HTTP or HTTPS
        host: 127.0.0.1 # host address
        port: 80 # port number
        path: /index2.html # URL path
      initialDelaySeconds: 10
      timeoutSeconds: 20
  restartPolicy: Always # or OnFailure / Never
- Pod scheduling
# Automatic scheduling
# Targeted scheduling: nodeName, nodeSelector
# Affinity scheduling: NodeAffinity, PodAffinity, PodAntiAffinity (anti-affinity)
# Taint and toleration scheduling: Taints, Tolerations
9.1 Automatic scheduling
# With no scheduling constraints specified, the scheduler picks a node on its own
apiVersion: v1
kind: Pod
metadata:
  name: autoselectnode
  namespace: dev
spec:
  containers:
  - name: nginx-auto
    image: nginx:1.20.0
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
# With no scheduling constraints, the pod may be placed on any node
9.2 Targeted scheduling with nodeName
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodename
  namespace: devlop
spec:
  containers:
  - name: nginx36
    image: nginx:1.8
  nodeName: k8s-node1 # schedule directly onto node1, bypassing the scheduler
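Verify the placement:
kubectl get pod pod-nodename -n devlop -o wide # the NODE column should show k8s-node1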
9.3 Targeted scheduling with nodeSelector
# 2 nodeSelector requires labeling the target node first
[root@k8s-master1 yml]# kubectl label node k8s-node2 "web=nginx" # label the node
node/k8s-node2 labeled
# Example 10: podtest12.yml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeselector
  namespace: devlop
spec:
  containers:
  - name: nginx37
    image: nginx:1.8
  nodeSelector:
    web: nginx # schedule onto a node carrying the label web=nginx
9.4 Affinity-based scheduling
# Affinity scheduling: nodeAffinity (node affinity), podAffinity (pod affinity), podAntiAffinity (pod anti-affinity)
# Affinity: pods that interact frequently should land close together; anti-affinity: spread replicas across nodes for high availability
# The three share essentially the same parameters; nodeAffinity is demonstrated here, and a podAntiAffinity sketch follows the two examples
[1] nodeAffinity
[root@k8s-master1 yml]# kubectl explain pod.spec.affinity.nodeAffinity
(1) preferredDuringSchedulingIgnoredDuringExecution <[]Object> # prefer nodes that satisfy the rules; a soft constraint (preference)
    - weight # preference weight, in the range 1-100
      preference <Object> # a node selector term, associated with the weight
        matchFields # node selector requirements by node field
        matchExpressions # node selector requirements by node label -- recommended
        - key # the label key
          values # the label values
          operator # operator; supports Exists, DoesNotExist, In, NotIn, Gt (greater than), Lt (less than)
(2) requiredDuringSchedulingIgnoredDuringExecution <Object> # the node must satisfy all the rules; a hard constraint
      nodeSelectorTerms <[]Object> # node selector term list
        matchFields # node selector requirements by node field
        matchExpressions # node selector requirements by node label -- recommended
        - key # the label key
          values # the label values
          operator # operator; supports Exists, DoesNotExist, In, NotIn, Gt (greater than), Lt (less than)
---
# Example 11: podtest13.yml [01] requiredDuringSchedulingIgnoredDuringExecution (hard constraint)
apiVersion: v1
kind: Pod
metadata:
  name: pod-affinity
  namespace: devlop
spec:
  containers:
  - name: nginx38
    image: nginx:1.8
  affinity: # affinity settings
    nodeAffinity: # node affinity
      requiredDuringSchedulingIgnoredDuringExecution: # hard constraint; the pod stays Pending if nothing matches
        nodeSelectorTerms:
        - matchExpressions: # match nodes whose label web is in ["nginx","12112"]
          - key: web
            operator: In
            values: ["nginx","12112"]
# ---------------------------------------------------------------------
# Example 12: podtest14.yml [02] preferredDuringSchedulingIgnoredDuringExecution (soft constraint)
apiVersion: v1
kind: Pod
metadata:
  name: pod-affinity
  namespace: devlop
spec:
  containers:
  - name: nginx39
    image: nginx:1.8
  affinity: # affinity settings
    nodeAffinity: # node affinity
      preferredDuringSchedulingIgnoredDuringExecution: # soft constraint; matching nodes are preferred, but any node may be used
      - weight: 1
        preference:
          matchExpressions: # match nodes whose label web is in ["pppp","12112"] -- none exist in this environment
          - key: web
            operator: In
            values: ["pppp","12112"]
9.5 Taints and tolerations
# Taints and tolerations
# Taint format: key=value:effect, where key and value label the taint
# effect supports three values:
# [1] PreferNoSchedule: Kubernetes tries to avoid placing pods on the tainted node, unless no other node fits
# [2] NoSchedule: Kubernetes will not place new pods on the tainted node, but pods already running on it are unaffected
# [3] NoExecute: Kubernetes will not place new pods on the tainted node and evicts the pods already running there
# Add a taint
# syntax: kubectl taint nodes node1 key=value:effect
# Remove a taint
# syntax: kubectl taint nodes node1 key:effect-
# Remove all taints for a key
# syntax: kubectl taint nodes node1 key-
# Example 13: podtest15.yml
[root@k8s-master1 yml]# kubectl taint nodes k8s-node1 test=pro:PreferNoSchedule # taint the node
node/k8s-node1 tainted
[root@k8s-master1 yml]# kubectl run taint-nginx35 --image=nginx:1.8 -n devlop
pod/taint-nginx35 created # the pod avoids node1 unless no other node can take it
[root@k8s-master1 yml]# kubectl taint nodes k8s-node1 test=pro:PreferNoSchedule- # remove the taint
node/k8s-node1 untainted # the PreferNoSchedule taint is gone
[root@k8s-master1 yml]# kubectl taint nodes k8s-node1 test=pro:NoSchedule
node/k8s-node1 tainted # add a NoSchedule taint
[root@k8s-master1 yml]# kubectl run taint-nginx38 --image=nginx:1.8 -n devlop
pod/taint-nginx38 created # new pods no longer land on node1; pods already running there are untouched
[root@k8s-master1 yml]# kubectl taint nodes k8s-node1 test- # remove all taints with key "test"
node/k8s-node1 untainted # all "test" taints removed from node1
[root@k8s-master1 yml]# kubectl get pods -n devlop -o wide # check which pods run on node3
NAME                         READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
nginx2023-8566f5fd46-x9x4t   1/1     Running   0          26h    10.10.3.5    k8s-node3   <none>           <none>
pod-affinity                 1/1     Running   0          44m    10.10.1.31   k8s-node1   <none>           <none>
taint-nginx35                1/1     Running   0          11m    10.10.3.12   k8s-node3   <none>           <none>
taint-nginx38                1/1     Running   0          6m5s   10.10.3.13   k8s-node3   <none>           <none>
[root@k8s-master1 yml]# kubectl taint nodes k8s-node3 test=pre:NoExecute # add a NoExecute taint to node3
node/k8s-node3 tainted
[root@k8s-master1 yml]# kubectl get pods -n devlop -o wide # the pods on node3 are being evicted
NAME                         READY   STATUS        RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
nginx2023-8566f5fd46-tcspl   1/1     Running       0          10s     10.10.2.21   k8s-node2   <none>           <none>
pod-affinity                 1/1     Running       0          45m     10.10.1.31   k8s-node1   <none>           <none>
taint-nginx38                0/1     Terminating   0          7m19s   10.10.3.13   k8s-node3   <none>           <none>
[root@k8s-master1 yml]# kubectl run nginx333 --image=nginx:1.8 -n devlop # new pods cannot be scheduled onto node3
pod/nginx333 created
[root@k8s-master1 yml]# kubectl get pods -n devlop -o wide
NAME                         READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
nginx2023-8566f5fd46-tcspl   1/1     Running   0          2m27s   10.10.2.21   k8s-node2   <none>           <none>
nginx333                     1/1     Running   0          3s      10.10.2.22   k8s-node2   <none>           <none>
pod-affinity                 1/1     Running   0          48m     10.10.1.31   k8s-node1   <none>           <none>
# Tolerations: see kubectl explain pod.spec.tolerations
# Example 13: podtest15.yml
apiVersion: v1
kind: Pod
metadata:
  name: pod-toleration
  namespace: devlop
spec:
  containers:
  - name: nginx39
    image: nginx:1.8
  tolerations: # add a toleration
  - key: "test" # key of the taint to tolerate
    operator: "Equal" # operator
    value: "pre" # value of the taint to tolerate
    effect: "NoExecute" # effect to tolerate; must match the taint set on the node
  nodeName: k8s-node3 # schedule directly onto node3
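Check which taints a node currently carries:
kubectl describe node k8s-node3 | grep Taints # e.g. Taints: test=pre:NoExecute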
- Pod controllers
# Pod controllers
# [standalone pods] are gone for good once deleted
# [controller-managed pods] are recreated automatically after deletion
10.1 ReplicaSet(RS)
# Keeps a given number of pods running; supports scaling the replica count and upgrading the image version
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pc-replicaset
  namespace: devlop
spec:
  replicas: 9 # desired replica count; change it to scale in or out
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx2039
        image: nginx:1.8
-----------------------------------------------------------------------
[root@k8s-master1 yml]# kubectl get rs pc-replicaset -n devlop -o wide # inspect
NAME            DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES      SELECTOR
pc-replicaset   9         9         9       24s   nginx2039    nginx:1.8   app=nginx-pod
[root@k8s-master1 yml]# kubectl delete rs pc-replicaset -n devlop # delete
replicaset.apps "pc-replicaset" deleted
[root@k8s-master1 yml]# kubectl edit rs pc-replicaset -n devlop # edit the rs in place to scale it or change the image version
replicaset.apps/pc-replicaset edited
[root@k8s-master1 yml]# kubectl scale rs pc-replicaset --replicas=9 -n devlop # scale from the command line
replicaset.apps/pc-replicaset scaled
[root@k8s-master1 yml]# kubectl set image rs pc-replicaset nginx2039=nginx:1.7 -n devlop # upgrade/downgrade the image from the command line
replicaset.apps/pc-replicaset image updated
[root@k8s-master1 yml]# kubectl delete -f podtest16.yml # delete the rs through its yml file -- recommended, easier to manage
replicaset.apps "pc-replicaset" deleted
10.2 Deployment(Deploy)
10.2.1 Creating a deployment
# Deployment (Deploy) manages pods through ReplicaSets, adding version rollback and rolling updates
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pc-deployment
  namespace: devlop
spec:
  replicas: 9 # desired replica count; change it to scale in or out
  selector:
    matchLabels:
      app: nginx-pod
  template: # pod template
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx2040
        image: nginx:1.8
10.2.2 Scaling a deploy
[root@k8s-master1 yml]# kubectl scale deploy pc-deployment --replicas=6 -n devlop # scale from the command line
deployment.apps/pc-deployment scaled
[root@k8s-master1 yml]# kubectl edit deploy pc-deployment -n devlop # edit in place to scale or change the image
deployment.apps/pc-deployment edited
10.2.3 Deployment update strategies
# deployment update strategies: Recreate and RollingUpdate
# Example 18: podtest18.yml
# 1 Recreate
apiVersion: apps/v1
kind: Deployment # resource type
metadata:
  name: pc-deployment
  namespace: devlop
spec:
  strategy: # update strategy
    type: Recreate # kill all existing pods first, then create new ones
  replicas: 9 # desired replica count; change it to scale in or out
  selector:
    matchLabels:
      app: nginx-pod
  template: # pod template
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx2040
        image: nginx:1.8
[root@k8s-master1 yml]# kubectl apply -f podtest17.yml --record=true
deployment.apps/pc-deployment configured
[root@k8s-master1 yml]# kubectl set image deploy pc-deployment nginx2040=nginx:1.7 -n devlop # update from the command line
deployment.apps/pc-deployment image updated
# -----------------------------------------------------#
# 2 RollingUpdate
# Example 19: podtest19.yml
apiVersion: apps/v1
kind: Deployment # resource type
metadata:
  name: pc-deployment
  namespace: devlop
spec:
  strategy: # update strategy
    type: RollingUpdate # replace pods gradually
    rollingUpdate:
      maxUnavailable: 25% # default; at most 25% of the desired pods may be unavailable during the update
      maxSurge: 25% # default; at most 25% extra pods may exist during the update
  replicas: 9 # desired replica count; change it to scale in or out
  selector:
    matchLabels:
      app: nginx-pod
  template: # pod template
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx2040
        image: nginx:1.8
[root@k8s-master1 yml]# kubectl apply -f podtest17.yml --record=true
deployment.apps/pc-deployment configured
[root@k8s-master1 yml]# kubectl set image deploy pc-deployment nginx2040=nginx:1.8 -n devlop # update from the command line
deployment.apps/pc-deployment image updated
10.2.4 Version rollback and updates
# ============ version rollback ============
# kubectl rollout # version management; supports these subcommands:
# status  show the current rollout status
# history show the rollout history
# pause   pause a rollout
# resume  resume a paused rollout
# restart restart a rollout
# undo    roll back to the previous revision (use --to-revision to target a specific one)
[root@k8s-master1 yml]# kubectl apply -f podtest17.yml --record=true # --record stores the change cause
deployment.apps/pc-deployment created
[root@k8s-master1 yml]# kubectl rollout status deploy pc-deployment -n devlop
deployment "pc-deployment" successfully rolled out
[root@k8s-master1 yml]# kubectl rollout history deploy pc-deployment -n devlop
deployment.apps/pc-deployment
# revision number and recorded change cause
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=podtest17.yml --record=true
2         kubectl apply --filename=podtest17.yml --record=true
[root@k8s-master1 yml]# kubectl rollout undo deploy pc-deployment --to-revision=1 -n devlop # roll back to revision 1
deployment.apps/pc-deployment rolled back
[root@k8s-master1 yml]# kubectl rollout history deploy pc-deployment -n devlop # check the revision list again after the rollback
deployment.apps/pc-deployment
REVISION  CHANGE-CAUSE
2         kubectl apply --filename=podtest17.yml --record=true
3         kubectl apply --filename=podtest17.yml --record=true
# Canary release (rolling out a new version)
[root@k8s-master1 yml]# kubectl set image deploy pc-deployment nginx2040=nginx:1.8 -n devlop && kubectl rollout pause deploy pc-deployment -n devlop # start the update and immediately pause it; use case: test the new pods first, resume if they look good, roll back if not
deployment.apps/pc-deployment paused
[root@k8s-master1 yml]# kubectl get rs -n devlop -o wide # inspect the replica sets
NAME                       DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES      SELECTOR
pc-deployment-7bb84c497b   7         7         7       13m   nginx2040    nginx:1.7   app=nginx-pod,pod-template-hash=7bb84c497b
pc-deployment-8b5965d45    5         5         5       14m   nginx2040    nginx:1.8   app=nginx-pod,pod-template-hash=8b5965d45
[root@k8s-master1 yml]# kubectl rollout status deploy pc-deployment -n devlop # check the rollout status
Waiting for deployment "pc-deployment" rollout to finish: 5 out of 9 new replicas have been updated...
[root@k8s-master1 yml]# kubectl set image deploy pc-deployment nginx2040=nginx:1.7 -n devlop && kubectl rollout resume deploy pc-deployment -n devlop # resume the paused rollout
deployment.apps/pc-deployment image updated
deployment.apps/pc-deployment resumed
- HPA autoscaling
# [1] install the metrics-server add-on (it supplies the resource metrics HPA consumes)
yum install -y git
git clone -b v0.3.6 https://github.com/kubernetes-incubator/metrics-server
kubectl apply -f ./
[root@k8s-master1 1.8+]# kubectl top nodes # view node resource usage
NAME          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master1   202m         5%     1332Mi          36%
k8s-node1     108m         2%     455Mi           26%
k8s-node2     93m          2%     469Mi           27%
k8s-node3     107m         2%     662Mi           38%
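With metrics-server reporting, an HPA can drive the earlier deployment; a minimal sketch using the autoscaling/v1 API (the pc-hpa name and the 50% target are illustrative):
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: pc-hpa
  namespace: devlop
spec:
  minReplicas: 1 # lower bound on replicas
  maxReplicas: 10 # upper bound on replicas
  targetCPUUtilizationPercentage: 50 # scale to keep average CPU utilization near 50%
  scaleTargetRef: # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: pc-deployment
# watch it act: kubectl get hpa -n devlop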
- DaemonSet(DS)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: pc-daemon
  namespace: devlop
spec:
  selector:
    matchLabels:
      app: nginx-daemon
  template:
    metadata:
      labels:
        app: nginx-daemon
    spec:
      containers:
      - name: nginx2042
        image: nginx:1.8
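A DaemonSet runs exactly one replica on every schedulable node (tainted nodes, e.g. the master, are skipped unless tolerated), so the pod count should track the node count:
kubectl get ds pc-daemon -n devlop # DESIRED/CURRENT should equal the number of schedulable nodes
kubectl get pods -n devlop -o wide # one pc-daemon pod per node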
- Job
apiVersion: batch/v1
kind: Job
metadata:
  name: pc-job
  namespace: devlop
spec:
  manualSelector: true
  completions: 9 # how many pods must finish successfully; defaults to 1
  parallelism: 3 # how many pods may run concurrently at any time; defaults to 1
  selector:
    matchLabels:
      app: job-pod
  template:
    metadata:
      labels:
        app: job-pod
    spec:
      restartPolicy: Never
      containers:
      - name: job-test2022
        image: centos:7.9.2009
        command:
        - "/bin/sh"
        - "-c"
        - "for i in $(seq 1 10); do echo ${i}; sleep 0.5; done"
- CronJob (CJ) scheduled tasks
apiVersion: batch/v1
kind: CronJob
metadata:
  name: pc-cronjob
  namespace: devlop
  labels:
    controller: cronjob
spec:
  schedule: "*/1 * * * *" # standard cron expression: run once every minute
  jobTemplate: # unlike the earlier controllers, this nests a Job template
    metadata:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: cronjob2043
            image: centos:7.9.2009
            command: # an inline alternative: ["/bin/sh","-c","for i in 5 4 3 2 1;do echo ${i};sleep 3;done"]
            - "/bin/sh"
            - "-c"
            - "for i in `seq 1 10`;do echo ${i};sleep 1;done"
- Service
A test Deployment backing the Service examples
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-test
  namespace: devlop
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-web
  template:
    metadata:
      labels:
        app: nginx-web
    spec:
      containers:
      - name: nginx2049
        image: nginx:1.8
        lifecycle: # a postStart hook writes a unique page, so each pod's response is distinguishable
          postStart:
            exec:
              command:
              - "/bin/bash"
              - "-c"
              - "echo `date +%F' '%T' '%A ;echo ${RANDOM}${RANDOM} |md5sum` > /usr/share/nginx/html/index.html"
        ports:
        - name: web-port
          containerPort: 80
          protocol: TCP
- ClusterIP Service
apiVersion: v1
kind: Service
metadata:
  name: service-clusterip
  namespace: devlop
spec:
  sessionAffinity: ClientIP # pin each client IP to the same backend pod
  selector:
    app: nginx-web # the Service finds its pods through this selector; the matching pod IPs are assembled into an Endpoints object. Without a selector, no Endpoints object is created automatically when the Service is created.
  clusterIP: # the Service IP; if left empty, one is allocated automatically
  type: ClusterIP
  ports:
  - port: 80 # Service port
    targetPort: 80 # pod port; must be the port the application actually listens on, otherwise the Service is unreachable
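Check that the Service picked up its endpoints, then test it from inside the cluster (substitute the IP kubectl reports):
kubectl get svc,endpoints -n devlop # shows the allocated ClusterIP and the three pod IPs behind it
curl http://<CLUSTER-IP> # from any cluster node; repeated calls stick to one pod because of sessionAffinity: ClientIP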
- Headless Service
apiVersion: v1
kind: Service
metadata:
  name: service-headline
  namespace: devlop
spec:
  selector:
    app: nginx-web
  clusterIP: None # None makes this a headless Service: no virtual IP is allocated
  type: ClusterIP
  ports:
  - port: 80 # Service port
    targetPort: 80 # pod port; must be the port the application actually listens on
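Without a cluster IP, clients find the pods through DNS; from inside any pod that has nslookup (cluster.local assumes the default cluster domain):
nslookup service-headline.devlop.svc.cluster.local # resolves to the individual pod IPs instead of one Service IP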
- NodePort Service
apiVersion: v1
kind: Service
metadata:
  name: service-nodeport
  namespace: devlop
spec:
  selector:
    app: nginx-web
  type: NodePort
  ports:
  - port: 80 # Service port
    nodePort: # the node port to bind (default range 30000-32767); auto-allocated if omitted
    targetPort: 80 # pod port; must be the port the application actually listens on
    protocol: TCP # protocol
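The service then answers on every node's IP at the allocated port; for example (the node IP and port below are illustrative):
kubectl get svc service-nodeport -n devlop # read the allocated port from PORT(S), e.g. 80:30080/TCP
curl http://10.0.0.11:30080 # any node IP plus the nodePort reaches a backend pod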
- LoadBalancer Service
The second way to reach a Service from outside, aimed at Kubernetes clusters on public clouds: a LoadBalancer Service can be assigned a public IP.
Note: externalIPs in k8s only declares an IP for external access; if no pod serves the corresponding port there is nothing to reach, so it must map onto real pods. The IP is not managed by kubectl and is not reachable from outside on its own; to actually use it, configure it on a real network interface of a node.
apiVersion: v1
kind: Service
metadata:
  name: myloadbalancer
  namespace: devlop
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: nginx-web
  type: LoadBalancer
  externalIPs:
  - 10.0.0.16 # an externally reachable IP for the Service; on a public cloud, bind a public IP here to expose the service
- ExternalName Service
# ExternalName maps an external service into the cluster: accessing the Service is the same as accessing the external service
# The Service points at an outside address through the externalName field; clients inside the cluster that access this Service reach the external service
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: devlop
spec:
  selector:
    app: nginx-web
  type: ExternalName
  externalName: www.my.com
- Accessing a Service across namespaces by Service name
Use ExternalName Services to reach across namespaces
# Lets pods in the mynsone namespace reach the Service myapp-clusterip2 in the mynstwo namespace
apiVersion: v1
kind: Service
metadata:
  name: myapp-clusterip1-externalname
  namespace: mynsone
spec:
  selector:
    app: "myapp"
    release: "v1"
  type: ExternalName
  externalName: myapp-clusterip2.mynstwo.svc.cluster.local
  ports:
  - name: http
    port: 80
    targetPort: 80
---
# Lets pods in the mynstwo namespace reach the Service myapp-clusterip1 in the mynsone namespace
apiVersion: v1
kind: Service
metadata:
  name: myapp-clusterip2-externalname
  namespace: mynstwo
spec:
  selector:
    app: "myapp"
    release: "v2"
  type: ExternalName
  externalName: myapp-clusterip1.mynsone.svc.cluster.local
  ports:
  - name: http
    port: 80
    targetPort: 80
- Ingress and IngressController
Kubernetes exposes services externally mainly via ClusterIP, NodePort and LoadBalancer, all of which work at the Service level (externalIPs can also expose the various Service types). A Service does two things: inside the cluster it tracks pod changes and updates the pod addresses in its Endpoints object, providing service discovery for pods whose IPs keep changing; toward the outside it behaves like a load balancer, letting the pods be reached from inside and outside the cluster. In real production environments, however, exposing services with Services alone has drawbacks:
ClusterIP is reachable only from inside the cluster.
NodePort is fine for test environments, but once dozens or hundreds of services run in the cluster, managing the NodePort ports becomes a nightmare.
LoadBalancer depends on the cloud platform, deploying an ELB usually costs extra, and one LB per service is wasteful and cumbersome; it also needs support from outside Kubernetes.
Hence Ingress: an Ingress can be thought of as a service for Services, a layer-7 routing layer in front of them; a sketch follows below.
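A minimal Ingress sketch (it assumes an IngressController such as ingress-nginx is already deployed and the cluster is recent enough for the networking.k8s.io/v1 API; the nginx-ingress name and www.my.com host are illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: devlop
spec:
  ingressClassName: nginx # must match the deployed controller's class
  rules:
  - host: www.my.com # requests for this host...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-clusterip # ...are routed to this Service
            port:
              number: 80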