NFS server shut down, but the master still shows the pod as Running
Cause:
The NFS server is deployed on a separate machine, and the cluster reaches it through the standard NFS client/server protocol. During an NFS capacity expansion, the server was shut down and had not yet been started again. Even so, the master's check still reported the provisioner pod as healthy, when normally the connection should have shown as broken. Pods that depend on the NFS volume stopped updating their monitoring data and status, and their services could not be opened.
[root@master1 ~]# kubectl get pods --all-namespaces |grep nfs
kube-system eip-nfs-nfs-5d56c88d67-2kvzj 1/1 Running 1 2d22h
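The Running status above is misleading because the kubelet only probes the container, not the storage backend. A quick way to confirm the backend is actually down is to probe the NFS port from the master directly. A minimal sketch, assuming the server address 192.168.0.40 (the NFS_SERVER value from the provisioner pod's environment, shown below); the helper name `nfs_up` is made up for illustration:

```shell
# Hypothetical helper: check whether the NFS server's nfsd port (2049) answers.
# 192.168.0.40 is the NFS_SERVER value from the provisioner pod's environment.
nfs_up() {
  server=$1
  # bash's /dev/tcp opens a plain TCP connection; timeout bounds the wait
  if timeout 3 bash -c "exec 3<>/dev/tcp/${server}/2049" 2>/dev/null; then
    echo "nfsd on ${server} is reachable"
  else
    echo "nfsd on ${server} is NOT reachable"
    return 1
  fi
}

# usage: nfs_up 192.168.0.40
```

If this reports the server as unreachable while the pod is still Running, you are seeing exactly the discrepancy described in this post.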
Attempt 1:
Inspect the pod from the command line to find its details:
[root@master1 ~]# kubectl describe pod eip-nfs-nfs-5d56c88d67-2kvzj --namespace=kube-system
Name: eip-nfs-nfs-5d56c88d67-2kvzj
Namespace: kube-system
Priority: 0
Node: node1/192.168.0.142
Start Time: Fri, 17 Apr 2020 17:57:02 +0800
Labels: app=eip-nfs-nfs
pod-template-hash=5d56c88d67
Annotations: cni.projectcalico.org/podIP: 10.100.166.136/32
Status: Running
IP: 10.100.166.136
IPs:
IP: 10.100.166.136
Controlled By: ReplicaSet/eip-nfs-nfs-5d56c88d67
Containers:
nfs-client-provisioner:
Container ID: docker://13fcf177197edea10595da0233b6c5bed25c4328b2bbd9c08b84f1906b872acd
Image: eipwork/nfs-client-provisioner:v3.1.0-k8s1.11
Image ID: docker-pullable://eipwork/nfs-client-provisioner@sha256:4c16495be5b893efea1c810e8451c71e1c58f076494676cae2ecab3a382b6ed0
Port: <none>
Host Port: <none>
State: Running
Started: Fri, 17 Apr 2020 17:57:39 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 17 Apr 2020 17:57:08 +0800
Finished: Fri, 17 Apr 2020 17:57:38 +0800
Ready: True
Restart Count: 1
Environment:
PROVISIONER_NAME: nfs-nfs
NFS_SERVER: 192.168.0.40
NFS_PATH: /data/nfs
Mounts:
/persistentvolumes from nfs-client-root (rw)
/var/run/secrets/kubernetes.io/serviceaccount from eip-nfs-client-provisioner-token-4x2sg (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
nfs-client-root:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs-pvc-nfs
ReadOnly: false
eip-nfs-client-provisioner-token-4x2sg:
Type: Secret (a volume populated by a Secret)
SecretName: eip-nfs-client-provisioner-token-4x2sg
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
Check the container logs:
kubectl logs -f eip-nfs-nfs-5d56c88d67-2kvzj --namespace=kube-system
I0417 09:57:39.410045 1 leaderelection.go:185] attempting to acquire leader lease kube-system/nfs-nfs...
I0417 09:57:56.916485 1 leaderelection.go:194] successfully acquired lease kube-system/nfs-nfs
I0417 09:57:56.916611 1 controller.go:631] Starting provisioner controller nfs-nfs_eip-nfs-nfs-5d56c88d67-2kvzj_df5aa8d5-8091-11ea-ac44-86cb56ac02b3!
I0417 09:57:56.917132 1 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"nfs-nfs", UID:"18e4a095-ed66-44aa-8b41-364eb7f45a17", APIVersion:"v1", ResourceVersion:"653421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' eip-nfs-nfs-5d56c88d67-2kvzj_df5aa8d5-8091-11ea-ac44-86cb56ac02b3 became leader
I0417 09:57:57.017274 1 controller.go:680] Started provisioner controller nfs-nfs_eip-nfs-nfs-5d56c88d67-2kvzj_df5aa8d5-8091-11ea-ac44-86cb56ac02b3!
I0420 06:47:02.753847 1 controller.go:1158] delete "pvc-59189e8e-f34b-4299-b253-423c4b6dc88e": started
I0420 06:47:09.292829 1 controller.go:1186] delete "pvc-59189e8e-f34b-4299-b253-423c4b6dc88e": volume deleted
I0420 06:47:09.320608 1 controller.go:1196] delete "pvc-59189e8e-f34b-4299-b253-423c4b6dc88e": persistentvolume deleted
I0420 06:47:09.320622 1 controller.go:1198] delete "pvc-59189e8e-f34b-4299-b253-423c4b6dc88e": succeeded
I0420 07:06:27.137018 1 controller.go:987] provision "example/monitor-pinpoint-hbase-monitor-pinpoint-hbase-0" class "nfs": started
I0420 07:06:27.182485 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"example", Name:"monitor-pinpoint-hbase-monitor-pinpoint-hbase-0", UID:"772bb8a9-1543-455e-9a3c-763e5cbbbf58", APIVersion:"v1", ResourceVersion:"1466934", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "example/monitor-pinpoint-hbase-monitor-pinpoint-hbase-0"
I0420 07:06:27.247523 1 controller.go:1087] provision "example/monitor-pinpoint-hbase-monitor-pinpoint-hbase-0" class "nfs": volume "pvc-772bb8a9-1543-455e-9a3c-763e5cbbbf58" provisioned
I0420 07:06:27.247680 1 controller.go:1101] provision "example/monitor-pinpoint-hbase-monitor-pinpoint-hbase-0" class "nfs": trying to save persistentvvolume "pvc-772bb8a9-1543-455e-9a3c-763e5cbbbf58"
I0420 07:06:27.323255 1 controller.go:1108] provision "example/monitor-pinpoint-hbase-monitor-pinpoint-hbase-0" class "nfs": persistentvolume "pvc-772bb8a9-1543-455e-9a3c-763e5cbbbf58" saved
I0420 07:06:27.323282 1 controller.go:1149] provision "example/monitor-pinpoint-hbase-monitor-pinpoint-hbase-0" class "nfs": succeeded
I0420 07:06:27.323772 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"example", Name:"monitor-pinpoint-hbase-monitor-pinpoint-hbase-0", UID:"772bb8a9-1543-455e-9a3c-763e5cbbbf58", APIVersion:"v1", ResourceVersion:"1466934", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-772bb8a9-1543-455e-9a3c-763e5
Resolution:
Where the original reasoning went wrong:
This is simply how the Kubernetes API behaves. The status fields of PV, StorageClass, and PVC objects only carry information about the storage allocation (provisioning) state; they contain nothing about the current health of the backend storage. Kubernetes only records whether the volume was connected or not at binding time, which is why the problem above occurred.
Restart the NFS server.
Before doing anything through the UI, the unresponsive pods cannot be used to tell whether the backend has recovered; instead, test directly from a node with NFS client commands (see 2-k8s部署——2.5-nfs部署).
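The node-side test above can be sketched as follows. This is a hedged sketch, not the referenced deployment doc: `192.168.0.40:/data/nfs` matches the NFS_SERVER/NFS_PATH values from the pod spec earlier, `verify_nfs_export` is an invented name, and the mount options are one reasonable fail-fast choice, not mandated:

```shell
# Hypothetical helper: temporarily mount the export read-only, list it, clean up.
# Run as root on a node with the NFS client utilities (nfs-utils) installed.
verify_nfs_export() {
  server=$1
  export_path=$2
  tmp=$(mktemp -d) || return 1
  # soft + short timeouts + retry=0 so a dead server fails fast instead of hanging
  if mount -t nfs -o ro,soft,timeo=10,retrans=1,retry=0 \
      "${server}:${export_path}" "$tmp"; then
    ls "$tmp"          # the export's contents should be visible here
    umount "$tmp"
    rmdir "$tmp"
    return 0
  fi
  rmdir "$tmp"
  return 1
}

# usage: verify_nfs_export 192.168.0.40 /data/nfs
```

Once the mount succeeds from a node, the affected pods should regain access to their volumes; the soft/timeo options only apply to this test mount and do not affect how the provisioner itself mounts the export.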
If you repost this article, please credit the source: https://wangairui.com (site name: 猫扑linux).