After installing the Kubernetes Dashboard (see the kubernetes 1.9 installation article), you will notice that some cluster metrics are not displayed; you have to log in to a server and query them by hand each time, which is not very friendly, as shown below:

Kubernetes actually already has a component for this: Heapster. I won't go into the background here; let's get straight to the hands-on part.
1. Installation steps
Download Heapster from https://github.com/kubernetes/heapster/

After uploading it to the server, follow the installation doc at https://github.com/kubernetes/heapster/blob/master/docs/influxdb.md

We'll install according to that document.
```
[root@k4152v /home/xubo-iri/heapster-master]# pwd
/home/xubo-iri/heapster-master
```
Edit heapster.yaml and change the command section:

```
command:
- /heapster
- --source=kubernetes:https://10.209.3.82:6443    # change to your own apiserver address
- --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
```
```
[root@k4152v /home/xubo-iri/heapster-master]# kubectl create -f deploy/kube-config/influxdb/
deployment "monitoring-grafana" created
service "monitoring-grafana" created
serviceaccount "heapster" created
deployment "heapster" created
service "heapster" created
deployment "monitoring-influxdb" created
service "monitoring-influxdb" created
```

Checking the pods shows the deployment failed; the logs reveal that pulling the images failed:

```
[root@k4152v /home/xubo-iri/heapster-master/deploy/kube-config/influxdb]# kubectl get pods -n kube-system
NAME                                                READY     STATUS             RESTARTS   AGE
etcd-k4152v.add.bjyt.qihoo.net                      1/1       Running            5          4d
heapster-5c448886d-djg49                            0/1       ImagePullBackOff   0          10m
kube-apiserver-k4152v.add.bjyt.qihoo.net            1/1       Running            5          4d
kube-controller-manager-k4152v.add.bjyt.qihoo.net   1/1       Running            6          4d
kube-dns-6f4fd4bdf-xmh8k                            3/3       Running            9          4d
kube-flannel-ds-j9wrn                               1/1       Running            5          5d
kube-flannel-ds-mmcx7                               1/1       Running            5          5d
kube-flannel-ds-nnjrm                               1/1       Running            3          5d
kube-proxy-d8xb6                                    1/1       Running            5          5d
kube-proxy-qf9lt                                    1/1       Running            3          5d
kube-proxy-zzkkn                                    1/1       Running            5          5d
kube-scheduler-k4152v.add.bjyt.qihoo.net            1/1       Running            5          4d
kubernetes-dashboard-58f5cb49c-w2w6n                1/1       Running            3          5d
monitoring-grafana-65757b9656-9gc92                 0/1       ImagePullBackOff   0          10m
monitoring-influxdb-66946c9f58-wrvml                0/1       ImagePullBackOff   0          10m
```
```
[root@k4152v /home/xubo-iri/heapster-master/deploy/kube-config/influxdb]# kubectl describe pod heapster-5c448886d-djg49 -n kube-system
Name:           heapster-5c448886d-djg49
Namespace:      kube-system
Node:           k4622v.add.bjyt.qihoo.net/10.209.5.43
Start Time:     Sat, 14 Apr 2018 16:45:40 +0800
Labels:         k8s-app=heapster
                pod-template-hash=170044428
                task=monitoring
Annotations:    <none>
Status:         Pending
IP:             10.244.1.50
Controlled By:  ReplicaSet/heapster-5c448886d
Containers:
  heapster:
    Container ID:
    Image:         k8s.gcr.io/heapster-amd64:v1.4.2
    Image ID:
    Port:          <none>
    Command:
      /heapster
      --source=kubernetes:https://kubernetes.default
      --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from heapster-token-jmt4h (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  heapster-token-jmt4h:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  heapster-token-jmt4h
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age                From                                Message
  ----     ------                 ----               ----                                -------
  Normal   Scheduled              11m                default-scheduler                   Successfully assigned heapster-5c448886d-djg49 to k4622v.add.bjyt.qihoo.net
  Normal   SuccessfulMountVolume  11m                kubelet, k4622v.add.bjyt.qihoo.net  MountVolume.SetUp succeeded for volume "heapster-token-jmt4h"
  Warning  Failed                 9m                 kubelet, k4622v.add.bjyt.qihoo.net  Failed to pull image "k8s.gcr.io/heapster-amd64:v1.4.2": rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.97.82:443: i/o timeout
  Normal   SandboxChanged         9m (x4 over 9m)    kubelet, k4622v.add.bjyt.qihoo.net  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling                6m (x3 over 11m)   kubelet, k4622v.add.bjyt.qihoo.net  pulling image "k8s.gcr.io/heapster-amd64:v1.4.2"
  Warning  Failed                 4m (x3 over 9m)    kubelet, k4622v.add.bjyt.qihoo.net  Error: ErrImagePull
  Warning  Failed                 4m (x2 over 6m)    kubelet, k4622v.add.bjyt.qihoo.net  Failed to pull image "k8s.gcr.io/heapster-amd64:v1.4.2": rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: i/o timeout
  Normal   BackOff                4m (x6 over 9m)    kubelet, k4622v.add.bjyt.qihoo.net  Back-off pulling image "k8s.gcr.io/heapster-amd64:v1.4.2"
  Warning  Failed                 1m (x13 over 9m)   kubelet, k4622v.add.bjyt.qihoo.net  Error: ImagePullBackOff
```
So the image pulls failed (k8s.gcr.io is unreachable from here). I found the required images, which someone has already downloaded:

Link: https://pan.baidu.com/s/1dzQyiq  password: dyvi

After downloading them, load and push them into your own registry, as follows:
```
[root@k4151v /home/xubo-iri/heapster]# ls
heapster-amd64.tar  heapster-grafana-amd64.tar  heapster-influxdb-amd64.tar
[root@k4151v /home/xubo-iri/heapster]# docker load < heapster-amd64.tar
378249bc1640: Loading layer [==================================================>] 73.13 MB/73.13 MB
868ab718120b: Loading layer [==================================================>] 281.1 kB/281.1 kB
Loaded image: gcr.io/google_containers/heapster-amd64:v1.4.2
[root@k4151v /home/xubo-iri/heapster]# docker load < heapster-grafana-amd64.tar
6a749002dd6a: Loading layer [==================================================>] 1.338 MB/1.338 MB
c6824ad4ffdf: Loading layer [==================================================>] 147.5 MB/147.5 MB
bfabb39c3978: Loading layer [==================================================>] 230.4 kB/230.4 kB
b22f6c47113d: Loading layer [==================================================>] 2.56 kB/2.56 kB
60ae3052f093: Loading layer [==================================================>] 5.606 MB/5.606 MB
Loaded image: gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
[root@k4151v /home/xubo-iri/heapster]# docker load < heapster-influxdb-amd64.tar
6654b5b85f67: Loading layer [==================================================>] 11.42 MB/11.42 MB
1f125f5c227b: Loading layer [==================================================>] 4.608 kB/4.608 kB
Loaded image: gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3
[root@k4151v /home/xubo-iri/heapster]# docker images
REPOSITORY                                         TAG       IMAGE ID       CREATED        SIZE
gcr.io/google_containers/heapster-influxdb-amd64   v1.3.3    577260d221db   7 months ago   12.5 MB
gcr.io/google_containers/heapster-grafana-amd64    v4.4.3    8cb3de219af7   7 months ago   152 MB
gcr.io/google_containers/heapster-amd64            v1.4.2    d4e02f5922ca   7 months ago   73.4 MB
[root@k4151v /home/xubo-iri/heapster]# docker tag gcr.io/google_containers/heapster-amd64:v1.4.2 10.209.3.81/library/heapster-amd64:v1.4.2
[root@k4151v /home/xubo-iri/heapster]# docker push 10.209.3.81/library/heapster-amd64:v1.4.2
The push refers to a repository [10.209.3.81/library/heapster-amd64]
868ab718120b: Pushed
378249bc1640: Pushed
v1.4.2: digest: sha256:bc069b947335d10dc6975a721fbb34dcde06b5b29c2a63e2fec4ab7f325de68c size: 739
[root@k4151v /home/xubo-iri/heapster]# docker images
REPOSITORY                                    TAG       IMAGE ID       CREATED        SIZE
10.209.3.81/library/heapster-influxdb-amd64   v1.3.3    577260d221db   7 months ago   12.5 MB
10.209.3.81/library/heapster-grafana-amd64    v4.4.3    8cb3de219af7   7 months ago   152 MB
10.209.3.81/library/heapster-amd64            v1.4.2    d4e02f5922ca   7 months ago   73.4 MB
```
OK. We've only shown one example of pushing to the registry here; the other images are handled the same way.
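The remaining tag/push pairs can be generated in one loop; this sketch just prints the commands (a dry run, assuming the registry prefix `10.209.3.81/library` from above), so you can eyeball them before piping the output to `sh`:

```shell
REGISTRY="10.209.3.81/library"   # assumed private registry prefix; substitute your own
for img in heapster-amd64:v1.4.2 heapster-grafana-amd64:v4.4.3 heapster-influxdb-amd64:v1.3.3; do
  # print, rather than run, the tag and push for each loaded image
  echo "docker tag gcr.io/google_containers/${img} ${REGISTRY}/${img}"
  echo "docker push ${REGISTRY}/${img}"
done
```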
Next, edit the manifests under heapster-master/deploy/kube-config/influxdb so they pull from your own registry; afterwards they look like this:
```
[root@k4152v /heapster-master/deploy/kube-config/influxdb]# grep -r '10.209' ./
./grafana.yaml:        image: 10.209.3.81/library/heapster-grafana-amd64:v4.4.3
./heapster.yaml:        image: 10.209.3.81/library/heapster-amd64:v1.4.2
./influxdb.yaml:        image: 10.209.3.81/library/heapster-influxdb-amd64:v1.3.3
```
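One way to make that edit in bulk is a single `sed` pass over the three manifests, since the upstream files pull from `gcr.io/google_containers` or `k8s.gcr.io`. This sketch (registry prefix `10.209.3.81/library` assumed) demonstrates the substitution on a throwaway copy so it is safe to try as-is; point it at the real `influxdb/` directory for actual use:

```shell
# Build a throwaway sample manifest to demonstrate the substitution on.
mkdir -p /tmp/influxdb-demo
printf '        image: gcr.io/google_containers/heapster-grafana-amd64:v4.4.3\n' > /tmp/influxdb-demo/grafana.yaml

# Rewrite any gcr.io/google_containers or k8s.gcr.io image reference
# to the private registry prefix (GNU sed alternation via \| is assumed).
sed -i -e 's#image: \(gcr\.io/google_containers\|k8s\.gcr\.io\)/#image: 10.209.3.81/library/#' /tmp/influxdb-demo/*.yaml

cat /tmp/influxdb-demo/grafana.yaml
```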
Then create the resources again:
```
[root@k4152v /heapster-master/deploy/kube-config/influxdb]# kubectl create -f ./
Error from server (AlreadyExists): error when creating "grafana.yaml": deployments.extensions "monitoring-grafana" already exists
Error from server (AlreadyExists): error when creating "grafana.yaml": services "monitoring-grafana" already exists
Error from server (AlreadyExists): error when creating "heapster.yaml": serviceaccounts "heapster" already exists
Error from server (AlreadyExists): error when creating "heapster.yaml": deployments.extensions "heapster" already exists
Error from server (AlreadyExists): error when creating "heapster.yaml": services "heapster" already exists
Error from server (AlreadyExists): error when creating "influxdb.yaml": deployments.extensions "monitoring-influxdb" already exists
Error from server (AlreadyExists): error when creating "influxdb.yaml": services "monitoring-influxdb" already exists
```
As shown above, it errors out: the resources already exist. You can either update them in place or delete and recreate them; here I simply deleted everything:
```
[/heapster-master/deploy/kube-config/influxdb]# kubectl delete -f grafana.yaml
deployment "monitoring-grafana" deleted
service "monitoring-grafana" deleted
[/heapster-master/deploy/kube-config/influxdb]# kubectl delete -f heapster.yaml
serviceaccount "heapster" deleted
deployment "heapster" deleted
service "heapster" deleted
[/heapster-master/deploy/kube-config/influxdb]# kubectl delete -f influxdb.yaml
deployment "monitoring-influxdb" deleted
service "monitoring-influxdb" deleted
```
Then create everything again:
```
[/heapster-master/deploy/kube-config/influxdb]# kubectl create -f ./
deployment "monitoring-grafana" created
service "monitoring-grafana" created
serviceaccount "heapster" created
deployment "heapster" created
service "heapster" created
deployment "monitoring-influxdb" created
service "monitoring-influxdb" created
[/heapster-master/deploy/kube-config/influxdb]# kubectl get pods -n kube-system
NAME                                                READY     STATUS    RESTARTS   AGE
etcd-k4152v.add.bjyt.qihoo.net                      1/1       Running   5          6d
heapster-7f85995564-xp8bf                           1/1       Running   0          11s
kube-apiserver-k4152v.add.bjyt.qihoo.net            1/1       Running   5          6d
kube-controller-manager-k4152v.add.bjyt.qihoo.net   1/1       Running   6          6d
kube-dns-6f4fd4bdf-xmh8k                            3/3       Running   9          6d
kube-flannel-ds-j9wrn                               1/1       Running   5          7d
kube-flannel-ds-mmcx7                               1/1       Running   5          7d
kube-flannel-ds-nnjrm                               1/1       Running   3          7d
kube-proxy-d8xb6                                    1/1       Running   5          7d
kube-proxy-qf9lt                                    1/1       Running   3          7d
kube-proxy-zzkkn                                    1/1       Running   5          7d
kube-scheduler-k4152v.add.bjyt.qihoo.net            1/1       Running   5          6d
kubernetes-dashboard-58f5cb49c-w2w6n                1/1       Running   3          7d
monitoring-grafana-568fb966dd-2zrl9                 1/1       Running   0          12s
monitoring-influxdb-84885b4444-s7nz9                1/1       Running   0          12s
```
Good, everything is running now. But checking the Dashboard, nothing had changed, so I looked at the Heapster logs:
```
E0416 08:56:24.777656       1 reflector.go:190] k8s.io/heapster/metrics/util/util.go:51: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:heapster" cannot list nodes at the cluster scope
E0416 08:56:24.778732       1 reflector.go:190] k8s.io/heapster/metrics/processors/namespace_based_enricher.go:84: Failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:heapster" cannot list namespaces at the cluster scope
E0416 08:56:25.777522       1 reflector.go:190] k8s.io/heapster/metrics/heapster.go:322: Failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:heapster" cannot list pods at the cluster scope
E0416 08:56:25.777775       1 reflector.go:190] k8s.io/heapster/metrics/util/util.go:51: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:heapster" cannot list nodes at the cluster scope
E0416 08:56:25.778825       1 reflector.go:190] k8s.io/heapster/metrics/util/util.go:51: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:heapster" cannot list nodes at the cluster scope
E0416 08:56:25.779844       1 reflector.go:190] k8s.io/heapster/metrics/util/util.go:51: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:heapster" cannot list nodes at the cluster scope
E0416 08:56:25.780758       1 reflector.go:190] k8s.io/heapster/metrics/processors/namespace_based_enricher.go:84: Failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:heapster" cannot list namespaces at the cluster scope
```
It's an RBAC permissions problem: the heapster service account isn't allowed to list nodes, pods, or namespaces. Go into the heapster-master/deploy/kube-config/rbac directory and run:
```
kubectl create -f heapster-rbac.yaml
```
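For reference, heapster-rbac.yaml in the repo is essentially just a ClusterRoleBinding that grants the heapster service account the built-in `system:heapster` ClusterRole (which permits listing nodes, pods, and namespaces); it looks roughly like this:

```
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
```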
And that's it; the metrics show up now, as in the screenshot:
2. Accessing Grafana

Access via kube-apiserver. First get the monitoring-grafana service URL:
```
[root@k4152v]# kubectl cluster-info
Kubernetes master is running at https://10.209.3.82:6443
Heapster is running at https://10.209.3.82:6443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://10.209.3.82:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
monitoring-grafana is running at https://10.209.3.82:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
monitoring-influxdb is running at https://10.209.3.82:6443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy
```
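If hitting the apiserver directly is awkward (e.g. it requires client certificates), a common alternative is to tunnel through `kubectl proxy`, which listens on localhost:8001 by default, and open the equivalent local URL. A sketch of the URL pattern:

```shell
# Run `kubectl proxy` in another terminal first; it listens on
# localhost:8001 by default and forwards requests to the apiserver.
GRAFANA_URL="http://localhost:8001/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy/"
echo "$GRAFANA_URL"
```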
You can also open the monitoring-grafana URL above directly in a browser.