Installing a Kubernetes 1.9 Cluster


Environment

master: 10.209.3.82
node: 10.209.5.43
OS version:
CentOS 7.3
Kernel version:
3.10.0-327.el7.x86_64

Software versions

kubernetes v1.9
docker:17.03
kubeadm:v1.9.0
kube-apiserver:v1.9.0
kube-controller-manager:v1.9.0
kube-scheduler:v1.9.0
k8s-dns-sidecar:1.14.7
k8s-dns-kube-dns:1.14.7
k8s-dns-dnsmasq-nanny:1.14.7
kube-proxy:v1.9.0
etcd:3.1.10
pause:3.0
flannel:v0.9.1
kubernetes-dashboard:v1.8.1

Installing with kubeadm

kubeadm is the deployment tool officially recommended by Kubernetes. It runs the Kubernetes components as pods on the master and node machines and automatically handles certificate generation and related setup.
By default kubeadm pulls images from Google's registry, which is currently unreachable from mainland China, so I have downloaded the images in advance; you only need to load the images from the offline package onto each node.

Starting the installation

On all nodes

Download the offline package:
Link: https://pan.baidu.com/s/1c2O1gIW Password: 9s92
Verify the MD5 checksum before extracting:

MD5 (k8s_images.tar.bz2) = b60ad6a638eda472b8ddcfa9006315ee

Extract the downloaded offline package:

tar -xjvf k8s_images.tar.bz2

Disable the firewall:

systemctl stop firewalld && systemctl disable firewalld

Disable SELinux, both permanently in the config file and immediately for the running system:

vim /etc/selinux/config
SELINUX=disabled

setenforce 0

Configure the bridge netfilter sysctl parameters so kubeadm does not report routing warnings:

echo "
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
" >> /etc/sysctl.conf
sysctl -p

Install docker-ce 17.03 (17.03 is the newest docker-ce release supported by kubeadm v1.9):

rpm -ivh docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
rpm -ivh docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm

Start and enable docker-ce:

systemctl start docker && systemctl enable docker

Import the images:

cd k8s_images/docker_images/
for i in $(ls *.tar); do docker load < $i; done
cd ..
docker load < kubernetes-dashboard_v1.8.1.tar

Install kubelet, kubeadm, and kubectl:

rpm -ivh socat-1.7.3.2-2.el7.x86_64.rpm kubernetes-cni-0.6.0-0.x86_64.rpm kubelet-1.9.0-0.x86_64.rpm kubectl-1.9.0-0.x86_64.rpm kubeadm-1.9.0-0.x86_64.rpm

On the master node

Enable and start kubelet:

systemctl enable kubelet && sudo systemctl start kubelet

Initialize the master:

kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16

Kubernetes supports several network plugins, such as flannel, weave, and calico. We are using flannel here, so the --pod-network-cidr flag must be set. 10.244.0.0/16 is the default network configured in kube-flannel.yml; to use a different network, change both the --pod-network-cidr argument to kubeadm init and the Network value in kube-flannel.yml to the same CIDR.
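Keeping the two CIDRs in sync can be scripted. A minimal sketch, run against a temporary snippet of the manifest so it works anywhere; the CIDR value and file path are illustrative, the real target is the kube-flannel.yml used later:

```shell
# Example custom pod network; must match kubeadm init's --pod-network-cidr.
CIDR="10.245.0.0/16"

# Minimal snippet standing in for the net-conf.json section of kube-flannel.yml.
cat > /tmp/kube-flannel-snippet.yml <<'EOF'
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {"Type": "vxlan"}
    }
EOF

# Rewrite the Network value in place so it matches $CIDR.
sed -i "s#\"Network\": \"[^\"]*\"#\"Network\": \"$CIDR\"#" /tmp/kube-flannel-snippet.yml
grep '"Network"' /tmp/kube-flannel-snippet.yml
```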

Initialization failed with the following error:

[init] Using Kubernetes version: v1.9.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
 [WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Some fatal errors occurred:
 [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

The message tells us swap must be disabled, so we turn it off:

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
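Note that without -i, sed only prints the edited file to stdout and /etc/fstab is left unchanged, so swap would come back on reboot. A demonstration on a throwaway copy (the fstab contents and /tmp path are illustrative):

```shell
# Build a sample fstab with a swap entry.
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# Comment out every line mentioning swap, editing the file in place.
sed -i 's/.*swap.*/#&/' /tmp/fstab.demo
grep swap /tmp/fstab.demo
```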

Running the init again, it still fails:

[init] Using Kubernetes version: v1.9.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
 [WARNING FileExisting-crictl]: crictl not found in system path
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k4152v.add.bjyt.qihoo.net kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.209.3.82]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.

Checking /var/log/messages shows:

Apr 9 11:18:19 k4152v kubelet: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
Apr 9 11:18:19 k4152v systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE
Apr 9 11:18:19 k4152v systemd: Unit kubelet.service entered failed state.
Apr 9 11:18:19 k4152v systemd: kubelet.service failed.

The kubelet's default cgroup driver differs from docker's: docker defaults to cgroupfs, while kubelet defaults to systemd. Edit the kubelet drop-in to match docker:

vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Change the line to: Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

Reload systemd and restart kubelet:

systemctl daemon-reload && systemctl restart kubelet
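The mismatch can be checked up front instead of discovered from the logs. A sketch of the comparison; the docker line here is a hardcoded sample of `docker info` output so the snippet runs without docker installed:

```shell
# Sample values; in practice take them from `docker info | grep Cgroup`
# and from /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
docker_info_line="Cgroup Driver: cgroupfs"
kubelet_flag="--cgroup-driver=cgroupfs"

# Strip everything up to the value in each line.
docker_driver=${docker_info_line##*: }
kubelet_driver=${kubelet_flag##*=}

if [ "$docker_driver" = "$kubelet_driver" ]; then
  echo "cgroup drivers match: $docker_driver"
else
  echo "mismatch: docker=$docker_driver kubelet=$kubelet_driver"
fi
```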

Reset the environment:

kubeadm reset

Then run the init again:

kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16

Output like the following means initialization succeeded:

Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
 https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token c6ada2.a04151609e2135e3 10.209.3.82:6443 --discovery-token-ca-cert-hash sha256:04asfasdfasdfd75e75e2787c6d3023ac31b932bf4602bd03cb28fd42dde

Save the kubeadm join ... line; it is needed shortly when the node joins.
If you lose it, run kubeadm token list on the master to recover the token.
Tokens expire after 24 hours by default; machines joining later need a fresh token, created with:

kubeadm token create
[root@k4152v k8s_images]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
c6ada2.a04151609e2135e3 23h 2018-04-10T11:26:12+08:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token

That gives the token (the first half of the join command); the CA cert hash (the second half) is obtained as follows:

[root@k4152v k8s_images]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
04asfasdfasdfd75e75e2787c6d3023ac31b932bf4602bd03cb28fd42dde
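The pipeline above can be tried without a cluster. This runnable sketch generates a throwaway self-signed CA (the /tmp paths and CN are made up) and hashes its public key exactly the way the command above hashes /etc/kubernetes/pki/ca.crt:

```shell
# Generate a disposable self-signed certificate to stand in for ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo-ca' \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1 2>/dev/null

# Extract the public key, DER-encode it, SHA-256 it, and keep only the hex digest.
ca_hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')

echo "sha256:$ca_hash"
```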

As the init output notes, at this point even root cannot control the cluster with kubectl; the kubeconfig environment must be set up first.
For non-root users:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

For root:

export KUBECONFIG=/etc/kubernetes/admin.conf

You can also add it to ~/.bash_profile:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

Source the profile:

source ~/.bash_profile

Test with kubectl version:

[root@k4152v k8s_images]# kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T20:55:30Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

Install the pod network. The options include flannel, calico, weave, and macvlan; here we use flannel.

Download the manifest:

wget https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

Or use the copy from the offline package.
If you changed the pod network, kubeadm init's --pod-network-cidr= must match the value here.
vim kube-flannel.yml
Modify the Network entry:

"Network": "10.244.0.0/16",

Apply it:

kubectl create -f kube-flannel.yml

On the node

Run the kubeadm join command printed by kubeadm init earlier:

kubeadm join --token c6ada2.a04151609e2135e3 10.209.3.82:6443 --discovery-token-ca-cert-hash sha256:04asfasdfasdfd75e75e2787c6d3023ac31b932bf4602bd03cb28fd42dde

Once the node has joined, check on the master:

[root@k4152v k8s_images]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k4152v.add.bjyt.qihoo.net Ready master 20m v1.9.0
k4622v.add.bjyt.qihoo.net Ready <none> 1m v1.9.0

Kubernetes creates flannel and kube-proxy pods on each node:

[root@k4152v /home/xubo-iri/k8s_images]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-k4152v.add.bjyt.qihoo.net 1/1 Running 0 21m
kube-system kube-apiserver-k4152v.add.bjyt.qihoo.net 1/1 Running 0 21m
kube-system kube-controller-manager-k4152v.add.bjyt.qihoo.net 1/1 Running 0 21m
kube-system kube-dns-6f4fd4bdf-wbnc5 3/3 Running 0 22m
kube-system kube-flannel-ds-mmcx7 1/1 Running 0 9m
kube-system kube-flannel-ds-nnjrm 1/1 Running 0 3m
kube-system kube-proxy-d8xb6 1/1 Running 0 22m
kube-system kube-proxy-qf9lt 1/1 Running 0 3m
kube-system kube-scheduler-k4152v.add.bjyt.qihoo.net 1/1 Running 0 21m

Testing the cluster

Create a test application from the master.
Here we create an application named httpd-app from the httpd image, with one pod replica:

kubectl run httpd-app --image=httpd --replicas=1
[root@k4152v k8s_images]# kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
httpd-app 1 1 1 1 22s

Check the pod; it was scheduled onto the node:

[root@k4152v k8s_images]# kubectl get pods
NAME READY STATUS RESTARTS AGE
httpd-app-5fbccd7c6c-568hl 1/1 Running 0 1m
[root@k4152v k8s_images]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
httpd-app-5fbccd7c6c-568hl 1/1 Running 0 2m 10.244.1.2 k4622v.add.bjyt.qihoo.net

Because the resource created is a Deployment rather than a Service, kube-proxy is not involved; test by curling the pod IP directly:

[root@k4152v k8s_images]# curl 10.244.1.2
<html><body><h1>It works!</h1></body></html>
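To route traffic through kube-proxy instead, you would expose the deployment as a Service. A hypothetical manifest (not part of the tutorial's offline package); the run: httpd-app selector assumes the run=<name> label that kubectl run applies to its pods:

```yaml
# Hypothetical NodePort Service fronting the httpd-app deployment above.
apiVersion: v1
kind: Service
metadata:
  name: httpd-app
spec:
  type: NodePort
  selector:
    run: httpd-app   # label set by `kubectl run httpd-app ...`
  ports:
  - port: 80         # service port inside the cluster
    targetPort: 80   # container port of httpd
```

Applied with kubectl create -f, this would make httpd reachable on every node at the allocated NodePort.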

Deploy kubernetes-dashboard (on the master)

kubernetes-dashboard is an optional component; honestly it is not great to use and its features are limited.
If you want it, deploy it together with the master: once nodes have joined the cluster, kube-scheduler will schedule kubernetes-dashboard onto a node, and talking to kube-apiserver from there needs extra configuration.
Use the kubernetes-dashboard.yaml from the offline package.

For 1.8 the official instructions require creating certificates first, otherwise the dashboard cannot be accessed:

mkdir -p /etc/kubernetes/addons/certs && cd /etc/kubernetes/addons
openssl genrsa -des3 -passout pass:x -out certs/dashboard.pass.key 2048
openssl rsa -passin pass:x -in certs/dashboard.pass.key -out certs/dashboard.key
openssl req -new -key certs/dashboard.key -out certs/dashboard.csr -subj '/CN=kube-dashboard'
openssl x509 -req -sha256 -days 365 -in certs/dashboard.csr -signkey certs/dashboard.key -out certs/dashboard.crt
rm certs/dashboard.pass.key
kubectl create secret generic kubernetes-dashboard-certs --from-file=certs -n kube-system

Modify kubernetes-dashboard.yaml

To reach the dashboard from outside the cluster, change the Service type to NodePort; the default ClusterIP type is not externally accessible:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 32666
  selector:
    k8s-app: kubernetes-dashboard

The NodePort range is 30000-32767.
32666 is my chosen external port, similar to the port mapping in docker run -p xxx:xxx.
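A tiny helper (hypothetical, not part of the tutorial) to confirm a chosen port falls inside the default NodePort range before editing the yaml:

```shell
# Returns "ok" if the port is inside Kubernetes' default NodePort range.
check_nodeport() {
  if [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]; then
    echo "ok"
  else
    echo "out of range"
  fi
}

check_nodeport 32666   # the port used in this tutorial
check_nodeport 8443    # a regular port, rejected
```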
Create kubernetes-dashboard:

kubectl create -f kubernetes-dashboard.yaml

Access it at:

https://master_ip:32666

Because of RBAC permissions we still need to grant access. First create a basic auth file:

cat <<EOF > /etc/kubernetes/pki/basic_auth_file
 admin,admin,2
EOF

The format is user,password,userid.
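A quick sanity check of the file format; the /tmp path is illustrative, the entry matches the tutorial's admin,admin,2:

```shell
# Write a sample basic auth entry and verify it has exactly three
# comma-separated fields: user, password, userid.
printf 'admin,admin,2\n' > /tmp/basic_auth_file.demo

fields=$(awk -F, '{print NF}' /tmp/basic_auth_file.demo)
user=$(cut -d, -f1 /tmp/basic_auth_file.demo)
echo "fields=$fields user=$user"
```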

Then edit the apiserver manifest and add the --basic-auth-file flag:

vi /etc/kubernetes/manifests/kube-apiserver.yaml

 - --allow-privileged=true
 - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
 - --requestheader-group-headers=X-Remote-Group
 - --advertise-address=10.99.11.71
 - --requestheader-username-headers=X-Remote-User
 - --requestheader-extra-headers-prefix=X-Remote-Extra-
 - --client-ca-file=/etc/kubernetes/pki/ca.crt
 - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
 - --authorization-mode=Node,RBAC
 - --basic-auth-file=/etc/kubernetes/pki/basic_auth_file
 - --etcd-servers=http://127.0.0.1:2379
 image: gcr.io/google_containers/kube-apiserver-amd64:v1.9.0

Note: each flag line begins with "- --"; keep the indentation exact, or the manifest will fail to parse.

Restart kubelet:

 systemctl restart kubelet

Apply the updated kube-apiserver.yaml:

cd /etc/kubernetes/manifests
kubectl apply -f kube-apiserver.yaml

Verify:

[root@k4152v /home/xubo-iri]# kubectl get clusterrole/cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2018-04-09T03:26:05Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
  resourceVersion: "13"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin
  uid: bcdb1f68-3ba5-11e8-95c9-fa163e6fb644
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'

Create the admin account binding:

 kubectl create clusterrolebinding login-on-dashboard-with-cluster-admin --clusterrole=cluster-admin --user=admin

Check that the binding is in place:

[root@k4152v /home/xubo-iri]# kubectl get clusterrolebinding/login-on-dashboard-with-cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: 2018-04-09T04:47:53Z
  name: login-on-dashboard-with-cluster-admin
  resourceVersion: "6486"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/login-on-dashboard-with-cluster-admin
  uid: 29fd1657-3bb1-11e8-a7d3-fa163e6fb644
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: admin

That's everything; now try it in a browser.

Open https://10.209.3.82:32666:

(screenshot: dashboard login page)

Log in with username admin and password admin:

(screenshot: dashboard overview)

And that's it.
