Background
Kubernetes, commonly abbreviated k8s (there are 8 letters between the k and the s), is Google's open-source container cluster management system. It provides mechanisms for application deployment, maintenance, and scaling, and makes it easy to manage containerized applications running across a cluster.

Why use a tool as complex as Kubernetes to manage a Docker cluster? I first tried Docker's built-in Swarm, which gets a cluster working very quickly and simply. But with the Swarm built into Docker 1.13, the VIP load balancer sometimes failed to map ports to the external network correctly, or reported addresses as already in use. That is bad for high availability, and since I could not find a fix, I switched to k8s.
Environment
- Tencent Cloud
- CentOS 7.3, 64-bit
Installation
```shell
yum-config-manager --add-repo https://docs.docker.com/v1.13/engine/installation/linux/repo_files/centos/docker.repo
yum makecache fast
yum -y install docker-engine-1.13.1
yum install epel-release -y
yum remove -y docker-engine*
yum install -y kubernetes etcd docker flannel
```
Edit the configuration files

Note: replace 10.135.163.237 below with your own server's IP.
```shell
sed -i "s/localhost:2379/10.135.163.237:2379/g" /etc/etcd/etcd.conf
sed -i "s/localhost:2380/10.135.163.237:2380/g" /etc/etcd/etcd.conf
sed -i "s/10.135.163.237:2379/10.135.163.237:2379,http:\/\/127.0.0.1:2379/g" /etc/etcd/etcd.conf
sed -i "s/127.0.0.1:2379/10.135.163.237:2379/g" /etc/kubernetes/apiserver
sed -i "s/--insecure-bind-address=127.0.0.1/--insecure-bind-address=0.0.0.0/g" /etc/kubernetes/apiserver
sed -i "s/--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota/--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota/g" /etc/kubernetes/apiserver
sed -i "s/--hostname-override=127.0.0.1/--hostname-override=10.135.163.237/g" /etc/kubernetes/kubelet
sed -i "s/127.0.0.1:8080/10.135.163.237:8080/g" /etc/kubernetes/kubelet
sed -i "s/--address=127.0.0.1/--address=0.0.0.0/g" /etc/kubernetes/kubelet
sed -i "s/127.0.0.1:8080/10.135.163.237:8080/g" /etc/kubernetes/config
sed -i "s/127.0.0.1:2379/10.135.163.237:2379/g" /etc/sysconfig/flanneld
```
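These sed commands are plain in-place substitutions, so the pattern can be dry-run on a scratch copy before touching the real files. A minimal sketch (the file name and its single line here are made up for illustration; they mimic the `KUBE_API_ADDRESS` setting in `/etc/kubernetes/apiserver`):

```shell
# Try the bind-address substitution on a throwaway copy first
echo 'KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"' > /tmp/apiserver.demo
sed -i "s/--insecure-bind-address=127.0.0.1/--insecure-bind-address=0.0.0.0/g" /tmp/apiserver.demo
cat /tmp/apiserver.demo
# KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
```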
Edit the hosts file

```shell
vi /etc/hosts
# add the line:
10.135.163.237 k8s_master
```
Add the network
```shell
systemctl enable etcd.service
systemctl start etcd.service
etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
```
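The key path matters: the CentOS flannel package defaults `FLANNEL_ETCD_PREFIX` to `/atomic.io/network` in `/etc/sysconfig/flanneld`, which is why the subnet config is written under that prefix. A quick way to confirm the key landed (assuming etcd is up and etcdctl points at it):

```shell
etcdctl get /atomic.io/network/config
# expected: {"Network":"172.17.0.0/16"}
```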
Start the services
```shell
service docker start
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet docker flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
```
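After the restart loop, a quick health check from the master confirms the control plane came up. These are standard commands for this generation of packages, run against the insecure API port opened earlier (adjust the IP to your own server):

```shell
etcdctl cluster-health
kubectl -s http://10.135.163.237:8080 get componentstatuses
kubectl -s http://10.135.163.237:8080 get nodes
```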
A first demo

Create a file a.yaml:
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.alauda.cn/yubang/paas_base_test
        ports:
        - containerPort: 80
        command: ["/bin/bash", "/var/start.sh"]
        resources:
          limits:
            cpu: 0.5
            memory: 64Mi
```
Create a file b.yaml (the nodePort must fall within the cluster's NodePort range, 30000–32767 by default):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
  labels:
    app: my-app
spec:
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30964
  type: NodePort
  selector:
    app: my-app
```
Create the resources
```shell
kubectl create -f a.yaml --validate
kubectl create -f b.yaml --validate
```
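Once both resources exist, the deployment should bring up two pods and the NodePort service should expose them on every node. A quick way to verify (this assumes the demo image really serves HTTP on port 80, as its containerPort suggests):

```shell
kubectl get deployments,pods,services
# when the pods show Running, the app should answer on the NodePort:
curl http://10.135.163.237:30964/
```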
Delete the resources
```shell
kubectl delete -f a.yaml
kubectl delete -f b.yaml
```
Adding a worker node

Install the software
```shell
yum-config-manager --add-repo https://docs.docker.com/v1.13/engine/installation/linux/repo_files/centos/docker.repo
yum makecache fast
yum -y install docker-engine-1.13.1
yum install epel-release -y
yum remove -y docker-engine*
yum install -y kubernetes docker flannel
```
Edit the configuration files (10.135.163.237 is the master's IP; 139.199.0.29 is this node's IP):
```shell
sed -i "s/--hostname-override=127.0.0.1/--hostname-override=139.199.0.29/g" /etc/kubernetes/kubelet
sed -i "s/127.0.0.1:8080/10.135.163.237:8080/g" /etc/kubernetes/kubelet
sed -i "s/--address=127.0.0.1/--address=0.0.0.0/g" /etc/kubernetes/kubelet
sed -i "s/127.0.0.1:8080/10.135.163.237:8080/g" /etc/kubernetes/config
sed -i "s/127.0.0.1:2379/10.135.163.237:2379/g" /etc/sysconfig/flanneld
sed -i "s/--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota/--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota/g" /etc/kubernetes/apiserver
```
Start the services
```shell
service docker start
for SERVICES in kube-proxy kubelet docker flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
```
View the nodes from the master
```shell
kubectl get node
```
Restart the service (to rejoin the cluster)
```shell
systemctl restart kube-apiserver.service
```
Delete a node
```shell
kubectl delete node <node-ip>
```
Summary

That is the whole of this article. I hope its content offers some reference value for your study or work. If you have questions, feel free to leave a comment. Thank you for supporting 服務器之家.
Original link: http://blog.yubangweb.com/k8sshi-jian-bi-ji/