
CentOS 7 Kubernetes Installation and Deployment Walkthrough

Posted 2018-04-16 17:05:29
Environment:
CentOS 7, 192.168.2.191 (Master and Node on the same machine)
1. Install the Docker environment
This walkthrough installs docker 1.10.3; installing the latest Docker release is recommended.
yum -y install http://yum.dockerproject.org/rep ... 7.centos.x86_64.rpm
Start the docker service: service docker start
Enable docker at boot: chkconfig docker on (on CentOS 7 these map to systemctl start docker and systemctl enable docker)
Verify the installation by running docker version; output like the following means Docker is installed correctly:
Client:
 Version:      1.10.3
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   20f81dd
 Built:        Thu Mar 10 15:39:25 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.10.3
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   20f81dd
 Built:        Thu Mar 10 15:39:25 2016
 OS/Arch:      linux/amd64

2. Install etcd
Download etcd: wget https://github.com/coreos/etcd/r ... -linux-amd64.tar.gz
tar xf etcd-v2.3.6-linux-amd64.tar.gz

mv etcd-v2.3.6-linux-amd64  /usr/local/etcd
export PATH=$PATH:/usr/local/etcd/
etcd -version
Create the etcd startup script, startEtcd.sh:
#!/bin/sh
export PATH=$PATH:/usr/local/etcd
export ETCD_OPTS="-listen-client-urls http://0.0.0.0:4001 -advertise-client-urls http://0.0.0.0:4001 -data-dir /var/lib/etcd/default.etcd"
nohup /usr/local/etcd/etcd  $ETCD_OPTS &

Run startEtcd.sh to start the etcd service.
3. Install the flannel overlay network
wget https://github.com/coreos/flanne ... -linux-amd64.tar.gz
tar xf flannel-0.5.5-linux-amd64.tar.gz  
mv flannel-0.5.5  /usr/local/flannel
Pre-register the subnet range flannel will use in etcd:
etcdctl mk /coreos.com/network/config '{ "Network": "172.19.0.0/16" }'
Create the flannel startup script, startFlannel.sh, and run it:
#!/bin/sh

nohup  /usr/local/flannel/flanneld -etcd-endpoints=http://127.0.0.1:4001  &

Verify flannel's address allocation in etcd:
# etcdctl ls /coreos.com/network/subnets
/coreos.com/network/subnets/172.19.4.0-24
# etcdctl get /coreos.com/network/subnets/172.19.4.0-24
{"PublicIP":"192.168.2.191"}
This shows the local flannel instance was allocated the 172.19.4.0/24 subnet.
Generate the docker-related parameters (${FLANNEL_SUBNET}):
./mk-docker-opts.sh
The parameters are written to the following file:
cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.19.0.0/16
FLANNEL_SUBNET=172.19.4.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
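What mk-docker-opts.sh does can be sketched in a few lines: source subnet.env and turn FLANNEL_SUBNET and FLANNEL_MTU into the options the docker daemon needs (a simplified sketch using the sample values above; the real script handles more cases, such as FLANNEL_IPMASQ, and the /tmp path here is only for illustration):

```shell
# Simplified sketch of mk-docker-opts.sh, using the sample
# subnet.env values shown above.
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=172.19.0.0/16
FLANNEL_SUBNET=172.19.4.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
EOF

# Source the file and build the docker daemon options from it.
. /tmp/subnet.env
DOCKER_OPTS="--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
echo "${DOCKER_OPTS}"
```

The resulting options are what make docker bring containers up on the flannel-assigned subnet instead of the default 172.17.0.0/16.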

Assign the flannel subnet to the docker0 bridge and bounce the interface:
# i.e. ifconfig docker0 ${FLANNEL_SUBNET}
ifconfig docker0 172.19.4.1/24
ifconfig docker0 down
ifconfig docker0 up
Restart the docker service:
service docker stop
service docker start
Verify:
ifconfig docker0
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.19.4.1  netmask 255.255.255.0  broadcast 172.19.4.255
        inet6 fe80::42:b6ff:fe48:646b  prefixlen 64  scopeid 0x20<link>
        ether 02:42:b6:48:64:6b  txqueuelen 0  (Ethernet)
        RX packets 2472  bytes 4220685 (4.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3170  bytes 487145 (475.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

The docker0 bridge no longer has a 172.17.x.x address but one from the flannel-assigned 172.19.4.0/24 subnet (the subnet allocated can differ each time flannel starts).

4. Install Kubernetes (this walkthrough uses v1.2.4)
Download:
https://github.com/kubernetes/kubernetes/releases/download/1.2.4/kubernetes.tar.gz
tar -xzvf kubernetes.tar.gz
cd kubernetes/server/
tar -xzvf kubernetes-server-linux-amd64.tar.gz


Verify (the PATH below assumes the extracted kubernetes tree was moved under /usr/local/kubernetes/server):
export  PATH=$PATH:/usr/local/kubernetes/server/kubernetes/server/bin
kubectl version


Create startup scripts for the five services.
startApiServer.sh
#!/bin/sh
export  PATH=$PATH:/usr/local/kubernetes/server/kubernetes/server/bin
export KUBE_APISERVER_OPTS="--insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=192.168.4.0/24 --etcd_servers=http://127.0.0.1:4001 --advertise-address=192.168.2.191 --logtostderr=true"
nohup kube-apiserver ${KUBE_APISERVER_OPTS} &





startScheduler.sh
#!/bin/sh
export  PATH=$PATH:/usr/local/kubernetes/server/kubernetes/server/bin
export KUBE_SCHEDULER_OPTS="--master=192.168.2.191:8080 --logtostderr=true"
nohup kube-scheduler ${KUBE_SCHEDULER_OPTS} &



startControllerManager.sh
#!/bin/sh
export  PATH=$PATH:/usr/local/kubernetes/server/kubernetes/server/bin
export KUBE_CONTROLLER_MANAGER_OPTS="--master=192.168.2.191:8080 --logtostderr=true"
nohup kube-controller-manager ${KUBE_CONTROLLER_MANAGER_OPTS} &



startKubelet.sh (runs on the node; uses the Tenxcloud image mirror)
#!/bin/sh
export  PATH=$PATH:/usr/local/kubernetes/server/kubernetes/server/bin
export KUBELET_OPTS="--address=0.0.0.0 --port=10250 --hostname_override=192.168.2.191  --api_servers=http://192.168.2.191:8080 --pod-infra-container-image=index.tenxcloud.com/kubernetes/pause:latest --logtostderr=true --cluster_dns=192.168.4.155 --cluster_domain=cluster.local"
nohup kubelet ${KUBELET_OPTS} &





startProxy.sh (runs on the node)
#!/bin/sh
export  PATH=$PATH:/usr/local/kubernetes/server/kubernetes/server/bin
export KUBE_PROXY_OPTS="--master=http://192.168.2.191:8080 --proxy-mode=iptables --logtostderr=true"
nohup kube-proxy ${KUBE_PROXY_OPTS} &



Start the services.
The three master services:
./startApiServer.sh
./startScheduler.sh
./startControllerManager.sh
The two node services:
./startKubelet.sh
./startProxy.sh
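The five start scripts above all follow one pattern: extend PATH, export the options, and launch the binary with nohup. They can be generated from a single helper; the following is a sketch, assuming the same install path and the 192.168.2.191 address used throughout this article (the output directory here is a temp dir for illustration):

```shell
# Generate the five start scripts from one template (sketch).
BIN=/usr/local/kubernetes/server/kubernetes/server/bin
MASTER=192.168.2.191
OUT=$(mktemp -d)

# gen <script-name> <binary> "<options>"
gen() {
  cat > "$OUT/$1" <<EOF
#!/bin/sh
export PATH=\$PATH:$BIN
nohup $2 $3 &
EOF
  chmod +x "$OUT/$1"
}

gen startApiServer.sh kube-apiserver "--insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=192.168.4.0/24 --etcd_servers=http://127.0.0.1:4001 --advertise-address=$MASTER --logtostderr=true"
gen startScheduler.sh kube-scheduler "--master=$MASTER:8080 --logtostderr=true"
gen startControllerManager.sh kube-controller-manager "--master=$MASTER:8080 --logtostderr=true"
gen startKubelet.sh kubelet "--address=0.0.0.0 --port=10250 --hostname_override=$MASTER --api_servers=http://$MASTER:8080 --pod-infra-container-image=index.tenxcloud.com/kubernetes/pause:latest --logtostderr=true"
gen startProxy.sh kube-proxy "--master=http://$MASTER:8080 --proxy-mode=iptables --logtostderr=true"

ls "$OUT"
```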


Note: after skydns is deployed below, the kubelet options must be updated and the kubelet restarted.
Verify.
ps -elf | grep kube should show all five processes:
4 S root      29830   4317  0  80   0 -  1819 futex_ 03:48 ?        00:00:03 /exechealthz -cmd=nslookup kubernetes.default.svc.{{ pillar['dns_domain'] }} 127.0.0.1 >/dev/null -port=8080
4 S root      32363      1  0  80   0 - 108005 futex_ 03:49 pts/0   00:00:53 kubelet --address=0.0.0.0 --port=10250 --hostname_override=192.168.2.191 --api_servers=http://192.168.2.191:8080 --pod-infra-container-image=index.tenxcloud.com/kubernetes/pause:latest --logtostderr=true --cluster_dns=192.168.4.155 --cluster_domain=cluster.local
4 S root      73339      1  0  80   0 - 22311 ep_pol 03:02 pts/0    00:00:32 kube-apiserver --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=192.168.4.0/24 --etcd_servers=http://127.0.0.1:4001 --advertise-address=192.168.2.191 --logtostderr=true
4 S root      74217      1  0  80   0 - 12861 ep_pol 03:03 pts/0    00:00:33 kube-controller-manager --master=192.168.2.191:8080 --logtostderr=true
4 S root      75528      1  0  80   0 -  8969 futex_ 03:04 pts/0    00:00:01 kube-scheduler --master=192.168.2.191:8080 --logtostderr=true
4 S root      80544      1  0  80   0 -  7582 ep_pol 03:06 pts/0    00:00:05 kube-proxy --master=http://192.168.2.191:8080 --proxy-mode=iptables --logtostderr=true
0 R root      83014   1188  0  80   0 - 28165 -      05:20 pts/0    00:00:00 grep --color=auto kube
5. Deploy the skydns service
Edit the skydns YAML files.
They live in kubernetes/cluster/addons/dns/: skydns-rc.yaml.in and skydns-svc.yaml.in.
cp  skydns-rc.yaml.in  skydns-rc.yaml
cp  skydns-svc.yaml.in  skydns-svc.yaml
skydns-rc.yaml
Change the image of all four containers (etcd, kube2sky, skydns, healthz) to the Tenxcloud mirror, for example:
- name: etcd
  image: index.tenxcloud.com/google_containers/etcd-amd64:2.2.1
Also adjust the arguments of the kube2sky and skydns containers.
The complete file follows; mind the indentation:
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v11
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v11
    kubernetes.io/cluster-service: "true"
spec:
  #replicas: {{ pillar['dns_replicas'] }}
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v11
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v11
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: index.tenxcloud.com/google_containers/etcd-amd64:2.2.1
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        #image: gcr.io/google_containers/kube2sky:1.14
        image: index.tenxcloud.com/google_containers/kube2sky:1.14
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            # Kube2sky watches all pods.
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube2sky"
        - --kube_master_url=http://192.168.2.191:8080
        - --domain=cluster.local
        # - --domain={{ pillar['dns_domain'] }}
      - name: skydns
        #image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
        image: index.tenxcloud.com/google_containers/skydns:2015-10-13-8c72f8c
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/skydns"
        - -machines=http://127.0.0.1:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - --domain=cluster.local
        # - -domain={{ pillar['dns_domain'] }}.
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        #image: gcr.io/google_containers/exechealthz:1.0
        image: index.tenxcloud.com/google_containers/exechealthz:1.0
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.{{ pillar['dns_domain'] }} 127.0.0.1 >/dev/null
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      dnsPolicy: Default  # Don't use cluster DNS.
skydns-svc.yaml
The clusterIP must be a fixed address taken from the service CIDR given to the apiserver:
--service-cluster-ip-range=192.168.4.0/24
The complete file follows; mind the indentation:
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  #clusterIP:  {{ pillar['dns_server'] }}
  clusterIP: 192.168.4.155
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
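Whether a chosen clusterIP actually falls inside the apiserver's --service-cluster-ip-range can be checked with a few lines of shell arithmetic; a sketch, using the values from this article:

```shell
# Check that the clusterIP lies inside the service CIDR (sketch).
# Convert a dotted quad to a 32-bit integer.
ip2int() (
  IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
)

NET=192.168.4.0
BITS=24
CLUSTER_IP=192.168.4.155
MASK=$(( (0xFFFFFFFF << (32 - BITS)) & 0xFFFFFFFF ))

# Same network bits under the mask => the IP is inside the range.
if [ $(( $(ip2int "$CLUSTER_IP") & MASK )) -eq $(( $(ip2int "$NET") & MASK )) ]; then
  RESULT=in-range
else
  RESULT=out-of-range
fi
echo "$RESULT"
```

Here 192.168.4.155 is inside 192.168.4.0/24, so the check passes; a clusterIP outside that range would be rejected (or silently unroutable) by the apiserver.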
Update the kubelet options:
add --cluster_dns=192.168.4.155 --cluster_domain=cluster.local
then restart the kubelet service.
Start the skydns service:
kubectl create -f skydns-rc.yaml
kubectl create -f skydns-svc.yaml
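Once the skydns pods are Running, resolution can be verified from inside the cluster with a throwaway pod; a sketch, where the pod name is arbitrary and any reachable busybox image will do:

```
apiVersion: v1
kind: Pod
metadata:
  name: busybox-dns-test
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
```

Create it with kubectl create -f, then run nslookup inside it, e.g. kubectl exec busybox-dns-test -- nslookup kubernetes.default.svc.cluster.local; an answer served from 192.168.4.155 means skydns is resolving cluster names.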
6. Deploy the Kubernetes Dashboard
Edit the dashboard YAML files.
They live in kubernetes/cluster/addons/dashboard/: dashboard-controller.yaml and dashboard-service.yaml.
dashboard-controller.yaml:
Change the image to the Tenxcloud mirror and set --apiserver-host to the actual apiserver address. The complete file follows:
apiVersion: v1
kind: ReplicationController
metadata:
  # Keep the name in sync with image version and
  # gce/coreos/kube-manifests/addons/dashboard counterparts
  name: kubernetes-dashboard-v1.0.1
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    version: v1.0.1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: v1.0.1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gindex.tenxcloud.com/google_containers/kubernetes-dashboard-amd64:v1.0.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
        - --apiserver-host=http://192.168.2.191:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
Start the dashboard service:
kubectl create -f dashboard-controller.yaml
kubectl create -f dashboard-service.yaml
Access the dashboard:
http://192.168.2.191:8080
http://192.168.2.191:8080/ui
The UI may fail to load at this point; inspect the pod:
kubectl get pods --namespace kube-system
Get the dashboard pod name, then:
kubectl describe pod kubernetes-dashboard-v1.0.1-2w3is  --namespace kube-system
The describe output shows the image pull failing with a 404 (note the registry host gindex.tenxcloud.com in the controller YAML above). Either fix the image field, or pull a working image manually and re-tag it to match the configured name:
docker  pull shenshouer/kubernetes-dashboard-amd64:v1.0.1
docker tag  shenshouer/kubernetes-dashboard-amd64:v1.0.1 gindex.tenxcloud.com/google_containers/kubernetes-dashboard-amd64:v1.0.1
Describe the pod again: the error is gone and the pod status changes to Running.
kubectl describe pod kubernetes-dashboard-v1.0.1-2w3is  --namespace kube-system
kubectl get pods --namespace kube-system
Congratulations, Kubernetes is now deployed.
From here you can deploy applications from the Kubernetes UI!

A multi-node cluster is configured in the same way.

