How to Build a Container Management Platform on a Cloud Platform with Domestic CPUs? (Part 2)

Author: ZStack 2018-07-16 22:40:21

Cloud Computing | In Part 1 I shared my hands-on experience with Huaxintong servers built on domestic CPUs and the domestic ZStack cloud platform. This part explains in detail how to build a K8S cluster on ZStack cloud VMs.

As the "ZTE incident" kept escalating, it drew intense attention in China to domestically developed, independently controllable technology. As the operations engineer of my organization, I hoped to find a stable, complete architecture compatible with domestic CPUs for building our IaaS and PaaS platforms, to meet the organization's requirements for security and autonomy. Meeting business needs with a fully domestic approach requires support at both the hardware and the software level, yet domestic offerings are almost all x86-based, so finding a qualifying home-grown solution is genuinely difficult. By chance, however, I came into contact with a domestic chip vendor and a domestic cloud vendor and learned that they had already built a fully domestic cloud platform. I installed and deployed it myself and then deployed a container platform on top of it. Part 1 covered the trial of the Huaxintong servers and the ZStack cloud platform; what follows is a detailed walkthrough of building a K8S cluster on ZStack cloud VMs.

Section 3: Building a K8S Cluster on ZStack Cloud VMs

A quick note on why we did not deploy the K8S cluster directly on physical ARM servers: it comes down to our test scenarios, which require both passing GPU compute cards through to cloud VMs for heavy computation and running a container management platform. Moreover, mainstream K8S clusters abroad usually run inside virtual machines, which brings many benefits: customized resource allocation, rapid creation of K8S node VMs through the cloud platform's API (see the sketch below), and better flexibility and reliability. On the ZStack ARM cloud platform we can build a combined IaaS + PaaS platform that covers different scenarios.
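For illustration only, here is a rough sketch of creating one more node VM from the ZStack command line. It is not part of the original walkthrough; the action names follow ZStack's API (CreateVmInstance and related Query calls), and the UUID values are placeholders you would first look up in your own environment.

# Hypothetical example: add a K8S node VM via zstack-cli.
# Replace the placeholder UUIDs with values from QueryInstanceOffering / QueryImage / QueryL3Network.
zstack-cli LogInByAccount accountName=admin password=password
zstack-cli CreateVmInstance name=K8S-Node4 \
  instanceOfferingUuid=<offering-uuid> \
  imageUuid=<image-uuid> \
  l3NetworkUuids=<l3-network-uuid>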

Given limited space, the following only covers how to deploy a K8S cluster on cloud VMs on the ZStack for ARM platform. The whole deployment takes roughly one hour (mostly because access to some overseas networks is slow).

Cluster environment (hostname, role, IP address, spec, and OS version of each node):

The resources used to build the K8S cluster in this environment are cloud VMs running on a ZStack-based platform:

ZStack cloud VM K8S cluster architecture

1. Preparation

Set the hostname on each of the cloud VMs:

hostnamectl set-hostname K8S-Master

hostnamectl set-hostname K8S-Node1

hostnamectl set-hostname K8S-Node2

hostnamectl set-hostname K8S-Node3

Turn off the swap partition on all cloud VMs, otherwise kubeadm will report an error. This step only needs to be done in the cloud VM environment; it is not required on physical machines.

sudo swapoff -a
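To keep swap off after a reboot as well, the swap entry in /etc/fstab can be commented out. This is a common companion step rather than one of the original commands, and it assumes the fstab entry contains the literal word swap:

# comment out the swap line in /etc/fstab so swap stays off after reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab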

2. Installation and Deployment

2.1 Install Docker

# Step 1: install some required system tools

sudo apt-get update

sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common

# Step 2: install the GPG key

curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -

# Step 3: add the Docker apt repository

sudo add-apt-repository "deb [arch=arm64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

# Step 4: update the package index and install Docker CE

sudo apt-get -y update

sudo apt-get -y install docker-ce

Use DaoCloud to accelerate Docker image pulls:

curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://56d10455.m.daocloud.io
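To confirm the mirror is active, docker info lists the configured registry mirrors. This assumes the DaoCloud script wrote the mirror into the Docker daemon configuration; restart Docker manually if the script did not do so itself:

sudo systemctl restart docker   # only needed if the script did not restart Docker
sudo docker info | grep -A 1 'Registry Mirrors'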

2.2 Install the Go environment

apt-get install -y golang

2.3 Install kubelet, kubeadm, and kubectl

apt-get update && apt-get install -y apt-transport-https

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

cat <<EOF > /etc/apt/sources.list.d/kubernetes.list

deb http://apt.kubernetes.io/ kubernetes-xenial main

EOF

apt-get update

apt-get install -y kubelet kubeadm kubectl
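Optionally, pin the three packages so an unattended apt upgrade cannot move the cluster to a different Kubernetes version later (my own habit, not a step from the original article):

apt-mark hold kubelet kubeadm kubectl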

2.4 Create the cluster with kubeadm

Initialize the Master:

kubeadm init --apiserver-advertise-address 172.120.194.196 --pod-network-cidr 10.244.0.0/16

If nothing goes wrong along the way, the command ends with output similar to the following:

kubeadm join 172.120.194.196:6443 --token oyf6ns.whcoaprs0q7growa --discovery-token-ca-cert-hash sha256:30a459df1b799673ca87f9dcc776f25b9839a8ab4b787968e05edfb6efe6a9d2

This line tells you how to join the other nodes to the K8S cluster.
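The token embedded in the join command expires after 24 hours by default. If you add nodes later, a fresh join command can be printed on the Master at any time; this is standard kubeadm behaviour rather than anything specific to this setup:

kubeadm token create --print-join-command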

2.5 Configure kubectl

kubectl is the command-line tool for managing the K8S cluster, so its runtime environment needs to be set up for the regular user:

su - zstack

sudo mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

echo "source <(kubectl completion bash)" >> ~/.bashrc
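A quick sanity check that kubectl can now reach the API server (assuming the steps above were run as the zstack user):

source ~/.bashrc
kubectl cluster-info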

2.6 Install the Pod network

For Pods in the K8S cluster to communicate with each other, a Pod network must be installed. Several network solutions are available; this test environment uses Flannel.

First download the Flannel yaml file and edit it. The purpose of the edit is to replace the original x86 (amd64) image names with ARM (arm64) ones so that Flannel runs properly in the ZStack ARM cloud environment. The relevant places are the nodeSelector architecture (beta.kubernetes.io/arch: arm64) and the two flannel image references (quay.io/coreos/flannel:v0.10.0-arm64) in the file below.

sudo wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

vim kube-flannel.yml

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
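If you would rather not edit the file by hand, the same substitutions can be made with a one-liner, assuming the upstream file only references the amd64 image tag and nodeSelector:

sed -i 's/amd64/arm64/g' kube-flannel.yml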

sudo kubectl apply -f kube-flannel.yml

Under normal circumstances the command produces output like the following:

clusterrole.rbac.authorization.k8s.io "flannel" created

clusterrolebinding.rbac.authorization.k8s.io "flannel" created

serviceaccount "flannel" created

configmap "kube-flannel-cfg" created

daemonset.extensions "kube-flannel-ds" created

2.7 Join the nodes to the K8S cluster

Run the join command on K8S-Node1, K8S-Node2, and K8S-Node3 respectively:

kubeadm join 172.120.194.196:6443 --token oyf6ns.whcoaprs0q7growa --discovery-token-ca-cert-hash sha256:30a459df1b799673ca87f9dcc776f25b9839a8ab4b787968e05edfb6efe6a9d2

Check the node status with kubectl get nodes:

zstack@K8S-Master:~$ kubectl get nodes

NAME STATUS ROLES AGE VERSION

k8s-master Ready master 49m v1.11.0

k8s-node1 NotReady 4m v1.11.0

k8s-node2 NotReady 4m v1.11.0

k8s-node3 NotReady 4m v1.11.0

If you find the worker nodes in the NotReady state, it is because each node has to start several components, all of which run in Pods and whose images have to be pulled from Google. Check the Pods with the following command:

kubectl get pod --all-namespaces

In the normal case the output looks like this:

NAMESPACE NAME READY STATUS RESTARTS AGE

kube-system coredns-78fcdf6894-49tkw 1/1 Running 0 1h

kube-system coredns-78fcdf6894-gmcph 1/1 Running 0 1h

kube-system etcd-k8s-master 1/1 Running 0 19m

kube-system kube-apiserver-k8s-master 1/1 Running 0 19m

kube-system kube-controller-manager-k8s-master 1/1 Running 0 19m

kube-system kube-flannel-ds-bqx2s 1/1 Running 0 16m

kube-system kube-flannel-ds-jgmjp 1/1 Running 0 16m

kube-system kube-flannel-ds-mxpl8 1/1 Running 0 21m

kube-system kube-flannel-ds-sd6lh 1/1 Running 0 16m

kube-system kube-proxy-cwslw 1/1 Running 0 16m

kube-system kube-proxy-j75fj 1/1 Running 0 1h

kube-system kube-proxy-ptn55 1/1 Running 0 16m

kube-system kube-proxy-zl8mb 1/1 Running 0 16m

kube-system kube-scheduler-k8s-master 1/1 Running 0 19m

Throughout this process, states such as Pending, ContainerCreating, and ImagePullBackOff all mean the Pod is not ready yet; only Running is the normal state. All you can do is wait.
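If a Pod stays in one of those states for a long time, describing it and checking its logs usually reveals the cause, typically a failed image pull. Using one of the Pod names from the listing above as an example:

kubectl -n kube-system describe pod kube-flannel-ds-bqx2s
kubectl -n kube-system logs kube-flannel-ds-bqx2s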

Run kubectl get nodes again to check the node status:

NAME STATUS ROLES AGE VERSION

k8s-master Ready master 1h v1.11.0

k8s-node1 Ready 16m v1.11.0

k8s-node2 Ready 16m v1.11.0

k8s-node3 Ready 16m v1.11.0

When all nodes are in the Ready state, the cluster is ready to use.
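As a quick smoke test (my own addition, not part of the original steps), you can start a small Deployment and confirm its Pods are scheduled across the worker nodes; the official nginx image publishes an arm64 variant, so it runs on this cluster:

kubectl run nginx-test --image=nginx:1.15 --replicas=3
kubectl get pods -o wide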

2.8 Deploy kubernetes-dashboard

Clone the kubernetes-dashboard yaml files:

sudo git clone https://github.com/gh-Devin/kubernetes-dashboard.git

Edit the kubernetes-dashboard yaml file. The key change is the container image in the Deployment: use the arm64 build, k8s.gcr.io/kubernetes-dashboard-arm64:v1.8.3, as shown in the file below.

cd kubernetes-dashboard/

vim kubernetes-dashboard.yaml

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      serviceAccountName: kubernetes-dashboard
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-arm64:v1.8.3
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          #- --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard-admin
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 9090
      targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
---
# ------------------------------------------------------------
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-external
  namespace: kube-system
spec:
  ports:
    - port: 9090
      targetPort: 9090
      nodePort: 30090
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

After the edits are done, run:

kubectl -n kube-system create -f .

Normal output of the command:

serviceaccount "kubernetes-dashboard-admin" created

clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-admin" created

secret "kubernetes-dashboard-certs" created

serviceaccount "kubernetes-dashboard" created

role.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created

rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created

deployment.apps "kubernetes-dashboard" created

service "kubernetes-dashboard-external" created

Then check the status of the kubernetes-dashboard Pod:

kubectl get pod --all-namespaces

NAMESPACE NAME READY STATUS RESTARTS AGE

kube-system kubernetes-dashboard-66885dcb6f-v6qfm 1/1 Running 0 8m

When the status is Running, run the following command to look up the service and its port:

kubectl --namespace=kube-system describe svc kubernetes-dashboard

Name: kubernetes-dashboard-external

Namespace: kube-system

Labels: k8s-app=kubernetes-dashboard

Annotations:

Selector: k8s-app=kubernetes-dashboard

Type: NodePort

IP: 10.111.189.106

Port: 9090/TCP

TargetPort: 9090/TCP

NodePort: 30090/TCP (this is the port used for external access)

Endpoints: 10.244.2.4:9090

Session Affinity: None

External Traffic Policy: Cluster

Events:
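The dashboard is then reachable on that NodePort via any node's IP address; with the Master address used earlier, for example:

# open http://172.120.194.196:30090/ in a browser, or check from the shell:
curl http://172.120.194.196:30090/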

Note: if the dashboard is deployed without the RBAC objects above, logging in to the UI will produce an error.

This is because K8S enables the RBAC access-control policy from version 1.6 onward; it can be configured with kubectl or the Kubernetes API. With RBAC, permissions can be granted directly to users, including the right to manage authorization themselves, so there is no need to touch the Master Node directly. Following the deployment steps above avoids the problem.
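For reference, the repository cloned above already ships a kubernetes-dashboard-admin ServiceAccount and binds it to cluster-admin (both objects appear in the creation output earlier). Creating such a binding by hand would look like the following; it is shown only as an illustration, not as an extra step in this deployment:

kubectl create clusterrolebinding kubernetes-dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:kubernetes-dashboard-admin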

At this point, the ARM-based K8S cluster is fully deployed.

Section 4: Summary

First, some thoughts on installing and deploying ZStack itself. The whole process, from deploying the ZStack for ARM platform to building the business environment on top of it, went smoothly. ZStack is highly productized and very easy to install; following the official deployment documentation, a three-node cloud platform can be set up and initialized within an hour.

ZStack uses a distinctive asynchronous architecture that greatly improves the platform's responsiveness, so large batches of concurrent operations are no longer a headache. The management plane is separated from the business plane, so an unexpected management-node outage does not interrupt workloads. The platform ships with many practical built-in features that made operations much easier during testing. Version upgrades are simple and reliable: cross-version upgrades complete within five minutes, and in my tests running workloads were not affected at all. After upgrading, heterogeneous cluster management is available, meaning a management node running on an ARM server can manage resources in both ARM clusters and x86 clusters; advanced SDN features are supported as well.

When building the K8S cluster on ZStack cloud VMs, our team also compared physical machines against cloud VMs before settling on an approach, and found that deploying K8S on ZStack cloud VMs is more flexible and controllable. Specifically:

1. ZStack cloud VMs are well isolated by design

Anyone familiar with container technology knows that multiple containers share one host kernel, which raises isolation concerns. Although Linux protection mechanisms can now provide a degree of security isolation, it is still not complete isolation. Cloud VMs, by contrast, benefit from virtualization technology and are inherently well isolated, which further improves security. ZStack is built in-house on the KVM virtualization architecture.

2. Benefiting from ZStack's multi-tenancy

If the large number of containers running on physical servers are to be self-managed, that is, each team manages its own container resources, the question becomes how to divide management responsibility when a single physical machine hosts thousands of containers. This is where the cloud platform's multi-tenant management comes in: each tenant is assigned its own cloud VMs and manages its own VMs and container clusters, and permissions for different people can also be controlled. The ZStack for ARM platform tested here can manage resources and permissions along the lines of an enterprise organizational structure, and supports approval workflows that automatically create the requested cloud VMs once approved; reportedly the upcoming ZStack 2.5.0 release will also add resource orchestration.

3. ZStack is flexible and highly automated

With ZStack, cloud VM resources can be tailored to business needs, reducing waste, and the architecture can be adjusted to the workload. For compute-intensive workloads, for example, GPU passthrough can hand a GPU directly to a cloud VM, so compute jobs run quickly without a lot of tedious configuration.

In addition, today's cloud platforms expose API interfaces that third-party applications can call directly, enabling automatic resource scaling based on business load. Physical servers, by contrast, have no complete API; they are mostly managed through IPMI, and each vendor's IPMI differs, making dynamic scaling hard to achieve. On the API front, the ZStack platform exposes its full API, which lets a container cluster scale automatically with business load.

4. Very good reliability

Why say so? It is not hard to see: both planned and unplanned downtime have less business impact. During planned maintenance on a physical server, any business running in a single container is bound to be affected, whereas with a cloud platform live migration can keep the business running through the move. Unplanned outages on physical machines typically cost days of business impact, a loss hard to put into words; with a cloud platform, the interruption shrinks to minutes.

Those are some of the advantages of building a K8S cluster on cloud VMs. There are drawbacks too; in my view the only real one is a slight performance loss, so overall the benefits outweigh the costs. The issue can also be avoided at planning time, for example by placing performance-sensitive container workloads on physical Nodes.

Finally, a few things to watch out for when deploying K8S on ZStack ARM cloud VMs, for reference:

1. In the yaml files fetched by default, the image paths refer to the x86 (amd64) builds; change them to arm64.

2. When creating the cluster with the flannel network mode, --pod-network-cidr must be 10.244.0.0/16, otherwise the Pod network may not work.

3. In a cloud VM environment, be sure to run sudo swapoff -a, otherwise creating the K8S cluster will fail.

That is all I have to share this time; questions and discussion are welcome. (QQ: 410185063; mail: zts@viczhu.com)

