Chapter 1: Dubbo Microservices Overview

  • What is Dubbo?
    • Dubbo is the core framework of Alibaba's SOA service governance solution. It supports 2,000+ services handling 300,000,000+ requests per day and is widely used across the member sites of Alibaba Group.
    • Dubbo is a distributed service framework dedicated to providing a high-performance, transparent RPC remote invocation solution, together with an SOA service governance solution.
    • Simply put, Dubbo is a service framework. If you have no distributed requirements you do not need it; the need for a distributed service framework like Dubbo only arises in distributed systems. At its heart it is about service invocation — plainly put, a distributed framework for remote service calls.
  • What can Dubbo do?
    • Transparent remote method invocation: call remote methods as if they were local, with only simple configuration and no API intrusion.
    • Soft load balancing and fault tolerance: can replace hardware load balancers such as F5 on the intranet, cutting cost and removing single points of failure.
    • Automatic service registration and discovery: no need to hard-code provider addresses; the registry looks up provider IPs by interface, and providers can be added or removed smoothly.
  • The Dubbo framework

Chapter 2: Lab Architecture

1. Architecture diagram:

1. The top row is services outside the K8S cluster

  1. The code repository uses Gitee, which is based on Git
  2. The registry is a cluster of 3 ZooKeeper nodes
  3. Users access the services exposed through Ingress

2. The middle layer is services inside the K8S cluster

  1. Jenkins runs as a container, with its data directory persisted on a shared disk
  2. The whole Dubbo microservice suite is delivered as Pods and communicates through the ZK cluster
  3. Services that must be reachable from outside are exposed through Ingress

3. The bottom layer is the ops host layer

  1. Harbor is the private Docker registry that stores the Docker images
  2. Pod-related YAML files are created in a dedicated directory on the ops host
  3. Inside the K8S cluster, the YAML manifests are applied via the download links served by nginx

2. Delivery notes

Docker can run stateful services, but unless there is a specific need it is better not to.
The same goes for K8S: stateful services such as MySQL and ZK are not recommended inside the cluster.
Therefore the ZooKeeper cluster is created by hand and provided to Dubbo.

Chapter 3: Deploying the ZooKeeper Cluster

  • ZooKeeper is the registry of the Dubbo microservice cluster.
  • Its high-availability mechanism is quorum-based, much like the etcd cluster used by K8S.
  • It is written in Java, so a JDK environment is required.

1. Cluster plan

Hostname Role IP
HDSS7-11.host.com k8s proxy node 1, zk1 10.4.7.11
HDSS7-12.host.com k8s proxy node 2, zk2 10.4.7.12
HDSS7-21.host.com k8s worker node 1, zk3 10.4.7.21
HDSS7-22.host.com k8s worker node 2, jenkins 10.4.7.22
HDSS7-200.host.com k8s ops node (Docker registry) 10.4.7.200

2. Install JDK 1.8 (on the 3 zk hosts)

JDK download

[root@hdss7-11 ~]# mkdir /opt/src
[root@hdss7-11 ~]# mkdir /usr/java
[root@hdss7-11 ~]# cd /opt/src/
[root@hdss7-11 src]# tar xf jdk-8u221-linux-x64.tar.gz -C /usr/java/
[root@hdss7-11 src]# ln -s /usr/java/jdk1.8.0_221/ /usr/java/jdk
[root@hdss7-11 src]# vim /etc/profile
export JAVA_HOME=/usr/java/jdk
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar

[root@hdss7-11 src]# source  /etc/profile
[root@hdss7-11 src]# 
[root@hdss7-11 src]# 
[root@hdss7-11 src]# java -version
java version "1.8.0_221"
Java(TM) SE Runtime Environment (build 1.8.0_221-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.221-b11, mixed mode)
[root@hdss7-11 src]# 

3. Install ZooKeeper (on the 3 zk hosts)

ZooKeeper download

1. Unpack the package

[root@hdss7-11 src]# wget http://archive.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz
[root@hdss7-11 src]# tar xf zookeeper-3.4.14.tar.gz  -C /opt/
[root@hdss7-11 src]# ln -s /opt/zookeeper-3.4.14/ /opt/zookeeper
[root@hdss7-11 src]# mkdir -pv /data/zookeeper/data /data/zookeeper/logs
mkdir: created directory ‘/data’
mkdir: created directory ‘/data/zookeeper’
mkdir: created directory ‘/data/zookeeper/data’
mkdir: created directory ‘/data/zookeeper/logs’

[root@hdss7-11 src]# vim /opt/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/logs
clientPort=2181
server.1=zk1.od.com:2888:3888
server.2=zk2.od.com:2888:3888
server.3=zk3.od.com:2888:3888

Note: the zk configuration is identical on every node.

2. Configure myid

On HDSS7-11.host.com:

[root@hdss7-11 src]# echo "1" > /data/zookeeper/data/myid

On HDSS7-12.host.com:

[root@hdss7-12 src]# echo "2" > /data/zookeeper/data/myid

On HDSS7-21.host.com:

[root@hdss7-21 src]# echo "3" > /data/zookeeper/data/myid

3. DNS records

On HDSS7-11.host.com:

[root@hdss7-11 ~]# vim /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600    ; 10 minutes
@           IN SOA    dns.od.com. dnsadmin.od.com. (
                2021031906 ; serial
                10800      ; refresh (3 hours)
                900        ; retry (15 minutes)
                604800     ; expire (1 week)
                86400      ; minimum (1 day)
                )
                NS   dns.od.com.
$TTL 60    ; 1 minute
dns                A    10.4.7.11
harbor             A    10.4.7.200
k8s-yaml           A    10.4.7.200
traefik            A    10.4.7.10
dashboard          A    10.4.7.10
zk1                A    10.4.7.11
zk2                A    10.4.7.12
zk3                A    10.4.7.21
[root@hdss7-11 ~]# named-checkconf 
[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# 
[root@hdss7-11 ~]# 
[root@hdss7-11 ~]# dig -t A zk2.od.com @10.4.7.11 +short
10.4.7.12
[root@hdss7-11 ~]# dig -t A zk1.od.com @10.4.7.11 +short
10.4.7.11
[root@hdss7-11 ~]# dig -t A zk3.od.com @10.4.7.11 +short
10.4.7.21
[root@hdss7-11 ~]# 

4. Start the nodes one by one

[root@hdss7-11 ~]# /opt/zookeeper/bin/zkServer.sh start
[root@hdss7-12 ~]# /opt/zookeeper/bin/zkServer.sh start
[root@hdss7-21 ~]# /opt/zookeeper/bin/zkServer.sh start

5. Verify

[root@hdss7-11 ~]# netstat  -lntup|grep 2181
tcp6       0      0 :::2181                 :::*                    LISTEN      70129/java          
[root@hdss7-11 ~]# 
[root@hdss7-11 ~]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@hdss7-11 ~]# 
[root@hdss7-12 ~]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: leader
[root@hdss7-12 ~]# 
[root@hdss7-21 ~]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@hdss7-21 ~]# 
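A healthy 3-node ensemble has exactly one node reporting `Mode: leader` and the rest reporting `Mode: follower`. A quick sanity check over the three status outputs above (the modes are inlined here for illustration):

```shell
# Modes reported by the three nodes above (copied inline for the check)
modes='follower
leader
follower'

# count leaders: a healthy ensemble has exactly one
leaders=$(printf '%s\n' "$modes" | grep -c '^leader$')
echo "$leaders"
```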

Chapter 4: Deploying Jenkins

1. Prepare the image

Jenkins official site
Jenkins image

On the ops host, pull the stable release from the official site (2.277.1 here; if this version has problems, the latest version can be used instead). By default CSRF protection cannot be disabled from the UI, so it has to be turned off manually in the Jenkins console.
The fix is to run the following code in the Jenkins Script Console.

// allow disabling CSRF protection
// run in the Script Console
hudson.security.csrf.GlobalCrumbIssuerConfiguration.DISABLE_CSRF_PROTECTION = true
[root@hdss7-200 ~]# docker pull jenkins/jenkins:2.277.1
[root@hdss7-200 ~]# docker images |grep jenkins
jenkins/jenkins                    2.277.1                    a470e85971b2   2 weeks ago     570MB
[root@hdss7-200 ~]# docker tag a470e85971b2  harbor.od.com/public/jenkins:v2.277.1
[root@hdss7-200 ~]# docker push harbor.od.com/public/jenkins:v2.277.1

2. Customize the Dockerfile

Edit the custom Dockerfile on the ops host HDSS7-200.host.com

[root@hdss7-200 ~]#  mkdir -p  /data/dockerfile/jenkins/
[root@hdss7-200 ~]# vim /data/dockerfile/jenkins/Dockerfile
[root@hdss7-200 ~]# cat /data/dockerfile/jenkins/Dockerfile 
FROM harbor.od.com/public/jenkins:v2.277.1
USER root
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
    echo 'Asia/Shanghai' >/etc/timezone
ADD id_rsa /root/.ssh/id_rsa
ADD config.json /root/.docker/config.json
ADD get-docker.sh /get-docker.sh
RUN echo "    StrictHostKeyChecking no" >> /etc/ssh/ssh_config &&\
   /get-docker.sh --mirror Aliyun
[root@hdss7-200 ~]# 

This Dockerfile does the following:

  • Sets the container user to root

  • Sets the container's time zone

  • Adds the SSH private key (used when pulling code over Git; the matching public key must be added to the Git hosting service)

  • Adds the config file for logging in to the self-hosted Harbor registry

  • Modifies the ssh client configuration

  • Installs a Docker client

  • 1. Generate an SSH key pair:

[root@hdss7-200 ~]# ssh-keygen  -t rsa -b 2048 -C "xtyang@hebye.com" -N "" -f /root/.ssh/id_rsa
# list the generated files
[root@hdss7-200 ~]# ll /root/.ssh/
total 8
-rw------- 1 root root 1675 Mar 26 11:24 id_rsa
-rw-r--r-- 1 root root  398 Mar 26 11:24 id_rsa.pub
[root@hdss7-200 ~]# 
# view the generated public key
[root@hdss7-200 ~]# cat /root/.ssh/id_rsa.pub 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCV4BRT1eJSDJnZvCGVrcs8SLdkVfNabhkK5GtvQKxSOgl3bF763MnlJ9WhmD0Fkd26C8xKCaCE6IH1iDYJlANVCdNu1vDVcg17CiarLex9ljv6/m5p8LMFLWnUt+50AqBTZjbHn24HtE+0Qf11Jb2QRDlDvb3DAqlahsO9FnpI+Q7JZdbzUrynwZlF2bFpYMY14Hg2ceUOV1iRQvjfsH44d224wKTE3DBAaI6Tn2T++DUtvqzxenlZwGpoM0w8ipGjzNdzmkuVYKIRF/LNvI5VmYyoCqR/xQr+EOpThGbz+MnTN6JtJelnyWgYfyDH+ifmZBVi1dlZZTxpFwZwtBQR xtyang@hebye.com
[root@hdss7-200 ~]# 
  • 2. Copy the files:
[root@hdss7-200 ~]# cp /root/.ssh/id_rsa /data/dockerfile/jenkins/
[root@hdss7-200 ~]# cp /root/.docker/config.json  /data/dockerfile/jenkins/
[root@hdss7-200 ~]# cd /data/dockerfile/jenkins/ && curl -fsSL get.docker.com -o get-docker.sh && chmod +x get-docker.sh
  • 3. View the docker Harbor config
[root@hdss7-200 jenkins]# cat /root/.docker/config.json 
{
    "auths": {
        "harbor.od.com": {
            "auth": "YWRtaW46SGFyYm9yMTIzNDU="
        }
    }
}[root@hdss7-200 jenkins]# 
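The `auth` value in config.json is not encrypted: it is simply base64 of `username:password` (here the same admin/Harbor12345 credentials used with `kubectl create secret` later). This is easy to verify:

```shell
# the "auth" field is base64("user:password") — decode it to confirm
echo 'YWRtaW46SGFyYm9yMTIzNDU=' | base64 -d
# prints: admin:Harbor12345
```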
  • 4. Check the files
[root@hdss7-200 jenkins]# pwd
/data/dockerfile/jenkins
[root@hdss7-200 jenkins]# ls
config.json  Dockerfile  get-docker.sh  id_rsa
[root@hdss7-200 jenkins]# 

3. Build the custom image

[root@hdss7-200 jenkins]# docker build . -t harbor.od.com/infra/jenkins:v2.277.1
Sending build context to Docker daemon  20.48kB
Step 1/7 : FROM harbor.od.com/public/jenkins:v2.277.1
 ---> a470e85971b2
Step 2/7 : USER root
 ---> Using cache
 ---> bb9835038486
Step 3/7 : RUN /bin/cp usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&    echo 'Asia/Shanghai' >/etc/timezone
 ---> Using cache
 ---> 8af6a00c00fd
Step 4/7 : ADD id_rsa /root/.ssh/id_rsa
 ---> 83ac948a6e95
Step 5/7 : ADD config.json /root/.docker/config.json
 ---> 505df9e8c257
Step 6/7 : ADD get-docker.sh /get-docker.sh
 ---> 71224cfa4bed
Step 7/7 : RUN echo "    StrictHostKeyChecking no" >> /etc/ssh/ssh_config &&   /get-docker.sh --mirror Aliyun
 ---> Running in db556ea0096f
# Executing docker install script, commit: 3d8fe77c2c46c5b7571f94b42793905e5b3e42e4
+ sh -c apt-get update -qq >/dev/null
+ sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
+ sh -c curl -fsSL "https://mirrors.aliyun.com/docker-ce/linux/debian/gpg" | apt-key add -qq - >/dev/null
Warning: apt-key output should not be parsed (stdout is not a terminal)
+ sh -c echo "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/debian buster stable" > /etc/apt/sources.list.d/docker.list
+ sh -c apt-get update -qq >/dev/null
+ [ -n  ]
+ sh -c apt-get install -y -qq --no-install-recommends docker-ce >/dev/null
debconf: delaying package configuration, since apt-utils is not installed
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:

  sudo usermod -aG docker your-user

Remember that you will have to log out and back in for this to take effect!

WARNING: Adding a user to the "docker" group will grant the ability to run
         containers which can be used to obtain root privileges on the
         docker host.
         Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
         for more information.
Removing intermediate container db556ea0096f
 ---> b5b80b38192e
Successfully built b5b80b38192e
Successfully tagged harbor.od.com/infra/jenkins:v2.277.1
[root@hdss7-200 jenkins]# 

4. Create the infra project

In the Harbor web UI, create the infra project. Note: make it private.

5. Push the image to Harbor

[root@hdss7-200 jenkins]# docker push harbor.od.com/infra/jenkins:v2.277.1
  • Log in to gitee.com and add the public key so that Jenkins can pull code from Gitee, then test the Jenkins image

[root@hdss7-200 jenkins]# docker run --rm harbor.od.com/infra/jenkins:v2.277.1 ssh -i /root/.ssh/id_rsa  -T git@gitee.com
Warning: Permanently added 'gitee.com,212.64.62.183' (ECDSA) to the list of known hosts.
Hi Yodo1! You've successfully authenticated, but GITEE.COM does not provide shell access.
[root@hdss7-200 jenkins]# 

6. Create the Kubernetes namespace and registry credentials

On any worker node:

[root@hdss7-21 ~]# kubectl create ns infra
namespace/infra created
[root@hdss7-21 ~]# kubectl create secret docker-registry harbor --docker-server=harbor.od.com --docker-username=admin --docker-password=Harbor12345 -n infra
secret/harbor created
[root@hdss7-21 ~]# 

7. Prepare shared storage

On the ops host and on all worker nodes:

[root@hdss7-21 ~]# yum install nfs-utils -y
[root@hdss7-22 ~]# yum install nfs-utils -y
[root@hdss7-200 ~]# yum install nfs-utils -y
  • Configure the NFS server
    On the ops host HDSS7-200.host.com:
[root@hdss7-200 ~]# cat /etc/exports 
/data/nfs-volume 10.4.7.0/24(rw,no_root_squash)
[root@hdss7-200 ~]# mkdir -p /data/nfs-volume/jenkins_home
[root@hdss7-200 ~]# 
  • Start the NFS service
[root@hdss7-200 ~]# systemctl start nfs
[root@hdss7-200 ~]# systemctl enable nfs

8. Prepare the resource manifests

On the ops host HDSS7-200.host.com:

[root@hdss7-200 ~]# mkdir /data/k8s-yaml/jenkins
[root@hdss7-200 ~]# cd  /data/k8s-yaml/jenkins
[root@hdss7-200 jenkins]# vim dp.yaml
[root@hdss7-200 jenkins]# cat dp.yaml 
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: jenkins
  namespace: infra
  labels: 
    name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: jenkins
  template:
    metadata:
      labels: 
        app: jenkins 
        name: jenkins
    spec:
      volumes:
      - name: data
        nfs: 
          server: hdss7-200
          path: /data/nfs-volume/jenkins_home
      - name: docker
        hostPath: 
          path: /run/docker.sock
          type: ''
      containers:
      - name: jenkins
        image: harbor.od.com/infra/jenkins:v2.277.1
        ports:
        - containerPort: 8080
          protocol: TCP
        env:
        - name: JAVA_OPTS
          value: -Xmx512m -Xms512m
        resources:
          limits: 
            cpu: 500m
            memory: 1Gi
          requests: 
            cpu: 500m
            memory: 1Gi
        volumeMounts:
        - name: data
          mountPath: /var/jenkins_home
        - name: docker
          mountPath: /run/docker.sock
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
[root@hdss7-200 jenkins]# 
[root@hdss7-200 jenkins]# cat svc.yaml 
kind: Service
apiVersion: v1
metadata: 
  name: jenkins
  namespace: infra
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  selector:
    app: jenkins
[root@hdss7-200 jenkins]#
[root@hdss7-200 jenkins]# cat ingress.yaml 
kind: Ingress
apiVersion: extensions/v1beta1
metadata: 
  name: jenkins
  namespace: infra
spec:
  rules:
  - host: jenkins.od.com
    http:
      paths:
      - path: /
        backend: 
          serviceName: jenkins
          servicePort: 80
[root@hdss7-200 jenkins]# 

9. Apply the resource manifests

  • Run on any worker node
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/jenkins/dp.yaml
deployment.extensions/jenkins created
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/jenkins/svc.yaml
service/jenkins created
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/jenkins/ingress.yaml
ingress.extensions/jenkins created
[root@hdss7-21 ~]# 
  • Verify
[root@hdss7-21 ~]# kubectl get all -n infra
NAME                          READY   STATUS    RESTARTS   AGE
pod/jenkins-b69779cdc-mhgzj   1/1     Running   0          3m7s


NAME              TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)   AGE
service/jenkins   ClusterIP   192.168.101.119   <none>        80/TCP    3m4s


NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/jenkins   1/1     1            1           3m7s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/jenkins-b69779cdc   1         1         1       3m7s
[root@hdss7-21 ~]# kubectl get pods -n infra
NAME                      READY   STATUS    RESTARTS   AGE
jenkins-b69779cdc-mhgzj   1/1     Running   0          3m42s
[root@hdss7-21 ~]# kubectl get svc -n infra
NAME      TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)   AGE
jenkins   ClusterIP   192.168.101.119   <none>        80/TCP    3m46s
[root@hdss7-21 ~]# kubectl get ingress -n infra
NAME      HOSTS            ADDRESS   PORTS   AGE
jenkins   jenkins.od.com             80      3m50s
[root@hdss7-21 ~]# 

10. DNS record

HDSS7-11.host.com

[root@hdss7-11 ~]# tail -1 /var/named/od.com.zone 
jenkins            A    10.4.7.10
[root@hdss7-11 ~]# named-checkconf 
[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# dig -t A jenkins.od.com @10.4.7.11 +short
10.4.7.10
[root@hdss7-11 ~]# 

11. Speed up Jenkins plugin updates

[root@hdss7-200 ~]# cd /data/nfs-volume/jenkins_home/updates/
[root@hdss7-200 ~]# sed -i 's/http:\/\/updates.jenkins-ci.org\/download/https:\/\/mirrors.tuna.tsinghua.edu.cn\/jenkins/g' default.json && sed -i 's/http:\/\/www.google.com/https:\/\/www.baidu.com/g' default.json
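The long sed one-liner is easier to read when broken down: it rewrites every plugin download URL in default.json from the official update site to the Tsinghua mirror, and swaps the google.com connectivity check for baidu.com. A demonstration of the first substitution on a sample line (the plugin name is hypothetical), written with `#` as the sed delimiter to avoid all the escaping:

```shell
# a sample line as it would appear in default.json (hypothetical plugin)
line='"url": "http://updates.jenkins-ci.org/download/plugins/blueocean.hpi"'

# same substitution as above, with '#' as delimiter instead of escaped '/'
echo "$line" | sed 's#http://updates.jenkins-ci.org/download#https://mirrors.tuna.tsinghua.edu.cn/jenkins#g'
```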

12. Access via browser

Open http://jenkins.od.com/ in a browser

13. Configure Jenkins

1. Initial admin password

[root@hdss7-21 ~]# kubectl get pods -n infra
NAME                      READY   STATUS    RESTARTS   AGE
jenkins-b69779cdc-mhgzj   1/1     Running   0          28m
[root@hdss7-21 ~]# kubectl exec jenkins-b69779cdc-mhgzj  /bin/cat /var/jenkins_home/secrets/initialAdminPassword -n infra 
4a55ea5ad1684285b6b435e57de6bb03

[root@hdss7-200 ~]# cat /data/nfs-volume/jenkins_home/secrets/initialAdminPassword 
4a55ea5ad1684285b6b435e57de6bb03
[root@hdss7-200 ~]# 

2. Skip plugin installation

3. Create an admin user

4. Adjust security options



5. Install the Blue Ocean plugin

  • Manage Jenkins
  • Manage Plugins
  • Available
  • Blue Ocean

6. Final checks

1. Check the docker client inside the Jenkins container

Enter the Jenkins container and check whether the docker client is usable
[root@hdss7-22 ~]# kubectl get pods -n infra -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE                NOMINATED NODE   READINESS GATES
jenkins-b69779cdc-mhgzj   1/1     Running   0          73m   172.7.21.8   hdss7-21.host.com   <none>           <none>
[root@hdss7-22 ~]# kubectl  exec -it jenkins-b69779cdc-mhgzj /bin/bash -n infra
root@jenkins-b69779cdc-mhgzj:/# docker ps -a
CONTAINER ID   IMAGE                               COMMAND                  CREATED             STATUS             PORTS                NAMES
db1ca6f4e736   b5b80b38192e                        "/sbin/tini -- /usr/…"   About an hour ago   Up About an hour                        k8s_jenkins_jenkins-b69779cdc-mhgzj_infra_2ec2f1b8-4a46-45b8-973b-e876ea0992b3_0
bcff03d042ee   harbor.od.com/public/pause:latest   "/pause"                 About an hour ago   Up About an hour                        k8s_POD_jenkins-b69779cdc-mhgzj_infra_2ec2f1b8-4a46-45b8-973b-e876ea0992b3_0
4e17a3ce4645   harbor.od.com/public/nginx          "nginx -g 'daemon of…"   20 hours ago        Up 20 hours                             k8s_nginx_nginx-dp-5dfc689474-fwqtr_kube-public_3678c27f-a086-448a-8ff5-fef77bb9688c_0
369bc42adcca   harbor.od.com/public/heapster       "/opt/bitnami/heapst…"   20 hours ago        Up 20 hours                             k8s_heapster_heapster-b5b9f794-79qb2_kube-system_192ad1b3-644e-4885-a3f6-0949847a025f_0
1464f974d0a0   harbor.od.com/public/coredns        "/coredns -conf /etc…"   20 hours ago        Up 20 hours                             k8s_coredns_coredns-6b6c4f9648-jhtpx_kube-system_c0ed1d19-03b5-4eaa-9549-675e80778357_0
c9d61f208976   fcac9aa03fd6                        "/dashboard --insecu…"   20 hours ago        Up 20 hours                             k8s_kubernetes-dashboard_kubernetes-dashboard-56b587f595-lmtpn_kube-system_72c284fe-113e-4f53-8b8e-11386efe4f03_0
697cd083ffac   harbor.od.com/public/pause:latest   "/pause"                 20 hours ago        Up 20 hours                             k8s_POD_nginx-dp-5dfc689474-fwqtr_kube-public_3678c27f-a086-448a-8ff5-fef77bb9688c_0
5f5010b46fe0   harbor.od.com/public/pause:latest   "/pause"                 20 hours ago        Up 20 hours                             k8s_POD_heapster-b5b9f794-79qb2_kube-system_192ad1b3-644e-4885-a3f6-0949847a025f_0
fcaa03c53378   harbor.od.com/public/pause:latest   "/pause"                 20 hours ago        Up 20 hours                             k8s_POD_kubernetes-dashboard-56b587f595-lmtpn_kube-system_72c284fe-113e-4f53-8b8e-11386efe4f03_0
eafeb87fb982   harbor.od.com/public/pause:latest   "/pause"                 20 hours ago        Up 20 hours                             k8s_POD_coredns-6b6c4f9648-jhtpx_kube-system_c0ed1d19-03b5-4eaa-9549-675e80778357_0
f1bd716b5003   94dc08f300e5                        "nginx -g 'daemon of…"   20 hours ago        Up 20 hours                             k8s_my-nginx_nginx-ds-76l6b_default_c386d5bd-3336-47a7-88da-60108853c0b8_0
9f08fe7130c0   add5fac61ae5                        "/entrypoint.sh --ap…"   20 hours ago        Up 20 hours                             k8s_traefik-ingress_traefik-ingress-hb6sz_kube-system_06920938-6fd7-4091-8afc-9698ee0bb1e0_0
634ebbdb2bca   harbor.od.com/public/pause:latest   "/pause"                 20 hours ago        Up 20 hours                             k8s_POD_nginx-ds-76l6b_default_c386d5bd-3336-47a7-88da-60108853c0b8_0
fe5e08c65d79   harbor.od.com/public/pause:latest   "/pause"                 20 hours ago        Up 20 hours        0.0.0.0:81->80/tcp   k8s_POD_traefik-ingress-hb6sz_kube-system_06920938-6fd7-4091-8afc-9698ee0bb1e0_0
root@jenkins-b69779cdc-mhgzj:/# 
# The container shares the host's docker socket, so what is listed here is the docker workload running on host 10.4.7.21

2. Check the SSH key inside the Jenkins container

root@jenkins-b69779cdc-mhgzj:/# ssh -i /root/.ssh/id_rsa  -T git@gitee.com
Warning: Permanently added 'gitee.com,212.64.62.183' (ECDSA) to the list of known hosts.
Hi Yodo1! You've successfully authenticated, but GITEE.COM does not provide shell access.
root@jenkins-b69779cdc-mhgzj:/# 

3. Check that the Jenkins container can log in to harbor.od.com

root@jenkins-b69779cdc-mhgzj:/# docker login harbor.od.com
Authenticating with existing credentials…
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
root@jenkins-b69779cdc-mhgzj:/#

4. Deploy Maven

Maven official download address:
maven3 (https://archive.apache.org/dist/maven/maven-3/)
Deploy the binary release on the ops host hdss7-200.host.com, version maven-3.6.1 here.
[root@hdss7-200 updates]# cd /opt/src/
[root@hdss7-200 src]# wget https://archive.apache.org/dist/maven/maven-3/3.6.1/binaries/apache-maven-3.6.1-bin.tar.gz
# 8u232 is the Java version inside the Jenkins image
[root@hdss7-200 ~]# mkdir /data/nfs-volume/jenkins_home/maven-3.6.1-8u232
[root@hdss7-200 src]# tar xf apache-maven-3.6.1-bin.tar.gz  -C /data/nfs-volume/jenkins_home/maven-3.6.1-8u232/
[root@hdss7-200 src]# cd /data/nfs-volume/jenkins_home/maven-3.6.1-8u232/
[root@hdss7-200 maven-3.6.1-8u232]# mv apache-maven-3.6.1/* .
[root@hdss7-200 maven-3.6.1-8u232]# rm -rf apache-maven-3.6.1/
  • Configure the Aliyun mirror
[root@hdss7-200 ~]# vim /data/nfs-volume/jenkins_home/maven-3.6.1-8u232/conf/settings.xml 
 <mirrors>
    <mirror>
        <id>nexus-aliyun</id>
        <mirrorOf>*</mirrorOf>
        <name>Nexus aliyun</name>
        <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
   </mirror>
  </mirrors>

Chapter 5: Building the Base Image for Dubbo Microservices

On the ops host HDSS7-200.host.com

  • Pull the base images
# jre8
[root@hdss7-200 ~]# docker pull hebye/jre8:8u112
[root@hdss7-200 ~]# docker tag  fa3a085d6ef1 harbor.od.com/public/jre8:8u112
[root@hdss7-200 ~]# docker push harbor.od.com/public/jre8:8u112

# jre7 — a second base image can be built from this in the same way
[root@hdss7-200 ~]# docker pull  hebye/jre7:7u80
  • Custom Dockerfile
[root@hdss7-200 ~]# mkdir /data/dockerfile/jre8
[root@hdss7-200 ~]# cd /data/dockerfile/jre8/
[root@hdss7-200 jre8]#  vim Dockerfile
FROM harbor.od.com/public/jre8:8u112
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
    echo 'Asia/Shanghai' >/etc/timezone
ADD config.yml /opt/prom/config.yml
ADD jmx_javaagent-0.3.1.jar /opt/prom/
WORKDIR /opt/project_dir
ADD entrypoint.sh /entrypoint.sh
CMD ["/entrypoint.sh"]
  • Create config.yml
[root@hdss7-200 jre8]# more config.yml 
---
rules:
  - pattern: '.*'
[root@hdss7-200 jre8]# 
  • Download jmx_javaagent-0.3.1.jar
[root@hdss7-200 jre8]# wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.3.1/jmx_prometheus_javaagent-0.3.1.jar -O jmx_javaagent-0.3.1.jar
  • Create entrypoint.sh
[root@hdss7-200 jre8]# more entrypoint.sh 
#!/bin/sh
M_OPTS="-Duser.timezone=Asia/Shanghai -javaagent:/opt/prom/jmx_javaagent-0.3.1.jar=$(hostname -i):${M_PORT:-"12346"}:/opt/prom/config.yml"
C_OPTS=${C_OPTS}
JAR_BALL=${JAR_BALL}
exec java -jar ${M_OPTS} ${C_OPTS} ${JAR_BALL}
[root@hdss7-200 jre8]# 
[root@hdss7-200 jre8]# chmod  +x entrypoint.sh
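entrypoint.sh relies on POSIX parameter expansion: `${M_PORT:-"12346"}` means "use $M_PORT if it is set and non-empty, otherwise fall back to 12346", so the JMX exporter port can be overridden per deployment with a plain environment variable. A quick illustration (20880 is just an arbitrary override value here):

```shell
# ${VAR:-default} substitutes a default when VAR is unset or empty
unset M_PORT
echo "${M_PORT:-12346}"    # M_PORT unset -> prints 12346

M_PORT=20880
echo "${M_PORT:-12346}"    # M_PORT set   -> prints 20880
```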
  • Create the private base project in Harbor

  • Build the Dubbo service base image
[root@hdss7-200 jre8]# docker build . -t harbor.od.com/base/jre8:8u112
[root@hdss7-200 jre8]# docker push harbor.od.com/base/jre8:8u112
# Note: the jre7 base image is built in the same way; omitted here

Chapter 6: Continuously Building and Delivering the Dubbo Provider with Jenkins

1. Configure a new job

  • create new jobs

  • Enter an item name

    dubbo-demo

  • Pipeline –> ok

  • Discard old builds

    Days to keep builds:3
    Max # of builds to keep:30

  • This project is parameterized

    1.Add Parameter –> String Parameter

    Name:app_name
    Default Value:
    Description:project name,e.g:dubbo-demo-service

    2.Add Parameter -> String Parameter

    Name : image_name
    Default Value :
    Description : project docker image name. e.g: app/dubbo-demo-service

    3.Add Parameter -> String Parameter

    Name : git_repo
    Default Value :
    Description : project git repository. e.g: https://gitee.com/yodo1/dubbo-demo-service.git

    4.Add Parameter -> String Parameter

    Name : git_ver
    Default Value :
    Description : git commit id of the project.

    5.Add Parameter -> String Parameter

    Name : add_tag
    Default Value :
    Description : project docker image tag, date_timestamp recommended. e.g: 190117_1920

    6.Add Parameter -> String Parameter

    Name : mvn_dir
    Default Value : ./
    Description : project maven directory. e.g: ./

    7.Add Parameter -> String Parameter

    Name : target_dir
    Default Value : ./target
    Description : the relative path of target file such as .jar or .war package. e.g: ./dubbo-server/target

    8.Add Parameter -> String Parameter

    Name : mvn_cmd
    Default Value : mvn clean package -Dmaven.test.skip=true
    Description : maven command. e.g: mvn clean package -e -q -Dmaven.test.skip=true

    9.Add Parameter -> Choice Parameter

    Name : base_image
    Default Value :

    • base/jre7:7u80
    • base/jre8:8u112
      Description : project base image list in harbor.od.com.

    10.Add Parameter -> Choice Parameter

    Name : maven
    Default Value :

    • 3.6.1-8u232
    • 3.2.5-6u025
    • 2.2.1-6u025
      Description : different maven edition.

2. Pipeline Script

pipeline {
  agent any
  stages {
    stage('pull') { //get project code from repo
      steps {
        sh "git clone ${params.git_repo} ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.app_name}/${env.BUILD_NUMBER} && git checkout ${params.git_ver}"
      }
    }
    stage('build') { //exec mvn cmd
      steps {
        sh "cd ${params.app_name}/${env.BUILD_NUMBER} && /var/jenkins_home/maven-${params.maven}/bin/${params.mvn_cmd}"
      }
    }
    stage('package') { //move jar file into project_dir
      steps {
        sh "cd ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.target_dir} && mkdir project_dir && mv *.jar ./project_dir"
      }
    }
    stage('image') { //build image and push to registry
      steps {
        writeFile file: "${params.app_name}/${env.BUILD_NUMBER}/Dockerfile", text: """FROM harbor.od.com/${params.base_image}
ADD ${params.target_dir}/project_dir /opt/project_dir"""
        sh "cd ${params.app_name}/${env.BUILD_NUMBER} && docker build -t harbor.od.com/${params.image_name}:${params.git_ver}_${params.add_tag} . && docker push harbor.od.com/${params.image_name}:${params.git_ver}_${params.add_tag}"
      }
    }
  }
}
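The image tag the pipeline pushes is `${git_ver}_${add_tag}`; with the recommended date_timestamp convention for add_tag, the full image reference can be composed like this (a sketch — the variable names simply mirror the job parameters):

```shell
# compose the image reference the same way the 'image' stage does
git_ver="master"
add_tag="$(date +%y%m%d_%H%M)"       # date_timestamp, e.g. 210328_2045
image_name="app/dubbo-demo-service"

full_image="harbor.od.com/${image_name}:${git_ver}_${add_tag}"
echo "$full_image"
```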

Chapter 7: Delivering the Dubbo Microservices to the Kubernetes Cluster

1. The Dubbo service provider (dubbo-demo-service)

Note: create the private app project in Harbor in advance; the image build below pushes into it

Note: on the master branch, the zk settings live in dubbo-server/src/main/java/config.properties; modify them there if needed

dubbo.registry=zookeeper://zk1.od.com:2181?backup=zk2.od.com:2181,zk3.od.com:2181
dubbo.port=20880

1. Run a CI build through Jenkins

Fill in / select in order:

app_name

dubbo-demo-service

image_name

app/dubbo-demo-service

git_repo

https://gitee.com/yodo1/dubbo-demo-service.git

git_ver

master

add_tag

210328_2045

mvn_dir

./

target_dir

./dubbo-server/target

mvn_cmd

mvn clean package -Dmaven.test.skip=true

base_image

base/jre8:8u112

maven

3.6.1-8u232

Click Build and wait for the build to finish

2. Check the image in the Harbor registry

3. Prepare the k8s resource manifests (the CD step)

[root@hdss7-200 ~]# mkdir /data/k8s-yaml/dubbo-demo-service/
[root@hdss7-200 ~]# cd /data/k8s-yaml/dubbo-demo-service/
[root@hdss7-200 dubbo-demo-service]# vim dp.yaml
[root@hdss7-200 dubbo-demo-service]# 
[root@hdss7-200 dubbo-demo-service]# 
[root@hdss7-200 dubbo-demo-service]# cat dp.yaml 
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-service
  namespace: app
  labels: 
    name: dubbo-demo-service
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: dubbo-demo-service
  template:
    metadata:
      labels: 
        app: dubbo-demo-service
        name: dubbo-demo-service
    spec:
      containers:
      - name: dubbo-demo-service
        image: harbor.od.com/app/dubbo-demo-service:master_210328_2045
        ports:
        - containerPort: 20880
          protocol: TCP
        env:
        - name: JAR_BALL
          value: dubbo-server.jar
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
[root@hdss7-200 dubbo-demo-service]# 

4. Apply the resource manifests

Run on any k8s worker node:

  • Create the Kubernetes namespace and registry credentials
[root@hdss7-21 ~]# kubectl create ns app
namespace/app created
[root@hdss7-21 ~]# kubectl create secret docker-registry harbor --docker-server=harbor.od.com --docker-username=admin --docker-password=Harbor12345 -n app
secret/harbor created
[root@hdss7-21 ~]# 
  • Apply the manifests
[root@hdss7-21 ~]# kubectl  apply -f http://k8s-yaml.od.com/dubbo-demo-service/dp.yaml
deployment.extensions/dubbo-demo-service created
[root@hdss7-21 ~]# 

5. Check the running container and the registration data in zk

[root@hdss7-21 ~]# /opt/zookeeper/bin/zkCli.sh  -server localhost:2181
Connecting to localhost:2181
[zk: localhost:2181(CONNECTED) 0] ls /
[dubbo, zookeeper]
[root@hdss7-21 ~]# kubectl get pods -n app
NAME                                  READY   STATUS    RESTARTS   AGE
dubbo-demo-service-7976d866cb-7269q   1/1     Running   0          2m41s
[root@hdss7-21 ~]# 

6. View the dubbo-demo-service logs

2. Deliver the dubbo-monitor monitoring service to K8S

dubbo-monitor source package
dubbo-monitor is a service that monitors state in zookeeper; dubbo-admin is an alternative with the same effect

1. Download and unpack the source

On the ops host HDSS7-200.host.com

[root@hdss7-200 /]# cd  /opt/src/
[root@hdss7-200 src]# wget https://github.com/Jeromefromcn/dubbo-monitor/archive/master.zip
[root@hdss7-200 src]# unzip master.zip 
[root@hdss7-200 src]# mv dubbo-monitor-master/ /data/dockerfile/dubbo-monitor

2. Modify the configuration

  • Edit the main dubbo-monitor configuration file
[root@hdss7-200 src]# cd /data/dockerfile/dubbo-monitor/
[root@hdss7-200 dubbo-monitor]# vim dubbo-monitor-simple/conf/dubbo_origin.properties
dubbo.container=log4j,spring,registry,jetty
dubbo.application.name=dubbo-monitor
dubbo.application.owner=HeByeEdu
dubbo.registry.address=zookeeper://zk1.od.com:2181?backup=zk2.od.com:2181,zk3.od.com:2181
dubbo.protocol.port=20880
dubbo.jetty.port=8080
dubbo.jetty.directory=/dubbo-monitor-simple/monitor
dubbo.charts.directory=/dubbo-monitor-simple/charts
dubbo.statistics.directory=/dubbo-monitor-simple/statistics
dubbo.log4j.file=logs/dubbo-monitor-simple.log
dubbo.log4j.level=WARN
  • Edit the dubbo-monitor start script
[root@hdss7-200 dubbo-monitor]# sed -r -i -e '/^nohup/{p;:a;N;$!ba;d}'  ./dubbo-monitor-simple/bin/start.sh && sed  -r -i -e "s%^nohup(.*)%exec \1%"  ./dubbo-monitor-simple/bin/start.sh

Tip: remember to also delete the trailing & from the exec line.

  • Tune the JVM options (optional); the relevant block in start.sh:
    JAVA_MEM_OPTS=" -server -Xmx128m -Xms128m -Xmn32m -XX:PermSize=16m -Xss256k -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 "
else
    JAVA_MEM_OPTS=" -server -Xms128m -Xmx128m -XX:PermSize=16m -XX:SurvivorRatio=2 -XX:+UseParallelGC "
fi
  • Example: the complete start script after the changes
[root@hdss7-200 dubbo-monitor]# cat  dubbo-monitor-simple/bin/start.sh 
#!/bin/bash
sed -e "s/{ZOOKEEPER_ADDRESS}/$ZOOKEEPER_ADDRESS/g" /dubbo-monitor-simple/conf/dubbo_origin.properties > /dubbo-monitor-simple/conf/dubbo.properties
cd `dirname $0`
BIN_DIR=`pwd`
cd ..
DEPLOY_DIR=`pwd`
CONF_DIR=$DEPLOY_DIR/conf

SERVER_NAME=`sed '/dubbo.application.name/!d;s/.*=//' conf/dubbo.properties | tr -d '\r'`
SERVER_PROTOCOL=`sed '/dubbo.protocol.name/!d;s/.*=//' conf/dubbo.properties | tr -d '\r'`
SERVER_PORT=`sed '/dubbo.protocol.port/!d;s/.*=//' conf/dubbo.properties | tr -d '\r'`
LOGS_FILE=`sed '/dubbo.log4j.file/!d;s/.*=//' conf/dubbo.properties | tr -d '\r'`

if [ -z "$SERVER_NAME" ]; then
    SERVER_NAME=`hostname`
fi

PIDS=`ps -f | grep java | grep "$CONF_DIR" |awk '{print $2}'`
if [ -n "$PIDS" ]; then
    echo "ERROR: The $SERVER_NAME already started!"
    echo "PID: $PIDS"
    exit 1
fi

if [ -n "$SERVER_PORT" ]; then
    SERVER_PORT_COUNT=`netstat -tln | grep $SERVER_PORT | wc -l`
    if [ $SERVER_PORT_COUNT -gt 0 ]; then
        echo "ERROR: The $SERVER_NAME port $SERVER_PORT already used!"
        exit 1
    fi
fi

LOGS_DIR=""
if [ -n "$LOGS_FILE" ]; then
    LOGS_DIR=`dirname $LOGS_FILE`
else
    LOGS_DIR=$DEPLOY_DIR/logs
fi
if [ ! -d $LOGS_DIR ]; then
    mkdir $LOGS_DIR
fi
STDOUT_FILE=$LOGS_DIR/stdout.log

LIB_DIR=$DEPLOY_DIR/lib
LIB_JARS=`ls $LIB_DIR|grep .jar|awk '{print "'$LIB_DIR'/"$0}'|tr "\n" ":"`

JAVA_OPTS=" -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true "
JAVA_DEBUG_OPTS=""
if [ "$1" = "debug" ]; then
    JAVA_DEBUG_OPTS=" -Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=n "
fi
JAVA_JMX_OPTS=""
if [ "$1" = "jmx" ]; then
    JAVA_JMX_OPTS=" -Dcom.sun.management.jmxremote.port=1099 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false "
fi
JAVA_MEM_OPTS=""
BITS=`java -version 2>&1 | grep -i 64-bit`
if [ -n "$BITS" ]; then
    JAVA_MEM_OPTS=" -server -Xmx128m -Xms128m -Xmn32m -XX:PermSize=16m -Xss256k -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 "
else
    JAVA_MEM_OPTS=" -server -Xms128m -Xmx128m -XX:PermSize=16m -XX:SurvivorRatio=2 -XX:+UseParallelGC "
fi

echo -e "Starting the $SERVER_NAME ...\c"
exec  java $JAVA_OPTS $JAVA_MEM_OPTS $JAVA_DEBUG_OPTS $JAVA_JMX_OPTS -classpath $CONF_DIR:$LIB_JARS com.alibaba.dubbo.container.Main > $STDOUT_FILE 2>&1
[root@hdss7-200 dubbo-monitor]# 
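Two pieces of text processing in the script above are easy to verify in isolation: the {ZOOKEEPER_ADDRESS} templating done by the first line, and the sed one-liners that read keys out of dubbo.properties. A small sketch against throwaway files (the /tmp paths and sample values are illustrative only):

```shell
#!/bin/bash
# 1) First line of start.sh: template the registry address from an env var.
#    Note the value must not contain '/', since sed uses '/' as its delimiter here.
export ZOOKEEPER_ADDRESS='zk1.od.com:2181'
echo 'dubbo.registry.address=zookeeper://{ZOOKEEPER_ADDRESS}' > /tmp/origin.properties
sed -e "s/{ZOOKEEPER_ADDRESS}/$ZOOKEEPER_ADDRESS/g" /tmp/origin.properties > /tmp/dubbo.properties

# 2) The SERVER_NAME/SERVER_PORT lines: keep only the matching key,
#    strip everything up to '=', drop any carriage return.
cat >> /tmp/dubbo.properties <<'EOF'
dubbo.application.name=dubbo-monitor
dubbo.protocol.port=20880
EOF
SERVER_NAME=$(sed '/dubbo.application.name/!d;s/.*=//' /tmp/dubbo.properties | tr -d '\r')
SERVER_PORT=$(sed '/dubbo.protocol.port/!d;s/.*=//' /tmp/dubbo.properties | tr -d '\r')
echo "registry: $(head -1 /tmp/dubbo.properties)"
echo "name=$SERVER_NAME port=$SERVER_PORT"
```

The tr -d '\r' matters when the properties file was ever edited on Windows; a stray CR in SERVER_PORT would break the later netstat port check.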

3. Build the image

  • Prepare the Dockerfile (code already provided)
[root@hdss7-200 dubbo-monitor]# cat  Dockerfile 
FROM jeromefromcn/docker-alpine-java-bash
MAINTAINER Jerome Jiang
COPY dubbo-monitor-simple/ /dubbo-monitor-simple/
CMD /dubbo-monitor-simple/bin/start.sh
[root@hdss7-200 dubbo-monitor]# 
  • Build and push the image
[root@hdss7-200 dubbo-monitor]# docker build . -t harbor.od.com/infra/dubbo-monitor:latest
[root@hdss7-200 dubbo-monitor]# docker push harbor.od.com/infra/dubbo-monitor:latest

4. Configure DNS resolution

On the DNS host HDSS7-11.host.com:

[root@hdss7-11 ~]# tail -1 /var/named/od.com.zone 
dubbo-monitor       A    10.4.7.10
[root@hdss7-11 ~]# named-checkconf 
[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# dig -t A dubbo-monitor.od.com @10.4.7.11 +short
10.4.7.10
[root@hdss7-11 ~]# 

5. Prepare the K8S resource manifests

On the ops host HDSS7-200.host.com:

  • Create the directory
[root@hdss7-200 ~]# mkdir /data/k8s-yaml/dubbo-monitor
  • dp.yaml
[root@hdss7-200 ~]# more /data/k8s-yaml/dubbo-monitor/dp.yaml 
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-monitor
  namespace: infra
  labels: 
    name: dubbo-monitor
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: dubbo-monitor
  template:
    metadata:
      labels: 
        app: dubbo-monitor
        name: dubbo-monitor
    spec:
      containers:
      - name: dubbo-monitor
        image: harbor.od.com/infra/dubbo-monitor:latest
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 20880
          protocol: TCP
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
[root@hdss7-200 ~]# 
  • svc.yaml
[root@hdss7-200 ~]# more /data/k8s-yaml/dubbo-monitor/svc.yaml 
kind: Service
apiVersion: v1
metadata: 
  name: dubbo-monitor
  namespace: infra
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  selector: 
    app: dubbo-monitor
[root@hdss7-200 ~]# 
  • ingress.yaml
[root@hdss7-200 ~]# more /data/k8s-yaml/dubbo-monitor/ingress.yaml 
kind: Ingress
apiVersion: extensions/v1beta1
metadata: 
  name: dubbo-monitor
  namespace: infra
spec:
  rules:
  - host: dubbo-monitor.od.com
    http:
      paths:
      - path: /
        backend: 
          serviceName: dubbo-monitor
          servicePort: 8080
[root@hdss7-200 ~]# 

6. Apply the resource manifests

[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dubbo-monitor/dp.yaml
deployment.extensions/dubbo-monitor created
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dubbo-monitor/svc.yaml
service/dubbo-monitor created
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dubbo-monitor/ingress.yaml
ingress.extensions/dubbo-monitor created
[root@hdss7-21 ~]# 

7. Access in a browser

http://dubbo-monitor.od.com

3. Build the dubbo-consumer service

The dubbo-demo-service created earlier is the microservice provider; now create a microservice consumer.
Public repository (microservice provider): https://gitee.com/yodo1/dubbo-demo-service.git
Private repository (microservice consumer): git@gitee.com:yodo1/dubbo-demo-web.git
Note: the zk connection settings live in dubbo-client/src/main/java/config.properties; if they need to change, edit them there:

dubbo.registry=zookeeper://zk1.od.com:2181?backup=zk2.od.com:2181,zk3.od.com:2181
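The registry URL above names one primary zk node, with the other two listed after ?backup= for failover. How a client-side split of that URL might look, as a pure-shell sketch (the variable names are made up for illustration):

```shell
#!/bin/bash
# Split the dubbo registry URL into primary and backup addresses.
ADDR='zookeeper://zk1.od.com:2181?backup=zk2.od.com:2181,zk3.od.com:2181'
HOSTS=${ADDR#zookeeper://}        # strip the scheme
PRIMARY=${HOSTS%%\?*}             # everything before '?': the primary node
BACKUPS=${ADDR#*backup=}          # comma-separated backup nodes
echo "primary: $PRIMARY"
echo "backups: $BACKUPS"
```

Dubbo's zk client fails over to the backup list when the primary is unreachable, which is why all three zk cluster members appear in the single registry URL.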

1. Configure the pipeline

The dubbo-demo project was already configured in Jenkins earlier; only the parameters need to be filled in.

Fill in / select the following in order:

Parameter name	Parameter value
app_name dubbo-demo-consumer
image_name app/dubbo-demo-consumer
git_repo git@gitee.com:yodo1/dubbo-demo-web.git
git_ver master
add_tag 210328_2210
mvn_dir ./
target_dir ./dubbo-client/target
mvn_cmd mvn clean package -e -q -Dmaven.test.skip=true
base_image base:jre8:8u112
maven 3.6.1-8u232

2. Check the image in the harbor registry

3. Configure DNS resolution

On the DNS host HDSS7-11.host.com:

[root@hdss7-11 ~]# tail -1 /var/named/od.com.zone 
demo           A    10.4.7.10
[root@hdss7-11 ~]# named-checkconf 
[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# dig -t A demo.od.com @10.4.7.11 +short
10.4.7.10
[root@hdss7-11 ~]# 

4. Prepare the K8S resource manifests

On the ops host HDSS7-200.host.com, prepare the resource manifests:

  • Create the directory
[root@hdss7-200 ~]#  mkdir /data/k8s-yaml/dubbo-demo-consumer
[root@hdss7-200 ~]# cd /data/k8s-yaml/dubbo-demo-consumer/
[root@hdss7-200 dubbo-demo-consumer]# 
  • dp.yaml
[root@hdss7-200 dubbo-demo-consumer]# more dp.yaml 
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-consumer
  namespace: app
  labels: 
    name: dubbo-demo-consumer
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: dubbo-demo-consumer
  template:
    metadata:
      labels: 
        app: dubbo-demo-consumer
        name: dubbo-demo-consumer
    spec:
      containers:
      - name: dubbo-demo-consumer
        image: harbor.od.com/app/dubbo-demo-consumer:master_210328_2210
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 20880
          protocol: TCP
        env:
        - name: JAR_BALL
          value: dubbo-client.jar
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
[root@hdss7-200 dubbo-demo-consumer]# 
  • svc.yaml
[root@hdss7-200 dubbo-demo-consumer]# more svc.yaml 
kind: Service
apiVersion: v1
metadata: 
  name: dubbo-demo-consumer
  namespace: app
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  selector: 
    app: dubbo-demo-consumer
  clusterIP: None
  type: ClusterIP
  sessionAffinity: None
[root@hdss7-200 dubbo-demo-consumer]# 
  • ingress.yaml
[root@hdss7-200 dubbo-demo-consumer]# more ingress.yaml 
kind: Ingress
apiVersion: extensions/v1beta1
metadata: 
  name: dubbo-demo-consumer
  namespace: app
spec:
  rules:
  - host: demo.od.com
    http:
      paths:
      - path: /
        backend: 
          serviceName: dubbo-demo-consumer
          servicePort: 8080
[root@hdss7-200 dubbo-demo-consumer]#

5. Apply the resource manifests

[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dubbo-demo-consumer/dp.yaml
deployment.extensions/dubbo-demo-consumer created
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dubbo-demo-consumer/svc.yaml
service/dubbo-demo-consumer created
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dubbo-demo-consumer/ingress.yaml
ingress.extensions/dubbo-demo-consumer created
[root@hdss7-21 ~]# 

6. Check the container status and dubbo-monitor

[root@hdss7-21 ~]# kubectl get pods -n app
NAME                                  READY   STATUS    RESTARTS   AGE
dubbo-demo-consumer-795469c58-hgzhf   1/1     Running   0          50s
dubbo-demo-service-7976d866cb-7269q   1/1     Running   0          82m
[root@hdss7-21 ~]# 

http://dubbo-monitor.od.com

7. Access in a browser

http://demo.od.com/hello?name=xtyang

Chapter 8: Maintaining the Dubbo Microservice Cluster in Practice

  • Update (rolling update)

    • Modify the code and push to git (release)

    • Run CI with Jenkins

    • Modify and apply the k8s resource manifests (or operate directly in the k8s dashboard)

  1. Modify the code

    dubbo-client/src/main/java/com/od/dubbotest/action/HelloAction.java

  2. Build with Jenkins

    Release using this commit id: c9db2975

  3. Check the generated image

  4. Release

Use the image harbor.od.com/app/dubbo-demo-consumer:c9db2975_210328_2240: modify dp.yaml, or operate directly in the k8s dashboard to release.
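The tag in that image name follows the pipeline's convention: the short commit hash, then the add_tag timestamp, joined by an underscore. A trivial sketch of assembling such a tag (the variable names are hypothetical, not the actual Jenkinsfile parameters):

```shell
#!/bin/bash
# Assemble an image tag of the form <commit>_<yymmdd_hhmm>.
GIT_HASH='c9db2975'        # short commit id being released
ADD_TAG='210328_2240'      # build date_time supplied by the operator
IMAGE="harbor.od.com/app/dubbo-demo-consumer:${GIT_HASH}_${ADD_TAG}"
echo "$IMAGE"
```

Embedding the commit id in the tag makes every release traceable back to the exact code that produced it, and makes rollback a one-line edit of dp.yaml.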

  • Scaling (scale out)
    • Operate directly in the k8s dashboard
    • Scale from the command line
```shell
[root@hdss7-21 ~]# kubectl scale deployment dubbo-demo-consumer --replicas=2 -n app
[root@hdss7-21 ~]# kubectl get pods -n app
NAME                                  READY   STATUS    RESTARTS   AGE
dubbo-demo-consumer-795469c58-fxh84   1/1     Running   0          42s
dubbo-demo-consumer-795469c58-hgzhf   1/1     Running   1          29m
dubbo-demo-service-7976d866cb-7269q   1/1     Running   0          111m
[root@hdss7-21 ~]#
```

Chapter 9: K8S Catastrophic Failure Test

One of the running machines goes down one day:

```shell
[root@hdss7-21 ~]# halt
```

Accessing the service at this point returns a brief Bad Gateway.

  • 1. Remove the failed node from K8S (this triggers the self-healing mechanism)
    [root@hdss7-22 ~]# kubectl get nodes
    NAME                STATUS     ROLES         AGE     VERSION
    hdss7-21.host.com   NotReady   master,node   3d18h   v1.15.12
    hdss7-22.host.com   Ready      master,node   3d18h   v1.15.12
    [root@hdss7-22 ~]# kubectl delete node hdss7-21.host.com 
    node "hdss7-21.host.com" deleted
    [root@hdss7-22 ~]# kubectl get nodes
    NAME                STATUS   ROLES         AGE     VERSION
    hdss7-22.host.com   Ready    master,node   3d18h   v1.15.12
    [root@hdss7-22 ~]# 
  • 2. Decide whether the node also needs to be removed from the load balancer
# On the load balancer, comment out the configuration for the failed node
  • 3. Once the machine is repaired and its services start normally, it rejoins the cluster automatically; then re-apply the role labels
[root@hdss7-21 ~]# kubectl get nodes
NAME                STATUS   ROLES         AGE    VERSION
hdss7-21.host.com   Ready    <none>        8h     v1.15.12
hdss7-22.host.com   Ready    master,node   4d2h   v1.15.12
[root@hdss7-21 ~]# kubectl label node hdss7-21.host.com node-role.kubernetes.io/master=
node/hdss7-21.host.com labeled
[root@hdss7-21 ~]# kubectl label node hdss7-21.host.com node-role.kubernetes.io/node=
node/hdss7-21.host.com labeled
[root@hdss7-21 ~]# kubectl get nodes
NAME                STATUS   ROLES         AGE    VERSION
hdss7-21.host.com   Ready    master,node   8h     v1.15.12
hdss7-22.host.com   Ready    master,node   4d2h   v1.15.12
[root@hdss7-21 ~]# 
  • 4. Based on the test results, decide whether the docker engine needs a restart
systemctl restart docker
  • 5. Rebalance POD load across nodes as needed
[root@hdss7-21 ~]# kubectl get pods -n kube-system  -o wide
NAME                                    READY   STATUS    RESTARTS   AGE     IP            NODE                NOMINATED NODE   READINESS GATES
coredns-6b6c4f9648-rj98r                1/1     Running   0          10m     172.7.22.10   hdss7-22.host.com   <none>           <none>
heapster-b5b9f794-2dwwv                 1/1     Running   0          10m     172.7.22.8    hdss7-22.host.com   <none>           <none>
kubernetes-dashboard-56b587f595-vrb9h   1/1     Running   0          10m     172.7.22.9    hdss7-22.host.com   <none>           <none>
traefik-ingress-7tf9l                   1/1     Running   0          3m24s   172.7.21.2    hdss7-21.host.com   <none>           <none>
traefik-ingress-pzhlv                   1/1     Running   1          3d18h   172.7.22.2    hdss7-22.host.com   <none>           <none>
[root@hdss7-21 ~]# kubectl  delete pods kubernetes-dashboard-56b587f595-vrb9h -n kube-system
pod "kubernetes-dashboard-56b587f595-vrb9h" deleted
[root@hdss7-21 ~]# kubectl get pods -n kube-system  -o wide
NAME                                    READY   STATUS    RESTARTS   AGE     IP            NODE                NOMINATED NODE   READINESS GATES
coredns-6b6c4f9648-rj98r                1/1     Running   0          12m     172.7.22.10   hdss7-22.host.com   <none>           <none>
heapster-b5b9f794-2dwwv                 1/1     Running   0          12m     172.7.22.8    hdss7-22.host.com   <none>           <none>
kubernetes-dashboard-56b587f595-cc8lc   1/1     Running   0          6s      172.7.21.4    hdss7-21.host.com   <none>           <none>
traefik-ingress-7tf9l                   1/1     Running   0          4m42s   172.7.21.2    hdss7-21.host.com   <none>           <none>
traefik-ingress-pzhlv                   1/1     Running   1          3d18h   172.7.22.2    hdss7-22.host.com   <none>           <none>
  • 6. Summary:
    1. Delete the failed node from k8s; self-healing takes over
    2. Comment out the failed node's ip on the load balancer
    3. Rejoin the repaired machine to the cluster
    4. Re-apply labels and rebalance pods across nodes
Document updated: 2021-06-17 10:10   Author: xtyang