Keeping Your Website Running Seamlessly: Keepalived High Availability with Nginx in Practice

马哥Linux运维 (Source: 马哥Linux运维) 2024-11-27 09:08

Contents

Keepalived high availability (Nginx)

Introduction to Keepalived

Key functions of Keepalived

Keepalived HA architecture diagram

How Keepalived works

Using Keepalived to make an Nginx load balancer highly available

Split-brain

Causes of split-brain

Common remedies for split-brain

Monitoring for split-brain

Introduction to Keepalived

Keepalived official site
Keepalived was originally designed for the LVS load-balancing software, to manage and monitor the state of the service nodes in an LVS cluster; VRRP support for high availability was added later. As a result, besides managing LVS, Keepalived can also serve as a high-availability solution for other services, such as Nginx, HAProxy, and MySQL.

Keepalived implements high availability mainly through the VRRP protocol. VRRP stands for Virtual Router Redundancy Protocol; it was created to eliminate the single point of failure of static routing, ensuring the network keeps running without interruption when individual nodes go down.

So Keepalived does three things: it configures and manages LVS, it health-checks the nodes behind LVS, and it provides high availability for system network services.

Key functions of Keepalived

Keepalived has three important functions:

Managing the LVS load-balancing software
Health-checking LVS cluster nodes
Providing high availability (failover) for system network services

Keepalived HA architecture diagram

(Figure: Keepalived high-availability architecture diagram)

How Keepalived works

The members of a Keepalived HA pair communicate via VRRP, so let's start with VRRP:

VRRP, in full Virtual Router Redundancy Protocol, was created to eliminate the single point of failure of static routing.

VRRP uses an election mechanism to hand the routing task to one of the VRRP routers.

VRRP uses IP multicast (default multicast address 224.0.0.18) for communication between the members of an HA pair.

In operation, the master node sends packets and the backup node receives them; when the backup stops receiving the master's packets, it starts a takeover procedure and claims the master's resources. There can be multiple backups, ranked by priority election, but in everyday Keepalived operations a single master/backup pair is the norm.

VRRP supports authenticating its packets, but the Keepalived project still recommends configuring the authentication type and password in plain text.

With VRRP covered, here is how the Keepalived service itself works:

Keepalived HA members communicate via VRRP, and VRRP determines master and backup by election: the master has the higher priority, so while it is up it holds all the resources and the backup waits. When the master fails, the backup takes over the master's resources and serves in its place.

Between Keepalived instances, only the one acting as master keeps sending VRRP advertisements to tell the backup it is still alive, and the backup does not preempt the master. When the master becomes unavailable, i.e. the backup no longer hears its advertisements, the backup starts the relevant services and takes over the resources to keep the business running. Takeover can complete in under a second.
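The election rule described above (highest priority wins) can be sketched as a toy bash function. The node names and priority values below are made up for illustration; real VRRP priorities range 1-254, and ties are broken by comparing source IP addresses, which this sketch ignores.

```shell
# Toy sketch of VRRP's election rule: the candidate with the highest
# priority becomes MASTER. Names and priorities here are hypothetical.
elect_master() {
  # args: one "name:priority" pair per candidate; prints the winner's name
  printf '%s\n' "$@" | sort -t: -k2 -rn | head -n1 | cut -d: -f1
}

elect_master master:100 backup:90   # prints "master"
```

This mirrors the configuration later in the article, where the master carries priority 100 and the backup 90.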

Using Keepalived to make an Nginx load balancer highly available

Environment:

OS          Hostname  IP
CentOS 8.5  master    192.168.222.138
CentOS 8.5  backup    192.168.222.139

The high-availability virtual IP (VIP) for this walkthrough is 192.168.222.133.
Installing Keepalived (packages come from the Aliyun mirrors)
Configuring Keepalived on the master

Disable the firewall and SELinux:
[root@master ~]# systemctl stop firewalld.service 
[root@master ~]# vim /etc/selinux/config 
SELINUX=disabled
[root@master ~]# setenforce 0
[root@master ~]# systemctl disable --now firewalld.service 
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Configure the network repo:
[root@master ~]# dnf -y install wget
[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
[root@master yum.repos.d]#sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
Install the EPEL repo:
[root@master yum.repos.d]#dnf install -y https://mirrors.aliyun.com/epel/epel-release-latest-8.noarch.rpm
[root@master yum.repos.d]#sed -i 's|^#baseurl=https://download.example/pub|baseurl=https://mirrors.aliyun.com|' /etc/yum.repos.d/epel*
[root@master yum.repos.d]#sed -i 's|^metalink|#metalink|' /etc/yum.repos.d/epel*
[root@master yum.repos.d]# ls
CentOS-Base.repo   epel-next-testing.repo  epel-playground.repo       epel-testing.repo
epel-modular.repo  epel-next.repo          epel-testing-modular.repo  epel.repo
Search for keepalived:
[root@master yum.repos.d]# cd
[root@master ~]# dnf list all |grep keepalived
Failed to set locale, defaulting to C.UTF-8
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
(the warning above is repeated several more times)
keepalived.x86_64                                                 2.1.5-6.el8                                            AppStream   
Install keepalived:
[root@master ~]# dnf -y install keepalived
Inspect the configuration file:
[root@master ~]# ls /etc/keepalived/
keepalived.conf
Inspect the files the package installed:
[root@master ~]# rpm -ql keepalived 
/etc/keepalived     // configuration directory
/etc/keepalived/keepalived.conf   // the main configuration file
/etc/sysconfig/keepalived
/usr/bin/genhash
/usr/lib/.build-id
/usr/lib/.build-id/0a
/usr/lib/.build-id/0a/410997e11c666114ca6d785e58ff0cc248744e
/usr/lib/.build-id/6f
/usr/lib/.build-id/6f/ba0d6bad6cb5ff7b074e703849ed93bebf4a0f
/usr/lib/systemd/system/keepalived.service  // the service unit file
/usr/libexec/keepalived
/usr/sbin/keepalived
/usr/share/doc/keepalived
/usr/share/doc/keepalived/AUTHOR
/usr/share/doc/keepalived/CONTRIBUTORS
/usr/share/doc/keepalived/COPYING
/usr/share/doc/keepalived/ChangeLog
/usr/share/doc/keepalived/README
/usr/share/doc/keepalived/TODO
/usr/share/doc/keepalived/keepalived.conf.HTTP_GET.port
/usr/share/doc/keepalived/keepalived.conf.IPv6
/usr/share/doc/keepalived/keepalived.conf.PING_CHECK
/usr/share/doc/keepalived/keepalived.conf.SMTP_CHECK
/usr/share/doc/keepalived/keepalived.conf.SSL_GET
/usr/share/doc/keepalived/keepalived.conf.SYNOPSIS
/usr/share/doc/keepalived/keepalived.conf.UDP_CHECK
/usr/share/doc/keepalived/keepalived.conf.conditional_conf
/usr/share/doc/keepalived/keepalived.conf.fwmark
/usr/share/doc/keepalived/keepalived.conf.inhibit
/usr/share/doc/keepalived/keepalived.conf.misc_check
/usr/share/doc/keepalived/keepalived.conf.misc_check_arg
/usr/share/doc/keepalived/keepalived.conf.quorum
/usr/share/doc/keepalived/keepalived.conf.sample
/usr/share/doc/keepalived/keepalived.conf.status_code
/usr/share/doc/keepalived/keepalived.conf.track_interface
/usr/share/doc/keepalived/keepalived.conf.virtual_server_group
/usr/share/doc/keepalived/keepalived.conf.virtualhost
/usr/share/doc/keepalived/keepalived.conf.vrrp
/usr/share/doc/keepalived/keepalived.conf.vrrp.localcheck
/usr/share/doc/keepalived/keepalived.conf.vrrp.lvs_syncd
/usr/share/doc/keepalived/keepalived.conf.vrrp.routes
/usr/share/doc/keepalived/keepalived.conf.vrrp.rules
/usr/share/doc/keepalived/keepalived.conf.vrrp.scripts
/usr/share/doc/keepalived/keepalived.conf.vrrp.static_ipaddress
/usr/share/doc/keepalived/keepalived.conf.vrrp.sync
/usr/share/man/man1/genhash.1.gz
/usr/share/man/man5/keepalived.conf.5.gz
/usr/share/man/man8/keepalived.8.gz
/usr/share/snmp/mibs/KEEPALIVED-MIB.txt
/usr/share/snmp/mibs/VRRP-MIB.txt
/usr/share/snmp/mibs/VRRPv3-MIB.txt


Install keepalived on the backup server in the same way

Disable the firewall and SELinux:
[root@backup ~]# systemctl stop firewalld.service 
[root@backup ~]# vim /etc/selinux/config 
SELINUX=disabled
[root@backup ~]# setenforce 0
[root@backup ~]# systemctl disable --now firewalld.service 
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Configure the network repo:
[root@backup ~]# dnf -y install wget
[root@backup ~]# cd /etc/yum.repos.d/
[root@backup yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
[root@backup yum.repos.d]#sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
Install the EPEL repo:
[root@backup yum.repos.d]#dnf install -y https://mirrors.aliyun.com/epel/epel-release-latest-8.noarch.rpm
[root@backup yum.repos.d]#sed -i 's|^#baseurl=https://download.example/pub|baseurl=https://mirrors.aliyun.com|' /etc/yum.repos.d/epel*
[root@backup yum.repos.d]#sed -i 's|^metalink|#metalink|' /etc/yum.repos.d/epel*
[root@backup yum.repos.d]# ls
CentOS-Base.repo   epel-next-testing.repo  epel-playground.repo       epel-testing.repo
epel-modular.repo  epel-next.repo          epel-testing-modular.repo  epel.repo
Search for keepalived:
[root@backup yum.repos.d]# cd
[root@backup ~]# dnf list all |grep keepalived
Failed to set locale, defaulting to C.UTF-8
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
(the warning above is repeated several more times)
keepalived.x86_64                                                 2.1.5-6.el8                                            AppStream   
Install keepalived:
[root@backup ~]# dnf -y install keepalived
Inspect the configuration file:
[root@backup ~]# ls /etc/keepalived/
keepalived.conf
Inspect the files the package installed:
[root@backup ~]# rpm -ql keepalived 
...(output identical to the master node)

Install nginx on both the master and the backup
Install nginx on the master

[root@master ~]# dnf -y install nginx
[root@master ~]# cd /usr/share/nginx/html/
[root@master html]# ls
404.html  50x.html  index.html  nginx-logo.png  poweredby.png
[root@master html]# echo 'master' > index.html
[root@master html]# systemctl start nginx
[root@master html]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:111                 0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:80                  0.0.0.0:*                    
LISTEN     0          32              192.168.122.1:53                  0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                      [::]:111                    [::]:*                    
LISTEN     0          128                      [::]:80                     [::]:*                    
LISTEN     0          128                      [::]:22                     [::]:*                    
[root@master html]# systemctl enable nginx
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
// on the master node, nginx must be enabled at boot

Install nginx on the backup

[root@backup ~]# dnf -y install nginx
[root@backup ~]# cd /usr/share/nginx/html/
[root@backup html]# ls
404.html  50x.html  index.html  nginx-logo.png  poweredby.png
[root@backup html]# echo 'backup' > index.html
[root@backup html]# systemctl start nginx
[root@backup html]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:80                  0.0.0.0:*                    
LISTEN     0          128                      [::]:22                     [::]:*                    
LISTEN     0          128                      [::]:80                     [::]:*                    
// on the backup node, nginx is not enabled at boot

Try accessing from a browser to make sure the nginx service on master responds normally

(Screenshots: the master page returns "master" and the backup page returns "backup")

Keepalived configuration
Configure Keepalived on the master

[root@master html]# cd /etc/keepalived/
[root@master keepalived]# ls
keepalived.conf
[root@master keepalived]# mv keepalived.conf{,-bak}
[root@master keepalived]# ls
keepalived.conf-bak                 // back up the original config file
[root@master keepalived]# dnf -y install vim
[root@master keepalived]# vim keepalived.conf  // write a new config file
[root@master keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb01
}

vrrp_instance VI_1 {        // the instance name must match on master and backup
    state BACKUP
    interface ens33      // network interface
    virtual_router_id 51
    priority 100     // higher than the backup's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu   // password (can be randomly generated)
    }
    virtual_ipaddress {
        192.168.222.133    // the high-availability virtual IP (VIP)
    }
    }
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.138 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.222.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@master keepalived]# ls
keepalived.conf  keepalived.conf-bak
[root@master keepalived]# systemctl enable --now keepalived
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
[root@master keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 002983:57 brd ffffff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:0051:2f brd ffffff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:0051:2f brd ffffff:ff
// keepalived on the backup node has not been started yet
[root@master keepalived]# scp keepalived.conf 192.168.222.139:/etc/keepalived
The authenticity of host '192.168.222.139 (192.168.222.139)' can't be established.
ECDSA key fingerprint is SHA256:anVVbTlEIzA1E8rB7IbLzaf7t9oQjB0qFP6Dd/ijnJI.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.222.139' (ECDSA) to the list of known hosts.
root@192.168.222.139's password: 
keepalived.conf                                                    100%  875   905.2KB/s   00:00    
// copy this config file to the backup node; the master and backup configs are almost identical and only need a few tweaks
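As the config comments note, auth_pass can be randomly generated. One way is with openssl, a sketch under the assumption that openssl is installed: Keepalived's PASS authentication only uses the first 8 characters of the secret, so 4 random bytes rendered as hex fills the field exactly.

```shell
# Generate an 8-character shared secret suitable for auth_pass.
# keepalived's PASS authentication uses only the first 8 characters,
# so 4 random bytes printed as hex (8 characters) fits exactly.
secret=$(openssl rand -hex 4)
echo "auth_pass $secret"
```

Whatever value you use, it must be identical on master and backup, or the backup will ignore the master's advertisements.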

Configure Keepalived on the backup

[root@backup html]# cd /etc/keepalived/
[root@backup keepalived]# ls
keepalived.conf
[root@backup keepalived]# mv keepalived.conf{,-bak}
[root@backup keepalived]# ls
keepalived.conf-bak               // back up the original config file
[root@backup keepalived]# dnf -y install vim
[root@backup keepalived]# ls     // the config file copied over from the master has arrived
keepalived.conf  keepalived.conf-bak
[root@backup keepalived]# vim keepalived.conf    // adjust it for the backup
[root@backup keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb02    
}

vrrp_instance VI_1 {       // the instance name must match on master and backup
    state BACKUP
    interface ens33      // network interface
    virtual_router_id 51
    priority 90     // lower than the master's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu   // password (can be randomly generated)
    }
    virtual_ipaddress {
        192.168.222.133    // the high-availability virtual IP (VIP)
    }
    }
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.138 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.222.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@backup keepalived]# systemctl enable --now keepalived
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.

Check where the VIP is
On the MASTER

[root@master keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ffffff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:0051:2f brd ffffff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:0051:2f brd ffffff:ff
// the VIP is on the master node

On the BACKUP

[root@backup keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ffffff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever
// the VIP is not on the backup node

Testing
Stop the keepalived service on master, start the nginx and keepalived services on backup, then check which node holds the VIP
master:

[root@master keepalived]# systemctl stop keepalived.service 

backup:

[root@backup keepalived]# systemctl start nginx.service
[root@backup keepalived]# systemctl start keepalived.service
[root@backup keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ffffff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever

(Screenshot: the VIP now serves the backup page)

// you can see that backup is now the master
Then start the keepalived service on master again and re-check which node holds the VIP
master:

[root@master keepalived]# systemctl start keepalived.service 
[root@master keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ffffff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:0051:2f brd ffffff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:0051:2f brd ffffff:ff

backup

[root@backup keepalived]# systemctl stop nginx.service 
// for this test, nginx on backup must be stopped
[root@backup keepalived]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                      [::]:22                     [::]:*                    
[root@backup keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ffffff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever

(Screenshot: the VIP serves the master page again)

// you can see that master is still the master
Letting Keepalived monitor the Nginx load balancer
Keepalived monitors the state of the Nginx load balancer through scripts
Write the scripts on master

[root@master keepalived]# cd
[root@master ~]# mkdir /scripts
[root@master ~]# cd /scripts/
[root@master scripts]# vim check_nginx.sh
[root@master scripts]# cat check_nginx.sh
#!/bin/bash
nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep 'nginx'|wc -l)
if [ $nginx_status -lt 1 ];then
    systemctl stop keepalived
fi
[root@master scripts]# chmod +x check_nginx.sh 
[root@master scripts]# ll
total 4
-rwxr-xr-x. 1 root root 142 Oct  8 23:21 check_nginx.sh
[root@master scripts]# vim notify.sh
[root@master scripts]# cat notify.sh
#!/bin/bash
case "$1" in
    master)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep 'nginx'|wc -l)
        if [ $nginx_status -lt 1 ];then
            systemctl start nginx
        fi
    ;;
    backup)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep 'nginx'|wc -l)
        if [ $nginx_status -gt 0 ];then
            systemctl stop nginx
        fi
    ;;
    *)
        echo "Usage:$0 master|backup VIP"
    ;;
esac
[root@master scripts]# chmod +x notify.sh 
[root@master scripts]# ll
total 8
-rwxr-xr-x. 1 root root 142 Oct  8 23:21 check_nginx.sh
-rwxr-xr-x. 1 root root 383 Oct  8 23:31 notify.sh
[root@master scripts]# scp check_nginx.sh 192.168.222.139:/scripts/
root@192.168.222.139's password: 
check_nginx.sh                                                     100%  142   113.6KB/s   00:00    
[root@master scripts]# scp notify.sh 192.168.222.139:/scripts/
root@192.168.222.139's password: 
notify.sh                                                          100%  383   244.7KB/s   00:00    
// copy these scripts into the /scripts directory on the backup node
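check_nginx.sh above stops keepalived after a single failed process-count probe. A slightly sturdier pattern is to retry the probe a few times before acting; the sketch below factors that retry loop into a function (probe_with_retries is a made-up name, and the probe command is whatever you pass in, e.g. a pgrep or curl check on a real node).

```shell
# Retry a probe command a few times; succeed as soon as one attempt does.
# In check_nginx.sh this could wrap the process check before stopping keepalived.
probe_with_retries() {
  # $1 = number of attempts; remaining args = probe command
  local tries=$1; shift
  local i
  for ((i = 1; i <= tries; i++)); do
    "$@" && return 0
    sleep 0   # on a real node: sleep a second or two between attempts
  done
  return 1
}

# Example on a real node (not run here):
#   probe_with_retries 3 pgrep -x nginx || systemctl stop keepalived
```

Retrying avoids failing over on a momentary blip, such as nginx being restarted by a package upgrade.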

Set up the scripts on backup

[root@backup keepalived]# cd
[root@backup ~]# mkdir /scripts
[root@backup ~]# cd /scripts/
[root@backup scripts]# ll
total 8
-rwxr-xr-x. 1 root root 142 Oct  8 23:39 check_nginx.sh
-rwxr-xr-x. 1 root root 383 Oct  8 23:36 notify.sh

Add the monitoring scripts to the Keepalived configuration
Configure the master's Keepalived

[root@master scripts]# cd
[root@master ~]# vim /etc/keepalived/keepalived.conf
[root@master ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb01
}

vrrp_script nginx_check {   // add this block
    script "/scripts/check_nginx.sh"
    interval 5
    weight -20
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33      
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu
    }
    virtual_ipaddress {
        192.168.222.133
    }
    track_script {    // add this block
        nginx_check
    }
    notify_master "/scripts/notify.sh master 192.168.222.133"   
    notify_backup "/scripts/notify.sh backup 192.168.222.133"
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.138 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.222.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@master ~]# systemctl restart keepalived.service
[root@master ~]# systemctl restart nginx.service

Configure the backup's Keepalived
The backup does not need to check whether nginx is healthy; it starts nginx when promoted to MASTER and stops it when demoted to BACKUP

[root@backup scripts]# cd
[root@backup ~]# vim /etc/keepalived/keepalived.conf
[root@backup ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb02
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33      
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu
    }
    virtual_ipaddress {
        192.168.222.133
    }
    notify_master "/scripts/notify.sh master 192.168.222.133" // add
    notify_backup "/scripts/notify.sh backup 192.168.222.133" // add
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.138 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.222.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@backup ~]# systemctl restart keepalived.service 
[root@backup ~]# systemctl restart nginx.service 

Testing
Check the state during normal operation

master:
[root@master ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ffffff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:0051:2f brd ffffff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:0051:2f brd ffffff:ff
[root@master ~]# curl 192.168.222.133
master
backup:
[root@backup ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ffffff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever
Stop nginx on the master:
[root@master ~]# systemctl stop nginx.service 
[root@master ~]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:111                 0.0.0.0:*                    
LISTEN     0          32              192.168.122.1:53                  0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                      [::]:111                    [::]:*                    
LISTEN     0          128                      [::]:22                     [::]:*                    

State after nginx is stopped on master

master:
[root@master ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ffffff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:0051:2f brd ffffff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:0051:2f brd ffffff:ff
backup:
[root@backup ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ffffff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever
[root@backup ~]# curl 192.168.222.133
backup

Split-brain

In a high-availability (HA) system, when the "heartbeat" link between the two nodes breaks, what used to be one coordinated HA system splits into two independent nodes. Having lost contact, each assumes the other has failed, and the HA software on both nodes then fights over the shared resources and the application services, like a patient with a split brain. The consequences are serious: either the shared resources get carved up and the service comes up on neither side, or the service comes up on both sides and both read and write the shared storage concurrently, corrupting data (a classic example is a database's online redo logs getting corrupted).

The commonly agreed countermeasures against HA split-brain are roughly these:

Add redundant heartbeat links, e.g. two separate lines (making the heartbeat itself HA), to reduce the probability of split-brain;
Enable disk locking: the serving side locks the shared disk so that, when split-brain occurs, the other side simply cannot take it away. Disk locking has a real problem, though: if the side holding the shared disk never releases the lock, the other side can never obtain it. In practice, if the serving node suddenly crashes, it cannot run the unlock command, and the standby can never take over the shared resources and services. Hence the "smart" lock: the serving side engages the disk lock only when all heartbeat links are down (it cannot see its peer at all), and leaves the disk unlocked the rest of the time.
Set up an arbitration mechanism, e.g. a reference IP (such as the gateway). When the heartbeat links are completely down, both nodes ping the reference IP; the one that cannot reach it concludes the break is on its own side. Since its network path is broken for both heartbeat and service traffic, starting (or keeping) the application would be pointless, so it gives up the contest and lets the node that can reach the reference IP run the service. To be safer still, the node that cannot ping the reference IP can simply reboot itself, releasing any shared resources it may still hold.
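The ping-arbitration idea in the last bullet can be sketched as a small script (a minimal sketch, not from the original article; the reference IP 192.168.222.2 and the commented-out stop command are assumptions to adapt for your network):

```shell
#!/bin/bash
# Split-brain arbitration sketch: when the heartbeat is lost, ping a
# reference IP (usually the gateway). If we cannot reach it, the break
# is on our side, so we should step down rather than fight for the VIP.
REF_IP=${REF_IP:-192.168.222.2}        # assumed gateway; change for your network
PING_CMD=${PING_CMD:-"ping -c 2 -W 1"} # probe command, overridable for testing

arbitrate() {
    if $PING_CMD "$1" >/dev/null 2>&1; then
        echo "reachable"               # our uplink works: keep serving
    else
        echo "unreachable"             # our uplink is down: give up the VIP
        # systemctl stop keepalived    # or reboot, in a real deployment
    fi
}

arbitrate "$REF_IP"
```

On a real pair this would run when heartbeats stop, so that only the node that can still reach the gateway keeps (or claims) the VIP.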

Causes of split-brain

Generally, split-brain happens for one of the following reasons:

The heartbeat link between the HA pair fails, preventing normal communication:
the heartbeat cable itself is faulty (broken, aged);
the NIC or its driver is faulty, or the IP is misconfigured or conflicting (direct NIC-to-NIC links);
a device on the heartbeat path fails (NIC or switch);
the arbitration machine fails (when an arbitration scheme is used).

An iptables firewall on the HA servers blocks the heartbeat traffic.

The heartbeat NIC address or related settings are misconfigured, so heartbeats fail to be sent.

Other misconfiguration, such as mismatched heartbeat methods, heartbeat broadcast conflicts, or software bugs.

Note:
In Keepalived, if the two ends of the same VRRP instance configure different virtual_router_id values, split-brain will occur as well.
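To illustrate the note above, here is a minimal sketch of the relevant keepalived.conf fragment (interface, priority, and auth values are placeholders based on this article's setup, not the author's exact file); the marked line must be identical on both nodes of the pair:

```
vrrp_instance VI_1 {
    state MASTER              # BACKUP on the standby node
    interface ens33
    virtual_router_id 51      # MUST match on master and backup
    priority 100              # lower on the standby, e.g. 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111        # placeholder; same on both nodes
    }
    virtual_ipaddress {
        192.168.222.133
    }
}
```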

Common solutions to split-brain

In production, we can guard against split-brain from several angles:

Connect the nodes with both a serial cable and an Ethernet cable, i.e. two heartbeat links at once, so that if one link fails the other still carries heartbeats.
When split-brain is detected, forcibly power off one node (this needs special hardware support, such as STONITH fencing devices): when the standby stops receiving heartbeats, it sends a shutdown command over a separate channel to cut the master's power.
Set up monitoring and alerting for split-brain (email, SMS, on-call duty) so that a human can step in and arbitrate as soon as the problem occurs, limiting the damage. Baidu's alert SMS, for instance, distinguishes uplink from downlink: the alert reaches the admin's phone, and the admin can reply with a digit or a short string that goes back to the server, which then handles the fault automatically according to the instruction, shortening the time to resolution.

Of course, when implementing an HA solution, judge from actual business requirements whether such losses are tolerable. For ordinary website workloads they usually are.

Monitoring for split-brain

Split-brain monitoring should run on the backup server, implemented as a custom Zabbix check.
What do we monitor? Whether the VIP address is present on the backup.

The VIP appears on the backup in two situations:

split-brain has occurred;
a normal master-to-backup failover.
The monitor therefore only flags the possibility of split-brain; it cannot prove it, because a normal failover also moves the VIP to the backup.

The monitoring script:

[root@backup ~]# mkdir -p /scripts && cd /scripts
[root@backup scripts]# vim check_keepalived.sh
#!/bin/bash

if [ `ip a show ens33 |grep 192.168.222.133|wc -l` -ne 0 ]
then
    echo "keepalived is error!"
else
    echo "keepalived is OK !"
fi

When writing the script, change the interface name to your own NIC and the VIP to your own VIP; finally, don't forget to make the script executable and to change the owner and group of the /scripts directory to zabbix.
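As noted above, the VIP appearing on the backup is only a hint. A variant check (a sketch, not from the original article; the master's IP comes from the environment table below, and using ping reachability as a liveness test is an assumption) reports 1 only when the backup holds the VIP while the master is still reachable, which is closer to a true split-brain signature:

```shell
#!/bin/bash
# Report 1 (likely split-brain) only when this backup holds the VIP AND
# the master still answers; the VIP being here with the master down is
# just a normal failover and reports 0.
VIP=${VIP:-192.168.222.133}
MASTER=${MASTER:-192.168.222.138}
IFACE=${IFACE:-ens33}

has_vip()      { ip a show "$IFACE" 2>/dev/null | grep -q "$VIP"; }
master_alive() { ping -c 2 -W 1 "$MASTER" >/dev/null 2>&1; }

check_split_brain() {
    if has_vip && master_alive; then
        echo 1    # both nodes hold the VIP: likely split-brain
    else
        echo 0    # no VIP here, or master is down (normal failover)
    fi
}

check_split_brain
```

Like the original script, it prints a single 0/1 value, so it can be wired into Zabbix as a UserParameter in the same way.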

Environment

Host    Installed services                  IP
master  keepalived, nginx                   192.168.222.138
backup  keepalived, nginx, zabbix agent     192.168.222.139
zabbix  zabbix server                       192.168.222.250

VIP:192.168.222.133
For installing and operating Zabbix, see my earlier posts 监控服务zabbix部署, zabbix的基础使用 and zabbix监控详解, which cover the installation in detail.

Install the Zabbix agent on the backup host, and the Zabbix server on 192.168.222.250 for managing the monitoring through the web UI.

The states the monitor should catch:

Normally, nginx and keepalived are running on master, while backup runs keepalived with nginx stopped.
When master fails, backup takes over the VIP (via keepalived's health-check script).
When split-brain occurs, both master and backup hold the VIP.
Write the monitoring script
On the backup host, i.e. the Zabbix agent side:

[root@backup ~]# cd /scripts/
[root@backup scripts]# ls
check_nginx.sh  notify.sh
[root@backup scripts]# vim check_keepalived.sh 
[root@backup scripts]# cat check_keepalived.sh 
#!/bin/bash

if [ `ip a show ens33 |grep 192.168.222.133|wc -l` -ne 0 ]
then
    echo "1"   # abnormal: the VIP is present on this backup
else 
    echo "0"   # normal: no VIP here
fi
[root@backup scripts]# chmod +x check_keepalived.sh 
[root@backup scripts]# ls
check_keepalived.sh  check_nginx.sh  notify.sh
[root@backup scripts]# chown -R zabbix.zabbix /scripts/
[root@backup scripts]# ll
total 12
-rwxr-xr-x. 1 zabbix zabbix 148 Oct  9 21:05 check_keepalived.sh
-rwxr-xr-x. 1 zabbix zabbix 142 Oct  8 23:39 check_nginx.sh
-rwxr-xr-x. 1 zabbix zabbix 383 Oct  8 23:36 notify.sh
[root@backup scripts]# systemctl stop nginx.service 
[root@backup scripts]# ss -antl
State      Recv-Q     Send-Q         Local Address:Port            Peer Address:Port     Process     
LISTEN     0          128                  0.0.0.0:22                   0.0.0.0:*                    
LISTEN     0          128                  0.0.0.0:10050                0.0.0.0:*                    
LISTEN     0          128                     [::]:22                      [::]:*                    
[root@backup scripts]# ./check_keepalived.sh 
0  
//test the script
//normally, nginx and keepalived run on master while backup runs keepalived with nginx stopped
[root@backup scripts]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 0029:31f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever

Modify the Zabbix agent configuration file on backup

[root@backup scripts]# cd
[root@backup ~]# cd /usr/local/etc/
[root@backup etc]# pwd
/usr/local/etc
[root@backup etc]# vim zabbix_agentd.conf
UserParameter=check_keepalived,/bin/bash /scripts/check_keepalived.sh       //add this line
UnsafeUserParameters=1       //modify this line
[root@backup ~]# pkill zabbix_agentd 
[root@backup ~]# zabbix_agentd 
//restart the zabbix agent

Test from the Zabbix server

[root@zabbix ~]# ss -antl
State      Recv-Q     Send-Q         Local Address:Port            Peer Address:Port     Process     
LISTEN     0          128                  0.0.0.0:80                   0.0.0.0:*                    
LISTEN     0          128                  0.0.0.0:22                   0.0.0.0:*                    
LISTEN     0          128                  0.0.0.0:10050                0.0.0.0:*                    
LISTEN     0          128                  0.0.0.0:10051                0.0.0.0:*                    
LISTEN     0          128                127.0.0.1:9000                 0.0.0.0:*                    
LISTEN     0          128                     [::]:22                      [::]:*                    
LISTEN     0          70                         *:33060                      *:*                    
LISTEN     0          128                        *:3306                       *:*                   
[root@zabbix ~]#  zabbix_get -s 192.168.222.139 -k check_keepalived
0

Check the state of master

[root@master ~]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:111                 0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:80                  0.0.0.0:*                    
LISTEN     0          32              192.168.122.1:53                  0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                      [::]:111                    [::]:*                    
LISTEN     0          128                      [::]:80                     [::]:*                    
LISTEN     0          128                      [::]:22                     [::]:*                    
[root@master ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff

Create the monitored host

9b2904a4-a3fd-11ef-93f3-92fbcf53809c.png

9b334522-a3fd-11ef-93f3-92fbcf53809c.png

9b4b9762-a3fd-11ef-93f3-92fbcf53809c.png

9b56984c-a3fd-11ef-93f3-92fbcf53809c.png

9b61de14-a3fd-11ef-93f3-92fbcf53809c.png

9b7cd016-a3fd-11ef-93f3-92fbcf53809c.png

9b9682e0-a3fd-11ef-93f3-92fbcf53809c.png

9ba383dc-a3fd-11ef-93f3-92fbcf53809c.png

9bbc6c94-a3fd-11ef-93f3-92fbcf53809c.png


Add the monitoring item

9beeeb6a-a3fd-11ef-93f3-92fbcf53809c.png

9c15c6ea-a3fd-11ef-93f3-92fbcf53809c.png

9c21524e-a3fd-11ef-93f3-92fbcf53809c.png


View the data

9c37fbfc-a3fd-11ef-93f3-92fbcf53809c.png

9c531630-a3fd-11ef-93f3-92fbcf53809c.png


Add a trigger

9c644bd0-a3fd-11ef-93f3-92fbcf53809c.png

9c74ef9e-a3fd-11ef-93f3-92fbcf53809c.png

9c8d75aa-a3fd-11ef-93f3-92fbcf53809c.png


Test
Stop nginx on master and keep keepalived running; on backup, start both nginx and keepalived.
Failover
master

[root@master ~]# systemctl stop nginx.service 
[root@master ~]# systemctl restart keepalived.service 

backup

[root@backup ~]# systemctl start nginx
[root@backup ~]# systemctl restart keepalived.service 

Check the state

master:
[root@master ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
backup:
[root@backup ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever

Check the triggered alert

9c97beca-a3fd-11ef-93f3-92fbcf53809c.png


Restart nginx and keepalived on master

[root@master ~]# systemctl restart nginx.service 
[root@master ~]# systemctl restart keepalived.service 
[root@master ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff

//no alert message at this point

9ca85fc8-a3fd-11ef-93f3-92fbcf53809c.png


Simulate split-brain
Edit the keepalived configuration on master and change virtual_router_id so that it differs from the value on backup.
master

[root@master ~]# vim /etc/keepalived/keepalived.conf
virtual_router_id 55    //I changed 51 to 55 here
[root@master ~]# systemctl restart keepalived.service 
//restart keepalived
[root@master ~]# ip a    //the VIP is still present
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
[root@master ~]# ss -antl    //nginx is still listening
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:111                 0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:80                  0.0.0.0:*                    
LISTEN     0          32              192.168.122.1:53                  0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                      [::]:111                    [::]:*                    
LISTEN     0          128                      [::]:80                     [::]:*                    
LISTEN     0          128                      [::]:22                     [::]:*                  

backup

[root@backup ~]# ip a   //the VIP is present here as well
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever
[root@backup ~]# ss -antl   //nginx is listening here too
State      Recv-Q     Send-Q         Local Address:Port            Peer Address:Port     Process     
LISTEN     0          128                  0.0.0.0:22                   0.0.0.0:*                    
LISTEN     0          128                  0.0.0.0:10050                0.0.0.0:*                    
LISTEN     0          128                  0.0.0.0:80                   0.0.0.0:*                    
LISTEN     0          128                     [::]:22                      [::]:*                    
LISTEN     0          128                     [::]:80                      [::]:*                   

An alert message appears

9cb488c0-a3fd-11ef-93f3-92fbcf53809c.png

Link: https://www.cnblogs.com/tushanbu/p/16770767.html


Original title: 确保网站无缝运行:Keepalived 高可用与Nginx 集成实战

Source: WeChat official account 马哥Linux运维 (magedu-Linux)
