HA with the RHCS Cluster Suite on CentOS 6.3


I've prepared three machines for this HA setup:
Storage.phey.cc, 192.168.122.2: the iSCSI server, referred to below as Storage
Server1.phey.cc, 192.168.122.3: Real Server 1, referred to as Server1
Server2.phey.cc, 192.168.122.4: Real Server 2, referred to as Server2


About the storage: Storage exports the whole of /dev/sda. These are KVM guests, so the OS is installed on /dev/vda and /dev/sda is the second disk; don't let the device name mislead you.


1. Prepare the shared storage
On Storage:
Install the iSCSI target (server-side) package
[root@Storage ~]# yum install scsi-target-utils -y
[root@Storage ~]#
Configure the exported storage. As the instructor noted, if you don't add an initiator-address line to restrict which client IPs may access the target, any IP is allowed to connect.
[root@Storage ~]# cat >> /etc/tgt/targets.conf
<target iqn.2013-09-02.cc.phey.Storage>
        backing-store /dev/sda
</target>
[root@Storage ~]#
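If you do want to lock the target down, a minimal sketch of the same stanza with initiator-address restrictions added (the two addresses are simply this lab's node IPs) would be:
<target iqn.2013-09-02.cc.phey.Storage>
        backing-store /dev/sda
        initiator-address 192.168.122.3
        initiator-address 192.168.122.4
</target>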
Start the iSCSI target service
[root@Storage ~]# service tgtd start
Starting SCSI target daemon:                               [  OK  ]
[root@Storage ~]#
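To have the target come back automatically after a reboot (not shown in the original transcript, but you'll usually want it), enable the service at boot:
[root@Storage ~]# chkconfig tgtd on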
Check the details of the exported storage
[root@Storage ~]# tgt-admin -s
Target 1: iqn.2013-09-02.cc.phey.Storage
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 34360 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/sda
            Backing store flags:
    Account information:
    ACL information:
        ALL
[root@Storage ~]#
On Server1:
Install the iSCSI initiator (client-side) package
[root@Server1 ~]# yum install iscsi-initiator-utils -y
[root@Server1 ~]#
Discover and log in to the target
[root@Server1 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.122.2
Starting iscsid:                                           [  OK  ]
192.168.122.2:3260,1 iqn.2013-09-02.cc.phey.Storage
[root@Server1 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2013-09-02.cc.phey.Storage, portal: 192.168.122.2,3260] (multiple)
Login to [iface: default, target: iqn.2013-09-02.cc.phey.Storage, portal: 192.168.122.2,3260] successful.
[root@Server1 ~]#
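(For reference, and not part of the original steps: if you later need to detach a node from this target cleanly, for example when tearing the lab down, the reverse of the login above would look roughly like this.)
[root@Server1 ~]# iscsiadm -m node -u
[root@Server1 ~]# iscsiadm -m node -o delete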
A new /dev/sda now shows up
[root@Server1 ~]# fdisk -l
Disk /dev/vda: 34.4 GB, 34359738368 bytes
16 heads, 63 sectors/track, 66576 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004c425


   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *           3        1018      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/vda2            1018       66577    33041408   8e  Linux LVM
Partition 2 does not end on cylinder boundary.


Disk /dev/mapper/vg_server1-lv_root: 31.7 GB, 31717326848 bytes
255 heads, 63 sectors/track, 3856 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_server1-lv_swap: 2113 MB, 2113929216 bytes
255 heads, 63 sectors/track, 257 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sda: 34.4 GB, 34359738368 bytes
64 heads, 32 sectors/track, 32768 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

[root@Server1 ~]#
Partition the disk and put a file on it so the service can be verified later.
Create a partition
[root@Server1 ~]# fdisk /dev/sda
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x6b52f64f.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): p
Disk /dev/sda: 34.4 GB, 34359738368 bytes
64 heads, 32 sectors/track, 32768 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x6b52f64f
   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-32768, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-32768, default 32768):
Using default value 32768
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@Server1 ~]# partprobe
Warning: WARNING: the kernel failed to re-read the partition table on /dev/vda (Device or resource busy).  As a result, it may not reflect all of your changes until after reboot.
[root@Server1 ~]#
Create an ext4 filesystem. As the instructor warned, ext4 cannot be used on RHEL 5; anyone who tries is in for a bad time.
[root@Server1 ~]# mkfs.ext4 /dev/sda1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
2097152 inodes, 8388604 blocks
419430 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
256 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@Server1 ~]#
Mount it, create a test file, then unmount. (Remember that ext4 is not a cluster filesystem, so only one node should have it mounted at any given time.)
[root@Server1 ~]# mount /dev/sda1 /mnt
[root@Server1 ~]# echo 'Hello CC!' > /mnt/index.html
[root@Server1 ~]# umount /mnt
[root@Server1 ~]#
On Server2:


Install the iSCSI initiator package
[root@Server2 ~]# yum install iscsi-initiator-utils -y
[root@Server2 ~]#

Discover and log in to the target. The partition created a moment ago is already visible; afterwards refresh the kernel partition table with partprobe, otherwise the partition won't mount.
[root@Server2 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.122.2
Starting iscsid:                                           [  OK  ]
192.168.122.2:3260,1 iqn.2013-09-02.cc.phey.Storage
[root@Server2 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2013-09-02.cc.phey.Storage, portal: 192.168.122.2,3260] (multiple)
Login to [iface: default, target: iqn.2013-09-02.cc.phey.Storage, portal: 192.168.122.2,3260] successful.
[root@Server2 ~]# fdisk -l
Disk /dev/vda: 34.4 GB, 34359738368 bytes
16 heads, 63 sectors/track, 66576 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00080ad4


   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *           3        1018      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/vda2            1018       66577    33041408   8e  Linux LVM
Partition 2 does not end on cylinder boundary.


Disk /dev/mapper/vg_server2-lv_root: 31.7 GB, 31717326848 bytes
255 heads, 63 sectors/track, 3856 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000




Disk /dev/mapper/vg_server2-lv_swap: 2113 MB, 2113929216 bytes
255 heads, 63 sectors/track, 257 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sda: 34.4 GB, 34359738368 bytes
64 heads, 32 sectors/track, 32768 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x6b52f64f


   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1       32768    33554416   83  Linux
[root@Server2 ~]#


[root@Server2 ~]# partprobe
Warning: WARNING: the kernel failed to re-read the partition table on /dev/vda (Device or resource busy).  As a result, it may not reflect all of your changes until after reboot.
[root@Server2 ~]#
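As a quick sanity check (my own addition, not in the original transcript), you could mount the partition on Server2 too and confirm the test file is there, unmounting again before the cluster takes over the filesystem:
[root@Server2 ~]# mount /dev/sda1 /mnt
[root@Server2 ~]# cat /mnt/index.html
Hello CC!
[root@Server2 ~]# umount /mnt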

--------------------------------------
Install the RHCS suite:


On Storage:


Install the luci package (the web-based cluster management console)
[root@Storage ~]# yum install luci -y
[root@Storage ~]#

Start luci
[root@Storage ~]# service luci start
Adding following auto-detected host IDs (IP addresses/domain names), corresponding to `Storage.CentOS63' address, to the configuration of self-managed certificate `/var/lib/luci/etc/cacert.config' (you can change them by editing `/var/lib/luci/etc/cacert.config', removing the generated certificate `/var/lib/luci/certs/host.pem' and restarting luci):
        (none suitable found, you can still do it manually as mentioned above)


Generating a 2048 bit RSA private key
writing new private key to '/var/lib/luci/certs/host.pem'
Starting saslauthd:                                        [  OK  ]
Start luci...                                              [  OK  ]
Point your web browser to https://Storage.CentOS63:8084 (or equivalent) to access luci
[root@Storage ~]#
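A side note, not from the original post: if you want luci itself to survive a reboot of the management host, enable it at boot as well:
[root@Storage ~]# chkconfig luci on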

Point a browser at that URL and you'll see the login page; the credentials are system accounts, so root's username and password will do.
(Screenshots: 1.png, 2.png)
Set up local name resolution
[root@Storage ~]# cat >> /etc/hosts
192.168.122.3 Server1 Server1.phey.cc
192.168.122.4 Server2 Server2.phey.cc
[root@Storage ~]#

On Server1:


Set up local name resolution:
[root@Server1 ~]# cat >> /etc/hosts
192.168.122.3 Server1 Server1.phey.cc
192.168.122.4 Server2 Server2.phey.cc
[root@Server1 ~]#

Install the required packages:
cman: the distributed cluster manager. It runs on every node and handles cluster membership, messaging, and notifications.
ricci: the agent that luci talks to; luci communicates with each node through ricci.
rgmanager: RHCS manages cluster services through rgmanager, which monitors, starts, and stops cluster applications and relocates a service from a failed node to a healthy one.
[root@Server1 ~]# yum install cman ricci rgmanager -y
[root@Server1 ~]#
Start the ricci service
[root@Server1 ~]# service ricci start
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                                            [  OK  ]
[root@Server1 ~]#
Set a password for the ricci user (luci will ask for it when adding the node)
[root@Server1 ~]# passwd ricci
Changing password for user ricci.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@Server1 ~]#
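One housekeeping step the original transcript skips: for the node to rejoin the cluster after a reboot, the cluster daemons should be enabled at boot, roughly:
[root@Server1 ~]# chkconfig ricci on
[root@Server1 ~]# chkconfig cman on
[root@Server1 ~]# chkconfig rgmanager on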
On Server2:
Set up local name resolution
[root@Server2 ~]# cat >> /etc/hosts
192.168.122.3 Server1 Server1.phey.cc
192.168.122.4 Server2 Server2.phey.cc
[root@Server2 ~]#
Install the cluster node packages
[root@Server2 ~]# yum install cman ricci rgmanager -y
[root@Server2 ~]#
Start the ricci service
[root@Server2 ~]# service ricci start
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                                            [  OK  ]
[root@Server2 ~]#
Set a password for the ricci user
[root@Server2 ~]# passwd ricci
Changing password for user ricci.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@Server2 ~]#
----------------------------------------
Prepare the service, in this case a web service provided by Apache
On Server1:
Install the httpd package
[root@Server1 ~]# yum install httpd -y
[root@Server1 ~]#
On Server2:
Install the httpd package
[root@Server2 ~]# yum install httpd -y
[root@Server2 ~]#
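Note that httpd should not be enabled to start at boot on either node; rgmanager will start and stop it as part of the cluster service. If it happens to be enabled, something along these lines turns it off:
[root@Server1 ~]# chkconfig httpd off
[root@Server2 ~]# chkconfig httpd off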

----------------------------------------
Create the cluster

Create a cluster, filling in the node hostnames and their ricci passwords
Manage Clusters --> Create

(Screenshots: 3.png, 4.png, 5.png)
Add a failover domain
Manage Clusters --> ccCluster --> Failover Domains --> Add

(Screenshots: 6.png, 7.png)
Add resources; things like an IP address, a service, and a filesystem are all resources
Manage Clusters --> ccCluster --> Resources --> Add

(Screenshots: 8.png, 9.png, 10.png)
Add a service group to the cluster
Manage Clusters --> ccCluster --> Service Groups --> Add

(Screenshots: 11.png, 12.png, 13.png, 14.png, 15.png)
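Behind the scenes, luci writes all of this into /etc/cluster/cluster.conf on every node. A rough sketch of what the resulting file could look like for this setup (the resource and domain names are illustrative guesses; only the IP, device, mount point, node names, and service name come from this post):
<?xml version="1.0"?>
<cluster config_version="2" name="ccCluster">
  <clusternodes>
    <clusternode name="Server1.phey.cc" nodeid="1"/>
    <clusternode name="Server2.phey.cc" nodeid="2"/>
  </clusternodes>
  <rm>
    <failoverdomains>
      <failoverdomain name="ccDomain" ordered="1" restricted="1">
        <failoverdomainnode name="Server1.phey.cc" priority="1"/>
        <failoverdomainnode name="Server2.phey.cc" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <ip address="192.168.122.5" monitor_link="on"/>
      <fs device="/dev/sda1" fstype="ext4" mountpoint="/var/www/html" name="cc-fs"/>
      <apache config_file="conf/httpd.conf" name="cc-apache" server_root="/etc/httpd"/>
    </resources>
    <service domain="ccDomain" name="cc-httpd" recovery="relocate">
      <ip ref="192.168.122.5"/>
      <fs ref="cc-fs"/>
      <apache ref="cc-apache"/>
    </service>
  </rm>
</cluster>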
At first the service is shown as failed
(Screenshot: 16.png)
Refresh the page and it's fine; you can see the service running on Server1
(Screenshot: 17.png)
Test it
(Screenshot: 18.png)
Check the cluster status
[root@Server1 ~]# clustat -l
Cluster Status for ccCluster @ Tue Sep  3 04:46:07 2013
Member Status: Quorate


Member Name                           ID   Status
------ ----                           ---- ------
Server1.phey.cc                           1 Online, Local, rgmanager
Server2.phey.cc                           2 Online, rgmanager


Service Information
------- -----------


Service Name      : service:cc-httpd
  Current State   : started (112)
  Flags           : none (0)
  Owner           : Server1.phey.cc
  Last Owner      : Server1.phey.cc
  Last Transition : Tue Sep  3 04:44:06 2013


[root@Server1 ~]#
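Besides letting the cluster fail the service over on its own, you can also move it by hand with clusvcadm; a quick sketch, not from the original post (the node name is just one of ours):
[root@Server1 ~]# clusvcadm -r cc-httpd -m Server2.phey.cc
clusvcadm -d cc-httpd stops the service, and clusvcadm -e cc-httpd starts it again.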
Kill the httpd processes on Server1 directly
[root@Server1 ~]# pgrep httpd | xargs kill -9
[root@Server1 ~]#

The service is unaffected
[root@Server1 ~]# curl http://192.168.122.5
Hello CC!
[root@Server1 ~]#

Looking at the disks, /dev/sda1 has already been unmounted on Server1
[root@Server1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_server1-lv_root
                       30G  1.8G   26G   7% /
tmpfs                 499M   26M  474M   6% /dev/shm
/dev/vda1             485M   33M  428M   8% /boot
[root@Server1 ~]#
On Server2, httpd is running and the shared disk is mounted
[root@Server2 ~]# service httpd status
httpd (pid  7237) is running...
[root@Server2 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_server2-lv_root
                       30G  1.8G   26G   7% /
tmpfs                 499M   26M  474M   6% /dev/shm
/dev/vda1             485M   33M  428M   8% /boot
/dev/sda1              32G  176M   30G   1% /var/www/html
[root@Server2 ~]#
Kill the httpd processes on Server2 directly
[root@Server2 ~]# pgrep httpd | xargs kill -9
[root@Server2 ~]#

The service is running on Server1 again
[root@Server1 ~]# service httpd status
httpd (pid  3435) is running...
[root@Server1 ~]#
With the service running on Server2, shut that node down
[root@Server2 ~]# init 0
[root@Server2 ~]#
Check the status from Server1: Server2 is now Offline and the service is still healthy
[root@Server1 ~]# clustat -l
Cluster Status for ccCluster @ Tue Sep  3 04:57:02 2013
Member Status: Quorate
Member Name                           ID   Status
------ ----                           ---- ------
Server1.phey.cc                           1 Online, Local, rgmanager
Server2.phey.cc                           2 Offline


Service Information
------- -----------


Service Name      : service:cc-httpd
  Current State   : started (112)
  Flags           : none (0)
  Owner           : Server1.phey.cc
  Last Owner      : Server2.phey.cc
  Last Transition : Tue Sep  3 04:56:01 2013

[root@Server1 ~]# curl http://192.168.122.5
Hello CC!
[root@Server1 ~]#