By default, a CephFS file system runs with only one active MDS daemon. On large systems, multiple active MDS daemons can be configured to scale metadata performance; they then share the metadata load between them.
To enable multiple active MDS daemons, you only need to change the file system's max_mds setting. Cluster status before the change:
# ceph -s
  cluster:
    id:     94e1228c-caba-4eb5-af86-259876a44c28
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum test1,test2,test3
    mgr: test1(active), standbys: test3, test2
    mds: cephfs-2/2/1 up {0=test2=up:active,1=test3=up:active}, 1 up:standby
    osd: 18 osds: 18 up, 18 in
    rgw: 3 daemons active

  data:
    pools:   8 pools, 400 pgs
    objects: 305 objects, 3.04MiB
    usage:   18.4GiB used, 7.84TiB / 7.86TiB avail
    pgs:     400 active+clean
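The mds line in `ceph -s` packs several counters together; my reading of the Luminous-era format is `<fs_name>-<up>/<in>/<max_mds>`, so `cephfs-2/2/1` means 2 ranks up, 2 in, and max_mds set to 1. A throwaway one-liner (not a Ceph tool, just an illustration of that assumed field order) that splits the line into named fields:

```shell
# Split the "mds:" status line into named fields.
# Assumed field order: <fs_name>-<up>/<in>/<max_mds> (my reading of the format).
line='cephfs-2/2/1 up {0=test2=up:active,1=test3=up:active}, 1 up:standby'
echo "$line" | sed -E 's#^([^ ]+)-([0-9]+)/([0-9]+)/([0-9]+) up.*#fs=\1 up=\2 in=\3 max_mds=\4#'
# -> fs=cephfs up=2 in=2 max_mds=1
```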
1. Configure multiple active MDS daemons
# ceph mds set max_mds 2
# ceph -s
  cluster:
    id:     94e1228c-caba-4eb5-af86-259876a44c28
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum test1,test2,test3
    mgr: test1(active), standbys: test3, test2
    mds: cephfs-2/2/2 up {0=test2=up:active,1=test3=up:active}, 1 up:standby
    osd: 18 osds: 18 up, 18 in
    rgw: 3 daemons active

  data:
    pools:   8 pools, 400 pgs
    objects: 305 objects, 3.04MiB
    usage:   18.4GiB used, 7.84TiB / 7.86TiB avail
    pgs:     400 active+clean
2. Restore a single active MDS
# ceph mds set max_mds 1
# ceph mds deactivate 1
# ceph -s
  cluster:
    id:     94e1228c-caba-4eb5-af86-259876a44c28
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum test1,test2,test3
    mgr: test1(active), standbys: test3, test2
    mds: cephfs-1/1/1 up {0=test2=up:active}, 2 up:standby
    osd: 18 osds: 18 up, 18 in
    rgw: 3 daemons active

  data:
    pools:   8 pools, 400 pgs
    objects: 305 objects, 3.04MiB
    usage:   18.4GiB used, 7.84TiB / 7.86TiB avail
    pgs:     400 active+clean

  io:
    client: 31.7KiB/s rd, 170B/s wr, 31op/s rd, 21op/s wr
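On newer Ceph releases the commands above have changed: `ceph mds set max_mds` was replaced by a per-filesystem `ceph fs set` subcommand, and from Nautilus onward `ceph mds deactivate` was removed because lowering max_mds deactivates the extra ranks automatically. A rough modern equivalent of both steps (assuming the file system is named `cephfs`, as in the output above; run against your own cluster, this is an admin-command sketch, not a script to paste blindly):

```shell
# Scale out: allow two active MDS daemons (newer per-fs syntax; fs name assumed "cephfs")
ceph fs set cephfs max_mds 2

# Scale back in: on Nautilus and later, lowering max_mds is enough --
# the surplus rank is stopped automatically, no explicit deactivate step
ceph fs set cephfs max_mds 1

# Verify the setting and the resulting MDS map
ceph fs get cephfs | grep max_mds
ceph mds stat
```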
