Wednesday, April 12, 2017

ceph disk relocate

1: create bucket
        ceph osd crush add-bucket <bucket-name> <bucket-type>
        ex:
                ceph osd crush add-bucket rep root
                ceph osd crush add-bucket ceph6-rep host

                Available types (in the default CRUSH map): osd, host, chassis, rack, row, pdu, pod, room, datacenter, region, root
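
        A quick way to confirm the type names actually defined in the running cluster is to look at the "types" section of the live CRUSH map, for example:
                ceph osd crush dump | grep -A 30 '"types"'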

2: organize structure (move the new host bucket under the new root).
        ex:
                ceph osd crush move ceph6-rep root=rep
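
        The resulting hierarchy can be verified at any time with the OSD tree; the new root and host show up as empty buckets until OSDs are placed under them:
                ceph osd tree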
3: create crush rule.
        ceph osd crush rule create-simple <rule-name> <root> <failure-domain-type> {firstn|indep}
        ex:
                ceph osd crush rule create-simple qct_reppool rep host
                [selects OSDs that sit underneath root=rep, with host as the failure domain]
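
        Step 4 needs the numeric ruleset id assigned to the new rule; assuming a pre-Luminous release (where pools reference rules by ruleset number), it can be looked up by dumping the rule and reading its "ruleset" field:
                ceph osd crush rule dump qct_reppool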
              
4: bind the pool to the crush rule.
        ex:
                ceph osd pool set reppool crush_ruleset 4
                [4 is the ruleset id of the rule created in step 3]
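
        If the reppool pool does not exist yet it has to be created first, and the binding can be double-checked afterwards; the pg count of 128 below is an arbitrary placeholder, not a value from the original post:
                ceph osd pool create reppool 128
                ceph osd pool get reppool crush_ruleset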

5: update ceph.conf on all osd nodes.
        Please note that there is no need to consolidate all OSD information into one single ceph.conf; each node only needs the entries relevant to its own OSDs.
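
        The post does not show the actual ceph.conf change; below is a minimal sketch of what is typically added when OSDs are pinned to a custom CRUSH hierarchy, assuming the goal is to keep relocated OSDs from re-registering under their default host location at startup:
                [osd]
                # assumption: prevent OSDs from moving themselves back under their
                # hostname-based location when the daemon starts
                osd crush update on start = false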


The figure below shows data being written to the specified OSD drives.
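
        Placement can also be confirmed from the command line by writing a test object and asking the cluster where it maps; the object name testobj and input file test.dat below are hypothetical placeholders:
                rados -p reppool put testobj test.dat
                ceph osd map reppool testobj
                [the listed up/acting OSDs should all sit underneath root=rep]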



A single OSD can also be relocated (or re-weighted) in one step by setting its CRUSH weight and location directly:
        ceph osd crush set osd.2 0.00729 host=ceph5
