2017年5月17日 星期三

storcli tips

for i in $(seq 0 14);do /opt/MegaRAID/storcli/storcli64 /c0/v$i delete; done
for i in $(seq 0 14);do /opt/MegaRAID/storcli/storcli64 /c0 add vd each r0 drives=8:$i Strip=256 cached ra AWB; done
for i in $(seq 0 14);do /opt/MegaRAID/storcli/storcli64 /c0/v$i start init; done
for i in $(seq 0 14);do /opt/MegaRAID/storcli/storcli64 /c0/v$i set pdcache=off; done
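A dry-run variant of the loops above helps sanity-check the generated commands before touching the controller. The storcli path and the 0-14 VD range are taken from the notes; the helper function name is mine:

```shell
#!/bin/sh
# Print the storcli commands instead of executing them, so the full
# sequence can be reviewed first. Pipe the output to sh to actually run it.
STORCLI=/opt/MegaRAID/storcli/storcli64
build_vd_commands() {
  for i in $(seq 0 14); do
    echo "$STORCLI /c0 add vd each r0 drives=8:$i Strip=256 cached ra AWB"
  done
}
build_vd_commands
```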

2017年5月4日 星期四

parse json output via bash

curl -s http://xxxxxxx/xxxx |python -c "import sys, json; print json.load(sys.stdin)['attribute']"
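The print statement above is Python 2 only. A python3-compatible form of the same pattern, demonstrated against a local echo since the endpoint URL is elided in the notes:

```shell
# Same trick with python3 print() syntax; echo stands in for the real curl call.
echo '{"attribute":"up","other":1}' \
  | python3 -c "import sys, json; print(json.load(sys.stdin)['attribute'])"
```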

2017年4月24日 星期一

useful Ceph commands


systemctl restart ceph-mon@$(hostname -s)
ex: systemctl restart ceph-mon@ceph1

systemctl restart ceph-osd@
ex: systemctl restart ceph-osd@1

umount /var/lib/ceph/osd/ceph-<id>

lsblk
ceph-disk activate /dev/sdx1

ceph osd set noout
ceph osd set nodown
ceph osd pool create kido 128
ceph osd pool get kido size
ceph osd pool get kido min_size
ceph osd pool set kido min_size 1
ceph osd pool set kido size 1
ceph osd lspools

rbd create kido/test1 --size 1024 --image-feature layering
rbd -p kido ls
rbd -p kido du

rbd showmapped
rbd map kido/test1
ls /dev/rbd*
rbd unmap /dev/rbd0
systemctl restart rbdmap

mkfs.xfs -i size=1024 -f /dev/rbd0

/etc/fstab --> add the rbd device with the noauto option (use the whole device; do not partition it)
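A minimal /etc/fstab line matching that advice might look like the following; the mount point and device path are assumptions based on the rbd examples above (the /dev/rbd/<pool>/<image> symlink is created by udev when the image is mapped):

```
/dev/rbd/kido/test1  /home/test1  xfs  noauto  0 0
```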

rbd resize kido/test1 --size 2048
xfs_growfs /home/test1
rbd resize kido/test1 --size 1576 --allow-shrink

ceph osd crush add-bucket qoo rack
ceph osd crush add-bucket urpapa host
ceph osd crush move urpapa rack=qoo
ceph osd crush move qoo root=default
ceph osd crush set osd.2 0.00728 host=ceph2 (temporary: the OSD may move back when it restarts and updates its crush location)

============osd removal process=============
ceph osd out osd.<id>
systemctl stop ceph-osd@<id>
ceph -w
ceph osd crush remove osd.<id>
ceph auth del osd.<id>
ceph osd rm osd.<id>
============osd removal process=============
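The removal steps above can be wrapped in a small helper parameterized over the OSD id, shown here as a dry run (drop the echo on each line to execute). This is just a sketch of the note's sequence; the interactive ceph -w watch step is left out:

```shell
#!/bin/sh
# Sketch: print the OSD removal sequence for one OSD id (dry run).
remove_osd() {
  id=$1
  echo "ceph osd out osd.$id"
  echo "systemctl stop ceph-osd@$id"
  echo "ceph osd crush remove osd.$id"
  echo "ceph auth del osd.$id"
  echo "ceph osd rm osd.$id"
}
remove_osd 3
```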


2017年4月23日 星期日

Ceph Calamari API access via curl


Initial account:
calamari-ctl add_user kido --password kido --email kido.idv.tw@gmail.com
calamari-ctl enable_user kido
calamari-ctl assign_role kido superuser

Prepare cookie:

curl -c cookies.txt -k -i -d username=kido -d password=kido https://10.5.15.50:8002/api/v2/auth/login

JSON
curl -b cookies.txt -s -k https://10.5.15.50:8002/api/v2/cluster/ad3f18fa-e58a-4625-ae1a-b6d6bab18de7/cli -X POST --referer "https://10.5.15.50:8002/api/v2/auth/login"  -H "Content-Type: application/json; charset=UTF-8" -H "X-XSRF-TOKEN: oQDbEVTSgNDnDxSFTH6PrFO9hi8ExCDd" -d '{"command":"ceph osd tree"}'

curl -b cookies.txt -s -k https://10.5.15.50:8002/api/v2/cluster/ad3f18fa-e58a-4625-ae1a-b6d6bab18de7/cli -X POST --referer "https://10.5.15.50:8002/api/v2/auth/login" -H "Content-Type: application/json; charset=UTF-8" -H "X-XSRF-TOKEN: oQDbEVTSgNDnDxSFTH6PrFO9hi8ExCDd" -d '{"command":["ceph","osd","tree"]}'

X-WWW-FORM-URLENCODED
curl -b cookies.txt -s -k https://10.5.15.50:8002/api/v2/cluster/ad3f18fa-e58a-4625-ae1a-b6d6bab18de7/cli -X POST --referer "https://10.5.15.50:8002/api/v2/auth/login" -d "csrfmiddlewaretoken=ELR8K6Z83y2jWQJpl1t5K8yOotqwddp7" -d "command=ceph%20osd%20perf" -H "Content-Type: application/x-www-form-urlencoded"

Accessing Ceph via the calamari-lite API must go through https, and the port number has been changed to 8002.

To ignore the certificate check in curl, add -k.
When posting with the default http header (x-www-form-urlencoded), a csrfmiddlewaretoken is needed.
When posting with a JSON header, X-XSRF-TOKEN must be included in the header.
The XSRF token can be found in the cookie file.

http://calamari.readthedocs.io/en/latest/calamari_rest/resources/resources.html#clusterviewset
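The token can be pulled out of the cookie jar instead of copied by hand. This assumes curl stored it under the name XSRF-TOKEN (sixth field of the Netscape cookie file format, value in the seventh); the name is inferred from the X-XSRF-TOKEN header above, so check your cookies.txt:

```shell
#!/bin/sh
# Extract the XSRF token value from curl's cookie jar (Netscape format:
# domain, flag, path, secure, expiry, name, value -- tab separated).
get_xsrf() {
  awk '$6 == "XSRF-TOKEN" {print $7}' "$1"
}
# usage: get_xsrf cookies.txt
```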

2017年4月12日 星期三

ceph disk relocate

1: create buckets.
        ceph osd crush add-bucket <name> <type>
        ex:
                ceph osd crush add-bucket rep root
                ceph osd crush add-bucket ceph6-rep host

        Available default types: root, region, datacenter, room, row, pod, pdu, rack, chassis, host, osd

2: organize the structure.
        ceph osd crush move ceph6-rep root=rep

3: create a crush rule.
        ceph osd crush rule create-simple <rulename> <root> <failure-domain-type> {firstn|indep}
        ex:
                ceph osd crush rule create-simple qct_reppool rep host
                [selects osds underneath root=rep]

4: bind the pool to the crush rule.
        ex:
                ceph osd pool set reppool crush_ruleset 4

5: update ceph.conf on all osd nodes.
        Note: there is no need to compile all osd information into one single ceph.conf.
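Step 5 is presumably the usual guard against OSDs re-registering their default crush location at daemon startup, which would undo the manual placement; a minimal ceph.conf fragment, assuming that is the intent of the note:

```ini
[osd]
# prevent OSDs from moving themselves back under their default
# host bucket when the daemon starts
osd crush update on start = false
```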


(figure: data being written to the specified osd drives)



ceph osd crush set osd.2 0.00729 host=ceph5

2017年4月11日 星期二

Ceph OSD clone script

#!/bin/sh
OSD_TYPE_CODE="4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D"
JOURNAL_TYPE_CODE="45B0969E-9B03-4F30-B4C6-B4B80CEFF106"
mk_xfs_option="-i size=2048 -f "
journal_size=512m

dest_drive=/dev/sdd
backup_directory=/var/lib/ceph/osd/ceph-2
osd_uuid=$(cat $backup_directory/fsid)
osd_id=$(cat $backup_directory/whoami)
osd_partition=$(blkid |grep "$osd_uuid"|cut -d':' -f1)
journal_device=$(readlink -f $backup_directory/journal)
echo OSD_UUID: $osd_uuid
echo OSD_ID: $osd_id
if [ $(mount |grep "${dest_drive}1"|grep "$backup_directory"|wc -l) -gt 0 ];then
  echo "Destination drive is already mounted at $backup_directory; nothing to do."
  exit 1
fi
sgdisk -o $dest_drive
sgdisk --new=2:-${journal_size}: --change-name=2:'ceph journal'  --mbrtogpt --typecode=2:$JOURNAL_TYPE_CODE -- ${dest_drive}
sgdisk --new=1:: --change-name=1:'ceph data'  --mbrtogpt --typecode=1:$OSD_TYPE_CODE --partition-guid=1:$osd_uuid -- ${dest_drive}
partprobe ${dest_drive}
mkfs.xfs $mk_xfs_option ${dest_drive}1
systemctl stop ceph-osd@$osd_id
ceph-osd -i $osd_id --flush-journal
mount_device=$(cat /proc/mounts |grep "$backup_directory"|awk '{print $1}')
mount_option=$(cat /proc/mounts |grep "$backup_directory"|awk '{print $4}')
umount $backup_directory
mount -o ${mount_option} ${dest_drive}1 $backup_directory
mount -o ${mount_option} ${mount_device} /mnt
ceph-osd -i $osd_id --mkfs --osd-uuid $osd_uuid --osd-journal ${dest_drive}2
rm -f ${backup_directory}/journal
journal_partuuid=$(blkid ${dest_drive}2 |grep -E -o "PARTUUID=.*"|cut -d'"' -f2)
ln -s /dev/disk/by-partuuid/${journal_partuuid} ${backup_directory}/journal
ceph-osd -i $osd_id --mkjournal
mv ${backup_directory}/journal ${backup_directory}/journal.orig
(cd /mnt && tar cf - .) | (cd ${backup_directory} && tar xBf -)
rm -f ${backup_directory}/journal
mv ${backup_directory}/journal.orig ${backup_directory}/journal
umount /mnt
systemctl start ceph-osd@$osd_id

2017年4月10日 星期一

find OSD and Journal

#!/bin/sh
# https://github.com/ceph/ceph/blob/firefly/src/ceph-disk#L78
OSD_TYPE_CODE="4fbd7e29-9d25-41b8-afd0-062c0ceff05d"
JOURNAL_TYPE_CODE="45b0969e-9b03-4f30-b4c6-b4b80ceff106"
OSD_TYPE_CODE=$(echo $OSD_TYPE_CODE|sed 's/\-//g'|awk '{print toupper($0)}')
JOURNAL_TYPE_CODE=$(echo $JOURNAL_TYPE_CODE|sed 's/\-//g'|awk '{print toupper($0)}')
for disk in $(ls /dev/*|grep -E -o "sd[a-z]{1,}[0-9]{1,}|nvme[0-9]{1,}n1p[0-9]{1,}");do
  raw_disk=$(echo $disk|grep -E -o "sd[a-z]{1,}|nvme[0-9]{1,}n1p")
  part_id=${disk:${#raw_disk}}
  if [ ${#part_id} -gt 0 ];then
    type_code=$(sgdisk -i $part_id /dev/$raw_disk|grep "GUID code"|grep Unknown|awk '{print $4}'|sed 's/\-//g')
    type_code=$(echo $type_code|sed 's/\-//g'|awk '{print toupper($0)}')
    if [ ${#type_code} -gt 0 ];then
      case "$type_code" in
        "$OSD_TYPE_CODE")
          echo /dev/$disk OSD PARTUUID $(blkid /dev/$disk |grep -E -o "PARTUUID=.*"|cut -d'"' -f2|sed 's/\-//g')
          ;;
        "$JOURNAL_TYPE_CODE")
          osd_part_uuid=$(hexdump -s 0x10 -n 16 -e '16 1 "%02x" ' /dev/$disk)
          echo /dev/$disk journal for OSD_PARTUUID $osd_part_uuid
          ;;
        *)
          echo "Undefined type code " $disk
          ;;
      esac
    fi
  fi
done
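The strip-dashes/uppercase normalization the script applies to both type codes can be checked in isolation:

```shell
#!/bin/sh
# Normalize a GPT type GUID the same way the script does:
# remove dashes, then uppercase.
normalize_guid() {
  echo "$1" | sed 's/\-//g' | awk '{print toupper($0)}'
}
normalize_guid "4fbd7e29-9d25-41b8-afd0-062c0ceff05d"
# -> 4FBD7E299D2541B8AFD0062C0CEFF05D
```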

2017年4月9日 星期日

OSD data migration

1: prepare the new disk (cat SRC_DRIVE/fsid)
    the partition UUID needs to be exactly the same as on the source disk
    sgdisk --new=1:: --change-name=1:'ceph data'  --mbrtogpt --typecode=1:$PTYPE_UUID --partition-guid=1:$osd_uuid -- /dev/sdah
    mkfs.xfs -f -i size=2048 /dev/sdah1
2: enter Ceph maintenance mode
    ceph osd set noout
3: stop the corresponding OSD daemon
    systemctl stop ceph-osd@OSD_ID
    ceph-osd -i OSD_ID --flush-journal
    ceph-osd -i OSD_ID --mkfs --mkkey --osd-uuid OSD_UUID --osd-journal JOURNAL_DEVICE
4: copy everything from the source drive to the destination drive
5: umount both source and destination drives
6: change the xfs filesystem uuid
     xfs_admin -U b2ff97e8-498f-48b6-93a0-9a2a706f0201 /dev/sdah1
7: ceph-disk activate /dev/
8: if the journal is corrupt, dd if=/dev/zero of=/dev/JOURNAL, then re-activate the ceph disk or restart the ceph-osd service.
    systemctl reset-failed ceph-osd@OSD_ID
    systemctl start ceph-osd@OSD_ID
    ceph-osd --mkjournal -i OSD_ID


To review the detailed partition type code, use the i option in the gdisk command.

https://github.com/ceph/ceph/blob/firefly/src/ceph-disk#L78

JOURNAL_UUID = '45b0969e-9b03-4f30-b4c6-b4b80ceff106'
DMCRYPT_JOURNAL_UUID = '45b0969e-9b03-4f30-b4c6-5ec00ceff106'
OSD_UUID = '4fbd7e29-9d25-41b8-afd0-062c0ceff05d'
DMCRYPT_OSD_UUID = '4fbd7e29-9d25-41b8-afd0-5ec00ceff05d'
TOBE_UUID = '89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be'
DMCRYPT_TOBE_UUID = '89c57f98-2fe5-4dc0-89c1-5ec00ceff2be'

2017年3月29日 星期三

kickstart pre-install script example

KS parameters defined here will be overwritten if the same parameter is defined in the global section.
This allows splitting parameter setup into separate include files.

Development flow:
  Boot from CD-ROM and add ks=xxx to the boot options.
  After entering the KS stage, hit Ctrl + Alt + F2 to enter a shell for debugging.
  The pre-install script will be placed at /tmp/ks-script-xxxxx.


%pre --log=/tmp/pre.log
#!/bin/sh
for hdd in $(ls /sys/class/block/*/device/model 2>/dev/null);do
  hdd_model=$(cat $hdd|sed 's/ //g')
  echo "#Path: $hdd     Model: [$hdd_model]" >> /tmp/part-include

  if [ "$hdd_model" = "GGInInDer" ];then
      echo "#HDD model $hdd_model" > /tmp/part-include
      echo ignoredisk --only-use=$(echo $hdd|cut -d'/' -f5) >> /tmp/part-include
      echo "url --url=\"ftp://10.5.15.10/rhel71\"" >> /tmp/url-include
  fi
done
%end
%include /tmp/part-include
%include /tmp/url-include
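The model match in the %pre script depends on stripping the space padding that sysfs leaves in the model string; that piece of the logic can be exercised on its own (GGInInDer is the example model from the script):

```shell
#!/bin/sh
# sysfs pads the model name with trailing spaces; strip all spaces
# before comparing, exactly as the %pre script does.
strip_model() {
  echo "$1" | sed 's/ //g'
}
strip_model "GGInInDer        "
# -> GGInInDer
```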