Wednesday, November 28, 2018

deduplication comparison - zfs vs. vdo

I decided the time had come to compare zfs and vdo, so I ran a comparison.

I had two IBM POWER8 servers available for the tests; I installed RHEL 7 for zfs and RHEL 8 for vdo and compared them.

The result: vdo used fewer resources, almost half as much, and achieved ~3% better deduplication. It looks like we will be able to use vdo for archival systems once RHEL 8 arrives.

Of course, deduplication isn't everything. If you also need snapshots, replication, and so on, zfs looks set to keep the lead.


About the test system:

[root@Rhel8onPower8 ~]# lsb_release -a
LSB Version:    :core-4.1-noarch:core-4.1-ppc64le:cxx-4.1-noarch:cxx-4.1-ppc64le:desktop-4.1-noarch:desktop-4.1-ppc64le:languages-4.1-noarch:languages-4.1-ppc64le:printing-4.1-noarch:printing-4.1-ppc64le
Distributor ID: RedHatEnterprise
Description:    Red Hat Enterprise Linux release 8.0 Beta (Ootpa)
Release:        8.0
Codename:       Ootpa


[root@Rhel8onPower8 ~]# uname -a
Linux Rhel8onPower8 4.18.0-32.el8.ppc64le #1 SMP Sat Oct 27 18:35:17 UTC 2018 ppc64le ppc64le ppc64le GNU/Linux
[root@Rhel8onPower8 ~]#



[root@Rhel8onPower8 ~]# vdo printConfigFile
config: !Configuration
  vdos:
    vdoarsiv: !VDOService
      _operationState: finished
      ackThreads: 1
      activated: enabled
      bioRotationInterval: 64
      bioThreads: 4
      blockMapCacheSize: 128M
      blockMapPeriod: 16380
      compression: disabled
      cpuThreads: 2
      deduplication: enabled
      device: /dev/disk/by-id/dm-uuid-LVM-j8P9Uhbk5qFa5qYNCoQXvF5GQUyuQgAo6mXwjVUSakKfGix2UOYeLWGpku51YOwD
      hashZoneThreads: 1
      indexCfreq: 0
      indexMemory: 0.25
      indexSparse: disabled
      indexThreads: 0
      logicalBlockSize: 4096
      logicalSize: 30T
      logicalThreads: 1
      name: vdoarsiv
      physicalSize: 3T
      physicalThreads: 1
      readCache: disabled
      readCacheSize: 0M
      slabSize: 2G
      writePolicy: auto
  version: 538380551
filename: /etc/vdoconf.yml

[root@Rhel8onPower8 ~]#
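
For reference, a volume with settings like the ones above can be created with the vdo management tool. The command below is only a sketch: the backing device, mount point and sizes are placeholders based on this config, not the exact commands used here.

vdo create --name=vdoarsiv --device=/dev/your-backing-device \
    --vdoLogicalSize=30T --indexMem=0.25 \
    --deduplication=enabled --compression=disabled
mkfs.xfs -K /dev/mapper/vdoarsiv
mount /dev/mapper/vdoarsiv /arsiv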

[root@Rhel8onPower8 ~]# vdostats  --human-readable
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdoarsiv      3.0T    307.5G      2.7T  10%           77%
[root@Rhel8onPower8 ~]#

[root@Rhel8onPower8 ~]# yum info vdo
Updating Subscription Management repositories.
Last metadata expiration check: 0:41:53 ago on Tue 27 Nov 2018 11:59:03 PM EST.
Installed Packages
Name         : vdo
Version      : 6.2.0.239
Release      : 8.el8
Arch         : ppc64le
Size         : 5.0 M
Source       : vdo-6.2.0.239-8.el8.src.rpm
Repo         : @System
From repo    : localrepoBase
Summary      : Management tools for Virtual Data Optimizer
URL          : http://github.com/dm-vdo/vdo
License      : GPLv2
Description  : Virtual Data Optimizer (VDO) is a device mapper target that delivers
             : block-level deduplication, compression, and thin provisioning.
             :
             : This package provides the user-space management tools for VDO.

[root@Rhel8onPower8 ~]#



The system with ZFSonLinux installed:

[root@rhel7-on-s822l ~]# lsb_release
LSB Version:    :core-4.1-noarch:core-4.1-ppc64le:cxx-4.1-noarch:cxx-4.1-ppc64le:desktop-4.1-noarch:desktop-4.1-ppc64le:languages-4.1-noarch:languages-4.1-ppc64le:printing-4.1-noarch:printing-4.1-ppc64le
[root@rhel7-on-s822l ~]# lsb_release  -a
LSB Version:    :core-4.1-noarch:core-4.1-ppc64le:cxx-4.1-noarch:cxx-4.1-ppc64le:desktop-4.1-noarch:desktop-4.1-ppc64le:languages-4.1-noarch:languages-4.1-ppc64le:printing-4.1-noarch:printing-4.1-ppc64le
Distributor ID: RedHatEnterpriseServer
Description:    Red Hat Enterprise Linux Server release 7.6 (Maipo)
Release:        7.6
Codename:       Maipo
[root@rhel7-on-s822l ~]# uname -a
Linux rhel7-on-s822l 3.10.0-957.el7.ppc64le #1 SMP Thu Oct 4 20:51:36 UTC 2018 ppc64le ppc64le ppc64le GNU/Linux
[root@rhel7-on-s822l ~]#

[root@rhel7-on-s822l ~]# yum info zfs
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Installed Packages
Name        : zfs
Arch        : ppc64le
Version     : 0.8.0
Release     : rc1_102_gad796b8.el7
Size        : 1.8 M
Repo        : installed
Summary     : Commands to control the kernel modules and libraries
URL         : http://zfsonlinux.org/
License     : CDDL
Description : This package contains the ZFS command line utilities.

[root@rhel7-on-s822l ~]#
[root@rhel7-on-s822l ~]# zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
alibaba   880G   339G   541G        -         -     9%    38%  3.98x  ONLINE  -
[root@rhel7-on-s822l ~]#
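
On the zfs side, deduplication has to be enabled explicitly on the pool or dataset. A minimal sketch (the vdev below is a placeholder, the pool name is the one used above):

zpool create alibaba /dev/your-disk-here
zfs set dedup=on alibaba
zfs get dedup,compression alibaba

As a rough cross-check of the ~3% figure: the 3.98x DEDUP ratio reported by zpool corresponds to a saving of 1 - 1/3.98 ≈ 75%, while vdostats reports 77% for the same data set, so vdo comes out a few points ahead here.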


Wednesday, November 7, 2018

defining a new server in cobbler


cobbler is used for network installations via the spacewalk server.
I use the script below to register a new server in cobbler.


[root@spacewalk ~]# cat sunucu-ekle.sh
#!/bin/bash

# the servers must be configured in the BIOS to boot from the second interface


hostname="oel6-test2.oel.local"

int1="eth0"
mac1="00:01:02:03:42:BD"
ipaddr1="192.168.254.152"

int2="eth1"
mac2="00:01:02:03:2A:D4"
ipaddr2="192.168.11.152"

profile="oel6kickstart:1:local"
cobbler system add --name=$hostname --hostname=$hostname --interface=$int1 --mac=$mac1 --ip-address=$ipaddr1 --interface=$int2 --mac=$mac2 --ip-address=$ipaddr2 --profile=$profile

cobbler sync
systemctl restart dhcpd
systemctl restart named

cobbler system list
[root@spacewalk ~]#

Of course, cobbler has to be configured beforehand. At a minimum you need to edit the following files under the /etc/cobbler directory, otherwise the dhcp/dns definitions will break; a minimal sketch of the relevant settings entries follows the list.

zone.template
named.template
dhcp.template
settings
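
For example, the dhcp/dns related switches in /etc/cobbler/settings should look roughly like this (the IP below is a placeholder for your own provisioning server):

manage_dhcp: 1
manage_dns: 1
server: 192.168.11.1
next_server: 192.168.11.1

After changing any of these, "cobbler sync" has to be run again, which the script above already does.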




Monday, November 5, 2018

A little bicycle advertising

Hello,

Today I thought I'd do a bit of advertising for custom-built bicycles.

If you're one of the bicycle lovers, it's worth visiting the link below.

https://soulrider.bike/tr_TR/


Maybe you'd like a bicycle made just for you. :)


Thursday, October 25, 2018

changing the port when using rsync over ssh

When we use rsync over ssh, it uses port 22 by default. The easiest way to change this is to create a config file under the .ssh directory of the user running rsync and define the port there.



sles15:~ # cat .ssh/config
Host yedek
    Port 43288
    hostname yedek.akyuz.tech
    user root
sles15:~ #


rsync -auvh -e ssh yedek:/data/ /mnt/yedek/


If you'd rather not create a config file at all, you can also use it like this:

rsync -auvh -e "ssh -p 43288"  yedek:/data/ /mnt/yedek/

Wednesday, October 24, 2018

removing a lun from a linux system

Removing a lun used with multipath from the system once it is no longer needed:

If we yank a disk (lun) presented from a storage array out of a linux system without warning, we get a flood of error messages. To remove a disk that is no longer needed cleanly, we first have to remove it from multipath and then delete the underlying disks from the system.
The script below is an example of this procedure.

###############################EXAMPLE#########################################

#!/bin/bash
# Removes a multipath device and its underlying scsi disks from the system.
# Usage: ./script multipath-name
tmpf="/tmp/deldevlst.txt"

if (( $# > 0 ))
then
        mpathdev=$1
        dev=/dev/mapper/$mpathdev
        if [ -e $dev ]
        then
                # collect the sdX path devices that belong to this multipath map
                multipath -ll $mpathdev | egrep -v "status=active|size|status=enabled|$mpathdev" | sed 's/| `-//g' | sed 's/`-//g' | awk '{print $2}' > $tmpf
                # remove the multipath map itself
                multipath -f $mpathdev && echo "$mpathdev removed.."
                # then delete each underlying scsi device
                for devname in `cat $tmpf`
                do
                        echo 1 > /sys/block/$devname/device/delete && echo "$devname removed.."
                done
                echo "you can now unmap $mpathdev on the storage array."
        else
                echo "$dev not found"
        fi
else
        echo "Usage: $0 multipath-name"
fi

 ##################################EXAMPLE########################################

If the disk or disks being retired are defined as PVs for lvm, the lvm label should also be removed with pvremove before these steps.
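
For example, assuming the lun is the multipath device named data07 and it was a PV in the volume group vg_data (both names are placeholders):

vgreduce vg_data /dev/mapper/data07
pvremove /dev/mapper/data07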

Tuesday, October 2, 2018

unpacking the initramfs file on RHEL 7 based systems

When we need to inspect the current initramfs file, we can unpack it as shown below and examine its contents.


[root@rhel7 tmp]# mkdir initramfs-extras
[root@rhel7 tmp]# cd initramfs-extras/
[root@rhel7 initramfs-extras]# cp /boot/initramfs-3.10.0-862.el7.x86_64.img .
[root@rhel7 initramfs-extras]# mv initramfs-3.10.0-862.el7.x86_64.img initramfs-3.10.0-862.el7.x86_64.img.gz
[root@rhel7 initramfs-extras]# gunzip initramfs-3.10.0-862.el7.x86_64.img.gz
[root@rhel7 initramfs-extras]# cpio -i < initramfs-3.10.0-862.el7.x86_64.img
126539 blocks
[root@rhel7 initramfs-extras]# ls
bin  dev  etc  init  initramfs-3.10.0-862.el7.x86_64.img  lib  lib64  proc  root  run  sbin  shutdown  sys  sysroot  tmp  usr  var
[root@rhel7 initramfs-extras]#
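
Alternatively, the dracut tools shipped with RHEL 7 can inspect the image without the manual gunzip/cpio steps; for example:

lsinitrd /boot/initramfs-3.10.0-862.el7.x86_64.img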

Friday, September 28, 2018

Examining old compressed system logs

On Linux systems, old log files are compressed automatically, as shown below, so that they don't take up too much space.

fail2ban.log        kern.log.2.gz      syslog.1             Xorg.0.log.old
fail2ban.log.1      kern.log.3.gz      syslog.2.gz          xrdp-sesman.log
fail2ban.log.2.gz   kern.log.4.gz      syslog.3.gz

When we need to examine them, instead of decompressing them with gzip we can use the zless tool.

$ zless fail2ban.log.2.gz


Zless is a filter which allows examination of compressed or plain text files
one screenful at a time on a soft-copy terminal. It is the equivalent of
setting the environment variable LESSOPEN to '|gzip -cdfq -- %s', and the
environment variable LESSMETACHARS to '<space><tab><newline>;*?"()<>[|&^`#\$%=~',
and then running less. However, enough people seem to think that having the
command zless available is important to be worth providing it.
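
The same family of tools also includes zcat and zgrep, which are handy for searching compressed logs without extracting them first; for example (the search pattern is just an illustration):

zgrep "Ban" fail2ban.log.2.gz
zcat syslog.3.gz | tail -n 50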

Saturday, September 1, 2018

copying disks 1-to-1 to a remote machine

If one day you have to move a server but don't want to deal with a reinstall and so on, you can boot both the old and the new server with a live linux (my preference is ubuntu), and then,


on the source server:

dd if=/dev/sda | ssh root@1.2.3.4  "dd of=/dev/vda"

you can copy the disk one-to-one.

1.2.3.4 is the IP address of the new server

vda is the disk on the new server, sda is the disk on the old server
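
If the link between the machines is slow, a variant with on-the-fly compression and a progress indicator may help (a sketch; status=progress needs a reasonably recent GNU dd, and the block size is just a tuning suggestion):

dd if=/dev/sda bs=64M status=progress | gzip -c | ssh root@1.2.3.4 "gzip -dc | dd of=/dev/vda bs=64M"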

Have fun.

Monday, August 6, 2018

selinux - nginx


The denials that nginx produced in the audit log are converted into a local SELinux policy module with audit2allow:

sudo cat /var/log/audit/audit.log | grep nginx | grep denied | sudo audit2allow -M nginx


cat nginx.te

module nginx 1.0;

require {
    type unlabeled_t;
    type httpd_t;
    class dir read;
    class file { getattr read };
}

#============= httpd_t ==============

#!!!! This avc is allowed in the current policy
allow httpd_t unlabeled_t:dir read;
allow httpd_t unlabeled_t:file read;

#!!!! This avc is allowed in the current policy
allow httpd_t unlabeled_t:file getattr;



The generated nginx.pp module is then loaded into the policy and nginx is restarted:

semodule -i nginx.pp

systemctl restart nginx

Tuesday, April 17, 2018

my ssh user settings

the basic lines of the .ssh/config file on my linux system



Host *
    ControlMaster           auto
    ControlPath             ~/.ssh/sockets/%r@%h-%p
    ControlPersist          600
    StrictHostKeyChecking   no


Host mynode
    HostName                mynode.akyuz.tech
    User                    root
    ForwardAgent            yes
    ForwardX11              yes
    TCPKeepAlive            yes
    ConnectTimeout          15
    ConnectionAttempts      2
    Port                    22
    DynamicForward          8989
    LocalForward            127.0.0.1:5901 127.0.0.1:5901
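
Note that ssh does not create the directory used by ControlPath on its own, so it has to exist before the first connection:

mkdir -p ~/.ssh/sockets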

Friday, March 30, 2018

Backing up the RedHat repositories

Today, while sitting around with nothing to do, I decided to put together a script that backs up the redhat repositories. Anyone who wants to mirror the rhel repos can adapt it to their needs and use it.

Don't forget that in order to download all the repos, you also need to activate them on rhn. :)

Have fun.




################################################
#!/bin/bash
# This script was prepared by Remzi AKYUZ to use redhat packages from a local mirror.
# If you have any questions, please ask.
# e-mail: remziakyuz@gmail.com
#set -x
# time reposync --gpgcheck -l --repoid=

repolar=( 'jbappplatform-4.3.0-fp-x86_64-server-5-rpm' 'jbappplatform-4.3.0-x86_64-server-5-rpm' 'jbappplatform-6.4-x86_64-server-5-rpm' \
          'jb-ewp-5-x86_64-server-5-rpm' 'jb-ews-2-x86_64-server-5-rpm' 'jb-wfk-1-x86_64-server-5-rpm' \
          'rhel-x86_64-rhev-agent-5-server' 'rhel-x86_64-server-5' \
          'rhel-x86_64-server-5-mrg-management-2' 'rhel-x86_64-server-5-mrg-messaging-2' 'rhel-x86_64-server-5-thirdparty-oracle-java' \
          'rhel-x86_64-server-cluster-5' 'rhel-x86_64-server-cluster-storage-5' 'rhel-x86_64-server-dts2-5-beta' 'rhel-x86_64-server-fastrack-5' \
          'rhel-x86_64-server-productivity-5' 'rhel-x86_64-server-rhsclient-5' 'rhel-x86_64-server-scalefs-5' 'rhel-x86_64-server-supplementary-5' \
          'rhel-x86_64-server-vt-5' 'rhn-tools-rhel-x86_64-server-5' )

count=0

cd /arsiv/redhat-repo/rhel-5
while [ "${repolar[$count]}" != "" ]
do
        reponame="${repolar[$count]}"

        # -d: the repo directory already exists, so just resync it
        if [ -d $reponame ]; then
                echo "repo already created"
                echo "repos will be updated"
                sleep 3
                time reposync --gpgcheck -l --repoid=$reponame --download_path=$reponame
        else
                # first run: create the directory and the repo metadata
                echo "creating repo $reponame"
                mkdir $reponame
                createrepo $reponame
        fi

        let "count = $count + 1"
done

exit 0
################################################



File checks

We can find out when a file was last accessed, when it was modified, and similar information with the stat command.
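
For example (any file will do; the -c format specifiers below are GNU coreutils stat):

stat /etc/hosts
stat -c 'access: %x  modify: %y  change: %z' /etc/hosts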




Thursday, March 29, 2018

Using multipath on Linux systems - multipathd


One of the most important things to know about multipath is the use of ALUA.

99.9% of today's storage arrays support and use ALUA.

Through ALUA, the storage controllers tell the consuming side (multipathd on the linux servers) how their paths should be used and what priority they should have. Using this information, multipath builds the path usage layout and puts the disk into service.

After this brief bit of background, let's share some useful information about using multipath on rhel based systems in general.

 
 

 1. Check whether the multipath packages are installed; install them if they are not.

    rpm -q device-mapper-multipath

    rpm -q device-mapper
   
    yum install device-mapper

    yum install device-mapper-multipath


    mpathconf --enable --with_multipathd y --with_chkconfig y

    mpathconf


 2. Prepare the basic config file:
   
   cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.defaults /etc/multipath.conf


 3. Add the wwids to the multipath.conf file and define aliases.
   
    We can take these from the files under the /etc/multipath directory, or get them with the command
 
    # scsi_id -g -u /dev/sdb

    (keeping in mind that /dev/sdb may differ!).

 The simplest example of a multipath.conf file:

    defaults {
            user_friendly_names yes
    }

    blacklist {
            devnode "^asm/*"
            devnode "ofsctl"
            devnode "xvd*"
            wwid SATA_SEAGATE_ST950019XF0F2TK_
            wwid SATA_SEAGATE_ST950019XF0F37S_
            wwid "*"
    }

    blacklist_exceptions {
            wwid "36001438009b044d90000900000780000"
    }

    multipaths {
            multipath {
                    wwid    "36001438009b044d90000900000780000"
                    alias   asm1
            }
    }


 4. Start the multipath service (a systemd equivalent is shown after step 5).

 service multipathd start

 chkconfig multipathd on

 5. "multipath -ll" , fdisk, dmesg komutlarıyla kontrol yapılır.

 

One of the most important issues when using multipath is how the multipath device actually reaches the storage.

For this we make use of multipathd. To see the multipath topology in use:

[root@rac01 ~]# multipathd show topology
data05 (3600507640082018548000000000000ee) dm-2 IBM     ,2145
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 7:0:1:11  sdaa               65:160    active ready running
| `- 16:0:1:11 sdbc               67:96     active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 7:0:0:11  sdm                8:192     active ready running
  `- 16:0:0:11 sdao               66:128    active ready running
data06 (3600507640082018548000000000000ef) dm-3 IBM     ,2145
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 7:0:0:12  sdn                8:208     active ready running
| `- 16:0:0:12 sdap               66:144    active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 7:0:1:12  sdab               65:176    active ready running
  `- 16:0:1:12 sdbd               67:112    active ready running

To see the path map of a single multipath device we can use the "multipathd show map multipath-name json" command; the multipath device named data01 reaches the storage array as shown below.


[root@rac01 ~]# multipathd show map data01 json
{
   "major_version": 0,
   "minor_version": 1,
   "map":{
      "name" : "data01",
      "uuid" : "3600507640082018548000000000000ec",
      "sysfs" : "dm-7",
      "failback" : "immediate",
      "queueing" : "5 chk",
      "paths" : 4,
      "write_prot" : "rw",
      "dm_st" : "active",
      "features" : "1 queue_if_no_path",
      "hwhandler" : "0",
      "action" : "",
      "path_faults" : 0,
      "vend" : "IBM     ",
      "prod" : "2145            ",
      "rev" : "0000",
      "switch_grp" : 0,
      "map_loads" : 2,
      "total_q_time" : 0,
      "q_timeouts" : 0,
      "path_groups": [{
         "selector" : "round-robin 0",
         "pri" : 50,
         "dm_st" : "active",
         "group" : 1,
         "paths": [{
            "dev" : "sdy",
            "dev_t" : "65:128",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 50,
            "host_wwnn" : "0x200034800d3da26f",
            "target_wwnn" : "0x500507680b0080a1",
            "host_wwpn" : "0x210034800d3da26f",
            "target_wwpn" : "0x500507680b2180a1",
            "host_adapter" : "0000:07:00.0"
         },{
            "dev" : "sdba",
            "dev_t" : "67:64",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 50,
            "host_wwnn" : "0x200034800d3d9abc",
            "target_wwnn" : "0x500507680b0080a1",
            "host_wwpn" : "0x210034800d3d9abc",
            "target_wwpn" : "0x500507680b2180a1",
            "host_adapter" : "0000:5a:00.0"
         }]
      },{
         "selector" : "round-robin 0",
         "pri" : 10,
         "dm_st" : "enabled",
         "group" : 2,
         "paths": [{
            "dev" : "sdk",
            "dev_t" : "8:160",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 10,
            "host_wwnn" : "0x200034800d3da26f",
            "target_wwnn" : "0x500507680b0080a0",
            "host_wwpn" : "0x210034800d3da26f",
            "target_wwpn" : "0x500507680b2180a0",
            "host_adapter" : "0000:07:00.0"
         },{
            "dev" : "sdam",
            "dev_t" : "66:96",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 10,
            "host_wwnn" : "0x200034800d3d9abc",
            "target_wwnn" : "0x500507680b0080a0",
            "host_wwpn" : "0x210034800d3d9abc",
            "target_wwpn" : "0x500507680b2180a0",
            "host_adapter" : "0000:5a:00.0"
         }]
      }]
   }
}
[root@rac01 ~]#

We can dump the map of all multipath devices with the 'multipathd show maps json' command.


[root@rac01 ~]# multipathd show maps json
{
   "major_version": 0,
   "minor_version": 1,
   "maps": [{
      "name" : "data05",
      "uuid" : "3600507640082018548000000000000ee",
      "sysfs" : "dm-2",
      "failback" : "immediate",
      "queueing" : "5 chk",
      "paths" : 4,
      "write_prot" : "rw",
      "dm_st" : "active",
      "features" : "1 queue_if_no_path",
      "hwhandler" : "0",
      "action" : "",
      "path_faults" : 0,
      "vend" : "IBM     ",
      "prod" : "2145            ",
      "rev" : "0000",
      "switch_grp" : 0,
      "map_loads" : 2,
      "total_q_time" : 0,
      "q_timeouts" : 0,
      "path_groups": [{
         "selector" : "round-robin 0",
         "pri" : 50,
         "dm_st" : "active",
         "group" : 1,
         "paths": [{
            "dev" : "sdaa",
            "dev_t" : "65:160",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 50,
            "host_wwnn" : "0x200034800d3da26f",
            "target_wwnn" : "0x500507680b0080a1",
            "host_wwpn" : "0x210034800d3da26f",
            "target_wwpn" : "0x500507680b2180a1",
            "host_adapter" : "0000:07:00.0"
         },{
            "dev" : "sdbc",
            "dev_t" : "67:96",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 50,
            "host_wwnn" : "0x200034800d3d9abc",
            "target_wwnn" : "0x500507680b0080a1",
            "host_wwpn" : "0x210034800d3d9abc",
            "target_wwpn" : "0x500507680b2180a1",
            "host_adapter" : "0000:5a:00.0"
         }]
      },{
         "selector" : "round-robin 0",
         "pri" : 10,
         "dm_st" : "enabled",
         "group" : 2,
         "paths": [{
            "dev" : "sdm",
            "dev_t" : "8:192",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 10,
            "host_wwnn" : "0x200034800d3da26f",
            "target_wwnn" : "0x500507680b0080a0",
            "host_wwpn" : "0x210034800d3da26f",
            "target_wwpn" : "0x500507680b2180a0",
            "host_adapter" : "0000:07:00.0"
         },{
            "dev" : "sdao",
            "dev_t" : "66:128",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 10,
            "host_wwnn" : "0x200034800d3d9abc",
            "target_wwnn" : "0x500507680b0080a0",
            "host_wwpn" : "0x210034800d3d9abc",
            "target_wwpn" : "0x500507680b2180a0",
            "host_adapter" : "0000:5a:00.0"
         }]
      }]
   },{
      "name" : "data06",
      "uuid" : "3600507640082018548000000000000ef",
      "sysfs" : "dm-3",
      "failback" : "immediate",
      "queueing" : "5 chk",
      "paths" : 4,
      "write_prot" : "rw",
      "dm_st" : "active",
      "features" : "1 queue_if_no_path",
      "hwhandler" : "0",
      "action" : "",
      "path_faults" : 0,
      "vend" : "IBM     ",
      "prod" : "2145            ",
      "rev" : "0000",
      "switch_grp" : 0,
      "map_loads" : 2,
      "total_q_time" : 0,
      "q_timeouts" : 0,
      "path_groups": [{
         "selector" : "round-robin 0",
         "pri" : 50,
         "dm_st" : "active",
         "group" : 1,
         "paths": [{
            "dev" : "sdn",
            "dev_t" : "8:208",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 50,
            "host_wwnn" : "0x200034800d3da26f",
            "target_wwnn" : "0x500507680b0080a0",
            "host_wwpn" : "0x210034800d3da26f",
            "target_wwpn" : "0x500507680b2180a0",
            "host_adapter" : "0000:07:00.0"
         },{
            "dev" : "sdap",
            "dev_t" : "66:144",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 50,
            "host_wwnn" : "0x200034800d3d9abc",
            "target_wwnn" : "0x500507680b0080a0",
            "host_wwpn" : "0x210034800d3d9abc",
            "target_wwpn" : "0x500507680b2180a0",
            "host_adapter" : "0000:5a:00.0"
         }]
      },{
         "selector" : "round-robin 0",
         "pri" : 10,
         "dm_st" : "enabled",
         "group" : 2,
         "paths": [{
            "dev" : "sdab",
            "dev_t" : "65:176",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 10,
            "host_wwnn" : "0x200034800d3da26f",
            "target_wwnn" : "0x500507680b0080a1",
            "host_wwpn" : "0x210034800d3da26f",
            "target_wwpn" : "0x500507680b2180a1",
            "host_adapter" : "0000:07:00.0"
         },{
            "dev" : "sdbd",
            "dev_t" : "67:112",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 10,
            "host_wwnn" : "0x200034800d3d9abc",
            "target_wwnn" : "0x500507680b0080a1",
            "host_wwpn" : "0x210034800d3d9abc",
            "target_wwpn" : "0x500507680b2180a1",
            "host_adapter" : "0000:5a:00.0"
         }]
      }]
   },{
      "name" : "data03",
      "uuid" : "3600507640082018548000000000000f0",
      "sysfs" : "dm-4",
      "failback" : "immediate",
      "queueing" : "5 chk",
      "paths" : 4,
      "write_prot" : "rw",
      "dm_st" : "active",
      "features" : "1 queue_if_no_path",
      "hwhandler" : "0",
      "action" : "",
      "path_faults" : 0,
      "vend" : "IBM     ",
      "prod" : "2145            ",
      "rev" : "0000",
      "switch_grp" : 0,
      "map_loads" : 2,
      "total_q_time" : 0,
      "q_timeouts" : 0,
      "path_groups": [{
         "selector" : "round-robin 0",
         "pri" : 50,
         "dm_st" : "active",
         "group" : 1,
         "paths": [{
            "dev" : "sdac",
            "dev_t" : "65:192",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 50,
            "host_wwnn" : "0x200034800d3da26f",
            "target_wwnn" : "0x500507680b0080a1",
            "host_wwpn" : "0x210034800d3da26f",
            "target_wwpn" : "0x500507680b2180a1",
            "host_adapter" : "0000:07:00.0"
         },{
            "dev" : "sdbe",
            "dev_t" : "67:128",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 50,
            "host_wwnn" : "0x200034800d3d9abc",
            "target_wwnn" : "0x500507680b0080a1",
            "host_wwpn" : "0x210034800d3d9abc",
            "target_wwpn" : "0x500507680b2180a1",
            "host_adapter" : "0000:5a:00.0"
         }]
      },{
         "selector" : "round-robin 0",
         "pri" : 10,
         "dm_st" : "enabled",
         "group" : 2,
         "paths": [{
            "dev" : "sdo",
            "dev_t" : "8:224",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 10,
            "host_wwnn" : "0x200034800d3da26f",
            "target_wwnn" : "0x500507680b0080a0",
            "host_wwpn" : "0x210034800d3da26f",
            "target_wwpn" : "0x500507680b2180a0",
            "host_adapter" : "0000:07:00.0"
         },{
            "dev" : "sdaq",
            "dev_t" : "66:160",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 10,
            "host_wwnn" : "0x200034800d3d9abc",
            "target_wwnn" : "0x500507680b0080a0",
            "host_wwpn" : "0x210034800d3d9abc",
            "target_wwpn" : "0x500507680b2180a0",
            "host_adapter" : "0000:5a:00.0"
         }]
      }]
   },{
      "name" : "data02",
      "uuid" : "3600507640082018548000000000000ed",
      "sysfs" : "dm-5",
      "failback" : "immediate",
      "queueing" : "5 chk",
      "paths" : 4,
      "write_prot" : "rw",
      "dm_st" : "active",
      "features" : "1 queue_if_no_path",
      "hwhandler" : "0",
      "action" : "",
      "path_faults" : 0,
      "vend" : "IBM     ",
      "prod" : "2145            ",
      "rev" : "0000",
      "switch_grp" : 0,
      "map_loads" : 2,
      "total_q_time" : 0,
      "q_timeouts" : 0,
      "path_groups": [{
         "selector" : "round-robin 0",
         "pri" : 50,
         "dm_st" : "active",
         "group" : 1,
         "paths": [{
            "dev" : "sdl",
            "dev_t" : "8:176",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 50,
            "host_wwnn" : "0x200034800d3da26f",
            "target_wwnn" : "0x500507680b0080a0",
            "host_wwpn" : "0x210034800d3da26f",
            "target_wwpn" : "0x500507680b2180a0",
            "host_adapter" : "0000:07:00.0"
         },{
            "dev" : "sdan",
            "dev_t" : "66:112",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 50,
            "host_wwnn" : "0x200034800d3d9abc",
            "target_wwnn" : "0x500507680b0080a0",
            "host_wwpn" : "0x210034800d3d9abc",
            "target_wwpn" : "0x500507680b2180a0",
            "host_adapter" : "0000:5a:00.0"
         }]
      },{
         "selector" : "round-robin 0",
         "pri" : 10,
         "dm_st" : "enabled",
         "group" : 2,
         "paths": [{
            "dev" : "sdz",
            "dev_t" : "65:144",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 10,
            "host_wwnn" : "0x200034800d3da26f",
            "target_wwnn" : "0x500507680b0080a1",
            "host_wwpn" : "0x210034800d3da26f",
            "target_wwpn" : "0x500507680b2180a1",
            "host_adapter" : "0000:07:00.0"
         },{
            "dev" : "sdbb",
            "dev_t" : "67:80",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 10,
            "host_wwnn" : "0x200034800d3d9abc",
            "target_wwnn" : "0x500507680b0080a1",
            "host_wwpn" : "0x210034800d3d9abc",
            "target_wwpn" : "0x500507680b2180a1",
            "host_adapter" : "0000:5a:00.0"
         }]
      }]
   },{
      "name" : "data04",
      "uuid" : "3600507640082018548000000000000eb",
      "sysfs" : "dm-6",
      "failback" : "immediate",
      "queueing" : "5 chk",
      "paths" : 4,
      "write_prot" : "rw",
      "dm_st" : "active",
      "features" : "1 queue_if_no_path",
      "hwhandler" : "0",
      "action" : "",
      "path_faults" : 0,
      "vend" : "IBM     ",
      "prod" : "2145            ",
      "rev" : "0000",
      "switch_grp" : 0,
      "map_loads" : 2,
      "total_q_time" : 0,
      "q_timeouts" : 0,
      "path_groups": [{
         "selector" : "round-robin 0",
         "pri" : 50,
         "dm_st" : "active",
         "group" : 1,
         "paths": [{
            "dev" : "sdj",
            "dev_t" : "8:144",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 50,
            "host_wwnn" : "0x200034800d3da26f",
            "target_wwnn" : "0x500507680b0080a0",
            "host_wwpn" : "0x210034800d3da26f",
            "target_wwpn" : "0x500507680b2180a0",
            "host_adapter" : "0000:07:00.0"
         },{
            "dev" : "sdal",
            "dev_t" : "66:80",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 50,
            "host_wwnn" : "0x200034800d3d9abc",
            "target_wwnn" : "0x500507680b0080a0",
            "host_wwpn" : "0x210034800d3d9abc",
            "target_wwpn" : "0x500507680b2180a0",
            "host_adapter" : "0000:5a:00.0"
         }]
      },{
         "selector" : "round-robin 0",
         "pri" : 10,
         "dm_st" : "enabled",
         "group" : 2,
         "paths": [{
            "dev" : "sdx",
            "dev_t" : "65:112",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 10,
            "host_wwnn" : "0x200034800d3da26f",
            "target_wwnn" : "0x500507680b0080a1",
            "host_wwpn" : "0x210034800d3da26f",
            "target_wwpn" : "0x500507680b2180a1",
            "host_adapter" : "0000:07:00.0"
         },{
            "dev" : "sdaz",
            "dev_t" : "67:48",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 10,
            "host_wwnn" : "0x200034800d3d9abc",
            "target_wwnn" : "0x500507680b0080a1",
            "host_wwpn" : "0x210034800d3d9abc",
            "target_wwpn" : "0x500507680b2180a1",
            "host_adapter" : "0000:5a:00.0"
         }]
      }]
   },{
      "name" : "data01",
      "uuid" : "3600507640082018548000000000000ec",
      "sysfs" : "dm-7",
      "failback" : "immediate",
      "queueing" : "5 chk",
      "paths" : 4,
      "write_prot" : "rw",
      "dm_st" : "active",
      "features" : "1 queue_if_no_path",
      "hwhandler" : "0",
      "action" : "",
      "path_faults" : 0,
      "vend" : "IBM     ",
      "prod" : "2145            ",
      "rev" : "0000",
      "switch_grp" : 0,
      "map_loads" : 2,
      "total_q_time" : 0,
      "q_timeouts" : 0,
      "path_groups": [{
         "selector" : "round-robin 0",
         "pri" : 50,
         "dm_st" : "active",
         "group" : 1,
         "paths": [{
            "dev" : "sdy",
            "dev_t" : "65:128",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 50,
            "host_wwnn" : "0x200034800d3da26f",
            "target_wwnn" : "0x500507680b0080a1",
            "host_wwpn" : "0x210034800d3da26f",
            "target_wwpn" : "0x500507680b2180a1",
            "host_adapter" : "0000:07:00.0"
         },{
            "dev" : "sdba",
            "dev_t" : "67:64",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 50,
            "host_wwnn" : "0x200034800d3d9abc",
            "target_wwnn" : "0x500507680b0080a1",
            "host_wwpn" : "0x210034800d3d9abc",
            "target_wwpn" : "0x500507680b2180a1",
            "host_adapter" : "0000:5a:00.0"
         }]
      },{
         "selector" : "round-robin 0",
         "pri" : 10,
         "dm_st" : "enabled",
         "group" : 2,
         "paths": [{
            "dev" : "sdk",
            "dev_t" : "8:160",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 10,
            "host_wwnn" : "0x200034800d3da26f",
            "target_wwnn" : "0x500507680b0080a0",
            "host_wwpn" : "0x210034800d3da26f",
            "target_wwpn" : "0x500507680b2180a0",
            "host_adapter" : "0000:07:00.0"
         },{
            "dev" : "sdam",
            "dev_t" : "66:96",
            "dm_st" : "active",
            "dev_st" : "running",
            "chk_st" : "ready",
            "checker" : "tur",
            "pri" : 10,
            "host_wwnn" : "0x200034800d3d9abc",
            "target_wwnn" : "0x500507680b0080a0",
            "host_wwpn" : "0x210034800d3d9abc",
            "target_wwpn" : "0x500507680b2180a0",
            "host_adapter" : "0000:5a:00.0"
         }]
      }]
   }]
}
[root@rac01 ~]#

Another example of multipathd usage:

# multipathd show maps format "%n %d %s %N %Q %x %r %0 %1 %2 %s %A"





Another important point is the major and minor device numbers of the multipath device.

We can use the dmsetup command to find them.

[root@rac01 ~]#   dmsetup  ls --target multipath
data01  (252, 7)
data06  (252, 3)
data05  (252, 2)
data04  (252, 6)
data03  (252, 4)
data02  (252, 5)
[root@rac01 ~]#

Another useful output of the dmsetup command is the dmsetup table structure.


[root@rac01 ~]#  dmsetup table --tree
ol-var_log: 0 20971520 linear 8:3 668846080
23f4331e--f90d--4c51--92d9--5a69a3acca65-xleases: 0 2097152 linear 67:32 5507072
data01: 0 41943040 multipath 1 queue_if_no_path 0 2 1 round-robin 0 2 1 65:128 1 67:64 1 round-robin 0 2 1 8:160 1 66:96 1
23f4331e--f90d--4c51--92d9--5a69a3acca65-719ca94a--7f83--4a79--ab88--855f7896f9e6: 0 262144 linear 67:32 9701376
84b6a05f--6d12--48c9--bc82--bec55af1fa6a-218943c2--6f53--4f79--932e--2f46cb85e281: 0 524288000 linear 65:0 832047104
...

...

data05: 0 41943040 multipath 1 queue_if_no_path 0 2 1 round-robin 0 2 1 65:160 1 67:96 1 round-robin 0 2 1 8:192 1 66:128 1
84b6a05f--6d12--48c9--bc82--bec55af1fa6a-5562e8c3--3941--492d--8d44--adb011b959c5: 0 2097152 linear 65:0 1661470720
[root@rac01 ~]#  

 

We can also check the state of a multipath device with "dmsetup info multipath-name".

[root@rac01 ~]#  dmsetup info data04
Name:              data04
State:             ACTIVE
Read Ahead:        8192
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      252, 6
Number of targets: 1
UUID: mpath-3600507640082018548000000000000eb

[root@rac01 ~]#


[root@rac01 ~]#  dmsetup info data04p1
Name:              data04p1
State:             ACTIVE
Read Ahead:        8192
Tables present:    LIVE
Open count:        4
Event number:      0
Major, minor:      252, 12
Number of targets: 1
UUID: part1-mpath-3600507640082018548000000000000eb

[root@rac01 ~]#


After the multipath configuration, to test whether multipath is working properly, the fc cables can be pulled or the ports can be temporarily disabled.

To disable a port temporarily:

echo "pci-device-id" > /sys/bus/pci/drivers/pci-driver-name/unbind


[root@rac01 ~]# lspci |grep -i fibre
08:00.0 Fibre Channel: QLogic Corp. ISP2722-based 16/32Gb Fibre Channel to PCIe Adapter (rev 01)
5b:00.0 Fibre Channel: QLogic Corp. ISP2722-based 16/32Gb Fibre Channel to PCIe Adapter (rev 01)
[root@rac01 ~]#

[root@rac01 ~]# ls   /sys/bus/pci/drivers/qla2xxx/
0000:08:00.0  0000:5b:00.0  bind  module  new_id  remove_id  uevent  unbind
[root@rac01 ~]#

data02 (3600507640082018548000000000000ed) dm-4 IBM     ,2145
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| |- 15:0:0:10 sdl                8:176     active undef running
| `- 16:0:0:10 sdan               66:112    active undef running
`-+- policy='round-robin 0' prio=0 status=enabled
  |- 15:0:1:10 sdz                65:144    active undef running
  `- 16:0:1:10 sdbb               67:80     active undef running
[root@rac01 ~]# ls  /sys/bus/pci/drivers/qla2xxx/
0000:08:00.0  0000:5b:00.0  bind  module  new_id  remove_id  uevent  unbind
[root@rac01 ~]# echo "0000:08:00.0" > /sys/bus/pci/drivers/qla2xxx/unbind
[root@rac01 ~]#

data02 (3600507640082018548000000000000ed) dm-4 IBM     ,2145
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| `- 16:0:0:10 sdan               66:112    active undef running
`-+- policy='round-robin 0' prio=0 status=enabled
  `- 16:0:1:10 sdbb               67:80     active undef running
[root@rac01 ~]#

Another command that gives us useful information about the disks is lsscsi. It is worth studying the lsscsi command in detail.

Example usage:

[root@rac01 ~]# lsscsi -tigs
[0:0:1:0]    disk                                    /dev/sda   -  /dev/sg0    480GB
[0:1:6:0]    enclosu                                 -          -  /dev/sg1        -
[7:0:0:0]    disk    fc:0x500507680b2180a00x011600   /dev/sdb   -  /dev/sg2   1.09TB
[7:0:0:1]    disk    fc:0x500507680b2180a00x011600   /dev/sdc   -  /dev/sg3   1.09TB
[7:0:0:2]    disk    fc:0x500507680b2180a00x011600   /dev/sdd   -  /dev/sg4   2.19TB
[7:0:0:8]    disk    fc:0x500507680b2180a00x011600   /dev/sdj   3600507640082018548000000000000eb  /dev/sg10  21.4GB
[7:0:1:3]    disk    fc:0x500507680b2180a10x011700   /dev/sds   -  /dev/sg19  1.09TB
[7:0:1:8]    disk    fc:0x500507680b2180a10x011700   /dev/sdx   3600507640082018548000000000000eb  /dev/sg24  21.4GB
[7:0:1:12]   disk    fc:0x500507680b2180a10x011700   /dev/sdab  3600507640082018548000000000000ef  /dev/sg28  21.4GB
[7:0:1:13]   disk    fc:0x500507680b2180a10x011700   /dev/sdac  3600507640082018548000000000000f0  /dev/sg29  21.4GB
[16:0:0:5]   disk    fc:0x500507680b2180a00x011600   /dev/sdai  -  /dev/sg35   214GB
[16:0:0:6]   disk    fc:0x500507680b2180a00x011600   /dev/sdaj  -  /dev/sg36   536GB
[16:0:0:11]  disk    fc:0x500507680b2180a00x011600   /dev/sdao  3600507640082018548000000000000ee  /dev/sg41  21.4GB
3600507640082018548000000000000f0  /dev/sg57  21.4GB



