
Let's play with GPFS a bit.

I decided to play with GPFS: after setting up a GPFS cluster, I removed a disk from the existing file system and then added disks back to it.

Frankly, I liked how, when you delete a disk from the file system, GPFS migrates the data on that disk to the remaining disks and, once the file system has been shrunk, leaves the disk unused. I did not expect it to be this simple.

In short, with GPFS you can juggle disks as you like without worrying.
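
For reference, the whole exercise below boils down to a handful of commands. This is just a condensed outline of the full session that follows; the file system name shared and the NSD names are from my test cluster:

# remove a disk from the file system (its data is migrated to the remaining disks)
mmdeldisk shared nsd2

# delete the now-unused NSD definition
mmdelnsd nsd2

# recreate NSDs from a stanza file, add one back, then rebalance the data
mmcrnsd -F disk2.lst
mmadddisk shared nsd2
mmrestripefs shared -b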


[root@gpfs01 ~]# mmlsnsd

 File system   Disk name    NSD servers
---------------------------------------------------------------------------
 shared        nsd1         gpfs01.localdomain.org,gpfs02.localdomain.org,gpfs03.localdomain.org
 shared        nsd2         gpfs01.localdomain.org,gpfs02.localdomain.org,gpfs03.localdomain.org

[root@gpfs01 ~]# mmdeldisk shared nsd2
Deleting disks ...
Scanning file system metadata, phase 1 ...
 100 % complete on Tue Jan 22 12:55:13 2019
Scan completed successfully.
Scanning file system metadata, phase 2 ...
 100 % complete on Tue Jan 22 12:55:13 2019
Scan completed successfully.
Scanning file system metadata, phase 3 ...
 100 % complete on Tue Jan 22 12:55:13 2019
Scan completed successfully.
Scanning file system metadata, phase 4 ...
 100 % complete on Tue Jan 22 12:55:13 2019
Scan completed successfully.
Scanning user file metadata ...
   0.02 % complete on Tue Jan 22 12:55:49 2019  (    491520 inodes with total       4189 MB data processed)
 100.00 % complete on Tue Jan 22 12:55:49 2019  (    491520 inodes with total       4189 MB data processed)
Scan completed successfully.
Checking Allocation Map for storage pool system
tsdeldisk completed.
mmdeldisk: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
[root@gpfs01 ~]# mmlsnsd

 File system   Disk name    NSD servers
---------------------------------------------------------------------------
 shared        nsd1         gpfs01.localdomain.org,gpfs02.localdomain.org,gpfs03.localdomain.org
 (free disk)   nsd2         gpfs01.localdomain.org,gpfs02.localdomain.org,gpfs03.localdomain.org

[root@gpfs01 ~]#
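
mmlsnsd now reports nsd2 as "(free disk)", meaning it no longer belongs to any file system. The mounted size of /shared shrinks accordingly, which a plain df will confirm (a suggested check, not part of the session above):

# the file system should now only be as large as the remaining disk, nsd1
df -h /shared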



[root@gpfs01 ~]# mmlsnsd -X -d nsd2

 Disk name    NSD volume ID      Device         Devtype  Node name                Remarks
---------------------------------------------------------------------------------------------------
 nsd2         0A220B165C3F085C   /dev/vdc       generic  gpfs01.localdomain.org   server node
 nsd2         0A220B165C3F085C   /dev/vdc       generic  gpfs02.localdomain.org   server node
 nsd2         0A220B165C3F085C   /dev/vdc       generic  gpfs03.localdomain.org   server node


[root@gpfs01 ~]# mmdelnsd nsd2
mmdelnsd: Processing disk nsd2
mmdelnsd: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.

[root@gpfs01 ~]#


[root@gpfs01 ~]# mmlsnsd

 File system   Disk name    NSD servers
---------------------------------------------------------------------------
 shared        nsd1         gpfs01.localdomain.org,gpfs02.localdomain.org,gpfs03.localdomain.org

[root@gpfs01 ~]#


[root@gpfs01 ~]# mmlsdisk shared
disk         driver   sector     failure holds    holds                            storage
name         type       size       group metadata data  status        availability pool
------------ -------- ------ ----------- -------- ----- ------------- ------------ ------------
nsd1         nsd         512           1 Yes      Yes   ready         up           system
[root@gpfs01 ~]#
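
At this point nsd2 is gone completely and its backing device (/dev/vdc on my nodes) is free to be reused. Before writing new NSD descriptors, it does not hurt to double-check on the OS side that you are handing over the right block devices; a plain lsblk is enough (again, just a suggestion):

# confirm the devices that will become nsd2 and nsd3
lsblk /dev/vdc /dev/vdd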



[root@gpfs01 ~]# mmcrnsd -F disk2.lst
mmcrnsd: Processing disk vdc
mmcrnsd: Processing disk vdd
mmcrnsd: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
[root@gpfs01 ~]# cat disk2.lst
%nsd: device=vdc
nsd=nsd2
servers=gpfs01.localdomain.org,gpfs02.localdomain.org,gpfs03.localdomain.org
usage=dataOnly

%nsd: device=vdd
nsd=nsd3
servers=gpfs01.localdomain.org,gpfs02.localdomain.org,gpfs03.localdomain.org
usage=metadataOnly
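
disk2.lst is an ordinary NSD stanza file: each %nsd: stanza names the device, the NSD, the server list and the intended usage. The same file can also carry a failure group and a storage pool per disk; the stanza below is only an illustration with made-up values, not what I actually used:

%nsd: device=vdc
nsd=nsd2
servers=gpfs01.localdomain.org,gpfs02.localdomain.org,gpfs03.localdomain.org
usage=dataAndMetadata
failureGroup=1
pool=system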



[root@gpfs01 ~]# mmlsnsd

 File system   Disk name    NSD servers
---------------------------------------------------------------------------
 shared        nsd1         gpfs01.localdomain.org,gpfs02.localdomain.org,gpfs03.localdomain.org
 (free disk)   nsd2         gpfs01.localdomain.org,gpfs02.localdomain.org,gpfs03.localdomain.org
 (free disk)   nsd3         gpfs01.localdomain.org,gpfs02.localdomain.org,gpfs03.localdomain.org

[root@gpfs01 ~]#


[root@gpfs01 ~]# mmadddisk shared nsd2

The following disks of shared will be formatted on node gpfs03.localdomain.org:
    nsd2: size 20480 MB
Extending Allocation Map
Checking Allocation Map for storage pool system
Completed adding disks to file system shared.
mmadddisk: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
[root@gpfs01 ~]#
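
The listings further down show nsd3 inside the file system as well, so it was added the same way; that step simply did not make it into the capture. The command follows the same pattern:

# add the remaining free NSD to the file system
mmadddisk shared nsd3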


[root@gpfs01 ~]# mmrestripefs shared -b
Scanning file system metadata, phase 1 ...
  53 % complete on Tue Jan 22 13:23:47 2019
 100 % complete on Tue Jan 22 13:23:51 2019
Scan completed successfully.
Scanning file system metadata, phase 2 ...
 100 % complete on Tue Jan 22 13:23:51 2019
Scan completed successfully.
Scanning file system metadata, phase 3 ...
 100 % complete on Tue Jan 22 13:23:51 2019
Scan completed successfully.
Scanning file system metadata, phase 4 ...
 100 % complete on Tue Jan 22 13:23:52 2019
Scan completed successfully.
Scanning user file metadata ...
  21.84 % complete on Tue Jan 22 13:24:55 2019  (    491520 inodes with total       4193 MB data processed)
  42.71 % complete on Tue Jan 22 13:25:27 2019  (    491520 inodes with total       8201 MB data processed)
 100.00 % complete on Tue Jan 22 13:25:38 2019  (    491520 inodes with total      19189 MB data processed)
Scan completed successfully.
[root@gpfs01 ~]#
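
mmadddisk only makes the new disk available; the data that is already in the file system stays where it is and only new blocks land on the new disk. The -b option of mmrestripefs rebalances the existing blocks across all disks. Afterwards you can eyeball the distribution with mmdf (not shown in my session):

# per-disk capacity and usage, to see how evenly the data is now spread
mmdf shared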


[root@gpfs02 shared]#  mmlsnsd -a -L

 File system   Disk name    NSD volume ID      NSD servers
---------------------------------------------------------------------------------------------
 shared        nsd1         0A220B165C3F0856   gpfs01.localdomain.org,gpfs02.localdomain.org,gpfs03.localdomain.org
 shared        nsd2         0A220B165C46EC7D   gpfs01.localdomain.org,gpfs02.localdomain.org,gpfs03.localdomain.org
 shared        nsd3         0A220B165C46EC83   gpfs01.localdomain.org,gpfs02.localdomain.org,gpfs03.localdomain.org

[root@gpfs02 shared]#  mmlsnsd -X -d nsd1

 Disk name    NSD volume ID      Device         Devtype  Node name                Remarks
---------------------------------------------------------------------------------------------------
 nsd1         0A220B165C3F0856   /dev/vdb       generic  gpfs01.localdomain.org   server node
 nsd1         0A220B165C3F0856   /dev/vdb       generic  gpfs02.localdomain.org   server node
 nsd1         0A220B165C3F0856   /dev/vdb       generic  gpfs03.localdomain.org   server node

[root@gpfs02 shared]#  mmlsnsd -X -d nsd2

 Disk name    NSD volume ID      Device         Devtype  Node name                Remarks
---------------------------------------------------------------------------------------------------
 nsd2         0A220B165C46EC7D   /dev/vdc       generic  gpfs01.localdomain.org   server node
 nsd2         0A220B165C46EC7D   /dev/vdc       generic  gpfs02.localdomain.org   server node
 nsd2         0A220B165C46EC7D   /dev/vdc       generic  gpfs03.localdomain.org   server node

[root@gpfs02 shared]#  mmlsnsd -X -d nsd3

 Disk name    NSD volume ID      Device         Devtype  Node name                Remarks
---------------------------------------------------------------------------------------------------
 nsd3         0A220B165C46EC83   /dev/vdd       generic  gpfs01.localdomain.org   server node
 nsd3         0A220B165C46EC83   /dev/vdd       generic  gpfs02.localdomain.org   server node
 nsd3         0A220B165C46EC83   /dev/vdd       generic  gpfs03.localdomain.org   server node

[root@gpfs02 shared]#
[root@gpfs02 shared]# mmlsdisk shared
disk         driver   sector     failure holds    holds                            storage
name         type       size       group metadata data  status        availability pool
------------ -------- ------ ----------- -------- ----- ------------- ------------ ------------
nsd1         nsd         512           1 Yes      Yes   ready         up           system
nsd2         nsd         512          -1 Yes      Yes   ready         up           system
nsd3         nsd         512          -1 Yes      Yes   ready         up           system
[root@gpfs02 shared]#


[root@gpfs02 shared]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   48G  7.4G   41G  16% /
devtmpfs               8.0G     0  8.0G   0% /dev
tmpfs                  8.0G     0  8.0G   0% /dev/shm
tmpfs                  8.0G   26M  8.0G   1% /run
tmpfs                  8.0G     0  8.0G   0% /sys/fs/cgroup
/dev/vda2             1014M  209M  806M  21% /boot
/dev/loop0             3.2G  3.2G     0 100% /var/repo
tmpfs                  1.6G     0  1.6G   0% /run/user/0
shared                  60G   19G   42G  32% /shared
[root@gpfs02 shared]#
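
With all three NSDs back in the file system, df reports about 60G for /shared, which matches three 20 GB disks. If you prefer the file-system view of the same thing, mmlsfs can list the disks a file system is built from (again, only a pointer, not part of my session):

# list the disks that make up the file system
mmlsfs shared -d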


