AIX Power 740

Hardware

Front view of a P740
Rear view of a P740

A P740 is generally divided into two halves, one for each VIO. This gives:

  • 2 RAID cards
  • 2*2 hard disks (to mirror the disks on each VIO)
  • 2 network cards
  • 2 fibre channel (FC) cards
  • 1 optical drive

There are four independent network ports, named "LHEA"; port 1 can be assigned to VIO1, port 2 to VIO2, port 3 to VIO1, and so on.
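
The logical HEA ports assigned to a VIO can be listed with lsdev, as done elsewhere on this page (a hedged sketch; adapter names and output vary):

lsdev -Ccadapter | grep hea
ent0     Available       Logical Host Ethernet Port (lp-hea)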

Example of hardware distribution across the two VIOs

Network virtualization configuration

Without aggregation

Each VIO is connected to the outside world by only one network link; redundancy can be added by connecting one VIO to one switch and the other VIO to a second switch. This avoids a SPOF.

AIX-SEA-Details.JPG

The physical port corresponds to ent0; it carries a default VLAN and a tagged VLAN (60).
ent9 is the layer that shares this physical interface (the Shared Ethernet Adapter).
ent5 is the virtual interface assigned to this VIO; it is the entry point of the VIOs and carries VLAN 60.
ent7 controls ent5; it talks to the other VIO so that the two can share the traffic.
en9 is the IP interface local to the VIO; the administration address is assigned to it. There is no VLAN tag here; this traffic is tagged at the switch level.

Without aggregation, each VIO is connected to the network through a single physical port:
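# Create the SEA: bridge the physical adapter ent0 with the virtual adapter ent5 (default PVID 1)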
mkvdev -sea ent0 -vadapter ent5 -default ent5 -defaultid 1
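# Enable SEA failover between the two VIOs, with ent7 as the control channel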
chdev -dev ent9 -attr ha_mode=auto ctl_chan=ent7
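# Assign the administration IP address to en9, the SEA's IP interface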
mktcpip -hostname SERV1 -inetaddr 1.1.1.3 -interface en9 -netmask 255.255.255.240 -gateway 1.1.1.1

lsmap -all -net
[...]
SVEA   Physloc
------ --------------------------------------------
ent5   UXXXX-V1-C10-T1

SEA                   ent9
Backing device        ent0
Status                Available
Physloc               UXXXX-P1-C3-T1

lsdev -Ccadapter | grep ent
[...]
ent0     Available       Logical Host Ethernet Port (lp-hea)
ent5     Available       Virtual I/O Ethernet Adapter (l-lan)
ent7     Available       Virtual I/O Ethernet Adapter (l-lan)
ent9     Available       Shared Ethernet Adapter
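
To see which VIO currently carries the traffic, the SEA failover state can be queried with entstat (a hedged check, output abridged; ent9 is the SEA created above, and the other VIO should report BACKUP):

entstat -all ent9 | grep -i state
    State: PRIMARY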

With aggregation

The network can be hardened by using two physical interfaces per VIO. Even if a VIO and a switch fail at the same time, there is no service outage.

AIX-SEA-Aggregation-Details.JPG

ent1 and ent2 are the two physical interfaces.
ent10 is the layer that aggregates the two physical adapters.
ent11 shares the aggregate with the virtual adapters (the Shared Ethernet Adapter).
ent6 is the virtual interface assigned to this VIO; it is the entry point of the VIOs and carries VLAN 40.
ent8 controls ent6; it talks to the other VIO so that the two can share the traffic.

AIX-SEA-Aggregation.JPG
smit dev -> Communication -> EtherChannel / IEEE 802.3ad Link Aggregation -> Add An EtherChannel / Link Aggregation
EtherChannel / Link Aggregation Adapters            	ent1,ent2                                                
  Enable Alternate Address                            		no                                                       
  Alternate Address                                  		[]                                                        
  Enable Gigabit Ethernet Jumbo Frames                	no                                                       
  Mode                                                		8023ad                                                   
  IEEE 802.3ad Interval                               		long                                                     
  Hash Mode                                           		default       
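
The same aggregate can also be created directly from the VIOS command line instead of SMIT (a hedged equivalent of the screen above):

mkvdev -lnagg ent1,ent2 -attr mode=8023ad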

mkvdev -sea ent10 -vadapter ent6 -default ent6 -defaultid 2
chdev -dev ent11 -attr ha_mode=auto ctl_chan=ent8

lsmap -all -net
[...]
SVEA   Physloc
------ --------------------------------------------
ent6   UXXXX-V1-C11-T1

SEA                   ent11
Backing device        ent10
Physloc

lsdev -Ccadapter | grep ent
[...]
ent1     Available 04-00 4-Port 10/100/1000 Base-TX PCI-Express Adapter (14106803)
ent2     Available 04-01 4-Port 10/100/1000 Base-TX PCI-Express Adapter (14106803)
ent6     Available       Virtual I/O Ethernet Adapter (l-lan)
ent8     Available       Virtual I/O Ethernet Adapter (l-lan)
ent10    Available       EtherChannel / IEEE 802.3ad Link Aggregation
ent11    Available       Shared Ethernet Adapter
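
To confirm that LACP has actually negotiated with the switch, the 802.3ad state of each aggregated port can be inspected (a hedged check, output abridged; every port should report IN_SYNC):

entstat -all ent10 | grep -i synchronization
        Synchronization: IN_SYNC
        Synchronization: IN_SYNC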

SAN virtualization configuration

The idea is to use NPIV so that, on the SAN array, each partition is seen as its own server. Without NPIV, only the two VIOs would be visible at the SAN array level; administering the LUNs then becomes difficult and a source of human error!

AIX-Infra-SAN.JPG

There are two physical FC cards, one for each VIO. Each card is connected by two fibres to the two SAN fabrics.

Each partition then gets two virtual FC adapters per VIO: four virtual adapters on the partition in total, matched by two virtual adapters on each VIO. There may be a better layout, since this ends up with 8 paths per LUN! But whatever the state of the SAN and the fabrics, a VIO can be rebooted with complete peace of mind.
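
On a client partition this layout should therefore show up as 8 paths per LUN, which is easy to count (a hedged sketch with a hypothetical disk name):

lspath -l hdisk0 | wc -l
       8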

The steps:

  • Create the virtual fibre adapters on the LPAR: IDs 20, 21, 22, 23
AIX-SAN-adapt-AIX.JPG
  • Create the virtual fibre adapters on VIO1: IDs 20, 21
Remember to assign these adapters to the newly created partition
AIX-SAN-adapt-vio1.JPG
  • Create the virtual fibre adapters on VIO2: IDs 22, 23
Remember to assign these adapters to the newly created partition
  • Resulting LPAR configuration:
AIX-SAN-profil-virt-AIX.JPG
  • Resulting VIO1 configuration:
AIX-SAN-profil-virt-VIO1.JPG
  • Restart both VIOs so that they detect the new adapters
  • On each VIO, map the virtual adapters (vfchost) onto the physical adapters (fcs):
$ lsdev -dev vfchost*
name             status      description
vfchost0         Available   Virtual FC Server Adapter
vfchost1         Available   Virtual FC Server Adapter
$ lsdev -dev fcs*
name             status      description
fcs0             Available   8Gb PCI Express Dual Port FC Adapter (df1000xxxx)
fcs1             Available   8Gb PCI Express Dual Port FC Adapter (df1000xxxx)
$ lsnports
name             physloc                        fabric tports aports swwpns  awwpns
fcs0             XXX-P1-C4-T1                        1     64     64   2048    2048
fcs1             XXX-P1-C4-T2                        1     64     64   2048    2042
$ lsmap -all -npiv
Name          Physloc                            ClntID ClntName       ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost0      XXX-V1-C20                              3

Status:NOT_LOGGED_IN
FC name:                        FC loc code:
Ports logged in:0
Flags:1<NOT_MAPPED,NOT_CONNECTED>
VFC client name:                VFC client DRC:

Name          Physloc                            ClntID ClntName       ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost1      XXX-V1-C21                              3

Status:NOT_LOGGED_IN
FC name:                        FC loc code:
Ports logged in:0
Flags:1<NOT_MAPPED,NOT_CONNECTED>
VFC client name:                VFC client DRC:



$ vfcmap -vadapter vfchost0 -fcp fcs0
$ vfcmap -vadapter vfchost1 -fcp fcs1



$ lsmap -all -npiv
Name          Physloc                            ClntID ClntName       ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost0      XXX-V1-C20                              3

Status:NOT_LOGGED_IN
FC name:fcs0                    FC loc code:XXX-P1-C4-T1
Ports logged in:0
Flags:4<NOT_LOGGED>
VFC client name:                VFC client DRC:

Name          Physloc                            ClntID ClntName       ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost1      XXX-V1-C21                              3

Status:NOT_LOGGED_IN
FC name:fcs1                    FC loc code:XXX-P1-C4-T2
Ports logged in:0
Flags:4<NOT_LOGGED>
VFC client name:                VFC client DRC:

The generated WWNs can be seen in the LPAR profile by displaying the properties of the virtual adapter:

AIX-SAN-adapt-AIX-WWN.JPG
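
Once the partition is running, the same WWPNs can also be read from the client LPAR itself (a hedged alternative; fcs0 is the first virtual FC adapter, value masked):

lscfg -vl fcs0 | grep "Network Address"
        Network Address.............C050760XXXXXXXXX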

Enable an initiator to do the SAN zoning

In SMS mode, the FC ports can be enabled one by one so that they become visible on the SAN fabric:

Main Menu
 5.   Select Boot Options
Multiboot
 4.   SAN Zoning Support
>Enable each FC adapter and do the zoning
Select Media Adapter
 1.  XXX-V8-C70-T1    /vdevice/vfc-client@30000046
 2.  XXX-V8-C71-T1    /vdevice/vfc-client@30000047
 3.  XXX-V8-C72-T1    /vdevice/vfc-client@30000048
 4.  XXX-V8-C73-T1    /vdevice/vfc-client@30000049
   .-----------------------------------------------------------------------.
   |  The selected adapter has been opened.                                |
   |  Zoning of attached disks may now be possible.                        |
   |  Press any key to close the adapter and return to the previous menu.  |
   `-----------------------------------------------------------------------'

Disk striping

The SAN array already does striping, but striping can also be done in software by spreading a VG over several PVs with the right options.

Striping of the FSs containing the dbf files across 7 disks
Distribution of the PEs across the 7 disks

Layout:

  • 8 LUNs of 150 GB – RAID 5 – FRA + oradata + oratemp
  • 1 LUN REDO1 – RAID 10 – redo logs 1
  • 1 LUN REDO2 – RAID 10 – redo logs 2

Creating the VG

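# -S creates a scalable VG (the classic 1016-PP-per-PV limit would be exceeded here)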
root [SERV]:/oracle/oradata> mkvg -S -y bddvg hdisk0 hdisk1 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9
bddvg
root [SERV]:/> lsvg -p bddvg
bddvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk0            active            1598        40          00..00..00..00..40
hdisk1            active            1598        41          00..00..00..00..41
hdisk2            active            1598        41          00..00..00..00..41
hdisk3            active            1598        42          00..00..00..00..42
hdisk4            active            1598        42          00..00..00..00..42
hdisk5            active            1598        43          00..00..00..00..43
hdisk6            active            1598        43          00..00..00..00..43
hdisk7            active            430         430         86..86..86..86..86
hdisk8            active            62          62          13..12..12..12..13
hdisk9            active            62          62          13..12..12..12..13

Creating the LVs

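# -e x sets the inter-PV allocation policy to maximum: consecutive LPs are placed on different disks (PP-level striping)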
root [SERV]:/> mklv -e x -t jfs2log -y lv_bdd_log bddvg 14 hdisk0 hdisk1 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6
lv_bdd_log
root [SERV]:/oracle/oradata/BDD> mklv -t jfs2log -y lv_bdd_log1 bddvg 1 hdisk7
lv_bdd_log1
root [SERV]:/oracle/oradata/BDD> mklv -t jfs2log -y lv_bdd_log2 bddvg 1 hdisk8
lv_bdd_log2
root [SERV]:/oracle/oradata/BDD> mklv -t jfs2log -y lv_bdd_log3 bddvg 1 hdisk9
lv_bdd_log3
root [SERV]:/oracle/oradata/BDD> for i in lv_bdd_log lv_bdd_log1 lv_bdd_log2 lv_bdd_log3; do logform /dev/$i; done
logform: destroy /dev/rlv_bdd_log (y)?
logform: destroy /dev/rlv_bdd_log1 (y)?
logform: destroy /dev/rlv_bdd_log2 (y)?
logform: destroy /dev/rlv_bdd_log3 (y)?
root [SERV]:/> mklv -e x -t jfs2 -y lv_bdd_d01 bddvg 75G hdisk0 hdisk1 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6
lv_bdd_d01
root [SERV]:/> mklv -e x -t jfs2 -y lv_bdd_d02 bddvg 335G hdisk0 hdisk1 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6
lv_bdd_d02
root [SERV]:/> mklv -e x -t jfs2 -y lv_bdd_d03 bddvg 270G hdisk0 hdisk1 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6
lv_bdd_d03
root [SERV]:/> mklv -t jfs2 -y lv_bdd_fra bddvg 429 hdisk7
lv_bdd_fra
root [SERV]:/> mklv -t jfs2 -y lv_bdd_r01 bddvg 61 hdisk8
lv_bdd_r01
root [SERV]:/> mklv -t jfs2 -y lv_bdd_r02 bddvg 61 hdisk9
lv_bdd_r02
root [SERV]:/> lsvg -l bddvg
bddvg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
lv_bdd_log      jfs2log    14      14      7    open/syncd    N/A
lv_bdd_d01      jfs2       1200    1200    7    open/syncd    /oracle/oradata/BDD
lv_bdd_d02      jfs2       5360    5360    7    open/syncd    /oracle/oradata2/BDD
lv_bdd_d03      jfs2       4320    4320    7    open/syncd    /oracle/oradata3/BDD
lv_bdd_log1     jfs2log    1       1       1    open/syncd    N/A
lv_bdd_log2     jfs2log    1       1       1    open/syncd    N/A
lv_bdd_log3     jfs2log    1       1       1    open/syncd    N/A
lv_bdd_fra      jfs2       429     429     1    open/syncd    /oracle/flash_recovery_area/BDD
lv_bdd_r01      jfs2       61      61      1    open/syncd    /oracle/redolog01/BDD
lv_bdd_r02      jfs2       61      61      1    open/syncd    /oracle/redolog02/BDD

Creating the file systems

root [SERV]:/> crfs -v jfs2 -a logname=rlv_bdd_log -d lv_bdd_d01 -m /oracle/oradata/BDD -p rw -A yes
File system created successfully.
78640596 kilobytes total disk space.
New File System size is 157286400
root [SERV]:/> crfs -v jfs2 -a logname=rlv_bdd_log -d lv_bdd_d02 -m /oracle/oradata2/BDD -p rw -A yes
File system created successfully.
351262036 kilobytes total disk space.
New File System size is 702545920
root [SERV]:/> crfs -v jfs2 -a logname=rlv_bdd_log -d lv_bdd_d03 -m /oracle/oradata3/BDD -p rw -A yes
File system created successfully.
283106676 kilobytes total disk space.
New File System size is 566231040
root [SERV]:/> crfs -v jfs2  -a logname=rlv_bdd_log1 -d lv_bdd_fra -m /oracle/flash_recovery_area/BDD -p rw -A yes
File system created successfully.
28113880 kilobytes total disk space.
New File System size is 56229888
root [SERV]:/> crfs -v jfs2  -a logname=rlv_bdd_log2 -d lv_bdd_r01 -m /oracle/redolog01/BDD -p rw -A yes
File system created successfully.
3997368 kilobytes total disk space.
New File System size is 7995392
root [SERV]:/> crfs -v jfs2  -a logname=rlv_bdd_log3 -d lv_bdd_r02 -m /oracle/redolog02/BDD -p rw -A yes
File system created successfully.
3997368 kilobytes total disk space.
New File System size is 7995392
root [SERV]:/> for i in oradata oradata2 oradata3 flash_recovery_area redolog01 redolog02; do mount /oracle/${i}/BDD;done
root [SERV]:/> df -Ig | grep BDD
/dev/lv_bdd_d01     75.00      0.01     74.99    1% /oracle/oradata/BDD
/dev/lv_bdd_d02    335.00      0.05    334.95    1% /oracle/oradata2/BDD
/dev/lv_bdd_d03    270.00      0.04    269.96    1% /oracle/oradata3/BDD
/dev/lv_bdd_fra     26.81      0.00     26.81    1% /oracle/flash_recovery_area/BDD
/dev/lv_bdd_r01      3.81      0.00      3.81    1% /oracle/redolog01/BDD
/dev/lv_bdd_r02      3.81      0.00      3.81    1% /oracle/redolog02/BDD

Checking the striping

The PEs are spread across all the disks:

root [SERV]:/> lslv -m lv_bdd_d01 | head -20
lv_bdd_d01:/oracle/oradata/BDD
LP    PP1  PV1               PP2  PV2               PP3  PV3
0001  0323 hdisk0
0002  0323 hdisk1
0003  0323 hdisk2
0004  0323 hdisk3
0005  0323 hdisk4
0006  0323 hdisk5
0007  0323 hdisk6
0008  0324 hdisk0
0009  0324 hdisk1
0010  0324 hdisk2
0011  0324 hdisk3
0012  0324 hdisk4
0013  0324 hdisk5
0014  0324 hdisk6
0015  0325 hdisk0
0016  0325 hdisk1
0017  0325 hdisk2
0018  0325 hdisk3

SAN MPIO

Native MPIO on VNX with the EMC ODM driver

See the documentation:

  • EMC_AIX-Native-MPIO-for-Clariion-Storage-Systems
  • EMC_PowerPath_For_Aix-Installation_Administration_Guide

The disks show up with an unknown type, and there are as many disks as there are paths:

root [SERV]:/home> lsdev -c disk
hdisk0  Available 50-T1-01 Other FC SCSI Disk Drive
hdisk1  Available 50-T1-01 Other FC SCSI Disk Drive
hdisk2  Available 50-T1-01 Other FC SCSI Disk Drive
hdisk3  Available 50-T1-01 Other FC SCSI Disk Drive
hdisk4  Available 51-T1-01 Other FC SCSI Disk Drive
hdisk5  Available 51-T1-01 Other FC SCSI Disk Drive
hdisk6  Available 51-T1-01 Other FC SCSI Disk Drive
hdisk7  Available 51-T1-01 Other FC SCSI Disk Drive
hdisk8  Available 52-T1-01 Other FC SCSI Disk Drive
hdisk9  Available 52-T1-01 Other FC SCSI Disk Drive
hdisk10 Available 52-T1-01 Other FC SCSI Disk Drive
hdisk11 Available 52-T1-01 Other FC SCSI Disk Drive
hdisk12 Available 53-T1-01 Other FC SCSI Disk Drive
hdisk13 Available 53-T1-01 Other FC SCSI Disk Drive
hdisk14 Available 53-T1-01 Other FC SCSI Disk Drive
hdisk15 Available 53-T1-01 Other FC SCSI Disk Drive

The absence of the EMC ODM driver can also be confirmed:

root [SERV]:/home> lslpp -L | grep EMC
root [SERV]:/home>

Installing the driver with SMIT:

root [SERV]:/tmp/EMC> ll EMC.AIX.5.3.0.5
-rw-r--r--    1 root     sys        52172800 Jan 20 2011  EMC.AIX.5.3.0.5
root [SERV]:/tmp/EMC> smit installp

Install Software

* INPUT device / directory for software              [.]

* SOFTWARE to install                                       [+ 5.3.0.5  EMC CLARiiON AIX Support Software + 5.3.0.5  EMC CLARiiON FCP MPIO Support Software]
  PREVIEW only? (install operation will NOT occur)    no
  ACCEPT new license agreements?                      yes

[…]
installp: APPLYING software for:
        EMC.CLARiiON.aix.rte 5.3.0.5
        EMC.CLARiiON.fcp.MPIO.rte 5.3.0.5
[…]
Installation Summary
--------------------
Name                        Level           Part        Event       Result
-------------------------------------------------------------------------------
EMC.CLARiiON.aix.rte        5.3.0.5         USR         APPLY       SUCCESS
EMC.CLARiiON.fcp.MPIO.rte   5.3.0.5         USR         APPLY       SUCCESS
EMC.CLARiiON.aix.rte        5.3.0.5         ROOT        APPLY       SUCCESS
EMC.CLARiiON.fcp.MPIO.rte   5.3.0.5         ROOT        APPLY       SUCCESS

The presence of the EMC ODM driver can now be verified:
root [SERV]:/tmp/EMC> lslpp -L | grep EMC
  EMC.CLARiiON.aix.rte       5.3.0.5    C     F    EMC CLARiiON AIX Support
  EMC.CLARiiON.fcp.MPIO.rte  5.3.0.5    C     F    EMC CLARiiON FCP MPIO Support

Reboot the server to load the driver:

root [SERV]:/tmp/EMC> reboot
Rebooting . . .

After the reboot, each LUN is now seen as a single disk (the old per-path hdisks remain in the Defined state):

root [NIM]:/> lsdev -c disk
hdisk0 Available 21-T1-01 EMC CLARiiON FCP MPIO VRAID Disk
hdisk1 Available 21-T1-01 EMC CLARiiON FCP MPIO VRAID Disk
hdisk2  Defined   50-T1-01 Other FC SCSI Disk Drive
hdisk3  Defined   50-T1-01 Other FC SCSI Disk Drive
hdisk4  Defined   51-T1-01 Other FC SCSI Disk Drive
hdisk5  Defined   51-T1-01 Other FC SCSI Disk Drive
hdisk6  Defined   51-T1-01 Other FC SCSI Disk Drive
hdisk7  Defined   51-T1-01 Other FC SCSI Disk Drive
hdisk8  Defined   52-T1-01 Other FC SCSI Disk Drive
hdisk9  Defined   52-T1-01 Other FC SCSI Disk Drive
hdisk10 Defined   52-T1-01 Other FC SCSI Disk Drive
hdisk11 Defined   52-T1-01 Other FC SCSI Disk Drive
hdisk12 Defined   53-T1-01 Other FC SCSI Disk Drive
hdisk13 Defined   53-T1-01 Other FC SCSI Disk Drive
hdisk14 Defined   53-T1-01 Other FC SCSI Disk Drive
hdisk15 Defined   53-T1-01 Other FC SCSI Disk Drive

And several paths are now visible:

root [NIM]:/> lspath
Enabled hdisk0 fscsi0
Enabled hdisk0 fscsi0
Enabled hdisk1 fscsi0
Enabled hdisk1 fscsi0
Enabled hdisk0 fscsi1
Enabled hdisk1 fscsi1
Enabled hdisk0 fscsi1
Enabled hdisk1 fscsi1
Enabled hdisk0 fscsi2
Enabled hdisk1 fscsi2
Enabled hdisk0 fscsi2
Enabled hdisk1 fscsi2
Enabled hdisk0 fscsi3
Enabled hdisk1 fscsi3
Enabled hdisk0 fscsi3
Enabled hdisk1 fscsi3

For a single disk:

# lspath -l hdisk2 -H -F"name parent path_id connection status"
name   parent path_id connection                     status

hdisk2 fscsi0 0       500601693ea02fb3,2000000000000 Enabled
hdisk2 fscsi0 1       500601623ea02fb3,2000000000000 Enabled
hdisk2 fscsi1 2       500601613ea02fb3,2000000000000 Enabled
hdisk2 fscsi1 3       5006016a3ea02fb3,2000000000000 Enabled
hdisk2 fscsi2 4       500601693ea02fb3,2000000000000 Enabled
hdisk2 fscsi2 5       500601623ea02fb3,2000000000000 Enabled
hdisk2 fscsi3 6       500601613ea02fb3,2000000000000 Enabled
hdisk2 fscsi3 7       5006016a3ea02fb3,2000000000000 Enabled

The old disks must be removed:

root [SERV]:/> for disk in $(seq 2 15); do rmdev -d -l hdisk$disk; done
hdisk2 deleted
hdisk3 deleted
hdisk4 deleted
hdisk5 deleted
hdisk6 deleted
hdisk7 deleted
hdisk8 deleted
hdisk9 deleted
hdisk10 deleted
hdisk11 deleted
hdisk12 deleted
hdisk13 deleted
hdisk14 deleted
hdisk15 deleted

Il faut que l’on utilise la répartition des IO sur les chemins et donc avoir les attributs algorithm=round_robin et reserve_policy=no_reserve :

root [SERV]:/> chdev -l hdisk0 -a algorithm=round_robin -a reserve_policy=no_reserve -P
hdisk0 changed
root [SERV]:/> chdev -l hdisk1 -a algorithm=round_robin -a reserve_policy=no_reserve -P
hdisk1 changed

To check:

root [SERV]:/> lsattr -E -l hdisk0
PCM             PCM/friend/MCLAR_VDISK           Path Control Module              True
PR_key_value    none                             Persistant Reserve Key Value     True
algorithm       round_robin                      Algorithm                        True
clr_q           no                               Device CLEARS its Queue on error True
dist_err_pcnt   0                                Distributed Error Percentage     True
dist_tw_width   50                               Distributed Error Sample Time    True
hcheck_cmd      inquiry                          Health Check Command             True
hcheck_interval 60                               Health Check Interval            True
hcheck_mode     nonactive                        Health Check Mode                True
location                                         Location Label                   True
lun_id          0x0                              Logical Unit Number ID           False
lun_reset_spt   yes                              FC Forced Open LUN               True
max_coalesce    0x100000                         Maximum Coalesce Size            True
max_transfer    0x100000                         Maximum TRANSFER Size            True
node_name       0x50060160bea02fae               FC Node Name                     False
pvid            00f6fa06a7ad15260000000000000000 Physical volume identifier       False
q_err           yes                              Use QERR bit                     True
q_type          simple                           Queue TYPE                       True
queue_depth     32                               Queue DEPTH                      True
reassign_to     120                              REASSIGN time out value          True
reserve_policy  no_reserve                       Reserve Policy                   True
rw_timeout      30                               READ/WRITE time out value        True
scsi_id         0x10d00                          SCSI ID                          False
start_timeout   60                               START UNIT time out value        True
ww_name         0x500601693ea02fae               FC World Wide Name               False

The link error recovery policy must also be set to fc_err_recov=fast_fail:

root [SERV]:/> chdev -l fscsi0 -a fc_err_recov=fast_fail -P
fscsi0 changed
root [SERV]:/> chdev -l fscsi1 -a fc_err_recov=fast_fail -P
fscsi1 changed
root [SERV]:/> chdev -l fscsi2 -a fc_err_recov=fast_fail -P
fscsi2 changed
root [SERV]:/> chdev -l fscsi3 -a fc_err_recov=fast_fail -P
fscsi3 changed

To check:

root [SERV]:/> lsattr -E -l fscsi0
attach       switch    How this adapter is CONNECTED         False
dyntrk       yes       Dynamic Tracking of FC Devices        True
fc_err_recov fast_fail FC Fabric Event Error RECOVERY Policy True
scsi_id      0x10b47   Adapter SCSI ID                       False
sw_fc_class  3         FC Class for Fabric                   True

Multipathing and load balancing across multiple FC links

With an EMC VNX5400 and the EMC ODM driver, you can share the same disk between several servers and balance the load across all the FC links.

Example for one disk: disable device reservation and enable round_robin (chdev without the -P option changes both the running configuration and the ODM database, but the device must not be busy):

root [SERV]:/> lsattr -E -l hdisk20 | grep -E "(algorithm|reserve_policy)"
algorithm       fail_over                        Algorithm                        True
reserve_policy  single_path                      Reserve Policy                   True
root [SERV]:/> chdev -l hdisk20 -a algorithm=round_robin -a reserve_policy=no_reserve
root [SERV]:/> lsattr -E -l hdisk20 | grep -E "(algorithm|reserve_policy)"
algorithm       round_robin                      Algorithm                        True
reserve_policy  no_reserve                       Reserve Policy                   True

For an HP EVA P6300 on AIX 5.3, disable device reservation (ASM storage shared by multiple servers) and set the path algorithm to fail_over, since round_robin is not supported by the HP ODM driver (devices.fcp.disk.HP.hsv.mpio.rte 1.0.5.0, ODM definitions for EVA disk arrays). See AIX - IBM AIX Native MPIO with HP EVA.

for dev in $(lsdev -c disk | awk '/HP/ {print $1}'); do echo "--$dev--"; lsattr -E -l $dev | awk '/algorithm|reserve_policy/ {print $1"\t"$2}'; chdev -l $dev -a algorithm=fail_over -a reserve_policy=no_reserve; lsattr -E -l $dev | awk '/algorithm|reserve_policy/ {print $1"\t"$2}'; done
--hdisk2--
algorithm       round_robin
reserve_policy  no_reserve
hdisk2 changed
algorithm       fail_over
reserve_policy  no_reserve
--hdisk3--
...
algorithm       fail_over
reserve_policy  no_reserve

If you enable round_robin on an unsupported driver, you can get this kind of error:

WARNING: IO Failed.  au:0 diskname:/dev/rac_dg2_02
         rq:1104882a0 buffer:1105b0c00 au_offset(bytes):0 iosz:4096 operation:0
         status:2
WARNING: IO Failed.  au:37802 diskname:/dev/rac_dg2_02
         rq:1104e1fd0 buffer:110482000 au_offset(bytes):16384 iosz:32768 operation:0
         status:2

Here we can see that the I/O is indeed spread across the 7 disks (hdisk0 to hdisk6), and that all 4 FC links are loaded:

+-topas_nmon--h=Help-------------Host=SERV---Refresh=2 secs---13:22.48--+
¦ Disk-Adapter-I/O -------------------------------------------------------------¦
¦Name          %busy     read    write        xfers Disks Adapter-Type          ¦
¦fcs7          120.0     13.8  97811.6 KB/s   767.6 28    Virtual Fibre Channel ¦
¦fcs6          150.6      7.9  97874.8 KB/s   766.6 28    Virtual Fibre Channel ¦
¦fcs5          118.5     13.8  97685.2 KB/s   766.6 28    Virtual Fibre Channel ¦
¦fcs4          255.7 390906.8  97811.6 KB/s  2334.4 28    Virtual Fibre Channel ¦
¦TOTALS  4 adapters  390942.3 391183.2 KB/s  4635.3 112   TOTAL(MB/s)=763.8     ¦
¦ Disk-KBytes/second-(K=1024,M=1024*1024) --------------------------------------¦
¦Disk     Busy  Read  Write 0----------25-----------50------------75--------100 ¦
¦ Name          KB/s   KB/s |           |            |             |          | ¦
¦hdisk19   96% 391645      0|RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR>| ¦
¦hdisk1    39%      0  60974|WWWWWWWWWWWWWWWWWWWW                    >        | ¦
¦hdisk3    32%      0  54340|WWWWWWWWWWWWWWWWW                          >     | ¦
¦hdisk5    33%      0  49411|WWWWWWWWWWWWWWWWW                       >        | ¦
¦hdisk7     0%      0      0|                    >                            | ¦
¦hdisk9     0%      0      0|                                              >  | ¦
¦hdisk4    28%      0  46947|WWWWWWWWWWWWWWW                         >        | ¦
¦hdisk8     0%      0      0|                                              >  | ¦
¦hdisk6    33%      0  49854|WWWWWWWWWWWWWWWWW                          >     | ¦
¦hdisk0    39%     49  64702|RWWWWWWWWWWWWWWWWWWWW                            > ¦
¦hdisk2    42%      0  64702|WWWWWWWWWWWWWWWWWWWWW                      >     | ¦
¦Totals        391695 390930+-----------|------------|-------------|----------+ ¦
¦-------------------------------------------------------------------------------¦

When VIO1 is stopped, the I/O falls back onto only 2 FC links:

+-topas_nmon--p=Partitions-------Host=SERV---Refresh=2 secs---13:29.35--+
¦ Disk-Adapter-I/O -------------------------------------------------------------¦
¦Name          %busy     read    write        xfers Disks Adapter-Type          ¦
¦fcs7          108.9     22.0 184634.5 KB/s  1448.0 28    Virtual Fibre Channel ¦
¦fcs6          214.3 368731.4 184678.5 KB/s  2643.6 28    Virtual Fibre Channel ¦
¦fcs5            0.0      0.0      0.0 KB/s     0.0 28    Virtual Fibre Channel ¦
¦fcs4            0.0      0.0      0.0 KB/s     0.0 28    Virtual Fibre Channel ¦
¦TOTALS  4 adapters  368753.4 369313.0 KB/s  4091.5 112   TOTAL(MB/s)=720.8     ¦
¦ Disk-KBytes/second-(K=1024,M=1024*1024) --------------------------------------¦
¦Disk     Busy  Read  Write 0----------25-----------50------------75--------100 ¦
¦ Name          KB/s   KB/s |           |            |             |          | ¦
¦hdisk19   96% 369197      0|RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR > ¦
¦hdisk1    23%      0  65489|WWWWWWWWWWWW                            >        | ¦
¦hdisk3    21%      0  65489|WWWWWWWWWWW                                      > ¦
¦hdisk5     2%      0   5500|WW                                      >        | ¦
¦hdisk7     0%      0      0|                    >                            | ¦
¦hdisk9     0%      0      0|                                              >  | ¦
¦hdisk4    14%      0  45343|WWWWWWW                                 >        | ¦
¦hdisk8     0%      0      0|                                              >  | ¦
¦hdisk6    18%      0  56727|WWWWWWWWW                                  >     | ¦
¦hdisk0    23%     46  65489|RWWWWWWWWWWWW                                    > ¦
¦hdisk2    20%      0  65489|WWWWWWWWWWW                                >     | ¦
¦Totals        369249 369633+-----------|------------|-------------|----------+ ¦
¦-------------------------------------------------------------------------------¦

When VIO1 comes back, everything returns to normal:

+-topas_nmon--q=Quit-------------Host=SERV---Refresh=2 secs---13:31.57--+
¦ Disk-Adapter-I/O -------------------------------------------------------------¦
¦Name          %busy     read    write        xfers Disks Adapter-Type          ¦
¦fcs7           89.9     16.0  81505.9 KB/s  1063.1 28    Virtual Fibre Channel ¦
¦fcs6          184.9 317800.1  90434.6 KB/s  1869.5 28    Virtual Fibre Channel ¦
¦fcs5           75.9      0.0  71216.2 KB/s   860.3 28    Virtual Fibre Channel ¦
¦fcs4           87.4      0.0  74411.6 KB/s   859.8 28    Virtual Fibre Channel ¦
¦TOTALS  4 adapters  317816.1 317568.3 KB/s  4652.7 112   TOTAL(MB/s)=620.5     ¦
¦ Disk-KBytes/second-(K=1024,M=1024*1024) --------------------------------------¦
¦Disk     Busy  Read  Write 0----------25-----------50------------75--------100 ¦
¦ Name          KB/s   KB/s |           |            |             |          | ¦
¦hdisk19   82% 317522      0|RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR        > ¦
¦hdisk1    29%      0  63500|WWWWWWWWWWWWWWW                                  > ¦
¦hdisk3    21%      0  65483|WWWWWWWWWWW                                      > ¦
¦hdisk5    12%      0  32232|WWWWWWW                                          > ¦
¦hdisk7     0%      0      0|                                                 > ¦
¦hdisk9     0%      0      0|                                                 > ¦
¦hdisk4    23%      0  65483|WWWWWWWWWWWW                                     > ¦
¦hdisk8     0%      0      0|                                                 > ¦
¦hdisk6     0%      0      0|                                                 > ¦
¦hdisk0    13%     38  25515|RWWWWWWW                                         > ¦
¦hdisk2    26%      0  65483|WWWWWWWWWWWWW                                    > ¦
¦Totals        317560 317696+-----------|------------|-------------|----------+ ¦
¦-------------------------------------------------------------------------------¦

Create disk aliases for ASM

To be able to remap disks for ASM without touching the ASM configuration, you can create new device files with human-readable names and present those to ASM.

Display each disk's UID, size, and major/minor numbers:

for dev in $(lsdev -c disk | awk '/HP/ {print $1}'); do echo -ne "$dev\t"; lsattr -E -l $dev -a unique_id | awk '{print $2}' | tr '\n' ' '; getconf DISK_SIZE /dev/$dev| tr '\n' ' '; ls -laF /dev/$dev |awk '{print $5" "$6}'; done
hdisk2  342136001438009B0616D000050000144000006HSV34002HPfcp 71680 41, 0
hdisk3  342136001438009B0616D00005000007D000006HSV34002HPfcp 512000 41, 3
hdisk4  342136001438009B0616D000050000081000006HSV34002HPfcp 512000 41, 2
hdisk5  342136001438009B0616D000050000085000006HSV34002HPfcp 512000 41, 7
hdisk6  342136001438009B0616D000050000089000006HSV34002HPfcp 512000 41, 6
hdisk7  342136001438009B0616D00005000008D000006HSV34002HPfcp 512000 41, 1
hdisk8  342136001438009B0616D000050000091000006HSV34002HPfcp 512000 41, 9
hdisk9  342136001438009B0616D000050000095000006HSV34002HPfcp 512000 41, 4
hdisk10 342136001438009B0616D000050000099000006HSV34002HPfcp 512000 41, 8
hdisk11 342136001438009B0616D000050000041000006HSV34002HPfcp 1024 41, 10
hdisk12 342136001438009B0616D00005000003D000006HSV34002HPfcp 5120 41, 5

Check the correspondence between the UID and the LUN name on the SAN. For an EVA:

/opt/hphsv/bin/hsvpaths
Disk     LUNx LUNd VG           S Status   T  E  D  F  M  HBAs                             Node WWN         Ports
hdisk2      B   11 vg_oracle    A Good     8  8  0  0  0  fscsi0,fscsi1,fscsi2,fscsi3      5001438024482d10 8,c,9,d
hdisk3      3    3 None         - Good     8  8  0  0  0  fscsi0,fscsi1,fscsi3,fscsi2      5001438024482d10 8,c,9,d
hdisk4      4    4 None         - Good     8  8  0  0  0  fscsi0,fscsi1,fscsi3,fscsi2      5001438024482d10 8,c,9,d
hdisk5      5    5 None         - Good     8  8  0  0  0  fscsi0,fscsi1,fscsi3,fscsi2      5001438024482d10 8,c,9,d
hdisk6      6    6 None         - Good     8  8  0  0  0  fscsi0,fscsi1,fscsi3,fscsi2      5001438024482d10 8,c,9,d
hdisk7      7    7 None         - Good     8  8  0  0  0  fscsi0,fscsi1,fscsi3,fscsi2      5001438024482d10 8,c,9,d
hdisk8      8    8 None         - Good     8  8  0  0  0  fscsi0,fscsi1,fscsi3,fscsi2      5001438024482d10 8,c,9,d
hdisk9      9    9 None         - Good     8  8  0  0  0  fscsi0,fscsi1,fscsi3,fscsi2      5001438024482d10 8,c,9,d
hdisk10     A   10 None         - Good     8  8  0  0  0  fscsi0,fscsi1,fscsi3,fscsi2      5001438024482d10 8,c,9,d
hdisk11     2    2 None         - Good     8  8  0  0  0  fscsi0,fscsi1,fscsi3,fscsi2      5001438024482d10 8,c,9,d
hdisk12     1    1 None         - Good     8  8  0  0  0  fscsi0,fscsi1,fscsi3,fscsi2      5001438024482d10 8,c,9,d

Create new special files:

mknod /dev/rac_dg2_06 c 41 6
…
mknod /dev/rac_ocr_01 c 41 8
sudo chown oracle:dba /dev/rac_dg* /dev/rac_ocr_01 /dev/rac_vot_01
sudo chmod 660 /dev/rac_dg* /dev/rac_ocr_01 /dev/rac_vot_01
ls -laF /dev/rac_dg* /dev/rac_ocr_01 /dev/rac_vot_01
crw-rw----   1 oracle   dba          41, 22 May 21 11:00 /dev/rac_dg1_01
...
crw-rw----   1 oracle   dba          41, 15 May 21 10:58 /dev/rac_vot_01

Display the mapping:

printf "%7s <-> %10s <-> %9s <-> %s\n" disk alias "size MB" UID; for dev in $(lsdev -c disk | awk '/HP/ {print $1}'); do  major=$(ls -laF /dev/$dev |awk '{print $5}'|sed "s/\,//"); minor=$(ls -laF /dev/$dev |awk '{print $6}'); alias="$(ls -alF /dev/rac_* | grep -E "$major,\ +$minor\ "|awk '{print $10}' | sed -e "s/^.*\/dev\///")"; [ -z $alias ] && alias=NONE; size="$(getconf DISK_SIZE /dev/$dev)"; uid="$(lsattr -E -l $dev -a unique_id | sed -e "s/^.*342136001438009B0616D00005000//" -e "s/000006HSV34002HPfcp.*//")"; printf "%7s <-> %10s <-> %6.d MB <-> %s\n" $dev $alias $size $uid; done
   disk <->      alias <->   size MB <-> UID
 hdisk2 <->       NONE <->  71680 MB <-> 0144
 hdisk3 <-> rac_dg1_01 <-> 512000 MB <-> 007D
 hdisk4 <-> rac_dg1_02 <-> 512000 MB <-> 0081
 hdisk5 <-> rac_dg2_01 <-> 512000 MB <-> 0085
 hdisk6 <-> rac_dg2_02 <-> 512000 MB <-> 0089
 hdisk7 <-> rac_dg2_03 <-> 512000 MB <-> 008D
 hdisk8 <-> rac_dg2_04 <-> 512000 MB <-> 0091
 hdisk9 <-> rac_dg2_05 <-> 512000 MB <-> 0095
hdisk10 <-> rac_dg2_06 <-> 512000 MB <-> 0099
hdisk11 <-> rac_vot_01 <->   1024 MB <-> 0041
hdisk12 <-> rac_ocr_01 <->   5120 MB <-> 003D

Remove all Missing paths

To remove all Missing paths in one command:

lspath -H -F"name parent path_id connection status" | awk '/Missing/ {print "rmpath -l "$1" -p "$2" -w "$4" -d" | "bash"}'
path Deleted
...
path Deleted

Display the number of paths per device

printf "%7s - %5s - %6s\n" disk total enable; for disk in $(lsdev -c disk | awk '{print $1}'); do total=$(lspath -l $disk | wc -l | sed "s/\ *//"); ena=$(lspath -l $disk -s Enabled | wc -l | sed "s/\ *//"); printf "%7s -   %2.d  / %2.d\n" $disk $total $ena; done
   disk - total - enable
 hdisk0 -    1  /  1
 hdisk1 -    1  /  1
 hdisk2 -    8  /  8
...
hdisk11 -    8  /  8

Remove a path

# lspath -l hdisk21 -H -F"name parent path_id connection status"
name    parent path_id connection                     status

hdisk21 fscsi2 0       5001438024482d18,6000000000000 Enabled
hdisk21 fscsi0 2       5001438024482d18,4000000000000 Enabled
hdisk21 fscsi0 3       5001438024482d1c,4000000000000 Enabled
hdisk21 fscsi2 1       5001438024482d1c,6000000000000 Enabled
hdisk21 fscsi3 4       5001438024482d19,6000000000000 Enabled
hdisk21 fscsi3 5       5001438024482d1d,6000000000000 Enabled
hdisk21 fscsi1 6       5001438024482d19,4000000000000 Enabled
hdisk21 fscsi1 7       5001438024482d1d,4000000000000 Enabled
hdisk21 fscsi0 8       5001438024482d18,6000000000000 Defined
# rmpath

Usage:
rmpath [-l Name] [-p ParentName] [-w ConnectionLocation] [-d]
rmpath -h

# rmpath -l hdisk21 -p fscsi0 -w 5001438024482d18,6000000000000 -d
path Deleted
# lspath -l hdisk21 -H -F"name parent path_id connection status"
name    parent path_id connection                     status

hdisk21 fscsi2 0       5001438024482d18,6000000000000 Enabled
hdisk21 fscsi0 2       5001438024482d18,4000000000000 Enabled
hdisk21 fscsi0 3       5001438024482d1c,4000000000000 Enabled
hdisk21 fscsi2 1       5001438024482d1c,6000000000000 Enabled
hdisk21 fscsi3 4       5001438024482d19,6000000000000 Enabled
hdisk21 fscsi3 5       5001438024482d1d,6000000000000 Enabled
hdisk21 fscsi1 6       5001438024482d19,4000000000000 Enabled
hdisk21 fscsi1 7       5001438024482d1d,4000000000000 Enabled

Display the usage, LV name, VG name and mount point

In one command: used space, size, %used, LV, VG, mount point:

IFS=$(echo -en "\n\b")
echo -e "used \t size \t %used \t LV \t\t VG \t\t mount"; for fs in $(df -k|sort|grep lv); do lv=$(echo $fs | awk '{print $1}'|awk -F/ '{print $3}'); vg=$(lslv $lv|grep "GROUP"|awk '{print $6}');s=$(echo $fs | awk '{print $2}'); f=$(echo $fs | awk '{print $3}'); dir=$(echo $fs | awk '{print $7}'); u=$(echo "($s - $f)/1024"|bc); echo -e  "$u \t$(echo "$s/1024"|bc)\t$(echo "($u * 100)/($s/1024)"|bc) \t $lv \t $vg \t $dir"; done
used     size    %used   LV              VG              mount
17642   20480   86       fslv01          rootvg          /install
24017   30720   78       fslv02          rootvg          /opt/oracle
787     2048    38       fslv03          rootvg          /share/bvistrn1rac01-app.nga
82      4480    1        fslv04          rootvg          /share/bvistrn1rac01-app.tools
0       128     0        fslv05          rootvg          /ngadb
10804   45888   23       lv_ora_RMAN     vg_oracle       /opt/oracle/RMAN
6919    25600   27       lv_ora_backup   vg_oracle       /opt/oracle/backup

Increase the size of an LV

#####################
#### Increase LV ####
#####################
root [SERV]:/> df -g /dev/lv_BDD_d04
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/lv_BDD_d04    274.00     20.21   93%       10     1% /oracle/oradata4/BDD
root [SERV]:/> lslv lv_BDD_d04

LOGICAL VOLUME:     lv_BDD_d04         VOLUME GROUP:   BDDvg2
LV IDENTIFIER:      00f6fa0600004c0000000135b0582f02.8 PERMISSION:     read/write
VG STATE:           active/complete        LV STATE:       opened/syncd
TYPE:               jfs2                   WRITE VERIFY:   off
MAX LPs:            2192                   PP SIZE:        128 megabyte(s)
COPIES:             1                      SCHED POLICY:   parallel
LPs:                2192                   PPs:            2192
STALE PPs:          0                      BB POLICY:      relocatable
INTER-POLICY:       maximum                RELOCATABLE:    yes
INTRA-POLICY:       middle                 UPPER BOUND:    1024
MOUNT POINT:        /oracle/oradata4/BDD LABEL:          /oracle/oradata4/BDD
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?:     NO
DEVICESUBTYPE : DS_LVZ
root [SERV]:/> lslv -l lv_BDD_d04
lv_BDD_d04:/oracle/oradata4/BDD
PV                COPIES        IN BAND       DISTRIBUTION
hdisk2            220:000:000   0%            125:000:000:095:000
hdisk22           220:000:000   0%            125:000:000:095:000
hdisk23           219:000:000   0%            126:000:000:093:000
hdisk24           219:000:000   0%            126:000:000:093:000
hdisk25           219:000:000   0%            126:000:000:093:000
hdisk26           219:000:000   0%            126:000:000:093:000
hdisk27           219:000:000   0%            126:000:000:093:000
hdisk28           219:000:000   0%            126:000:000:093:000
hdisk29           219:000:000   0%            127:000:000:092:000
hdisk30           219:000:000   0%            127:000:000:092:000
#Get new size to use 85%
root [SERV]:/> echo "(274-20)*100/85"|bc
298
root [SERV]:/> echo "(298*1024)/128"|bc
2384
# ->> 298 GB or 2384 PP
#Check available PP
root [SERV]:/> for d in 2 22 23 24 25 26 27 28 29 30; do lspv hdisk$d|grep "FREE PPs";done
FREE PPs:           81 (10368 megabytes)     HOT SPARE:        no
FREE PPs:           81 (10368 megabytes)     HOT SPARE:        no
FREE PPs:           83 (10624 megabytes)     HOT SPARE:        no
FREE PPs:           83 (10624 megabytes)     HOT SPARE:        no
FREE PPs:           84 (10752 megabytes)     HOT SPARE:        no
FREE PPs:           84 (10752 megabytes)     HOT SPARE:        no
FREE PPs:           84 (10752 megabytes)     HOT SPARE:        no
FREE PPs:           84 (10752 megabytes)     HOT SPARE:        no
FREE PPs:           86 (11008 megabytes)     HOT SPARE:        no
FREE PPs:           86 (11008 megabytes)     HOT SPARE:        no
# Check OK
#Increase number of Max PP
root [SERV]:/> smit lv
  Set Characteristic of a Logical Volume
    Change a Logical Volume
->LOGICAL VOLUME name = lv_BDD_d04
->MAXIMUM NUMBER of LOGICAL PARTITIONS = 2384
# Check MAX PP :
root [SERV]:/> lslv lv_BDD_d04
LOGICAL VOLUME:     lv_BDD_d04         VOLUME GROUP:   BDDvg2
LV IDENTIFIER:      00f6fa0600004c0000000135b0582f02.8 PERMISSION:     read/write
VG STATE:           active/complete        LV STATE:       opened/syncd
TYPE:               jfs2                   WRITE VERIFY:   off
MAX LPs:            2384                   PP SIZE:        128 megabyte(s)
COPIES:             1                      SCHED POLICY:   parallel
LPs:                2192                   PPs:            2192
STALE PPs:          0                      BB POLICY:      relocatable
INTER-POLICY:       maximum                RELOCATABLE:    yes
INTRA-POLICY:       middle                 UPPER BOUND:    1024
MOUNT POINT:        /oracle/oradata4/BDD LABEL:          /oracle/oradata4/BDD
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?:     NO
DEVICESUBTYPE : DS_LVZ
# Increase the FS :
root [SERV]:/> smit fs
Change / Show Characteristics of an Enhanced Journaled File System
->Unit Size = Gigabytes
->Number of units = [298]
# Check :
root [SERV]:/> df -g /dev/lv_BDD_d04
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/lv_BDD_d04    298.00     44.21   86%       10     1% /oracle/oradata4/BDD
root [SERV]:/> lslv lv_BDD_d04
LOGICAL VOLUME:     lv_BDD_d04         VOLUME GROUP:   BDDvg2
LV IDENTIFIER:      00f6fa0600004c0000000135b0582f02.8 PERMISSION:     read/write
VG STATE:           active/complete        LV STATE:       opened/syncd
TYPE:               jfs2                   WRITE VERIFY:   off
MAX LPs:            2384                   PP SIZE:        128 megabyte(s)
COPIES:             1                      SCHED POLICY:   parallel
LPs:                2384                   PPs:            2384
STALE PPs:          0                      BB POLICY:      relocatable
INTER-POLICY:       maximum                RELOCATABLE:    yes
INTRA-POLICY:       middle                 UPPER BOUND:    1024
MOUNT POINT:        /oracle/oradata4/BDD LABEL:          /oracle/oradata4/BDD
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?:     NO
DEVICESUBTYPE : DS_LVZ

root [SERV]:/> lslv -l lv_BDD_d04
lv_BDD_d04:/oracle/oradata4/BDD
PV                COPIES        IN BAND       DISTRIBUTION
hdisk2            240:000:000   0%            125:000:000:095:020
hdisk22           240:000:000   0%            125:000:000:095:020
hdisk23           238:000:000   0%            126:000:000:093:019
hdisk24           238:000:000   0%            126:000:000:093:019
hdisk25           238:000:000   0%            126:000:000:093:019
hdisk26           238:000:000   0%            126:000:000:093:019
hdisk27           238:000:000   0%            126:000:000:093:019
hdisk28           238:000:000   0%            126:000:000:093:019
hdisk29           238:000:000   0%            127:000:000:092:019
hdisk30           238:000:000   0%            127:000:000:092:019

Restore a mksysb onto a partition with NIM

NIM server configuration

Retrieve the mksysb from the DVDs:

loopmount -i cd_image_1896530.vol1 -o "-V cdrfs -o ro" -m /mnt
cat /mnt/usr/sys/inst.images/mksysb_image /mnt/usr/sys/inst.images/mksysb_image2 > mksysb_servbackup_20111108.vol1
umount /mnt
loopmount -i cd_image_1896530.vol2 -o "-V cdrfs -o ro" -m /mnt
cat /mnt/usr/sys/inst.images/mksysb_image > mksysb_servbackup_20111108.vol2
umount /mnt
cat mksysb_servbackup_20111108.vol1 mksysb_servbackup_20111108.vol2 > ../mksysb_servbackup_20111108
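
The reassembled image can be sanity-checked before being declared to NIM (a hedged step; lsmksysb is part of bos.sysmgt.sysbr):

lsmksysb -lf ../mksysb_servbackup_20111108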

Declare the mksysb in NIM:

smit nim
	Perform NIM Administration Tasks > Manage Resources > Define a Resource > mksysb = a mksysb image
	Resource Name=mksysb_servbackup_20111108
	Server of Resource=master
	Location of Resource=/export/mksysb/mksysb_servbackup_20111108

Configure the NIM bosinst operation:

smit nim_bosinst
	Select a TARGET for the operation = SERVER     machines       standalone
	Select the installation TYPE = mksysb - Install from a mksysb
	Select the MKSYSB to use for the installation = mksysb_servbackup_20111108     resources       mksysb
	Select the SPOT to use for the installation = spot_AIX53tl11sp07     resources       spot
	LPP_SOURCE = lpp_source_AIX53tl11sp07
	ACCEPT new license agreements? = yes
	Initiate reboot and installation now? = no

The NFS exports can be checked:

exportfs
	/export/spot/spot_AIX53tl11sp07/usr         -ro,root=SERVER,access=SERVER
	/export/lpp_source/lpp_source_AIX53tl11sp07 -ro,root=SERVER,access=SERVER
	/export/mksysb/mksysb_servbackup_20111108   -ro,root=SERVER,access=SERVER
	/export/nim/scripts/SERVER.script     -ro,root=SERVER,access=SERVER

as well as the bootp configuration:

cat /etc/bootptab | grep -Ev "^#"
SERVER:bf=/tftpboot/SERVER:ip=192.168.1.8:ht=ethernet:sa=192.168.1.9:sm=255.255.255.192:

Partition configuration and restoration

SAN zoning:

Main Menu
 5.   Select Boot Options
Multiboot
 4.   SAN Zoning Support
>Enable each FC adapter and do the zoning
Select Media Adapter
 1.  U8205.E6B.06FA06P-V8-C70-T1    /vdevice/vfc-client@30000046
 2.  U8205.E6B.06FA06P-V8-C71-T1    /vdevice/vfc-client@30000047
 3.  U8205.E6B.06FA06P-V8-C72-T1    /vdevice/vfc-client@30000048
 4.  U8205.E6B.06FA06P-V8-C73-T1    /vdevice/vfc-client@30000049
   .-----------------------------------------------------------------------.
   |  The selected adapter has been opened.                                |
   |  Zoning of attached disks may now be possible.                        |
   |  Press any key to close the adapter and return to the previous menu.  |
   `-----------------------------------------------------------------------'

Configure bootp:

Main Menu
 2.   Setup Remote IPL (Initial Program Load)
NIC Adapters
 2.  Interpartition Logical LAN      U8205.E6B.06FA06P-V8-C11-T1  fa051b9ab60b
Select Internet Protocol Version.
 1.   IPv4 - Address Format 123.231.111.222
Select Network Service.
 1.   BOOTP
Network Parameters
 1.   IP Parameters
	 1.   Client IP Address                    [192.168.1.8]
	 2.   Server IP Address                    [192.168.1.9]
	 3.   Gateway IP Address                   [192.168.1.5]
	 4.   Subnet Mask                          [255.255.255.192]
Network Parameters
 3.   Ping Test
192.168.1.8:    24  bytes from 192.168.1.9:  icmp_seq=10  ttl=? time=11  ms
                              .-----------------.
                              |  Ping  Success. |
                              `-----------------'

Boot from the NIM server via bootp:

Main Menu
 5.   Select Boot Options
Multiboot
 1.   Select Install/Boot Device
Select Device Type
 6.   Network
Select Network Service.
 1.   BOOTP
Select Device
 2.        -      Interpartition Logical LAN
	( loc=U8205.E6B.06FA06P-V8-C11-T1 )
Select Task
 2.   Normal Mode Boot
Are you sure you want to exit System Management Services?
 1.   Yes
IBM IBM IBM IBM IBM IBM                             IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM     STARTING SOFTWARE       IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM        PLEASE WAIT...       IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM                             IBM IBM IBM IBM IBM IBM
TFTP BOOT ---------------------------------------------------
Server IP.....................192.168.1.9
Client IP.....................192.168.1.8
Gateway IP....................192.168.1.5
Subnet Mask...................255.255.255.192
( 1  ) Filename................./tftpboot/SERVER
TFTP Retries..................5
Block Size....................512
FINAL PACKET COUNT = 30614
FINAL FILE SIZE = 15673856  BYTES

Elapsed time since release of system processors: 60593 mins 15 secs

-------------------------------------------------------------------------------
                                Welcome to AIX.
                       boot image timestamp: 08:51 11/15
                 The current time and date: 16:26:36 12/14/2011
               number of processors: 1    size of memory: 4096MB
boot device: /vdevice/l-lan@3000000b:speed=auto,duplex=auto,192.168.1.9,,192.168.1.8,192.168.1.5,5,5,255.255.255.192,512
                     kernel size: 15417796; 64 bit kernel
-------------------------------------------------------------------------------
Type a 1 and press Enter to use this terminal as the system console.

>>>  1 Type 1 and press Enter to have English during install.
An invalid disk (04-08-00-8,0) was specified in the location field of the data file.
>>> 1 Continue with Install
    2 Change/Show Installation Settings and Install
    1 Disk(s) where you want to install ...... hdisk0
>>>  1  hdisk0   70-T1-01        112640   other vg        Yes    No
>>>  2  hdisk1   70-T1-01        112640   none            Yes    No
>>>  0   Continue with choices indicated above
>>> 0 Install with the settings listed above.
                       Installing Base Operating System

        Please wait...

        Approximate     Elapsed time
     % tasks complete   (in minutes)

          0               0
          7               0      0% of mksysb data restored.
         14               0      9% of mksysb data restored.
         40               3      45% of mksysb data restored.
         52               4      60% of mksysb data restored.
         83               7      Over mounting /.

installp: APPLYING software for:
        bos.rte.install 5.3.11.7

Finished processing all filesets.
Filesets processed:   80 of 277
System Installation Time: 7 minutes       Tasks Complete: 87%
Filesets processed:  217 of 277
System Installation Time: 12 minutes       Tasks Complete: 86%
Filesets processed:  57 of 58
System Installation Time: 14 minutes       Tasks Complete: 87%

         89               14      Copying Cu* to disk
         90               16      Creating boot image.
		 
forced unmount of /var
Rebooting . . .

Reset the root account

You need to set up a bootp server with an AIX ISO and boot the partition from it.

Then start the server in maintenance mode to edit the password file:

boot on network
    3 Start Maintenance Mode for System Recovery
	>>> 1 Access a Root Volume Group
   0 Continue
Type the number for a volume group to display the logical volume information
and press Enter.

   1)   Volume Group 00f6fa0600004c00000001344128101e contains these disks:
   1) Access this Volume Group and start a shell
# export TERM=vt100
# set -o vi
# vi /etc/security/passwd
root:
        password = RTs8kI6LqaC6c
        lastupdate = 1289323087
        flags =
(the hash RTs8kI6LqaC6c corresponds to the password "root")
# vi /etc/resolv.conf
domain  dom.com
nameserver      192.168.1.2
# vi /etc/hosts

>>REBOOT<<