In a previous article I explained how a defective hard drive can be replaced in an HP ProLiant server with an HP RAID controller running Solaris.
This time I had to add new disks to an existing ZFS pool (zpool). There are plenty of howtos on the Internet (a good one is http://docs.oracle.com/cd/E19253-01/819-5461/gazgw/index.html), but almost none of them mention the hpacucli command, which must be used when an HP RAID controller presents the drives to the operating system.
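Before touching the controller, it is worth noting the current state of the pool so the end result can be compared against it. A quick check (the pool in this example is called zonepool, as further below):
zpool status zonepool
zpool list zonepool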
Once the new disks are physically installed, their presence can be verified in hpacucli:
/opt/HPQacucli/sbin/hpacucli
HP Array Configuration Utility CLI 8.0-14.0
Detecting Controllers...Done.
Type "help" for a list of supported commands.
Type "exit" to close the console.
=> ctrl slot=1 show config
Smart Array P400 in Slot 1 (sn: PA2240J9STL1WU)

   array A (SAS, Unused Space: 0 MB)
      logicaldrive 1 (68.3 GB, RAID 0, OK)
      physicaldrive 2I:1:1 (port 2I:box 1:bay 1, SAS, 72 GB, OK)

   array B (SAS, Unused Space: 0 MB)
      logicaldrive 2 (68.3 GB, RAID 0, OK)
      physicaldrive 2I:1:2 (port 2I:box 1:bay 2, SAS, 72 GB, OK)

   array C (SAS, Unused Space: 0 MB)
      logicaldrive 3 (68.3 GB, RAID 0, OK)
      physicaldrive 2I:1:3 (port 2I:box 1:bay 3, SAS, 72 GB, OK)

   array D (SAS, Unused Space: 0 MB)
      logicaldrive 4 (68.3 GB, RAID 0, OK)
      physicaldrive 2I:1:4 (port 2I:box 1:bay 4, SAS, 72 GB, OK)

   unassigned
      physicaldrive 1I:1:5 (port 1I:box 1:bay 5, SAS, 146 GB, OK)
      physicaldrive 1I:1:6 (port 1I:box 1:bay 6, SAS, 146 GB, OK)
The new drives appear as "unassigned" at the bottom.
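If the full configuration listing gets long, the unassigned disks can also be found by showing only the physical drives, still within the same hpacucli session:
=> ctrl slot=1 physicaldrive all show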
Now these drives must be configured like the other drives, each as a single logical drive with RAID 0 (sounds strange, indeed). The reason is that the Smart Array P400 has no JBOD/HBA mode and cannot present raw disks to the operating system, so each physical disk is wrapped in its own single-disk RAID 0 logical drive and redundancy is left to ZFS.
So, still in hpacucli, launch the following commands with the IDs of your new physical drives:
=> ctrl slot=1 create type=ld drives=1I:1:5 raid=0
=> ctrl slot=1 create type=ld drives=1I:1:6 raid=0
Now verify the config again and then exit hpacucli:
=> ctrl slot=1 show config
Smart Array P400 in Slot 1 (sn: PA2240J9STL1WU)

   array A (SAS, Unused Space: 0 MB)
      logicaldrive 1 (68.3 GB, RAID 0, OK)
      physicaldrive 2I:1:1 (port 2I:box 1:bay 1, SAS, 72 GB, OK)

   array B (SAS, Unused Space: 0 MB)
      logicaldrive 2 (68.3 GB, RAID 0, OK)
      physicaldrive 2I:1:2 (port 2I:box 1:bay 2, SAS, 72 GB, OK)

   array C (SAS, Unused Space: 0 MB)
      logicaldrive 3 (68.3 GB, RAID 0, OK)
      physicaldrive 2I:1:3 (port 2I:box 1:bay 3, SAS, 72 GB, OK)

   array D (SAS, Unused Space: 0 MB)
      logicaldrive 4 (68.3 GB, RAID 0, OK)
      physicaldrive 2I:1:4 (port 2I:box 1:bay 4, SAS, 72 GB, OK)

   array E (SAS, Unused Space: 0 MB)
      logicaldrive 5 (136.7 GB, RAID 0, OK)
      physicaldrive 1I:1:5 (port 1I:box 1:bay 5, SAS, 146 GB, OK)

   array F (SAS, Unused Space: 0 MB)
      logicaldrive 6 (136.7 GB, RAID 0, OK)
      physicaldrive 1I:1:6 (port 1I:box 1:bay 6, SAS, 146 GB, OK)
=> exit
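After exiting hpacucli, Solaris needs to pick up the new logical drives. If they do not show up right away, the device tree can be rebuilt as root (harmless to run even if the devices are already present):
devfsadm -v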
The disks should now be visible in Solaris. Their Solaris device names can be seen with the format command:
format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c0t0d0
          /pci@0,0/pci8086,25e3@3/pci103c,3234@0/sd@0,0
       1. c0t1d0
          /pci@0,0/pci8086,25e3@3/pci103c,3234@0/sd@1,0
       2. c0t2d0
          /pci@0,0/pci8086,25e3@3/pci103c,3234@0/sd@2,0
       3. c0t3d0
          /pci@0,0/pci8086,25e3@3/pci103c,3234@0/sd@3,0
       4. c0t4d0
          /pci@0,0/pci8086,25e3@3/pci103c,3234@0/sd@4,0
       5. c0t5d0
          /pci@0,0/pci8086,25e3@3/pci103c,3234@0/sd@5,0
So the new devices are called c0t4d0 and c0t5d0. The disks can now be added to the existing pool.
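To double-check which Solaris device corresponds to which new logical drive, the reported disk sizes can be compared: iostat -En prints vendor, product and size for every disk, so the two 146 GB drives are easy to spot:
iostat -En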
The zpool add command supports the -n option, which performs a dry run: it only displays the configuration that would result, without modifying the pool. This is worth doing, because a mistake such as omitting the mirror keyword would add the disks as independent, non-redundant top-level vdevs, which cannot easily be removed from the pool again:
zpool add -n zonepool mirror c0t4d0 c0t5d0
would update 'zonepool' to the following configuration:
  zonepool
    mirror
      c0t2d0s0
      c0t3d0s0
    mirror
      c0t4d0
      c0t5d0
If no errors appear, the drives can then be added to the zpool for real:
zpool add zonepool mirror c0t4d0 c0t5d0
zpool status then shows the newly attached mirror:
zpool status
  pool: zonepool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        zonepool      ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t2d0s0  ONLINE       0     0     0
            c0t3d0s0  ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t4d0    ONLINE       0     0     0
            c0t5d0    ONLINE       0     0     0

errors: No known data errors
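To confirm the added capacity, zpool list shows the new total size and allocation of the pool:
zpool list zonepool
ZFS stripes new writes across both mirrors automatically; existing data is not rebalanced.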