Among its many features, the Infoblox-4010 uses a RAID (Redundant Array of Independent Disks) 10 array to provide an optimal mix of high database performance and redundant data storage, with recovery features in the event of disk failures. The disk array is completely self-managed; no maintenance or special procedures are required to service the disk subsystem.
...
Caution: Never remove more than one disk at a time from the array. Removing two or more disks at once can cause an array failure and result in an unrecoverable condition. You should replace only one disk at a time, using a replacement disk from Infoblox. For information, see Replacing a Failed Disk Drive.
...
RAID 10 (sometimes called RAID 1+0) uses a minimum of four disk drives to create a RAID 0 array from two RAID 1 arrays, as shown in Figure 8.15. It uses mirroring and striping to form a stripe of mirrored subsets. This means that the array combines, or stripes, multiple disk drives into a single logical volume (RAID 0), while each disk in the logical volume is also mirrored (RAID 1) and therefore fully redundant. RAID 10 thus combines the high performance of RAID 0 with the high fault tolerance of RAID 1: striping improves database write performance over a single disk drive for large databases, and mirroring protects the data if a drive fails.
Figure 8.15 RAID 10 Array: two RAID 1 mirrored pairs (Disk 1 Primary/Backup and Disk 2 Primary/Backup) striped into a single RAID 0 logical volume.
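The mapping from logical blocks to mirrored pairs can be sketched in a few lines of code. The following Python fragment is a simplified illustration only, not the Infoblox-4010 controller logic; the map_block() helper and the four-disk layout it assumes are introduced here just for the example.

```python
# Simplified model of RAID 10 block placement on a four-disk array.
# Illustration only; not the Infoblox-4010 controller logic.

NUM_PAIRS = 2  # two RAID 1 mirrored pairs, striped together (RAID 0)

def map_block(logical_block: int) -> dict:
    """Map a logical block to its mirrored pair and the offset within it."""
    pair = logical_block % NUM_PAIRS      # RAID 0: stripe across the pairs
    offset = logical_block // NUM_PAIRS   # position within each pair member
    primary = pair * 2                    # e.g. Disk 1 Primary
    backup = pair * 2 + 1                 # e.g. Disk 1 Backup (mirror copy)
    return {"pair": pair, "offset": offset, "disks": (primary, backup)}

# Every logical block lands on two physical disks, which is why any
# single disk in the array can fail without data loss.
for block in range(4):
    print(block, map_block(block))
```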
When evaluating a fault on the Infoblox-4010, it is best to think of the disk subsystem as a single, integrated unit with four components, rather than four independent disk drives. For information, see Evaluating the Status of the Disk Subsystem.
You can monitor the disk subsystem through the Infoblox Grid Manager GUI, the scrolling front panel LCD display, and four front panel LEDs next to the disk drives. In addition, you can monitor the disk status by using the CLI command show hardware_status. The following example displays the status of an Infoblox-4010 using the command:
Infoblox > show hardware_status
POWER:Power OK
Fan1:7258 RPM
Fan2:6887 RPM
Fan3:7258 RPM
CPU1_TEMP: +20.0 C
CPU2_TEMP: +24.0 C
SYS_TEMP: +35 C
RAID_ARRAY: OPTIMAL
RAID_BATTERY: OK READY Yes 103 HOURS
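If you capture this output programmatically (for example, over an SSH session to the appliance CLI), a small parser can flag any state other than OPTIMAL. The sketch below is a minimal example under that assumption: it parses captured text rather than a live session, and the check_raid() helper is a name chosen here for illustration, not an Infoblox API.

```python
# Minimal sketch: parse captured `show hardware_status` output and flag
# any RAID state other than OPTIMAL. Assumes the "KEY:value" line format
# shown in the example above; not an official Infoblox interface.

def check_raid(output: str) -> bool:
    """Return True if the RAID array reports OPTIMAL, else False."""
    status = {}
    for line in output.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return status.get("RAID_ARRAY") == "OPTIMAL"

captured = """\
POWER:Power OK
RAID_ARRAY: OPTIMAL
RAID_BATTERY: OK READY Yes 103 HOURS
"""
if not check_raid(captured):
    print("ALERT: RAID array is not OPTIMAL; check the Detailed Status panel.")
```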
The Detailed Status panel provides a detailed status report on the appliance and service operations. To see a detailed status report:
...
After displaying the Detailed Status panel, you can view the status of the selected Grid member. For more information on the Detailed Status panel, see Viewing Status.
The RAID icons indicate the status of the RAID array on the Infoblox-4010.
Color | Meaning
---|---
Green | The RAID array is in an optimal state.
Yellow | A new disk was inserted and the RAID array is rebuilding.
Red | The RAID array is degraded. At least one disk is not functioning properly. The GUI lists the disks that are online. Replace only the disks that are offline.
The appliance also displays the type of each disk. In the event of a disk failure, you must replace the failed disk with one that is qualified and shipped from Infoblox and has the same disk type as the rest of the disks in the array.
The Infoblox-4010 uses only the IB-Type 3 disk type. All disk drives in the array must have the same disk type for the array to function properly: an array can consist of IB-Type 1, IB-Type 2, or IB-Type 3 disks, but you cannot mix types within the same array. If the array contains a mismatched disk, promptly replace it with a replacement disk from Infoblox to avoid operational issues.
The disk drives of the Infoblox-4010 are located on the appliance front panel. To the right of each drive, two LEDs display connection and activity status. Table 8.10 lists the disk drive LED combinations and the states they represent.
Table 8.10 Disk Drive LED Combinations

Online/Activity LED (Green) | Fault/UID LED (Amber/Blue) | Description
---|---|---
On, off, or blinking | Alternating amber and blue | The drive has failed, or it has received a predictive failure alert; it has also been selected by a management application.
On, off, or blinking | Steadily blue | The drive is operating normally.
On | Amber, blinking regularly (1 Hz) | The drive has received a predictive failure alert. Replace the drive as soon as possible.
On | Off | The drive is online but it is not currently active.
Blinking regularly (1 Hz) | Off | Do not remove the drive. The drive is rebuilding. Removing the drive may terminate the current operation and cause data loss.
Blinking irregularly | Amber, blinking regularly (1 Hz) | The drive is active, but it has received a predictive failure alert. Replace the drive as soon as possible.
Blinking irregularly | Off | The drive is active and operating normally.
Off | Steadily amber | A critical fault condition has been identified for this drive, and the controller has placed it offline. Replace the drive as soon as possible.
Off | Amber, blinking regularly (1 Hz) | The drive has received a predictive failure alert. Replace the drive as soon as possible.
Off | Off | The drive is offline, a spare, or not configured as part of an array.
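When scripting a field checklist around these indicators, the table can be expressed as a lookup keyed on the two LED states. The Python dictionary below is a sketch that transcribes Table 8.10; the state labels are informal strings chosen for this example, not Infoblox terminology.

```python
# Sketch: Table 8.10 as a lookup table keyed on (green LED, amber/blue LED).
# The state labels are informal strings for this example only.

LED_STATES = {
    ("on/off/blinking", "alternating amber and blue"):
        "Failed or predictive failure; selected by a management application.",
    ("on/off/blinking", "steadily blue"):
        "Operating normally.",
    ("on", "amber blinking 1 Hz"):
        "Predictive failure alert. Replace the drive as soon as possible.",
    ("on", "off"):
        "Online but not currently active.",
    ("blinking 1 Hz", "off"):
        "Rebuilding. Do not remove the drive.",
    ("blinking irregularly", "amber blinking 1 Hz"):
        "Active, with a predictive failure alert. Replace the drive soon.",
    ("blinking irregularly", "off"):
        "Active and operating normally.",
    ("off", "steadily amber"):
        "Critical fault; the controller placed the drive offline. Replace soon.",
    ("off", "amber blinking 1 Hz"):
        "Predictive failure alert. Replace the drive as soon as possible.",
    ("off", "off"):
        "Offline, a spare, or not configured as part of an array.",
}

print(LED_STATES[("blinking 1 Hz", "off")])
```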
...
- Identify and verify the failed drive via the Grid Manager, front panel LCD, or CLI.
- Make sure you have identified the correct drive.
Note: Do not remove a correctly functioning drive.
- Push in the latch for the drive and pull the release lever out towards you.
...
- When the drive disengages, wait about 30 seconds for the disk to completely stop spinning.
- Slide it out of the slot.
...
- Remove only one disk at a time. Removing two or more disks from the appliance at the same time might result in an appliance failure and require an RMA of the appliance. This rule applies to both powered-on and powered-down appliances.
- If the status of the array is degraded, remove the failed or failing disk drive only. Do not remove an optimally functioning drive.
- If your acceptance procedure requires a test of the RAID hot swap feature, remove only one disk drive at a time. You can remove a second disk only after you replace the first disk and the array completes its rebuilding process.
- Do not remove a disk drive if the array is rebuilding. This could result in an appliance failure. Verify the status of the array before removing a disk drive; a sketch of such a check appears after this list.
- Use the following procedure to remove a spinning disk:
- Unlatch and pull the disk out about 2.5 cm (one inch) to disengage contact.
- Wait about 30 seconds for the disk to completely stop spinning.
- Remove the disk and handle it with care. Do not drop the disk or ship it loosely in a carton.
- You can hot swap a drive while the appliance remains in production.
- There are some conditions that may require powering down the appliance to replace a failed unit. This normally happens if the RAID controller detects an error that could damage the array. If you insert a replacement drive into a live array and the controller does not recognize the drive, power down the appliance.
- If you inadvertently remove the wrong disk drive, do not immediately remove the disk drive that you originally intended to remove. Verify the status of the array and replace the disk drive that you removed earlier before removing another drive. Removing a second drive could render the appliance inoperable.
- Older appliances have an audio alarm buzzer that sounds if a drive fails. The alarm automatically stops about 20 seconds after a functional disk has been inserted into the array.
- All disks in the RAID array should have the same disk type for the array to function properly.
- In the unlikely event that two disk drives fail simultaneously and the appliance is still operational, remove and replace the failed disk drives one at a time.
- Rebuild time depends on a number of factors, such as the system load and Grid replication activities. On very busy appliances (over 90% utilization), the disk rebuild process can take as long as 40 hours. On a Grid Master serving a very large Grid, expect the rebuild process to take at least 24 hours.
- Replace a failed or mismatched disk only with a replacement disk shipped from Infoblox. When you request a replacement disk, report the disk type displayed in the Detailed Status panel of the GUI or the Infoblox part number on the disk.
...
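The one-disk-at-a-time rule and the rebuild check above lend themselves to a simple pre-removal guard in any tooling built around these procedures. The sketch below encodes those two rules under stated assumptions: get_array_status() is a hypothetical stand-in for state parsed from Grid Manager or show hardware_status output.

```python
# Sketch of a pre-removal guard encoding the guidelines above.
# get_array_status() is a hypothetical helper; in practice the state would
# come from Grid Manager or parsed `show hardware_status` output.

def get_array_status() -> str:
    return "DEGRADED"  # stand-in value for this example

def may_remove_disk(disks_already_removed: int) -> bool:
    """Allow removal of at most one disk, and never during a rebuild."""
    if disks_already_removed >= 1:
        print("Refusing: replace the first disk and wait for the rebuild.")
        return False
    if get_array_status() == "REBUILDING":
        print("Refusing: the array is rebuilding; removal risks data loss.")
        return False
    return True

print(may_remove_disk(0))  # True: one disk may be pulled from a degraded array
```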