Back to Contents Page

About PERC 6 and CERC 6/i Controllers

Dell™ PERC 6/i, PERC 6/E and CERC 6/i User's Guide

  PERC 6 and CERC 6/i Controller Features

  Using the SMART Feature

  Initializing Virtual Disks

  Consistency Checks

  Disk Roaming

  Disk Migration

  Battery Management

  Virtual Disk Write Cache Policies

  Virtual Disk Read Policies

  Reconfiguring Virtual Disks

  Fault Tolerance Features

  Patrol Read


This section describes the features of the Dell™ PowerEdge™ Expandable RAID Controller (PERC) 6 and Dell Cost-Effective RAID Controller (CERC) 6/i family of controllers, such as the configuration options, disk array performance, redundant array of independent disks (RAID) management utilities, and operating system software drivers.


PERC 6 and CERC 6/i Controller Features

Table 2-1 compares the hardware configurations for the PERC 6 and CERC 6/i controllers.

Table 2-1. PERC 6 and CERC 6/i Controller Comparisons 

| Specification | PERC 6/E Adapter | PERC 6/i Adapter | PERC 6/i Integrated | CERC 6/i Integrated |
|---|---|---|---|---|
| RAID Levels | 0, 1, 5, 6, 10, 50, 60 | 0, 1, 5, 6, 10, 50, 60 | 0, 1, 5, 6, 10, 50, 60 | 0 and 1 |
| Enclosures per Port | Up to 3 enclosures | N/A | N/A | N/A |
| Ports | 2 x4 external wide ports | 2 x4 internal wide ports | 2 x4 internal wide ports | 1 x4 internal wide port |
| Processor | LSI 1078 SAS RAID-on-Chip, 8-port | LSI 1078 SAS RAID-on-Chip, 8-port | LSI 1078 SAS RAID-on-Chip, 8-port | LSI 1078 SAS RAID-on-Chip, 8-port |
| Battery Backup Unit | Yes, transportable | Yes (a) | Yes | No |
| Cache Memory | 256-MB DDRII; optional 512-MB DIMM | 256-MB DDRII | 256-MB DDRII | 128-MB DDRII |
| Cache Function | Write-Back, Write-Through, Adaptive Read Ahead, No-Read Ahead, Read Ahead | Write-Back, Write-Through, Adaptive Read Ahead, No-Read Ahead, Read Ahead | Write-Back, Write-Through, Adaptive Read Ahead, No-Read Ahead, Read Ahead | Write-Back, Write-Through, Adaptive Read Ahead, No-Read Ahead, Read Ahead |
| Maximum Number of Spans per Disk Group | Up to 8 arrays | Up to 8 arrays | Up to 8 arrays | N/A |
| Maximum Number of Virtual Disks per Disk Group | Up to 16 for non-spanned RAID levels (0, 1, 5, and 6); one per disk group for spanned RAID levels (10, 50, and 60) | Up to 16 for non-spanned RAID levels (0, 1, 5, and 6); one per disk group for spanned RAID levels (10, 50, and 60) | Up to 16 for non-spanned RAID levels (0, 1, 5, and 6); one per disk group for spanned RAID levels (10, 50, and 60) | Up to 16 virtual disks per disk group (RAID 0 = 16, RAID 1 = 16) |
| Multiple Virtual Disks per Controller | Up to 64 virtual disks per controller | Up to 64 virtual disks per controller | Up to 64 virtual disks per controller | Up to 64 virtual disks per controller |
| Support for x8 PCI Express Host Interface | Yes | Yes | Yes | Yes |
| Online Capacity Expansion | Yes | Yes | Yes | Yes |
| Dedicated and Global Hot Spares | Yes | Yes | Yes | Yes |
| Hot Swap Devices Supported | Yes | Yes | Yes | Yes |
| Non-Disk Devices Supported | No | No | No | No |
| Enclosure Hot-Add (b) | Yes | N/A | N/A | N/A |
| Mixed Capacity Physical Disks Supported | Yes | Yes | Yes | Yes |
| Hardware Exclusive-OR (XOR) Assistance | Yes | Yes | Yes | Yes |
| Revertible Hot Spares Supported | Yes | Yes | Yes | N/A |
| Redundant Path Support | Yes | N/A | N/A | N/A |

(a) The PERC 6/i adapter supports a battery backup unit (BBU) on selected systems only. For additional information, see the documentation that shipped with the system.

(b) Using the enclosure Hot-Add feature, you can hot-plug enclosures to the PERC 6/E adapter without rebooting the system.

NOTE: The maximum array size is limited by the maximum number of drives per disk group (32), the maximum number of spans per disk group (8), and the size of the physical drives.
NOTE: The number of physical disks on a controller is limited by the number of slots in the backplane to which the controller is attached.

Using the SMART Feature

The Self-Monitoring Analysis and Reporting Technology (SMART) feature monitors the internal performance of all motors, heads, and physical disk electronics to detect predictable physical disk failures. The SMART feature helps monitor physical disk performance and reliability.

SMART-compliant physical disks have attributes for which data (values) can be monitored to identify changes in values and determine whether the values are within threshold limits. Many mechanical and electrical failures display some degradation in performance before failure.

A SMART failure is also referred to as a predicted failure. There are numerous factors that relate to predicted physical disk failures, such as a bearing failure, a broken read/write head, and changes in spin-up rate. In addition, there are factors related to read/write surface failure, such as seek error rate and excessive bad sectors. For information on physical disk status, see Disk Roaming.

NOTE: For detailed information on Small Computer System Interface (SCSI) interface specifications, see www.t10.org, and for detailed information on Serial ATA (SATA) interface specifications, see www.t13.org.
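For example, if Dell OpenManage Server Administrator is installed, its command line interface can display the failure-prediction status that SMART reports for each physical disk. This is a sketch only; controller ID 0 is assumed, and the exact output fields vary by OpenManage version:

omreport storage pdisk controller=0

The report for each physical disk typically includes a failure-prediction field that reflects its SMART status.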

Initializing Virtual Disks

You can initialize virtual disks as described in the following sections.

Background Initialization

Background Initialization (BGI) is an automated process that writes the parity or mirror data on newly created virtual disks. BGI assumes that the data is correct on all new drives. BGI does not run on RAID 0 virtual disks.

NOTE: You cannot permanently disable BGI. If you cancel BGI, it automatically restarts within five minutes. For information on stopping BGI, see Stopping Background Initialization.

The BGI rate is controlled by the Dell OpenManage storage management software. After you change the BGI rate in the OpenManage storage management software, the change does not take effect until the next BGI is run.
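For example, the BGI rate can be changed from the Dell OpenManage Server Administrator command line interface. This is a sketch only; controller ID 0 and a rate of 30 are placeholder values, and the exact syntax may vary by OpenManage version:

# Set the background initialization rate to 30 percent on controller 0 (placeholder values)
omconfig storage controller action=setbgirate controller=0 rate=30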

NOTE: Unlike full or fast initialization of virtual disks, background initialization does not clear data from the physical disks.

Consistency Check (CC) and BGI perform similar functions in that they both correct parity errors. However, Consistency Check reports data inconsistencies through an event notification, but BGI does not (BGI assumes the data is correct, as it is run only on a newly created disk). You can start Consistency Check manually, but not Background Initialization.

Full Initialization of Virtual Disks

Performing a full initialization on a virtual disk overwrites all blocks and destroys any data that previously existed on the virtual disk. A full initialization eliminates the need for that virtual disk to undergo a background initialization and can be performed directly after the creation of a virtual disk.

During full initialization, the host is not able to access the virtual disk. You can start a full initialization on a virtual disk by using the Slow Initialize option in the Dell OpenManage Storage Management application. To use the BIOS Configuration Utility to perform a full initialization, see Initializing Virtual Disks.

NOTE: If the system is rebooted during a full initialization, the operation aborts and a BGI begins on the virtual disk.

Fast Initialization of Virtual Disks

A fast initialization on a virtual disk overwrites the first and last 8 MB of the virtual disk, clearing any boot records or partition information. This operation takes only 2-3 seconds to complete and is recommended when recreating virtual disks. To perform a fast initialization using the BIOS Configuration Utility, see Initializing Virtual Disks.
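As an illustration, both initialization types can also be started from the Dell OpenManage Server Administrator command line interface. This is a sketch only; the controller and virtual disk IDs are placeholders, and the action names may differ between OpenManage versions:

# Full (slow) initialization: overwrites every block of the virtual disk (placeholder IDs)
omconfig storage vdisk action=slowinit controller=0 vdisk=1

# Fast initialization: clears only the first and last 8 MB of the virtual disk (placeholder IDs)
omconfig storage vdisk action=fastinit controller=0 vdisk=1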


Consistency Checks

Consistency Check is a background operation that verifies and corrects the mirror or parity data for fault tolerant virtual disks. It is recommended that you periodically run a consistency check on virtual disks.

You can manually start a consistency check using the BIOS Configuration Utility or the Dell OpenManage storage management application. To start a consistency check using the BIOS Configuration Utility, see Checking Data Consistency. Consistency checks can also be scheduled to run on virtual disks using the OpenManage storage management application.

By default, consistency check automatically corrects mirror or parity inconsistencies. However, you can enable the Abort Consistency Check on Error feature on the controller using Dell™ OpenManage™ Storage Management. With this setting enabled, the consistency check reports any inconsistency it finds through an event notification and aborts instead of automatically correcting it.
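For example, a consistency check can be started from the Dell OpenManage Server Administrator command line interface (a sketch only; the controller and virtual disk IDs are placeholders):

# Start a consistency check on virtual disk 0 of controller 0 (placeholder IDs)
omconfig storage vdisk action=checkconsistency controller=0 vdisk=0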


Disk Roaming

The PERC 6 and CERC 6/i adapters support moving physical disks from one cable connection or backplane slot to another on the same controller. The controller automatically recognizes the relocated physical disks and logically places them in the proper virtual disks that are part of the disk group. You can perform disk roaming only when the system is turned off.

CAUTION: Do not attempt disk roaming during RAID level migration (RLM) or capacity expansion (CE). This causes loss of the virtual disk.

Perform the following steps to use disk roaming:

  1. Turn off the power to the system, physical disks, enclosures, and system components, and then disconnect the power cords from the system.

  2. Move the physical disks to different positions on the backplane or the enclosure.

  3. Perform a safety check. Make sure the physical disks are inserted properly.

  4. Turn on the system.

The controller detects the RAID configuration from the configuration data on the physical disks.


Disk Migration

The PERC 6 and CERC 6/i controllers support migration of virtual disks from one controller to another without taking the target controller offline. However, the source controller must be offline prior to performing the disk migration. The controller can import RAID virtual disks in optimal, degraded, or partially degraded states. You cannot import a virtual disk that is in an offline state.

NOTE: The PERC 6 controllers are not backward compatible with previous Small Computer System Interface (SCSI), PowerEdge Expandable RAID Controller (PERC), and Redundant Array of Independent Disks (RAID) controllers.

When a controller detects a physical disk with a pre-existing configuration, it flags the physical disk as foreign, and it generates an alert indicating that a foreign disk was detected.

CAUTION: Do not attempt disk migration during RLM or CE. This causes loss of the virtual disk.

Perform the following steps to use disk migration.

  1. Turn off the system that contains the source controller.

  2. Move the appropriate physical disks from the source controller to the target controller.

The system with the target controller can be running while inserting the physical disks.

The controller flags the inserted disks as foreign disks.

  3. Use the Dell OpenManage storage management application to import the detected foreign configuration, as shown in the example after the notes below.

NOTE: Ensure that all physical disks that are part of the virtual disk are migrated.
NOTE: You can also use the controller BIOS configuration utility to migrate disks.
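For example, the detected foreign configuration can be imported from the Dell OpenManage Server Administrator command line interface (a sketch only; controller ID 0 is assumed, and the syntax may vary by OpenManage version):

# Import the foreign configuration detected on controller 0 (placeholder ID)
omconfig storage controller action=importforeignconfig controller=0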

Compatibility With Virtual Disks Created on PERC 5 Controllers

Virtual disks that were created on the PERC 5 family of controllers can be migrated to the PERC 6 and CERC 6/i controllers without risking data or configuration loss. Migrating virtual disks from PERC 6 and CERC 6/i controllers to PERC 5 is not supported.

NOTE: For more information about compatibility, contact your Dell Technical Support Representative.

Virtual disks that were created on the CERC 6/i controller or the PERC 5 family of controllers can be migrated to PERC 6.

Compatibility With Virtual Disks Created on SAS 6/iR Controllers

Virtual disks created on the SAS 6/iR family of controllers can be migrated to PERC 6 and CERC 6/i controllers. However, only virtual disks with boot volumes of the following Linux operating systems successfully boot after migration:

NOTE: The migration of virtual disks with Microsoft Windows operating systems is not supported.
NOTICE: Before migrating virtual disks, back up your data and ensure that the firmware of both controllers is the latest revision. Also ensure that you use the SAS 6 firmware version 00.25.41.00.06.22.01.00 or later version.

Migrating Virtual Disks from SAS 6/iR to PERC 6 and CERC 6/i

NOTE: The supported operating systems listed above contain a driver for the PERC 6 and CERC 6/i controller family. No additional drivers are needed during the migration process.
  1. If virtual disks with one of the supported Linux operating systems listed above are being migrated, open a command prompt and type the following commands:

# Load the megaraid_sas driver used by the PERC 6 and CERC 6/i controllers
modprobe megaraid_sas

# Rebuild the initial ramdisk so that megaraid_sas is preloaded at boot
mkinitrd -f --preload megaraid_sas /boot/initrd-`uname -r`.img `uname -r`

  2. Turn off the system.

  3. Move the appropriate physical disks from the SAS 6/iR controller to the PERC 6 or CERC 6/i controller. If you are replacing your SAS 6/iR controller with a PERC 6, see the Hardware Owner's Manual that came with your system.

CAUTION: After you have imported the foreign configuration on the PERC 6 or CERC 6i storage controllers, you cannot migrate the storage disks back to the SAS 6/iR controller as it may result in the loss of data.
  4. Boot the system and import the foreign configuration that is detected. You can do this in two ways as described below:

NOTE: For more information on BIOS Configuration Utility, see Entering the BIOS Configuration Utility.
NOTE: For more information on Foreign Configuration View, see Foreign Configuration View
  5. If the migrated virtual disk is the boot volume, ensure that the virtual disk is selected as the bootable volume for the target PERC 6 or CERC 6/i controller. See Controller Management Actions.

  6. Exit the BIOS Configuration Utility and reboot the system.

  7. Ensure that the latest drivers for the PERC 6 or CERC 6/i controller, available on the Dell Support website at support.dell.com, are installed. For more information, see Installing the Drivers.

NOTE: For more information about compatibility, contact your Dell Technical Support Representative.

Battery Management

NOTE: Battery management is applicable only to the PERC 6 family of controllers.

The Transportable Battery Backup Unit (TBBU) is a cache memory module with an integrated battery pack that enables you to transport the cache module with the battery into a new controller. The TBBU protects the integrity of the cached data on the PERC 6/E adapter by providing backup power during a power outage.

The Battery Backup Unit (BBU) is a battery pack that protects the integrity of the cached data on the PERC 6/i adapter and PERC 6/i Integrated controllers by providing backup power during a power outage.

When new, the battery may provide up to 72 hours of backup power for a 256-MB controller cache memory and up to 48 hours for a 512-MB cache.

Battery Warranty Information

The BBU offers an inexpensive way to protect the data in cache memory. The lithium-ion battery provides a way to store more power in a smaller form factor than previous batteries.

The BBU shelf life has been preset to last six months from the time of shipment without power. To prolong battery life:

Your PERC 6 battery may provide up to 24 hours of controller cache memory backup power when new. Under the 1-year limited warranty, we warrant that the battery provides at least 24 hours of backup coverage during the warranty period.

Battery Learn Cycle

A learn cycle is a battery calibration operation that the controller performs periodically to determine the condition of the battery. This operation cannot be disabled.

You can start battery learn cycles manually or automatically. In addition, you can enable or disable automatic learn cycles in the software utility. If you enable automatic learn cycles, you can delay the start of the learn cycles for up to 168 hours (7 days). If you disable automatic learn cycles, you can start the learn cycles manually, and you can choose to receive a reminder to start a manual learn cycle.

You can put the learn cycle in Warning Only mode. In Warning Only mode, a warning event is generated to prompt you to start the learn cycle manually when it is time to perform the learn cycle operation. You can select the schedule for initiating the learn cycle. When in Warning Only mode, the controller continues to prompt you to start the learn cycle every seven days until it is performed.

NOTE: Virtual disks automatically switch to Write-Through mode when the battery charge is low because of a learn cycle.
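For example, a learn cycle can be started manually from the Dell OpenManage Server Administrator command line interface. This is a sketch only; the controller and battery IDs are placeholders, and availability of the action depends on the OpenManage version:

# Start a battery learn cycle on controller 0 (placeholder IDs)
omconfig storage battery action=startlearn controller=0 battery=0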

Learn Cycle Completion Time Frame

The time frame for completion of a learn cycle is a function of the battery charge capacity and the discharge/charge currents used. For PERC 6, the expected time frame for completion of a learn cycle is approximately seven hours and consists of the following parts:

Learn cycles shorten as the battery capacity deteriorates over time.

NOTE: For additional information, see the OpenManage storage management application.

During the discharge phase of a learn cycle, the PERC 6 battery charger is disabled and remains disabled until the battery is discharged. After the battery is discharged, the charger is re-enabled.


Virtual Disk Write Cache Policies

The write cache policy of a virtual disk determines how the controller handles writes to that virtual disk. Write-Back and Write-Through are the two write cache policies and can be set on a virtual disk basis.

Write-Back and Write-Through

In Write-Through caching, the controller sends a data transfer completion signal to the host system when the disk subsystem has received all the data in a transaction.

In Write-Back caching, the controller sends a data transfer completion signal to the host when the controller cache has received all the data in a transaction. The controller then writes the cached data to the storage device in the background.

The risk of using Write-Back cache is that the cached data can be lost if there is a power failure before it is written to the storage device. This risk is mitigated by using a BBU on selected PERC 6 controllers. For information on which controllers support a BBU, see Table 2-1.

Write-Back caching has a performance advantage over Write-Through caching.

NOTE: The default cache setting for virtual disks is Write-Back caching.
NOTE: Certain data patterns and configurations perform better with a Write-Through cache policy.

Conditions Under Which Write-Back is Employed

Write-Back caching is used under all conditions in which the battery is present and in good condition.

Conditions Under Which Write-Through is Employed

Write-Through caching is used under all conditions in which the battery is missing or in a low-charge state. Low-charge state is when the battery is not capable of maintaining data for at least 24 hours in the case of a power loss.

Conditions Under Which Forced Write-Back With No Battery is Employed

Write-Back mode is available when the user selects Force WB with no battery. When Forced Write-Back mode is selected, the virtual disk is in Write-Back mode even if the battery is not present.

CAUTION: It is recommended that you use a power backup system when forcing Write-Back to ensure that there is no loss of data if the system suddenly loses power.
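For example, the write cache policy of a virtual disk can be changed from the Dell OpenManage Server Administrator command line interface. This is a sketch only; the IDs are placeholders, and the accepted policy keywords depend on the OpenManage version:

# Set the virtual disk to Write-Back; writepolicy=wt selects Write-Through (placeholder IDs)
omconfig storage vdisk action=changepolicy controller=0 vdisk=0 writepolicy=wb

In this sketch, the same changepolicy action also accepts a readpolicy parameter for the read policies described in the next section.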

Virtual Disk Read Policies

The read policy of a virtual disk determines how the controller handles reads to that virtual disk. The read policies are Read Ahead, No-Read Ahead, and Adaptive Read Ahead.


Reconfiguring Virtual Disks

There are two methods to reconfigure RAID virtual disks: RAID Level Migration and Online Capacity Expansion. A RAID Level Migration (RLM) converts a virtual disk to a different RAID level, and an Online Capacity Expansion (OCE) increases the capacity of a virtual disk by adding drives and/or migrating to a different RAID level. When an RLM/OCE operation is complete, a reboot is not necessary. For a list of possible RAID level migrations and whether a capacity expansion is possible in each scenario, see Table 2-2.

The source RAID level column indicates the virtual disk level before the RAID level migration and the target RAID level column indicates the RAID level after the operation is complete.

NOTE: If you configure 64 virtual disks on a controller, you cannot perform a RAID level migration or capacity expansion on any of the virtual disks.
NOTE: The controller changes the write cache policy of all virtual disks undergoing an RLM/OCE to Write-Through until the RLM/OCE is complete.
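For example, a RAID 0 virtual disk can be migrated to RAID 5 while adding drives from the Dell OpenManage Server Administrator command line interface. This is a sketch only; the controller, virtual disk, and physical disk IDs (in connector:enclosure:slot form) are placeholders for your own configuration:

# Reconfigure virtual disk 0 as RAID 5 across three physical disks (placeholder IDs)
omconfig storage vdisk action=reconfigure controller=0 vdisk=0 raid=r5 pdisk=0:0:0,0:0:1,0:0:2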

Table 2-2. RAID Level Migration 

| Source RAID Level | Target RAID Level | Required Number of Physical Disks (Beginning) | Number of Physical Disks (End) | Capacity Expansion Possible | Description |
|---|---|---|---|---|---|
| RAID 0 | RAID 1 | 1 | 2 | No | Converts a non-redundant virtual disk into a mirrored virtual disk by adding one drive. |
| RAID 0 | RAID 5 | 1 or more | 3 or more | Yes | At least one drive needs to be added for distributed parity data. |
| RAID 0 | RAID 6 | 1 or more | 4 or more | Yes | At least two drives need to be added for dual distributed parity data. |
| RAID 1 | RAID 0 | 2 | 2 | Yes | Removes redundancy while doubling capacity. |
| RAID 1 | RAID 5 | 2 | 3 or more | Yes | Maintains redundancy while doubling capacity. |
| RAID 1 | RAID 6 | 2 | 4 or more | Yes | Two drives need to be added for dual distributed parity data. |
| RAID 5 | RAID 0 | 3 or more | 2 or more | Yes | Converts to a non-redundant virtual disk and reclaims the disk space used for distributed parity data. |
| RAID 5 | RAID 6 | 3 or more | 4 or more | Yes | At least one drive needs to be added for dual distributed parity data. |
| RAID 6 | RAID 0 | 4 or more | 2 or more | Yes | Converts to a non-redundant virtual disk and reclaims the disk space used for distributed parity data. |
| RAID 6 | RAID 5 | 4 or more | 3 or more | Yes | Removes one set of parity data and reclaims the disk space used for it. |

NOTE: The total number of physical disks in a disk group cannot exceed 32.
NOTE: You cannot perform RAID level migration and expansion on RAID levels 10, 50, and 60.

Fault Tolerance Features

Table 2-3 lists the features that provide fault tolerance to prevent data loss in case of a failed physical disk.

Table 2-3. Fault Tolerance Features 

| Specification | PERC 6 | CERC 6/i |
|---|---|---|
| Support for SMART | Yes | Yes |
| Support for Patrol Read | Yes | Yes |
| Redundant path support | Yes | N/A |
| Physical disk failure detection | Automatic | Automatic |
| Physical disk rebuild using hot spares | Automatic | Automatic |
| Parity generation and checking (RAID 5, 50, 6, and 60 only) | Yes | N/A |
| Battery backup of controller cache to protect data | Yes (a) | N/A |
| Manual learn cycle mode for battery backup | Yes | N/A |
| Detection of batteries with low charge after bootup | Yes | N/A |
| Hot-swap manual replacement of a physical disk without reboot | Yes | Yes |

(a) The PERC 6/i adapter supports a BBU on selected systems only. For additional information, see the documentation that was shipped with the system.

Physical Disk Hot Swapping

Hot swapping is the manual substitution of a replacement unit in a disk subsystem for a defective one. The manual substitution can be performed while the subsystem is performing its normal functions.

NOTE: The system backplane or enclosure must support hot swapping in order for the PERC 6 and CERC 6/i controllers to support hot swapping.
NOTE: Ensure that SAS drives are replaced with SAS drives, and SATA drives are replaced with SATA drives.
NOTE: While swapping a disk, ensure that the new disk is of equal or greater capacity than the disk that is being replaced.

Failed Physical Disk Detection

The controller automatically detects and rebuilds failed physical disks when a new drive is placed in the slot where the failed drive resided or when an applicable hot spare is present. Automatic rebuilds can be performed transparently with hot spares. If you have configured hot spares, the controllers automatically try to use them to rebuild failed physical disks.
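For example, a global hot spare can be assigned from the Dell OpenManage Server Administrator command line interface (a sketch only; the controller ID and the physical disk ID, in connector:enclosure:slot form, are placeholders):

# Assign physical disk 0:0:3 as a global hot spare on controller 0 (placeholder IDs)
omconfig storage pdisk action=assignglobalhotspare controller=0 pdisk=0:0:3 assign=yes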

Redundant Path With Load Balancing Support

The PERC 6/E adapter can detect and use redundant paths to drives contained in enclosures. This provides the ability to connect two SAS cables between a controller and an enclosure for path redundancy. The controller is able to tolerate the failure of a cable or enclosure management module (EMM) by utilizing the remaining path.

When redundant paths exist, the controller automatically balances I/O load through both paths to each disk drive. This load balancing feature increases throughput to each drive and is automatically turned on when redundant paths are detected. To set up your hardware to support redundant paths, see Setting up Redundant Path Support on the PERC 6/E Adapter.

NOTE: This support for redundant paths refers to path-redundancy only and not to controller-redundancy.

Using Replace Member and Revertible Hot Spares

The Replace Member functionality allows a previously commissioned hot spare to be reverted back to a usable hot spare. When a drive failure occurs within a virtual disk, an assigned hot spare (dedicated or global) is commissioned and begins rebuilding until the virtual disk is optimal. After the failed drive is replaced (in the same slot) and the rebuild to the hot spare is complete, the controller automatically starts to copy data from the commissioned hot spare to the newly-inserted drive. After the data is copied, the new drive is part of the virtual disk and the hot spare is reverted back to being a ready hot spare; this allows hot spares to remain in specific enclosure slots. While the controller is reverting the hot spare, the virtual disk remains optimal.

NOTE: The controller automatically reverts a hot spare only if the failed drive is replaced with a new drive in the same slot. If the new drive is not placed in the same slot, a manual Replace Member operation can be used to revert a previously commissioned hot spare.

Automatic Replace Member with Predicted Failure

A Replace Member operation can occur when there is a SMART predictive failure reporting on a drive in a virtual disk. The automatic Replace Member is initiated when the first SMART error occurs on a physical disk that is part of a virtual disk. The target drive needs to be a hot spare that qualifies as a rebuild drive. The physical disk with the SMART error is marked as failed only after the successful completion of the Replace Member. This avoids putting the array in degraded status.

If an automatic Replace Member operation uses a source drive that was originally a hot spare (used in a rebuild) and a newly added drive as the target drive, the hot spare reverts to the hot spare state after a successful Replace Member operation.

NOTE: To enable the automatic Replace Member, use the Dell OpenManage Storage Management. For more information on automatic Replace Member, see Dell OpenManage Storage Management.
NOTE: For information on manual Replace Member, see Replacing an Online Physical Disk.

Patrol Read

The Patrol Read feature is designed as a preventative measure to ensure physical disk health and data integrity. Patrol Read scans for and resolves potential problems on configured physical disks. The Dell OpenManage storage management application can be used to start Patrol Read and change its behavior.
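For example, Patrol Read can be controlled from the Dell OpenManage Server Administrator command line interface. This is a sketch only; controller ID 0 is assumed, and the mode keywords may vary by OpenManage version:

# Set Patrol Read to manual mode, then start a Patrol Read pass (placeholder ID)
omconfig storage controller action=setpatrolreadmode controller=0 mode=manual
omconfig storage controller action=startpatrolread controller=0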

Patrol Read Feature

The following is an overview of Patrol Read behavior:

  1. Patrol Read runs on all disks on the controller that are configured as part of a virtual disk including hot spares.

  2. Patrol Read does not run on unconfigured physical disks. Unconfigured disks are those that are not part of a virtual disk or are in Ready state.

  3. Patrol Read adjusts the amount of controller resources dedicated to Patrol Read operations based on outstanding disk I/O. For example, if the system is busy processing I/O operations, then Patrol Read uses fewer resources to allow the I/O to take a higher priority.

  4. Patrol Read does not run on any disks that are involved in any of the following operations:

Patrol Read Modes

The following describes each of the modes Patrol Read can be set to:


Back to Contents Page