As the elders instructed us, I'm studying the armaments of a potential adversary :)
I came across EMC CLARiiON Best Practices for Performance and Availability (FLARE 30, 01.03.2011) and have been sitting here reading it.
I'm highlighting the most notable passages. Do you mind if I go without a translation today? In my view, everything in the highlighted parts is clear enough.
I'm paying particular attention to storage pools and the LUNs in them, since the new goodies, such as FAST and thin provisioning, are only possible in pools.
Page 63 and onward (I'm deliberately quoting at length so as not to be reproached for ripping things out of context).
Conceptually, the storage pool is a file system overlaid onto a traditional RAID group of drives. This file system adds overhead to performance and capacity utilization for thin and thick LUNs. Thick LUNs have less of a performance overhead than thin LUNs due to the granularity of space allocation and mapping between virtual and physical layers.
In addition, availability should be considered when implementing pools. A pool with a large number of drives will be segmented into multiple private RAID groups. Data from many LUNs will be distributed over one or more of the pool’s RAID groups. The availability of the pool includes the availability of all the pool’s RAID groups considered as one. (Availability is at the single RAID group level with traditional LUNs.) Workloads requiring the highest level of performance, availability, and capacity utilization should continue to use traditional FLARE LUNs. As with traditional LUNs, storage pool drive count and capacity need to be balanced with expected LUN count and workload requirements. To get started quickly, begin with the recommended initial pool size, expand in the recommended increments, and don’t greatly exceed the recommended maximum pool size found in the “Quick Guidelines” section below.
A note: in this document, FLARE LUN, here and below, means the “classic” LUNs, that is, those created on top of ordinary RAID groups on physical disks, as opposed to pool LUNs.
Creating storage pools
For the most deterministic performance create few homogeneous storage pools with a large number of storage devices. A homogeneous pool has the same type, speed, and size drives in its initial allocation of drives. Heterogeneous pools have more than one type of drive. See the “Fully Automated Storage Tiering (FAST) Virtual Provisioning” section for heterogeneous pool management.
A single RAID level applies to all the pool’s private RAID groups. Pools may be created to be of RAID types 5, 6, or 10. Use the general recommendations for RAID group provisioning of FLARE LUNs when selecting the provisioning of the storage pool’s RAID types.
When provisioning a pool with SATA drives with capacities of 1 TB or larger, we strongly (emphasis in the original; note by romx) recommend RAID level 6. All other drive types can use either RAID level 5 or 10.
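To make the RAID-level guidance above concrete, here is a minimal sketch of my own (the function name and its return values are my illustration, not anything from the document): with SATA drives of 1 TB or larger the document strongly recommends RAID 6, while all other drive types may use RAID 5 or 10.

```python
def recommend_pool_raid_level(drive_type: str, drive_capacity_tb: float) -> str:
    """Pick a RAID level for a homogeneous pool per the quoted guidance.

    Hypothetical helper: SATA drives of 1 TB or larger get RAID 6
    (strongly recommended in the document); everything else may use
    RAID 5 or RAID 10.
    """
    if drive_type.upper() == "SATA" and drive_capacity_tb >= 1.0:
        return "RAID 6"
    return "RAID 5 or RAID 10"

print(recommend_pool_raid_level("SATA", 2.0))  # RAID 6
print(recommend_pool_raid_level("FC", 0.45))   # RAID 5 or RAID 10
```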
Same document, page 66 and onward.
EMC recommendations for creating homogeneous pools are as follows:
- We recommend Fibre Channel hard drives for Virtual Provisioning pools with thin LUNs due to their overall higher performance and availability.
- Create pools using storage devices that are the same type, speed and size for the most predictable performance. It may be advisable to keep Fibre Channel and SATA hard drives in separate pools to service different workloads with varying performance and storage utilization needs.
- Usually, it is better to use the RAID 5 level for pools. It provides the highest user data capacity per number of pool storage devices and proven levels of availability across all drive types. Use RAID 6 if the pool is composed of SATA drives and will eventually exceed a total of 80 drives. Use RAID 6 if the pool is made up of any number of large capacity (more than 1 TB) SATA drives.
- Initially, provision the pool with the largest number of hard drives as is practical within the storage system’s maximum limit. For RAID 5 pools the initial drive allocation should be at least five drives and a quantity evenly divisible by five. RAID 6 pool initial allocations should be evenly divisible by eight. RAID 10 pool initial allocations should be evenly divisible by eight.
- If you specify 15 drives for a RAID 5 pool — Virtual Provisioning creates three 5-drive (4+1) RAID groups. This is optimal provisioning.
- If you specify 18 drives for a RAID 5 pool — Virtual Provisioning creates three 5-drive (4+1) RAID groups and one 3-drive (2+1) RAID group. This provisioning is less optimal.
- If you specify 10 drives for a RAID 6 pool — Virtual Provisioning creates one 10-drive (8+2) RAID group. This is larger than standard, because an additional group cannot be created. It is acceptable, because the RAID groups are the same size.
- If you specify 10 drives for a RAID 1/0 pool — Virtual Provisioning creates one 8-drive (4+4) and one 2-drive (1+1) RAID group. This is not optimal, because some pool resources will be serviced by a single drive pair. For RAID 10 pools, if the number of drives you specify in pool creation or expansion isn’t divisible by eight, and if the remainder is 2, the recommendation is to add drives to, or remove two drives from, that disk count to avoid a private RAID group of two drives being created.
- In a storage pool, the subscribed capacity is the amount of capacity that has been assigned to thin and thick LUNs. When designing your system, make sure that the expected subscribed capacity does not exceed the capacity that is provided by maximum number of drives allowed in a storage system’s pool. This ensures that increased capacity utilization of thin LUNs can be catered for by pool expansion as necessary.
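The drive-count examples above can be sketched as a small model of how Virtual Provisioning carves a pool into private RAID groups. This is my own reconstruction from the four examples in the document (15 and 18 drives at RAID 5, 10 drives at RAID 6 and at RAID 1/0), not EMC's actual algorithm; the behavior for remainders the examples don't cover is an assumption.

```python
def private_raid_groups(raid_level: str, drives: int) -> list[int]:
    """Model how a pool's drive count might split into private RAID groups.

    Reconstructed from the quoted examples only; not EMC's documented
    algorithm. Preferred group sizes: 5 drives (4+1) for RAID 5, and
    8 drives for RAID 6 (6+2) and RAID 1/0 (4+4).
    """
    preferred = {"RAID 5": 5, "RAID 6": 8, "RAID 1/0": 8}[raid_level]
    full, rem = divmod(drives, preferred)
    groups = [preferred] * full
    if rem:
        if raid_level == "RAID 6" and groups:
            # 10 drives -> one 8+2 group: the remainder is too small for
            # another RAID 6 group, so it folds into the last group.
            groups[-1] += rem
        else:
            # 18 drives at RAID 5 -> three 4+1 groups plus a 2+1 group;
            # 10 drives at RAID 1/0 -> one 4+4 group plus a 1+1 group.
            groups.append(rem)
    return groups

print(private_raid_groups("RAID 5", 15))    # [5, 5, 5]
print(private_raid_groups("RAID 5", 18))    # [5, 5, 5, 3]
print(private_raid_groups("RAID 6", 10))    # [10]
print(private_raid_groups("RAID 1/0", 10))  # [8, 2]
```

The RAID 1/0 case shows why the document warns about a remainder of 2: one slice of the pool would be serviced by a lone drive pair.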
Expanding homogeneous storage pools
A homogeneous pool is a storage pool with drives of a single type. For best performance expand storage pools infrequently, maintain the original character of the pool’s storage devices, and make the largest practical expansions.
- Expand the pool using the same type and same speed hard drives used in the original pool.
- Expand the pool in large increments. For RAID level 5 pools use increments of drives evenly divisible by five, not less than five. RAID 6 pools should be expanded using eight-drive evenly divisible increments. Pools may be expanded with any amount of drives. You should expand the pool with the largest practical number of drives. Pools should not be expanded with fewer than a single RAID group’s number of drives. The performance of private RAID groups created by a smaller, later expansion may differ from that of the pool’s original RAID groups. Doubling the size of a pool is the optimal expansion.
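The expansion rules lend themselves to a quick sanity check before adding drives. A hypothetical helper of my own (the name and warning strings are my illustration), using the group sizes from the quoted guidance:

```python
def check_pool_expansion(raid_level: str, added_drives: int) -> list[str]:
    """Flag pool expansions that deviate from the quoted guidance.

    Increments should be evenly divisible by the private RAID group
    size (5 drives for RAID 5, 8 for RAID 6) and never smaller than a
    single private RAID group.
    """
    group = {"RAID 5": 5, "RAID 6": 8}[raid_level]
    warnings = []
    if added_drives < group:
        warnings.append("smaller than one private RAID group")
    if added_drives % group:
        warnings.append(f"not evenly divisible by {group}")
    return warnings

print(check_pool_expansion("RAID 5", 10))  # [] - a clean expansion
print(check_pool_expansion("RAID 6", 6))   # two warnings
```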
Creating pool LUNs
The general recommendations for traditional LUNs apply to pool LUNs. (See the “LUN provisioning” section on page 55.)
The largest capacity pool LUN that can be created is 14 TB.
Avoid trespassing pool LUNs. Changing a pool LUN’s SP ownership may adversely affect performance. After a pool LUN trespass, a pool LUN’s private information remains under control of the original owning SP. This will cause the trespassed LUN’s I/Os to continue to be handled by the original owning SP. This results in both SPs being used in handling the I/Os. Involving both SPs in an I/O increases the time used to complete an I/O. Note that the private RAID groups servicing I/O to trespassed pool LUNs may also be servicing I/O to non-trespassed pool LUNs at the same time. There is the possibility of dual SP access for the period some pool LUNs are trespassed. If a host path failure results in some LUNs trespassing in a shared pool, the failure should be repaired as soon as possible and the ownership of those trespassed LUNs should be returned to their default SP.
SP here means the disk controller, the Storage Processor.
Pools with high bandwidth workloads
When planning to use pool LUNs in a high bandwidth workload, the required storage for the LUN should be pre-allocated. For FLARE revision 30.0 and later, thick LUNs should be used. For Virtual Provisioning on earlier versions of FLARE, a pre-allocation of the storage should be performed.
Pre-allocation results in sequential addressing within the pool’s thin LUN, ensuring high bandwidth performance. Pre-allocation can be performed in several ways, including migrating from a traditional LUN, performing a full format of the file system, performing a file write from within the host file system, or creating a single Oracle table from within the host application. In addition, only one pre-allocation per storage pool should be performed at any one time. More than one thin LUN per pool being concurrently pre-allocated can reduce overall SP performance.
There is a fixed capacity overhead associated with each LUN created in the pool. Take into account the number of LUNs anticipated to be created, particularly with small allocated capacity pools.
A pool LUN is composed of both metadata and user data, both of which come from the storage pool. A pool LUN’s metadata is a capacity overhead that subtracts from the pool’s user data capacity. Thin and thick LUNs make different demands on available pool capacity when they are created. Note the User Consumed Capacity of a thin LUN is some fraction of the User Capacity of the LUN.
Any size thin LUN will consume about 3 GB of pool capacity: slightly more than 1 GB of capacity for metadata, an initial 1 GB of pool capacity for user data. An additional 1 GB of pool capacity is prefetched before the first GB is consumed in anticipation of more usage. This totals about 3 GB. The prefetch of 1 GB of metadata remains about the same from the smallest through to the largest (>2 TB host-dependent) LUNs. Additional metadata is allocated from the first 1 GB of user data as the LUN’s user capacity increases.
To estimate the capacity consumed for a thin LUN follow this rule of thumb:
Consumed capacity = (User Consumed Capacity * 0.02) + 3 GB.
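The rule of thumb translates directly into a one-line estimate. A sketch of my own (the function name is my illustration; the 2% metadata factor and the roughly 3 GB of fixed overhead come straight from the quoted formula):

```python
def estimate_thin_lun_pool_usage_gb(user_consumed_gb: float) -> float:
    """Estimate pool capacity consumed by a thin LUN (quoted rule of thumb).

    ~2% metadata overhead on the user-consumed capacity, plus roughly
    3 GB fixed: ~1 GB of metadata, 1 GB of initial user-data slices,
    and 1 GB prefetched ahead of use.
    """
    return user_consumed_gb * 0.02 + 3.0

print(estimate_thin_lun_pool_usage_gb(500))  # 13.0
```

So a thin LUN with 500 GB of user-consumed capacity actually takes about 13 GB more than zero overhead would suggest: 10 GB of metadata plus the 3 GB fixed cost.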
LUN compression is a separately licensable feature available with FLARE 30.0 and later.
Compression performs an algorithmic data compression of pool-based LUNs. All compressed LUNs are thin LUNs. Compressed LUNs can be in one or more pools. FLARE LUNs can also be compressed. If a FLARE LUN is provisioned for compression, it will be converted into a designated pool-based thin LUN. Note that a Virtual Provisioning pool with available capacity is required before a FLARE LUN can be converted to a compressed LUN. The conversion of a FLARE LUN to a compressed thin LUN is considered a LUN migration.
Compressed LUN performance characteristics
Compression should only be used for archival data that is infrequently accessed. Accesses to a compressed LUN may have a significantly higher response time than accesses to a Virtual Provisioning pool-based LUN. The duration of this response time is dependent on the size and type of the I/O, and the degree of compression. Note that at the host level, with a fully operational write cache, delays for writes to compressed LUNs are mitigated.
Be aware that, as with many semiconductor-based storage devices, Flash drive uncached write performance is slower than their read performance. To most fully leverage Flash drive performance, they are recommended for use with workloads having a large majority of read I/Os to writes.
The following I/O type recommendations should be considered in using Flash drives with parity RAID 5 groups using the default settings:
- For sequential reads: When four or greater threads can be guaranteed, Flash drives have up to twice the bandwidth of Fibre Channel hard drives with large-block, sequential reads. This is because the Flash drive does not have a mechanical drive’s seek time.
- For random reads: Flash drives have the best random read performance of any CLARiiON storage device. Throughput is particularly high with the ideal block size of 4 KB or less. Throughput decreases as the block size increases above this maximum.
- For sequential writes: Flash drives are somewhat slower than Fibre Channel hard drives in single-threaded write bandwidth. They are somewhat higher in bandwidth than Fibre Channel when high concurrency is guaranteed.
- For random writes: Flash drives have superior random write performance over Fibre Channel hard drives. Throughput is particularly high with highly concurrent access using a 4 KB block size. Throughput decreases with increasing block size.
As before, I'm ready to discuss the topic with EMC CLARiiON specialists, if they, in turn, are willing to follow the rules of this blog: keep the discussion calm, well-reasoned, and polite toward its participants, even when those participants hold different views.