OK, let’s get this out of the way up front. Here’s the perspective this commentary is coming from:
- Most organizations are going to evolve their infrastructure toward cloud computing, and virtualization is the on-ramp to the private cloud.
- Shared storage is a requirement for enabling the mobilized-workload benefits of virtualization and therefore private cloud.
This is where the flight attendant says “If your destination isn’t New York’s LaGuardia airport, you’d better deplane right away!”
Now that you’re on the right plane, let’s get on our way. You’ve begun to virtualize and are wondering at what point you are going to be able to cross that line to private cloud computing. The answer is that you’ll reach private cloud when you have:
- adopted standardized building blocks for your infrastructure,
- adopted integrated infrastructure management, and
- automated service delivery.
Integrating storage management with virtual server management is critical. However, the dirty little secret of the storage industry is that not all shared storage is actually appropriate for private cloud, though some vendors would have you think otherwise. Here’s why: traditional shared storage still binds storage objects to physical resources! If you want to move toward private cloud, you need to adopt Virtualized Storage. With Virtualized Storage, an abstraction layer sits between storage hardware and storage objects, directly analogous to server virtualization.
But what does this mean – “an abstraction layer between storage hardware and storage objects”? This isn’t just thin provisioning, folks, though certainly thin provisioning is one capability of virtualized storage. It means that administrators no longer have to know what volumes are pinned to what disks, or for that matter what disks are tied to what RAID policy, what trays are connected to what loops, and so on. Virtualized storage leaves behind the complexity associated with traditional shared storage.
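To make the abstraction concrete, here’s a minimal sketch (the class and method names are hypothetical illustrations, not any vendor’s API): volumes are carved out of a pool, never bound to a particular disk, so hardware can change underneath without the volume’s identity changing.

```python
# Hypothetical sketch: a storage pool abstracts physical disks away
# from the volumes carved out of it.

class StoragePool:
    def __init__(self):
        self.disks = []        # physical resources, hidden from consumers
        self.volumes = {}      # volume name -> provisioned size in GB

    def add_disk(self, disk_id, size_gb):
        # New hardware simply grows the pool; existing volumes are unaffected.
        self.disks.append((disk_id, size_gb))

    def capacity_gb(self):
        return sum(size for _, size in self.disks)

    def create_volume(self, name, size_gb):
        # A volume is bound to the pool, never to a particular disk,
        # tray, loop, or RAID group.
        self.volumes[name] = size_gb

pool = StoragePool()
pool.add_disk("disk-a", 600)
pool.add_disk("disk-b", 600)
pool.create_volume("vm-datastore", 500)
# Adding or replacing disks later changes pool.capacity_gb(),
# while "vm-datastore" keeps its identity untouched.
```

The point of the sketch is simply that the administrator interacts with the pool and the volume, never with the disk list.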
Extensive automation is a hallmark of virtualized storage. Capacity is allocated automatically based on policies. When new storage hardware is added to the SAN, the SAN recognizes this and non-disruptively starts to incorporate that storage into existing workloads. Performance is optimized automatically by spreading workloads across as many spindles as possible, AND by moving hot data blocks to higher-performing storage tiers without administrative intervention.
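The hot-block movement described above can be sketched as a simple policy loop. This is an illustration only: the tier names, threshold value, and `rebalance` function are assumptions for the example, not how any particular array implements auto-tiering.

```python
# Hypothetical sketch of automated tiering: blocks whose access count
# crosses a policy threshold are promoted to the fast tier, and
# cooled-off blocks are demoted -- no administrator involved.

HOT_THRESHOLD = 100  # accesses per interval; an assumed policy value

def rebalance(blocks):
    """blocks: dict of block_id -> {'tier': str, 'accesses': int}"""
    for meta in blocks.values():
        if meta["accesses"] >= HOT_THRESHOLD and meta["tier"] != "ssd":
            meta["tier"] = "ssd"   # promote hot data to the fast tier
        elif meta["accesses"] < HOT_THRESHOLD and meta["tier"] == "ssd":
            meta["tier"] = "hdd"   # demote data that has gone cold

blocks = {
    "b1": {"tier": "hdd", "accesses": 250},  # hot: will be promoted
    "b2": {"tier": "ssd", "accesses": 3},    # cold: will be demoted
}
rebalance(blocks)
```

In a real array this policy runs continuously in the background; the sketch shows only the decision logic.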
Another aspect of abstracting storage objects from the underlying hardware is the scaling methodology. Traditional SANs with frame-based architectures add performance and capacity within the limitations of a “frame” (or enclosure) that forces the IT department to pony up for a pre-determined maximum. On the other hand, modular scale-out SANs eliminate the need to pre-determine a performance or capacity ceiling, and allow your storage to start small and grow as needed.
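The two scaling models can be contrasted in a toy sketch (class names and numbers are hypothetical, chosen only to illustrate the ceiling-versus-no-ceiling distinction):

```python
# Hypothetical sketch contrasting frame-based and scale-out SANs.

class FrameSAN:
    """Frame-based: capacity is capped by the frame you bought up front."""
    def __init__(self, max_slots, disk_gb):
        self.max_slots = max_slots   # the pre-determined maximum
        self.disk_gb = disk_gb
        self.slots_used = 0

    def add_disk(self):
        if self.slots_used >= self.max_slots:
            raise RuntimeError("frame full: a forklift upgrade is required")
        self.slots_used += 1

    def capacity_gb(self):
        return self.slots_used * self.disk_gb

class ScaleOutSAN:
    """Modular scale-out: each new node adds capacity with no fixed ceiling."""
    def __init__(self):
        self.nodes = []

    def add_node(self, capacity_gb):
        self.nodes.append(capacity_gb)  # no pre-determined maximum

    def capacity_gb(self):
        return sum(self.nodes)

frame = FrameSAN(max_slots=2, disk_gb=600)
frame.add_disk()
frame.add_disk()
# a third add_disk() raises RuntimeError: the frame IS the ceiling

grid = ScaleOutSAN()
grid.add_node(12_000)
grid.add_node(12_000)  # start small, keep growing as needed
```

The frame forces you to buy the ceiling on day one; the scale-out pool grows one module at a time.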
So as you’re charting your course toward private cloud, keep the storage piece in mind. Match the capabilities of your storage with that of your other virtualized resources. In future posts I’ll explore other aspects of storage for private cloud, including management integration.