Reduce your taxes # 2 – The SQL Server Backup Tax

In the first post of this series I discussed how some organizations over-provision their server resources in order to balance competing performance and data-protection SLAs. Let’s look in detail at one example of this: “the backup tax” of running native SQL Server backups.

Challenge: The Backup Tax – The following graph shows the results of testing backup processing against a transactional (TPC-C-style) workload running on SQL Server. It illustrates one example of a tax rate of 40%.

The graph shows that approximately 40% of the CPU cycles provisioned for SQL Server must be held in reserve to ensure that performance service levels are still met during a backup with the native SQL Server backup utility.

Conclusion: Simultaneously meeting service level agreements for performance and data protection while using the native SQL Server backup requires at least 40% over-provisioning of your database server resources. And that assumes you’re willing to run your database server at 100% utilization during the backup window. This is an exorbitant 40% tax rate that you shouldn’t have to pay.
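To put the arithmetic behind that figure in concrete terms, here is a minimal sketch in Python. The workload number is purely hypothetical (not taken from the referenced test); only the 40% reserve comes from the discussion above.

```python
# Illustrative arithmetic only: the workload figure below is hypothetical,
# not taken from the referenced test results.

def provisioned_cpu(workload_cpu: float, backup_reserve: float) -> float:
    """CPU capacity to provision so the workload still meets its SLA while
    a fraction `backup_reserve` of the box is consumed by the native backup."""
    return workload_cpu / (1.0 - backup_reserve)

workload = 60.0        # CPU units the TPC-C-style workload needs to meet its SLA
backup_reserve = 0.40  # fraction of provisioned cycles consumed during backup

capacity = provisioned_cpu(workload, backup_reserve)
dormant = capacity - workload
print(f"Provision {capacity:.0f} CPU units; {dormant:.0f} of them "
      f"({dormant / capacity:.0%}) sit idle outside the backup window.")
# -> Provision 100 CPU units; 40 of them (40%) sit idle outside the backup window.
```

In other words, 40% of every box you buy is there only to absorb the backup load.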

For more information on the testing that was done, please refer to: http://dell.to/w35fsv

Let’s hear from you: what are you paying in backup taxes? What is your tax-reduction strategy?

Posted in SQL Server | Tagged | 1 Comment

Reduce your tax rate to achieve IT efficiency

Yes, it’s that time of year again: we all start thinking about taxes. I know what you’re thinking: “What does this have to do with IT and enterprise efficiency?”

It comes down to managing SLAs. As you design your IT environment, you need to be cognizant of your Service Level Agreements. What do your users expect for performance? What agreement do you have with them about response time? And what agreement do you have with management about data protection: what are your Recovery Point Objectives and Recovery Time Objectives?

Databases are a great place to explore this balancing act. Databases underlie most of our business systems. As you size your database servers, you must ensure that SLAs for performance are met while at the same time making sure that SLAs for data protection are met. It turns out that this usually results in database servers being over-sized to deliver acceptable performance AND accommodate the processing load of establishing recovery points. This is a “tax” that almost every IT organization is paying, and with the right storage strategy you can reduce, and even eliminate, most of this tax.

In this series we’ll look at some lab testing of transactional database systems (a TPC-C-style workload) under different scenarios to determine what that tax rate might be. I think you will be surprised. The data shows a tax rate on the order of 40% being paid by IT departments. By that I mean that, on average, servers must be sized so that 40% of their headroom lies dormant in order to make sure that performance and data protection SLAs can be met. And this assumes that IT departments are willing to use all available headroom and run their servers at 100% utilization during data protection operations, which is rarely the case in the real world.

It is time to hear from you. What is your experience with this issue? Are you paying these taxes? Would you like to reduce your tax rate? Can you share any specific examples?

Posted in Oracle, SQL Server | Tagged | Leave a comment

Halloween Blackout

The Great Halloween storm of 2011 has come through the Northeast and gone, leaving over 2 million homes without power, hundreds of thousands of them for extended periods of time. We were among the more fortunate ones, having gone without power for just over 2 days. Half of my street still doesn’t have power back. Many of my co-workers are dealing with life without water, heat, light, refrigeration and many of the basic services that we take for granted. I’m not the only one to report that, for a while after the power returned, it was a little thrill to have something actually happen when a switch was flipped.

What was really striking to me, though, was that people seemed to be most upset about losing their phone and internet connections! I live in a town that reportedly has the fastest residential internet speeds in the nation. (See the NY Times article “For Idaho and the Internet, Life in the Slow Lane.”) We are used to instant gratification when it comes to all things digital. But with the power out, and telephone and internet service disrupted, we were stripped of our information connection. Smartphones either ran out of charge completely or had to be powered off to save what little battery was left. Laptops searched constantly for a not-to-be-found wireless network signal. Even with UPS battery-based power supplies, PCs couldn’t get a signal from the wire.

It seemed to be the younger generations who were most at a loss. We had to resort to things like reading the paper to get news, playing Scrabble for entertainment, walking to a friend’s house to check in, and connecting by offering to share food and warmth with a neighbor.

I sat with a friend last night and recalled a childhood less connected. If we got to watch an hour of television a day – say, a news program followed by a sitcom on a fuzzy 13” black-and-white screen – we were lucky. We couldn’t use the phone because it cost money. We ran out the back yard to join the neighborhood street hockey game. We sat in the tree fort to have a private chat. We hung out at the store to get the gossip. Mom called us in for dinner by ringing the bell hanging on the back porch. We warmed up by the wood stove in the kitchen.

If we gained something from our experiences in the Halloween storm, it was a reminder of how dependent we now are on our digital communications.

Posted in Personal | Leave a comment

RAC Storage: Scale-out Oracle database meets scale-out storage

Oracle Real Application Clusters (RAC) is an option for the Oracle Database. Oracle RAC is a cluster database with a shared-cache architecture that overcomes the limitations of traditional shared-nothing and shared-disk approaches to provide a highly scalable and available database. This architecture is an example of what is commonly called horizontal scaling, or “scale-out”. Scale-out allows multiple computers to be networked together, increasing both compute capacity and tolerance of individual system failure. This contrasts with vertical scaling, which adds compute capacity to an individual computer but with major limitations: there are limits to how much a single computer can scale with a balanced configuration, and vertical scaling increases the impact of a single system failure.

One of the benefits of this approach is a more “elastic” application infrastructure: one that can absorb seasonal spikes and reduce the over-provisioning of compute cycles. With the growing power of commodity x86-based servers and the Linux operating system, scale-out has also come to be seen as a cost-effective approach to database scalability and reliability.

A similar trend is now playing out in the storage components of the database infrastructure. Scale-out storage is built on modular storage building blocks, or “nodes”, each of which contains storage media, memory, networking and controller resources. The capacity, performance and throughput of a scale-out storage system grow with the number of storage nodes in the system. The more traditional scale-up approach features a single enclosure or “frame” with front-end connectivity, internal bandwidth and storage media; its capacity, throughput and performance are limited by the constraints of the frame.
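As a rough illustration of that difference, here is a toy model in Python. The per-node and frame figures are invented for illustration only, not measurements of any particular product.

```python
# A toy model of the scale-out vs. scale-up contrast described above.
# The per-node and frame figures are invented for illustration only.

from dataclasses import dataclass

@dataclass
class StorageNode:
    capacity_tb: float = 24.0        # usable capacity per node (hypothetical)
    throughput_mbps: float = 800.0   # aggregate bandwidth per node (hypothetical)

def scale_out_totals(node_count: int, node: StorageNode = StorageNode()):
    """Capacity and throughput grow with every node added to the system."""
    return node_count * node.capacity_tb, node_count * node.throughput_mbps

# A scale-up frame tops out at the limits of its enclosure, no matter
# how many shelves of disk are added behind it.
FRAME_THROUGHPUT_CEILING_MBPS = 3200.0

for nodes in (2, 4, 8, 16):
    cap, tput = scale_out_totals(nodes)
    print(f"{nodes:2d} nodes: {cap:5.0f} TB, {tput:6.0f} MB/s "
          f"(scale-up frame ceiling: {FRAME_THROUGHPUT_CEILING_MBPS:.0f} MB/s)")
```

The point of the sketch is simply that every node added to a scale-out system brings its own controller and bandwidth along with its capacity, while a frame-based system eventually hits a fixed ceiling.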

Scale-out storage is a great choice to complement an Oracle RAC implementation. The benefits include lower CAPEX, because storage can start small and grow over time as needed. Unlike with scale-up storage, there is no need to purchase a large frame up front to accommodate the possible storage needs for the life of the project. In terms of OPEX, in addition to reducing the hardware to manage, many newer scale-out offerings provide very strong benefits in terms of management simplification. These systems virtualize storage to abstract storage objects from the underlying storage hardware, allowing new storage resources to be provisioned simply and quickly without specialized knowledge of storage hardware configuration.

Many of the pure scale-up architectures have been around for a long time and have some mainframe-like characteristics from the perspective of tunability. These systems can be, and must be, highly customized. Though this tuning can deliver extreme performance at the high end, it also leads to longer lead times when adding new storage and a higher cost of ownership. Time-consuming and disruptive data migrations are a fact of life when moving to a new storage frame, and a migration is required when you outgrow the current frame’s capacity or undertake a periodic storage architecture refresh.

So take a look at your storage options when planning an Oracle RAC deployment. The economic and scalability advantages of scale-out storage are a good match for Linux/x86 grid-based database deployments. In a future post I’ll discuss some performance benchmarking that demonstrates the simple scalability of these types of systems.

Posted in Oracle | Leave a comment

Private Cloud Storage 1: the requirement for storage virtualization

OK, let’s get this out of the way up front. The perspective this commentary comes from is this:

  • Most organizations are going to evolve their infrastructure toward cloud computing, and virtualization is the on-ramp to the private cloud.
  • Shared storage is a requirement for enabling the mobilized-workload benefits of virtualization and therefore private cloud. 

This is where the flight attendant says, “If your destination isn’t New York’s LaGuardia Airport, you’d better de-plane right away!”

Now that you’re on the right plane, let’s get on our way. You’ve begun to virtualize and are wondering at what point you are going to be able to cross that line to private cloud computing. The answer is that you’ll reach private cloud when you have:

  1. adopted standardized building blocks for your infrastructure,
  2. adopted integrated infrastructure management, and
  3. automated service delivery.

Integrating storage management with virtual server management is critical. However, the dirty little secret of the storage industry is that not all shared storage is actually appropriate for private cloud, though some vendors would have you think otherwise. Here’s why: traditional shared storage still binds storage objects to physical resources! If you want to move toward private cloud, you need to adopt Virtualized Storage. With Virtualized Storage there is an abstraction layer between storage hardware and storage objects, directly analogous to server virtualization.

But what does this mean – “an abstraction layer between storage hardware and storage objects”? This isn’t just thin provisioning, folks, though thin provisioning is certainly one capability of virtualized storage. It means that administrators no longer have to know which volumes are pinned to which disks, or for that matter which disks are tied to which RAID policy, which trays are connected to which loops, and so on. Virtualized storage leaves behind the complexity associated with traditional shared storage.

Extensive automation is a hallmark of virtualized storage. Capacity is used automatically based on policies. When new storage hardware is added to the SAN, the SAN recognizes this and non-disruptively starts to incorporate that storage into existing workloads. Performance is optimized automatically by spreading workloads across as many spindles as possible, AND by moving hot data blocks to higher-performing storage tiers without administrative intervention.
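To make the tiering idea concrete, here is a minimal sketch of that kind of policy in Python. It is purely illustrative and not modeled on any particular vendor’s implementation; the block names, tier labels and access counts are all made up.

```python
# A minimal sketch of automatic, policy-driven block tiering.
# Purely illustrative; not any vendor's actual implementation.

from collections import Counter

FAST_TIER_SLOTS = 4   # how many blocks the fast (e.g. SSD) tier can hold -- toy number

def retier(access_counts: Counter) -> dict:
    """Place the hottest blocks on the fast tier and everything else on the
    slow tier, with no administrator involvement."""
    hot = {block for block, _ in access_counts.most_common(FAST_TIER_SLOTS)}
    return {block: ("fast-tier" if block in hot else "slow-tier")
            for block in access_counts}

# Simulated per-block access counts over one sampling interval.
io_sample = Counter({"blk-01": 950, "blk-02": 12, "blk-03": 780,
                     "blk-04": 3, "blk-05": 640, "blk-06": 510})

for block, tier in sorted(retier(io_sample).items()):
    print(block, "->", tier)
```

A real array would run something like this continuously against I/O statistics, which is what lets hot data drift up to faster media without anyone filing a change request.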

Another aspect of abstracting storage objects from the underlying hardware is the scaling methodology. Traditional SANs with frame-based architectures add performance and capacity within the limitations of a “frame” (or enclosure) that forces the IT department to pony up for a pre-determined maximum. On the other hand, modular scale-out SANs eliminate the need to pre-determine a performance or capacity ceiling, and allow your storage to start small and grow as needed.

So as you’re charting your course toward private cloud, keep the storage piece in mind. Match the capabilities of your storage with those of your other virtualized resources. In future posts I’ll explore other aspects of storage for private cloud, including management integration.

Posted in Cloud Computing | Leave a comment

The Top of the Chart – focus on customer outcomes

As sales and marketing professionals in the high tech world we are responsible for helping create and deliver customer-centric value propositions for our products and services. The idea is that we need to get out of the mode of talking about product features and functions and instead focus on benefits to the customer.

“ROI!” is the frequent cry. We must focus on the return that a customer gets from a project. After all, every project has costs, and those outlays must come out of the budget. Budgets are shrinking or flat in these challenging times. We must wring cost out of the system! Lower costs mean higher returns, right?

Wrong. Well, sort of wrong. The problem is that we frequently put so much focus on costs that we neglect the most exciting part of the conversation, which is the customer outcome: the new capability that will help grow our customer’s top line.

There have been a lot of studies showing that IT expenditures today consist largely of outlays that serve only to maintain the status quo. The lion’s share of IT budgets goes to “just keeping the lights on.” The number varies, but most estimates reinforce the Pareto principle: around 80% of IT budgets go toward maintaining existing capabilities while around 20% goes toward new capabilities and innovation. (Update: I just read the 2011 IT Budget Planning Guide For CIOs, published 10/10/2010 by industry analyst Forrester. They define a concept called MOOSE, an acronym for the IT spending needed to Maintain and Operate the Organization, Systems, and Equipment. Across all industries and company sizes in their survey, MOOSE was slightly over 70%.)

I work at Dell, which has an interesting way of describing the opportunity with the following chart:

Let’s focus on the “Top of the Chart”. What it says is that if we can reduce the percentage of the budget that is spent on maintenance, we can increase the spending focused on strategic initiatives that improve the top line. Here is a great discussion with Dell’s CIO Robin Johnson in which he describes Dell’s initiative to make this transformation with its own IT spending.

Your challenge is to figure out what the “Top of the Chart” looks like for your customer. For example, when you are at a hospital talking about IT in health care, the top of the chart might be “improved patient outcomes”. For a retailer, the top of the chart might be “Gross Margin Return on Space”.

This type of conversation is what will win you a seat at the table with the CIO and status as a trusted advisor to your customer.

Posted in Sales and Marketing | Leave a comment

Lessons from Harry – Sometimes it’s better not to know

I’m a private pilot. Or, that is to say, I have a pilot’s license, though I’m not active right now. Earning my pilot’s license was one of the more interesting things I’ve done, and it gave me a great sense of accomplishment. Like a lot of things in life, you can take a fast track to getting your pilot’s license, or you can take your time and let it develop as just one of many areas of interest; I took the latter route.

Loading up N46525

My first flying instructor was an older gentleman named Harry. I’m not sure how we found him. I was working at Pratt & Whitney in Hartford, CT. My friend, co-worker and roommate Tony Salvo and I decided that we would learn to fly. We each spent an hour every week or so flying with Harry.

Harry had a little white-and-green Cessna 150 that he called “The Grasshopper”. I always thought he called it that because it was green, but thinking back on it, maybe there was more to it? Much like a young Kwai Chang Caine, who was called “Grasshopper” by his teacher in Kung Fu, I was learning from a master who had many lessons to teach but few words to describe them.

Sometimes Harry would fall asleep when we were flying. I was in the left seat (the pilot’s position) and Harry was in the right. We would take off and go out exploring the Connecticut countryside. The rumbling of the engine, the vibration of the airframe and Harry’s age all combined to lull him into a relaxed state. But when Harry did have something to say, it was usually deep and layered with meaning. I would often think of the things he said later and find new meaning in them. I still do.

One day Harry said to me, “Hey Bob, I’ve got a new night-time emergency procedure for you!” “OK, Harry,” I said. “What is it?” Harry replied, “If you’re going down at night, make sure to turn on your landing light.” (Airplanes have a “landing light”, which is like a headlight on a car, but you only use it on the final approach as you’re landing so you can judge your landing flare.) “OK,” I said. Harry then said with a wry smile, “If you like what you see, leave it on. If you don’t, turn it off!”

Posted in Personal | 1 Comment