
The Cost of Owning and Storing Data

classic article - April 1999 - by Gene Nagle - Product Marketing Manager, Overland Data

Storage management has emerged as the key issue in today's exploding data environment. Hundred-terabyte and even petabyte sites are no longer uncommon, forcing managers to look beyond traditional methods of storing and securing critical corporate information.

A staggering statistic: for every dollar spent on storage hardware, companies will spend three dollars to manage the information stored on those devices. Given these realities, it is no wonder that storage management cost containment - the cost of ownership of data - has emerged as a primary IS issue.

Increasingly, IS managers have turned to storage automation technology to solve their data management problems. Advances in storage hardware and software have made solutions such as Hierarchical Storage Management (HSM), archiving and disk grooming automatic functions that are designed to optimize the investment in storage hardware while at the same time protecting valuable corporate information.

A key component in any automated storage architecture is a tape library. These libraries are typically equipped with tape drives capable of storing dozens of gigabytes of data on a single cartridge. Advances in drive technology, robotics, and software have so greatly improved access to information in these devices that tape libraries are increasingly being utilized to store active user data instead of the traditional backup and archival applications. Indeed, most storage automation schemes rely on moving less-frequently accessed files to secondary or tertiary storage - most commonly in an automated tape library - where the cost of storage is significantly less than with disk drives.

There is little argument about the cost benefits of automating storage installations. However, there is a great deal of discussion about how tape libraries fit into the picture, ranging from which tape drive technology to adopt to which library design to utilize. When it comes to total cost of ownership (TCO), the cost benefits of today's scalable library designs stand out.

The return on investment in storage automation hardware and software can come quickly, given the rising expense of managing information. According to a report from Strategic Research Corporation, total PC LAN data management costs averaged $206,000 per year, while UNIX data management costs were $147,000 per year - and fully a third of the expense was due to managing backup. The findings for the PC LAN market were even more dramatic: those managers spent an average of 954 hours per site, per year, on backup administration - defined as media handling and error handling - making it the most time-consuming component of the entire backup process.
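
As a rough illustration, the backup share of those figures works out as follows - a minimal sketch in Python, noting that the per-hour figure is derived here and does not appear in the report:

    # Figures quoted above from the Strategic Research report.
    pc_lan_mgmt_cost = 206_000   # USD per year, total PC LAN data management
    backup_share = 1 / 3         # "fully a third of the expense" was backup
    backup_admin_hours = 954     # hours per site, per year, on backup admin

    backup_cost = pc_lan_mgmt_cost * backup_share
    print(f"Backup management cost: ${backup_cost:,.0f} per year")
    # Indicative only: the report's cost figure may not be scoped per site
    # the way the hours figure is.
    print(f"Implied cost per backup-admin hour: ${backup_cost / backup_admin_hours:.0f}")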

Clearly, the role of storage automation will become increasingly important as companies move to reduce these data management expenses and implement a proven cost-saving technology.

While automation is the cornerstone of controlling overwhelming data management costs, administrators face a series of new issues when deciding the best approach to automating their storage environments. They must define an optimal mix of storage technologies using on-line disk and near-line tape (and possibly optical disk) to maximize data protection and security while minimizing costs. Automated tape libraries have become an indispensable part of this automation solution. But once again, faced with an almost overwhelming maze of choices, administrators need to weigh several factors in making a tape library buying decision, including capacity, performance, and the appropriate tape technology - 4mm, 8mm, 3490, Magstar or DLT. And one of the most critical buying factors is also one of the most often overlooked: total cost of ownership.

Quantifying the cost of ownership can be an imprecise process, but in any analysis there are several key components that must be considered when computing the cost equation. For tape libraries these include initial price of the library hardware, media costs, warranty, maintenance, repairs, cost of upgrading, cost of downtime and the cost savings generated by automating storage management operations.
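
A minimal sketch of that cost equation, with purely hypothetical dollar figures for illustration:

    # Cost-of-ownership components listed above; all figures hypothetical.
    def library_tco(price, media, warranty, maintenance, repairs,
                    upgrades, downtime, automation_savings):
        """TCO = sum of the cost components, less automation savings."""
        return (price + media + warranty + maintenance + repairs
                + upgrades + downtime - automation_savings)

    tco = library_tco(price=40_000, media=8_000, warranty=0,
                      maintenance=5_000, repairs=2_000, upgrades=10_000,
                      downtime=6_000, automation_savings=45_000)
    print(f"Illustrative TCO: ${tco:,}")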

Most of these cost components are straightforward and easily understood, but the last three items - automation cost savings, cost of downtime and the cost of upgrading - are perhaps the most critical and least discussed factors in the cost of ownership equation.


REDUCING THE COST OF UPGRADING:

Stringent new library uptime requirements, as well as customer demands for a solution to the cost-of-upgrading problem, have inspired some companies to create a new tape library architecture built around a scalable, high-availability design. The result is that highly scalable DLT libraries are now available. Manufacturers such as Overland offer a radical departure from traditional "monolithic" library designs, which restrict the user to a maximum number of drives and cartridges inside a single enclosure. Monolithic libraries also carry several single points of failure that can cripple the library and cut off user access to critical data.

A standard in expandable, modular DLT libraries has been available since early 1996. Users can begin a library implementation with a "base module" containing one or two DLT drives and an integrated, removable 10-cartridge storage magazine. These high-performance base modules meet today's network backup needs with native sustained transfer rates of over 36 GB per hour and native capacities of up to 350 GB. Then, whenever and wherever storage requirements increase, the scalable design delivers cost-effective modular expansion as either capacity or performance requirements grow.
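
As a quick sanity check on those native (uncompressed) figures:

    # Native figures quoted above for a single base module.
    capacity_gb = 350      # fully loaded 10-cartridge magazine
    rate_gb_per_hr = 36    # native sustained transfer rate

    print(f"Time to stream a full module: {capacity_gb / rate_gb_per_hr:.1f} hours")
    # -> about 9.7 hours with a single drive writing continuously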

Why force an IS manager to scrap an existing investment in library hardware and pay for unneeded capabilities simply to upgrade the library? Instead of purchasing a complete new library, as monolithic designs demand, scaling today's high-performance scalable libraries beyond a single base module simply requires additional modules, providing seamless expansion. Unique pass-through technology moves tapes at high speed from module to module, allowing any tape cartridge to be moved to any available drive in the system, or to any storage slot. And each base module can also operate in standalone mode, and can easily be removed from the system when a 150 to 350 GB library is needed for important departmental backup jobs.

The ability to expand capacity and performance modularly, in reasonable increments, is the most cost-effective feature of modular storage. Redundant components also help meet the high-availability demands of today's network computing environments.

Monolithic library architecture limits expandability to a fixed number of drives and cartridges. Some of these products allow customers to start with a minimum configuration of one or two drives and then add more, but always within the constraints of the single library enclosure. To expand the library's capabilities beyond these limits, users must purchase another complete library, even if their current storage needs require only a modest increase beyond what the single library provides.

The scalable design of modular systems places no such limits on users. Customers can start as small as a single base module with dual drives, then expand the system in customized, modular increments to meet changing storage requirements. The flexibility of this scalable model is advantageous in two key areas: users can expand the library by purchasing only the capabilities they require, and they can easily calculate their cost savings versus multiple standalone, discontiguous automated devices.

The expansion options offered by some manufacturers of scalable storage architectures enable users to grow the system to individual requirements: with capacity modules - containing only DLT cartridges - to boost overall library capacity, or with modules containing additional DLT drives to increase total library throughput and enable more users to retrieve data efficiently.

The cost savings of this approach can be significant. If a monolithic library system is fully populated with drives and cartridge slots, the only available expansion option is to purchase an additional complete library cabinet, with its own robotics, drives, control electronics, and cartridge slots - hardly a cost-effective purchase if the only need is for additional throughput. The customer has no other choice, however, as these "all-or-nothing" designs provide only a single, extremely expensive expansion path.
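
To make the contrast concrete, a sketch with entirely hypothetical prices (none of these figures come from the article):

    # Hypothetical prices for illustrating the expansion economics.
    monolithic_library = 60_000  # complete cabinet: robotics, drives, electronics
    drive_module = 12_000        # modular add-on containing extra DLT drives

    # Adding throughput to a full monolithic library means buying a
    # second complete library; a modular design adds only a drive module.
    print(f"Monolithic expansion: ${monolithic_library:,}")
    print(f"Modular expansion:    ${drive_module:,}")
    print(f"Difference:           ${monolithic_library - drive_module:,}")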


THE PRICE OF DOWNTIME:

It would require another long article to cover all of the issues involved in determining the cost of downtime. However, it is safe to say that in the new applications environment for tape libraries, zero downtime is the only acceptable target, and a modular, fault-tolerant design provides a level of protection against failure and downtime that is simply unavailable from monolithic designs.

Estimates of the cost of downtime to an organization span a vast spectrum, from $1,000 to $100,000 per hour - even $100,000 per minute for real-time transactions. A Gartner Group report posed the question, "How do you come up with a number you're sure is accurate?" Here's the short answer: "You don't. The real cost of downtime is elusive."

To a large extent, the cost of server downtime is tied to the applications environment - much higher for transaction processing and manufacturing environments, but still expensive for any organization. The Gartner Group also points out that downtime cost computations typically figure only the productivity loss to an organization; they ignore transaction loss, lost business, and customer dissatisfaction.
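
A minimal sketch of such an estimate, using the quoted hourly range and a hypothetical outage length - and remembering the Gartner caveat that this captures productivity loss only:

    # Hourly costs from the range quoted above; outage length hypothetical.
    hours_down = 4
    for cost_per_hour in (1_000, 100_000):
        print(f"At ${cost_per_hour:,}/hour: ${hours_down * cost_per_hour:,} lost")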

With the new applications for tape libraries requiring continuous access to information stored on tape, the same zero downtime demands that have traditionally been placed on network servers are now being applied to automated tape libraries.

High-performance scalable libraries bring a new level of reliability and fault tolerance to library design, with the goal of minimizing costly downtime. Quantifying the reliability of tape library designs is an issue that has been grossly misrepresented in some circles. Specifically, vendors of monolithic library designs have relied on traditional standalone reliability analysis in an attempt to discredit the reliability of modular designs. These detractors typically cite outdated reliability models that have proven invalid when applied to tape library technology.

According to Strategic Research, "a well-designed library with multiple drives will achieve a lower failure rate than just independent single drives because the library presents a consistent and controlled environment to the drives. Thus, the real world average failure rate for a tape library system is lower than for a combination of single drives."

That assessment runs counter to the conventional wisdom of design, which holds that the more parts a system has, the more likely it is to fail, all other factors being equal. However, all things are not equal here and, as noted by Strategic Research, this model of reliability does not apply to library technology.

A multiple-module installation is highly insulated against a catastrophic failure by having redundant components. Monolithic designs provide no similar insurance against component failure. Single failure points - such as control electronics or robotics - leave monolithic libraries much more susceptible to a catastrophic failure rendering the library inoperable and interrupting user access to data.
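
The intuition can be sketched with a simple probability model. The failure rates below are hypothetical, and the model assumes module failures are independent, which real installations only approximate:

    # Hypothetical annual failure probability per critical component.
    p_fail = 0.05
    modules = 4   # hypothetical multi-module installation

    # A monolithic library is down if its single robotics/controller
    # fails; the modular system loses all access only if every module
    # fails at once.
    p_monolithic_down = p_fail
    p_all_modules_down = p_fail ** modules

    print(f"Monolithic outage probability:  {p_monolithic_down:.2%}")
    print(f"All-modules outage probability: {p_all_modules_down:.6%}")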


QUANTIFYING AUTOMATION COST SAVINGS:

As demonstrated by the Strategic Research findings, data backup is the largest piece of the data management pie, and an area where automating the process with a tape library can pay immediate dividends. In general, the advantages of automated tape library technology include:

  • Unattended backup of all servers and critical workstation data across multiple tapes; manual intervention typically needed only once a week.
  • Automated media management ensures proper tape rotation and multiple generations of backup, preventing possible disaster if a tape is damaged.
  • Users can restore their own data, if permitted, without intervention by a network administrator.
  • Tape becomes viable as low-cost, near on-line storage; without a robotic tape library and an automated Hierarchical Storage Management system, this use is too labor intensive.

The advantages of an automated tape library are even more pronounced in a dual-drive configuration. There are four key points that make multiple drives almost mandatory for today's network storage environments.

  • In many backup applications with shrinking time windows, it is impossible to complete a backup in the allotted time without dual drives, which can cut backup times nearly in half (see the sketch after this list).
  • Having two drives allows for easy creation of off-site copies either by mirroring during backup or by producing off-line copies after the backup is completed.
  • When backing up mission critical data, having two drives eliminates an important potential point of failure.
  • With two or more drives in a robotic library, Hierarchical Storage Management runs more smoothly, with less chance that a user will have to wait in line for the demigration of a file.
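
The dual-drive arithmetic behind the first point, as a minimal sketch (hypothetical job size; overhead from robotics and media changes is ignored):

    # Two drives writing in parallel roughly halve elapsed backup time.
    data_gb = 300          # hypothetical nightly backup job
    rate_gb_per_hr = 36    # native sustained rate per drive

    print(f"One drive:  {data_gb / rate_gb_per_hr:.1f} hours")
    print(f"Two drives: {data_gb / (2 * rate_gb_per_hr):.1f} hours")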

CONCLUSION:

Automated tape libraries are an indispensable component in the battle to control storage administration costs. Buyers of library technology must take into account a variety of factors beyond price to make an intelligent buying decision. The only accurate way to choose the best product is to gauge the total cost of ownership, including factors such as the time and administrative cost savings from automating data backup and retrieval, the potential impact of library downtime, and the cost of upgrading - if, in fact, the library can be upgraded.

This analysis makes a strong case for modular scalability. No other type of library offers the cost savings that come from upgrading the library by adding only the specific capabilities demanded by each customer environment. For example, the user is not forced to buy additional drives or robotics - the norm with monolithic library designs - when the only need is for additional capacity. The modular design architecture also provides unprecedented protection against a catastrophic failure affecting library availability within the overall network storage environment. Simply put, scalable is better, and the LibraryXpress product from Overland Data represents the state of the scalable-library art. ...Overland Data profile

Editor's comments:- 10 years later, the relative costs hadn't changed very much, as this research press release showed.
IDC Calculates the Cost of Owning Storage
FRAMINGHAM, Mass. - March 16, 2009 - IDC estimates the total annual cost to "manage" the world's installed base of external storage is about 60% of all enterprise storage-related spending, including software, power, cooling, administration personnel, and services.

"As the industry attempts to control IT costs, specifically related to storage, IDC realizes that power and cooling costs are not the only costs associated with external storage," said David Reinsel, group vp for IDC Storage and Semiconductors research. "In fact, in the grand scheme of things, the cost to power and cool external storage pales in comparison with the cost to acquire and manage storage, including the costs for storage software and storage administrators."

While interest in storage efficiency technologies (e.g., deduplication, compression, and thin provisioning) has intensified - and these will impact power and cooling costs in the longer term - penetration of these technologies is still very low. ...IDC profile