Surviving Non-traditional Data Disasters
by Ian Masters, sales director at Sunbelt System Software - May 2005
Many companies
associate disaster recovery with catastrophic events - earthquakes, floods,
fires and other natural or man-made disasters that make data recovery from
production machines nearly or totally impossible. While organisations must plan
for such events, it's just as important to prepare for less cataclysmic
possibilities, which can just as easily bring business to a halt.
Many "non-traditional" disasters can impact the operations of your organisation. For example, gas leaks and other facilities issues typically don't cause permanent damage but they can easily make the entire building unusable for days or even weeks. Police investigations, fumigations and other unavoidable problems can arise without warning, prohibiting users from accessing data systems and possibly your entire office space. Companies can recover from the destruction of data and/or data systems with tape backups, replicated copies and other tools. But what happens when a disaster doesn't take out the data centre - or even destroy the data? Non-catastrophic disasters can still cause a significant period of system downtime. Initially you will need to follow some basic steps of creating any DR plan. Firstly, dedicate an individual or team, dependant on the size of your organisation, who are responsible for ALL aspects of the DR plan. Then continue with the following: | |||||||||||||||||||
Generally, when non-traditional disasters occur, you must make some tough decisions about how to handle the situation. Can you access data systems remotely, or will it be necessary to set up everything in a temporary location? If you have remote access, you can find employees temporary space and let them continue working on the original systems. If employees can't access data systems from another location, you face even tougher choices about how to proceed - starting with how you're going to restore data.
If you have replicated data systems in a disaster recovery location, you can decide whether to wait out the disaster or fail over to the alternative systems.
Remember that failover will require restoration operations to the original systems when the emergency is over, so short-term outages may be something you just need to muddle through. If the outage will continue for a significant period of time (based on your organisation's needs), it may be necessary to perform failover and eventual restoration operations to get back up and running.

If you don't have replication or other mirroring tools, you must either wait out the problem or restore from tape and/or other backups. In this case, you've hopefully been storing tape backups off-site (even if that means you've simply taken them home with you). If not, a non-traditional emergency could take systems offline for the entire duration of the outage, however long that turns out to be. If you do have backup tapes, you can restore them to temporary servers in another location and get back to business quickly. Keep in mind that this also means you'll need to perform the same operation in reverse, with the new tapes made from the temporary systems, to get back in action in your original environment.

This is why planning and documentation are so important. Your business has to be able to make hard decisions even when key members of staff are unavailable.
Can you determine how long the
problem is going to last? What DR systems do you have in place? Will
implementation and restoration of your DR plan, or some part of it, actually
have any real business benefit? Will implementation end up restricting access
to data and/or applications in the long term?
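One way to picture the trade-off those questions feed into is a rough rule of thumb: ride out any outage the business can tolerate, and fail over only when switching to the alternative systems (and eventually restoring back) causes less disruption than simply waiting. The sketch below is illustrative only; the threshold logic and figures are assumptions, not a prescribed method.

# Illustrative only: a crude decision rule for "wait it out or fail over".
# Real decisions also weigh staffing, confidence in the outage estimate,
# and the business impact of running on alternative systems.
def should_fail_over(expected_outage_h: float,
                     tolerable_downtime_h: float,
                     failover_h: float,
                     failback_h: float) -> bool:
    if expected_outage_h <= tolerable_downtime_h:
        return False   # short outage: muddle through on the original systems
    # Fail over only if switching now, plus the restoration work needed to
    # return later, costs less disruption than simply waiting it out.
    return failover_h + failback_h < expected_outage_h

# Hypothetical figures: building unusable for ~3 days, the business can
# tolerate 8 hours, failover takes 4 hours, failing back later takes 12.
print(should_fail_over(72, 8, 4, 12))   # -> True (fail over)
print(should_fail_over(6, 8, 4, 12))    # -> False (wait it out)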
Regardless of what type of DR systems you've implemented, non-traditional disasters require making some tough, quick decisions. In many cases, you'll eventually be able to get back to your original location, but what you do in the interim could make or break your business. ...Sunbelt System Software profile
Editor's footnote: since a gas leak was one of the scenarios mentioned in the above article, I've included this story from our news archive.

ActionFront Recovers Data from 21 Chlorine-Drenched Servers after Train Disaster

Atlanta, GA - February 24, 2005 - ActionFront Data Recovery Labs recently assisted Avondale Mills to recover from a major disaster. The headline in the Augusta Chronicle on January 12, 2005 could not have been more definite and final: "Avondale Mills' Electronic Records Destroyed in Wreck." This was a follow-up story about the January 6 Graniteville, SC train disaster, which precipitated an evacuation and claimed nine lives. The wreck also destroyed computers in seven Avondale sites, including the company's data processing center, located just yards from the wreck site. Among the lost data were production programs and the company's financial records.

Due to the diligence of the IT team at Avondale, it turns out the bad news in the headline was premature. While the 90 tons of chlorine gas released by the massive train wreck destroyed all the circuit boards, cabling and other parts in the servers, virtually all of the data was in fact recovered.

Acting on the recommendation of Restoration Technologies Incorporated, Avondale's CIO Barry Graham located ActionFront Data Recovery Labs and engaged their 24/7 Critical Response Team for the assessment and possible recovery of one of the damaged servers. Soon after the authorities allowed access to the wreck site, ActionFront's Atlanta lab received the set of damaged drives and went to work.

The non-functional server drives had all suffered extensive corrosion and looked like complete write-offs at first glance. Happily, it turned out that the corrosion had not seriously compromised the platters (the internal disks in the hard drives) or the data on them. Wearing protective equipment to shield themselves from toxic materials, the data recovery specialists first removed all damaged parts and cleaned up the drives. Circuit boards and other parts that matched the damaged drives were located in ActionFront's massive parts inventory (20,000 drives and counting), and the drives were revived long enough to access the still-readable raw content and make close-to mirror-image quality copies. ActionFront then used the images to rebuild the data into usable files.

Encouraged by the first successful recovery, Avondale hand-delivered additional sets of server drives for recovery. Ultimately, the data from 21 servers (over 100 drives) was recovered in the space of two weeks as Avondale employees travelled back and forth between Graniteville and Atlanta and multiple ActionFront teams worked 24/7 on the damaged media. ActionFront procured additional parts and equipment as needed, and the teams solved many new problems, including working with certain enterprise storage architectures that added to the complexity of the process.

Mr. Graham says: "Of course we could have rebuilt our servers from our backups, but we would have lost our most recent transactions and changes. Working with ActionFront meant that we were able to restore up-to-date and complete data-sets, greatly enhancing our business continuity."

Avondale has overcome many challenges in the wake of such a tragedy. Thanks to ActionFront, the potential data loss issues were quickly resolved. ...ActionFront Data Recovery profile