
Five best practices for successful cloud backup

Even though the cloud can help speed disaster recovery, you still need to properly back up your cloud data and applications.

NEW YORK -- What does the Italian Abbey of Monte Cassino have to do with cloud backup? For starters, the abbey contained some treasures akin to the value of data today.

During the session titled "Technical lessons on how to do backup and disaster recovery in the cloud" at the Amazon AWS summit here, attendees got a history lesson from presenter Simone Brunozzi, Amazon Web Services (AWS) senior technology evangelist, on how the story of Italy's Abbey of Monte Cassino relates to cloud backup.

In the early 20th century, Monte Cassino held valuable treasures, including papal documents and paintings by Titian. In 1944, during World War II, the abbey was bombed, but not before two officers moved its treasures to the Vatican for safekeeping. Because of that backup plan, both the abbey and its treasures could be restored in 1954.

The Abbey of Monte Cassino helps illustrate the need for these five best practices for cloud backup:

1. Make sure cloud backup is accessible

You should be able to access your backed-up data easily, or else there's no point to cloud backup in the first place.

With AWS, "the customer owns their own data," Brunozzi said. That gives customers full control over their backups and lets them access that data without involving Amazon when a disaster occurs. Redundancy, AWS Import/Export, AWS Storage Gateway and Direct Connect can all support backup with AWS.
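
The session didn't include code, but that self-service access might be sketched with the AWS SDK for Python (boto3) as below; the bucket name, key layout and file paths are placeholder assumptions, not anything Brunozzi prescribed.

```python
# Sketch: self-service backup and restore against S3 with boto3.
# Assumes boto3 is installed and AWS credentials are configured;
# the bucket name and paths are placeholders.

BUCKET = "example-backup-bucket"  # hypothetical bucket name


def backup_key(system, date, filename):
    """Build a predictable key so backups stay easy to locate and restore."""
    return f"backups/{system}/{date}/{filename}"


def backup_file(local_path, key):
    import boto3  # deferred so the pure helper above works without the SDK
    boto3.client("s3").upload_file(local_path, BUCKET, key)


def restore_file(key, local_path):
    import boto3
    boto3.client("s3").download_file(BUCKET, key, local_path)
```

In use, a restore is the mirror of the backup: `restore_file(backup_key("crm-db", "2014-01-15", "dump.sql.gz"), "/tmp/dump.sql.gz")`, with no call to Amazon support in between.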

2. Consider scalability

AWS customers can scale backed-up data to multiple regions without informing Amazon, Brunozzi said. Amazon's Simple Storage Service, or S3, and Glacier help you scale your cloud backup to get the most users up and running after a disaster.
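
One way that S3-plus-Glacier scaling is often expressed is a lifecycle rule that ages older backups out of S3 into Glacier. The sketch below uses boto3; the prefix and retention period are placeholder assumptions rather than anything from the session.

```python
# Sketch: scale backup storage by tiering older objects from S3 into Glacier.
# The prefix and day count are placeholder assumptions.

def glacier_lifecycle_config(prefix, archive_after_days):
    """Build an S3 lifecycle configuration that moves aging backups to Glacier."""
    return {
        "Rules": [
            {
                "ID": f"archive-{prefix.strip('/')}",
                "Filter": {"Prefix": prefix},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": archive_after_days, "StorageClass": "GLACIER"}
                ],
            }
        ]
    }


def apply_lifecycle(bucket, config):
    import boto3  # deferred so the config builder works without the SDK
    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket=bucket, LifecycleConfiguration=config
    )
```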

3. Keep cloud backup secure

The Vatican was a pretty safe place to store Monte Cassino's treasures. Remember that if cloud backup isn't protected as well as your production data, it's worthless. Use Secure Sockets Layer (SSL) endpoints, signed application programming interface (API) calls and server-side encryption in AWS to keep your backups secure. AWS provides "durability through multiple copies across different data centers," Brunozzi said.
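
As a minimal sketch of those pieces fitting together, the boto3 snippet below asks S3 to encrypt an object at rest; the SDK signs each API call and talks to the HTTPS (SSL) endpoint by default. Bucket and key names are placeholders, and SSE-S3 (AES-256) is just one of the server-side encryption options AWS offers.

```python
# Sketch: server-side encrypted upload to S3 with boto3.
# Bucket and key names are placeholders.

def encrypted_put_params(bucket, key, body):
    """Request parameters asking S3 to encrypt the object at rest (SSE-S3)."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "AES256",
    }


def put_encrypted(bucket, key, body):
    import boto3  # deferred so the parameter builder works without the SDK
    # boto3 signs requests and uses the HTTPS endpoint by default.
    boto3.client("s3").put_object(**encrypted_put_params(bucket, key, body))
```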

4. Work hand in hand with a DR policy

As you back up cloud data and applications, keep your disaster recovery plan in mind. "Once your backup is safe, you should be able to recover it. You don't want to wait 10 years, like with Monte Cassino," Brunozzi said.

To make sure data can be recovered after a disaster, you can integrate storage with AWS and run services on Elastic Compute Cloud (EC2). Customers can back up snapshots to AWS, spin up EC2 instances, attach volumes created from those snapshots and have applications up and running in the cloud almost immediately.
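
That snapshot-to-running-instance path might be sketched with boto3 as follows; the snapshot ID, instance ID, availability zone and device name are placeholder assumptions.

```python
# Sketch: recover by building an EBS volume from a snapshot and attaching
# it to a running EC2 instance. All IDs, the zone and the device name
# are placeholders.

def newest_first(snapshots):
    """Order snapshot records so recovery starts from the latest backup."""
    return sorted(snapshots, key=lambda s: s["StartTime"], reverse=True)


def recover_from_snapshot(snapshot_id, instance_id, availability_zone,
                          device="/dev/sdf"):
    import boto3  # deferred so the pure helper above works without the SDK
    ec2 = boto3.client("ec2")
    volume = ec2.create_volume(
        SnapshotId=snapshot_id, AvailabilityZone=availability_zone
    )
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
    ec2.attach_volume(
        VolumeId=volume["VolumeId"], InstanceId=instance_id, Device=device
    )
    return volume["VolumeId"]
```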

AWS customers should also decide how much redundancy they want, weighing availability against cost. For instance, you can implement active/active or active/passive clustering to ensure high availability.
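
The shape of an active/passive decision is simple enough to sketch in a few lines; here the "nodes" are plain Python callables standing in for real service endpoints, purely for illustration.

```python
# Sketch: active/passive failover logic. The callables stand in for
# real service endpoints; any failure on the active node routes the
# request to the standby.

def call_with_failover(primary, standby):
    """Try the active node first; on any failure, fall back to the standby."""
    try:
        return primary()
    except Exception:
        return standby()
```

An active/active design would instead spread requests across both nodes all the time, which costs more but avoids the switchover delay.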

5. Identify who cares about the data

Cloud users' treasure is their data, and just like the officers at Monte Cassino, they care about its safety. So, it's important to clarify who is responsible for what by implementing ownership and access policies in your cloud environment.

For instance, you can set up roles and permissions using AWS Identity and Access Management for AWS users and groups. It's also a good idea for IT to create data logs so you know which users are accessing which data.
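
As a hedged illustration of such a policy, the boto3 sketch below builds a read-only IAM policy for a backup bucket and attaches it to a group; the bucket, group and policy names are placeholders, and this is one of many ways to scope access.

```python
# Sketch: a read-only IAM policy for a backup bucket, attached to a group.
# The bucket, group and policy names are placeholders.
import json


def read_only_backup_policy(bucket):
    """IAM policy document granting read-only access to one backup bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket", "s3:GetObject"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }


def attach_to_group(group_name, policy_name, policy):
    import boto3  # deferred so the policy builder works without the SDK
    boto3.client("iam").put_group_policy(
        GroupName=group_name,
        PolicyName=policy_name,
        PolicyDocument=json.dumps(policy),
    )
```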

"Logs are incredibly important, not because you want to know who to blame, but so you can find the bug and fix it," Brunozzi said. "Through logs, you're able to understand what went wrong."
