Storing data in the public cloud has its obvious advantages. Cloud’s elastic provisioning capabilities give you access to additional storage space when you need it. What you choose to store in the cloud versus on local systems depends largely on the kind of data involved. Here are a few types of data that are a good fit for the cloud.
Customer-facing data. If your company has large amounts of customer-facing data, such as catalogs of merchandise, it makes sense to host that data in the cloud, where it can be copied redundantly as needed, geographically distributed, or provisioned up or down according to customer demand. The adage “Put data closest to the people who need it” applies here.
Distributed-access data. Data that’s accessed from several locations, particularly read-only data or data that is synchronized periodically from a central source, is a good fit for the cloud. Public cloud has fewer physical constraints on storage -- you can provision as much as you need and your budget allows -- but IT admins must also take into account bandwidth requirements and possible latency issues.
Data backups. Backing up data from a local system such as a desktop or an enterprise data center to a cloud host is a good example of an instance in which cloud-based storage makes sense. Bandwidth and storage space are two limiting factors; the more of each you have at your disposal, the easier it is to mirror local data in the cloud. Retrieving data from a cloud-based backup, however, can be tricky if you’re dealing with terabytes of data. If siphoning that data from the cloud over the network is prohibitive, ask your cloud provider to send you a physical copy of your data.
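The core of an incremental cloud backup is deciding which local files have changed since the last run; only those need to cross the wire. A minimal sketch of that change-detection pass in Python (the upload step itself would come from your cloud provider’s SDK -- the `client.upload_file` call in the comment is hypothetical):

```python
from pathlib import Path

def files_to_back_up(root, last_backup_ts):
    """Return files under root modified since the previous backup timestamp."""
    return [p for p in Path(root).rglob("*")
            if p.is_file() and p.stat().st_mtime > last_backup_ts]

# Each changed file would then be handed to the provider's SDK, e.g.:
# for path in files_to_back_up("/data", last_run_timestamp):
#     client.upload_file(str(path), bucket, str(path))  # hypothetical client
```

Recording the timestamp of each successful run lets the next run skip everything that hasn’t changed, which is what keeps bandwidth use manageable as the data set grows.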
The case for clutching your data
Certain types of data, for one reason or another, are best kept in a local data center or private cloud. Here are a few examples of data that should be kept on-premises.
Mirrored copies of data. In some cases, mirrored copies of data could be considered “backup in reverse.” Copies of data stored in the cloud are synchronized passively to one or multiple hosts. Egnyte, for example, is a service that uses a VMware-hosted appliance to perform local synchronization with an enterprise’s private cloud.
Sensitive data. Some organizations choose to keep sensitive customer data local because of security concerns or to adhere to certain regulatory guidelines, such as the Health Insurance Portability and Accountability Act (HIPAA). On a practical level, at-rest and in-transit encryption, more comprehensive service-level agreements (SLAs) and other safeguards have helped restore enterprises’ trust in housing sensitive data in the cloud. But security is as much about perceptions as it is about actual procedures, and some enterprises are simply more comfortable keeping sensitive data local.
Synchronized data. Even though it’s becoming increasingly possible to ensure multiple copies of a piece of data remain consistent and in sync, sometimes the only way to guarantee this is to keep one copy where you use it most often -- locally.
Often, enterprises will keep some data in the cloud and related data on-premises. If they must keep that data synchronized, one major consideration is application-aware synchronization. If you’re dealing with data that exists as files, this isn’t complicated. But sophisticated databases, for instance, must be synchronized according to application.
Live-mounted databases need to be synchronized to and from the cloud via attendant applications. In many cases, those apps must be able to see the sync target as a conventional file system, or the apps would need an extension that allows them to easily transfer data in and out of the cloud.
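For file-based data, the “not complicated” case above often amounts to comparing content digests on each side and transferring only the files that differ. A minimal sketch, assuming you can obtain a name-to-digest listing from each side (how the remote listing is fetched depends on your sync tool):

```python
import hashlib

def digest(path):
    """SHA-256 of a file's contents, read in chunks to handle large files."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def out_of_sync(local_digests, remote_digests):
    """Names whose copies differ, or that exist on only one side."""
    names = set(local_digests) | set(remote_digests)
    return sorted(n for n in names
                  if local_digests.get(n) != remote_digests.get(n))
```

Databases don’t get this luxury: because their on-disk files change constantly and must stay internally consistent, the comparison and transfer have to go through the application (a dump, replication stream, or vendor sync API) rather than through raw file digests.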
Large databases. In some cases, it’s not practical to remotely host instances of data, or it doesn’t provide any business advantages. For example, you may not need to mirror a large database that only a select number of people access to several locations. On the other hand, housing “big data” in the cloud is a good fit for data that needs to be accessed broadly, whether as a public resource, for data analytics or for business intelligence (BI).
Serdar Yegulalp wrote for Windows Magazine from 1994 through 2001, covering a wide range of technology topics. He now plies his expertise in Windows NT, Windows 2000 and Windows XP as publisher of The Windows 2000 Power Users Newsletter and writes technology columns for TechTarget.
This was first published in May 2012