5 centralized logging best practices for cloud admins

Centralized logging -- particularly within hybrid and multi-cloud environments -- can boost an IT team's monitoring strategy and accelerate troubleshooting.

For many enterprises, logging is distributed across multiple cloud providers, data centers, devices and applications -- a model that can complicate cloud administration tasks.

Log sources are likely in different formats and accessed in different ways. These diverse logging habits inhibit the effective use of logs to diagnose problems, perform capacity planning and monitor security and compliance policies. In hybrid and multi-cloud environments, centralized logging is essential to maintain visibility of an application's components and dependencies.

To get started, follow these centralized logging best practices.

1. Understand logging goals

Consider the scope of cloud logging and application performance management (APM) requirements. Exactly how admins achieve a centralized view for cloud APM and logging will depend on the cloud computing model in use:

  • Single public cloud. Opt for cloud-native logging tools available from the cloud provider.
  • Hybrid cloud. Extend current on-premises logging practices to the cloud.
  • Multi-cloud. Aggregate logs from every provider into a single store, then analyze the combined data.
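The multi-cloud aggregation step can be sketched in a few lines. In this hedged Python example, the field names, sample entries and common schema ("ts", "level", "msg", "source") are invented for illustration -- the point is only that differently shaped records from two sources are normalized before central storage:

```python
import json
from datetime import datetime, timezone

# Hypothetical normalization step: two sources emit logs in different
# shapes; both are mapped onto one common schema before shipping to
# the central store. Field names are illustrative, not a standard.

def normalize_cloud_entry(entry: dict) -> dict:
    # e.g. {"timestamp": 1700000000000, "logLevel": "ERROR", "message": "..."}
    return {
        "ts": datetime.fromtimestamp(entry["timestamp"] / 1000,
                                     tz=timezone.utc).isoformat(),
        "level": entry["logLevel"],
        "msg": entry["message"],
        "source": "cloud-a",
    }

def normalize_syslog_line(line: str) -> dict:
    # e.g. "2023-11-14T22:13:20+00:00 ERROR disk full"
    ts, level, msg = line.split(" ", 2)
    return {"ts": ts, "level": level, "msg": msg, "source": "on-prem"}

records = [
    normalize_cloud_entry({"timestamp": 1700000000000,
                           "logLevel": "ERROR", "message": "disk full"}),
    normalize_syslog_line("2023-11-14T22:13:20+00:00 ERROR disk full"),
]
print(json.dumps(records, indent=2))
```

Once every source lands in the same schema, downstream analysis no longer has to care which cloud or data center an event came from.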

Additionally, consider the use of the logged data, and whether it's necessary to see logged events in real time. Real-time data collection will add to the cost and complexity of centralized logging. However, it's essential if admins expect to use central logs to diagnose issues as they unfold.
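The trade-off can be illustrated with Python's standard library alone: a MemoryHandler buffers records and forwards them in batches, yet flushes immediately at ERROR or above, so urgent events still arrive in near real time. The "collector" below is just an in-memory list standing in for a network shipper:

```python
import logging
from logging.handlers import MemoryHandler

shipped = []  # stand-in for the central collector

class CollectorHandler(logging.Handler):
    def emit(self, record):
        shipped.append(record.getMessage())

# Buffer up to 100 records; flush the whole batch as soon as an
# ERROR-level (or higher) record arrives.
buffered = MemoryHandler(capacity=100, flushLevel=logging.ERROR,
                         target=CollectorHandler())

log = logging.getLogger("app")
log.setLevel(logging.INFO)
log.addHandler(buffered)

log.info("routine event")      # buffered, not yet shipped
log.error("payment failed")    # ERROR triggers an immediate flush
buffered.flush()               # flush any remainder at shutdown
```

Routine events travel cheaply in batches, while failures reach the central log at once -- a middle ground between full real-time collection and pure batch shipping.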

2. Separate application and resource logging

Application logging and resource logging should be distinct layers. Applications will often log their own conditions. Platform resource logs -- for hosting and middleware -- are generally available from the platform itself. While the goal of centralization is to bring everything together, don't mix these two logging sources. Effective APM practices depend on isolating problems to applications or resources.

Admins should log enough information to determine the relationship between applications and their underlying resources. With the cloud and other virtual-hosting technologies, the connection between applications and hosting resources is soft because the application "sees" only the virtual resource. It is critical to map the application to the physical resource to make sense of the two log layers. This mapping is also critical to correlate issues, because applications that share a physical resource won't appear to share anything at the virtual level.
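One sketch of this mapping, using Python's standard logging module: a filter stamps every application record with both the virtual host the application sees and the physical resource beneath it. The physical-host value here is a hard-coded placeholder assumption; in practice it would come from the provider's instance metadata or inventory system:

```python
import logging
import socket

class ResourceContextFilter(logging.Filter):
    """Attach virtual and physical host identity to every record."""
    def filter(self, record):
        record.virtual_host = socket.gethostname()  # what the app "sees"
        record.physical_host = "rack42-node7"       # placeholder lookup
        return True

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s "
    "[%(virtual_host)s/%(physical_host)s] %(message)s"))

log = logging.getLogger("orders")
log.setLevel(logging.INFO)
log.addFilter(ResourceContextFilter())
log.addHandler(handler)

log.info("order accepted")
```

With both identities on every record, the central system can correlate events from applications that share a physical host even though their virtual views never overlap.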

3. Know what to log and for how long

Even the best centralized logging strategy can get bogged down in data volume. If admins store too much log data, it can negatively affect performance, raise costs and make it hard to discern useful information. Do not log just for the sake of logging; there should be a specific reason to log something.
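As a minimal illustration of enforcing a retention window at the collection point, Python's TimedRotatingFileHandler rotates the log daily and discards files beyond a set age. The path and the 30-day window below are illustrative, not a recommendation:

```python
import logging
import os
import tempfile
from logging.handlers import TimedRotatingFileHandler

logdir = tempfile.mkdtemp()  # illustrative location

handler = TimedRotatingFileHandler(
    os.path.join(logdir, "central.log"),
    when="midnight",   # rotate once per day
    backupCount=30,    # keep 30 rotated files; older ones are deleted
)

log = logging.getLogger("central")
log.setLevel(logging.INFO)
log.addHandler(handler)

log.info("retained for 30 days, then dropped")
handler.flush()
```

Baking the retention rule into the handler means unused history ages out automatically instead of accumulating as cost and noise.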

Even if admins are careful about what they elect to log centrally, they should review logging processes at least twice a year to identify any collected data that was never used. Additionally, do not log personally identifiable information, as this likely violates security and compliance policies.
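One way to enforce the PII rule at the source is a redaction filter that scrubs messages before they ever reach the central store. This sketch handles only email addresses with a deliberately simple pattern; a real policy needs broader coverage (names, account numbers, IP addresses):

```python
import logging
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")  # simplistic on purpose

class RedactPII(logging.Filter):
    """Replace email addresses in log messages before emission."""
    def filter(self, record):
        record.msg = EMAIL.sub("[REDACTED]", record.getMessage())
        record.args = ()  # message is now fully formatted
        return True

sent = []  # stand-in for the central store

class Capture(logging.Handler):
    def emit(self, record):
        sent.append(record.getMessage())

log = logging.getLogger("pii-demo")
log.setLevel(logging.INFO)
log.addFilter(RedactPII())
log.addHandler(Capture())

log.info("password reset for alice@example.com")
```

Scrubbing at the producer keeps sensitive values out of every downstream copy, which is far easier than purging them from a central store after the fact.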

Don't forget log security

Every management portal into a system is a potential attack vector. Ensure that log access is secure and that the applications that access log files are themselves protected. In a few cases, security and compliance requirements may be stringent enough that certain logs should be excluded from the centralized system altogether.

4. Visualize log data

In addition to textual analysis, create useful visualizations of log data. Many organizations don't use their central logs as much as they could because, even when admins avoid superfluous logging, it is difficult to find anything in a maze of textual entries. Prioritize visualization capabilities when evaluating log management tools.
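As a toy illustration of why visual summaries beat raw text, even a crude per-level histogram (built here from invented sample entries) surfaces an error spike that scrolling through entries can easily hide:

```python
from collections import Counter

# Invented sample entries standing in for a central log stream.
entries = [
    "INFO start", "INFO ok", "ERROR timeout",
    "ERROR timeout", "WARN slow", "ERROR timeout",
]

# Count occurrences of each severity level.
counts = Counter(line.split()[0] for line in entries)

# Render a minimal text bar chart, most frequent level first.
for level, n in counts.most_common():
    print(f"{level:<6} {'#' * n}")
```

Dedicated log management tools do this at scale with dashboards and time-series charts, but the principle is the same: aggregate first, then look at the shape of the data.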

5. Choose the right log management tool

There are various options for centralized logging tools. To choose the right one, organizations should compare their cloud provider's offerings against third-party and open source tools.

Provider-native tools. AWS, Microsoft Azure and Google Cloud offer various log management tools, such as Amazon CloudWatch Logs, Azure Monitor Logs and Google Cloud Logging. These providers offer both cloud log data collection and tools that can ingest hybrid and multi-cloud logs. It's generally easiest for companies to adopt the cloud logging framework of their dominant provider and ingest other logs.

Third-party options. Third-party products -- some of which are based on open source technologies -- include Dynatrace, Datadog, New Relic and Splunk. Many of these products are APM suites that include visualization and centralized collection capabilities.

Open source options. Open source options include Elastic Stack, Graylog, Fluentd, Rsyslog, LOGalyze and NXLog. Many are available in both self-supported and commercially supported enterprise editions. Features vary widely, so evaluate each product carefully. That said, one of these open source options might be best if a company has no dominant public cloud provider, focuses primarily on on-premises logging or has logs that require custom ingestion.

Carefully pick centralized logging and monitoring products. It's difficult to change products without disrupting APM practices and risking changes in visibility and information clarity. As always, good planning pays off.
