The physical limitations of networking, coupled with geographic distance, introduce transmission delays of varying magnitude between servers and client devices. Moving an application off on-premises hardware to gain on-demand scalability solves only some scaling and performance problems. Network latency, dropped packets and other networking issues can still adversely affect the end-user experience.
Although we cannot bend the laws of physics, we can avoid some of latency's more deleterious effects on application performance, using CDNs, TCP optimization and peering agreements.
Replicating static content using CDNs
Static content, such as text, images and audio files, is an important element of websites and applications. Because this content, by definition, does not change frequently, we can take advantage of replication to reduce the time between a user requesting content and receiving it. Content delivery networks (CDNs) are services composed of geographically distributed servers that store content for their customers. When a user requests content, the request is routed to the closest server. A user in Chicago, for example, may receive content from a server in New York, while a user in Amsterdam requesting the same piece of content may receive it from a server in Berlin.
CDNs manage the replication of content across their servers. You upload content to one site, and the CDN distributes updates as needed to refresh the other servers.
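The routing decision described above can be sketched in a few lines. This is a simplified illustration, not how any particular CDN is implemented: real CDNs typically steer requests with DNS or anycast, and the server names and coordinates below are assumptions chosen to match the Chicago/Amsterdam example.

```python
import math

# Hypothetical edge-server locations as (latitude, longitude) pairs.
EDGE_SERVERS = {
    "new_york": (40.71, -74.01),
    "berlin": (52.52, 13.40),
}

def haversine_km(a, b):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_edge(user_location):
    """Route a request to the geographically closest edge server."""
    return min(EDGE_SERVERS,
               key=lambda name: haversine_km(user_location, EDGE_SERVERS[name]))

# The article's example: Chicago resolves to New York, Amsterdam to Berlin.
chicago = (41.88, -87.63)
amsterdam = (52.37, 4.90)
```

Distance is only a proxy here; production systems also weigh server load and network conditions when choosing an edge node.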
Optimizing TCP for dynamic content
Often, an application will generate dynamic content in response to a user's request. For example, a user might query an application for a list of transactions that meet some criteria. Replicating static content will not help here.
Consider Transmission Control Protocol (TCP) optimizations when tuning applications that generate dynamic content. TCP is the protocol used by most applications that require reliable communications channels. TCP guarantees that packets will be delivered to their destination device and assembled in the correct order or an error will be raised.
Networks are subject to outside interference, such as noise interfering with a signal, and internal limitations, such as insufficient buffer capacity to handle high traffic volumes. These kinds of problems can cause TCP packets to be dropped.
The device receiving packets must maintain specialized data structures to track which packets have been received and which are still pending. After a sufficient period of time, the device will send a request to retransmit packets that were expected but not received. TCP has a number of parameters that determine how long a client will wait for an expected packet and how many packets must be retransmitted when a packet is lost. Optimization techniques can adjust these parameters to improve performance.
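Some of these parameters can be adjusted per socket. The sketch below, using Python's standard `socket` module, shows two common adjustments: requesting larger send/receive buffers, which helps keep a high-latency link full, and disabling Nagle's algorithm with `TCP_NODELAY` for latency-sensitive small writes. The 256 KB buffer size is an illustrative value, not a recommendation; the kernel may grant a different amount than requested.

```python
import socket

# Create a TCP socket and tune it before connecting.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Request 256 KB buffers; the OS may cap or round the value it grants.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 256 * 1024)

# Send small writes immediately instead of coalescing them (Nagle off).
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Read back what the kernel actually granted.
granted_rcvbuf = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)

sock.close()
```

System-wide retransmission and window behavior is governed by kernel settings (on Linux, the `net.ipv4.tcp_*` sysctls) rather than per-socket options, which is why such tuning is often done at the operating-system level.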
Optimizations may be built into operating systems, such as Windows, and stand-alone programs are also available to help tune TCP parameters. Specialized transfer programs can also be used when large files are moved between devices; these programs may use alternatives to TCP or tune it for file transfer. They are viable solutions when you are transferring between a known set of devices or when the program is readily available to every user who may need it.
Providers with global networks can optimize traffic between data centers. This allows customers to realize some of the benefits of TCP optimization without having to implement them.
Creating agreements between network service providers
At a technical level, data can readily move between networks, but at a business level, agreements between network service providers dictate what data moves between networks. Providers maintain a set of agreements that allow for the exchange of data between their networks, and they sometimes charge each other for this capability. These arrangements, generally known as peering agreements, can impact latency because traffic may be routed over slower, less efficient paths due to restrictions on peering between providers.
About the author:
Dan Sullivan, M.Sc., is an author, systems architect and consultant with more than 20 years of IT experience. He has had engagements in advanced analytics, systems architecture, database design, enterprise security and business intelligence. He has worked in a broad range of industries, including financial services, manufacturing, pharmaceuticals, software development, government, retail and education. Dan has written extensively about topics that range from data warehousing, cloud computing and advanced analytics to security management, collaboration and text mining.
This was first published in June 2013