Latency Considerations

Overview

Latency is the amount of time needed for data to reach its destination on a network. Latency is a major design consideration when deploying mission-critical databases, and is a particularly important factor when using a cloud database platform.

Understanding Latency

No network today is faster than the speed of light, 186,282 miles per second (299,792 km/s) in a vacuum. Latency is usually expressed in milliseconds (thousandths of a second); light travels about 186 miles (300 km) per millisecond. Latency is typically measured as round trip time (RTT), which can be no better than the time for light to travel to a point and back.

For example: London and Paris are approximately 217 miles apart (350 km) as the crow flies. Were we living in a vacuum, it would take about 1.2 milliseconds (0.0011649 s) for light to travel from London to Paris, and another 1.2 milliseconds for the light to return. The fastest possible round trip is about 2.3 ms. In practice, latency will be higher.
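The arithmetic above can be sketched as a small calculation. This is a minimal illustration of the physical lower bound only; real RTT is always higher because paths are indirect and signals propagate slower than light in a vacuum.

```python
# Theoretical minimum round-trip time between two points, assuming the
# signal travels the straight-line ("as the crow flies") distance at the
# speed of light in a vacuum. Real network latency will be higher.

SPEED_OF_LIGHT_KM_PER_S = 299_792  # speed of light in a vacuum

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time, in milliseconds."""
    one_way_s = distance_km / SPEED_OF_LIGHT_KM_PER_S
    return 2 * one_way_s * 1000  # there and back, converted to ms

print(round(min_rtt_ms(350), 2))  # London to Paris, ~350 km -> 2.33
```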

Things that Influence Latency

  • Network processing overhead (present in any network) will increase round trip time.

  • Network processing overhead is generally lowest when traffic stays on a single provider's network, e.g., Google's.

  • Additional network processing overhead occurs when traffic transits between two providers, whether over a direct interconnect (typically better) or over the public internet (typically worse).

  • Network paths are typically not direct between two points; time for data to transit between two points is based on the actual path, not "as the crow flies".

  • Latency between two points can be asymmetrical (inbound and outbound delays can be different).

  • Latency can vary over time based on network conditions, and server conditions.

  • Latency is independent of database connection and query times, which may also be a factor in application performance. Query optimization and connection pooling can reduce those performance impacts.
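One rough way to observe several of the effects above (variation over time, and the difference between network latency and query time) is to time TCP connection establishment, which takes approximately one round trip. The sketch below uses only the standard library; the host and port shown in the comments are placeholders.

```python
# Sketch: time TCP connection establishment to a host as a rough proxy
# for network round-trip latency. A TCP handshake costs about one RTT,
# so this excludes database authentication and query time.
import socket
import time

def tcp_connect_ms(host: str, port: int, timeout: float = 5.0) -> float:
    """Return the time, in milliseconds, to open a TCP connection."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close immediately
    return (time.perf_counter() - start) * 1000

# Take several samples, since latency varies with network conditions
# (host and port are placeholders for your database endpoint):
# samples = [tcp_connect_ms("db.example.com", 3306) for _ in range(5)]
# print(f"min {min(samples):.1f} ms, max {max(samples):.1f} ms")
```

Because latency can be asymmetrical and can vary over time, a single sample is rarely representative; collecting minimum, median, and maximum over many samples gives a clearer picture.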

Solutions

Cloud provider networks are optimized for high availability and low latency.

To minimize the impacts of network latency on your application, host your application servers close to your database, and in a geographic location appropriate for your user base. MariaDB SkySQL is available from a range of AWS and GCP regions worldwide, where application servers can also be deployed.

When application servers and AWS-based database servers cannot be co-homed in the same AWS region, consider using AWS PrivateLink to reduce network processing overhead. AWS PrivateLink allows connections between application and database via internal IP addresses, without transiting the public internet.

When application servers and GCP-based database servers cannot be co-homed in the same GCP region, consider using VPC Peering to reduce network processing overhead. VPC Peering allows connections between application and database via internal IP addresses, without transiting the public internet.

To improve communication latency, at the cost of reduced fault tolerance, the Single Zone Deployment option is available for Distributed Transactions services.

Testing is recommended, both at the network level, and through application instrumentation.