Latency, or round-trip delay, is the time it takes for data to travel from a source to a destination and back, measured and reported in milliseconds. Many factors affect your network latency. One is the length of your fiber: the longer the fiber, the longer it takes to transmit information through it. The latency for fiber from Chicago to New York should be roughly 20 ms. Interestingly, NEF ran an audit for one of our clients and found that a comparable network yielded 45 ms due to an indirect fiber route.
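The relationship between fiber distance and round-trip delay is simple to estimate. This is a minimal sketch, assuming light travels at roughly 200,000 km/s in fiber (about two-thirds of its speed in a vacuum); the route distances shown are illustrative assumptions, not surveyed values:

```python
# Round-trip propagation delay over optical fiber (sketch; ignores
# equipment and processing delays, which add to the totals below).
FIBER_SPEED_KM_PER_S = 200_000  # ~2/3 the speed of light in a vacuum

def round_trip_ms(route_km: float) -> float:
    """Round-trip propagation delay in milliseconds for a fiber route."""
    return 2 * route_km / FIBER_SPEED_KM_PER_S * 1000

# An assumed ~2,000 km fiber route gives the ~20 ms expected for
# Chicago-New York; an indirect ~4,500 km route would explain 45 ms.
print(round_trip_ms(2_000))  # 20.0
print(round_trip_ms(4_500))  # 45.0
```

The takeaway is that extra route miles translate directly into extra milliseconds, which is why an indirect fiber path can more than double the expected latency.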

Another key factor impacting your latency is the network architecture. For example, point-to-point networks with a service-level agreement deliver more consistent latency than MPLS networks with variable paths. Equipment and the number of nodes also have an impact, because the amplifiers and transponders used over long distances add latency. Data processing affects latency as well, but that varies widely by application and user equipment, so make sure the network itself is squared away first.

Latency is becoming increasingly important as data demands continue to rise. From 4G wireless connections to streaming online content, businesses and consumers have higher expectations than ever for rapid data delivery. Real-time applications such as on-demand video and online gaming require near-live responsiveness to survive, and more people are using them than ever. A recent study showed that Netflix now accounts for nearly 33% of downstream traffic in the U.S. between 9 p.m. and midnight. Latency also has a direct revenue impact on e-commerce sites: Amazon found that a mere 0.1 seconds of added latency cost roughly 1% of revenue.

Many mission-critical business applications (the revenue engines of most corporations) depend on high speeds and low latency to function across large organizations and multiple locations. Virtually any company with a website is impacted by latency. Equation Research found that if a website doesn’t load quickly, 78% of visitors will leave the site; 88% won’t come back at all; and 47% blame the website owner.

Securing low latency for a data center depends on several factors: the specific applications and needs of the data center; proximity to key points of interest; the ability to optimize data center interconnects; and the diversity of carrier options.

Finding the “right” latency solution takes time. Your discovery process should include determining business application needs, your customers’ maximum latency tolerances, a granular, geographic traffic assessment, and access methods. You’ll need to identify candidate locations and assess factors such as the available carrier mix. You should also weigh the disaster risk and connectivity costs associated with each data center.

NEF has a combined century of experience in the telecom/colo space, with expertise in network and data center planning and implementation. Let us work to find the low-latency solution that’s right for your business.
