According to a recent report on Cloud trends, analyst house Gartner recommends that network managers test or emulate the performance of Cloud-based applications in every geography where they plan to deploy. Organisations therefore need to adopt a Cloud computing service that addresses the specific issues around latency, namely distance, congestion and monitoring.
Despite all the hype around the move to the Cloud over the past few years, latency remains the one issue that many organisations are still failing to get to grips with. This is somewhat surprising, as file access and application performance – or in short, how quickly information is delivered to the end user or how long it takes to download or open a file – is often high on an end user’s priority list. High latency has a range of causes. A common one is the physical distance between the office and the data centre. Others include congestion on the network as well as packet loss and windowing. While many organisations have explored the use of various application performance tools that are currently available on the market, it is often the physical issue of distance that needs addressing.
For example, many organisations face the challenge of having their office in a different country from their data centre. If an organisation has an office in the UK and a data centre in Spain, the distance between the two can, for some applications, lead to slow file transfers and inefficient application performance. Selecting a data centre in the same region as the offices can be critical for some applications, simply to avoid slow file downloads and poor application performance. Clearly, organisations should look to service providers that can supply multiple data centres across Europe to meet these needs.
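As a rough illustration of why distance matters, the minimum round-trip time imposed by geography alone can be estimated from the speed of light in fibre. This is a simplified sketch: the ~200,000 km/s figure is an approximation, the distances are illustrative, and real routes add routing and equipment delays on top.

```python
# Rough best-case round-trip time (RTT) imposed by physical distance alone.
# Assumes light in fibre travels at roughly 200,000 km/s and ignores
# routing overhead, queuing and equipment delays.

SPEED_IN_FIBRE_KM_PER_S = 200_000

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds for a one-way distance."""
    return (2 * distance_km / SPEED_IN_FIBRE_KM_PER_S) * 1000

# A UK office reaching a data centre in Spain (illustratively ~1,500 km away)
print(f"UK -> Spain: {min_rtt_ms(1500):.1f} ms minimum RTT")
# A data centre in the same region (illustratively ~100 km away)
print(f"Same region: {min_rtt_ms(100):.1f} ms minimum RTT")
```

Even before congestion or packet loss enter the picture, every file operation pays this physical round-trip cost, often several times per transaction.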
When it comes to tackling high latency, it is not just the physical passage of light that causes issues but also the way signals are sent through networks. The key factors that conspire to create high latency are congestion, packet loss and windowing. Congestion affects latency because, when the amount of data to be sent down a connection is too large, the network node equipment holds some of it back in a queue while it sends the data it received first. Windowing arises from the way signals confirm that they have been correctly received: the receiver has to acknowledge the first packet before the sender will send another, which adds delay and congestion for every packet, and lost packets make this worse because replacements have to be sent. This queuing, confirming receipt and resending obviously slows the journey from A to B.
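The windowing effect described above can be sketched numerically: if a sender must wait for an acknowledgement after each window of data, throughput is capped at roughly the window size divided by the round-trip time, regardless of how fast the link is. A minimal sketch, where the window size and RTT values are illustrative assumptions:

```python
def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on throughput (in Mbps) when each window of data must be
    acknowledged before the next is sent: window size / round-trip time."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

# A classic 64 KB window over an illustrative 30 ms round trip:
# the link is capped at about 17.5 Mbps, however fast the pipe.
print(f"{max_throughput_mbps(65_536, 30):.1f} Mbps")

# Double the distance (hence RTT) and the ceiling halves.
print(f"{max_throughput_mbps(65_536, 60):.1f} Mbps")
```

This is why the same application can feel fast against a nearby data centre and sluggish against a distant one: the longer the round trip, the lower the ceiling, and every lost packet adds at least one more round trip for the retransmission.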
This often becomes the source of poor user experience after traditional desktop applications move into a Cloud environment, because the network connection to the Cloud service does not prioritise traffic appropriately. Often these applications, such as SharePoint, are critical to the day-to-day running of a business, and users will not tolerate ten- to twenty-second delays each time they open, save or close a document. Yet they experience these delays because of congestion caused by applications such as Exchange, from which users are perhaps prepared to tolerate slightly slower response times.
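The idea of prioritising latency-sensitive traffic over bulk traffic can be sketched with a simple priority queue. The application names, priority values and packet payloads below are illustrative assumptions, not any particular vendor's scheme:

```python
import heapq

# Lower number = higher priority: interactive document traffic is served
# before bulk mail sync or recreational video. Values are illustrative.
PRIORITIES = {"sharepoint": 0, "exchange": 1, "video": 2}

def dispatch(packets):
    """Drain packets in priority order, preserving arrival order on ties."""
    queue = []
    for seq, (app, payload) in enumerate(packets):
        # seq breaks ties so equal-priority packets keep arrival order
        heapq.heappush(queue, (PRIORITIES[app], seq, app, payload))
    order = []
    while queue:
        _, _, app, payload = heapq.heappop(queue)
        order.append((app, payload))
    return order

# Packets arrive interleaved; the document save jumps the queue.
for app, payload in dispatch([
    ("exchange", "mail sync chunk"),
    ("video", "stream segment"),
    ("sharepoint", "document save"),
]):
    print(app, payload)
```

Without some scheme of this kind at the relevant bottleneck, a burst of mail sync or video traffic queues ahead of the document save, and the user feels the delay.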
If you combine the competition for bandwidth between business applications with the increase in recreational video streaming in the workplace, for example YouTube and BBC iPlayer, the challenge for IT teams managing application performance for end users becomes much harder. Traditional methods of WAN acceleration or network Quality of Service are not well suited to companies trying to prioritise Cloud-based services from public providers. As a result, it is becoming ever more important for enterprises, when entering into contracts, to consider Cloud service providers who offer application performance guarantees as part of their service, guarantees that can be measured and backed by stringent service level agreements (SLAs).
Once the issues of distance and latency have been addressed, service provider monitoring of end user activity in the Cloud can help minimise bottlenecks and capacity shortages. For the IT team managing the performance and capacity of Cloud-provided services, more sophisticated monitoring tools can provide continuous analysis of service performance, ensuring that the business receives the promised benefits of the Cloud.
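A simplified sketch of the kind of continuous monitoring described above: periodically timing a request to a service and flagging samples that breach a latency threshold. The threshold value and the probe mechanism here are illustrative assumptions, not a specific monitoring product's API:

```python
import time

LATENCY_THRESHOLD_MS = 200  # illustrative SLA threshold, in milliseconds

def probe(send_request) -> float:
    """Time a single request to a service and return its latency in ms.
    `send_request` is any callable that performs the round trip."""
    start = time.perf_counter()
    send_request()
    return (time.perf_counter() - start) * 1000

def check_samples(samples_ms, threshold_ms=LATENCY_THRESHOLD_MS):
    """Return the latency samples that breached the threshold, for alerting."""
    return [s for s in samples_ms if s > threshold_ms]

# Example: three probe results, one of which breaches the 200 ms threshold
breaches = check_samples([45.2, 310.7, 88.0])
print(breaches)  # [310.7]
```

In practice a provider would run probes like this from each user region and feed the breaches into alerting and SLA reporting, which is precisely the granular visibility an IT team needs to hold the provider to its guarantees.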
The good news is that many businesses are starting to make strides in overcoming distance, congestion and the need for more granular monitoring. At the end of the day, the full benefits of the Cloud come from having a flexible, elastic computing environment that guarantees security as well as performance. This is now being achieved by companies using service providers to supply a full enterprise-quality Cloud infrastructure service, with performance measures that include the network.