Abstract
According to current usage patterns, research trends, and recent industry reports, the performance of the wide-area networks interconnecting geographically distributed cloud nodes (i.e. inter-datacenter networks) is attracting increasing interest. In this paper we leverage only active measurement approaches (thus we do not rely on information restricted to providers) and propose an in-depth analysis of these infrastructures for the two leading public-cloud providers: Amazon Web Services and Microsoft Azure. Our study assesses the performance of these networks as a function of the several configuration factors under the control of the customer and highlights specific cases of particular interest. The analysis of these cases and of their root causes, also in relation to service fees, provides insights into their impact on both the Quality of Service perceived by cloud customers and the outcomes of studies neglecting them.
Our results show that Azure's inter-datacenter infrastructure performs better than Amazon's in terms of throughput (+56%, on average). On the other hand, the performance of the two providers is comparable in terms of latency, with the exception of a few specific cases. Moreover, some of the configuration factors cloud customers can leverage (such as larger, more expensive VM sizes, advertised to have better network performance) may have no effect on the inter-datacenter network performance actually perceived. Counterintuitively, lower performance may even be associated with higher costs for the customer. Experimental evidence shows that public-cloud providers also rely on external network providers for some geographical regions, which is the cause of lower performance and higher costs. A comparison with previous works shows that TCP throughput has not improved recently, while we found evidence of higher link capacities.
1. Introduction
Enterprise and government organizations increasingly leverage cloud solutions to supply services across the Internet, taking advantage of the ability to scale resources on demand and experiencing unprecedented opportunities in terms of ease of use, reduced costs, and higher reliability [1]. An increasing number of services and applications is now delivered through cloud-based infrastructures, and a large number of companies increasingly depend on the cloud for mission-critical workloads. For final consumers, this is reflected in ubiquitous access from multiple devices to content and services, delivered almost anywhere users are located.
6. Conclusion
The performance of the network between geographically distributed cloud datacenters is attracting increasing interest, according to recent cloud traffic forecasts, to the cutting-edge solutions and technologies deployed by cloud providers, and to recent research trends. In this paper we have first provided a performance assessment in terms of throughput and latency for the two leading public-cloud providers: Amazon and Azure. Then we have investigated the most interesting outcomes in depth. Experimental results have shown that Azure performs better in terms of maximum achievable throughput (+56%, on average), for slightly higher costs. Moreover, evidence revealing the deployment of high-performance infrastructures among cloud datacenters has been found for both providers: both are able to deliver (UDP) traffic between two geographically distributed sites at up to 800 Mbps.
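As an illustration of how a throughput figure such as the 800 Mbps above can be derived from an active probe, the sketch below parses the machine-readable report produced by a common measurement tool. This is a minimal sketch, not the paper's actual methodology: it assumes an iperf3-style JSON report exposing an `end.sum.bits_per_second` field, and the sample report string is a hypothetical, trimmed-down example.

```python
import json

def throughput_mbps(report_json: str) -> float:
    """Extract the mean throughput in Mbps from an iperf3-style
    JSON report (assumed to expose end.sum.bits_per_second)."""
    report = json.loads(report_json)
    return report["end"]["sum"]["bits_per_second"] / 1e6

# Hypothetical report, trimmed to the single field used above.
sample = '{"end": {"sum": {"bits_per_second": 800000000.0}}}'
print(throughput_mbps(sample))  # 800.0
```

In practice such a report would come from a sender VM in one region probing a receiver VM in another region (e.g. a UDP test at a fixed offered load), with the parsing step aggregating results across repeated runs.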