At the beginning of this year, few anticipated that shared bikes would spread all over the world, yet by the end of the year they are already everywhere. Bike sharing is one instance of the Internet of Things (IoT), and many other applications testify to the growth of network-dependent technologies, such as self-driving cars and smart phones/tablets. They all call for high bandwidth and low latency. But the old network infrastructure of data centers is not up to such demands, especially for data centers that must handle a huge amount of traffic. So some data centers are upgrading from 10G networking to 40G networking with 40 Gigabit Ethernet switches, of which the 32-port 40G switch is a typical choice.
Limits of Old Data Center Network Infrastructure
What are the limits of the old data center network infrastructure? In the past, the major traffic in data centers flowed in the north-south direction, so it was enough to use 10G uplinks between the Top of Rack (ToR) switches and the aggregation switches. But as new applications and services rapidly emerge, the traffic between end users and the data center is increasing, and the east-west traffic within the data center is increasing as well. Congestion, poor scalability, and high latency occur when data centers keep using the traditional network infrastructure.
The New Fabric for Data Center 40G Networking With 32-Port 40G Switch
To meet the requirements of ever-increasing network applications and services, data centers are constantly seeking better solutions. The primary problems concern bandwidth and latency. One important step is therefore to upgrade from 10G networking to 40G networking. Since the prices of 40G switches and 40G accessories have dropped considerably, it is now feasible to deploy 32-port 40G switches in the aggregation layer. To reduce latency, it is wise to adopt the new spine-leaf topology in place of the old one.
A network based on the spine-leaf topology is considered highly scalable and redundant. In a spine-leaf topology, each hypervisor in a rack connects to its leaf switch, and each leaf switch is connected to every spine switch, which provides a large amount of bandwidth and a high level of redundancy. In a 40G network, this means every connection, from hypervisor to leaf switch and from leaf switch to spine switch, runs at the 40G data rate. In a spine-leaf topology, the leaf switches act as the ToR switches and the spine switches act as the aggregation switches.
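The full-mesh property described above can be stated concretely: with L leaf switches and S spine switches, the fabric contains L × S leaf-spine links, and any east-west path between servers on different racks crosses exactly two switch hops (leaf, then spine, then leaf), which is what gives the topology its predictable latency. A minimal sketch (generic helper functions, not tied to any particular switch model):

```python
# Minimal sketch of spine-leaf connectivity. Every leaf links to every
# spine, so any east-west path is leaf -> spine -> leaf: a uniform two hops.

def leaf_spine_links(num_leaves: int, num_spines: int) -> int:
    """Total number of links in the full leaf-spine mesh."""
    return num_leaves * num_spines

def east_west_hops(src_leaf: int, dst_leaf: int) -> int:
    """Switch hops between servers: 1 if they share a leaf, else 2."""
    return 1 if src_leaf == dst_leaf else 2

print(leaf_spine_links(24, 4))   # 96 links in a 24-leaf, 4-spine pod
print(east_west_hops(0, 17))     # 2 hops: leaf -> spine -> leaf
```

Because every leaf-to-leaf path has the same length, traffic can be spread across all spines with equal-cost multipath routing, which is the main reason the design scales so evenly.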
One principle in a spine-leaf topology is that the number of leaf switches is determined by the number of ports on the spine switch, while the number of spine switches equals the number of uplink connections used on each leaf switch. A 32-port 40G switch like the FS.COM N8000-32Q provides a maximum of 32 40G ports, but some of them must be reserved for uplinks to the core switches. In this case, we use 24 of the 40G ports for connectivity to the leaf switches, meaning there are 24 leaf switches in each pod. The leaf switch we use is the FS.COM S5850-48S6Q, a 48-port 10Gb switch with 6 40G uplink ports. Each leaf switch has 4 40G uplinks to the spine switches, so there are 4 spine switches in each pod. Each spine switch then connects to the two core switches.
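The port math above can be checked with a short calculation. The figures below come straight from the example in this article (the N8000-32Q spine and S5850-48S6Q leaf as configured here); the derived oversubscription ratio is an assumption that every leaf's 48 10G ports face servers:

```python
# Pod sizing for the example fabric described above.
SPINE_PORTS_TO_LEAVES = 24    # 24 of the N8000-32Q's 32 x 40G ports face leaves
LEAF_DOWNLINKS_10G = 48       # S5850-48S6Q: 48 x 10G server-facing ports
LEAF_UPLINKS_40G_USED = 4     # 4 of its 6 x 40G uplinks go to spines

leaves_per_pod = SPINE_PORTS_TO_LEAVES                      # 24 leaves
spines_per_pod = LEAF_UPLINKS_40G_USED                      # 4 spines
server_ports_per_pod = leaves_per_pod * LEAF_DOWNLINKS_10G  # 1152 x 10G ports

# Oversubscription at each leaf: server-facing vs. spine-facing capacity.
downstream_gbps = LEAF_DOWNLINKS_10G * 10     # 480 Gbit/s toward servers
upstream_gbps = LEAF_UPLINKS_40G_USED * 40    # 160 Gbit/s toward spines
oversubscription = downstream_gbps / upstream_gbps  # 3.0, i.e. 3:1

print(leaves_per_pod, spines_per_pod, server_ports_per_pod, oversubscription)
```

So a single pod built this way offers 1152 10G server ports at a 3:1 oversubscription ratio per leaf, a common trade-off between cost and east-west bandwidth.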
This new data center fabric built with 32-port 40G switches improves both bandwidth and latency, but it is not perfect. Every network switch has limited memory for MAC address tables, ARP entries, routing information, and so on. The core switch in particular can store only a limited number of ARP entries compared with the large number it has to handle.
Therefore, there is a need to split the network into zones. Each zone has its own core switches, and each pod has its own spine switches. Different zones are connected by edge routers. With this design, the network can be expanded horizontally as long as ports remain available on the edge routers.
The transformation of data centers is driven mainly by user demand. The increasing number of networking applications and the growing traffic push data centers to evolve from the old fabric to a new one. Hence some data centers have moved from 10G networking to 40G networking, using a 40 Gigabit Ethernet switch such as a 32-port 40G switch as the spine switch, and a better-optimized design is adopted to ensure the desired performance of the new 40G network.