
Things You Should Know About InfiniBand


In today’s high-speed information era, people increasingly demand high bandwidth and low latency, the two most common parameters used to compare link performance. To meet these requirements, InfiniBand is designed as one of the world’s fastest interconnects, supporting speeds of up to 56Gb/s and extremely low application latency. So what exactly is InfiniBand? You may find the answer in this post.

Introduction to InfiniBand

InfiniBand (IB) is a computer-networking communication standard used in high-performance computing that features very high throughput and very low latency. It is used for data interconnect both among and within computers. InfiniBand is also utilized as either a direct or switched interconnect between servers and storage systems, as well as between storage systems.

Basic InfiniBand Structure

InfiniBand creates a private, protected channel directly between the nodes via switches, and facilitates data and message movement without CPU involvement with Remote Direct Memory Access (RDMA) and send/receive offloads that are managed and performed by InfiniBand adapters. The adapters are connected on one end to the CPU over a PCI Express interface and to the InfiniBand subnet through InfiniBand network ports on the other. This provides distinct advantages over other network communications protocols, including higher bandwidth, lower latency, and enhanced scalability.


How InfiniBand Works

Instead of sending data in parallel, which is what PCI does, InfiniBand sends data in serial and can carry multiple channels of data at the same time in a multiplexed signal. The principles of InfiniBand mirror those of mainframe computer systems, which are inherently channel-based. InfiniBand channels are created by attaching host channel adapters (HCAs) and target channel adapters (TCAs) through InfiniBand switches. HCAs are I/O engines located within a server. TCAs enable remote storage and network connectivity into the InfiniBand interconnect infrastructure, called a fabric. InfiniBand architecture is capable of supporting tens of thousands of nodes in a single subnet.


Features and Advantages of InfiniBand

InfiniBand has some primary advantages over other interconnect technologies.

  • Higher Bandwidth—InfiniBand consistently supports the highest end-to-end bandwidth, both toward the server and toward the storage connection.
  • Lower latency—RDMA zero-copy networking reduces OS overhead so data can move through the network quickly.


  • Enhanced scalability—InfiniBand can accommodate flat networks of around 40,000 nodes in a single subnet and up to 2^128 nodes (virtually an unlimited number) in a global network, based on the same switch components simply by adding additional switches.
  • Higher CPU efficiency—With data movement offloads the CPU can spend more compute cycles on its applications, which will reduce run time and increase the number of jobs per day.
  • Reduced management overhead—InfiniBand switches can run in Software-Defined Networking (SDN) mode, allowing them to run as part of the fabric without CPU management.
  • Simplicity—InfiniBand is exceedingly easy to install when building a simple fat-tree cluster, as opposed to Ethernet which requires knowledge of various advanced protocols to build an IT cluster.
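
The scalability figures above follow directly from InfiniBand's address sizes: a 16-bit local identifier (LID) per subnet and a 128-bit global identifier (GID). A quick back-of-the-envelope check (a sketch, not vendor tooling):

```python
# Back-of-the-envelope check of the scalability figures above.
# InfiniBand addresses nodes within a subnet with a 16-bit LID and
# globally with a 128-bit GID.

unicast_lids = 0xBFFF      # LIDs 0x0001-0xBFFF are unicast; higher values are multicast
global_nodes = 2 ** 128    # size of the GID address space

print(f"unicast LIDs per subnet: {unicast_lids}")      # 49151, i.e. "around 40,000 nodes"
print(f"global address space:    {global_nodes:.3e}")  # effectively unlimited
```
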

Summary

InfiniBand is a high-performance, multi-purpose network architecture based on a switched fabric. It has become a leading standard in high-performance computing: over 200 of the world’s fastest 500 supercomputers use InfiniBand. If you are planning to deploy InfiniBand, feel free to ask FS.COM for help. We have 40G QSFP+ modules compliant with the InfiniBand standard and various optical fiber cables for you to choose from. As a company specialized in optical communications, FS.COM offers customized network solutions for each customer. You will find the best solution here.

Why Leaf-Spine Architecture and How to Design It?


With the demand for higher bandwidth, faster speeds and more demanding applications, equipment architecture in existing data centers has shifted from the traditional 3-tier model to the leaf-spine architecture. 3-tier architecture vs. leaf-spine architecture: which is better? What should be considered when deploying one network architecture over the other?

3-Tier Architecture vs. Leaf-Spine Architecture

The traditional 3-tier architecture model is designed for use in general networks, usually segmented into pods that constrain the location of devices such as virtual servers. 3-tier architecture consists of core switches, aggregation/distribution switches and access switches. These devices are interconnected by redundant pathways, which can create loops in the network. As part of the design, a protocol (Spanning Tree) that prevents looped paths is implemented. However, doing so deactivates all but the primary route; a backup path is only brought up and utilized when the active path experiences an outage. The 3-tier architecture comes with a number of disadvantages, such as higher latency and higher energy requirements.

Leaf-spine architecture, also called fat-tree architecture, is one of the emerging switch architectures that are quickly replacing traditional solutions. It features multiple connections between interconnection switches (spine switches) and access switches (leaf switches) to support high-performance computer clustering. The role of the leaf is to provide connectivity to the endpoints in the network, including compute servers and storage devices or any other networking endpoints—physical or virtual, while the role of the spine is to provide interconnectivity between the leafs. The requirements applying to the leaf-spine topology include the following:

  • Each leaf connects to all spines in the network.
  • The spines are not interconnected with each other.
  • The leafs are not interconnected with each other for data-plane purposes. (The leafs may be interconnected for control-plane operations such as forming a server-facing vLAG.)
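
The three requirements above can be captured in a few lines of code. The sketch below (hypothetical helper names, not any vendor's tool) builds a leaf-spine fabric as a set of links and asserts each rule:

```python
# Build a leaf-spine fabric and verify the three topology requirements.

def build_leaf_spine(num_leafs, num_spines):
    """Every leaf connects to every spine; no leaf-leaf or spine-spine links."""
    leafs = [f"leaf{i}" for i in range(num_leafs)]
    spines = [f"spine{j}" for j in range(num_spines)]
    links = {(leaf, spine) for leaf in leafs for spine in spines}
    return leafs, spines, links

def check_topology(leafs, spines, links):
    # 1. Each leaf connects to all spines.
    assert all((leaf, spine) in links for leaf in leafs for spine in spines)
    # 2. The spines are not interconnected with each other.
    assert not any(a in spines and b in spines for a, b in links)
    # 3. The leafs are not interconnected for data-plane purposes.
    assert not any(a in leafs and b in leafs for a, b in links)
    return True

leafs, spines, links = build_leaf_spine(num_leafs=4, num_spines=2)
print(check_topology(leafs, spines, links))  # True
print(len(links))                            # 4 leafs x 2 spines = 8 links
```
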


Compared to the traditional 3-tier architecture, the leaf-spine design drastically simplifies cabling needs, especially when looking at fiber optic connectivity. But some design considerations for leaf-spine architecture should be kept in mind, which will be introduced in the next part.

Design Considerations for Leaf-Spine Architecture

Oversubscription Ratios — Oversubscription is the ratio of contention should all devices send traffic at the same time. It can be measured in a north/south direction (traffic entering/leaving a data center) as well as east/west (traffic between devices in the data center). Modern network designs have oversubscription ratios of 3:1 or less, measured as the ratio of downlink ports (to servers/storage) to uplink ports (to spine switches). For example, a 64-port leaf switch would allocate 48 ports down and 16 ports up.
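
The oversubscription arithmetic above can be sketched as a one-line calculation (an illustrative helper; the port speeds are assumptions):

```python
# Oversubscription = downlink bandwidth (to servers/storage)
#                  / uplink bandwidth (to spine switches).

def oversubscription(down_ports, up_ports, down_gbps=10, up_gbps=10):
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# The 64-port leaf example from the text: 48 ports down, 16 ports up.
print(oversubscription(48, 16))          # 3.0, i.e. a 3:1 ratio
# With 40G uplinks the same port split drops below 1:1 (non-blocking):
print(oversubscription(48, 16, 10, 40))  # 0.75
```
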

Leaf and Spine Scale — As the endpoints in the network connect only to the leaf switches, the number of leaf switches in the network depends on the number of interfaces required to connect all the endpoints. The port count requirement should also account for multihomed endpoints. Because each leaf switch connects to all spines, the port density on the spine switch determines the maximum number of leaf switches in the topology. A higher oversubscription ratio at the leafs reduces the leaf scale requirements, as well.

The number of spine switches in the network is governed by a combination of the throughput required between the leaf switches, the number of redundant/ECMP paths between the leafs, and the port density in the spine switches. Higher throughput in the uplinks from the leaf switches to the spine switches can be achieved by increasing the number of spine switches or bundling the uplinks together in port-channel interfaces between the leafs and the spines.
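
The sizing arithmetic of the two paragraphs above can be sketched as follows (illustrative formulas, not a vendor sizing tool):

```python
# Each leaf consumes one port on every spine, so spine port density
# caps the number of leafs; endpoint capacity follows from that.

def max_leafs(spine_port_density):
    return spine_port_density

def max_endpoints(spine_port_density, leaf_down_ports):
    return max_leafs(spine_port_density) * leaf_down_ports

# e.g. 32-port spines and leafs with 48 downlink ports:
print(max_leafs(32))          # up to 32 leaf switches
print(max_endpoints(32, 48))  # up to 1536 endpoint ports
```
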

10G/40G/100G Uplinks From Leaf to Spine — For a leaf-spine network, the uplinks from leaf to spine are typically 10G or 40G and can migrate over time from a starting point of 10G (N x 10G) to become 40G (N x 40G). An ideal scenario always has the uplinks operating at a faster speed than downlinks, in order to ensure there isn’t any blocking due to micro-bursts of one host bursting at line-rate.

Layer 2 or Layer 3 — Two-tier leaf-spine networks can be built at either layer 2 (VLAN everywhere) or layer 3 (subnets). Layer 2 designs allow the most flexibility allowing VLANs to span everywhere and MAC addresses to migrate anywhere. Layer 3 designs provide the fastest convergence times and the largest scale with fan-out with ECMP (equal cost multi pathing) supporting up to 32 or more active spine switches.
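
In a layer 3 design, ECMP spreads flows across the spines by hashing packet header fields. A toy sketch of hash-based path selection (real switches hash the 5-tuple in hardware; the hash function here is purely illustrative):

```python
# Toy ECMP next-hop selection: hash the flow 5-tuple, pick a spine.
import hashlib

def ecmp_path(src_ip, dst_ip, src_port, dst_port, proto, num_spines):
    five_tuple = f"{src_ip}:{dst_ip}:{src_port}:{dst_port}:{proto}".encode()
    digest = hashlib.md5(five_tuple).digest()
    return int.from_bytes(digest[:4], "big") % num_spines

# Packets of the same flow always pick the same spine (no reordering),
# while different flows spread across all available spines.
spine_a = ecmp_path("10.0.0.1", "10.0.1.1", 40000, 80, "tcp", num_spines=4)
spine_b = ecmp_path("10.0.0.1", "10.0.1.1", 40000, 80, "tcp", num_spines=4)
print(spine_a == spine_b)  # True: path choice is deterministic per flow
```
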

Summary

It is important to understand the 2-tier leaf-spine architecture, as it offers unique benefits over the traditional 3-tier architecture model. With easily adaptable configurations and design, leaf-spine has improved the IT department’s management of oversubscription and scalability. Deploying a leaf-spine network architecture and buying high-performance data center switches are imperative for data center managers, as a leaf-spine environment allows data centers to thrive while accomplishing all the needs and wants of the business.

Proper Cabling Solutions for PoE Network



By running power and data transmission over a single Ethernet cable, PoE (Power over Ethernet) has found success across a variety of applications such as IP surveillance cameras, IP phones and wireless access points. However, without the right cabling and network design in place, PoE can encounter cable heating and connectivity issues that may adversely affect performance. So in this post, some cabling recommendations for PoE will be listed for your reference.


Issues Affecting PoE Performance

Heat generation in cable bundles is one of the biggest issues that affect PoE performance. When power is added to balanced twisted-pair cabling, the copper conductors generate heat and temperatures rise. High temperatures will lead to higher insertion loss, and in turn shorter permissible cable lengths. It can also increase bit error rates, and create higher power costs due to more power dissipated in the cabling.
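
The heat described here is resistive (I²R) loss. A rough sketch of the arithmetic, assuming typical Cat 5e values (24 AWG, about 9.4 Ω per 100 m per conductor, an assumed figure) and an 802.3af current of 350 mA carried over two pairs:

```python
# Rough I^2 * R heat estimate for a PoE run. Power flows out on one pair
# and back on the other, so each of the 4 conductors carries half the
# loop current. Resistance value is an assumed typical figure.

def cable_heat_watts(current_a, ohms_per_100m, length_m, conductors=4):
    r = ohms_per_100m * length_m / 100          # resistance per conductor
    return conductors * (current_a / 2) ** 2 * r

# 350 mA (802.3af) over 100 m of 24 AWG Cat 5e:
print(round(cable_heat_watts(0.35, 9.4, 100), 2))  # ~1.15 W dissipated as heat
```

Note how the loss scales with the square of the current: doubling the delivered power roughly quadruples the heat, which is why higher-power PoE classes make cabling choices more critical.
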

Cabling Recommendations for PoE

Some cabling recommendations for PoE are suggested to help lower cabling temperature.

Use Higher Category Cabling

Higher category-rated cable typically means larger gauge sizes, and as power currents increase, these larger conductors will perform better than smaller cable. Generally, higher category cabling will be necessary to minimize temperature increases while supporting PDs that require more power.

Reduce the Number of Cables per Bundle

If cables are bundled or closely grouped with other cables, cables near the center of the bundle have difficulty radiating heat out into the environment. Therefore, the cables in the middle of the bundle heat up more than those toward the outer layers of the bundle. Separating large cable bundles into smaller bundles or avoiding tight bundles will reduce temperature rise.

Design Pathways to Support Airflow

Enclosed conduit can contribute to heat issues. When possible, use ventilated cable trays for better airflow. Open mesh cable trays and ladder racks will improve heat dissipation and create more opportunities for loosely grouping cables instead of tight bundling.

Cat 5e vs. Cat 6a: Which Is Better for PoE Cabling?

The type of cabling selected can make a big difference in terms of how heat inside the cable is managed, and how it impacts performance. Typically, Cat 5e and Cat 6a cable can be used to support PoE devices. But it’s better to use Cat 6a for PoE cabling.

With its larger-gauge diameter, Cat 6a reduces resistance and keeps power waste to a minimum, showing a lower temperature increase compared to smaller-gauge Cat 5e. This better performance provides additional flexibility, including larger bundle sizes, enclosed installation conditions and higher ambient temperatures. For instance, when comparing 23-gauge and 24-gauge cabling, there is a large variance in how power is handled. As much as 20% of the power through the cable can get “lost” in a 24-gauge Cat 5e cable, leading to inefficiency. By contrast, less power is dissipated in a 23-gauge Cat 6a cable, which means that more of the power being transferred through the cable is actually used, improving energy efficiency and lowering operating costs.
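
A rough sketch of the 23- vs 24-gauge comparison, using assumed typical per-conductor DC resistances (around 9.4 Ω/100 m for 24 AWG Cat 5e and 7.5 Ω/100 m for 23 AWG Cat 6a; check the actual datasheets):

```python
# Fraction of source power dissipated in the cable, for two wire gauges.
# Resistance and voltage figures are assumed typical values.

def loss_fraction(ohms_per_100m, length_m, current_a, source_v=50.0, conductors=4):
    r = ohms_per_100m * length_m / 100           # resistance per conductor
    dissipated = conductors * (current_a / 2) ** 2 * r
    delivered = source_v * current_a             # approximate source power
    return dissipated / delivered

for gauge, r in [("24 AWG Cat 5e", 9.4), ("23 AWG Cat 6a", 7.5)]:
    pct = loss_fraction(r, 100, 0.6) * 100       # PoE+ current of ~600 mA
    print(f"{gauge}: ~{pct:.1f}% of source power lost in the cable")
```

The exact percentages depend on length, current and cable construction, but the larger 23-gauge conductor always loses proportionally less power to heat.
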

FS PoE Switches & Ethernet Cables Solution

FS offers fully managed PoE Gigabit switches, which deliver robust performance and intelligent switching for growing networks. Available with 8, 24, or 48 PoE Gigabit Ethernet ports, the model details of our PoE switches are listed below. Among them, the PS130-8 and PS400-24 are PoE switches, while the PS650-48, PS250-8 and PS650-24 are PoE+ switches. Reliable and economical, our PoE switches are ideal for SME networks and can expand your network more easily than ever.

FS PoE switches specification

Besides PoE, we also have various types of Ethernet cables including Cat 6a, Cat 6, Cat 5e and Cat 7 Ethernet patch cables. Most of them are in large stock and multiple cable colors are available. For more details, please visit www.fs.com.

How to Choose a Switch on a Tight Budget?


A network switch connects multiple devices to a local area network (LAN) and allows the devices to communicate over the network. It improves the efficiency and potential throughput of the network by sending data only to the devices designated to receive it. The right switch should meet the current needs and offer possibilities for future growth. In this article, we try to figure out what to consider when selecting the best switch for your system.


Considerations When Selecting Network Switch

When selecting a network switch, you need to consider various factors, such as routing requirements, port speeds and performance needs. Thorough planning before making the purchase will save you time and money, enabling you to get a switch with all the functionalities you require for today and tomorrow. Here are some metrics of a switch you should value.

Routing Requirements—they dictate what series or line will be explored, which affects pricing significantly. You can choose either dynamic routing (basic RIP, OSPF, BGP) or static routing (static and inter-VLAN routing).

Performance Requirements—evaluate switch performance requirements for speed, position in the network (edge, distribution and core) and redundancy needs (power, fabric and management redundancy).

Manageability—when specifying a brand, model or OS, consider other models and operating systems already in use. These systems are already managed and understood by the technical staff, and finding a comparable platform is helpful.

Port Speeds and Types—the speeds, as well as the number of primary and uplink ports, are important. Fiber optics and transceivers should be selected based on the fiber types, and distances should be specified for each fiber uplink.

Switches With Same Ethernet Ports: How to Make Your Decision?

Then how do you choose between switches with the same ports and speeds? Here we take some 10G switches as examples; each of them features the same 48 1/10G SFP+ ports and 6 40G QSFP+ ports, delivering 1.44Tbps of throughput. The main metrics of these switches are presented as follows:

Model             | FS S5850-48S6Q            | Cisco Nexus 9372PX      | Arista 7050SX-72Q | HPE Altoline 6920
NOS               | Fiberstore OS             | Cisco IOS               | Arista EOS        | HPE Comware
ASIC              | Centec                    | Broadcom Trident2       | Broadcom Trident2 | Broadcom Trident2
CPU               | x86 processor and PowerPC | x86 processor           | x86 processor     | x86 processor
Latency           | 612 ns                    | 1000 ns                 | 550 ns            | 600–720 ns
System Memory     | 1 GB                      | 16 GB                   | 4 GB              | 8 GB
Power Consumption | 150 W / 190 W             | 210 W / 537 W           | 144 W / 261 W     | 315 W (maximum)
Reference Price   | US$4,000.00               | US$9,505.00 ~ $21,318.16 | US$21,408.95     | US$11,209.66 ~ $12,792.00

Given the same port density and capacity, and very similar ASICs and CPUs, the differences lie in latency, power consumption and price.

Latency is a fairly important switch metric and can be a strong determinant of application performance. It defines how quickly bits travel from device to device: the lower the latency, the faster the response. Since it is beneficial to minimize the overall delay in the communication path between communicating pairs or groups of endpoints, customers are inclined to choose switches with low latency. At 612 ns, the S5850-48S6Q offers latency competitive with switches of the same switching capacity, making it an ideal choice.

The power consumption of a switch contributes a considerable fraction of a data center’s energy expenses, so power efficiency is another essential element that customers value. Customers calculating total cost of ownership need to factor in power consumption. The S5850-48S6Q delivers significantly reduced power consumption compared with the three other brands. And that is for an individual switch; when it comes to the many switches in a data center, this can add up to a very large annual OpEx saving.
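
The OpEx argument can be made concrete with a quick sketch (the electricity rate is an assumption; substitute your own):

```python
# Annual energy cost of a switch at a constant draw, assuming an
# illustrative electricity rate of US$0.10/kWh.

def annual_cost_usd(watts, usd_per_kwh=0.10):
    return watts / 1000 * 24 * 365 * usd_per_kwh

fs_w, cisco_w = 150, 210   # typical-draw figures from the table above
saving = annual_cost_usd(cisco_w) - annual_cost_usd(fs_w)
print(f"~${saving:.2f} saved per switch per year")   # ~$52.56
print(f"~${saving * 100:.0f} for a 100-switch fabric")
```
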

Most importantly, the economics of the S5850-48S6Q cannot be beaten. The S5850-48S6Q is equipped with a CPU and chips very similar to those of the big-brand switches, which means that their functionalities can be equally realized by the S5850-48S6Q. For more information on the FS S5850-48S6Q, please visit www.fs.com.

White Box Switch Vs. Bright Box Switch: Who Is the Winner?


The networking industry is making an important move toward open switches, in which the hardware and software are separate entities that can be changed independently of each other. Traditionally, most enterprises have used closed switches in their data centers. However, some companies want to customize the hardware and software to their needs, and recently the media has also been giving attention to do-it-yourself switches. This new trend may be confusing to newcomers who wonder about the differences between white box switches and bright box switches. This article addresses that question and points out the advantages of white box switches.

Overview of White Box Switch and Bright Box Switch

As its name shows, a white box switch differs from a black box switch in that the OS and the hardware are not integrated. In short, white box switches do not focus on brands: users may buy the OS and the hardware from different vendors. Basically, white box switches come in three types. First, a white box switch can be sold as a bare-metal device, with the OS and the hardware completely separated. Second, a white box switch may be bought from a vendor who focuses only on the OS or the hardware, leaving the user multiple choices for the other part. Third, a white box switch can be purchased with the OS pre-installed, just like a branded switch, but labeled with the brand of the buyer’s company or with no brand at all, as the customer requires. Because of this flexibility and cost-efficiency, more and more organizations opt for white box switches, which let them customize the devices to their needs.


White box switching refers to the ability to use generic, off-the-shelf switching and routing hardware, and it is generally used with software-defined networks (SDNs). But how does it differ from a bright box switch? A bright box switch is a branded white box: it is made by an original design manufacturer (ODM), and is often the same switch offered by the ODM as bare metal, but it sports a front bezel with a brand name like Dell or HP. So a bright box switch carries the brand name of a reputed IT company.

White Box Switch Vs. Bright Box Switch: Who Is the Winner?

From the perspective of manufacturing, white box switches and bright box switches can be identical. But the business aspects of white box versus bright box may be very different, so it is important to consider all aspects of the various offerings, such as price, warranty and support, when selecting a white box or bright box solution. In particular, white box switches are the better choice for reducing capital cost: some research indicates that companies transitioning their networks to white box switches have reduced network infrastructure costs by 90% over the past few years. Here are the benefits of white box switches:

Cost

White box switches cost significantly less than equivalent-speed brand-name switches. The key point is the potentially low operating cost: white box switches can support a wide range of IT development tools, which enable customization of the switch infrastructure to the specific needs of the data center environment.

Reliability

From the perspective of reliability, white boxes are equivalent to brand-name systems because they are actually the same hardware. White box switches can be deployed either in the data center or in the access network. Hyperscale data centers can deploy white box switches to reduce capital expenditures and leverage open SDN tools to improve time to deployment and automation.

Features and Capabilities

White boxes are not yet at feature parity with brand-name layer 2/3 switches for uses such as campus switching or aggregation. Vendors to watch include software suppliers Cumulus, Big Switch and Pica8, as well as associated hardware/chip manufacturers like Accton, Quanta, Intel and Broadcom. Significant growth of the white box market has the potential to impact the margins and market share of the incumbent Ethernet switch providers, specifically Cisco (and HP, Juniper, IBM, Dell, Brocade).

Summary

White box switches have more advantages than bright box switches and will prove their CAPEX and OPEX benefits in the long term; they will expand to enterprise buyers and into access networks in the future. In addition, administrators can select network hardware and software platforms independently to make the best choices for each scenario.