Author: Aria Zhu

Things You Should Know About InfiniBand

In today’s high-speed information era, people expect ever more of the Internet in terms of high bandwidth and low latency, the two parameters most commonly used to compare link performance. To meet these demands, InfiniBand was designed as one of the world’s fastest interconnects, supporting up to 56Gb/s and extremely low application latency. So what exactly is InfiniBand? You may find the answer in this post.

Introduction to InfiniBand

InfiniBand (IB) is a computer-networking communication standard used in high-performance computing that features very high throughput and very low latency. It is used for data interconnect both among and within computers. InfiniBand is also utilized as either a direct, or switched interconnect between servers and storage systems, as well as an interconnect between storage systems.

Basic InfiniBand Structure

InfiniBand creates a private, protected channel directly between nodes via switches, and moves data and messages without CPU involvement, using Remote Direct Memory Access (RDMA) and send/receive offloads that are managed and performed by InfiniBand adapters. The adapters connect to the CPU over a PCI Express interface on one end and to the InfiniBand subnet through InfiniBand network ports on the other. This provides distinct advantages over other network communication protocols, including higher bandwidth, lower latency, and enhanced scalability.

Basic InfiniBand Structure

How InfiniBand Works

Instead of sending data in parallel, as PCI does, InfiniBand sends data serially and can carry multiple channels of data at the same time in a multiplexed signal. The principles of InfiniBand mirror those of mainframe computer systems, which are inherently channel-based. InfiniBand channels are created by attaching host channel adapters (HCAs) and target channel adapters (TCAs) through InfiniBand switches. HCAs are I/O engines located within a server. TCAs enable remote storage and network connectivity into the InfiniBand interconnect infrastructure, called a fabric. The InfiniBand architecture is capable of supporting tens of thousands of nodes in a single subnet.

InfiniBand architecture

Features and Advantages of InfiniBand

InfiniBand has some primary advantages over other interconnect technologies.

  • Higher Bandwidth—InfiniBand consistently delivers the highest end-to-end bandwidth, toward both the server and the storage connection.
  • Lower latency—RDMA zero-copy networking reduces OS overhead so data can move through the network quickly.

bandwidth and latency

  • Enhanced scalability—InfiniBand can accommodate flat networks of around 40,000 nodes in a single subnet and up to 2^128 nodes (virtually an unlimited number) in a global network, based on the same switch components simply by adding additional switches.
  • Higher CPU efficiency—With data movement offloads the CPU can spend more compute cycles on its applications, which will reduce run time and increase the number of jobs per day.
  • Reduced management overhead—InfiniBand switches can run in Software-Defined Networking (SDN) mode, allowing them to run as part of the fabric without CPU management.
  • Simplicity—InfiniBand is exceedingly easy to install when building a simple fat-tree cluster, as opposed to Ethernet which requires knowledge of various advanced protocols to build an IT cluster.
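To put the bandwidth advantage in perspective, a quick back-of-envelope script shows how long a 1 GiB message takes at different link speeds. This is wire rate only, ignoring protocol and encoding overhead, and the comparison rates are illustrative:

```python
# Back-of-envelope wire-rate transfer times; real throughput is lower
# once protocol overhead and encoding are included.
def transfer_time_ms(num_bytes, gbps):
    """Time to move num_bytes at a given line rate, in milliseconds."""
    return num_bytes * 8 / (gbps * 1e9) * 1000

one_gib = 1024 ** 3
for rate in (10, 40, 56):  # 10G Ethernet, 40G Ethernet, FDR InfiniBand
    print(f"{rate} Gb/s: {transfer_time_ms(one_gib, rate):.1f} ms")
```

At 56 Gb/s the transfer finishes in well under a fifth of the time it takes at 10 Gb/s, which is the point of the bandwidth comparison above.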

InfiniBand is a high-performance, multi-purpose network architecture based on a switched fabric. It has become a leading standard in high-performance computing: over 200 of the world’s 500 fastest supercomputers use InfiniBand. If you are planning to deploy InfiniBand, feel free to ask FS.COM for help. We have 40G QSFP+ modules compliant with the InfiniBand standard and various optical fiber cables for you to choose from. As a company specialized in optical communications, FS.COM offers a customized network solution for each customer. You will find the best solution here.

How to Choose a Switch on a Tight Budget?

A network switch connects multiple devices to a local area network (LAN) and allows the devices to communicate over the network. It improves the efficiency and potential throughput of the network by sending data only to the devices designated to receive it. The right switch should meet the current needs and offer possibilities for future growth. In this article, we try to figure out what to consider when selecting the best switch for your system.

buying network switch

Considerations When Selecting Network Switch

When selecting a network switch, you need to consider various factors, such as routing requirements, port speeds and performance needs. Thorough planning before making the purchase will save you time and money, enabling you to get a switch with all the functionality you require for today and tomorrow. Here are some metrics of a switch you should weigh.

Routing Requirements—these dictate which series or line will be explored, which affects pricing significantly. You can choose either dynamic routing (basic RIP, OSPF, BGP) or static (static and inter-VLAN routing).

Performance Requirements—evaluate switch performance requirements for speed, position in the network (edge, distribution and core) and redundancy needs (power, fabric and management redundancy).

Manageability—when specifying a brand, model or OS, consider other models and operating systems already in use. These systems are already managed and understood by the technical staff, and finding a comparable platform is helpful.

Port Speeds and Types—the speeds, as well as the number of primary and uplink ports, are important. Fiber optics and transceivers should be selected based on the fiber types, and distances should be specified for each fiber uplink.

Switches With Same Ethernet Ports: How to Make Your Decision?

Then how do you choose among switches with the same ports and speeds? Here we take some 10G switches as an example. Each of them features the same 48 1/10G SFP+ ports and 6 40G QSFP+ ports, delivering 1.44T of throughput. The main metrics of these switches are presented as follows:

|                   | FS S5850-48S6Q            | Cisco Nexus 9372PX          | Arista 7050SX-72Q | HPE Altoline 6920           |
|-------------------|---------------------------|-----------------------------|-------------------|-----------------------------|
| NOS               | Fiberstore OS             | Cisco IOS                   | Arista EOS        | HPE Comware                 |
| ASIC              | Centec                    | Broadcom Trident2           | Broadcom Trident2 | Broadcom Trident2           |
| CPU               | x86 processor and PowerPC | x86 processor               | x86 processor     | x86 processor               |
| Latency           | 612 ns                    | 1000 ns                     | 550 ns            | 600–720 ns                  |
| System Memory     | 1 GB                      | 16 GB                       | 4 GB              | 8 GB                        |
| Power Consumption | 150W/190W                 | 210W/537W                   | 144W/261W         | 315W (maximum)              |
| Reference Price   | US$4,000.00               | US$9,505.00 – US$21,318.16  | US$21,408.95      | US$11,209.66 – US$12,792.00 |

Given the same port density and capacity, and very similar ASICs and CPUs, the differences lie in latency, power consumption and price.

Latency is a fairly important metric of a switch, and it can be a much stronger determinant of application performance. It defines how fast bits travel from device to device: the lower the latency, the quicker the transfer. Since it is beneficial to minimize the overall delay in the communication path between communicating pairs or groups of endpoints, customers are inclined to choose switches with low latency. The S5850-48S6Q offers low latency among switches of the same switching capacity, making it an attractive choice.

The power consumption of a switch contributes a considerable fraction of a data center’s energy expenses, so power efficiency is another essential element customers value. Customers calculating total cost of ownership need to factor in power consumption. The S5850-48S6Q delivers significantly reduced power consumption in this case, compared with the three other brands. And that is for an individual switch; when it comes to the many switches in a data center, this can add up to a substantial annual OpEx saving.
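To gauge the OpEx impact, here is a rough sketch. The US$0.10/kWh electricity price and 24×7 operation at the typical-load figures are assumptions for illustration, not vendor data:

```python
# Annual energy cost of a switch running 24x7 at a given typical draw.
def annual_energy_cost_usd(watts, usd_per_kwh=0.10):
    hours_per_year = 24 * 365
    return watts / 1000 * hours_per_year * usd_per_kwh

# typical-load figures taken from the comparison table above (150W vs 210W)
saving = annual_energy_cost_usd(210) - annual_energy_cost_usd(150)
print(f"${saving:.2f} per switch per year")  # $52.56 per switch per year
```

Multiplied across dozens or hundreds of switches and the higher maximum-load figures, the difference grows accordingly.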

Most importantly, the economics of the S5850-48S6Q cannot be beat. The S5850-48S6Q is equipped with a CPU and chips very similar to those of the big-brand switches, which means their functionality can be equally realized by the S5850-48S6Q. For more information on the FS S5850-48S6Q, please visit

Time for Optimizing Your DWDM Network With FMT

DWDM technology has been widely applied in recent years and will continue to provide the bandwidth for large amounts of data. In fact, system capacity will grow as advancing technologies allow closer spacing, and therefore higher numbers, of wavelengths. DWDM is also moving beyond transport to become the basis of all-optical networking. How to deploy a DWDM network at the lowest cost has always been a hot topic. At FS, we remain committed to being customer-oriented, and we have now launched a brand-new FMT (FS.COM multi-service transport) platform to help you optimize your DWDM network performance. So what is the FMT platform, and what can FMT do for you?

About FMT

FMT platform

Developed by FS.COM this year, FMT is a multiservice transport system featuring a high-density design and low insertion loss. It aims to optimize the performance of DWDM networks and deliver better services to customers. It can satisfy the requirements of both CWDM and DWDM transmission, especially long-haul DWDM transmission. Compared with the previous generation of DWDM network components, the products of the FMT series have been enhanced in every aspect, so they can provide our customers higher networking performance with better management and lower power consumption. Next, I will introduce what exactly is included in our FMT platform.

All-in-One FMT Series DWDM Networking System

Designed to provide the best service at the lowest price, our FMT platform packages devices such as EDFA, OEO, DCM, OLP and VOA into small plug-in cards, and provides standard rack units as well as free software for better management and monitoring. You may ask why you need to add these devices and how they help your DWDM network. I will introduce them respectively:

  • OEO. A DWDM transceiver can produce the required wavelength directly, but it has higher power consumption. Adding an OEO converter at the switch can decrease the fault risks caused by high temperature and power consumption at the switch.


  • EDFA. Light loss occurs at many places: devices, optical fibers, fiber connectors, fusion splicing points, etc. To overcome light loss in a DWDM network and ensure high-quality long-distance transmission, an EDFA is usually added to the network.


  • DCM. In a DWDM network, dispersion accumulates along the optical fiber. A DCM compensates this accumulated dispersion and so optimizes the network. The use of DCM differs from situation to situation; it is usually suggested to add a DCM to your network if the transmission distance is longer than 40 km.


  • OLP. An optical line protection system is vital in a DWDM network. Using vacant optical fiber to build a backup path is what many providers do to ensure a highly secure optical transmission network. 1:1 OLP, 1+1 OLP and OBP are all available.


  • VOA. Our variable optical attenuator provides accurate attenuation and easy operation for optical power adjustment. You can control the optical power in a more flexible and reliable manner without affecting the running network.


  • Red/Blue Splitter. A red/blue splitter or filter combines the red transmit channel and the blue receive channel onto a single fiber. This product is commonly used in simplex DWDM networks, which use a single fiber for both transmitting and receiving at the same time.

FMT Red/blue Splitter
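To see where the 40 km rule of thumb for DCM above comes from, you can estimate accumulated chromatic dispersion. A minimal sketch, assuming the ~17 ps/(nm·km) coefficient typical of standard single-mode fiber at 1550 nm (check your fiber’s datasheet):

```python
# Accumulated chromatic dispersion over a span; a DCM is chosen to offset
# roughly this amount. The 17 ps/(nm.km) coefficient is an assumed value
# typical of standard single-mode fiber at 1550 nm.
def accumulated_dispersion(km, coeff_ps_per_nm_km=17.0):
    return km * coeff_ps_per_nm_km

print(accumulated_dispersion(80))  # 1360.0 ps/nm for an 80 km span
```

The longer the span, the more dispersion the DCM must remove, which is why compensation only becomes necessary beyond a certain distance.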


The FMT platform is definitely an ideal choice for optimizing your DWDM network. And besides ensuring good DWDM transmission quality over long distances, we also offer a customized one-stop solution, including both products and network design, for your DWDM network. For more details please visit FS.COM.

Fiber Link Power Budget: How to Make It Right?

In an optical communication system, fiber patch cables and optical transceivers are necessities that complete the pathway for the optical signal, enabling data to travel between devices. To ensure that the fiber system has sufficient power for correct operation, it is vitally important to calculate the span’s power budget. A solid fiber link assures that networks run smoother and faster, with less downtime. This article addresses the essential elements of a link power budget, and illustrates how to calculate it effectively.

Power Budget Definition

Power budget refers to the amount of loss a data link can tolerate while maintaining proper operation. In other words, it defines the amount of optical power available for successfully transmitting a signal over a distance of optical fiber. The power budget is the difference between the minimum (worst-case) transmitter output power and the maximum (worst-case) receiver input required. The calculations should always assume worst-case values, in order to ensure adequate power for the link; the actual value will always be higher than this. The optical power budget is measured in dB and can be calculated by subtracting the minimum receiver sensitivity from the minimum transmitter output power:

PB (dB) = PTX (dBm) – PRX (dBm)

fiber link power budget
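In code, the definition is a one-liner. The transceiver figures below are hypothetical, purely for illustration:

```python
def power_budget_db(tx_min_dbm, rx_sensitivity_dbm):
    """Worst-case Tx output power minus worst-case Rx sensitivity, in dB."""
    return tx_min_dbm - rx_sensitivity_dbm

# e.g. a hypothetical module with -3 dBm minimum Tx power
# and -14 dBm receiver sensitivity
print(power_budget_db(-3, -14))  # 11 dB available for losses and margin
```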

Why Does Power Budget Matter?

The purpose of power budgeting is to ensure that the optical power arriving at the receiver is adequate under all circumstances. As data centers migrate to 40G, 100G and possibly 400G in the near future, link performance becomes increasingly essential. Link failures stir up a sequence of problems such as system downtime, which equates to accelerated costs, frustrated users, deteriorated performance and increased total cost. With appropriate power budgeting, on the other hand, a high-performance link can be achieved for better network reliability, more flexible cabling and simplified regular maintenance, which is beneficial in the long run.

Critical Elements Involved In Calculating Power Budget

When performing a power budget calculation, there is a long list of elements to account for. The basic items that determine general transmission system performance are listed here.

link power budget vs. distance

Fiber loss: fiber loss greatly impacts overall system performance and is expressed in dB per kilometer. The total fiber loss is calculated as the distance × the loss factor (provided by the manufacturer).

Connector loss: the loss of a mated pair of connectors. Multimode connectors typically have losses of 0.2-0.5 dB. Single-mode connectors that are factory-made and fusion-spliced on have losses of 0.1-0.2 dB; field-terminated single-mode connectors may have losses as high as 0.5-1.0 dB.

Number and type of splices: mechanical splice loss is generally in the range of 0.7 to 1.5 dB per splice. Fusion splice loss is between 0.1 and 0.5 dB per splice. Because of their lower loss, fusion splices are preferred.

Power margin: the power budget margin generally accounts for aging of the fiber, aging of the transmitter and receiver components, additional devices, incidental twisting and bending of the fiber, additional splices, etc. The margin, needed to compensate for link degradation, is typically within the range of 3 to 10 dB.
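Putting these elements together, a link "closes" when the available budget covers all the losses plus the margin. A minimal sketch; the 10 km span and the per-item loss values below are hypothetical, chosen within the typical ranges above:

```python
def link_closes(budget_db, km, fiber_db_per_km,
                connectors, connector_db, splices, splice_db, margin_db):
    """True if the power budget covers fiber, connector and splice losses
    plus the safety margin."""
    total_loss = (km * fiber_db_per_km
                  + connectors * connector_db
                  + splices * splice_db)
    return budget_db >= total_loss + margin_db

# hypothetical 10 km single-mode span: 13 dB budget, 2 connector pairs,
# 2 fusion splices, 3 dB margin -> 3.5 + 1.0 + 0.6 + 3 = 8.1 dB needed
print(link_closes(13, 10, 0.35, 2, 0.5, 2, 0.3, 3))  # True
```

If the check fails, the options are the same levers listed above: lower-loss connectors and splices, better fiber, or a transceiver with a larger budget.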

How to Properly Calculate Power Budget?

Here we use the following example to demonstrate how to calculate the power budget of an optical link. Example: the system contains a transmitter and a receiver, and the optical link contains an optical amplifier, 4 optical connectors and 5 splices. The following table presents the attenuation or gain of each component.

Tx power:   3dBm

Connector loss: 0.15dB

Splice loss: 0.15dB

Amplifier gain: 10dB

Fiber optic loss: 0.2 dB/km

fiber link power budget calculation

The total attenuation of this link PL is the sum of:

Fiber optic loss: (30 km + 50 km) ×0.2dB/km = 16 dB

Attenuation of connectors: 4×0.15 dB = 0.60 dB

Attenuation of splices: 5×0.15 dB = 0.75 dB

So PL = 16 dB + 0.60 dB + 0.75 dB = 17.35 dB

The total gain of the link is generated by optical amplifier, which is 10 dB in this case. So PG = 10 dB

Considering link degradation, a power margin should be included as well. A good safety margin is PM = 6 dB.

To find the required receiver sensitivity at the end of the optical path, it is sufficient to rearrange and solve the inequality. The budget must cover the net loss plus the margin:

Ptx – Prx ≥ PL – PG + PM

Prx ≤ Ptx – PL + PG – PM

Prx ≤ 3 dBm – 17.35 dB + 10 dB – 6 dB

Prx ≤ -10.35 dBm

The receiver should provide sensitivity better than -10.35 dBm.
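The worked example above can be checked with a few lines of arithmetic:

```python
# Reproduce the example: 30 km + 50 km spans, 4 connectors, 5 splices,
# one 10 dB amplifier, 6 dB safety margin.
fiber_loss = (30 + 50) * 0.2       # 16.0 dB
connector_loss = 4 * 0.15          # 0.60 dB
splice_loss = 5 * 0.15             # 0.75 dB
pl = fiber_loss + connector_loss + splice_loss  # total attenuation, 17.35 dB

pg = 10.0   # amplifier gain, dB
pm = 6.0    # safety margin, dB
ptx = 3.0   # transmitter power, dBm

required_sensitivity = ptx - pl + pg - pm
print(f"{required_sensitivity:.2f} dBm")  # -10.35 dBm
```

Changing any one input (say, a longer span or an extra splice) immediately shows how much sensitivity headroom remains.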


With data centers migrating to 40G, 100G, 200G and even 400G, fiber link performance becomes more important than ever before. Understanding link power budget will help you optimize your fiber link design as well. In addition, high-performance cables, quality transceivers and high-performance installation practices also assist to ensure better link performance.

Cabling Options for 40G QSFP SR4 and 40G QSFP BiDi Transceivers

The booming global data traffic spurs demand for faster data transmission and greater capacity over the network, and that demand shows no sign of slackening. Thus migration from 10G to higher-speed 40G or 100G becomes an inevitable trend and a necessity for network managers to accommodate the data boom. For 40G short-reach data communication and interconnect applications, 40G QSFP SR4 and 40G QSFP BiDi transceiver modules are generally involved. This article guides you through the working principles of the two 40G transceivers, and then presents the cabling options for each.

40G QSFP SR4 and 40G QSFP BiDi at a Glance

Before we go any further, it is better to first obtain some basic information about the 40G QSFP SR4 and 40G QSFP BiDi transceivers. While both are used to support short-range (SR) 40G connectivity, the major difference lies in the protocols, namely the way data transmission is achieved for 40G applications. 40G QSFP SR4 operates over MMF ribbon with MPO connectors, utilizing 4 parallel fiber pairs (8 fiber strands) at 10Gbps each for a total of 40Gbps full duplex.

40g qsfp sr4

40G QSFP BiDi uses the same 10-Gbps electrical lanes, but combines them in the optical outputs, thus requiring only two fibers with an LC connector interface. Each fiber simultaneously transmits and receives 20-Gbps traffic at two different wavelengths, which means that a 40G QSFP BiDi module converts four channels of 10Gbps transmit and receive signals into two bidirectional channels of 20Gbps signals. The connection can reach 100 m on OM3 MMF or 150 m on OM4 MMF, the same as 40-Gbps SR4.

40g qsfp bidi
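The fiber-count difference between the two modules drives much of the cabling discussion below. A quick sketch; the 48-link figure is just an illustrative port count:

```python
# Fibers consumed per 40G link: SR4 is trunked over a 12-fiber MPO
# (8 strands active, 4 dark), while BiDi needs only a duplex LC pair.
def fibers_required(n_links, fibers_per_link):
    return n_links * fibers_per_link

links = 48  # hypothetical number of 40G links
print(fibers_required(links, 12))  # 576 fibers of SR4 MPO trunk
print(fibers_required(links, 2))   # 96 fibers for BiDi
```

This sixfold difference in trunk fiber is why BiDi can often reuse existing duplex 10G cabling, while SR4 typically requires new MPO trunks or conversion modules.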

Cabling Solutions for 40G QSFP SR4 and 40G QSFP BiDi Transceivers

Whether for 40G QSFP SR4 or BiDi transceivers, there are basically three cabling approaches: direct connection, interconnection and cross-connection. This section illustrates each of the three approaches for 40G transceiver cabling.

Options for 40G QSFP SR4 Transceiver

40G SR4 operates over 12 fiber strands terminated by MPO-12 connectors; 8 strands carry traffic and 4 are unused. There are thus three cabling options for parallel 40G QSFP SR4 connectivity:

  • Solution 1: No conversion; uses traditional 12-fiber MTP connectivity.
  • Solution 2: Uses conversion modules, converting two 12-fiber links into three 8-fiber links through a conversion patch panel.
  • Solution 3: Converts two 12-fiber links into three 8-fiber links through a conversion assembly and standard MTP patch panels.

Here we offer cabling options for parallel 40G QSFP SR4 transceiver based on these three solutions.

Scenario One: Direct Connection for 40G QSFP SR4 Transceiver

For a direct connection between two parallel-optics 40G Ethernet transceivers, a Type-B (key-up to key-up) MTP patch cable should be used. With fiber 1 on one end going to fiber 12 on the other end, this reversed fiber positioning ensures that the signal flows from the transmitter on one end of the link to the receiver on the other. The picture below shows an MTP patch cable directly connecting two switch ports.

40g-QSFP-SR4 direct connection

Scenario Two: Interconnection for 40G QSFP SR4 Transceiver

The most basic structured cabling solution is an interconnect. The following picture shows several interconnect approaches with various patch panel options.

a. The 2×3 conversion modules allow 100% fiber utilization and constitute the most commonly deployed method. They also greatly reduce jumper complexity. The female-to-female Type-B polarity cable here is the same one used to directly connect two parallel-optics transceivers. That same jumper is used on both ends of the interconnect link, eliminating concerns about correct pinning.

b. The same trunk used in method a is adopted, but the jumper type is now male-to-female Type-B polarity. Thus, when you install the MTP patch cable, the male end goes into the patch panel and the female end into the electronics.

c. This combined solution might be deployed when cabling between a spine switch, where the module is placed, and a ToR leaf switch, where the conversion harness and MTP adapter panel are located.

40g-QSFP-SR4 interconnection

Scenario Three: Cross-Connection for 40G QSFP SR4 Transceiver

The picture below shows two cross-connection link designs for cabling a 40G QSFP SR4 transceiver.

a. This link design shows a conversion module example, which again is the most common and preferred method. All three jumpers in the link are female-to-female MTP patch cables with Type-B polarity. Thus, in a conversion module deployment, only one jumper type is used, whether for a direct-connect, interconnect, or cross-connect cabling scenario.

b. Standard MTP patch panels are deployed in this method. Here the MTP patch cables at the electronics are female (into the electronics) to male (into the patch panel), while the patch cords at the cross-connect are male-to-male going into the patch panels.

40g-QSFP-SR4 cross-connection

Options for 40G QSFP BiDi Transceiver

Cabling for 40G QSFP BiDi transceiver is relatively easy. Three methods are presented here.

Scenario One: Direct Connection for 40G QSFP BiDi Transceiver

In an unstructured cabling system, devices are directly connected with fiber optic cable. This direct-attachment design can be used to connect devices within short distances in a data center network. Direct connection between two 40-Gbps devices can be provided by MMF cables with QSFP BiDi transceivers at two ends.

40g-QSFP-SR bidi direct connection

Scenario Two: Interconnection for 40G QSFP BiDi Transceiver

When it comes to structured cabling, more permanent links should be considered. The interconnection link between two 40G bidirectional ports basically consists of an MTP trunk, MTP module cassettes and LC fiber patch cables. Future migration can be achieved simply by changing the patch panels on each end, without the need to disrupt the cabling infrastructure.

40g-QSFP-SR bidi interconnection

Scenario Three: Cross-Connection for 40G QSFP BiDi Transceiver

The cross-connection design involves two structured cabling links, which connect two switches via a centralized cross-connect. This design delivers much flexibility when new equipment needs to be installed: only patch cables are required to make the connection from the equipment to the patch panels.

40g-QSFP-SR bidi cross-connection


Judging from the cabling solutions for 40G QSFP SR4 and BiDi transceivers, it is clear that QSFP BiDi transceivers provide greater flexibility and simplicity than parallel 40G QSFP SR4 transceivers, while removing cost barriers to migration from 10G to 40G in data center networks. However, the main advantage of the 40G SR4 transceiver over the 40G BiDi transceiver is reach. We hope this article helps you make an informed decision.