Things You Should Know About InfiniBand

In today’s high-speed information era, people expect ever more from the network’s bandwidth and latency, the two parameters most commonly used to compare link performance. To meet these expectations, InfiniBand is designed to deliver the world’s fastest interconnect, supporting up to 56Gb/s and extremely low application latency. So what exactly is InfiniBand? You may find the answer in this post.

Introduction to InfiniBand

InfiniBand (IB) is a computer-networking communication standard used in high-performance computing that features very high throughput and very low latency. It is used for data interconnect both among and within computers. InfiniBand is also utilized as either a direct or switched interconnect between servers and storage systems, as well as an interconnect between storage systems.

Basic InfiniBand Structure

InfiniBand creates a private, protected channel directly between the nodes via switches, and facilitates data and message movement without CPU involvement through Remote Direct Memory Access (RDMA) and send/receive offloads that are managed and performed by InfiniBand adapters. The adapters are connected to the CPU over a PCI Express interface on one end and to the InfiniBand subnet through InfiniBand network ports on the other. This provides distinct advantages over other network communication protocols, including higher bandwidth, lower latency, and enhanced scalability.
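
As a rough illustration of the adapter’s role, the short C sketch below uses the Linux libibverbs library to list the InfiniBand adapters (HCAs) visible to the host and query the first one. The library choice, file name, and build command are assumptions for illustration only; this post does not prescribe a particular software stack.

    /* A minimal sketch (not from the original post): list InfiniBand
     * adapters (HCAs) with the Linux libibverbs API. Assumes an HCA,
     * its driver, and libibverbs are installed.
     * Example build: gcc list_hcas.c -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);
        if (!devices || num_devices == 0) {
            fprintf(stderr, "no InfiniBand adapters found\n");
            return 1;
        }

        /* Each entry is an HCA sitting on the PCI Express bus. */
        for (int i = 0; i < num_devices; i++)
            printf("HCA %d: %s\n", i, ibv_get_device_name(devices[i]));

        /* Open the first adapter; user space talks to it directly, which is
         * the basis of RDMA and the send/receive offloads mentioned above. */
        struct ibv_context *ctx = ibv_open_device(devices[0]);
        if (ctx) {
            struct ibv_device_attr attr;
            if (!ibv_query_device(ctx, &attr))
                printf("ports: %d, max queue pairs: %d\n",
                       (int)attr.phys_port_cnt, attr.max_qp);
            ibv_close_device(ctx);
        }

        ibv_free_device_list(devices);
        return 0;
    }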

How InfiniBand Works

Instead of sending data in parallel, as PCI does, InfiniBand sends data serially and can carry multiple channels of data at the same time over a multiplexed signal. The principles of InfiniBand mirror those of mainframe computer systems, which are inherently channel-based. InfiniBand channels are created by attaching host channel adapters (HCAs) and target channel adapters (TCAs) through InfiniBand switches. HCAs are I/O engines located within a server. TCAs enable remote storage and network connectivity into the InfiniBand interconnect infrastructure, called a fabric. The InfiniBand architecture is capable of supporting tens of thousands of nodes in a single subnet.
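
To make the subnet idea more concrete, the sketch below (again a libibverbs-based assumption, building on the previous example) queries the first port of an HCA and prints its LID, the 16-bit address the subnet manager assigns within a single subnet; the tens-of-thousands-of-nodes figure for one subnet follows from the size of this LID space, while 128-bit GIDs allow addressing beyond it.

    /* A minimal sketch, building on the previous one: query the first port
     * of the first HCA and print its LID, the subnet-local address assigned
     * by the subnet manager. Same libibverbs assumptions as above. */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int n = 0;
        struct ibv_device **devices = ibv_get_device_list(&n);
        if (!devices || n == 0)
            return 1;

        struct ibv_context *ctx = ibv_open_device(devices[0]);
        if (ctx) {
            struct ibv_port_attr port;
            /* Port numbering starts at 1 on an HCA. */
            if (!ibv_query_port(ctx, 1, &port))
                /* The 16-bit LID is only meaningful inside one subnet;
                 * 128-bit GIDs are used to address nodes across subnets. */
                printf("port 1: LID 0x%04x, %s\n", port.lid,
                       port.state == IBV_PORT_ACTIVE ? "active" : "not active");
            ibv_close_device(ctx);
        }

        ibv_free_device_list(devices);
        return 0;
    }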

Features and Advantages of InfiniBand

InfiniBand has some primary advantages over other interconnect technologies.

  • Higher bandwidth—InfiniBand consistently delivers the highest end-to-end bandwidth, both to the server and to the storage connection.
  • Lower latency—RDMA zero-copy networking reduces OS overhead so data can move through the network quickly (see the registration sketch after this list).
  • Enhanced scalability—InfiniBand can accommodate flat networks of around 40,000 nodes in a single subnet and up to 2^128 nodes (a virtually unlimited number) in a global network, built from the same switch components simply by adding switches.
  • Higher CPU efficiency—With data movement offloaded to the adapters, the CPU can spend more compute cycles on its applications, which reduces run time and increases the number of jobs per day.
  • Reduced management overhead—InfiniBand switches can run in Software-Defined Networking (SDN) mode, allowing them to operate as part of the fabric without per-switch CPU management.
  • Simplicity—InfiniBand is exceedingly easy to install when building a simple fat-tree cluster, whereas Ethernet requires knowledge of various advanced protocols to build a comparable cluster.
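
To give the zero-copy point above a little more substance, the sketch below registers an ordinary application buffer with the adapter through libibverbs, which is what lets the HCA move data to and from that memory directly, without the CPU copying it. The buffer size and access flags are illustrative assumptions, and a real application would go on to create queue pairs and post work requests.

    /* A minimal sketch: register an application buffer for RDMA with
     * libibverbs so the HCA can move data in and out of it directly
     * (zero-copy). Buffer size and access flags are illustrative. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int n = 0;
        struct ibv_device **devices = ibv_get_device_list(&n);
        if (!devices || n == 0)
            return 1;

        struct ibv_context *ctx = ibv_open_device(devices[0]);
        if (!ctx)
            return 1;

        /* A protection domain groups resources that may be used together. */
        struct ibv_pd *pd = ibv_alloc_pd(ctx);
        size_t len = 4096;              /* arbitrary example size */
        void *buf = malloc(len);

        if (pd && buf) {
            /* Registration pins the pages and gives the adapter their
             * translation, so remote reads/writes need no CPU copy. */
            struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                           IBV_ACCESS_LOCAL_WRITE |
                                           IBV_ACCESS_REMOTE_READ |
                                           IBV_ACCESS_REMOTE_WRITE);
            if (mr) {
                /* lkey/rkey are the handles later work requests refer to. */
                printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
                       len, mr->lkey, mr->rkey);
                ibv_dereg_mr(mr);
            }
        }

        if (pd)
            ibv_dealloc_pd(pd);
        free(buf);
        ibv_close_device(ctx);
        ibv_free_device_list(devices);
        return 0;
    }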

Summary

InfiniBand is a high-performance, multi-purpose network architecture based on a switched fabric. It has become a leading standard in high-performance computing: over 200 of the world’s 500 fastest supercomputers use InfiniBand. If you are planning to deploy InfiniBand, feel free to ask FS.COM for help. We offer 40G QSFP+ modules compliant with the InfiniBand standard and a variety of optical fiber cables for you to choose from. As a company specializing in optical communications, FS.COM offers customized network solutions for each customer. You will find the best solution here.
