RDMA Connection Manager
InfiniBand requires the addresses of endpoints to be exchanged over an out-of-band channel (such as TCP/IP). GlusterFS used a custom protocol over a TCP/IP channel to exchange these addresses. However, librdmacm provides the same functionality, with the advantage of being a standard protocol. This helps if we want to communicate with a non-GlusterFS entity (say, an NFS client with the Gluster NFS server) over InfiniBand.
Using rdma-cm requires the following:
- IP over InfiniBand (IPoIB) - the value of the option remote-host in the GlusterFS transport configuration should be an IPoIB address
- the rdma_cm kernel module
- the userspace rdma-cm library - librdmacm
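As an illustration, a client-side volfile fragment using the rdma transport might look like the following sketch (the volume name, brick path, and address are hypothetical; the point is that remote-host must carry an IPoIB address):

```
volume testvol-client-0
    type protocol/client
    option transport-type rdma
    # remote-host must be an IPoIB address (hypothetical value below)
    option remote-host 10.10.10.1
    option remote-subvolume /bricks/brick1
end-volume
```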
rdma-cm in GlusterFS >= 3.4
The following is the impact of http://review.gluster.org/#change,149.
New userspace packages needed: librdmacm, librdmacm-devel.
Because of bug 890502, we have to probe the peer on an IPoIB address. This imposes the restriction that all volumes created in the future have to communicate over an IPoIB address (irrespective of whether they use Gluster's tcp or rdma transport).
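Concretely, this means peers are probed and bricks are specified by their IPoIB addresses rather than their Ethernet addresses. A sketch, with a hypothetical IPoIB address:

```
# probe the peer on its IPoIB address (hypothetical address)
gluster peer probe 10.10.10.2

# bricks are likewise addressed via IPoIB
gluster volume create testvol 10.10.10.2:/bricks/brick1
```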
Currently the client is free to choose between the tcp and rdma transports while communicating with the server (by creating volumes with transport-type tcp,rdma). This freedom was a by-product of our ability to use the TCP/IP channel - transports with option transport-type tcp - for the rdma connection-establishment handshake too. However, with the new requirement of an IPoIB address for connection establishment, we lose this freedom (until we bring in multi-network support in glusterd - where a brick can be identified by a set of IP addresses and we can choose different pairs of IP addresses for communication based on our requirements).
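To illustrate the choice described above: a volume created with both transports can be mounted over either one, with the transport selected at mount time. A sketch with hypothetical addresses and paths:

```
# create a volume that supports both transports
gluster volume create testvol transport tcp,rdma 10.10.10.2:/bricks/brick1

# mount over tcp (the default) ...
mount -t glusterfs 10.10.10.2:/testvol /mnt/tcp

# ... or select rdma explicitly via the transport mount option
mount -t glusterfs -o transport=rdma 10.10.10.2:/testvol /mnt/rdma
```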