Default bandwidth for Fabric nodes


    Xusheng Ai


    Since we can request resources like "core", "ram", and "disk" when we create a slice, I was wondering whether there is a default bandwidth we can look up, and whether a node's bandwidth depends on the site.
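
    For reference, here is a minimal sketch of requesting those resources with the FABRIC fablib Python API; the slice/node names, site, and component model are illustrative values, not recommendations:

    ```python
    from fabrictestbed_extensions.fablib.fablib import FablibManager

    # Connect to FABRIC using credentials from your fabric_rc configuration.
    fablib = FablibManager()

    # Create a slice and request a node with explicit core/ram/disk values.
    slice = fablib.new_slice(name="bw-test")
    node = slice.add_node(name="node1", site="UTAH", cores=4, ram=16, disk=50)

    # Attach a dedicated SmartNIC component (model name per the fablib docs).
    node.add_component(model="NIC_ConnectX_6", name="nic1")

    # Submit the request and wait for the slice to become active.
    slice.submit()
    ```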

    Best Regards,

    Paul Ruth

    You get full physical access to the NIC. The ConnectX-6 cards are 2x100G, the ConnectX-5 cards are 2x25G, and the Basic NICs are ConnectX-6 SR-IOV VFs (100G, but bandwidth is shared with other Basic NICs). We don't artificially limit bandwidth on any NICs. Eventually we plan to offer dedicated QoS for bandwidth across WAN connections, but the NICs themselves are physical NICs and have whatever bandwidth they were designed to have.

    You may see different bandwidths between sites. Some of that could be because other users are sharing the links, and a couple of sites don't yet have their permanent physical connections. However, we have not ramped up usage yet, and nearly all of our network links are minimally used. I would not expect other users to significantly affect your WAN bandwidth right now, and if they do it will be temporary.

    I do expect that achieving high bandwidths will require tuning the end hosts (and maybe the core switches). Soon we will try this ourselves and provide suggested tuning parameters, but for now there is nothing artificial that prevents any user from achieving 100G across WAN FABRIC links. We just haven't looked into the right tuning parameters yet.
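
    As a starting point for that end-host tuning, the usual knobs on Linux are the TCP buffer sizes and the congestion control algorithm. The values below are generic high-bandwidth-delay-product starting points, not official FABRIC recommendations:

    ```shell
    # Raise the maximum socket buffer sizes (512 MB here, illustrative).
    sudo sysctl -w net.core.rmem_max=536870912
    sudo sysctl -w net.core.wmem_max=536870912

    # Let TCP autotuning grow up to those limits (min / default / max).
    sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 536870912"
    sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 536870912"

    # A congestion control algorithm suited to long fat networks.
    sudo sysctl -w net.ipv4.tcp_congestion_control=htcp

    # Then measure, e.g. with several parallel iperf3 streams against a
    # server running on the remote node (replace <server-ip>):
    # iperf3 -c <server-ip> -P 8 -t 30
    ```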

    If you are interested in high bandwidths, I suggest starting with pairs of sites that are physically close to each other (e.g., UTAH-SALT or WASH-MAX). Low latency makes higher bandwidth a lot easier to achieve.


    Xusheng Ai

    Thank you so much for the information!
