L2Bridge not forwarding frames between NIC_ConnectX_6 ports



Viewing 3 posts - 1 through 3 (of 3 total)
  • Author
    Posts
  • #9607
    Mounika Ghanta
    Participant

      I am facing an issue with L2 connectivity between two nodes using NIC_ConnectX_6 over an L2Bridge network.

      Setup:
      • Site: BRIST
      • Nodes: NIC_ConnectX_6
      • Network: L2Bridge
      • OS: Ubuntu 22.04
      Issue:
      • Link is up (100 Gbps, full duplex)
      • Interfaces are active and configured correctly
      • ARP resolves, but ping fails (100% packet loss)
      • No packets are received on the destination (tcpdump shows nothing)
      I have tried:
      • Assigning static IPs
      • Adding static ARP entries
      • Flushing firewall rules
      • Changing MTU
      • Recreating slices
      • Testing both L2Bridge and L3 networks
      In all cases, packets leave the sender but never reach the receiver.
      Could anyone please help identify why frames are not being forwarded across the L2Bridge?
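      The checks above can be scripted so they are easy to re-run on both ends. A minimal sketch; the interface name ens7 and the peer IP are placeholders, substitute your actual dataplane NIC and addressing:

```shell
#!/bin/sh
# Quick L2 sanity checks on one end of the bridge.
# "ens7" and the peer IP below are placeholders for your setup.
check_l2() {
  dev="$1"
  peer_ip="$2"
  ip -br link show "$dev"                                # admin/oper state and MAC
  ethtool "$dev" | grep -E 'Speed|Duplex|Link detected'  # PHY status
  ip neigh show dev "$dev"                               # ARP entries on this NIC
  ping -c 3 -I "$dev" "$peer_ip"                         # dataplane reachability
}
# On the receiver, capture at the frame level (not just IP),
# so you can see whether *any* frames arrive:
#   tcpdump -eni ens7 -c 10
# Example invocation: check_l2 ens7 192.168.1.2
```

      Running the capture with -e on the receiver while pinging from the sender distinguishes "frames arrive but are dropped above L2" from "frames never arrive at all".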
      #9609
      Meshal Alruwisan
      Participant

        Hi Mounika,

        I ran into a similar issue before, and I fixed it by placing these nodes on different hosts within the site.

        #9610
        Mert Cevik
        Moderator

          Hello Mounika,

          I tried to find which slice this is and I’m guessing it’s Slice ID: c2a39f8b-8278-4bbd-a251-2eb42b1c5d65
          (If not, please indicate your slice ID)

          I want to point out a few items that can be useful.

          First, the topology of the slice I mentioned above
          (2x VMs running on the same host/worker brist-w2, each with a dedicated 100G CX6 card, connected over an L2Bridge)
          should work fine to pass traffic on the dataplane.

          I tested a similar slice topology on the CLEM node and confirmed that traffic worked well, so there shouldn't be a limitation when the VMs are placed on the same host. I have deleted my test slice on CLEM to release the two 100G dedicated CX6 NICs; if you prefer, you can re-create your slice on CLEM and we can see how it works.

          Alternatively, you can try Meshal's suggestion and place the VMs on different hosts/workers. Specifically for the BRIST node, this is possible if you choose NIC_ConnectX_6 for one VM and NIC_ConnectX_5 for the other VM.
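          For anyone scripting this, a rough fablib sketch of that split-NIC layout. The slice/node names and the NIC_PLAN/build_split_host_slice helpers are illustrative, and this assumes the fablib API from fabrictestbed-extensions:

```python
# Illustrative sketch: request two BRIST VMs with different dedicated NIC
# models, so the scheduler places them on different workers, then join
# them with an L2Bridge. Names below are placeholders.
NIC_PLAN = {"node1": "NIC_ConnectX_6", "node2": "NIC_ConnectX_5"}


def build_split_host_slice(slice_name="l2bridge-split"):
    # Imported inside the function so the sketch reads without fablib installed.
    from fabrictestbed_extensions.fablib.fablib import FablibManager

    fablib = FablibManager()
    slice = fablib.new_slice(name=slice_name)

    ifaces = []
    for node_name, nic_model in NIC_PLAN.items():
        node = slice.add_node(name=node_name, site="BRIST")
        nic = node.add_component(model=nic_model, name=f"{node_name}-nic")
        ifaces.append(nic.get_interfaces()[0])

    # One L2Bridge service connecting the two dedicated-NIC interfaces.
    slice.add_l2network(name="bridge1", interfaces=ifaces)
    slice.submit()
    return slice
```

          Because the CX6 and CX5 cards live on different worker types (FastNet vs SlowNet), requesting one of each forces the two VMs onto different hosts.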

          I also want to point you to this page, https://learn.fabric-testbed.net/knowledge-base/fabric-site-hardware-configurations/,
          which includes information about the hardware configurations of the FABRIC sites/nodes. The FastNet and SlowNet worker elements have the dedicated NICs on them (note the CX6 and CX5 types). Note also that all sites/nodes (except CERN) have only one FastNet worker.

