Forum Replies Created
March 26, 2026 at 11:31 am in reply to: L2Bridge not forwarding frames between NIC_ConnectX_6 ports #9610
Hello Mounika,
I tried to find which slice this is, and I’m guessing it’s Slice ID: c2a39f8b-8278-4bbd-a251-2eb42b1c5d65 (if not, please indicate your slice ID). I want to point out a few items that may be useful.
First, the topology on the slice that I mentioned above (2x VMs running on the same host/worker brist-w2, each one with a dedicated 100G CX6 card and connected over an L2Bridge) should work fine to pass traffic on the dataplane. I tested a similar slice topology on the CLEM node and confirmed that traffic worked well, so there shouldn’t be a limitation when the VMs are placed on the same host. I deleted my test slice on CLEM to release the two 100G dedicated CX6 NICs; if you prefer, you can re-create your slice on CLEM and we can see how it works.
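For reference, this is roughly how I verified the dataplane in my test. It is only a minimal sketch: the interface name (ens7) and the addresses are assumptions on my side, and yours will likely differ.
$ # On VM1: bring up the dataplane interface and assign a test address
$ sudo ip link set ens7 up
$ sudo ip addr add 192.168.10.1/24 dev ens7
$ # On VM2: same, with a peer address on the same subnet
$ sudo ip link set ens7 up
$ sudo ip addr add 192.168.10.2/24 dev ens7
$ # From VM2: ping VM1 across the L2Bridge
$ ping -c 3 192.168.10.1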
Alternatively, you can try Meshal’s suggestion and place the VMs on different hosts/workers. Specifically for the BRIST node, this is possible if you choose NIC_ConnectX_6 for one VM and NIC_ConnectX_5 for the other VM.
I want to point out this page https://learn.fabric-testbed.net/knowledge-base/fabric-site-hardware-configurations/ which includes information about the hardware configurations of the FABRIC sites/nodes. FastNet and SlowNet worker elements have the dedicated NICs on them (note the CX6 and CX5 types). I also want to share that all sites/nodes (except CERN) have only one FastNet worker.

This maintenance is completed.
Hi Lorenzo,
Your VM had crashed (out of memory). I rebooted the VM; it should be reachable for you now. I’m attaching the console output as well in case you find useful information in it. console-e3cfe65c-0a31-4d43-9ea4-526fa17ec7e6
Hello Tanay,
VM “node3-dpu” is in a crashed state. I’m attaching the relevant section from the console – console-node3-dpu
March 10, 2026 at 1:02 pm in reply to: SSH error: channel 0: open failed: connect failed: No route to host #9571
Hi Meshal,
The slice you indicated has slivers on the SALT node, and we had a power outage there at 7am ET today. All slivers have now been recovered and are online. Specifically for the SALT node, we have been having recurring power outages recently. Our options for remediation at the SALT node are very limited, but we are actively searching for ways to address this. If the other occurrences of connectivity problems that you mentioned were on slices at the SALT node, then it’s likely that the previous power outages were the cause.
Please let us know when you have such connectivity issues, and we will check and work with you promptly.
Hi Tanay,
As a next step, we can try cold-rebooting the server that is holding the DPU; however, this is not possible while other users have VM slivers running on it, so I need to make special arrangements for that.
On our Development environment, we have a BlueField-2 DPU, and we can perform all kinds of trials on it. You pointed to the web page that describes the configuration steps, but it would be even better if you could provide us with a complete list of commands for this configuration, so we can test it on the Development site. If there is any variance between BlueField-2 and BlueField-3, it would be good to indicate that as well. In fact, I’m currently preparing for additional BlueField-3 integrations and have just received BlueField-3 cards, so I can use one of them to test on the Development site with a BlueField-3 later.
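To illustrate what I mean by a complete list, something like the following would be ideal. This is only a hypothetical sketch based on standard mst/mlxconfig usage; the device path and the parameter name/value are placeholders I made up, not the actual steps from your page.
$ # Start the Mellanox firmware tools service and locate the device
$ sudo mst start
$ sudo mst status
$ # Query the current firmware configuration (device path is a placeholder)
$ sudo mlxconfig -d /dev/mst/mt41686_pciconf0 query
$ # Set the desired parameter (name/value here are placeholders)
$ sudo mlxconfig -d /dev/mst/mt41686_pciconf0 set PCI_SWITCH_EMULATION_ENABLE=1
$ # A power cycle of the DPU is then required for the change to take effect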
And lastly, on the web page under the Hotplug Firmware Configuration section, there is a note saying “Hotplug is not guaranteed to work on AMD machines.” Servers on the FABRIC Testbed infrastructure are all AMD-based Dell R7525 servers. I’m not sure if this may be relevant to our issue.
Best regards,
Mert

Hi Tanay,
I performed a power reset for the DPU. Can you please check if that worked well for the firmware configuration change?
ubuntu@localhost:~$ uname -a
Linux localhost.localdomain 5.15.0-1065-bluefield #67-Ubuntu SMP Tue Apr 22 11:10:15 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
ubuntu@localhost:~$ uptime
16:19:21 up 1 min, 1 user, load average: 6.83, 2.15, 0.75
I will be able to describe the details of how I performed this later. Mainly, I had included the BMC bindings in the DPU integration, and I utilized this path; however, I’m not very sure about the terminology or specifics yet, just some intuitive actions so far. I’m also in touch with the FABRIC team about this item, so your input about the progress will be helpful for our further enhancements.
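Roughly, the reset went over the DPU’s BMC with a standard IPMI power cycle. The sketch below shows the general shape of it; the BMC address and credentials are placeholders, and I’m not certain this matches the exact command path I used.
$ # Power-cycle the DPU through its BMC over IPMI (host/user/password are placeholders)
$ ipmitool -I lanplus -H <dpu-bmc-address> -U <bmc-user> -P <bmc-password> chassis power cycle
$ # Check the resulting power state
$ ipmitool -I lanplus -H <dpu-bmc-address> -U <bmc-user> -P <bmc-password> chassis power status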
The DPU on the SEAT node has been recovered and can be used for experiments.
For the firmware configuration, I need to read the documentation. I have no prior experience with these cards.
Hello Tanay,
Can you share the state of your slice and slivers from your point of view? All slivers of the slice seem to be deleted.
Best regards,
Mert

So, since you’re able to log in to this problematic VM from other sources, you can check and make sure the right SSH key is inside the VM. I just placed my SSH key in it and could log in properly. Please let us know about the status following your SSH key check, and I will take a further look.
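For the key check, something like this from inside the VM should be enough (assuming the default user’s home directory; the key line below is a placeholder):
$ # Inside the VM: list the keys currently authorized for login
$ cat ~/.ssh/authorized_keys
$ # If your public key is missing, append it (the key below is a placeholder)
$ echo "ssh-ed25519 AAAA...placeholder... user@laptop" >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys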
If I understand the problem correctly from the description (“manual connect”), you’re trying to connect to the VM(s) from a terminal on your computer/laptop and getting the error. If that’s the case, you need to set up your SSH client configuration file and SSH keys properly on your computer/laptop and then connect. This page can be helpful -> https://learn.fabric-testbed.net/knowledge-base/logging-into-fabric-vms/
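As a quick check before editing the config file, you can also connect with an explicit one-line command. This is only a sketch: the key path, bastion username, and VM address are placeholders you need to substitute with your own values, and it assumes your bastion key is available to your SSH agent.
$ # Jump through the FABRIC bastion to the VM (all bracketed values are placeholders)
$ ssh -i ~/.ssh/<slice_key> -J <bastion_username>@bastion.fabric-testbed.net <vm_username>@<vm_ip>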
If my understanding is wrong and the problem is something different, please disregard the info above.
Thank you Komal for the information.
Khawar, can you please describe the directory where your “critical data” resides on the VM?
I checked your VM and found it in a crashed state. Without digging into the logs, I’m not sure about the reason or when/how it crashed or was rebooted, but the worker node it’s running on (star-w2) is fully occupied with VMs, and we will look into possible out-of-memory issues on the hypervisor. It may be good to re-create this VM on another worker node on STAR, or to use a smaller flavor to run the VM on star-w2.
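For reference, the kind of check we will run on the hypervisor looks like the following (a minimal sketch, assuming kernel-log access on the worker; not a confirmed diagnosis):
$ # On the hypervisor: look for OOM-killer events in the kernel log
$ sudo dmesg -T | grep -i 'out of memory'
$ # Or via the systemd journal
$ sudo journalctl -k | grep -i 'oom'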
Hello Khawar,
Your VM was shut down by the hypervisor, and I have started it now. Please let us know if you have any other issues. We will be investigating the root cause of this shutdown internally.
Best regards,
Mert

The issue with the bastion host traffic is resolved. You can try creating your slices with the standard bastion host settings (with bastion.fabric-testbed.net).