Forum Replies Created
Hi Tanay,
We are in the process of procuring them. While they may not be available for the Summer release, we are targeting either an incremental release or inclusion in the Fall 2025 release.
Best,
Komal
Hi Rodrigo,
Could you please share your slice ID and let us know how you’re trying to access the VMs, whether through JupyterHub or from your local environment?
Thanks,
Komal
Hi Fatih,
Thank you for your email and detailed questions.
FABRIC does not currently support guaranteed capacity or QoS prioritization on L2P2P links. The service operates as best-effort by default, and DSCP/ToS or VLAN PCP markings are not enforced across the underlying infrastructure.
That said, we are actively working to support guaranteed QoS using Explicit Route Options (ERO) in the L2P2P service. This capability is planned for inclusion in our upcoming Release 1.9, targeted for deployment in late July/early August. It will provide a way to request L2P2P links with specified bandwidth guarantees and rate-limiting.
We will share more details and guidance on how to configure these options as part of the release.
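In the meantime, a best-effort L2P2P link can be requested with fablib roughly as follows. This is a minimal sketch: the slice, node, site, and component names are illustrative, and the bandwidth/ERO options will be documented with Release 1.9.
from fabrictestbed_extensions.fablib.fablib import FablibManager

fablib = FablibManager()

# Two nodes at different sites; names and sites are illustrative
slice = fablib.new_slice(name="l2p2p-demo")
node1 = slice.add_node(name="node1", site="SALT")
node2 = slice.add_node(name="node2", site="MASS")

# L2PTP requires dedicated NICs, e.g. ConnectX-5
iface1 = node1.add_component(model="NIC_ConnectX_5", name="nic1").get_interfaces()[0]
iface2 = node2.add_component(model="NIC_ConnectX_5", name="nic2").get_interfaces()[0]

# Request the point-to-point L2 service (best-effort today)
slice.add_l2network(name="ptp1", interfaces=[iface1, iface2], type="L2PTP")
slice.submit()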
Please feel free to reach out with any further questions in the meantime.
Best regards,
Komal Thareja
Hi Alexander,
Based on our investigation so far, the VMs from your slice that are not passing traffic were hosted on salt-w3.fabric-testbed.net. We’ve identified that none of the VMs on this host are able to pass traffic. As a result, we have placed this worker into Maintenance mode and are actively investigating the issue.
You should be able to create a new slice without encountering this problem, as salt-w3 is now in Maintenance and will not be used for any new slices on the SALT site.
Thanks,
Komal
Komal
Hi Sourya,
MASS is undergoing maintenance from June 2 to June 4, as noted in the maintenance announcement.
Since your slice is not set to expire until June 9, it will not be affected by the maintenance window. As mentioned in the announcement, your VM will be recovered and your data will persist.
Thanks,
Komal
Thank you, Alexander, for sharing this. I have shared the details with the network team and will keep you posted.
Thanks,
Komal
Hi Philips,
At the moment, we do not support guaranteed QoS; this feature will be available soon. In the meantime, you can use tools such as tc to manage bandwidth on the VMs.
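For example, an egress rate limit can be applied from fablib roughly as follows. This is a rough sketch: the slice name, node name, and the interface name ens7 are placeholders for your own values.
from fabrictestbed_extensions.fablib.fablib import FablibManager

fablib = FablibManager()
node = fablib.get_slice("MySlice").get_node("node1")  # placeholder names

# Token-bucket egress limit of 100 Mbit/s on ens7 (adjust to your dataplane interface)
node.execute("sudo tc qdisc add dev ens7 root tbf rate 100mbit burst 256kb latency 400ms")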
Thanks,
Komal
Hi Nishant,
Please find my responses inline below:
Once a user has reserved a slice with an FPGA, that resource is locked and cannot be acquired or modified by other users until the slice is released.
You’re correct: if the FPGA has been flashed with a workflow other than the ESnet workflow, it may fail.
However, we cannot guarantee the validity or state of the bitstream that was previously flashed by another user before you acquired the slice. This may leave the FPGA in an inconsistent or unusable state. In our experience, reflashing the FPGA with a known good (golden) image typically restores it to a usable state.
We are planning to share this golden image, along with a notebook, with users soon so they can perform the reflash themselves when needed. In the meantime, if you’re currently blocked, please let me know which site you’re working with, and I’ll check whether we can assist with reflashing the FPGA for you.
Thanks,
Komal
Hi Alex,
The network team reviewed the configuration and found no issues on the switch side. However, they observed that the MAC addresses for these interfaces have not been learned by the switch.
As a next step, they recommend removing the L2Bridge service and connecting both interfaces directly to FabNetV4 to verify if the network connectivity is restored.
Please perform this change using slice modify, so the same VMs and interfaces can be reused for validation. This helps us rule out the possibility that recreating the VMs might inadvertently resolve the issue.
Refer to this notebook for guidance on how to modify the slice.
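If it helps, the modification could look roughly like this in fablib. This is a sketch only: the network, node, and component names are placeholders for those in your slice.
from fabrictestbed_extensions.fablib.fablib import FablibManager

fablib = FablibManager()
slice = fablib.get_slice("MySlice")  # placeholder slice name

# Remove the existing L2Bridge service; "bridge1" is a placeholder for its name
slice.get_network(name="bridge1").delete()

# Attach the same two interfaces to a FABNetv4 (routed IPv4) service instead
iface1 = slice.get_node(name="node1").get_component(name="nic1").get_interfaces()[0]
iface2 = slice.get_node(name="node2").get_component(name="nic1").get_interfaces()[0]
slice.add_l3network(name="net_v4", interfaces=[iface1, iface2], type="IPv4")

# Submitting the modified topology reuses the existing VMs and interfaces
slice.submit()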
Thanks,
Komal
Could you please check your VM again? All PCI devices had been disconnected; I have reconnected them to your VM.
Also, could you please share the sequence of operations that led your VM to this state? It would help us determine whether anything needs to be fixed in our control software.
Thanks,
Komal
Please share your slice ID and also the output of the command:
ifconfig -a
Thanks,
Komal
Thank you, Alex, for sharing this observation! I temporarily assigned IP addresses to these interfaces on the r3 and r4 nodes and confirmed that ping does not work between them.
The network service looks correctly provisioned on our side. I am reaching out to the network team and will keep you posted.
Thanks,
Komal
Hi Ajay,
You can use the following code snippet to reboot the node:
from fabrictestbed_extensions.fablib.fablib import FablibManager

fablib = FablibManager()
slice = fablib.get_slice(slice_name)
node = slice.get_node(node_name)
node.os_reboot()
Also, please share your slice ID so we can take a look at it.
Thanks,
Komal
Thank you for your question.
What I meant is that once an FPGA is initially flashed with a provided bitstream, users can reflash it with a different bitstream of their choice—as long as the PCIe interface remains unchanged. Because of this flexibility, the actual state of the FPGA at a given site may differ from what’s shown in the shared sheet, depending on whether a user has reprogrammed it.
Best,
Komal