Komal Thareja

Forum Replies Created

Viewing 15 posts - 31 through 45 (of 540 total)
  • in reply to: Maintenance Started Tuesday, January 06 – 9:00 AM EST #9338
    Komal Thareja
    Participant

      Maintenance has been completed! The testbed is open for use!

      Best,

      Komal

      in reply to: CPU model and frequency #9322
      Komal Thareja
      Participant

        Hi YoursSunny,

        Thank you for reaching out! This information is not currently exposed through the API. However, it is documented here and may be helpful:
        https://learn.fabric-testbed.net/knowledge-base/fabric-site-hardware-configurations/

        I’ll also raise this with our team to discuss whether we can extend the API to support this in the future.

        Best regards,
        Komal

        Komal Thareja
        Participant

          Hi,

          I’ve fixed NS6 as well. Please try to update your experiment scripts to avoid overwriting the authorized_keys file in the future.
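A low-risk pattern for scripts is to append keys rather than rewrite the file. Below is a minimal sketch: the helper only builds the shell command, and the commented usage assumes an existing fablib `node` object (the key string is a placeholder).

```python
def append_key_cmd(pubkey: str) -> str:
    """Build a shell command that appends a public key to
    ~/.ssh/authorized_keys only if it is not already there,
    leaving all existing entries intact (no '>' overwrite)."""
    return (
        f"grep -qxF '{pubkey}' ~/.ssh/authorized_keys 2>/dev/null "
        f"|| echo '{pubkey}' >> ~/.ssh/authorized_keys"
    )

# Usage on a FABRIC VM (assumes an existing fablib `node` object):
# node.execute(append_key_cmd("ssh-ed25519 AAAA... user@host"))
```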

          Best,
          Komal

          in reply to: Issue with L2PTP Tunnels #9270
          Komal Thareja
          Participant

            Hi Fatih,

            Just wanted to check whether you were able to acquire resources for a longer duration. Please let us know if we can help in any way.

            Best,

            Komal

            Komal Thareja
            Participant

              Hello Danilo,

              I’ve restored the keys used by the Control Framework. You should now be able to add your keys via POA.

              Please be careful not to overwrite any existing keys, and make sure to take a backup of your data beforehand.

              @yoursunny — great suggestion. So far, we’ve avoided building our own images to reduce additional effort, but we’ll explore ways to either avoid this altogether or introduce a new user without requiring custom OS images.

              Best regards,
              Komal

              in reply to: Issue with L2PTP Tunnels #9237
              Komal Thareja
              Participant

                Hi Fatih,

                Apologies for the delayed response. Most likely, the links you are requesting have been reserved in advance, which is causing your renewal to fail. I will look at the other reservations today and work with the other users to see if we can keep your slice up for a longer duration. I will keep you posted!

                Best,

                Komal

                in reply to: Date format error when extending slice #9236
                Komal Thareja
                Participant

                  Hi Xavier,

                  Until we resolve this on the Portal, you can also extend your slice via JupyterHub; check out this example: fabric_examples/fablib_api/renew_slice/renew_slice.ipynb
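For reference, the renewal can also be scripted. This is a minimal sketch following the pattern in that notebook; the slice name "MySlice" and the 14-day window are placeholders, and the date format should be double-checked against your fablib version.

```python
from datetime import datetime, timedelta, timezone

def renewal_end_date(days: int) -> str:
    """Return an end date `days` from now, formatted the way the
    renew_slice.ipynb example passes it to slice.renew()."""
    return (datetime.now(timezone.utc) + timedelta(days=days)).strftime(
        "%Y-%m-%d %H:%M:%S %z"
    )

# Usage (assumes a configured FABRIC/fablib environment):
# from fabrictestbed_extensions.fablib.fablib import FablibManager
# fablib = FablibManager()
# slice = fablib.get_slice(name="MySlice")
# slice.renew(renewal_end_date(14))
```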

                  Best,

                  Komal

                  Komal Thareja
                  Participant

                    Hi Sourya,

                    Is this still an issue?

                    Best,

                    Komal

                    in reply to: Issue with L2PTP Tunnels #9195
                    Komal Thareja
                    Participant

                      Hi Fatih,

                      I see that the following three slivers are currently in a Closed state. Please note that a renew is not a single-shot operation.

                      When you renew a slice, it transitions into the Configuring state and reports which individual slivers were successfully extended and which were not. You can verify this in the Portal by viewing the slice topology, or—if you are renewing from JupyterHub—fablib will explicitly report which slivers failed to renew.

                      You can also check this programmatically:

                      slice = fablib.get_slice(slice_name)
                      slice.list_slivers()
                      

                      Here are the affected reservations:

                      • Reservation ID: 990127bd-aa06-4992-8847-c76654faf0e8
                        State: Closed
                        Reason: Insufficient resources — No path available with the requested QoS
                      • Reservation ID: 30dd426f-9ddc-424b-bec7-ca8631540ea4
                        State: Closed
                        Reason: Insufficient resources — No path available with the requested QoS
                      • Reservation ID: cb4372e4-fb05-454e-8662-f53e297689f8
                        State: Closed
                        Reason: Insufficient resources — No path available with the requested QoS

                      These slivers were not able to secure a viable path during renewal, which is why they are now in a closed state.

                      To re-add these network services, you can modify the slice as follows:

                      1. Fetch the current slice topology, remove the closed network services, and submit the slice.
                      2. Fetch the updated topology, add the required network services again, and submit once more.
                      3. You can refer to this example for guidance on modifying an existing slice (adding/removing resources):
                        fabric_examples/fablib_api/modify_slice/modify-add-node-network.ipynb

                      Please let me know if you’d like help with the modify workflow or with re-submitting the network services.
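The two-pass workflow above can be sketched with fablib. The method names here (get_networks, delete, add_l2network, submit) follow fablib's modify examples and should be verified against the referenced notebook; "MySlice" and the interface list are placeholders.

```python
def closed_services(services):
    """Filter out the services whose reservation state is 'Closed'.
    Works on any objects exposing get_reservation_state()."""
    return [s for s in services if s.get_reservation_state() == "Closed"]

# Pass 1: remove the Closed network services and submit.
# slice = fablib.get_slice(name="MySlice")
# for net in closed_services(slice.get_networks()):
#     net.delete()
# slice.submit()
#
# Pass 2: re-fetch the slice, re-add the services, and submit again.
# slice = fablib.get_slice(name="MySlice")
# slice.add_l2network(name="net_vlan300", interfaces=[ifc1, ifc2])
# slice.submit()
```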

                      Best,
                      Komal

                      in reply to: Issue with L2PTP Tunnels #9193
                      Komal Thareja
                      Participant

                        Hi Fatih,

                        Thanks for reaching out.

                        I looked into your slice, and it appears that the two network services associated with VLAN 300 and VLAN 600 are currently in a Closed state. Both reservations show the same ticket update:

                        “Insufficient resources: No path available with the requested QoS.”

                        Here are the details:

                        Reservation ID: 8a83db0f-03f1-44b0-843f-c6e0c2664cfe
                        Slice ID: fdf2fd5b-b1b0-46ef-b51a-4d55e0fd5c47
                        Resource Type: L2PTP
                        State: Closed
                        Reason: No path available with requested QoS

                        Reservation ID: 257fae2a-28ca-4430-bb85-77864b3d5c25
                        Slice ID: fdf2fd5b-b1b0-46ef-b51a-4d55e0fd5c47
                        Resource Type: L2PTP
                        State: Closed
                        Reason: No path available with requested QoS

                        This indicates that the system was unable to allocate a viable path for these two tunnels during your most recent renewal window, which is why they are not active now.

                        If you would like, you can try the following:

                        • Re-declare or re-submit these two network services in your slice.
                        • Lower the QoS requirement temporarily to see if a path becomes available.

                        Please feel free to reach out if you need help updating the slice or if you would like us to investigate further.
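If it helps, here is a quick way to see each network service's current state before re-submitting. This is only a sketch: get_name() and get_reservation_state() follow fablib's usual accessor naming and should be checked against your version, and "MySlice" is a placeholder.

```python
def summarize_states(services):
    """Map service name -> reservation state, to spot Closed services
    at a glance. Expects objects exposing get_name() and
    get_reservation_state()."""
    return {s.get_name(): s.get_reservation_state() for s in services}

# Usage against a live slice (assumes a configured fablib environment):
# slice = fablib.get_slice(name="MySlice")
# for name, state in summarize_states(slice.get_networks()).items():
#     print(f"{name}: {state}")
```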

                        Best regards,
                        Komal

                        Komal Thareja
                        Participant

                          Hi Danilo,

                          I found that the authorized_keys file on both NS1 and NS5 was empty, which is why SSH, whether through the admin key or the Control Framework, was failing, resulting in the POA/addKey failure. It seems this may have happened unintentionally as part of the experiment.

                          I’ve manually restored SSH access so the Control Framework should now function properly, including POA. Could you please try adding your keys to these VMs again using POA? That should re-establish your SSH access.

                          Please be careful not to remove or overwrite the authorized_keys file in the process.

                          Best,

                          Komal

                          in reply to: Bluefield DPL pull failing due to timeout #9170
                          Komal Thareja
                          Participant

                            I tried running docker pull manually on DALL and SEAT, and it worked fine on both. The artifact also ran successfully on SEAT with the following changes. The issue appears to be related to the Docker installation via docker.io.

                            I have also passed this to the artifact author so they can make the required updates.

                            I made the following changes to get the artifact working:

                            • Changed the image to docker_ubuntu_24.
                            • Updated Step 34 to remove docker.io from the installation commands.
                            stdout, stderr = node1.execute('sudo apt-get update', quiet=True)
                            stdout, stderr = node1.execute('sudo apt-get install -y build-essential python3-pip net-tools', quiet=True)
                            stdout, stderr = node2.execute('sudo apt-get update', quiet=True)
                            stdout, stderr = node2.execute('sudo apt-get install -y build-essential python3-pip net-tools', quiet=True)
                            stdout, stderr = node1.execute('sudo pip3 install meson ninja', quiet=True)
                            stdout, stderr = node2.execute('sudo apt install -y python3-scapy', quiet=True)
                            

                            Best,
                            Komal

                            in reply to: Bluefield DPL pull failing due to timeout #9169
                            Komal Thareja
                            Participant

                              Hi Nishanth,

                              I tried on UTAH, MICH, and MASS, and docker pull seems to work.

                              Could you please try nslookup nvcr.io and then try the docker pull command?

                              I will also check with Mert/Hussam to see if we have any known issues on SEAT and DALL.
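Since the VMs are reachable through fablib, the DNS check can also be run from your notebook with node.execute(), as in the other snippets in this thread. The helper below is only illustrative and assumes an existing fablib `node` object.

```python
def dns_check_cmds(registry: str = "nvcr.io"):
    """Commands to confirm the registry resolves and answers
    before retrying docker pull."""
    return [
        f"nslookup {registry}",
        f"curl -sI https://{registry}",
    ]

# Usage (assumes an existing fablib `node` object):
# for cmd in dns_check_cmds():
#     stdout, stderr = node.execute(cmd, quiet=False)
```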

                              Best,

                              Komal

                              in reply to: Bluefield DPL pull failing due to timeout #9165
                              Komal Thareja
                              Participant

                                Hi Nishanth,

                                Could you please share which site your slice is running on?

                                Best,

                                Komal

                                in reply to: Establish communication between FPGA to GPU via PCIe #9154
                                Komal Thareja
                                Participant

                                  Hi Paresh,

                                  Currently, FABRIC allows users to create VMs where GPUs or FPGAs can be attached via PCI passthrough. However, direct communication between FPGA and GPU over PCIe (such as peer-to-peer DMA or RDMA transfers) is not supported.

                                  This is because for true PCIe peer-to-peer access, both devices need to be physically located on the same host and share the same PCIe root complex or switch. At present, none of the FABRIC nodes have both a GPU and an FPGA installed on the same host.

                                  If you’d like to double-check inventory yourself, you can list host capabilities with fablib:

                                  from fabrictestbed_extensions.fablib.fablib import FablibManager as fablib_manager
                                  
                                  fields = [
                                      'name',
                                      'fpga_sn1022_capacity', 'fpga_u280_capacity',
                                      'rtx6000_capacity', 'tesla_t4_capacity', 'a30_capacity', 'a40_capacity'
                                  ]
                                  
                                  fablib = fablib_manager()
                                  output_table = fablib.list_hosts(fields=fields)
                                  

                                  You’ll see per-host capacities for each device type. It will show that hosts with FPGA capacity don’t also list GPU capacity (and vice versa), confirming that GPU+FPGA co-location isn’t available.

                                  Best regards,
                                  Komal
