Komal Thareja

Forum Replies Created
    in reply to: Issue with L2PTP Tunnels #9237
    Komal Thareja
    Participant

      Hi Fatih,

      Apologies for the delayed response. Most likely the links you are requesting have already been reserved in advance, which is causing your renewal to fail. I will look at the other reservations today and work with the other users to see if we can keep your slice up for a longer duration. I will keep you posted!

      Best,

      Komal

      in reply to: Date format error when extending slice #9236
      Komal Thareja
      Participant

        Hi Xavier,

        Until we resolve this on the portal, you can also extend your slice via JupyterHub; check out this example there: fabric_examples/fablib_api/renew_slice/renew_slice.ipynb
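
        For reference, the notebook renews the slice by passing an end date in the format fablib expects. A minimal sketch (the slice name is a placeholder):

        from datetime import datetime, timedelta, timezone
        from fabrictestbed_extensions.fablib.fablib import FablibManager as fablib_manager

        fablib = fablib_manager()
        slice = fablib.get_slice("MySlice")  # placeholder slice name
        # fablib expects the end date formatted as "YYYY-MM-DD HH:MM:SS +0000"
        end_date = (datetime.now(timezone.utc) + timedelta(days=14)).strftime("%Y-%m-%d %H:%M:%S %z")
        slice.renew(end_date)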

        Best,

        Komal

        Komal Thareja
        Participant

          Hi Sourya,

          Is this still an issue?

          Best,

          Komal

          in reply to: Issue with L2PTP Tunnels #9195
          Komal Thareja
          Participant

            Hi Fatih,

            I see that the following three slivers are currently in a Closed state. Please note that renewal is not a single-shot operation.

            When you renew a slice, it transitions into the Configuring state and reports which individual slivers were successfully extended and which were not. You can verify this in the Portal by viewing the slice topology, or—if you are renewing from JupyterHub—fablib will explicitly report which slivers failed to renew.

            You can also check this programmatically:

            from fabrictestbed_extensions.fablib.fablib import FablibManager as fablib_manager
            fablib = fablib_manager()
            slice = fablib.get_slice(slice_name)
            slice.list_slivers()
            

            Here are the affected reservations:

            • Reservation ID: 990127bd-aa06-4992-8847-c76654faf0e8
              State: Closed
              Reason: Insufficient resources — No path available with the requested QoS
            • Reservation ID: 30dd426f-9ddc-424b-bec7-ca8631540ea4
              State: Closed
              Reason: Insufficient resources — No path available with the requested QoS
            • Reservation ID: cb4372e4-fb05-454e-8662-f53e297689f8
              State: Closed
              Reason: Insufficient resources — No path available with the requested QoS

            These slivers were not able to secure a viable path during renewal, which is why they are now in a closed state.

            To re-add these network services, you can modify the slice as follows:

            1. Fetch the current slice topology, remove the closed network services, and submit the slice.
            2. Fetch the updated topology, add the required network services again, and submit once more.
            3. You can refer to this example for guidance on modifying an existing slice (adding/removing resources):
              fabric_examples/fablib_api/modify_slice/modify-add-node-network.ipynb
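
            Roughly, the two passes look like this in fablib (the network and node names below are placeholders):

            from fabrictestbed_extensions.fablib.fablib import FablibManager as fablib_manager

            fablib = fablib_manager()

            # Pass 1: drop the closed network services and submit
            slice = fablib.get_slice(slice_name)
            for net_name in ["net-vlan300", "net-vlan600"]:  # placeholder service names
                slice.get_network(name=net_name).delete()
            slice.submit()

            # Pass 2: re-add the services on a fresh copy of the topology and submit again
            slice = fablib.get_slice(slice_name)
            ifaces = [slice.get_node(name=n).get_interfaces()[0] for n in ["node-a", "node-b"]]
            slice.add_l2network(name="net-vlan300", interfaces=ifaces, type="L2PTP")
            slice.submit()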

            Please let me know if you’d like help with the modify workflow or with re-submitting the network services.

            Best,
            Komal

            in reply to: Issue with L2PTP Tunnels #9193
            Komal Thareja
            Participant

              Hi Fatih,

              Thanks for reaching out.

              I looked into your slice, and it appears that the two network services associated with VLAN 300 and VLAN 600 are currently in a Closed state. Both reservations show the same ticket update:

              “Insufficient resources: No path available with the requested QoS.”

              Here are the details:

              Reservation ID: 8a83db0f-03f1-44b0-843f-c6e0c2664cfe
              Slice ID: fdf2fd5b-b1b0-46ef-b51a-4d55e0fd5c47
              Resource Type: L2PTP
              State: Closed
              Reason: No path available with requested QoS

              Reservation ID: 257fae2a-28ca-4430-bb85-77864b3d5c25
              Slice ID: fdf2fd5b-b1b0-46ef-b51a-4d55e0fd5c47
              Resource Type: L2PTP
              State: Closed
              Reason: No path available with requested QoS

              This indicates that the system was unable to allocate a viable path for these two tunnels during your most recent renewal window, which is why they are not active now.

              If you would like, you can try the following:

              • Re-declare or re-submit these two network services in your slice.
              • Lower the QoS requirement temporarily to see if a path becomes available.
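
              For reference, you can list the slice's network services and their current states with fablib:

              from fabrictestbed_extensions.fablib.fablib import FablibManager as fablib_manager

              fablib = fablib_manager()
              slice = fablib.get_slice(slice_name)
              slice.list_networks()  # shows each network service along with its state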

              Please feel free to reach out if you need help updating the slice or if you would like us to investigate further.

              Best regards,
              Komal

              Komal Thareja
              Participant

                Hi Danilo,

                I found that the authorized_keys file on both NS1 and NS5 was empty, which is why SSH (whether via the admin key or the Control Framework) was failing, resulting in POA/addKey failures. It seems this may have happened unintentionally as part of the experiment.

                I’ve manually restored SSH access so the Control Framework should now function properly, including POA. Could you please try adding your keys to these VMs again using POA? That should re-establish your SSH access.

                Please be careful not to remove or overwrite the authorized_keys file in the process.
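
                If it helps, re-adding a key via POA from fablib looks roughly like this. This is only a sketch: the "addkey" operation name and the shape of the keys argument are assumptions, so please check the fablib POA documentation:

                from pathlib import Path

                from fabrictestbed_extensions.fablib.fablib import FablibManager as fablib_manager

                fablib = fablib_manager()
                slice = fablib.get_slice(slice_name)  # slice_name as used in your notebook
                my_public_key = (Path.home() / ".ssh" / "id_rsa.pub").read_text().strip()
                for name in ["NS1", "NS5"]:
                    node = slice.get_node(name=name)
                    # "addkey" operation and keys payload are assumptions; see the fablib POA docs
                    node.poa(operation="addkey", keys=[my_public_key])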

                Best,

                Komal

                in reply to: Bluefield DPL pull failing due to timeout #9170
                Komal Thareja
                Participant

                  I tried running docker pull manually on DALL and SEAT, and it worked fine on both. The artifact also ran successfully on SEAT after a couple of changes. The issue appears to be related to the Docker installation via docker.io.

                  I have also passed this to the artifact author so they can make the required updates.

                  I made the following changes to get the artifact working:

                  • Changed the image to docker_ubuntu_24.
                  • Updated Step 34 to remove docker.io from the installation commands.
                  # Step 34 with docker.io removed from the package installs
                  stdout, stderr = node1.execute('sudo apt-get update', quiet=True)
                  stdout, stderr = node1.execute('sudo apt-get install -y build-essential python3-pip net-tools', quiet=True)
                  stdout, stderr = node2.execute('sudo apt-get update', quiet=True)
                  stdout, stderr = node2.execute('sudo apt-get install -y build-essential python3-pip net-tools', quiet=True)
                  stdout, stderr = node1.execute('sudo pip3 install meson ninja', quiet=True)
                  stdout, stderr = node2.execute('sudo apt install -y python3-scapy', quiet=True)
                  

                  Best,
                  Komal

                  in reply to: Bluefield DPL pull failing due to timeout #9169
                  Komal Thareja
                  Participant

                    Hi Nishanth,

                    I tried on UTAH, MICH, and MASS, and docker pull seems to work.

                    Could you please try nslookup nvcr.io and then try the docker pull command?

                    I will also check with Mert/Hussam to see if we have any known issues on SEAT and DALL.

                    Best,

                    Komal

                    in reply to: Bluefield DPL pull failing due to timeout #9165
                    Komal Thareja
                    Participant

                      Hi Nishanth,

                      Could you please share which Site is your slice running at?

                      Best,

                      Komal

                      in reply to: Establish communication between FPGA to GPU via PCIe #9154
                      Komal Thareja
                      Participant

                        Hi Paresh,

                        Currently, FABRIC allows users to create VMs where GPUs or FPGAs can be attached via PCI passthrough. However, direct communication between FPGA and GPU over PCIe (such as peer-to-peer DMA or RDMA transfers) is not supported.

                        This is because for true PCIe peer-to-peer access, both devices need to be physically located on the same host and share the same PCIe root complex or switch. At present, none of the FABRIC nodes have both a GPU and an FPGA installed on the same host.

                        If you’d like to double-check inventory yourself, you can list host capabilities with fablib:

                        from fabrictestbed_extensions.fablib.fablib import FablibManager as fablib_manager

                        # Host name plus per-host FPGA and GPU capacities, side by side
                        fields = [
                            'name',
                            'fpga_sn1022_capacity', 'fpga_u280_capacity',
                            'rtx6000_capacity', 'tesla_t4_capacity', 'a30_capacity', 'a40_capacity'
                        ]

                        fablib = fablib_manager()
                        output_table = fablib.list_hosts(fields=fields)
                        

                        You’ll see per-host capacities for each device type. It will show that hosts with FPGA capacity don’t also list GPU capacity (and vice versa), confirming that GPU+FPGA co-location isn’t available.

                        Best regards,
                        Komal

                        Komal Thareja
                        Participant

                          Hi Fatih,

                          You should be able to create multiple tunnels on the same NIC by using VLAN-tagged sub-interfaces. Each sub-interface can be assigned to a different L2PTP tunnel, allowing multiple distinct connections over the same physical port.

                          Please check out the example notebook fabric_examples/fablib_api/sub_interfaces/sub_interfaces.ipynb for details on how to configure sub-interfaces.
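
                          As a rough sketch of the request side (names and sites below are placeholders, and the add_sub_interface parameters should be checked against that notebook):

                          from fabrictestbed_extensions.fablib.fablib import FablibManager as fablib_manager

                          fablib = fablib_manager()
                          slice = fablib.new_slice(name="two-tunnels")  # placeholder slice name
                          node = slice.add_node(name="node-a", site="DALL")
                          nic = node.add_component(model="NIC_ConnectX_6", name="nic1")

                          # Carve two VLAN-tagged sub-interfaces out of the same physical port;
                          # each can then join its own L2PTP network service (remote ends omitted).
                          port = nic.get_interfaces()[0]
                          sub300 = port.add_sub_interface(name="sub300", vlan="300")
                          sub600 = port.add_sub_interface(name="sub600", vlan="600")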

                          Best regards,
                          Komal

                          Komal Thareja
                          Participant

                            Hi Geoff,

                            This appears to be a bug in fablib. As a workaround, could you please modify the call as follows?

                            client_interface = client_node.get_interface(network_name="client-net", refresh=True)
                            

                            This change should prevent the error from occurring. I’ll work on fixing this issue in fablib.

                            Best,
                            Komal

                            in reply to: Availability of DPU-powered SmartNICs #9145
                            Komal Thareja
                            Participant

                              Hi Tanay,

                              BlueField-3 nodes are now available on FABRIC, and we currently offer two variants:

                              • ConnectX-7-100 – 100 G
                              • ConnectX-7-400 – 400 G

                              To provision and use them, your project lead will need to request access through the Portal under Experiment → Project → Request Permissions.
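
                              Once access is granted, attaching one should look like any other component add. The model string below is an assumption, so please confirm the exact name in the Portal's resource listings:

                              from fabrictestbed_extensions.fablib.fablib import FablibManager as fablib_manager

                              fablib = fablib_manager()
                              slice = fablib.new_slice(name="bf3-test")  # placeholder slice name
                              node = slice.add_node(name="node1", site="DALL")  # placeholder site
                              # Model string is an assumption for the 100 G variant; confirm before use
                              node.add_component(model="NIC_ConnectX_7_100", name="bf3_nic")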

                              Best,
                              Komal

                              Komal Thareja
                              Participant

                                Hi Geoff,

                                Just to confirm my understanding — your slice is in StableOK state, and the nodes display IP addresses as shown in your screenshot, but node.execute is failing with a “no management IP” error. Is that correct?

                                Could you please share your Slice ID here?

                                Thanks,
                                Komal

                                in reply to: pin_cpu & poa(operation=”cpupin”) #9131
                                Komal Thareja
                                Participant

                                  Thank you, @yoursunny, for sharing these observations and the detailed steps to reproduce them. This appears to be a bug. I’ll work on addressing it and will update you once the patch is deployed.

                                  Best,
                                  Komal