Komal Thareja

Forum Replies Created

Viewing 15 posts - 151 through 165 (of 455 total)
  • Komal Thareja
    Participant

      Hi Luca,

      Could you share the sites where you encountered this issue? I tried CLEM, and it worked fine.

      As mentioned here, we collaborate with the experimenter to flash the FPGA with the initial bitstream. We’d like to rule out whether a different bitstream (other than ESnet) was used for flashing the FPGA at the sites where you experienced the problem. Also, if you have the slice up where you see the error, please share your slice ID with us!

      Thanks,
      Komal

      in reply to: Unable to access slice #7672
      Komal Thareja
      Participant

        Hi Kriti,

        It appears that there may be a configuration issue with your experiment. I recommend checking your settings. Based on your slice, there’s a Layer 2 network established between ipnode-1 (192.168.14.1) and n6 (192.168.14.254).

        I can confirm that pinging from ipnode-1 to n6 is successful, which indicates that the underlying Layer 2 network is functioning properly. While you do have a route on ipnode-1 to direct traffic through n6, it seems that the subnet you’re trying to access is not reachable from n6. This issue seems to be specific to your experiment’s configuration, so you may need to troubleshoot on your end.


        [root@ipnode-1 ~]# ip route list
        10.30.6.0/23 dev eth0 proto kernel scope link src 10.30.6.233 metric 100
        169.254.169.254 via 10.30.6.11 dev eth0 proto dhcp src 10.30.6.233 metric 100
        192.168.0.0/16 via 192.168.14.254 dev eth1
        192.168.14.0/24 dev eth1 proto kernel scope link src 192.168.14.1
        [root@ipnode-1 ~]# traceroute 192.168.28.1
        traceroute to 192.168.28.1 (192.168.28.1), 30 hops max, 60 byte packets
        1 192.168.14.254 (192.168.14.254) 0.422 ms !N 0.386 ms !N *
        [root@ipnode-1 ~]# traceroute 192.168.14.254
        traceroute to 192.168.14.254 (192.168.14.254), 30 hops max, 60 byte packets
        1 192.168.14.254 (192.168.14.254) 0.454 ms 0.434 ms 0.423 ms
        [root@ipnode-1 ~]#
        [root@ipnode-1 ~]# ping -c 5 192.168.14.254
        PING 192.168.14.254 (192.168.14.254) 56(84) bytes of data.
        64 bytes from 192.168.14.254: icmp_seq=1 ttl=64 time=0.069 ms
        64 bytes from 192.168.14.254: icmp_seq=2 ttl=64 time=0.062 ms
        64 bytes from 192.168.14.254: icmp_seq=3 ttl=64 time=0.102 ms
        64 bytes from 192.168.14.254: icmp_seq=4 ttl=64 time=0.066 ms
        64 bytes from 192.168.14.254: icmp_seq=5 ttl=64 time=0.076 ms

        n6:

        [root@n6 ~]# ip route list
        10.30.6.0/23 dev eth0 proto kernel scope link src 10.30.6.69 metric 100
        169.254.169.254 via 10.30.6.11 dev eth0 proto dhcp src 10.30.6.69 metric 100
        192.168.1.0/24 proto ospf metric 20
        nexthop via 192.168.8.2 dev eth3 weight 1
        nexthop via 192.168.12.1 dev eth1 weight 1
        192.168.2.0/24 via 192.168.8.2 dev eth3 proto ospf metric 20
        192.168.3.0/24 proto ospf metric 20
        nexthop via 192.168.8.2 dev eth3 weight 1
        nexthop via 192.168.12.1 dev eth1 weight 1
        192.168.4.0/24 via 192.168.12.1 dev eth1 proto ospf metric 20
        192.168.5.0/24 via 192.168.8.2 dev eth3 proto ospf metric 20
        192.168.6.0/24 proto ospf metric 20
        nexthop via 192.168.8.2 dev eth3 weight 1
        nexthop via 192.168.12.1 dev eth1 weight 1
        192.168.7.0/24 via 192.168.12.1 dev eth1 proto ospf metric 20
        192.168.8.0/24 dev eth3 proto kernel scope link src 192.168.8.1
        192.168.9.0/24 via 192.168.8.2 dev eth3 proto ospf metric 20
        192.168.10.0/24 via 192.168.8.2 dev eth3 proto ospf metric 20
        192.168.11.0/24 via 192.168.12.1 dev eth1 proto ospf metric 20
        192.168.12.0/24 dev eth1 proto kernel scope link src 192.168.12.2
        192.168.13.0/24 via 192.168.12.1 dev eth1 proto ospf metric 20
        192.168.14.0/24 dev eth2 proto kernel scope link src 192.168.14.254
        192.168.15.0/24 via 192.168.8.2 dev eth3 proto ospf metric 20
        192.168.16.0/24 via 192.168.12.1 dev eth1 proto ospf metric 20

        [root@n6 ~]# traceroute 192.168.28.1
        traceroute to 192.168.28.1 (192.168.28.1), 30 hops max, 60 byte packets
        connect: Network is unreachable

        Thanks,

        Komal

        in reply to: Unable to access slice #7668
        Komal Thareja
        Participant

          Hi Kriti,

          I created slices on MASS, one with a Layer 2 network and one with a Layer 3 network. Both slices were able to pass traffic.

          This may be something specific to your slice or configuration. Please share your slice ID; we can take a look, but we would also recommend checking the configuration on your side.

          Thank you for letting us know about the portal. We will work on addressing that as well.

          Thanks,

          Komal

          in reply to: Unable to extend slice #7664
          Komal Thareja
          Participant

            Hi Kriti,

            Could you please open the calendar by clicking the small square next to “Lease End At”, choose the timestamp, and then click Extend?

            Alternatively, you can extend your slice using either of the following methods:

            • The notebook accessible from Start_here -> Extend Slice Reservation
            • The slice-commander command-line utility available in the JupyterHub container


            fabric@fall:system-29%$ slice-commander
            Usage: renew <days> [SliceName1, SliceName2, ...]
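
            If you prefer doing the renewal from a notebook, here is a minimal sketch. The timestamp helper is plain Python; the fablib calls at the end are shown commented out and assume a configured fablib environment (the slice name "MySlice" is a placeholder):

```python
from datetime import datetime, timedelta, timezone

def lease_end_in(days: int) -> str:
    """Build a lease-end timestamp <days> from now, in the
    "%Y-%m-%d %H:%M:%S %z" string format that fablib's renew accepts."""
    end = datetime.now(timezone.utc) + timedelta(days=days)
    return end.strftime("%Y-%m-%d %H:%M:%S %z")

# Hypothetical usage (requires a configured fablib environment):
# from fabrictestbed_extensions.fablib.fablib import FablibManager
# fablib = FablibManager()
# slice = fablib.get_slice("MySlice")   # "MySlice" is a placeholder
# slice.renew(lease_end_in(14))         # extend the lease ~14 days
```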

            Please let us know if you run into issues or errors.

            Thanks,

            Komal

            Komal Thareja
            Participant

              The maintenance is complete and the new model has been deployed.

              in reply to: Student facing issue in running hello FABRIC example #7626
              Komal Thareja
              Participant

                Hi Prateek,

                It seems that the teaching materials are somewhat outdated. Users no longer need to manually complete the steps under “Configure your Jupyter environment.” Instead, they can simply run the latest jupyter-examples-rel1.7.*/configure_and_validate.ipynb notebook to set up their Jupyter Hub environment.

                If you have any questions or encounter any issues, don’t hesitate to reach out.

                Thanks,
                Komal

                in reply to: Integration of USRPs with FABRIC #7621
                Komal Thareja
                Participant

                  I also wanted to mention another option you can utilize: Tailscale, which doesn’t require the Ext services. You might find the Jupyter example helpful—check it out here.

                  Thanks,
                  Komal

                  in reply to: Integration of USRPs with FABRIC #7618
                  Komal Thareja
                  Participant

                    Hi Sourya,

                    While your request is being reviewed, please consider trying the workaround suggested by @yoursunny here: https://learn.fabric-testbed.net/forums/topic/reaching-the-internet-from-a-fabnet-node/#post-7605

                    Thanks,
                    Komal

                    in reply to: Integration of USRPs with FABRIC #7616
                    Komal Thareja
                    Participant

                      Hi Sourya,

                      We offer FabNetv*Ext services that allow internet access; however, these services require special permission due to the potential security risks they pose. The Project Lead for your project can request these permissions as specified here!

                      Please consider checking the following examples on how to use these services:

                      https://github.com/fabric-testbed/jupyter-examples/tree/main/fabric_examples/fablib_api/create_l3network_fabnet_ipv4ext_manual

                      https://github.com/fabric-testbed/jupyter-examples/tree/main/fabric_examples/fablib_api/create_l3network_fabnet_ipv6ext_manual

                      Feel free to reach out in case you run into issues or have queries.

                      Thanks,
                      Komal

                      Komal Thareja
                      Participant

                        @Tejas – The notebook is not attached. Could you please share it? You may have to change the file extension to .txt.

                        in reply to: Unable to assign ip address #7583
                        Komal Thareja
                        Participant

                          Thank you Kim! Could you also please open a terminal on JH and try SSH to the VM using the command displayed above?

                          ssh -i /home/fabric/work/fabric_config/slice_key -F /home/fabric/work/fabric_config/ssh_config rocky@137.222.230.26

                          Thanks,
                          Komal

                          in reply to: Unable to assign ip address #7580
                          Komal Thareja
                          Participant

                            Hi Kim,

                            Both of your VMs are accessible via SSH; I was able to verify this with the Nova SSH key. I also verified that your SSH keys were pushed into the VMs.

                            Could you please run the following snippet in a notebook cell before the configure step and share the output?
                            If it shows the management IPs for the VMs, please try running the configure cells again and let us know if it works.


                            # Assumes the fablib environment is already configured
                            from fabrictestbed_extensions.fablib.fablib import FablibManager

                            fablib = FablibManager()
                            slice = fablib.get_slice(slice_name)  # slice_name: your slice's name

                            slice.show();

                            slice.list_nodes();

                            Thanks,
                            Komal

                            Komal Thareja
                            Participant

                              Hi Tejas,

                              It looks like the SSH connection to the bastion host is failing. Could you please re-run the notebook jupyter-examples-rel1.7.*/configure_and_validate.ipynb, and then retry your notebook?

                              Please let us know if the issue persists.

                              Thanks,

                              Komal

                              in reply to: No space left on the device #7531
                              Komal Thareja
                              Participant

                                Hi Sepideh,

                                Your container’s disk usage is at 100%. The file output_file appears to be taking up the majority of the space.

                                The /home/fabric/work directory (1GB) in the JupyterHub environment serves as persistent storage for code, notebooks, scripts, and other materials related to configuring and running experiments, including the addition of extra Python modules. However, it is not designed to handle large datasets or output files.

                                Please consider removing unneeded files or moving output_file elsewhere to avoid this error.

                                Additionally, if you need more disk space, I recommend setting up your own FABRIC environment on your laptop or machine to run your experiments. This approach will allow you to capture more data and reduce reliance on JupyterHub. Consider configuring a local Python environment for the FABRIC API as described here, and run the notebooks locally.


                                fabric@spring:work-100%$ du -sh *
                                20K 5_clients_1_server.ipynb
                                60K fabric_config
                                228K hipft.ipynb
                                95M jupyter-examples-rel1.5.5
                                96M jupyter-examples-rel1.6.1
                                28K lost+found
                                686M output_file
                                82M rel1.7.0.tar.gz4q_e8dq5.tmp
                                0 rel1.7.0.tar.gzceav1uzw.tmp
                                0 rel1.7.0.tar.gzds9a6279.tmp
                                0 rel1.7.0.tar.gzgmqadnvv.tmp
                                0 rel1.7.0.tar.gziuc6xzxa.tmp
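
                                As a sketch, the local setup mentioned above could look like the following. This assumes Python 3 is installed; fablib is published on PyPI as fabrictestbed-extensions (worth double-checking the package name and version against the FABRIC docs):

```shell
# Sketch: local Python environment for the FABRIC API
python3 -m venv fabric-venv
. fabric-venv/bin/activate
pip install --upgrade pip
pip install fabrictestbed-extensions jupyterlab
# Then copy your fabric_config (tokens, SSH keys, ssh_config) from
# JupyterHub to your machine and start the notebooks locally:
# jupyter lab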

                                Thanks,

                                Komal

                                in reply to: VM core pinning #7513
                                Komal Thareja
                                Participant

                                  Hi Ilya,

                                  Yes, it’s possible to pin vCPUs to physical cores. The following APIs on the node class may be of interest:

                                  – node.get_cpu_info() provides information about the VM’s CPU in relation to the host.
                                  – You can pin specific vCPUs to physical cores using node.poa(operation="cpupin", vcpu_cpu_map=vcpu_cpu_map).

                                  In this case, vcpu_cpu_map is a dictionary mapping each vCPU to the desired physical core.

                                  For more details, please refer to the documentation here. Let us know if you have any questions or encounter any issues!
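
                                  Putting the two calls together, a sketch might look like this. Note the exact shape of vcpu_cpu_map below is an assumption for illustration; please check the linked documentation for the authoritative format, and the node calls themselves are shown commented out since they require a live slice:

```python
def make_vcpu_cpu_map(vcpus, cpus):
    """Map each vCPU index to a chosen physical core, e.g. {0: 10, 1: 11}.
    (Illustrative helper; the real map format may differ -- see the docs.)"""
    return {v: c for v, c in zip(vcpus, cpus)}

# Pin vCPUs 0 and 1 to host cores 10 and 11 (hypothetical choice):
vcpu_cpu_map = make_vcpu_cpu_map(range(2), [10, 11])

# Hypothetical usage on a fablib node object:
# node.get_cpu_info()  # inspect the vCPU/host-core layout first
# node.poa(operation="cpupin", vcpu_cpu_map=vcpu_cpu_map)
```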

                                  Thanks,
                                  Komal
