Komal Thareja

Forum Replies Created
  • Komal Thareja
    Participant

      Hi Elie,

      Could you please share the output of the following commands from your container?

      pip list | grep fabric

      cat ~/work/fabric_config/requirements.txt

      If you have any entries for fabrictestbed-extensions in ~/work/fabric_config/requirements.txt, please remove them and restart your container via File -> Hub Control Panel -> Stop My Server, followed by Start My Server.

      Thanks,

      Komal

      in reply to: Long running slice stability issue.  #4802
      Komal Thareja
      Participant

        Hi Fengping,

        I have rebooted both Node1 and Node2. They should be accessible now. Please set up the IPs as per the MAC addresses shared above, and do let me know if anything else is needed from my side.

        Thanks,

        Komal

        in reply to: Long running slice stability issue.  #4786
        Komal Thareja
        Participant

          You can confirm the interfaces for Node1 and Node2 via their MAC addresses:

          Node1

          02:7F:AE:44:CB:C9 => NIC3

          06:E3:D6:00:5B:06 => NIC2

          02:BC:A6:3F:C7:CB => NIC1

          Node2

          02:15:60:C2:7A:AD => NIC3

          02:1D:B9:31:E7:23 => NIC2

          02:B5:53:89:2C:E6 => NIC1
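
          If it helps, here is a rough sketch (an illustration, not from the original post) of printing each node's interface/MAC pairs with fablib so they can be matched against the list above; the slice name is a placeholder:

          from fabrictestbed_extensions.fablib.fablib import FablibManager as fablib_manager

          fablib = fablib_manager()
          slice = fablib.get_slice(name="MySlice")  # replace with your slice name

          for node in slice.get_nodes():
              # 'ip -br link' lists every interface together with its MAC address
              stdout, stderr = node.execute("ip -br link")
              print(node.get_name())
              print(stdout)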

          Thanks,

          Komal

          in reply to: Long running slice stability issue.  #4785
          Komal Thareja
          Participant

            Hi Fengping,

            I think ens7 -> net1, ens8 -> net3, and ens9 -> net2. Please let me know once you get public access back; I can help figure out the interfaces.

            Thanks,

            Komal

            in reply to: Long running slice stability issue.  #4780
            Komal Thareja
            Participant

              Hello Fengping,

              I have re-attached the PCI devices for the VMs node1 and node2. You will need to reassign the IP addresses on them for your links to work. Please let us know if the links are working as expected after configuring the IP addresses.
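
              If it helps, a rough sketch of putting the addresses back over SSH via fablib (an illustration only; the slice name, interface name, and addresses are placeholders to replace with your own):

              from fabrictestbed_extensions.fablib.fablib import FablibManager as fablib_manager

              fablib = fablib_manager()
              slice = fablib.get_slice(name="MySlice")  # replace with your slice name

              for node_name, addr in [("node1", "192.168.1.1/24"), ("node2", "192.168.1.2/24")]:
                  node = slice.get_node(name=node_name)
                  # bring the dataplane interface up and re-add its address;
                  # replace 'ens7' with the interface that matches your network
                  node.execute("sudo ip link set dev ens7 up")
                  node.execute(f"sudo ip addr add {addr} dev ens7")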

              Thanks,

              Komal

              Komal Thareja
              Participant

                Maintenance is complete. The testbed is open for use.

                in reply to: Jupyter Hub Outage – Cluster Issues #4635
                Komal Thareja
                Participant

                  The GKE cluster issues are resolved, and Jupyter Hub is back online. Apologies for the inconvenience!

                  Thanks,

                  Komal

                  in reply to: A public IP for the Fabric node #4603
                  Komal Thareja
                  Participant

                    @yoursunny – Thank you for sharing the example scripts. Appreciate it!

                    @Xusheng – You can use FabNetv4Ext or FabNetv6Ext services as explained here.

                    Also, we have two example notebooks, one each for FabNetv4Ext and FabNetv6Ext, available via start_here.ipynb:

                    • FABNet IPv4 Ext (Layer 3): Connect to FABRIC’s IPv4 internet with external access (manual)
                    • FABNet IPv6 Ext (Layer 3): Connect to FABRIC’s IPv6 internet with external access (manual)
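
                    For reference, a minimal fablib sketch of requesting a FabNetv4Ext network (an illustration only, not the notebook contents; the slice, node, and site names are placeholders):

                    from fabrictestbed_extensions.fablib.fablib import FablibManager as fablib_manager

                    fablib = fablib_manager()

                    slice = fablib.new_slice(name="fabnet-ext-demo")
                    node = slice.add_node(name="node1", site="CERN")

                    # FabNetv4Ext is the L3 network type with external (public) IPv4 access
                    net = slice.add_l3network(name="net_ext", type="IPv4Ext")

                    iface = node.add_component(model="NIC_Basic", name="nic1").get_interfaces()[0]
                    iface.set_mode("auto")
                    net.add_interface(iface)

                    slice.submit()
                    # the FABNet Ext notebooks then walk through making selected addresses publicly routable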

                    Thanks,
                    Komal

                    in reply to: IPv6 on FABRIC: A hop with a low MTU #4580
                    Komal Thareja
                    Participant

                      @yoursunny Thank you for sharing your script. We have updated the MTU settings across sites and were able to use your script for testing as well. However, with the latest fablib changes for performance improvements, the script needed to be adjusted slightly. Sharing the updated script here.
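
                      For a quick manual check, here is a rough sketch (not the script referenced above; the slice, node, and destination are placeholders) of probing the path MTU from a node:

                      from fabrictestbed_extensions.fablib.fablib import FablibManager as fablib_manager

                      fablib = fablib_manager()
                      node = fablib.get_slice(name="MySlice").get_node(name="node1")  # placeholders

                      # a 9000-byte IPv6 packet = 40-byte IPv6 header + 8-byte ICMPv6 header + 8952-byte payload;
                      # '-M do' forbids fragmentation, so a hop with a lower MTU makes the probe fail
                      stdout, stderr = node.execute("ping -6 -M do -c 3 -s 8952 2001:db8::1")
                      print(stdout)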

                      Thanks,
                      Komal

                      in reply to: manual cleanup needed? #4579
                      Komal Thareja
                      Participant

                        Hi Fengping,

                        Thank you so much for reporting this issue. There was a bug that led to the same subnet being allocated to multiple slices, so when a second slice was allocated the same subnet, traffic stopped working for your slice.

                        I have applied the fix for the bug on production. Could you please delete your slice and recreate it? Apologies for the inconvenience.

                        Appreciate your help with making the system better.

                        Thanks,
                        Komal

                        in reply to: manual cleanup needed? #4575
                        Komal Thareja
                        Participant

                          Please try this to create 12 VMs; it should let you use almost the entire worker w.r.t. cores. I will keep you posted about the flavor details.

                          
                          
                          # Assumed setup (not in the original snippet): fablib manager and placeholder names
                          from ipaddress import IPv4Network
                          from fabrictestbed_extensions.fablib.fablib import FablibManager as fablib_manager

                          fablib = fablib_manager()
                          slice_name = "MySlice"    # adjust to your slice name
                          network_name = "net1"     # adjust to your network name
                          site = "CERN"             # adjust to your target site

                          # Create Slice
                          slice = fablib.new_slice(name=slice_name)

                          # Network
                          net1 = slice.add_l2network(name=network_name, subnet=IPv4Network("192.168.1.0/24"))

                          node_name = "Node"
                          number_of_nodes = 12
                          for x in range(number_of_nodes):
                              # the first node gets a 4 TB disk, the rest 500 GB
                              disk = 4000 if x == 0 else 500
                              node = slice.add_node(name=f'{node_name}{x}', site=site, cores=62, ram=128, disk=disk)
                              iface = node.add_component(model='NIC_Basic', name='nic1').get_interfaces()[0]
                              iface.set_mode('auto')
                              net1.add_interface(iface)

                          # Submit Slice Request
                          slice.submit()
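
                          Once submit returns, a quick way to confirm the nodes came up (a small sketch; assumes the slice created above):

                          for node in slice.get_nodes():
                              # the management IP is what you SSH to; dataplane IPs come from the 'auto' mode above
                              print(node.get_name(), node.get_management_ip())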
                          

                          Thanks,
                          Komal

                          in reply to: manual cleanup needed? #4572
                          Komal Thareja
                          Participant

                            With the current flavor definition, I would recommend requesting VMs with the configuration:

                            cores='62', ram='384', disk='2000'

                            Anything bigger than this maps to fabric.c64.m384.d4000, and only one of the workers, i.e. cern-w1, can accommodate 4TB disks; the rest of the workers can accommodate at most 2TB disks. I will discuss this internally to work on providing a better flavor to accommodate your slice.

                            Thanks,

                            Komal

                            P.S: I was able to successfully create a slice with the above configuration.

                            in reply to: manual cleanup needed? #4570
                            Komal Thareja
                            Participant

                              I looked at the instance types; please try setting cores='62', ram='384', disk='100'.

                              FYI: https://github.com/fabric-testbed/InformationModel/blob/master/fim/slivers/data/instance_sizes.json might be useful for VM sizing.
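
                              As an aside (an illustration, not part of the original reply), the flavor names themselves encode the sizes, so a tiny helper can decode what a flavor provides, assuming the fabric.c<cores>.m<ram GB>.d<disk GB> convention seen in this thread:

                              import re

                              def decode_flavor(name: str) -> dict:
                                  """Decode a flavor name like 'fabric.c64.m384.d4000' into its sizes."""
                                  cores, ram, disk = map(int, re.match(r"fabric\.c(\d+)\.m(\d+)\.d(\d+)", name).groups())
                                  return {"cores": cores, "ram_gb": ram, "disk_gb": disk}

                              print(decode_flavor("fabric.c60.m384.d2000"))  # {'cores': 60, 'ram_gb': 384, 'disk_gb': 2000}
                              print(decode_flavor("fabric.c64.m384.d4000"))  # {'cores': 64, 'ram_gb': 384, 'disk_gb': 4000}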

                              Thanks,

                              Komal

                              in reply to: manual cleanup needed? #4569
                              Komal Thareja
                              Participant

                                I looked at your slices and found that you have 2 Dead slices and 6 Closing slices. All of the slices request VMs on a single site, CERN, and each request asks for either 120 or 60 cores. Regardless of the disk size, the requested cores/RAM map to the flavors listed below. Considering that there are other slices on the CERN site as well, your slice cannot be accommodated by the single CERN site. Please consider either spanning your slice across multiple sites or reducing the size of the VMs, not only w.r.t. disk but also cores/RAM.

                                We currently only have a limited number of flavors, and your cores/RAM request is being mapped to a flavor with a huge disk.

                                cores: 120, RAM: 480 G ==> fabric.c64.m384.d4000

                                cores: 60, RAM: 360 G ==> fabric.c60.m384.d2000

                                NOTE: No manual cleanup is needed; the software is behaving as designed.

                                Thanks,

                                Komal

                                in reply to: manual cleanup needed? #4566
                                Komal Thareja
                                Participant

                                  Hi Fengping,

                                  Your second slice failed with the error "Insufficient resources", as shown below. Please note that slice deletion is not synchronous; it may take some time for all the resources associated with a slice to be deleted. Please consider adding a slight delay between subsequent slice creation attempts if both slices request resources from the same site, since those resources might not have been released yet by the first slice.

                                  Resource Type: VM Notices: Reservation 113cd41c-26df-461e-8dc9-f93ed92fcebf (Slice ServiceXSlice(66a78e70-ecf2-41e7-be12-740561904991) Graph Id:cc871ebc-e290-4b44-ab36-046d3cd2da00 Owner:fengping@uchicago.edu) is in state (Closed,None_) (Last ticket update: Insufficient resources : ['disk'])

                                   

                                  For the second slice, you can view the failure reasons from the portal by selecting the check box ‘Include Dead/Closed Slices’.

                                  Please try creating the slice again and let us know if you still see errors.
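
                                  If it helps, a rough sketch of adding a delay between attempts (an illustration with placeholder slice names, not from the original post):

                                  import time
                                  from fabrictestbed_extensions.fablib.fablib import FablibManager as fablib_manager

                                  fablib = fablib_manager()

                                  # delete the first slice, give the site time to release its resources,
                                  # then submit the next request
                                  fablib.get_slice(name="MySlice").delete()
                                  time.sleep(300)  # a few minutes; adjust as needed

                                  new_slice = fablib.new_slice(name="MySlice-2")
                                  # ... add nodes and networks as before ...
                                  new_slice.submit()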

                                   

                                  Thanks,

                                  Komal
