Mert Cevik

Forum Replies Created
in reply to: Node creation fails on PSC (site) #6227

Mert Cevik
Moderator

Hi Mami,

There is network maintenance at PSC:
https://learn.fabric-testbed.net/forums/topic/maintenance-on-fabric-psc-between-december-19-20-short-notice/

FABRIC-PSC should not be used until we announce the end of maintenance.

in reply to: Multi-day FABRIC maintenance (January 1-5, 2024) #6224

Mert Cevik
Moderator

You can create slices until 12/29 23:59 UTC; after that date, new slices cannot be created. All existing slices will stay active until January 1st, 5pm EST, after which we will start deleting them.

It is safest to assume that all slivers (VMs, dataplane services) of any slice touching one of these 4 sites will be deleted for this maintenance. For example, if you have a slice connecting STAR to sites that will continue operating (e.g. PSC and GPN), you should assume that the entire slice, with all VMs running on STAR, PSC, and GPN, will be deleted.

This maintenance involves several complications, so we cannot guarantee the health of slices touching these 4 sites. Apologies for this inconvenience.

in reply to: Multi-day FABRIC maintenance (January 1-5, 2024) #6197

Mert Cevik
Moderator

Hello Yoursunny,

Yes, we will have maintenance with switch upgrades/fiber work at both the WASH and STAR locations; therefore, FABNetv4Ext will be affected during this maintenance.

Mert Cevik
Moderator

Dear Experimenters,

We have completed this maintenance; the worker nodes at the sites listed in the previous message are available for experiments.
None of the VMs or reservations running on the workers with 100G ConnectX-6 SmartNICs were affected during the work.

Mert Cevik
Moderator

The power outage is resolved, and all active slivers are restored.
Please let us know if you have any issues with your existing slivers on FABRIC-MASS.

Mert Cevik
Moderator

The FASTnet worker at KANS (kans-w2.fabric-testbed.net) encountered a hardware (driver) problem, and all VMs on it were rebooted. All network interfaces have been re-attached to the VMs, but IP address configurations may need to be reapplied on them.

For this specific hardware problem, we will work on remedies to prevent such incidents.
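
If a dataplane interface came back without its address, it can be re-applied inside the VM with the ip tool. A minimal sketch; the interface name and address below are placeholders, not your slice's actual values:

ubuntu@node:~$ sudo ip link set dev ens7 up
ubuntu@node:~$ sudo ip addr add 192.168.10.2/24 dev ens7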

in reply to: Maintenance on the testbed #6072

Mert Cevik
Moderator

Dear experimenters,

The issue is fixed and the maintenance has been released. The testbed is back to normal operations for all services.

in reply to: FABRIC MICH site is in maintenance indefinitely #5924

Mert Cevik
Moderator

Dear experimenters,

The issues are resolved. FABRIC-MICH is available for experiments.

in reply to: FABRIC-FIU potential outage due to cooling problem #5505

Mert Cevik
Moderator

The cooling system at the datacenter is fixed, with no impact on the FABRIC-FIU site.

FABRIC-FIU resources are available for slices.

in reply to: STAR site power loss, connectivity losses #5429

Mert Cevik
Moderator

STAR is online.

in reply to: revive the ServiceXSlice? #5217

Mert Cevik
Moderator

Interfaces on node1 have been re-attached to their original devices.

ubuntu@node1:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UP group default qlen 1000
    link/ether fa:16:3e:1c:38:5f brd ff:ff:ff:ff:ff:ff
    inet 10.30.6.43/23 brd 10.30.7.255 scope global dynamic ens3
       valid_lft 54283sec preferred_lft 54283sec
    inet6 2001:400:a100:3090:f816:3eff:fe1c:385f/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 86315sec preferred_lft 14315sec
    inet6 fe80::f816:3eff:fe1c:385f/64 scope link
       valid_lft forever preferred_lft forever
3: ens7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 02:7f:ae:44:cb:c9 brd ff:ff:ff:ff:ff:ff
4: ens8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 02:bc:a6:3f:c7:cb brd ff:ff:ff:ff:ff:ff
    inet6 fe80::bc:a6ff:fe3f:c7cb/64 scope link
       valid_lft forever preferred_lft forever
5: ens9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 06:e3:d6:00:5b:06 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::4e3:d6ff:fe00:5b06/64 scope link
       valid_lft forever preferred_lft forever
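
For a quicker check of the same state after a re-attachment, iproute2 offers a brief mode (generic usage, not specific to this slice):

ubuntu@node1:~$ ip -br link   # one line per device: name, state, MAC
ubuntu@node1:~$ ip -br addr   # one line per device: name, state, addresses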

in reply to: revive the ServiceXSlice? #5215

Mert Cevik
Moderator

Fengping,

I checked the VMs on this slice (the IPs are listed below), and all of them have the following sliver key:
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNo … nrsc4= sliver

On the FABRIC bastion hosts, I see the following (bastion) key:
ecdsa-sha2-nistp256 AAAAE2VjZ … frtHLo= bastion_

You can check your ssh configuration and sliver key accordingly.
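
For reference, a working setup usually points separate IdentityFile entries at the bastion key and the sliver key and jumps through the bastion. A minimal sketch of ~/.ssh/config; the host alias, username, and key paths are placeholders, and the authoritative configuration is described in the FABRIC knowledge base:

Host bastion.fabric-testbed.net
    User <your_bastion_username>
    IdentityFile ~/.ssh/fabric_bastion_key

Host servicex-vm
    HostName 2001:400:a100:3090:f816:3eff:fe1c:385f
    User ubuntu
    IdentityFile ~/.ssh/sliver_key
    ProxyJump <your_bastion_username>@bastion.fabric-testbed.net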

For the FABNetv6Ext network, all VMs have their IPs in place and could ping the default gateway in their subnet (2602:fcfb:1d:2::/64). These IPs are also receiving traffic from external sources, so they seem to be in good health.

However, I could not ping the IP you mentioned in the peering subnet (e.g. 2602:fcfb:1d:3::/64). My visibility into this peering subnet is limited, and I'm not sure where those IPs are active. I have notified our network team about this.

VMs:
2001:400:a100:3090:f816:3eff:fe56:acb7
2001:400:a100:3090:f816:3eff:fe80:bfc7
2001:400:a100:3090:f816:3eff:fe9c:3e41
2001:400:a100:3090:f816:3eff:fee3:ef05
2001:400:a100:3090:f816:3eff:fe8b:deb0
2001:400:a100:3090:f816:3eff:fe8a:f1d1
2001:400:a100:3090:f816:3eff:fe1c:385f
2001:400:a100:3090:f816:3eff:feaa:161a
2001:400:a100:3090:f816:3eff:fee2:d192
2001:400:a100:3090:f816:3eff:fe31:1eeb

in reply to: FABRIC NAT64 Service Outage (8/28 – ???) #5203

Mert Cevik
Moderator

The FABRIC NAT64 service is back online.

in reply to: FABRIC-SALT Outage #5067

Mert Cevik
Moderator

All VMs are active following a reboot, and all network services are online. We apologize for this inconvenience. Please let us know if you have any issues on your slivers.

in reply to: Any problems with SSH connectivity? #5004

Mert Cevik
Moderator

Hello Bruce,

Inside this VM (205.172.170.76), an SSH public key identified as “fabric@localhost” is present under the ubuntu user account.
The VM is accessible over SSH from the public internet. With the right SSH key and client configuration, as described at https://learn.fabric-testbed.net/knowledge-base/logging-into-fabric-vms/, you should be able to log in to the VM.

On the other hand, since you mention that you are suddenly unable to log in to the VMs, it could be related to a change to the SSH key inside the VM. Can you confirm that the key I mentioned above is the one that is supposed to be present in the VM?
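
To compare keys, the public keys authorized inside the VM can be listed from any working session, and a local public key's comment and fingerprint can be checked with ssh-keygen. Generic OpenSSH usage; the key path is a placeholder:

ubuntu@vm:~$ cat ~/.ssh/authorized_keys
$ ssh-keygen -lf ~/.ssh/sliver_key.pub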
