Mert Cevik

Forum Replies Created
Mert Cevik
Moderator

The FASTnet worker at KANS (kans-w2.fabric-testbed.net) encountered a hardware (driver) problem, and all VMs on it were rebooted. All network interfaces have been re-attached to the VMs; however, IP address configurations may need to be reapplied on them.

We will work on remedies for this specific hardware problem to prevent similar incidents.
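
If IP addresses need to be reapplied, this can be done with the ip tool from inside the VM. A minimal sketch, assuming a placeholder interface name and address (substitute the values from your own slice configuration):

sudo ip link set dev ens7 up
sudo ip addr add 192.168.1.10/24 dev ens7
ip addr show ens7    # verify the address is in place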

in reply to: Maintenance on the testbed #6072
Mert Cevik
Moderator

Dear experimenters,

The issue is fixed and the maintenance has been lifted. The testbed is back to normal operation for all services.

in reply to: FABRIC MICH site is in maintenance indefinitely #5924
Mert Cevik
Moderator

Dear experimenters,

Issues are resolved. FABRIC-MICH is available for experiments.

in reply to: FABRIC-FIU potential outage due to cooling problem #5505
Mert Cevik
Moderator

The cooling system at the datacenter was fixed without any impact on the FABRIC-FIU site.

FABRIC-FIU resources are available for slices.

in reply to: STAR site power loss, connectivity losses #5429
Mert Cevik
Moderator

STAR is online.

in reply to: revive the ServiceXSlice? #5217
Mert Cevik
Moderator

Interfaces on node1 are re-attached to their original devices.

ubuntu@node1:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UP group default qlen 1000
    link/ether fa:16:3e:1c:38:5f brd ff:ff:ff:ff:ff:ff
    inet 10.30.6.43/23 brd 10.30.7.255 scope global dynamic ens3
       valid_lft 54283sec preferred_lft 54283sec
    inet6 2001:400:a100:3090:f816:3eff:fe1c:385f/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 86315sec preferred_lft 14315sec
    inet6 fe80::f816:3eff:fe1c:385f/64 scope link
       valid_lft forever preferred_lft forever
3: ens7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 02:7f:ae:44:cb:c9 brd ff:ff:ff:ff:ff:ff
4: ens8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 02:bc:a6:3f:c7:cb brd ff:ff:ff:ff:ff:ff
    inet6 fe80::bc:a6ff:fe3f:c7cb/64 scope link
       valid_lft forever preferred_lft forever
5: ens9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 06:e3:d6:00:5b:06 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::4e3:d6ff:fe00:5b06/64 scope link
       valid_lft forever preferred_lft forever
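
In the listing above, ens7 is DOWN with no address, and ens8/ens9 carry only link-local IPv6 addresses. A minimal sketch for verifying the device mapping and reapplying an address follows; the IPv6 address below is a placeholder, so use the values from your original slice configuration:

ip -br link show    # compare MACs against your recorded NIC-to-network mapping
sudo ip link set dev ens7 up
sudo ip -6 addr add 2602:fcfb:1d:2::10/64 dev ens7
ip -6 addr show ens7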

in reply to: revive the ServiceXSlice? #5215
Mert Cevik
Moderator

Fengping,

I checked the VMs on this slice (IPs listed below), and all of them have a sliver key as below:
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNo … nrsc4= sliver

On the FABRIC bastion hosts, I see the following (bastion) key:
ecdsa-sha2-nistp256 AAAAE2VjZ … frtHLo= bastion_

You can check your SSH configuration and sliver key accordingly.
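
For reference, a minimal ~/.ssh/config sketch for reaching a slice VM through the FABRIC bastion; the bastion alias, usernames, and key paths below are illustrative, so substitute your own values from the FABRIC portal:

Host fabric-bastion
    HostName bastion.fabric-testbed.net
    User your_bastion_username
    IdentityFile ~/.ssh/fabric_bastion_key

Host 2001:400:a100:3090:*
    User ubuntu
    IdentityFile ~/.ssh/sliver_key
    ProxyJump fabric-bastion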

For the FABNetv6Ext network, all VMs have their IPs in place and can ping the default gateway in their subnet (2602:fcfb:1d:2::/64). These IPs are also receiving traffic from external sources, so they appear to be in good health.

However, I could not ping the IP you mentioned in the peering subnet (e.g. 2602:fcfb:1d:3::/64). My visibility into this peering subnet is limited, and I'm not sure where those addresses are active. I notified our network team about this.
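
From inside one of the VMs, reachability toward the peering subnet can be checked as below; the ::1 gateway address is an assumption about the peering subnet's layout:

ip -6 route show    # confirm a route toward 2602:fcfb:1d:3::/64 exists
ping -6 -c 3 2602:fcfb:1d:3::1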

VMs:
2001:400:a100:3090:f816:3eff:fe56:acb7
2001:400:a100:3090:f816:3eff:fe80:bfc7
2001:400:a100:3090:f816:3eff:fe9c:3e41
2001:400:a100:3090:f816:3eff:fee3:ef05
2001:400:a100:3090:f816:3eff:fe8b:deb0
2001:400:a100:3090:f816:3eff:fe8a:f1d1
2001:400:a100:3090:f816:3eff:fe1c:385f
2001:400:a100:3090:f816:3eff:feaa:161a
2001:400:a100:3090:f816:3eff:fee2:d192
2001:400:a100:3090:f816:3eff:fe31:1eeb

in reply to: FABRIC NAT64 Service Outage (8/28 – ???) #5203
Mert Cevik
Moderator

The FABRIC NAT64 service is back online.

in reply to: FABRIC-SALT Outage #5067
Mert Cevik
Moderator

All VMs are active following a reboot, and all network services are online. We apologize for this inconvenience. Please let us know if you have any issues with your slivers.

in reply to: Any problems with SSH connectivity? #5004
Mert Cevik
Moderator

Hello Bruce,

Inside this VM (205.172.170.76), an SSH public key identified as “fabric@localhost” is present under the user account ubuntu.
The VM is accessible over SSH from the public internet. With the right SSH key and SSH client configuration, as described at https://learn.fabric-testbed.net/knowledge-base/logging-into-fabric-vms/, you should be able to log in to the VM.

On the other hand, since you mention you are suddenly unable to log in to the VMs, it could be related to a change to the SSH key inside the VM. Can you confirm that the key I mentioned above is the one that is supposed to be present in the VM?
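
If you can still reach the VM (for example through the console), one way to verify is to compare key fingerprints; the local key path below is a placeholder:

ssh-keygen -lf /home/ubuntu/.ssh/authorized_keys    # fingerprints of keys installed in the VM
ssh-keygen -lf ~/.ssh/your_slice_key.pub            # fingerprint of your local public key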

in reply to: bastion ssh login #4734
Mert Cevik
Moderator

There may be multiple issues here that we will be working on.

To rule out a problem with the “outside” IP address that you’re using, can you please send us your IP address?
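
One way to look up the public IP that your traffic leaves from, assuming outbound HTTPS access (ifconfig.me is a third-party lookup service; any equivalent works):

curl -4 https://ifconfig.me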

in reply to: Long running slice stability issue. #4733
Mert Cevik
Moderator

Two VMs were stopped by the hypervisor, and I have started them. Can you please check their status and your access?

The root cause of this problem is a known issue that we were able to correct on Phase 1 sites last month, but some of the Phase 2 sites have not received this correction yet. We will find a convenient time for them in the next few weeks; until then, we will be able to help whenever this occurs.

Mert Cevik
Moderator

Maintenance is completed.

in reply to: Maintenance on FABRIC-MASS between June 5-9 #4467
Mert Cevik
Moderator

FABRIC-MASS is shut down and will be back online by the end of the multi-day FABRIC maintenance (June 12-16, 2023).

Mert Cevik
Moderator

Work on network connectivity for FIU is still in progress. We are working with the hosting campus to resolve the issues. FIU will remain in maintenance until the end of the multi-day maintenance that we will perform next week (starting 6/12).
