Mert Cevik

Forum Replies Created

Viewing 15 posts - 166 through 180 (of 181 total)
in reply to: Loss of SSH connectivity after Debian upgrade #4023
    Mert Cevik
    Moderator

We are in the process of adding the Debian 11 (Bullseye) image to all FABRIC sites. We will post a notification when this image can be used in your notebooks.
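Once the image is available, a minimal FABlib sketch along these lines could request it from a notebook; the image label "default_debian_11" is an assumption until the rollout is announced, and the site name is only an example.

from fabrictestbed_extensions.fablib.fablib import FablibManager

fablib = FablibManager()

# Request a single node with the (assumed) Debian 11 image label.
slice = fablib.new_slice(name="debian11-test")
node = slice.add_node(name="node1", site="STAR", image="default_debian_11")  # image name is assumed

slice.submit()
print(node.get_management_ip())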

      Mert Cevik
      Moderator

The following subnets are used for the control interfaces of the VMs on all sites.

        [1] 10.20.4.0/23
        [2] 10.30.6.0/23

        Update:

For the EDC site (a special rack for Measurement VMs), the following subnets are used:

        [1] 10.40.4.0/23
        [2] 10.40.6.0/23
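For reference, a small standard-library Python check like the one below can tell whether an address belongs to these control subnets, for example so that firewall rules added inside a VM do not block control-plane traffic; the addresses in the comments are only examples.

import ipaddress

CONTROL_SUBNETS = [
    ipaddress.ip_network("10.20.4.0/23"),
    ipaddress.ip_network("10.30.6.0/23"),
    ipaddress.ip_network("10.40.4.0/23"),  # EDC (Measurement VM rack)
    ipaddress.ip_network("10.40.6.0/23"),  # EDC (Measurement VM rack)
]

def is_control_address(addr: str) -> bool:
    """Return True if addr falls inside any of the control subnets above."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in CONTROL_SUBNETS)

print(is_control_address("10.20.4.94"))  # True  (a VM management address)
print(is_control_address("172.17.0.1"))  # False (a docker0 bridge address)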

        Mert Cevik
        Moderator

          Maintenance on FABRIC-MICH is completed.

          Mert Cevik
          Moderator

            Maintenance on FABRIC-MASS is completed.

            Mert Cevik
            Moderator

              Maintenance on FABRIC-MASS is completed.

              in reply to: Network outage for FABRIC Central Services [RESOLVED] #3028
              Mert Cevik
              Moderator

                This outage is resolved. All FABRIC services are operational and new requests can be submitted.

                in reply to: Network outage for FABRIC-NCSA [RESOLVED] #2726
                Mert Cevik
                Moderator

This outage is resolved. VMs on the FABRIC-NCSA rack are reachable from the public internet. Please let us know if you encounter any problems.

                  in reply to: Network outage for FABRIC-NCSA [RESOLVED] #2652
                  Mert Cevik
                  Moderator

We are working with the hosting campus on a known network issue that may require a part replacement. No ETA is available; updates will be posted when the problem is resolved.

                    in reply to: Create Nodes with IPv6 address #1524
                    Mert Cevik
                    Moderator

If you create an example slice that reveals the problem, we can work on it to understand it better, suggest solutions, and consider changes to the public ACLs. If this sounds good to you, please let us know the name of the slice and the FABRIC site.

                      in reply to: Create Nodes with IPv6 address #1519
                      Mert Cevik
                      Moderator

Does it mean that the TACC site does not support IPv6?

Yes, the TACC site does not support IPv6; it supports IPv4 addresses.

Sites MAX and TACC (and the upcoming MASS) support only IPv4 addresses.

Sites STAR, MICH, UTAH, NCSA, WASH, DALL, and SALT support only IPv6 addresses.
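As a rough guide (a snapshot of the lists above, not an authoritative API), you could encode this when choosing a site: if your client has no IPv6 connectivity, only the IPv4 sites are directly reachable over the management network.

IPV4_SITES = {"MAX", "TACC", "MASS"}  # IPv4 management addresses
IPV6_SITES = {"STAR", "MICH", "UTAH", "NCSA", "WASH", "DALL", "SALT"}  # IPv6 management addresses

def directly_reachable_sites(client_has_ipv6: bool) -> set:
    """Sites whose VM management addresses this client can reach without a proxy."""
    return IPV4_SITES | IPV6_SITES if client_has_ipv6 else set(IPV4_SITES)

print(directly_reachable_sites(client_has_ipv6=False))  # e.g. an IPv4-only client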

                        in reply to: Slice stall on “Configuring” for MAX site via Python API #1239
                        Mert Cevik
                        Moderator

                          Hi Adam,

I checked FABRIC-MAX; however, I could not reproduce the problem. It will make it easier for us if you can share information about the slice (e.g. topology information as a list/description, a quick drawing, the request Python code, etc.).

                          Mert

                          in reply to: Network outage for FABRIC-TACC #1228
                          Mert Cevik
                          Moderator

                            Network outage is resolved. FABRIC-TACC is online.

                            in reply to: Maintenance on FABRIC-UTAH on Monday 1/10/22 – 2-5pm ET #1224
                            Mert Cevik
                            Moderator

This maintenance is rescheduled for Tuesday 1/11/22 – 9am-12pm ET.

                              in reply to: slice active but node no longer accessible #1129
                              Mert Cevik
                              Moderator

I cannot comment on that without diving into the logs to see how the VM was created, how the PCIe device was attached, etc. Instead, I suggest starting a new slice (including the fixes for the management network) and then checking the status step by step with respect to the requested devices. I will be able to help if you prefer this approach.
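As a starting point, a sketch along these lines (assuming the FABlib API; the component model "GPU_TeslaT4" and the slice/node names are only placeholders) builds a fresh slice with the PCIe device and then checks the state, the management address, and the attached device step by step:

from fabrictestbed_extensions.fablib.fablib import FablibManager

fablib = FablibManager()

slice = fablib.new_slice(name="rebuild-check")
node = slice.add_node(name="node1", site="MAX")
node.add_component(model="GPU_TeslaT4", name="gpu1")  # re-request the PCIe device (model is a placeholder)

slice.submit()

# Step-by-step verification once the slice comes up
print(slice.get_state())                # expect a stable state before configuring anything
print(node.get_management_ip())         # the management (control) address should be assigned
stdout, stderr = node.execute("lspci")  # confirm the PCIe device is visible inside the VM
print(stdout)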

                                in reply to: slice active but node no longer accessible #1125
                                Mert Cevik
                                Moderator

                                  I checked one of your VMs on FABRIC-MAX.

                                  Name: ff5acfa1-bbff-44a0-bf28-3d7d2f038d1f-Node1
                                  IP: 63.239.135.79

In your workflow to configure the slice, you change network settings that affect the Management Network.

                                  [root@node1 ~]# systemctl status NetworkManager
● NetworkManager.service - Network Manager
                                  Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled; ve>
                                  Active: inactive (dead)
                                  Docs: man:NetworkManager(8)
                                  [root@node1 ~]# systemctl is-enabled NetworkManager
                                  disabled

Interface eth0 should persist its IP address configuration (from an RFC 1918 subnet). The network node of the virtualization platform controls external traffic either by NATing or routing against the configured IP address. Currently you have the following:

                                  [root@node1 ~]# ifconfig -a
                                  docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
                                  inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
                                  ether 02:42:86:f0:f7:a8 txqueuelen 0 (Ethernet)
                                  RX packets 0 bytes 0 (0.0 B)
                                  RX errors 0 dropped 0 overruns 0 frame 0
                                  TX packets 0 bytes 0 (0.0 B)
                                  TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                                  eth0: flags=4098<BROADCAST,MULTICAST> mtu 9000
                                  ether fa:16:3e:49:8e:5a txqueuelen 1000 (Ethernet)
                                  RX packets 0 bytes 0 (0.0 B)
                                  RX errors 0 dropped 0 overruns 0 frame 0
                                  TX packets 0 bytes 0 (0.0 B)
                                  TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                                  lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
                                  inet 127.0.0.1 netmask 255.0.0.0
                                  inet6 ::1 prefixlen 128 scopeid 0x10
                                  loop txqueuelen 1000 (Local Loopback)
                                  RX packets 16 bytes 916 (916.0 B)
                                  RX errors 0 dropped 0 overruns 0 frame 0
                                  TX packets 16 bytes 916 (916.0 B)
                                  TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

When the network settings are reverted to the original Management Network configuration, your VM shows the following:

                                  [root@node1 ~]# ifconfig -a
                                  docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
                                  inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
                                  ether 02:42:69:1f:14:22 txqueuelen 0 (Ethernet)
                                  RX packets 0 bytes 0 (0.0 B)
                                  RX errors 0 dropped 0 overruns 0 frame 0
                                  TX packets 0 bytes 0 (0.0 B)
                                  TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                                  eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
                                  inet 10.20.4.94 netmask 255.255.255.0 broadcast 10.20.4.255
inet6 fe80::f816:3eff:fe49:8e5a prefixlen 64 scopeid 0x20
ether fa:16:3e:49:8e:5a txqueuelen 1000 (Ethernet)
                                  RX packets 2015 bytes 232936 (227.4 KiB)
                                  RX errors 0 dropped 31 overruns 0 frame 0
                                  TX packets 1978 bytes 226617 (221.3 KiB)
                                  TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

                                  lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
                                  inet 127.0.0.1 netmask 255.0.0.0
                                  inet6 ::1 prefixlen 128 scopeid 0x10
                                  loop txqueuelen 1000 (Local Loopback)
                                  RX packets 1160 bytes 58116 (56.7 KiB)
                                  RX errors 0 dropped 0 overruns 0 frame 0
                                  TX packets 1160 bytes 58116 (56.7 KiB)
                                  TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

and it’s reachable again.

                                  $ ping 63.239.135.79 -c 3
                                  PING 63.239.135.79 (63.239.135.79): 56 data bytes
                                  64 bytes from 63.239.135.79: icmp_seq=0 ttl=52 time=23.257 ms
                                  64 bytes from 63.239.135.79: icmp_seq=1 ttl=52 time=21.347 ms
                                  64 bytes from 63.239.135.79: icmp_seq=2 ttl=52 time=17.025 ms

--- 63.239.135.79 ping statistics ---
                                  3 packets transmitted, 3 packets received, 0.0% packet loss
                                  round-trip min/avg/max/stddev = 17.025/20.543/23.257/2.607 ms

You need to review the standard installation procedures of platforms such as Docker, Kubernetes, and OpenStack, and consider the changes they make to the Management Network of your slivers.
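As an illustration only (FABlib from the notebook; the slice and node names are placeholders), a check like the following after each installation step can catch the problem before access is lost: it re-enables NetworkManager if an installer disabled it and shows whether eth0 still holds its control-subnet address.

from fabrictestbed_extensions.fablib.fablib import FablibManager

fablib = FablibManager()
node = fablib.get_slice(name="my-slice").get_node(name="Node1")  # placeholder names

checks = [
    # Re-enable NetworkManager if a platform installer disabled it.
    "systemctl is-enabled NetworkManager || sudo systemctl enable --now NetworkManager",
    # eth0 should still carry its RFC 1918 address (10.20.4.0/23 in the output above).
    "ip -4 addr show dev eth0",
]
for cmd in checks:
    stdout, stderr = node.execute(cmd)
    print(cmd, "->", stdout.strip())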
