Mert Cevik

Forum Replies Created

Viewing 10 posts - 151 through 160 (of 160 total)
in reply to: Network outage for FABRIC-NCSA [RESOLVED] #2726
Mert Cevik
Moderator

This outage is resolved. VMs on the FABRIC-NCSA rack are reachable from the public internet. Please let us know if you encounter any problems.

in reply to: Network outage for FABRIC-NCSA [RESOLVED] #2652
Mert Cevik
Moderator

We are working with the hosting campus on a known network issue that may require a part replacement. No ETA is available; updates will be posted when the problem is resolved.

in reply to: Create Nodes with IPv6 address #1524
Mert Cevik
Moderator

If you create an example slice that reveals the problem, we can work on it to understand it better, suggest solutions, and consider changes to the public ACLs. If this sounds good to you, please let us know the name of the slice and the FABRIC site.
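For reference, a minimal slice request could look like the sketch below. This is only an illustration, assuming the FABlib Python API (FablibManager, new_slice, add_node, submit) as used in the FABRIC JupyterHub examples; the slice name, node name, and site are placeholders.

# Minimal example slice to reproduce the IPv6 reachability problem.
# Assumes FABlib is installed and FABRIC credentials are configured.
from fabrictestbed_extensions.fablib.fablib import FablibManager

fablib = FablibManager()

slice = fablib.new_slice(name="ipv6-repro")        # slice name to report back
node = slice.add_node(name="node1", site="NCSA")   # NCSA hands out IPv6 management addresses
slice.submit()                                     # blocks until the slice is Active

print(node.get_management_ip())                    # the address that cannot be reached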

in reply to: Create Nodes with IPv6 address #1519
Mert Cevik
Moderator

Does this mean that the TACC site does not support IPv6?

Yes, the TACC site does not support IPv6; it supports IPv4 addresses.

Sites MAX, TACC (and the upcoming MASS) support only IPv4 addresses.

Sites STAR, MICH, UTAH, NCSA, WASH, DALL, SALT support only IPv6 addresses.
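Since the management address family depends on the site, client-side tooling may want to detect it before connecting. A small sketch using Python's standard ipaddress module, with placeholder addresses:

# Decide how to reach a node based on its management address family.
import ipaddress

def ip_family(addr: str) -> str:
    return "IPv6" if ipaddress.ip_address(addr).version == 6 else "IPv4"

print(ip_family("63.239.135.79"))   # IPv4 management address (e.g. MAX, TACC)
print(ip_family("2001:db8::1"))     # placeholder IPv6 address (e.g. STAR, UTAH, NCSA)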

in reply to: Slice stall on “Configuring” for MAX site via Python API #1239
Mert Cevik
Moderator

Hi Adam,

I checked FABRIC-MAX, but I could not reproduce the problem. It will be easier for us if you can share information about the slice (e.g. topology information as a list/description, a quick drawing, the request Python code, etc.).

Mert

in reply to: Network outage for FABRIC-TACC #1228
Mert Cevik
Moderator

The network outage is resolved. FABRIC-TACC is online.

in reply to: Maintenance on FABRIC-UTAH on Monday 1/10/22 – 2-5pm ET #1224
Mert Cevik
Moderator

This maintenance is rescheduled for Tuesday 1/11/22, 9am-12pm ET.

in reply to: slice active but node no longer accessible #1129
Mert Cevik
Moderator

I cannot comment on that without diving into the logs to see how the VM was created, how the PCIe device was attached, etc. Instead, I suggest starting a new slice (including the fixes for the management network), then checking the status step by step with respect to the requested devices. I will be able to help if you prefer this approach.

in reply to: slice active but node no longer accessible #1125
Mert Cevik
Moderator

I checked one of your VMs on FABRIC-MAX.

Name: ff5acfa1-bbff-44a0-bf28-3d7d2f038d1f-Node1
IP: 63.239.135.79

In your workflow to configure the slice, you change network settings that affect the Management Network:

[root@node1 ~]# systemctl status NetworkManager
● NetworkManager.service - Network Manager
   Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled; ve>
   Active: inactive (dead)
     Docs: man:NetworkManager(8)
[root@node1 ~]# systemctl is-enabled NetworkManager
disabled

Interface eth0 should persist its IP address configuration (from an RFC1918 subnet). The network node of the virtualization platform controls external traffic either by NAT’ing or routing against that configured IP address. Currently you have the following:

[root@node1 ~]# ifconfig -a
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:86:f0:f7:a8  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4098<BROADCAST,MULTICAST>  mtu 9000
        ether fa:16:3e:49:8e:5a  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 16  bytes 916 (916.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16  bytes 916 (916.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

When the network settings are reverted to the original Management Network configuration, your VM shows the following:

[root@node1 ~]# ifconfig -a
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:69:1f:14:22  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet 10.20.4.94  netmask 255.255.255.0  broadcast 10.20.4.255
        inet6 fe80::f816:3eff:fe49:8e5a  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:49:8e:5a  txqueuelen 1000  (Ethernet)
        RX packets 2015  bytes 232936 (227.4 KiB)
        RX errors 0  dropped 31  overruns 0  frame 0
        TX packets 1978  bytes 226617 (221.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 1160  bytes 58116 (56.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1160  bytes 58116 (56.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

and it is reachable again:

$ ping 63.239.135.79 -c 3
PING 63.239.135.79 (63.239.135.79): 56 data bytes
64 bytes from 63.239.135.79: icmp_seq=0 ttl=52 time=23.257 ms
64 bytes from 63.239.135.79: icmp_seq=1 ttl=52 time=21.347 ms
64 bytes from 63.239.135.79: icmp_seq=2 ttl=52 time=17.025 ms

--- 63.239.135.79 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 17.025/20.543/23.257/2.607 ms

You need to review the standard installation procedures of platforms such as Docker, Kubernetes, and OpenStack, and consider the changes they make to the Management Network of your slivers.
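For illustration, a sketch of how the Management Network can typically be restored in this situation, assuming the VM image uses NetworkManager and that eth0 receives its RFC1918 address via DHCP (run from the VM console, since the management address is down):

[root@node1 ~]# systemctl enable --now NetworkManager   # re-enable the service the installer disabled
[root@node1 ~]# nmcli device connect eth0               # re-acquire the RFC1918 address via DHCP
[root@node1 ~]# ip addr show eth0                       # confirm the 10.x management address is back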

in reply to: jupyter hub issue #1104
Mert Cevik
Moderator

Hello Tejasri,

Can you try to log in to the JupyterHub again and let us know?

I’m not the best person to help with this at the moment, but I will be able to contact the team. It will be useful to know how multiple login attempts behave.

Mert
