Trouble with IPv4 Connectivity in a 3-Node Ubuntu 22 Cluster Using Shared NICs
Tagged: IPv4, Netv4, Shared NIC, Ubuntu
December 27, 2024 at 11:30 pm #7975
I’ve set up a cluster with three nodes running Ubuntu 22, and each node is configured with a Shared NIC using Netv4 network connections. I’m encountering an issue with IPv4 connectivity between the nodes.
Here’s the situation:
- Each node can successfully ping IPv6 addresses within the cluster.
- All nodes can ping the IPv4 gateway.
- However, nodes cannot ping each other using their IPv4 addresses.
- When attempting to access the internet, nodes can ping external IPv6 addresses but not IPv4 addresses.
Could there be something wrong with my configuration that’s causing these IPv4 connectivity issues? What steps should I take to ensure that all nodes can communicate with each other using IPv4 addresses in this setup?
Any advice or suggestions would be greatly appreciated!
Thanks in advance for your help!
December 28, 2024 at 9:53 am #7976
Hi Pinxiang,
Could you please share your Slice ID?
Thanks,
Komal
December 28, 2024 at 11:53 am #7977
Hi Komal,
My slice ID is 2e8bc4ce-6e53-4fff-84bd-4b0276a996a1. Thank you for your help!
Best,
Pinxiang
December 28, 2024 at 1:24 pm #7978
Hi Pinxiang,
Looking at your slice, you have three VMs connected to the FabNetv4 service, as you mentioned. However, the IP addresses are not configured on the respective interfaces on the VMs, so the traffic does not pass.
Could you please try the FABNet IPv4 example accessible via start_here.ipynb? The example "FABNet IPv4 (Layer 3): Connect to FABRIC's IPv4 internet" offers three options: auto, manual, and full auto.
In the auto and full auto options, the API takes care of configuring the IP addresses and traffic should pass over IPv4; in the manual option, the user is explicitly required to configure the IP addresses.
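For reference, here is a minimal sketch of the auto option using fablib (a sketch only, assuming fablib 1.4+; the slice name, node names, and site below are hypothetical):

from fabrictestbed_extensions.fablib.fablib import FablibManager

fablib = FablibManager()
slice = fablib.new_slice(name="my-fabnetv4-slice")   # hypothetical name
net = slice.add_l3network(name="net1", type="IPv4")  # FABNet IPv4 service

for i in range(1, 4):
    node = slice.add_node(name=f"node{i}", site="TACC",  # hypothetical site
                          image="default_ubuntu_22")
    iface = node.add_component(model="NIC_Basic", name="nic1").get_interfaces()[0]
    iface.set_mode("auto")  # let fablib assign an IPv4 address from the network's subnet
    net.add_interface(iface)

slice.submit()  # after submit, each node's dataplane interface carries an IPv4 address

With the auto option, the three nodes should then be able to ping each other over their FABNet IPv4 addresses without any manual configuration.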
Please feel free to reach out in case of questions or concerns.
Snippet from your VMs:
root@3bb1005a-6a0f-4b52-9c07-75d453b50813-node1:~# ifconfig -a
enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.30.6.153 netmask 255.255.254.0 broadcast 10.30.7.255
inet6 2001:400:a100:3020:f816:3eff:fe23:bd75 prefixlen 64 scopeid 0x0<global>
inet6 fe80::f816:3eff:fe23:bd75 prefixlen 64 scopeid 0x20<link>
ether fa:16:3e:23:bd:75 txqueuelen 1000 (Ethernet)
RX packets 364541 bytes 306898628 (306.8 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 34407 bytes 3474334 (3.4 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

enp7s0: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether 02:50:a9:17:fc:d4 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 662 bytes 115088 (115.0 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 662 bytes 115088 (115.0 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ubuntu@05441f94-5e35-4981-97d3-1ed1dac3381e-node3:~$ ifconfig -a
enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.30.6.231 netmask 255.255.254.0 broadcast 10.30.7.255
inet6 fe80::f816:3eff:fec8:e21a prefixlen 64 scopeid 0x20<link>
inet6 2001:400:a100:3020:f816:3eff:fec8:e21a prefixlen 64 scopeid 0x0<global>
ether fa:16:3e:c8:e2:1a txqueuelen 1000 (Ethernet)
RX packets 368197 bytes 307203695 (307.2 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 37436 bytes 3716755 (3.7 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

enp7s0: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether 02:fe:2e:df:af:a7 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 556 bytes 95893 (95.8 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 556 bytes 95893 (95.8 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ubuntu@9ac56841-a123-4efa-9322-af75d3731819-node2:~$ ifconfig -a
enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.30.6.23 netmask 255.255.254.0 broadcast 10.30.7.255
inet6 2001:400:a100:3020:f816:3eff:fe62:510a prefixlen 64 scopeid 0x0<global>
inet6 fe80::f816:3eff:fe62:510a prefixlen 64 scopeid 0x20<link>
ether fa:16:3e:62:51:0a txqueuelen 1000 (Ethernet)
RX packets 379700 bytes 308068634 (308.0 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 46612 bytes 4697307 (4.6 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

enp7s0: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether 02:ef:84:b8:fd:09 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 570 bytes 97991 (97.9 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 570 bytes 97991 (97.9 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Thanks,
Komal
December 28, 2024 at 1:31 pm #7981
Also, please note that the FabNetv4 network service is like an internet within FABRIC and does not provide external connectivity. Please check out more details about the Network Services offered by FABRIC here.
FabNetv*Ext services do offer external connectivity, but they require special permission to be enabled, which can be requested by the Project Lead.
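Once the permission is granted, the external network is made routable roughly like this (a hedged sketch based on the FABNetv4Ext notebook examples, assuming fablib 1.4+; the slice and network names below are hypothetical):

from fabrictestbed_extensions.fablib.fablib import FablibManager

fablib = FablibManager()
slice = fablib.get_slice(name="my-slice")    # hypothetical slice name
net_ext = slice.get_network(name="net_ext")  # created earlier with type="IPv4Ext"

# Request that one of the network's addresses become publicly routable,
# then resubmit the slice so the change takes effect.
public_ip = net_ext.get_available_ips()[0]
net_ext.make_ip_publicly_routable(ipv4=[str(public_ip)])
slice.submit()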
Thanks,
Komal
January 3, 2025 at 9:01 pm #7991
Hi Komal,
Thank you for your guidance. I created the cluster through the Experiment GUI rather than JupyterHub, and I'm unsure how to execute start_here.ipynb. I'm curious why the GUI-created cluster doesn't automatically configure the IP addresses. Is there a way to enable automatic IP assignment during configuration via the GUI? If manual configuration is necessary, should I configure the IP for enp7s0 on each node to address this issue?

We plan to deploy a Kubernetes cluster across these three nodes, which requires the ability to access external sites (like GitHub and DockerHub) and for the nodes to communicate with each other via IPv4 addresses. We do not need to expose any services to the public internet. Could you offer any advice on this setup and any additional considerations we should be aware of during deployment? I look forward to your further guidance.
Best regards,
Pinxiang
January 5, 2025 at 11:51 am #7993
Hi Pinxiang,
The GUI does not support automatic configuration of IP addresses or complex topologies. When creating a slice from the GUI, the user is expected to configure the IP addresses manually after logging into the VM. You are right: in this case the interface would be enp7s0. This can also be confirmed by matching the MAC address shown in the GUI against the interface.

I would strongly encourage you to try JupyterHub; we have several examples available there which might be very helpful, and you can attach to your existing slice from there (see the sketch below).
Please follow the instructions here to set up your JupyterHub environment and create a simple slice.
Also, sharing instructions for creating a K8s cluster on FABRIC (example created by Professor Fraida Fund).
Please let us know if you run into any issues or have questions.
Thanks,
Komal