Edgard da Cunha Pontes

Forum Replies Created

    in reply to: Difference between throughput after maintenance #6537

    Hi Paul! Thanks for the feedback.

    Sorry for the delay in responding to you. Our group was writing a paper on “Sliced WANs for Data-Intensive Science”, and all of its experiments were carried out on FABRIC. After we submitted the paper, I found some time to get back to running other experiments.

    I split the previous topology [LINK] into two different ones.
    Topology 1: SEAT (h1), MASS (r1), SALT (r2), STAR (r3), NEWY (h2).
    Topology 2: LOSA (h1), DALL (r1), ATLA (r2), WASH (r3), NEWY (h2).

    In topology 2, I obtained the following results (100 MB per TCP flow, iperf3 v3.5.0):

    h1  >  r1
    [  5]   0.00-230.51 sec   100 MBytes  3.64 Mbits/sec  697             sender
    [  5]   0.00-230.55 sec  99.6 MBytes  3.62 Mbits/sec                  receiver
    r1  >  r2
    [  5]   0.00-0.69   sec   101 MBytes  1.23 Gbits/sec    0             sender
    [  5]   0.00-0.73   sec   100 MBytes  1.15 Gbits/sec                  receiver
    r2  >  r3
    [  5]   0.00-0.53   sec   101 MBytes  1.60 Gbits/sec    0             sender
    [  5]   0.00-0.57   sec  99.5 MBytes  1.47 Gbits/sec                  receiver
    r3  >  h2
    [  5]   0.00-0.22   sec   100 MBytes  3.88 Gbits/sec  172             sender
    [  5]   0.00-0.26   sec  99.4 MBytes  3.26 Gbits/sec                  receiver
    h1  >  h2
    [  5]   0.00-459.85 sec   100 MBytes  1.83 Mbits/sec  740             sender
    [  5]   0.00-459.92 sec  99.5 MBytes  1.82 Mbits/sec                  receiver
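
    For reference, each hop above was measured with a single 100 MB TCP flow; the commands were along these lines (the receiver address and which node plays server vs. client are just illustrative, not my exact setup):

    # on the receiving node of each hop
    iperf3 -s
    # on the sending node: one TCP flow, 100 MBytes total
    iperf3 -c <receiver-IP> -n 100M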

    Again, the results for paths that pass through LOSA show a decrease in the transmission rate.

    One question I still have: is there a difference between the L2 overlay services L2STS and L2PTP for throughput tests?

    in reply to: Difference between throughput after maintenance #6347

    Hi Paul, thanks for answering me!

    What NIC types are you using?

    I’m using NIC_ConnectX_5 NICs in this test.

    What VM size are you using?

    All nodes are default_rocky_8 with 2 cores and 8 GB RAM.

    How are you forwarding traffic in your routers?

    In these tests, I’m using static routes, with alternative routes selected by TOS. All tests are run with iperf3 (TCP).
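
    As a rough sketch of what I mean by routes selected by TOS (the interface addresses, TOS value, and table number below are placeholders, not my actual configuration):

    # ordinary static route towards h2's subnet via the default next hop
    sudo ip route add 192.168.4.0/24 via 192.168.2.2
    # packets carrying a specific TOS value are sent to routing table 100,
    # which points at a different next hop
    sudo ip rule add tos 0x10 table 100
    sudo ip route add 192.168.4.0/24 via 192.168.3.2 table 100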

    Are you tuning the TCP/IP configuration of your nodes (congestion control algorithm, MTU, buffer sizes, etc)?

    We are investigating congestion control with the Cubic and BBR algorithms; these tests use Cubic. All other settings are left at their defaults.
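
    For anyone reproducing this, the congestion control algorithm can be switched either system-wide or per iperf3 run; a minimal sketch (Cubic is the default used in these tests):

    # check which algorithms are available and which one is active
    sysctl net.ipv4.tcp_available_congestion_control
    sysctl net.ipv4.tcp_congestion_control
    # switch the system default (BBR may require: sudo modprobe tcp_bbr)
    sudo sysctl -w net.ipv4.tcp_congestion_control=bbr
    # or select it per test on the iperf3 client (Linux only)
    iperf3 -c <receiver-IP> -n 100M -C bbr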

    Also, are you pinning your nodes to the NIC’s NUMA domain? NUMA pinning example.

    One of the main objectives of this experiment, besides investigating congestion control, is the reproducibility of all tests.

    Apparently, in the topology presented, the main bottleneck is the node in Los Angeles (LOSA).

    in reply to: Difference between throughput after maintenance #6341

    I recreated the same slice and got the following results.

    
    h1  >  r1
    [  5]   0.00-0.94   sec  50.2 MBytes   448 Mbits/sec    9             sender
    [  5]   0.00-0.98   sec  48.7 MBytes   416 Mbits/sec                  receiver
    r1  >  r2
    [  5]   0.00-10.43  sec  50.6 MBytes  40.7 Mbits/sec   44             sender
    [  5]   0.00-10.47  sec  46.8 MBytes  37.5 Mbits/sec                  receiver
    r1  >  r3
    [  5]   0.00-82.03  sec  50.1 MBytes  5.13 Mbits/sec  356             sender
    [  5]   0.00-82.07  sec  49.8 MBytes  5.09 Mbits/sec                  receiver
    r1  >  r5
    [  5]   0.00-454.26 sec  50.1 MBytes   925 Kbits/sec  3455             sender
    [  5]   0.00-454.30 sec  49.8 MBytes   919 Kbits/sec                  receiver
    r2  >  r3
    [  5]   0.00-0.37   sec  50.8 MBytes  1.17 Gbits/sec    0             sender
    [  5]   0.00-0.41   sec  50.0 MBytes  1.03 Gbits/sec                  receiver
    r3  >  r4
    [  5]   0.00-1.16   sec  50.2 MBytes   362 Mbits/sec    0             sender
    [  5]   0.00-1.21   sec  49.4 MBytes   343 Mbits/sec                  receiver
    r4  >  r5
    [  5]   0.00-0.48   sec  51.2 MBytes   892 Mbits/sec    0             sender
    [  5]   0.00-0.52   sec  50.2 MBytes   811 Mbits/sec                  receiver
    r4  >  h2
    [  5]   0.00-0.13   sec  51.0 MBytes  3.34 Gbits/sec    0             sender
    [  5]   0.00-0.17   sec  49.7 MBytes  2.52 Gbits/sec                  receiver
    
    in reply to: Users from Brazil unable to login to FABRIC #5956

    I noticed this problem yesterday (26/10) and have already opened an “Account Issues” request to use Google as an identity provider, since my institution (UFES) uses Gmail for its institutional email.
    As my FABRIC account was created with this email, I did not lose my data or my registration in the project I participate in.
    It might be worth allowing more than one email address to be registered with a FABRIC account, or supporting GitHub as an identity provider, if possible.

    in reply to: Enable DPDK on Fabric Nodes #4172

    Hello Ilya, thank you for answering me!

    Following up on this thread: is DPDK enabled on the FPGA interfaces? Is there any material that would help with enabling DPDK?

    in reply to: Loss of SSH connectivity after Debian upgrade #4029

    Hi Mert,

    Thanks so much for replying and adding Debian 11 (Bullseye) to the official images.

    It helped me a lot. I’m going to start using it this week.

    in reply to: Loss of SSH connectivity after Debian upgrade #4025

    Thanks for the good news!

    To generate some results while waiting for the official Debian 11 “Bullseye” image, I’m using the default Rocky Linux image and, to my surprise, I haven’t had any problems with VLANs on the NIC_ConnectX_5 NICs.
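
    In case it helps anyone else, the VLAN sub-interfaces I’m referring to are created the usual way on the ConnectX-5 ports; a sketch (the interface name, VLAN ID, and address here are only examples):

    # create a VLAN sub-interface on the dedicated ConnectX-5 port
    sudo ip link add link eth1 name eth1.100 type vlan id 100
    sudo ip addr add 192.168.3.1/24 dev eth1.100
    sudo ip link set dev eth1.100 up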

    Thanks.

    in reply to: Error in exchanging ospf protocol routes #3994

    Hi Paul,

    Thanks for answering me!

    Unfortunately, my experience with SharedNICs has not been very fruitful. After reading some previously answered questions, I decided to use Dedicated NICs (NIC_ConnectX_5).

    Initially, I’m working on a topology with 3 nodes (routers) running FRRouting, and OSPF is already working.

    debian@d8698a6b-3042-4429-af77-8389f9ea261e-r3:~$ sudo tcpdump -i eth1.100
    tcpdump: verbose output suppressed, use -v[v]… for full protocol decode
    listening on eth1.100, link-type EN10MB (Ethernet), snapshot length 262144 bytes
    19:01:05.079869 IP 192.168.3.2 > ospf-all.mcast.net: OSPFv2, Hello, length 48
    19:01:06.971130 IP 192.168.3.1 > ospf-all.mcast.net: OSPFv2, Hello, length 48
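
    For completeness, the OSPF side of this is just a minimal FRR configuration on each router; roughly what I have looks like the sketch below (the network statement matches the 192.168.3.0/24 link seen in the capture, the rest is illustrative):

    # enable the OSPF daemon and apply a minimal config via vtysh
    sudo sed -i 's/^ospfd=no/ospfd=yes/' /etc/frr/daemons
    sudo systemctl restart frr
    sudo vtysh <<'EOF'
    configure terminal
    router ospf
     network 192.168.3.0/24 area 0
    end
    write memory
    EOF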

    Next, I’ll use an alternative to FRR for another test. Once I get to that part, I’ll share the notebook with you. I had to take a step back in order to get these initial results.

    Edgard.
