Elie Kfoury

Forum Replies Created

Viewing 9 posts - 1 through 9 (of 9 total)
  • in reply to: Regarding queue buildup in p4 switches in Fabric Testbed #5502
    Elie Kfoury
    Participant

      You need to limit the rate in order to see the queue buildup. You can use the set_queue_rate command to do that.

      My guess is that you are seeing those results because the congestion is due to factors other than the link being fully occupied.
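
As a minimal sketch, the command can be issued through the simple_switch_CLI tool (the Thrift port 9090 and the 1000 pps value are illustrative assumptions; set_queue_rate takes a rate in packets per second):

```shell
# Cap the egress queue service rate on a running BMv2 switch.
# At 1000 pps with 1500-byte packets, the drain rate is roughly 12 Mbps,
# low enough for a queue to build up under load.
simple_switch_CLI --thrift-port 9090 <<'EOF'
set_queue_rate 1000
EOF
```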

      in reply to: Regarding queue buildup in p4 switches in Fabric Testbed #5500
      Elie Kfoury
      Participant

        Hi Nagmat,

        First question: the unit used by BMv2 is the number of packets, not bytes (your packets might be smaller or bigger).

        • enq_qdepth
          • depth of the queue when the packet was first enqueued, in number of packets
        • deq_qdepth
          • depth of the queue when the packet was dequeued, in number of packets

        Second question:

        According to https://github.com/p4lang/behavioral-model/issues/31, the default queue size is 64 packets.

        You might be getting worse results with much larger buffers due to bufferbloat. This phenomenon happens when your buffer is excessively large, making the packets stay in a queue for a long time, increasing the latency.
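
As a back-of-envelope illustration of why a larger buffer raises latency, the worst-case queueing delay is the queue depth times the per-packet transmission time (the 1 Gbps drain rate and 1500-byte packets below are assumptions, not from the thread):

```shell
# Worst-case queueing delay = depth (packets) * packet size (bits) / drain rate (bps)
depth=64            # default BMv2 queue depth, in packets
pkt_bytes=1500      # assumed MTU-sized packets
rate_bps=1000000000 # assumed 1 Gbps drain rate
delay_us=$(( depth * pkt_bytes * 8 * 1000000 / rate_bps ))
echo "worst-case delay: ${delay_us} us"   # 768 us at these numbers
```

With a 10,000-packet buffer the same arithmetic gives 120 ms, which is where bufferbloat becomes noticeable to latency-sensitive traffic.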

        Hope that helps.

        Regards,

        Elie.

        in reply to: Bmv2 max performance in FABRIC #5498
        Elie Kfoury
        Participant

          You can share it here or send it via email: ekfoury@email.sc.edu

          Thanks.

          in reply to: Bmv2 max performance in FABRIC #5496
          Elie Kfoury
          Participant

            Hi Nagmat,

            Reaching ~4Gbps on BMv2 is not common. What kind of tuning did you do? Would you mind sharing your notebook?

            Regards,

            Elie.

            in reply to: Bmv2 max performance in FABRIC #5494
            Elie Kfoury
            Participant

              Hi Nishanth,

              From the results you’re sharing, I think this is a minor performance degradation at the beginning of the test. It may be related to a burst of traffic before the 600Mbps rate is reached.

              If the switch is dropping the packets, you can also try increasing the queue size on the BMv2 switch by using the set_queue_depth command in the simple_switch_CLI tool. You can refer to Lab 5 to interact with the switch at runtime.
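
A minimal sketch of that command (the Thrift port 9090 and the depth of 1000 are illustrative assumptions; the depth unit is packets):

```shell
# Increase the egress queue depth on the running BMv2 switch (depth in packets).
simple_switch_CLI --thrift-port 9090 <<'EOF'
set_queue_depth 1000
EOF
```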

              Another suggestion is to try using nuttcp for UDP tests. ESnet suggests nuttcp instead of iperf3 for UDP testing (https://fasterdata.es.net/performance-testing/network-troubleshooting-tools/nuttcp/)
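
A sketch of an equivalent nuttcp UDP test (the receiver address is a placeholder, and the 600m rate matches the rate discussed above):

```shell
# On the receiver: start nuttcp in server mode.
nuttcp -S

# On the sender: 600 Mbps UDP for 30 seconds, reporting every second.
# -u = UDP, -R = target rate, -i = report interval, -T = duration
nuttcp -u -R600m -i1 -T30 <receiver_ip>
```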

              Regards,

              Elie.

              in reply to: Bmv2 max performance in FABRIC #5361
              Elie Kfoury
              Participant

                Hi Nagmat,

                Good to hear that you were able to reach the 1Gbps speed.

                You did not attach the main1.p4 program. I would suggest running BMv2 with logging enabled. You can refer to lab2_P4_program_building_blocks.ipynb notebook to understand how you can enable logging.

                The typical workflow is to start with logging enabled so that you can verify the behavior of your P4 program. Afterwards, you can move your program to the high performance BMv2.
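
As a sketch, the debug BMv2 target can be launched with console logging roughly as follows (the veth interface names and the compiled program name main1.json are assumptions for illustration):

```shell
# Run the debug simple_switch with per-packet logging printed to the console.
# -i maps switch port numbers to host interfaces; main1.json is the compiled P4 program.
simple_switch -i 0@veth0 -i 1@veth2 --log-console main1.json
```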

                Regards,

                Elie.

                in reply to: Bmv2 max performance in FABRIC #5343
                Elie Kfoury
                Participant

                  Hi Nagmat,

                  I see from your screenshot that in the first few seconds the throughput was zero, then it went up. I believe that if you run iperf for a longer time, the average will go up to 1Gbps.

                  You can try using the -t option to specify the time the iperf3 test should be running (e.g., -t 60 will run the test for 60 seconds).
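
For instance (the server address is a placeholder):

```shell
# On the server:
iperf3 -s

# On the client: run the test for 60 seconds instead of the default 10.
iperf3 -c <server_ip> -t 60
```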

                  Regards,

                  Elie.

                  in reply to: Bmv2 max performance in FABRIC #5341
                  Elie Kfoury
                  Participant

                    Hi Nishanth,

                    I tried running the notebook you mentioned, and I got a throughput close to 1Gbps. Note that I changed the sites since NCSA, STAR, and UMich are under maintenance. I used the following sites:

                    site1='MAX'
                    site2='MASS'
                    site3='NEWY'

                    I just executed the cells sequentially and got high throughput as shown in the figure. Can you try changing the sites and repeat the experiment?

                    Regards,

                    Elie.

                    Elie Kfoury
                    Participant

                      Hi Komal,

                      Thanks for your help. Following your steps solved the issue.

                      Regards,

                      Elie.
