Regarding queue buildup in P4 switches in FABRIC Testbed

  • #5499
    Nagmat Nazarov
    Participant

      I am doing experiments with P4 switches on the FABRIC Testbed. I am adding INT (in-band network telemetry) data to certain protocols over IPv4 and retrieving that INT data while creating congestion with iperf3 (throughput is 4 Gbit/s).

      The data I am retrieving are:
      hdr.my_meta.enq_timestamp = standard_metadata.enq_timestamp;
      hdr.my_meta.enq_qdepth = (bit<32>) standard_metadata.enq_qdepth;
      hdr.my_meta.deq_timedelta = standard_metadata.deq_timedelta;
      hdr.my_meta.deq_qdepth = (bit<32>) standard_metadata.deq_qdepth;

      INT data is collected every 0.05 seconds.
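
      A minimal sketch of what the my_meta header behind those assignments could look like (the field names come from the snippet above; the type name my_meta_t and the uniform bit<32> widths are assumptions, chosen to match the casts, since enq_qdepth and deq_qdepth are natively bit<19> in v1model):

      header my_meta_t {
          bit<32> enq_timestamp;  // standard_metadata.enq_timestamp, microseconds when the packet was enqueued
          bit<32> enq_qdepth;     // queue depth at enqueue time, widened from bit<19>
          bit<32> deq_timedelta;  // microseconds the packet spent in the queue
          bit<32> deq_qdepth;     // queue depth at dequeue time, widened from bit<19>
      }

      The instance (hdr.my_meta) would then be set valid and emitted in the deparser so the collector can read these values.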

      After the first experiment, the data I got in the CSV file (columns: enq_timestamp, enq_qdepth, deq_timedelta, deq_qdepth) is:

      830141867,3,76,11
      830232101,11,68,8
      830328413,48,619,50
      830488868,51,752,56
      830593862,61,755,56
      830769889,55,922,61
      835661502,4,54,1
      836565116,1,15,1
      836737157,1,31,1
      837945980,1,15,1
      838568806,1,21,1
      842556241,1,21,1
      851889549,1,13,1
      872360530,1,14,1
      872533596,1,18,1
      872785378,1,14,1
      872869246,1,17,1
      873466523,1,17,1
      873555094,1,16,1
      874237210,1,14,1


      My first question is:

      What does the data in the 5th row (830593862,61,755,56) mean? What do the enq_qdepth of 61 and the deq_qdepth of 56 correspond to in bytes? (On Intel Tofino, each cell meant 80 bytes if I recall correctly; how many bytes is it in the BMv2 switch?)

      Second question:

      I can set the depth with the set_queue_depth command in the simple_switch_CLI tool, but I couldn't find the default queue_depth. (I am getting the best results with the default settings; whenever I set different values I get worse results.) How can I find the queue_depth of the BMv2 switch?

      #5500
      Elie Kfoury
      Participant

        Hi Nagmat,

        First question: BMv2 reports queue depth in number of packets, not bytes (and the packets might be smaller or bigger).

        • enq_qdepth: depth of the queue when the packet was first enqueued, in number of packets
        • deq_qdepth: depth of the queue when the packet was dequeued, in number of packets
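
        As a rough worked example: for a row like 830593862,61,755,56 and MTU-sized iperf3 packets (an assumption, roughly 1500 bytes each), the enq_qdepth of 61 packets would correspond to about 61 × 1500 B ≈ 91 KB of buffered data.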

        Second question:

        According to https://github.com/p4lang/behavioral-model/issues/31, the default queue size is 64 packets.

        You might be getting worse results with much larger buffers because of bufferbloat: when the buffer is excessively large, packets stay in the queue for a long time, which increases latency.
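
        For reference, the queue depth can be adjusted (or set back to the 64-packet default) from the runtime CLI. This is just a sketch, assuming the switch's Thrift server listens on the default port 9090:

        simple_switch_CLI --thrift-port 9090
        RuntimeCmd: set_queue_depth 64       # all egress ports, back to the default
        RuntimeCmd: set_queue_depth 128 1    # or a specific egress port (port 1 here)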

        Hope that helps.

        Regards,

        Elie.

        #5501
        Nagmat Nazarov
        Participant

          Thanks for the quick response.

          My third question is: Why don't we get values near 64 if we are still facing congestion at 4 Gbit/s?

          Instead, we observe values around 1. When I did a similar experiment with the Intel Tofino switch, I was getting consistent values: if the enq_qdepth started at 50, we would then see values in the 30s or 40s. In the BMv2 experiment we see high values only at the beginning, and afterwards only 1s.

          Kind regards,

          Nagmat.

          #5502
          Elie Kfoury
          Participant

            You need to limit the queue's dequeue rate in order to see the queue build up. You can use the set_queue_rate command in simple_switch_CLI to do that.
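
            For example, capping the dequeue rate well below what iperf3 is pushing should make the queue build up. A sketch, again assuming the default Thrift port 9090 (the 1000 pps figure is only an illustrative number):

            simple_switch_CLI --thrift-port 9090
            RuntimeCmd: set_queue_rate 1000      # dequeue at most ~1000 packets per second on all egress ports
            RuntimeCmd: set_queue_rate 1000 1    # or rate-limit a single egress port (port 1 here)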

            My guess is that you are seeing those results because the congestion is due to factors other than the link being fully occupied.
