Forum Replies Created
September 27, 2023 at 7:31 pm in reply to: Regarding queue buildup in p4 switches in Fabric Testbed #5502
You need to limit the rate in order to see the queue buildup. You can use the set_queue_rate command to do that.
My guess is that you are seeing those results because the congestion is due to factors other than the link being fully occupied.
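A minimal sketch of rate-limiting the queue at runtime, assuming the switch's Thrift port is the default 9090 (adjust the port and rate for your setup):

```shell
# Limit the BMv2 egress queue to 1000 packets per second so that the
# queue can actually build up under load. Assumes simple_switch_CLI can
# reach the switch's Thrift server on the default port (9090).
simple_switch_CLI --thrift-port 9090 <<EOF
set_queue_rate 1000
EOF
```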
September 27, 2023 at 3:50 pm in reply to: Regarding queue buildup in p4 switches in Fabric Testbed #5500
Hi Nagmat,
First question: the unit used by BMv2 is the number of packets (you might have smaller or bigger packets).
- enq_qdepth
- depth of the queue when the packet was first enqueued, in number of packets
- deq_qdepth
- depth of the queue when the packet was dequeued, in number of packets
Second question:
According to https://github.com/p4lang/behavioral-model/issues/31, the default queue size is 64 packets.
You might be getting worse results with much larger buffers due to bufferbloat. This phenomenon occurs when the buffer is excessively large, so packets sit in the queue for a long time, which increases latency.
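If you do want to experiment with the buffer size, a minimal sketch of changing it at runtime (assuming the default Thrift port 9090; the depth value here is only an example):

```shell
# Raise the BMv2 queue depth from the default 64 packets to 128.
# Going much larger risks the bufferbloat effect described above.
simple_switch_CLI --thrift-port 9090 <<EOF
set_queue_depth 128
EOF
```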
Hope that helps.
Regards,
Elie.
Hi Nagmat,
Reaching ~4Gbps on BMv2 is not common. What kind of tuning did you do? Would you mind sharing your notebook?
Regards,
Elie.
Hi Nishanth,
From the results you’re sharing, this looks like a minor performance degradation at the beginning of the test. It may be related to the initial burst of traffic before the 600Mbps rate stabilizes.
If the switch is dropping the packets, you can also try increasing the queue size on the BMv2 switch by using the set_queue_depth command in the simple_switch_CLI tool. You can refer to Lab 5 to interact with the switch at runtime.
Another suggestion is to try using nuttcp for UDP tests. ESnet suggests nuttcp instead of iperf3 for UDP testing (https://fasterdata.es.net/performance-testing/network-troubleshooting-tools/nuttcp/)
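A minimal sketch of a nuttcp UDP test, assuming nuttcp is installed on both hosts and 10.0.0.2 is the receiver's address (both are placeholders for your topology):

```shell
# On the receiver, start nuttcp in server mode:
nuttcp -S

# On the sender, push 600 Mbps of UDP traffic for 30 seconds,
# reporting per-second intervals:
nuttcp -u -R600m -T30 -i1 10.0.0.2
```

Unlike iperf3, nuttcp reports sender- and receiver-side rates together with the drop percentage, which makes it easier to see where UDP packets are being lost.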
Regards,
Elie.
Hi Nagmat,
Good to hear that you were able to reach the 1Gbps speed.
You did not attach the main1.p4 program. I would suggest running BMv2 with logging enabled. You can refer to lab2_P4_program_building_blocks.ipynb notebook to understand how you can enable logging.
The typical workflow is to start with logging enabled so that you can verify the behavior of your P4 program. Afterwards, you can move your program to the high performance BMv2.
Regards,
Elie.
Hi Nagmat,
I see from your screenshot that for the first few seconds the throughput was zero, and then it went up. I believe that if you run iperf3 for a longer time, the average will go up to 1Gbps.
You can try using the -t option to specify the time the iperf3 test should be running (e.g., -t 60 will run the test for 60 seconds).
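A minimal sketch of a longer test, assuming the server runs at 10.0.0.2 (a placeholder address):

```shell
# On the server:
iperf3 -s

# On the client: run a 60-second test with per-second reports,
# so the slow first seconds weigh less in the average.
iperf3 -c 10.0.0.2 -t 60 -i 1
```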
Regards,
Elie.
Hi Nishanth,
I tried running the notebook you mentioned, and I got a throughput close to 1Gbps. Note that I changed the sites since NCSA, STAR, and UMich are under maintenance. I used the following sites:
site1='MAX'
site2='MASS'
site3='NEWY'
I just executed the cells sequentially and got high throughput, as shown in the figure. Can you try changing the sites and repeating the experiment?
Regards,
Elie.
July 24, 2023 at 1:49 pm in reply to: Label exception: Unable to set field numa of labels, no such field available #4805
Hi Komal,
Thanks for your help. Following your steps solved the issue.
Regards,
Elie.