Forum Replies Created
This is the error on the network service:
```
failed lease update - all units failed priming: Exception during create for unit: e4ea540a-02fb-4ebe-91b2-9d87e0ec344a
Playbook has failed tasks: NSO commit returned JSON-RPC error: type: rpc.method.failed, code: -32000, message: Method failed,
data: message: External error in the NED implementation for device tacc-data-sw: Tue Dec 12 18:14:02.649 UTC

Failed to commit one or more configuration items during a pseudo-atomic operation.
All changes made have been reverted.
SEMANTIC ERRORS: This configuration was rejected by the system due to semantic errors.
The individual errors with each failed configuration command can be found below.

l2vpn
 bridge group bg-net3-e4ea540a-02fb-4ebe-91b2
  bridge-domain bd-net3-e4ea540a-02fb-4ebe-
   interface HundredGigE0/0/0/7.2102
    Invalid argument: Interface already used for L2VPN

end, internal: jsonrpc_tx_commit357#
```
I am facing similar issues while creating the slice.
What may be the issue?
The slice ID is d072423f-1deb-401b-8b9f-650400497c51.
September 27, 2023 at 5:40 pm in reply to: Regarding queue buildup in p4 switches in Fabric Testbed #550161
Thanks for the quick response.
My third question is: why don’t we get values near 64, since we are still facing congestion at 4 Gbit/s?
Instead, we mostly observe values around 1. When I ran a similar experiment on an Intel Tofino switch, I got consistent values: if enq_qdepth started at 50, subsequent values stayed around the 30s or 40s. In the BMv2 experiment we only see high values at the beginning, and afterwards only 1s.
Kind regards,
Nagmat.
Hi Nagmat,
Reaching ~4 Gbps on BMv2 is not common. What kind of tuning did you do? Would you mind sharing your notebook?
Regards,
Elie.
How can I share it?
Hi,
I was able to get ~1 Gbit/s with TCP; however, there is always a slight mismatch between packets sent and received as reported by iperf3, with a couple of retries during the first second.
I used a UDP connection instead, and I am seeing consistent behaviour across runs: a modest amount of loss during the first second.
I am using the same notebook as mentioned above, but replaced the client command with the following:
```
iperf3 -c 192.168.2.10 -u -l 1300 -b 600M
```
I have pasted the results for multiple runs with the above values. Is there an explanation for this drop, which occurs mostly in the first second?
Accepted connection from 192.168.1.10, port 36772
[ 5] local 192.168.2.10 port 5201 connected to 192.168.1.10 port 33996
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-1.00 sec 69.7 MBytes 585 Mbits/sec 0.044 ms 501/56731 (0.88%)
[ 5] 1.00-2.00 sec 71.5 MBytes 600 Mbits/sec 0.052 ms 0/57691 (0%)
[ 5] 2.00-3.00 sec 71.5 MBytes 600 Mbits/sec 0.052 ms 0/57696 (0%)
[ 5] 3.00-4.00 sec 71.5 MBytes 600 Mbits/sec 0.049 ms 0/57690 (0%)
[ 5] 4.00-5.00 sec 71.5 MBytes 599 Mbits/sec 0.052 ms 58/57694 (0.1%)
[ 5] 5.00-6.00 sec 71.5 MBytes 600 Mbits/sec 0.063 ms 0/57691 (0%)
[ 5] 6.00-7.00 sec 71.5 MBytes 600 Mbits/sec 0.057 ms 0/57694 (0%)
[ 5] 7.00-8.00 sec 71.5 MBytes 600 Mbits/sec 0.055 ms 0/57691 (0%)
[ 5] 8.00-9.00 sec 71.5 MBytes 600 Mbits/sec 0.057 ms 0/57693 (0%)
[ 5] 9.00-10.00 sec 71.5 MBytes 600 Mbits/sec 0.018 ms 0/57709 (0%)
[ 5] 10.00-10.02 sec 1.12 MBytes 600 Mbits/sec 0.004 ms 0/901 (0%)
– – – – – – – – – – – – – – – – – – – – – – – – –
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-10.02 sec 715 MBytes 598 Mbits/sec 0.004 ms 559/576881 (0.097%) receiver
———————————————————–
Server listening on 5201
———————————————————–
Accepted connection from 192.168.1.10, port 46278
[ 5] local 192.168.2.10 port 5201 connected to 192.168.1.10 port 39296
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-1.00 sec 69.6 MBytes 584 Mbits/sec 0.058 ms 598/56731 (1.1%)
[ 5] 1.00-2.00 sec 71.5 MBytes 600 Mbits/sec 0.055 ms 0/57692 (0%)
[ 5] 2.00-3.00 sec 71.5 MBytes 600 Mbits/sec 0.053 ms 0/57693 (0%)
[ 5] 3.00-4.00 sec 71.5 MBytes 599 Mbits/sec 0.112 ms 0/57634 (0%)
[ 5] 4.00-5.00 sec 71.6 MBytes 601 Mbits/sec 0.041 ms 5/57751 (0.0087%)
[ 5] 5.00-6.00 sec 71.5 MBytes 600 Mbits/sec 0.064 ms 0/57690 (0%)
[ 5] 6.00-7.00 sec 71.5 MBytes 600 Mbits/sec 0.057 ms 0/57694 (0%)
[ 5] 7.00-8.00 sec 71.5 MBytes 600 Mbits/sec 0.043 ms 0/57690 (0%)
[ 5] 8.00-9.00 sec 71.5 MBytes 600 Mbits/sec 0.057 ms 0/57695 (0%)
[ 5] 9.00-10.00 sec 71.5 MBytes 600 Mbits/sec 0.063 ms 0/57692 (0%)
[ 5] 10.00-10.02 sec 1.13 MBytes 610 Mbits/sec 0.013 ms 0/914 (0%)
– – – – – – – – – – – – – – – – – – – – – – – – –
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-10.02 sec 714 MBytes 598 Mbits/sec 0.013 ms 603/576876 (0.1%) receiver
———————————————————–
Server listening on 5201
———————————————————–
Accepted connection from 192.168.1.10, port 42888
[ 5] local 192.168.2.10 port 5201 connected to 192.168.1.10 port 36183
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-1.00 sec 70.3 MBytes 590 Mbits/sec 0.063 ms 0/56730 (0%)
[ 5] 1.00-2.00 sec 71.5 MBytes 600 Mbits/sec 0.052 ms 0/57694 (0%)
[ 5] 2.00-3.00 sec 71.5 MBytes 600 Mbits/sec 0.061 ms 0/57691 (0%)
[ 5] 3.00-4.00 sec 71.5 MBytes 600 Mbits/sec 0.054 ms 0/57693 (0%)
[ 5] 4.00-5.00 sec 71.5 MBytes 600 Mbits/sec 0.051 ms 0/57693 (0%)
[ 5] 5.00-6.00 sec 71.5 MBytes 600 Mbits/sec 0.063 ms 0/57691 (0%)
[ 5] 6.00-7.00 sec 71.5 MBytes 600 Mbits/sec 0.057 ms 0/57692 (0%)
[ 5] 7.00-8.00 sec 71.5 MBytes 600 Mbits/sec 0.048 ms 0/57695 (0%)
[ 5] 8.00-9.00 sec 71.5 MBytes 600 Mbits/sec 0.049 ms 0/57690 (0%)
[ 5] 9.00-10.00 sec 71.5 MBytes 600 Mbits/sec 0.062 ms 0/57691 (0%)
[ 5] 10.00-10.02 sec 1.07 MBytes 577 Mbits/sec 0.053 ms 0/867 (0%)
– – – – – – – – – – – – – – – – – – – – – – – – –
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-10.02 sec 715 MBytes 599 Mbits/sec 0.053 ms 0/576827 (0%) receiver
———————————————————–
Server listening on 5201
———————————————————–
Accepted connection from 192.168.1.10, port 53714
[ 5] local 192.168.2.10 port 5201 connected to 192.168.1.10 port 46178
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-1.00 sec 69.1 MBytes 580 Mbits/sec 0.056 ms 965/56730 (1.7%)
[ 5] 1.00-2.00 sec 71.5 MBytes 600 Mbits/sec 0.045 ms 0/57695 (0%)
[ 5] 2.00-3.00 sec 71.5 MBytes 600 Mbits/sec 0.050 ms 0/57689 (0%)
[ 5] 3.00-4.00 sec 71.5 MBytes 600 Mbits/sec 0.049 ms 0/57693 (0%)
[ 5] 4.00-5.00 sec 71.5 MBytes 600 Mbits/sec 0.048 ms 0/57692 (0%)
[ 5] 5.00-6.00 sec 71.5 MBytes 600 Mbits/sec 0.045 ms 0/57692 (0%)
[ 5] 6.00-7.00 sec 71.5 MBytes 600 Mbits/sec 0.042 ms 0/57695 (0%)
[ 5] 7.00-8.00 sec 71.5 MBytes 600 Mbits/sec 0.048 ms 0/57688 (0%)
[ 5] 8.00-9.00 sec 71.5 MBytes 600 Mbits/sec 0.051 ms 0/57693 (0%)
[ 5] 9.00-10.00 sec 71.5 MBytes 600 Mbits/sec 0.054 ms 0/57692 (0%)
[ 5] 10.00-10.02 sec 1.07 MBytes 574 Mbits/sec 0.054 ms 0/867 (0%)
– – – – – – – – – – – – – – – – – – – – – – – – –
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-10.02 sec 714 MBytes 598 Mbits/sec 0.054 ms 965/576826 (0.17%) receiver

Hi,
I think the minor performance degradation is related to reaching the specified rate limit (600M).
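As a sanity check on these numbers (my own arithmetic, not from the original posts): with `-l 1300 -b 600M`, iperf3 should send roughly 600×10⁶ / (1300 × 8) ≈ 57,692 datagrams per second, which matches the per-second Lost/Total denominators (~57,690) in the logs above, so the sender is indeed pinned at the configured rate.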
After tuning the servers and the switch, I was getting near 4 Gbit/s with the BMv2 switch.
```
[ 5] 55.00-56.00 sec 449 MBytes 3.77 Gbits/sec 1751 67.4 MBytes
[ 5] 56.00-57.00 sec 428 MBytes 3.59 Gbits/sec 1930 62.7 MBytes
[ 5] 57.00-58.00 sec 471 MBytes 3.95 Gbits/sec 2035 69.3 MBytes
[ 5] 58.00-59.00 sec 431 MBytes 3.62 Gbits/sec 1863 51.0 MBytes
[ 5] 59.00-60.00 sec 470 MBytes 3.94 Gbits/sec 1384 40.1 MBytes
– – – – – – – – – – – – – – – – – – – – – – – – –
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-60.00 sec 26.2 GBytes 3.74 Gbits/sec 121474 sender
[ 5] 0.00-60.07 sec 25.8 GBytes 3.69 Gbits/sec receiver
```
I tried the same example with UDP as well, but ended up with about 1 Gbit/s with UDP packets.
```
[ 5] 54.00-55.00 sec 456 MBytes 3.82 Gbits/sec 183850
[ 5] 55.00-56.00 sec 455 MBytes 3.82 Gbits/sec 183609
[ 5] 56.00-57.00 sec 454 MBytes 3.81 Gbits/sec 183204
[ 5] 57.00-58.00 sec 454 MBytes 3.81 Gbits/sec 183261
[ 5] 58.00-59.00 sec 456 MBytes 3.83 Gbits/sec 183899
[ 5] 59.00-60.00 sec 457 MBytes 3.84 Gbits/sec 184449
– – – – – – – – – – – – – – – – – – – – – – – – –
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-60.00 sec 26.8 GBytes 3.83 Gbits/sec 0.000 ms 0/11050971 (0%) sender
[ 5] 0.00-60.07 sec 9.98 GBytes 1.43 Gbits/sec 0.012 ms 6929044/11050832 (63%) receiver
```
What may be the reason for UDP packets ending up at about 1 Gbit/s?
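As a consistency check (my own arithmetic): the receiver line reports 63% datagram loss against an offered 3.83 Gbit/s, and 3.83 × (1 − 0.63) ≈ 1.42 Gbit/s, which matches the 1.43 Gbit/s receiver rate, so the reduced UDP goodput is fully accounted for by the reported losses.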
Dear Elie,
After executing with “-t 60” I was still getting the ~1 Gbit/s speed limit.
Hi Nagmat,
I see from your screenshot that for the first few seconds the throughput was zero, and then it went up. I believe that if you run iperf for a longer time, the average will go up to 1 Gbps.
You can try using the -t option to specify how long the iperf3 test should run (e.g., -t 60 runs the test for 60 seconds).
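For example, keeping everything else the same (the server address is the one from the earlier runs in this thread):
```
iperf3 -c 192.168.2.10 -t 60
```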
Regards,
Elie.
I tried to add some info to the header and executed the main1.p4 program (attached) again.
For some reason it didn’t work for me, and since logging was disabled I couldn’t debug the program.
I executed a similar program in my other experiments using fabric_testbed and it was working fine.
What may be the reason?
Kind regards,
Nagmat
I tried the same example; after tuning the servers and the switch, I am getting a maximum of 3.17 Gbits/sec and an average of 709 Mbits/s.
Server configs:
```
server1 = slice.add_node(name="server1",
                         site=site1,
                         cores=8,
                         ram=16,
                         disk=500,
                         image='default_ubuntu_20')

server2 = slice.add_node(name="server2",
                         site=site3,
                         cores=8,
                         ram=16,
                         disk=500,
                         image='default_ubuntu_20')
```
Switch config:
```
switch = slice.add_node(name="switch",
                        site=site2,
                        cores=32,
                        ram=16,
                        disk=40,
                        image='default_ubuntu_20')
```
The results are attached.
Kind regards,
Nagmat
August 30, 2023 at 2:28 pm in reply to: What is the Maximum throughput achieved in Fabric Testbed? #5198
My fablib version is 1.5.4, as shown in the attachment.
I am using Jupyter Hub.
After running “sudo usermod -G docker rocky” and tuning the parameters on both hosts, I am getting 30 Gbit/s.
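For context, the host tuning referred to here is along these lines (a minimal sketch using standard Linux sysctls; the buffer values and interface name are illustrative, not the exact settings used):
```
# Raise socket buffer ceilings so TCP can open a large window on high-BDP paths.
sudo sysctl -w net.core.rmem_max=2147483647
sudo sysctl -w net.core.wmem_max=2147483647
sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 2147483647"
sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 2147483647"
# Jumbo frames, if the dataplane path supports them.
sudo ip link set dev ens7 mtu 9000
```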
I am doing the same experiment by connecting the two nodes (scheme attached) with a network service (direct connection), but I am getting only 20 Gbit/s.
Is the network service causing the 10 Gbit/s loss on the Ethernet connection?
Kind regards,
Nagmat
August 30, 2023 at 12:13 pm in reply to: What is the Maximum throughput achieved in Fabric Testbed? #5193
I am not sure why at this point, but I just tried the same slice (WASH/INDI) and got 46 Gbps.
I can’t explain it for the moment. Try a couple of other pairs (e.g. INDI/STAR, UCSD/DALL); you can just override the site selection in the notebook:
```
#[site1, site2] = fablib.get_random_sites(count=2)
[site1, site2] = ('STAR', 'INDI')
print(f"Sites: {site1}, {site2}")
```
Example from my run:
```
Connecting to host 10.133.3.2, port 5201
[  5] local 10.140.5.2 port 37080 connected to 10.133.3.2 port 5201
[  7] local 10.140.5.2 port 37088 connected to 10.133.3.2 port 5201
[  9] local 10.140.5.2 port 37090 connected to 10.133.3.2 port 5201
[ 11] local 10.140.5.2 port 37102 connected to 10.133.3.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr   Cwnd
[  5] 0.00-10.01 sec  11.9 GBytes  10.2 Gbits/sec   3557  54.1 MBytes  (omitted)
[  7] 0.00-10.01 sec  12.6 GBytes  10.8 Gbits/sec   6110  45.3 MBytes  (omitted)
[  9] 0.00-10.01 sec  21.2 GBytes  18.2 Gbits/sec  18196  62.8 MBytes  (omitted)
[ 11] 0.00-10.01 sec  17.1 GBytes  14.7 Gbits/sec   4092  65.9 MBytes  (omitted)
[SUM] 0.00-10.01 sec  62.8 GBytes  53.9 Gbits/sec  31955               (omitted)
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5] 0.00-10.01 sec  10.9 GBytes  9.36 Gbits/sec    35  45.1 MBytes
[  7] 0.00-10.01 sec  8.72 GBytes  7.49 Gbits/sec    80  39.2 MBytes
[  9] 0.00-10.01 sec  17.5 GBytes  15.0 Gbits/sec   140  71.4 MBytes
[ 11] 0.00-10.01 sec  17.8 GBytes  15.3 Gbits/sec    36  71.4 MBytes
[SUM] 0.00-10.01 sec  55.0 GBytes  47.2 Gbits/sec   291
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5] 10.01-20.01 sec  10.1 GBytes  8.68 Gbits/sec   293  55.2 MBytes
[  7] 10.01-20.01 sec  8.31 GBytes  7.14 Gbits/sec    41  83.8 MBytes
[  9] 10.01-20.01 sec  17.7 GBytes  15.2 Gbits/sec    57   107 MBytes
[ 11] 10.01-20.01 sec  18.2 GBytes  15.6 Gbits/sec   226  83.3 MBytes
[SUM] 10.01-20.01 sec  54.3 GBytes  46.6 Gbits/sec   617
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5] 20.01-30.00 sec  14.7 GBytes  12.6 Gbits/sec  1652  79.7 MBytes
[  7] 20.01-30.00 sec  7.91 GBytes  6.80 Gbits/sec   322  58.0 MBytes
[  9] 20.01-30.00 sec  17.7 GBytes  15.3 Gbits/sec   693  80.4 MBytes
[ 11] 20.01-30.00 sec  13.3 GBytes  11.4 Gbits/sec  1073  60.4 MBytes
[SUM] 20.01-30.00 sec  53.6 GBytes  46.1 Gbits/sec  3740
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5] 0.00-30.00 sec  35.7 GBytes  10.2 Gbits/sec  1980  sender
[  5] 0.00-30.03 sec  36.0 GBytes  10.3 Gbits/sec        receiver
[  7] 0.00-30.00 sec  24.9 GBytes  7.14 Gbits/sec   443  sender
[  7] 0.00-30.03 sec  25.1 GBytes  7.19 Gbits/sec        receiver
[  9] 0.00-30.00 sec  53.0 GBytes  15.2 Gbits/sec   890  sender
[  9] 0.00-30.03 sec  52.7 GBytes  15.1 Gbits/sec        receiver
[ 11] 0.00-30.00 sec  49.3 GBytes  14.1 Gbits/sec  1335  sender
[ 11] 0.00-30.03 sec  49.1 GBytes  14.1 Gbits/sec        receiver
[SUM] 0.00-30.00 sec   163 GBytes  46.6 Gbits/sec  4648  sender
[SUM] 0.00-30.03 sec   163 GBytes  46.6 Gbits/sec        receiver
```
Thanks for the clarification.
I am getting errors during the run phase from the Jupyter notebook. The error is:
“docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? See ‘docker run --help’.”
That is why I am logging in to the nodes and running the iperf3 commands manually; that may be causing the issue.
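For reference, the usual fix for this daemon error (a sketch, assuming a standard Docker install and the rocky user) is to make sure the daemon is running and the user is in the docker group, then start a fresh session:
```
sudo systemctl enable --now docker   # start the daemon if it is not running
sudo usermod -aG docker rocky        # -a appends; plain -G replaces the user's other groups
newgrp docker                        # or log out and back in to pick up the new group
docker run --rm hello-world          # quick check that the daemon is reachable
```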
I chose SALT and WASH this time, but I am still getting:
```
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.05  sec   617 MBytes   515 Mbits/sec   receiver
```
How can I resolve the issue?
August 29, 2023 at 5:37 pm in reply to: What is the Maximum throughput achieved in Fabric Testbed? #5185
Between WASH and INDI, the maximum I got was 1.28 Gbit/s, as shown in the attachment.
August 29, 2023 at 4:31 pm in reply to: What is the Maximum throughput achieved in Fabric Testbed? #5184
“should get at least 30”
I tried the example from the first link, but I got only 817 Mbit/s, nowhere near 30 Gbit/s.
Below is a snapshot of the example:
```
3: ens7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 26:27:49:84:cd:8d brd ff:ff:ff:ff:ff:ff
    altname enp0s7
    inet 10.133.133.2/24 scope global ens7
       valid_lft forever preferred_lft forever
    inet6 fe80::aaa4:4e25:fba5:5251/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[rocky@Node2 ~]$ iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.132.7.2, port 46800
[  5] local 10.133.133.2 port 5201 connected to 10.132.7.2 port 46802
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  75.9 MBytes   636 Mbits/sec
[  5]   1.00-2.00   sec  98.9 MBytes   830 Mbits/sec
[  5]   2.00-3.00   sec  99.2 MBytes   832 Mbits/sec
[  5]   3.00-4.00   sec  99.1 MBytes   831 Mbits/sec
[  5]   4.00-5.00   sec   100 MBytes   841 Mbits/sec
[  5]   5.00-6.00   sec   101 MBytes   844 Mbits/sec
[  5]   6.00-7.00   sec  98.7 MBytes   828 Mbits/sec
[  5]   7.00-8.00   sec  98.8 MBytes   829 Mbits/sec
[  5]   8.00-9.00   sec   101 MBytes   844 Mbits/sec
[  5]   9.00-10.00  sec   101 MBytes   845 Mbits/sec
[  5]  10.00-10.04  sec  2.80 MBytes   637 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.04  sec   976 MBytes   815 Mbits/sec   receiver
```
What may I be doing wrong that I am not getting 30 Gbit/s?
The website is still giving an error:
“Notice: Due to a major power outage in the area, some FABRIC services including the dataplane are currently unavailable.”
Apart from that, these two network components have been stuck in the “Ticketed” state for 10 minutes:
```
7220acbd-e677-43d1-a0d6-282fcbc6ca12  net_s1_s2  network  Ticketed
65a44115-4a68-44e8-b445-00b75623b73f  net_s2_s3  network  Ticketed
```
Is it related to the outage?
Kind regards,
Nagmat
The error is:
```
failed lease update - all units failed priming: Exception during create for unit: 7220acbd-e677-43d1-a0d6-282fcbc6ca12
Playbook has failed tasks: NSO commit returned JSON-RPC error: type: rpc.method.failed, code: -32000, message: Method failed,
data: message: External error in the NED implementation for device dall-data-sw: Tue Aug 29 14:55:57.056 UTC

Failed to commit one or more configuration items during a pseudo-atomic operation.
All changes made have been reverted.
SEMANTIC ERRORS: This configuration was rejected by the system due to semantic errors.
The individual errors with each failed configuration command can be found below.

l2vpn
 bridge group bg-net_s1_s2-7220acbd-e677-43d1
  bridge-domain bd-net_s1_s2-7220acbd-e677-
   interface HundredGigE0/0/0/9.2056
    Invalid argument: Interface already used for L2VPN

end, internal: jsonrpc_tx_commit357#
```
June 30, 2023 at 8:29 pm in reply to: Internet issues on SALT – 2001:400:a100:3010:f816:3eff:febc:362a #4626
I am trying to install iperf3 on the SALT s3 node with management IP 2001:400:a100:3010:f816:3eff:febc:362a.
The slice ID is 9a2465fc-bdaf-4579-b5c3-6fc5afabd429.
What may be the issue?
I have attached a screenshot from the experiment.
Update on the situation:
There is internet access on s3 (SALT), but somehow there is no internet access inside Docker.
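A quick way to narrow this down (a diagnostic sketch; the image and hostname are just examples) is to test raw-IP connectivity and DNS separately, from the host and from inside a container:
```
ping -c 3 8.8.8.8                                    # host: raw-IP connectivity
docker run --rm busybox ping -c 3 8.8.8.8            # container: raw-IP connectivity
docker run --rm busybox nslookup fabric-testbed.net  # container: DNS resolution
```
If the raw-IP test works inside the container but DNS fails, the container resolver is the likely culprit; it can be overridden via the "dns" key in /etc/docker/daemon.json.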
June 19, 2023 at 3:03 pm in reply to: Getting AttributeError: ‘Node’ object has no attribute ‘add_fabnet’ #4554
It worked after removing
fabrictestbed-extensions==1.3.3
from requirements.txt
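For anyone hitting the same AttributeError, the equivalent manual fix (a sketch; the exact release that first added add_fabnet is not stated in this thread) is to drop the old pin and install a newer version:
```
pip uninstall -y fabrictestbed-extensions
pip install --upgrade fabrictestbed-extensions   # a newer release provides Node.add_fabnet
```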
Thanks!