fix(client): remove needless process_output in process_input #1922
base: main
Conversation
There is no need to call `process_output` within `process_multiple_input` after each GRO datagram batch. Instead, process all available incoming datagrams in `process_multiple_input` and then move on to the top of the `loop`, calling `handler.handle` and then `process_output` as usual.
Failed Interop Tests (QUIC Interop Runner, client vs. server):
- neqo-latest as client
- neqo-latest as server

Succeeded Interop Tests (QUIC Interop Runner, client vs. server):
- neqo-latest as client
- neqo-latest as server

Unsupported Interop Tests (QUIC Interop Runner, client vs. server):
- neqo-latest as client
- neqo-latest as server
Codecov Report: All modified and coverable lines are covered by tests ✅

Coverage Diff (main vs. #1922):
- Coverage: 95.36% → 95.32% (-0.04%)
- Files: 112 (unchanged)
- Lines: 36475 (unchanged)
- Hits: 34784 → 34771 (-13)
- Misses: 1691 → 1704 (+13)

☔ View full report in Codecov by Sentry.
Let's wait for a benchmark run before merging here. Just to make sure nothing depends on the additional calls to `process_output`.
@mxinden the RPS bench seems to just sit there idle.

@mxinden anything I can do to help land this?

I suggest we land #1929 first. Might resolve the RPS benchmark stall.

Pulled in from
Benchmark results: Performance differences relative to f801c29.

- coalesce_acked_from_zero 1+1 entries: 💔 Performance has regressed. time: [196.50 ns 196.94 ns 197.42 ns]; change: [+1.4264% +1.7729% +2.1298%] (p = 0.00 < 0.05)
- coalesce_acked_from_zero 3+1 entries: Change within noise threshold. time: [237.30 ns 237.95 ns 238.68 ns]; change: [+0.0076% +0.3098% +0.6237%] (p = 0.05 < 0.05)
- coalesce_acked_from_zero 10+1 entries: No change in performance detected. time: [236.99 ns 237.78 ns 238.71 ns]; change: [-0.3228% +0.1272% +0.5887%] (p = 0.60 > 0.05)
- coalesce_acked_from_zero 1000+1 entries: Change within noise threshold. time: [217.87 ns 218.04 ns 218.24 ns]; change: [+0.8314% +1.6348% +2.3818%] (p = 0.00 < 0.05)
- RxStreamOrderer::inbound_frame(): Change within noise threshold. time: [119.42 ms 119.49 ms 119.56 ms]; change: [+0.6544% +0.7456% +0.8354%] (p = 0.00 < 0.05)
- transfer/pacing-false/varying-seeds: No change in performance detected. time: [41.586 ms 43.487 ms 45.420 ms]; change: [-2.9314% +3.0552% +9.3095%] (p = 0.34 > 0.05)
- transfer/pacing-true/varying-seeds: No change in performance detected. time: [54.106 ms 56.987 ms 59.880 ms]; change: [-4.7970% +3.1800% +11.223%] (p = 0.41 > 0.05)
- transfer/pacing-false/same-seed: No change in performance detected. time: [49.179 ms 50.640 ms 52.065 ms]; change: [-3.9362% +0.2231% +4.9892%] (p = 0.92 > 0.05)
- transfer/pacing-true/same-seed: No change in performance detected. time: [72.312 ms 79.005 ms 85.619 ms]; change: [-4.3396% +9.0694% +23.600%] (p = 0.18 > 0.05)
- 1-conn/1-100mb-resp (aka. Download)/client: 💚 Performance has improved. time: [140.16 ms 140.87 ms 141.67 ms]; thrpt: [705.86 MiB/s 709.89 MiB/s 713.45 MiB/s]; change: time [-19.713% -18.704% -17.687%] (p = 0.00 < 0.05), thrpt [+21.488% +23.007% +24.553%]
- 1-conn/10_000-parallel-1b-resp (aka. RPS)/client: 💔 Performance has regressed. time: [429.69 ms 432.06 ms 434.38 ms]; thrpt: [23.021 Kelem/s 23.145 Kelem/s 23.272 Kelem/s]; change: time [+4.2016% +5.2442% +6.3452%] (p = 0.00 < 0.05), thrpt [-5.9666% -4.9829% -4.0321%]
- 1-conn/1-1b-resp (aka. HPS)/client: No change in performance detected. time: [46.978 ms 47.244 ms 47.509 ms]; thrpt: [21.049 elem/s 21.167 elem/s 21.287 elem/s]; change: time [-1.3336% +0.3260% +2.0086%] (p = 0.70 > 0.05), thrpt [-1.9691% -0.3249% +1.3516%]

Client/server transfer results: Transfer of 33554432 bytes over loopback.
All green. Good news 😮💨

Let's hold off merging for now.

OK, making this a draft PR for now.