quinn stream is 10x slower than TCP/UDP, what could be wrong? #2153
Comments
Quinn includes encryption via TLS, while TCP/UDP don't. |
I tested it on another client (same hardware) with a wired connection; it can reach 6462908 bytes/s, a bit lower than TCP, which is reasonable. Can a high RTT be the cause? For 5G it's about 100 ms compared to 18 ms for the wired connection. If so, can I increase the window size to compensate? |
In theory, yes. See https://en.wikipedia.org/wiki/Bandwidth-delay_product. |
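For the figures quoted in this thread (roughly 100 Mbit/s of bandwidth and 100 ms of RTT), the bandwidth-delay product comes out to about 1.25 MB, so any window smaller than that will cap throughput. A quick back-of-the-envelope check, using the numbers from the comments above rather than measured values:

```rust
// Back-of-the-envelope bandwidth-delay product for the figures quoted above.
fn main() {
    let bandwidth_bits_per_sec: f64 = 100_000_000.0; // ~100 Mbit/s 5G downlink
    let rtt_sec: f64 = 0.100;                        // ~100 ms round-trip time
    // Bytes that must be "in flight" to keep the link busy for one RTT.
    let bdp_bytes = bandwidth_bits_per_sec / 8.0 * rtt_sec;
    println!("BDP ≈ {bdp_bytes:.0} bytes (~1.25 MB)");
}
```

Receive, send, and flow-control windows all need to be at least this large before the link itself becomes the limit.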
Thanks for the link. It seems that TCP can grow its window via window scaling; can QUIC do that? |
The various window options on `TransportConfig` serve that purpose. Another possibility to explore is packet loss. |
How can I observe packet loss through quinn? ----EDIT 1---- There's a per-connection stats API that reports lost packets; see the numbers in the next comment. |
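One way to watch loss from the application (and what the numbers in the next comment appear to be printing) is quinn's per-connection statistics. A minimal sketch, assuming an already-established `quinn::Connection` named `conn`; the 2-second interval is just an example:

```rust
use std::time::Duration;

// Periodically sample quinn's connection statistics to watch for packet loss.
async fn log_loss(conn: quinn::Connection) {
    loop {
        let stats = conn.stats();
        println!(
            "lost_packets: {}, lost_plpmtud_probes: {}, rtt: {:?}, cwnd: {}",
            stats.path.lost_packets,
            stats.path.lost_plpmtud_probes,
            stats.path.rtt,
            stats.path.cwnd,
        );
        tokio::time::sleep(Duration::from_secs(2)).await;
    }
}
```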
I have increased the window size and buffer size (both on the server and the client), but it didn't help:

```rust
fn set_transport_config(config: &mut TransportConfig) {
    // quinn's defaults, for reference:
    // STREAM_RWND: 12500 * 100
    // stream_receive_window: STREAM_RWND
    // send_window: 8 * STREAM_RWND
    // crypto_buffer_size: 16 * 1024
    let stream_rwnd = 12500 * 100;
    // double the window
    let stream_rwnd = stream_rwnd * 2;
    let crypto_buffer_size = 16 * 1024;
    // double the crypto buffer
    let crypto_buffer_size = crypto_buffer_size * 2;
    config.stream_receive_window((stream_rwnd as u32).into());
    config.send_window(8 * stream_rwnd);
    config.crypto_buffer_size(crypto_buffer_size);
}
```

Lost packets on the client are quite low (print interval is 2 seconds):

```
read: 37927414 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 38407326 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 38779318 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 39090255 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 39523313 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 39900975 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 40196299 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 40476012 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 40768502 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 41289600 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 41898718 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 42361589 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 42570305 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 42922423 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 43363415 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 43806485 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 44256564 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 44702395 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 45058779 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 45454929 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 46045595 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 46507047 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 47082105 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 47803392 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 48538892 bytes, lost_packets: 3, lost_plpmtud_probes: 0
read: 49142337 bytes, lost_packets: 3, lost_plpmtud_probes: 0
```

The test process consumes about 15% of CPU. |
You need to configure connection-level window sizes, not just stream-level. The smaller of the two limits applies. |
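For reference, a sketch of what raising both levels on `TransportConfig` might look like; the sizes are illustrative (around 2x the ~1.25 MB bandwidth-delay product estimated above), not values taken from this issue:

```rust
use quinn::{TransportConfig, VarInt};

// Sketch: size the per-stream and per-connection receive windows (and the
// sender-side buffer) around the estimated bandwidth-delay product.
fn configure_windows(config: &mut TransportConfig) {
    let bdp: u32 = 2_500_000; // ~2x the estimated 1.25 MB BDP, for headroom
    config.stream_receive_window(VarInt::from_u32(bdp)); // per-stream limit
    config.receive_window(VarInt::from_u32(8 * bdp));    // whole-connection limit
    config.send_window(8 * u64::from(bdp));              // sender-side buffering
}
```

Whichever of the stream-level and connection-level windows is smaller is the one that ends up limiting a single-stream transfer.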
What do you mean by stream-level? I didn't see any API related to window/buffer settings on the stream types themselves; I set the window options on `TransportConfig` (see the code above). |
Sorry, I think you mean the connection-level `receive_window`? Because it is initialized to the maximum by default, it shouldn't be the bottleneck here. |
Ah, right, that should be fine then, assuming you're installing the config correctly. The next step would be to investigate what precisely it's waiting for. I don't have the bandwidth to do this personally right now, but you could approach the problem by investigating a decrypted packet capture, and/or digging into quinn's internals. |
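For the decrypted-capture route: rustls can export TLS secrets via the standard SSLKEYLOGFILE mechanism, which Wireshark can use to decrypt the QUIC trace. A sketch of where that hook lives on the client's crypto config (building the rest of the config and handing it to quinn is elided and depends on the quinn/rustls versions in use):

```rust
use std::sync::Arc;

// Sketch: turn on SSLKEYLOGFILE-based secret logging for the rustls client
// config that quinn will use, so a packet capture can be decrypted later.
fn enable_key_log(mut client_crypto: rustls::ClientConfig) -> rustls::ClientConfig {
    // Writes session secrets to the file named by the SSLKEYLOGFILE env var.
    client_crypto.key_log = Arc::new(rustls::KeyLogFile::new());
    client_crypto
}
```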
I have a 5G network with a download limit of about 100 Mb/s and a server with a public IP. I noticed that the download speed of the QUIC connection is much lower than I expected, just around 4-5 Mb/s, so I ran a test with:

- tokio `TcpStream`
- tokio `UdpSocket`
- quinn `SendStream`/`RecvStream`

The result:
```
read: 104788840 bytes in 15 seconds, speed: 6985922 bytes/s
read: 104096000 bytes in 13 seconds, speed: 8007384 bytes/s
read: 103257760 bytes in 186 seconds, speed: 555149 bytes/s
```
Part of the program:
Tokio TCP/UDP code is more or less the same.
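Since the quinn snippet isn't reproduced above, here is a rough sketch of the shape such a measurement loop usually takes; the buffer size, duration, and error handling are illustrative assumptions, not the poster's actual code:

```rust
use std::time::Instant;

// Rough sketch of a quinn-side throughput measurement: drain a RecvStream and
// report bytes/s. Buffer size and duration are illustrative, not the original code.
async fn measure(mut recv: quinn::RecvStream) -> Result<(), quinn::ReadError> {
    let start = Instant::now();
    let mut total: u64 = 0;
    let mut buf = vec![0u8; 64 * 1024];
    // read() returns Ok(None) once the peer finishes the stream.
    while let Some(n) = recv.read(&mut buf).await? {
        total += n as u64;
        if start.elapsed().as_secs() >= 15 {
            break;
        }
    }
    let secs = start.elapsed().as_secs_f64();
    println!(
        "read: {total} bytes in {secs:.0} seconds, speed: {:.0} bytes/s",
        total as f64 / secs
    );
    Ok(())
}
```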
A few notes: