
quiche(http3) is slower than HTTP2 #1486

Open
bhzhu203 opened this issue Apr 20, 2023 · 14 comments

@bhzhu203

bhzhu203 commented Apr 20, 2023

The average request_time for HTTP/3 is considerably higher than HTTP/2's in our production environment for USA users.

According to the introductory documentation, HTTP/3 response times should be lower and more stable than TCP's. But in our real production environment (USA/AU/EU users) it is different: while the average response time over HTTP/2 is 0.037s, the average response time of quiche (HTTP/3) is 0.549s. Is UDP traffic given low priority by most telecom providers?

version b3c73e3 + nginx patch https://github.com/cloudflare/quiche/tree/master/nginx
[screenshot]

HTTP3 is always slow
[screenshot]

By the way, I have tried another nginx patch which reduces the average response time:

nginx-1.24.0 with thread support + kn007's patch

https://github.com/kn007/patch/blob/master/nginx_with_quic.patch

[screenshot]

@LPardue
Contributor

LPardue commented Apr 24, 2023

Thanks for the report.

Can you explain a bit more about how you measure response time please?

Are you measuring timings over the same page? Or at least pages of roughly equal size? What is the page/asset size?

@bhzhu203
Author

bhzhu203 commented Apr 25, 2023

> Thanks for the report.
>
> Can you explain a bit more about how you measure response time please?
>
> Are you measuring timings over the same page? Or at least pages of roughly equal size? What is the page/asset size?

The request_time values come from the nginx logs.

The log results above are ordered by cost over a period of time, such as 24 hours, and HTTP/3 request_time is always at the top. The first picture graphs the average request_time per hour:

NO HTTP3: select now() as time, avg(cost) as NO_HTTP3_COST where rq not like '%HTTP/3%'
HTTP3:    select now() as time, avg(cost) as HTTP3_COST where rq like '%HTTP/3%'

   log_format compression escape=json '{"@timestamp":"$time_iso8601",'
                           '"ip":"$remote_addr","host":"$http_host",'
                           '"rq":"$request","rqb":"$request_body",'
                           '"st":"$status","size":$body_bytes_sent,'
                           '"ua":"$http_user_agent","ck":"$http_cookie",'
                           '"cost":"$request_time",'
                           '"ref":"$http_referer",'
                           '"xff":"$http_x_forwarded_for",'
                           '"ust":"$upstream_status",'
                           '"uip":"$upstream_addr",'
                           '"ut":"$upstream_response_time"}';
  
    access_log  logs/access.log  compression;
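
The same per-protocol averages can also be sanity-checked locally with a short script over this JSON access log (a sketch, not part of the thread; the "rq" and "cost" field names match the log_format above):

```python
import json
from collections import defaultdict

def avg_cost_by_protocol(lines):
    """Average the "cost" field ($request_time), split by whether the
    "rq" field ($request) indicates an HTTP/3 request."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for line in lines:
        entry = json.loads(line)
        proto = "HTTP3" if "HTTP/3" in entry["rq"] else "NO_HTTP3"
        sums[proto] += float(entry["cost"])
        counts[proto] += 1
    return {p: sums[p] / counts[p] for p in sums}

# Example with two synthetic log lines:
sample = [
    '{"rq":"GET / HTTP/3.0","cost":"0.549"}',
    '{"rq":"GET / HTTP/2.0","cost":"0.037"}',
]
print(avg_cost_by_protocol(sample))
```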

Our website is https://www.yfn.com , serving HTTP/3 and HTTP/2 at the same time.

We do not set the "alt-svc" header, because I have found that some clients have no mature HTTP/3 support (perhaps only older HTTP/3 versions), and their request_time in the nginx logs is far too high. Other clients can still connect to our HTTP/3 service without the "alt-svc" header (I assume their HTTP/3 versions are recent enough).

You could run a test against our website.

@LPardue
Contributor

LPardue commented Apr 25, 2023

Server-side request_time isn't a very accurate measure, because once data is handed off to the kernel the application loses visibility; this is especially true for TCP.

Have you confirmed these results using client-side measurements?
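
One way to gather client-side timings (a sketch, not something from this thread: it assumes a curl build with HTTP/3 support, and the --http3-only flag needs curl 7.88 or newer) is to compare curl's reported total transfer time per protocol:

```python
import subprocess

def parse_time_total(output):
    # curl -w '%{time_total}' prints the total time in seconds
    # as a plain decimal string.
    return float(output.strip())

def time_request(url, http3=False):
    # One request, body discarded; returns total transfer time in seconds.
    # http3=True requires a curl built with HTTP/3 support.
    cmd = ["curl", "-o", "/dev/null", "-s", "-w", "%{time_total}",
           "--http3-only" if http3 else "--http2", url]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return parse_time_total(out.stdout)
```

Running time_request(url) and time_request(url, http3=True) repeatedly against the same asset gives comparable client-side numbers, independent of nginx's $request_time.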

@bhzhu203
Author

> Server side request_time isn't a very accurate measure because once data is handed off to the kernel the application loses visibility, especially true for TCP.
>
> Have you confirmed these results using client-side measurements?

We have no client-side measurements yet, just server-side request_time. But I have found that HTTP/3 offers little speed advantage over HTTP/2.

In some situations, some of our employees have found our web pages loading slowly, and then discovered in the F12 console that some small resources (<100 kB) served over HTTP/3 took more than 60s to download.
[screenshot: 2023-04-26_09-31]

[screenshot: 2023-04-26_09-27_1]

@bhzhu203
Author

[screenshot]

After switching to the official nginx-quic, the latency becomes lower and more stable, very close to HTTP/2's.

@Ryenum

Ryenum commented Dec 3, 2023

Hello, I encountered some problems when configuring the QUIC service of nginx. My configuration is the same as the official website's, but I still cannot use the QUIC protocol when accessing the server; the h2 protocol is still used. This is the configuration:

server {
    # Enable QUIC and HTTP/3.
    listen 443 quic reuseport;
    server_name test.cn;

    # Enable HTTP/2 (optional).
    listen 443 ssl http2;

    ssl_certificate      /usr/local/nginx/conf/cert/test.pem;
    ssl_certificate_key  /usr/local/nginx/conf/cert/test.key;

    # Enable all TLS versions (TLSv1.3 is required for QUIC).
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;

    # Add Alt-Svc header to negotiate HTTP/3.
    add_header alt-svc 'h3=":443"; ma=86400';
}
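
For reference, with the native QUIC support in nginx 1.25+, a roughly equivalent server block looks like this (a sketch; `http2 on;` replaces the listen-line `http2` parameter as of nginx 1.25.1, and browsers generally also require a publicly trusted certificate before they will upgrade to HTTP/3 via Alt-Svc):

```nginx
server {
    # HTTP/3 over QUIC and HTTP/2 over TCP on the same port.
    listen 443 quic reuseport;
    listen 443 ssl;
    http2 on;

    server_name test.cn;

    ssl_certificate      /usr/local/nginx/conf/cert/test.pem;
    ssl_certificate_key  /usr/local/nginx/conf/cert/test.key;

    # QUIC requires TLSv1.3.
    ssl_protocols TLSv1.2 TLSv1.3;

    # Advertise HTTP/3 so clients can switch on a later request.
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
```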

@bhzhu203
Author

> Hello, I encountered some problems when configuring the QUIC service of nginx. My configuration is the same as the official website's, but I still cannot use the QUIC protocol when accessing the server; the h2 protocol is still used. […]

Maybe you could try Firefox or Edge rather than Chrome. I think the browser manages the protocol switching.

@Ryenum

Ryenum commented Dec 13, 2023

> Maybe you could try Firefox or Edge rather than Chrome. I think the browser manages the protocol switching.

Thank you very much for your reply. However, the latest version of nginx has started to support the QUIC protocol, and I have successfully configured it.
I have a new question for you: what test tool was used to produce the graphs in the pictures above?

@bhzhu203
Author

> Thank you very much for your reply. However, the latest version of nginx has started to support the QUIC protocol, and I have successfully configured it. I have a new question for you: what test tool was used to produce the graphs in the pictures above?

First, you should configure your nginx to use JSON log output:

   log_format compression escape=json '{"@timestamp":"$time_iso8601",'
                           '"ip":"$remote_addr","host":"$http_host",'
                           '"rq":"$request","rqb":"$request_body",'
                           '"st":"$status","size":$body_bytes_sent,'
                           '"ua":"$http_user_agent","ck":"$http_cookie",'
                           '"cost":"$request_time",'
                           '"ref":"$http_referer",'
                           '"xff":"$http_x_forwarded_for",'
                           '"ust":"$upstream_status",'
                           '"uip":"$upstream_addr",'
                           '"ut":"$upstream_response_time"}';
  
    access_log  logs/access.log  compression;

Second, use the Alibaba Cloud SLS service (the logtail tool, which you can install on your own servers; they don't have to be Alibaba Cloud servers) to collect the nginx log data.

Then use that data to draw whatever graph you need. You can query it like SQL; the experience is similar to Kibana.

Analyzing the "cost" column ('"cost":"$request_time",') as a per-minute or per-hour average, you can find something.

[screenshot: 2023-12-14_10-43]

[screenshot: 2023-12-14_10-46]

@Ryenum

Ryenum commented Jan 1, 2024

Hello, I reproduced the same effect as you with a locally self-hosted server: the QUIC protocol's download speed is indeed not as good as TCP's, but its upload speed is better than TCP's. Do you know the reason?

@xiaorong61

I suspect it's a congestion control issue; I think it would be faster if BBR were used, especially when the RTT is very high.

@bhzhu203
Author

> Hello, I reproduced the same effect as you with a locally self-hosted server: the QUIC protocol's download speed is indeed not as good as TCP's, but its upload speed is better than TCP's. Do you know the reason?

Here are some articles on "QUIC is not Quick Enough over Fast Internet" that may explain the issue:

  1. https://dl.acm.org/doi/10.1145/3589334.3645323
  2. https://www.reddit.com/r/programming/comments/1g7vv66/quic_is_not_quick_enough_over_fast_internet/
  3. https://arxiv.org/abs/2310.09423

@xiaorong61

I think it has something to do with the send buffer. If the send buffer is too small, it will hurt performance on high-latency links.

@ghedo
Member

ghedo commented Jan 23, 2025

Regarding send buffering, that's been at the back of my mind for a while actually, so I made #1921, which adds a configuration option to increase the send capacity by a given factor.

I haven't had time to do any measuring at this point, but I imagine setting this to 2 or 3 could help prevent starvation of the connection while waiting for more data to be written by the application.
