fetch fail: cause: AggregateError [ETIMEDOUT]: #2777
Comments
Thanks for reporting! Can you provide steps to reproduce? We often need a reproducible example, e.g. some code that allows someone else to recreate your problem just by copying and pasting it. If it involves more than a couple of different files, create a new repository on GitHub and add a link to it.
The repo is here: https://github.com/guotie/fetch-failed. I think the problem is a timeout: when the network is OK, it rarely throws.
Facing a similar issue.
Same issue, does anyone know the solution?
If internet connectivity were the issue, then why does curl always work and never time out?
It seems this error only appears in certain environments or on certain devices? It's hard to reproduce, but I think the bug really exists; someone already described the same issue here too: #2990
Or on certain hosts? I faced this issue when trying to fetch the Telegram API.
Here's the endpoint that I try to fetch:
Can you provide an MRE (Minimum Reproducible Example)?
I've provided the Minimum Reproducible Example on my Gist, as you suggested. If you have a strong or reliable internet connection, consider simulating slow connectivity to see if the error replicates. After all, ETIMEDOUT errors are more likely to occur under limited bandwidth conditions. Interestingly, while fetch sometimes throws this error, curl seems to be able to avoid it in this scenario.
Hmm, this does not seem like an undici bug. The errors shown by the example and the roots of the issue mostly point to the initial TCP connection (including TLS), meaning the connection times out before it is ever established. You can attempt to extend the overall timeout while creating a custom Agent with a larger connect timeout. As well, you can wrap it with a retry mechanism (e.g. undici's RetryAgent) so transient failures are retried.
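A minimal sketch of that suggestion (not the maintainer's exact code; the 30-second timeout, `maxRetries: 3`, and the hosts are illustrative assumptions):

```ts
import { Agent, RetryAgent, setGlobalDispatcher, fetch } from 'undici';

// Custom Agent with a larger connect (TCP/TLS) timeout.
const agent = new Agent({
  connect: { timeout: 30_000 }, // ms; raise this on high-latency networks
});

// Optionally wrap it so transient network errors are retried
// (which errors and how many attempts are retried is configurable).
const dispatcher = new RetryAgent(agent, { maxRetries: 3 });

// Route undici's fetch through the custom dispatcher...
setGlobalDispatcher(dispatcher);
const res = await fetch('https://api.telegram.org'); // example host from this thread
console.log(res.status);

// ...or pass it per request instead of globally:
// await fetch('https://api.telegram.org', { dispatcher });
```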
In the docs it says:
Why does it close the connection too early if the default is 300 seconds? Does Node's fetch use undici differently internally? Btw, this is an unrelated question: why can't I access the undici bundled with Node directly?
The timeouts you mention are applied only once the connection is already established (while waiting for the response headers and body); the connect phase has its own, separate timeout, which is what is expiring here. Sadly no, you'll need to install undici from npm to use it directly.
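To illustrate the distinction, a sketch of where each timeout applies on an undici Agent (300_000 ms matches the 300-second default discussed above; the connect value is an illustrative override):

```ts
import { Agent } from 'undici';

const agent = new Agent({
  // Applies while establishing the TCP/TLS connection.
  connect: { timeout: 30_000 },
  // These apply only after the connection exists:
  headersTimeout: 300_000, // waiting for response headers
  bodyTimeout: 300_000,    // waiting while reading the body
});
```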
Thanks for your assistance! It's confirmed that increasing the connection timeout resolved the issue in my scenario ^^
I am able to reliably repro it when fetching too many URLs at once. nodejs/node-core-utils#810
I've been working on the same problem for a day. Here's the solution: check your IP/DNS configuration or clear your DNS cache; make sure your router is serving a real DNS server, or change your DNS settings to point at a real DNS server. The observation is that on my online server the code doesn't throw this AggregateError [ETIMEDOUT], but on my local machine it always throws this error, and only when I'm at home on my local network. Looking further, the main errors thrown are ETIMEDOUT and ENETUNREACH. This usually occurs when the DNS module is unable to resolve the IP address.
If the request succeeded once, the IP address shown in the error log may be a cached one stored on your machine from before. My hypothesis is that when working behind a network or router without a working DNS server, Node's dns module can no longer resolve the IP address, doesn't fall back to the cached one, and throws errors. I'll check this out, but that's the solution.
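A quick way to check whether DNS resolution itself is the problem, independent of fetch (a sketch; api.telegram.org is just the host mentioned earlier in the thread):

```ts
import { getServers, lookup } from 'node:dns/promises';

// Resolvers picked up from the OS config (used by dns.resolve*()).
console.log('configured resolvers:', getServers());

// lookup() goes through the OS resolver (getaddrinfo), which is what
// net.connect and therefore fetch use by default.
try {
  console.log(await lookup('api.telegram.org', { all: true }));
} catch (err) {
  console.error('lookup failed:', err);
}
```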
I have encountered the same error when experimental HTTP/2 support is enabled with massively parallel requests (more than 32 at once, toward about 20 different domains). With HTTP/2 support disabled, the error is gone.
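For reference, HTTP/2 in undici is opt-in via the `allowH2` option on the Agent; a sketch of the toggle described above:

```ts
import { Agent, setGlobalDispatcher } from 'undici';

// allowH2 defaults to false (HTTP/1.1 only). Setting it to true enables the
// experimental HTTP/2 support that triggered the error described above;
// leaving it at the default avoids it.
setGlobalDispatcher(new Agent({ allowH2: true }));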
Maybe this will also be solved by #3707 by @metcoder95.
@guotie @Uzlopak I have run into the same issue - only to recall that I had already resolved it once, but did not remember the code. On some networks (like mine today, on an LTE tethered connection in a country far away from my database provider) the default connection-attempt timeout of 250 ms is simply too short. Doubling that time solves all issues for me:
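A sketch of that workaround, assuming the 250 ms default being referred to is Node's happy-eyeballs `autoSelectFamilyAttemptTimeout` (this is not necessarily the commenter's exact snippet):

```ts
import net from 'node:net';

// Default is 250 ms per address-family connection attempt; double it.
net.setDefaultAutoSelectFamilyAttemptTimeout(500);

// The same value can also be set process-wide via the
// --network-family-autoselection-attempt-timeout CLI flag.
```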
Is it still reproducible, or can this be closed now that #3707 landed?
@metcoder95 yes it is: Tested on "undici": "^7.2.1"
To be able to reproduce you would need some combination of:
OR ...
I believe this is purely related to Node's default of 250 ms, which is too short on some networks - there was some debate about whether it is too short or not, but it was closed without changes.
I'd rather say that this is not really an undici issue, but rather a question of Node's default.
@metcoder95 true. It could be debated whether this is the right default in Node core. Either way, I expect issues will continue to be opened in upstream repositories like this one.
Do you have the PR at hand? Why was it removed? If seeking to extend the timeout, I'd suggest opening an issue in Node.js with references to the other issues you've opened and the feedback you've got, to see where it lands. For this issue, having a comment about high-latency networks added to the documentation should be OK.
@metcoder95
Can you just address the recommendation there?
It is addressed, and the PR was also approved by you. Not sure how your merging process works.
Bug Description
My code is TypeScript. I run it in two ways:
when run with Bun, no error occurs;
when run with Node, fetch failed occurs extremely frequently.
The request is an Apollo GraphQL request.
The proxy is a local HTTP proxy:
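The proxy configuration itself was not included in the report; a sketch of what routing fetch through a local HTTP proxy with undici typically looks like (the address 127.0.0.1:7890 and the GraphQL endpoint are placeholders):

```ts
import { ProxyAgent, setGlobalDispatcher, fetch } from 'undici';

// Placeholder address for a local HTTP proxy.
setGlobalDispatcher(new ProxyAgent('http://127.0.0.1:7890'));

const res = await fetch('https://example.com/graphql'); // hypothetical endpoint
console.log(res.status);
```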
Logs & Screenshots
Environment
Mac M1 Sonoma 14.2
Node.js v21
Additional context