
Draft: AsyncWebServer queue support #4119

Closed
wants to merge 14 commits into from

Conversation

willmmiles
Copy link
Member

Add queuing capabilities to AsyncWebServer: deferred request handling spreads out memory load, 503 responses are generated when the device is overloaded, and the web API is effectively prevented from causing OOM crashes.

This PR should be considered beta quality; the upstream branch has not yet been merged, pending wider testing. I've tested it as thoroughly as I can, given my limited stable of devices, and I have run out of ways to make them hang or crash with curl. Please consider merging this branch into your work-in-progress code to try it on more devices and under different load conditions.
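To illustrate the idea behind the PR description (not the actual ESPAsyncWebServer interface -- the class and method names below are hypothetical), here is a minimal sketch of a bounded request queue that defers handlers and signals load-shedding with a 503 when full:

```cpp
#include <cstddef>
#include <deque>
#include <functional>

// Hypothetical sketch of the queuing concept described above. A real
// implementation lives inside the server; this only shows the policy:
// bound the queue depth, shed load (503) instead of exhausting the heap.
class RequestQueue {
public:
    explicit RequestQueue(std::size_t maxDepth) : maxDepth_(maxDepth) {}

    // Returns true if the handler was queued; false means the caller
    // should reply "503 Service Unavailable" to the client.
    bool enqueue(std::function<void()> handler) {
        if (pending_.size() >= maxDepth_) {
            return false;  // overloaded: reject rather than allocate more
        }
        pending_.push_back(std::move(handler));
        return true;
    }

    // Run one deferred handler per pass to spread out memory load over time.
    void serviceOne() {
        if (pending_.empty()) return;
        auto handler = std::move(pending_.front());
        pending_.pop_front();
        handler();
    }

    std::size_t depth() const { return pending_.size(); }

private:
    std::size_t maxDepth_;
    std::deque<std::function<void()>> pending_;
};
```

The key design point is that the queue depth limit converts a memory-exhaustion failure (many concurrent requests each allocating response buffers) into a clean, retryable HTTP error.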

- Update ESPAsyncWebserver and AsyncTCP dependency calls to the queue-supporting branches.
- Enable the new concurrent request and queue size limit features of AsyncWebServer. This should improve the handling of burst traffic or many clients, and significantly reduce the likelihood of OOM crashes due to HTTP requests.
- This can be helpful for debugging web handler related issues.
- Use the web server's queuing mechanism to call us back later.
- No locking contention, but a much larger target.
- Update based on empirical data on an ESP32-WROVER, -WROOM, and -S2.
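The "call us back later" commit above can be sketched as follows. This is illustrative only (the queue, the handler, and the simulated free-heap reading are all assumptions, not WLED or ESPAsyncWebServer code): a handler that needs a large buffer re-queues itself instead of allocating while free heap is low.

```cpp
#include <cstddef>
#include <functional>
#include <queue>

// Hypothetical illustration of a handler deferring itself via the server's
// queuing mechanism until memory pressure eases.
using Task = std::function<void()>;

std::queue<Task> webQueue;          // stand-in for the server's deferral queue
std::size_t fakeFreeHeap = 4000;    // simulated free-heap reading (assumption)

void handleJson() {
    const std::size_t needed = 8000;  // e.g. a large JSON response buffer
    if (fakeFreeHeap < needed) {
        webQueue.push(handleJson);    // defer: try again on a later loop pass
        return;
    }
    // ... build and send the response here ...
}
```

Spreading the work out this way means a burst of heavy requests is serviced one at a time as memory allows, rather than all at once.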
@willmmiles willmmiles marked this pull request as draft August 27, 2024 18:02
@blazoncek
Collaborator

When I get home from vacation I'll give it a spin.

@willmmiles willmmiles marked this pull request as ready for review September 14, 2024 20:56
@softhack007 softhack007 added this to the 0.15.1 candidate milestone Sep 28, 2024
@softhack007 softhack007 added the "enhancement" and "optimization" (re-working an existing feature to be faster, or use less memory) labels Oct 11, 2024
@blazoncek
Collaborator

Sorry for the long delay, but I finally managed to give AWS 2.3.0 + AsyncTCP 1.3.0 a spin.

ESP32 code size increases by 0.5-0.6%, which is substantial (my build goes to 98.9%).
ESP8266 seems to be OK, but I did notice less heap available (~2-3 KB). It does feel more responsive, though.

@willmmiles
Member Author

Sorry for the long delay, but I finally managed to give AWS 2.3.0 + AsyncTCP 1.3.0 a spin.

ESP32 code size increases by 0.5-0.6%, which is substantial (my build goes to 98.9%). ESP8266 seems to be OK, but I did notice less heap available (~2-3 KB). It does feel more responsive, though.

Thanks for the testing!

Re ESP8266 heap: I'll review what's going on there.
Re ESP32: I'll update this PR for AsyncTCP 1.3.1 -- outbound TCP connections are broken and will leak memory in 1.3.0. This is particularly problematic with MQTT.

@willmmiles
Member Author

Hmm.. I couldn't reproduce either issue right away.

ESP8266 - build size:

0_15:
Checking size .pio\build\esp8266_2m_160\firmware.elf
Advanced Memory Usage is available via "PlatformIO Home > Project Inspect"
RAM: [====== ] 57.1% (used 46756 bytes from 81920 bytes)
Flash: [========= ] 85.4% (used 892239 bytes from 1044464 bytes)

aws-queue:
Checking size .pio\build\esp8266_2m_160\firmware.elf
Advanced Memory Usage is available via "PlatformIO Home > Project Inspect"
RAM: [====== ] 57.1% (used 46812 bytes from 81920 bytes)
Flash: [========= ] 85.5% (used 892527 bytes from 1044464 bytes)

Runtime gave me the same output, 21.3 KB free. Is it possible there was a second browser window or some other integration active during your testing?


esp32dev:

0_15:
Checking size .pio\build\esp32dev\firmware.elf
Advanced Memory Usage is available via "PlatformIO Home > Project Inspect"
RAM: [= ] 14.8% (used 48488 bytes from 327680 bytes)
Flash: [========= ] 86.5% (used 1359993 bytes from 1572864 bytes)

aws-queue:
Checking size .pio\build\esp32dev\firmware.elf
Advanced Memory Usage is available via "PlatformIO Home > Project Inspect"
RAM: [= ] 14.8% (used 48480 bytes from 327680 bytes)
Flash: [========= ] 86.6% (used 1362629 bytes from 1572864 bytes)

So I'm seeing about a 2.6 KB flash size increase (0.1%). Not ideal, and maybe I can improve it, but not huge either.

Would you mind sending me your build and runtime configs?

@blazoncek
Collaborator

blazoncek commented Dec 6, 2024

Old 2.2.1:
RAM: [== ] 15.3% (used 50016 bytes from 327680 bytes)
Flash: [==========] 98.1% (used 1542694 bytes from 1572864 bytes)

New 2.3.0:
RAM: [== ] 15.3% (used 50004 bytes from 327680 bytes)
Flash: [==========] 98.5% (used 1548898 bytes from 1572864 bytes)

The only difference is in libraries and the change in json.cpp and wled_server.cpp. Will do more tests at home.

Test at home (on a Mac):
Old 2.2.1
RAM: [== ] 15.3% (used 50000 bytes from 327680 bytes)
Flash: [==========] 98.5% (used 1548494 bytes from 1572864 bytes)

New 2.3.0
RAM: [== ] 15.3% (used 49988 bytes from 327680 bytes)
Flash: [==========] 98.8% (used 1554762 bytes from 1572864 bytes)

The environment might be slightly different between computers.

@willmmiles
Member Author

Thanks - I wonder what's different between our environments? I do expect some code size increase -- the queuing logic isn't going to be free, unfortunately -- but 6 KB is a lot more than I'd have expected. If you can point me at where I can replicate your config/target, I'll see what I can do. I'm out of town this weekend, though, so I might not get to it until next week.

@willmmiles
Member Author

Superseded with #4480.

@willmmiles willmmiles closed this Jan 24, 2025
@blazoncek
Collaborator

So, there is no more need to test this? Do I revert all my environments?

@willmmiles
Member Author

So, there is no more need to test this? Do I revert all my environments?

I think it's about as tested as it can get without moving into main with a wider audience. I've rebased it and opened a new PR #4480 to keep the changeset simple and avoid a complex merge.

@willmmiles willmmiles deleted the aws-queue branch February 1, 2025 05:27