Bug: 6 GB file (307 chunks) won't join #681
Comments
Here is my stack:
My stack.env:
Here is my storage box tree:
Here is my bucket content:
Partial files are found in /tmp/zipline:
Could be a memory issue, since it has to load each partial into memory (especially for S3). Maybe try changing the temporary files location to somewhere on disk, unless you have already done that. I would also turn on debug logs.
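To illustrate the memory point above: instead of reading every partial into memory before concatenating, the chunks can be streamed one after another into the destination file, so only a small buffer is resident at any time. This is a minimal sketch under assumed names (`part-0` … `part-N` chunk files, a `joinChunks` helper), not Zipline's actual implementation:

```typescript
import { createReadStream, createWriteStream, promises as fsp } from "fs";
import * as path from "path";
import * as os from "os";

// Stream-join chunk files part-0 … part-(count-1) into `out`.
// Each chunk is piped through, so memory use stays near the stream
// buffer size regardless of total file size.
// Illustrative sketch only — not Zipline's actual code.
async function joinChunks(dir: string, count: number, out: string): Promise<void> {
  const dest = createWriteStream(out);
  for (let i = 0; i < count; i++) {
    const src = createReadStream(path.join(dir, `part-${i}`));
    await new Promise<void>((resolve, reject) => {
      src.pipe(dest, { end: false }); // keep dest open for the next chunk
      src.on("end", resolve);
      src.on("error", reject);
    });
  }
  await new Promise<void>((resolve, reject) => {
    dest.on("error", reject);
    dest.end(() => resolve());
  });
}
```

With an approach like this, a 6 GB join should cost roughly the same memory as a 60 MB one; the cost moves to disk I/O instead.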
I see. Well, I am doing both things (changing the location to the drive instead of the storage box, and enabling debug logs):
I am a bit confused. After receiving the total number of chunks, it doesn't mention anything about joining them? It just says the datasource is S3, reminds me of the connection string, and prints the instance config (invites, registration, etc.), but nothing about the actual pending job. On the UI it's the same as before: the job is pending, 0/307 chunks. Yesterday I left it for 15 hours. The server it's on may not be the most powerful on earth, but I think it should be able to handle joining 307 chunks into a 6 GB whole. It's a Hetzner CX42: 8 vCPUs, 16 GB RAM, 160 GB storage, 20 TB transfer.
The last image looks like logs for stats updating 🤔
Or some snippets that look like this?
Nope, I have shared all the logs of the operation, and there is nothing like that. And nothing else.
I split the file in two and it worked instantly and flawlessly.
I don't think I'll implement a fix for this in v3, but if the issue persists in v4 it'll most likely get handled over there (I will test this soon with a big file). Another question: are you using S3? I guess it's not a memory issue since you have a beefy server lol
Yep, that's fine.
Select the in-progress chunk uploads and clear them, clear your temp directory, delete the file that should've been made whole, and re-upload on the latest commit (trunk branch). I changed S3's saving a bit to use multipart uploading (a server-side thing, no change to uploading). If the chunk upload still doesn't work, collect any logs that look like this or this afterwards.
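For background on the multipart change mentioned above: S3 multipart uploading splits an object into independently uploaded parts, where every part except the last must be at least 5 MiB. A local sketch of just the part-planning arithmetic (no actual S3 call; `planParts` is a hypothetical helper name, not Zipline's API):

```typescript
// One planned S3 multipart part: 1-based part number and a [start, end)
// byte range within the object.
interface Part {
  partNumber: number;
  start: number;
  end: number; // exclusive
}

// S3's minimum part size (applies to every part except the last).
const MIN_PART = 5 * 1024 * 1024;

// Compute the part layout S3 multipart uploading would use for an
// object of `totalSize` bytes, uploaded in `partSize`-byte parts.
// Illustrative sketch only — not an actual S3 client.
function planParts(totalSize: number, partSize: number): Part[] {
  if (partSize < MIN_PART) throw new Error("part size below S3 minimum");
  const parts: Part[] = [];
  for (let start = 0, n = 1; start < totalSize; start += partSize, n++) {
    parts.push({ partNumber: n, start, end: Math.min(start + partSize, totalSize) });
  }
  return parts;
}
```

Because each part is sent and acknowledged separately, the server never has to buffer the whole 6 GB object for a single PUT, which is why this helps large joins.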
What happened?
Hello,
So I was uploading a 6 GB file, 307 chunks. Once the upload was complete, I got the usual pop-up offering me a link while I wait for the background process to finish.
However, it seems that background process won't finish; it won't even start.
File number 2, by the way, is a previous attempt at the same file.
It's now been close to an hour. Same for the previous file (before I deleted it): the status remains "pending".
The logs don't tell much.
On top of deleting the previous upload and re-uploading it, I have restarted the container stack twice. The status hasn't changed whatsoever.
I am running 3.7.11.
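For context on the numbers in this report: the chunk count shown in the UI is just ceiling division of the file size by the upload chunk size. A tiny sketch (the chunk-size value used in the test is an assumed illustrative figure, not necessarily this instance's configured setting):

```typescript
// Number of chunks needed to upload `fileSize` bytes in
// `chunkSize`-byte pieces — ceil(fileSize / chunkSize).
// Illustrative only; not Zipline's actual code.
function chunkCount(fileSize: number, chunkSize: number): number {
  if (chunkSize <= 0) throw new Error("chunk size must be positive");
  return Math.ceil(fileSize / chunkSize);
}
```

307 chunks for a 6 GB file works out to roughly 20 MB per chunk, so each individual chunk upload is small; it is only the final join that touches all of them.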
Version
latest (ghcr.io/diced/zipline or ghcr.io/diced/zipline:latest)
What browser(s) are you seeing the problem on?
No response
Zipline Logs
Browser Logs
Additional Info
No response