I first came across litestream a few weeks ago and thought it could be a great solution for my application. But between then and now I see that it is no longer described as a streaming replication tool, and that streaming replication was removed a few months ago. Darn. I also see that Windows support has been dropped. Darn again. I've read the reasons for dropping Windows in the discussions, and I've read issue #8 about dropping streaming replication, but I'm wondering whether my use case is different from what that feature was attempting to do.
I have a datalogging application that writes nearly 2000 data points to a db file every second. A new file is started every day (occasionally more often) and grows to about 800 MB by the end of the day. I need these files replicated to a remote server at some reasonable rate. Currently I have a scheduled task that finds the latest open file and transfers it via SFTP every half hour (an interval chosen based on how long the largest files take to transfer). Total data transferred in one day approaches 20 GB, which wastes a huge amount of bandwidth, and SSD writes too, because I have to make a temporary local copy before transferring and delete it afterwards. So the idea of continuously copying only the WAL segments is very attractive. I'm currently doing all of this on Windows, but would consider moving everything to Linux if it made replication easier.
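For concreteness, this is roughly the configuration I have in mind. It's only a sketch: the host, user, and paths are placeholders, and I'm assuming the SFTP replica type and the `sync-interval` setting work this way in 0.3.7.

```yaml
# litestream.yml -- a sketch, not tested; host, user, and paths are placeholders,
# and I'm assuming the sftp replica type is available in 0.3.7
dbs:
  - path: C:\datalog\2024-01-15.db           # today's open db file
    replicas:
      - type: sftp
        host: remote.example.com:22          # the server I currently SFTP to
        user: logger
        key-path: C:\Users\logger\.ssh\id_rsa
        path: /backups/2024-01-15.db
        sync-interval: 10s                   # ship WAL segments every 10 s instead of whole files
```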
The replicated files on the remote server are essentially read-only. Once a day I run a script that processes the previous day's file(s): merging them if there are more than one, adding a few calculations, and copying them to their final destination, where they can be queried but not further written to. The "raw" files received via SFTP can also be queried, but are never written or restored to the original logging computer. This is a one-way transfer of data that is logged and preserved.
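For reference, the merge step itself is nothing exotic; it boils down to something like this (a sketch, where the table name `samples` and the file names are placeholders for the real schema):

```python
import sqlite3

# Sketch of the daily merge step: fold the occasional second file
# into the first. "samples" and the file names are placeholders.
con = sqlite3.connect("2024-01-15.db")
con.execute("ATTACH DATABASE '2024-01-15b.db' AS extra")
with con:  # run the copy inside a transaction
    con.execute("INSERT INTO samples SELECT * FROM extra.samples")
con.execute("DETACH DATABASE extra")
con.close()
```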
If I use the last version that supported Windows (0.3.7), do I have a hope of making this work, or is there some fundamental problem? Since my understanding is that litestream works on a single named file, I expect I would need some external scripting around it to notice when logging has transitioned to a new file, kill the running litestream instance, and start a new one. That all seems pretty straightforward, provided this is the right tool in the first place.
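Something like the following is what I'm picturing (a rough, untested sketch; the directory, replica URL, and poll interval are placeholders):

```python
import glob
import os
import subprocess
import time

DB_DIR = r"C:\datalog"                                # where the daily files appear
REPLICA = "sftp://logger@remote.example.com/backups"  # placeholder replica URL

def newest_db():
    """Return the most recently created .db file, or None if none exist."""
    files = glob.glob(os.path.join(DB_DIR, "*.db"))
    return max(files, key=os.path.getctime) if files else None

current, proc = None, None
while True:
    latest = newest_db()
    if latest and latest != current:
        if proc is not None:
            # a gentler shutdown may be needed so litestream can finish its last sync
            proc.terminate()
            proc.wait()
        name = os.path.basename(latest)
        proc = subprocess.Popen(
            ["litestream", "replicate", latest, f"{REPLICA}/{name}"]
        )
        current = latest
    time.sleep(30)                                    # poll for a rollover
```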
Thanks.
Edit: part of my question is what distinguishes a "streaming replication tool" from a "disaster recovery tool".