Request BACKUP_CUSTOM_LABEL when stopping containers #47

Open
wants to merge 1 commit into master

Conversation

jan-brinkmann
Contributor

Currently, there is a bug or, let's say, undesirable behavior when docker.sock is mounted. This will fix it:

When docker.sock is mounted (e.g., if you like to trigger docker-rotate-backups by means of POST_BACKUP_COMMAND) and BACKUP_CUSTOM_LABEL is not set, all containers that carry the label docker-volume-backup.stop-during-backup=true will be stopped, regardless of whether they carry any other label. This may not be desired.

With this pull request, we simply require BACKUP_CUSTOM_LABEL to be set in order to stop and start containers.
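
For context, BACKUP_CUSTOM_LABEL acts as an additional label filter, so a stack pairs the variable on the backup container with a matching label on the containers it should stop. A minimal sketch, with purely illustrative values:

  backup:
    image: docker-volume-backup
    environment:
      BACKUP_CUSTOM_LABEL: "project=nextcloud"   # illustrative value
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  nextcloud:
    image: nextcloud
    labels:
      - "docker-volume-backup.stop-during-backup=true"
      - "project=nextcloud"   # must match BACKUP_CUSTOM_LABEL above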

@jareware
Owner

When docker.sock is mounted ... and BACKUP_CUSTOM_LABEL is not set, all containers that have the label docker-volume-backup.stop-during-backup=true will be stopped

Hmm, maybe I misunderstand, but... isn't that exactly how it's supposed to work? How is it better to require a BACKUP_CUSTOM_LABEL when there's already a standard one that you can use?

@jan-brinkmann
Contributor Author

jan-brinkmann commented Jan 18, 2022

Let's say you have two unrelated docker-compose stacks like the ones below, both running on the same host and both mounting docker.sock. The grafana stack mounts docker.sock in order to run docker-rotate-backups. The nextcloud stack mounts docker.sock in order to stop nextcloud during the backup process.

When the grafana backup container starts its backup, it will also stop the nextcloud container. This is because it picks up the nextcloud container via the following line ($CUSTOM_LABEL is empty):

docker ps --format "{{.ID}}" --filter "label=docker-volume-backup.stop-during-backup=true" $CUSTOM_LABEL > "$TEMPFILE"
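
For reference, $CUSTOM_LABEL is presumably derived from the environment variable roughly like this (a sketch; the exact assignment in backup.sh may differ):

if [ -n "$BACKUP_CUSTOM_LABEL" ]; then
  CUSTOM_LABEL="--filter label=$BACKUP_CUSTOM_LABEL"   # e.g. --filter label=project=grafana
fi

With the variable unset, the docker ps call above matches on the standard label alone.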

With this pull request, we avoid this because we require both docker.sock and BACKUP_CUSTOM_LABEL: https://github.com/jan-brinkmann/docker-volume-backup/blob/82d193a8813d9fd7de35955c5e647e84d2acc65b/src/backup.sh#L30
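
The requirement presumably boils down to a guard roughly like the following (a sketch only, not the literal diff; the container-list variable name is made up):

if [ -S /var/run/docker.sock ] && [ -z "$BACKUP_CUSTOM_LABEL" ]; then
  echo "docker.sock is mounted but BACKUP_CUSTOM_LABEL is not set; not stopping any containers" >&2
  CONTAINERS_TO_STOP=""   # hypothetical variable; skips the stop/start logic
fi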

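  # grafana stack (first of the two unrelated stacks)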
  dashboard:
    image: grafana/grafana:7.4.5
    volumes:
      - grafana-data:/var/lib/grafana
 
  backup:
    image: docker-volume-backup
    environment:
      POST_BACKUP_COMMAND: "docker run --rm -e DRY_RUN=false -v /home/pi/backups:/archive ghcr.io/jan-brinkmann/docker-rotate-backups"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - grafana-data:/backup/grafana-data:ro
      - /home/pi/backups:/archive
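
  # nextcloud stack (unrelated to the grafana stack; the nextcloud service carries the stop-during-backup label)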
  nextcloud:
    image: nextcloud
    volumes:
      - nextcloud-data:/nextcloud
    labels:
      - "docker-volume-backup.stop-during-backup=true"

  backup:
    image: docker-volume-backup
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - nextcloud-data:/backup/nextcloud-data:ro
      - /home/pi/backups:/archive

@jareware
Owner

@jan-brinkmann okay the use case makes sense to me.

But wouldn't this mean the container stopping wouldn't work at all unless you specify a BACKUP_CUSTOM_LABEL? So we would make the default use case more complicated because of this special use case of multiple compose stacks on the same host?

@jan-brinkmann
Contributor Author

But wouldn't this mean the container stopping wouldn't work at all unless you specify a BACKUP_CUSTOM_LABEL?

Yes, that is correct.

So we would make the default use case more complicated because of this special use case of multiple compose stacks on the same host?

To be honest, it is questionable what the default use case is. Imho, running multiple docker-compose stacks is the default.

Anyway, is there an elegant way to list all containers of a certain docker-compose stack? We do not have access to the docker-compose.yml.

Here is an inelegant way: inside the backup container, we can run docker ps --filter "id=$(cat /etc/hostname)" --format "{{.Names}}". Then we receive a string like nextcloud_backup_1 (if the underlying docker-compose.yml resides within a directory named nextcloud). Then we can cut the string at the first _ to obtain nextcloud, which is the name of the docker-compose stack. Finally, docker ps --format "{{.ID}}" --filter "label=docker-volume-backup.stop-during-backup=true" --filter "name=nextcloud" shows the IDs of the containers that need to be stopped, without BACKUP_CUSTOM_LABEL. XD
But this solves the issue for docker-compose usage only.
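
Roughly, the steps described above could look like this inside the backup container (the names shown are only examples):

SELF_NAME="$(docker ps --filter "id=$(cat /etc/hostname)" --format '{{.Names}}')"   # e.g. nextcloud_backup_1
STACK="${SELF_NAME%%_*}"                                                            # cut at the first "_", e.g. nextcloud
docker ps --format "{{.ID}}" \
  --filter "label=docker-volume-backup.stop-during-backup=true" \
  --filter "name=$STACK"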

@jareware
Owner

To be honest, it is questionable what the default use case is. Imho, running multiple docker-compose stacks is the default.

That's a valid point.

But I wouldn't necessarily orient this discussion around docker-compose at all. As you correctly point out, all containers share the single Docker engine, which means there's no real distinction between compose stacks. I think that's perfectly fine, and you can use this backup utility without compose.

Besides, even if we forced you to provide a BACKUP_CUSTOM_LABEL, you still might just put in the same value for both stacks, and be equally confused when they cross-talk.

I would be perfectly happy calling this a feature, not a bug. 😇

Thoughts?

@varhub
Contributor

varhub commented Jan 28, 2022

Hello everyone,

Disclaimer: I hope not to introduce too much noise, especially since I'm following this project but not actually using it.

This is not the post I wanted to write, but it is the one that remains after ten minutes of research.

I had the naive idea to use depends_on as a way to natively establish the relation between the subject and the executor.
But reality struck back: it is not usable outside compose, nor across compose files. So no silver bullet here.

In any case, my gut still tells me that a brief recap of this project/container in terms of multiplicity (aka ER-model cardinality) is needed, both as originally conceived and as it stands today, in order to:
a) settle the foundations for further discussion
b) check whether the cardinality has already changed

I don't truly understand the necessity of multiple cron daemons other than simplicity of setup and isolation; it defeats the spirit of cron itself. But that's Docker!
Therefore, could cron be a consequence of the ambiguity about which containers to stop and of the weak subject<->executor relation? Or is it just a Docker limitation?

  • In any case, let's assume that I want two backups: one daily (incremental) and one weekly (full). Can this be defined in a single container, or is the expected way to define two docker-volume-backup containers?
    • What would the ER cardinality be?
      docker-volume-backup[1:N] <--> data[1] <--> services[1:M]
  • On the other hand, would this be possible?
    docker-volume-backup[1] <--> data[N]
    • Or like this?
      docker-volume-backup[1] <--> tgz[1] <--> data[N]
    • Or this?
      docker-volume-backup[1] <--> tgz[N] <--> data[N]

Finally, after a thorough review, the current approach of an environment variable (BACKUP_CUSTOM_LABEL) plus a container label seems the best one to me.
I would only consider a change following Traefik's configuration style:

  • Provided that stopping containers requires a specific label: docker-volume-backup.stop-during-backup=true
  • The BACKUP_CUSTOM_LABEL should evolve into a specific label: docker-volume-backup.stop-group-id=$unique
  • And the environment variable should be renamed to match the label definition.
  • Or just choose: docker-volume-backup.stop-custom-label=$unique

Then it feels obvious that both labels are required together (and the second one enforces the additional env var), even when in reality that is not the case.
But it is also a breaking change!
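
For illustration, the proposed label pair on an application container could look like this (the group-id value is made up):

  nextcloud:
    image: nextcloud
    labels:
      - "docker-volume-backup.stop-during-backup=true"
      - "docker-volume-backup.stop-group-id=nextcloud-prod"   # hypothetical new label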

@varhub
Contributor

varhub commented Jan 28, 2022

Replying to @jan-brinkmann: what happens with Docker Swarm and Kubernetes?

Self-inspection based on docker-compose may end up adding an environment variable like 'COMPOSE=true' that triggers this specific behavior, which is not simpler.

@jan-brinkmann
Contributor Author

@jareware

Besides, even if we forced you to provide a BACKUP_CUSTOM_LABEL, you still might just put in the same value for both stacks, and be equally confused when they cross-talk.

I totally agree. However, I rate the probability as very low. :)

Do we both agree that there should be advice in the documentation to use BACKUP_CUSTOM_LABEL when running multiple instances of docker-volume-backup? It is also missing from the example in the test folder.

@varhub

Finally, after a thorough review, the current approach of an environment variable (BACKUP_CUSTOM_LABEL) plus a container label seems the best one to me. I would only consider a change following Traefik's configuration style:

* Provided that stopping containers requires a specific label: `docker-volume-backup.stop-during-backup=true`

* The BACKUP_CUSTOM_LABEL should evolve to a specific label: `docker-volume-backup.stop-group-id=$unique`

* And rename environment variable to match the label definition.

* Or just choose: `docker-volume-backup.stop-custom-label=$unique`

Then it feels obvious that both labels are required together (and the second one enforces the additional env var), even when in reality that is not the case. But it is also a breaking change!

Sounds reasonable to me. It is more or less the same approach I suggested. The difference is that it works implicitly, doesn't it?

Replying to @jan-brinkmann: what happens with Docker Swarm and Kubernetes?

I don't know; I have never tried Docker Swarm. Regarding Kubernetes, you cannot mount the docker.sock. :) I.e., docker-volume-backup would never try to stop and start other containers.

Regards,
Jan

PS: Please excuse me for the late response.

@jareware
Owner

jareware commented Mar 2, 2022

@jan-brinkmann thanks again for your great contributions!

Just wanted to drop by to say I've been mega busy at the day job, and haven't had time to give this my full attention. I will again in a bit.

Hope you understand.
