Old brick process is still running after volume reset -> stop -> start #1451
Comments
```
[root@gluster-kube1-0 /]# ps -ef | grep -i glusterfsd
```
Attached log: kube3-glusterd2.log.gz
@PrasadDesala the old brick process is serving the other 99 PVCs, isn't it? I fail to understand why this is a bug.
Initially, brick process p1 is serving all the volumes. Once I changed a volume option on one volume (let's say PVC100) and then stopped/started that volume, a new brick process p2 started serving it. At that point p1 is serving 99 PVCs and p2 is serving PVC100, which is working as expected. I then reset PVC100 and stopped/started the volume. I see that the p2 process is still running; there is no need for it to keep running, since all the PVCs now have the same default volume options and p1 can serve all of them.
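A minimal way to verify which brick process is serving the reset volume (a sketch, assuming the standard glustercli volume status subcommand; the PVC name is the one used in the reproduce steps below):

```
# PVC name taken from the reproduce steps below.
PVC=pvc-520682df-0e6e-11e9-af0b-525400f94cb8

# With brick multiplexing enabled there should normally be a single
# glusterfsd process per node; a second one left over after the reset
# would be the stale p2 process described above.
ps -ef | grep -i glusterfsd

# Check which brick process (PID) currently serves the volume's bricks.
glustercli volume status $PVC
```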
Hmm, I think this process was registered in the daemon, which is why it still comes up as a separate process. @vpandey-RH is there an easy way to handle this scenario? In any case, please note that in a GCS environment volume reset isn't an operation we'd recommend users perform, so the priority of this issue should remain low.
Observed behavior
On a brick-mux enabled setup, the old brick process is still running after volume reset -> stop -> start.
Expected/desired behavior
The old brick process should not be running.
Details on how to reproduce (minimal and precise)
glustercli volume set pvc-520682df-0e6e-11e9-af0b-525400f94cb8 cluster/replicate.self-heal-daemon off --advanced
glustercli volume reset pvc-520682df-0e6e-11e9-af0b-525400f94cb8 cluster/replicate.self-heal-daemon
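For completeness, a sketch of the full sequence implied by the title (set an option, restart the volume, then reset and restart again), assuming the standard glustercli volume stop/start subcommands:

```
# Hedged sketch of the full reproduce sequence; assumes the usual
# glustercli volume stop/start subcommands.
PVC=pvc-520682df-0e6e-11e9-af0b-525400f94cb8

glustercli volume set $PVC cluster/replicate.self-heal-daemon off --advanced
glustercli volume stop $PVC
glustercli volume start $PVC    # a second brick process (p2) now serves this volume

glustercli volume reset $PVC cluster/replicate.self-heal-daemon
glustercli volume stop $PVC
glustercli volume start $PVC    # expected: p2 goes away; observed: p2 keeps running
```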
Information about the environment:
Glusterd2 version used (e.g. v4.1.0 or master): v6.0-dev.94.git601ba61
Operating system used: CentOS 7.6
Glusterd2 compiled from sources, as a package (rpm/deb), or container:
Using External ETCD: (yes/no, if yes ETCD version): yes; version 3.3.8
If container, which container image:
Using kubernetes, openshift, or direct install:
If kubernetes/openshift, is gluster running inside kubernetes/openshift or outside: Kubernetes