VRRP master switchover leads to disconnection of TCP connections #2254
I made a PR to solve this issue: #2255
This doesn't relate to the issue, but there seem to be a couple of configuration errors in the config above:
I have been thinking further about this, and it seems to me that keepalived should be able to manage conntrackd, since there must be many users who want to use conntrackd with keepalived for exactly the reasons you are using it. The root of the problem is that if the VIPs are installed before the conntrack entries are installed into the kernel by conntrackd, and a packet is received by the new master before the relevant conntrack entry has been installed, then the kernel sends an RST. You stated in issue #2254 that executing the primary-backup.sh script "takes some time", and due to this delay there is a sufficient window for RSTs to be sent (I think the best reference for the script is https://git.netfilter.org/conntrack-tools/tree/doc/sync/primary-backup.sh). primary-backup.sh does 4 things when a VRRP instance becomes master:
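For reference, the primary/master case of that script runs roughly the following four conntrackd commands (paraphrased from the conntrack-tools example; paths and error handling omitted):

```sh
#!/bin/sh
# Paraphrase of the primary) case in conntrack-tools' doc/sync/primary-backup.sh
CONNTRACKD_BIN=/usr/sbin/conntrackd
CONNTRACKD_CONFIG=/etc/conntrackd/conntrackd.conf

# 1. commit the external cache into the kernel connection-tracking table
$CONNTRACKD_BIN -C $CONNTRACKD_CONFIG -c

# 2. flush the internal and the external caches
$CONNTRACKD_BIN -C $CONNTRACKD_CONFIG -f

# 3. resynchronize the internal cache with the kernel table
$CONNTRACKD_BIN -C $CONNTRACKD_CONFIG -R

# 4. send a bulk update to the backup node over the sync channel
$CONNTRACKD_BIN -C $CONNTRACKD_CONFIG -B
```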
I presume only the first command needs to complete before packets can be successfully handled by the kernel, and therefore only the first command needs to complete before the VIPs are used. One thought I have had is that you could use nftables to drop packets until the conntrack entries are loaded. One way to do this, based on the configuration you provided above, is:
startup-nft.sh
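Something along these lines (a minimal sketch; the table name and the VIP addresses are placeholders and would need to match your configuration):

```sh
#!/bin/sh
# Create a table whose only purpose is to drop packets addressed to the VIPs
# until the conntrack entries have been committed into the kernel.
nft add table ip vrrp_block
nft add chain ip vrrp_block prerouting '{ type filter hook prerouting priority -300; }'
nft add rule ip vrrp_block prerouting ip daddr '{ 10.0.0.100, 10.0.0.101 }' drop
```

The master case of primary-backup.sh would then delete this table (nft delete table ip vrrp_block) immediately after the conntrackd -c command completes, and the backup case would re-create it.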
shutdown-nft.sh
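And a correspondingly minimal shutdown script, so nothing is left behind when keepalived stops (same placeholder table name):

```sh
#!/bin/sh
# Remove the drop table if it is still present when keepalived shuts down.
nft delete table ip vrrp_block 2>/dev/null || true
```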
This would require upgrading to at least keepalived v2.2.0 to support startup and shutdown scripts.
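If I remember the keywords correctly, the scripts would be declared in global_defs something like this (paths are placeholders):

```
global_defs {
    # run once when keepalived starts, before any VRRP instance is brought up
    startup_script /etc/keepalived/startup-nft.sh
    # run once when keepalived exits
    shutdown_script /etc/keepalived/shutdown-nft.sh
}
```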
If you have more VRRP instances (e.g. for public and internal interfaces), then they should probably be in a sync group, and the notify scripts can be configured against the sync group (in which case more VIPs would need to be added to the list in the drop rule). Alternatively, you can add a parameter to primary-backup.sh to indicate which VRRP instance the script is being run for, and use a different table for each VRRP instance.

While this can work when the node is being used for NAT (and also for virtual_servers/real_servers), if the node is being used as a router (i.e. the purpose for which VRRP was designed), then it may not necessarily work. It would work when using VMACs, since the nft rule could drop based on the (virtual) destination MAC address, but it would be rather harder to work out how to configure nftables when not using VMACs, although it may be possible in some circumstances using destination IP addresses.

The above approaches have the benefit that they can be implemented without modifying keepalived; however, the specific nftables configurations could be quite difficult to work out. I think the best solution is to modify keepalived as follows:
An alternative is for keepalived to manage the calls to conntrackd itself. This has the advantage that, if there are multiple VRRP instances whose transition to master state requires conntrackd commands to be executed, it can avoid conntrackd being called multiple times when there are simultaneous VRRP instance state transitions. I will think further about this, and any thoughts you have about the above would be much appreciated. Further, if you are able to test the nftables idea I have outlined above, that would be most helpful.
Following your suggestion, I adjusted the scripts as follows:
shutdown.sh:
primary-backup.sh:
In my test environment nft is not available, so I used iptables instead. In my test, TCP disconnections still occurred. It seems that this solution does not work well.
In startup-nft.sh you probably need to add:

In the backup section of primary-backup.sh you also need to add:

It may be that, with the iptables command in the backup) section, the addition isn't needed in startup-nft.sh.
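My guess at what those additions amount to, using iptables and placeholder VIP addresses: re-install the drop rules in the backup) case and remove them in the primary) case once the commit has finished.

```sh
# inside the backup) case of primary-backup.sh: re-install the drop rules so
# that, on the next transition to master, traffic to the VIPs is held back
# until "conntrackd -c" has completed
for vip in 10.0.0.100 10.0.0.101; do
    iptables -t raw -I PREROUTING -d "$vip" -j DROP
done

# inside the primary) case, immediately after "conntrackd ... -c" returns:
# remove the drop rules so traffic to the VIPs can flow again
for vip in 10.0.0.100 10.0.0.101; do
    iptables -t raw -D PREROUTING -d "$vip" -j DROP
done
```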
Describe the bug
In my network experiment, I ran keepalived on two NAT devices to implement high availability. When mastership moves to the other device,
a script is executed to commit the NAT sessions to the kernel.
The sequence of actions:
The third step takes some time, which leads to connection resets if the client or server sends a packet before its NAT session has been synced.
Expected behavior
TCP connections are not reset.
Keepalived version
Configuration file:
Notify and track scripts