First of all, kubefwd is a really great tool!

The use case I'm trying out with kubefwd is similar to what is discussed in #214. I would like to run kubefwd in a pod and consume the exposed services from a different pod. The internal services are in a different cluster.
The reasons for not running kubefwd directly in the same pod as the client are as follows:
- There are scenarios where users running the client will be given terminal access. Since kubefwd must run as root, running it in a separate pod is better security-wise, IMO.
- The client and kubefwd have different resource requirements (CPU, memory).
As kubefwd does not support binding to IP addresses other than loopback addresses, I used an iptables rule to forward traffic arriving on the eth0 interface to the relevant loopback IP:
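Roughly, the rule looks like this (a sketch only; 127.1.27.1 is a placeholder for whatever loopback IP kubefwd assigned to the service, and 9090 is the forwarded port):

```sh
# Redirect traffic arriving on eth0 port 9090 to the loopback IP kubefwd bound the service to.
# 127.1.27.1 and 9090 are placeholders for the actual kubefwd-assigned IP and forwarded port.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 9090 \
  -j DNAT --to-destination 127.1.27.1:9090

# Allow routing to loopback destinations for packets arriving on a non-loopback interface.
sysctl -w net.ipv4.conf.eth0.route_localnet=1
```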
Generically, this iptables command needs to know the port bound to the k8s endpoint, the relevant loopback IP, and the forwarded port. Both ports are the same (9090) above.
I'm also thinking of using a k8s liveness probe with a simple telnet command to detect when a port-forward has died: the kubefwd pod would then be restarted by k8s, refreshing the endpoint configuration.
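As a minimal sketch of such a check (using nc instead of telnet to get a clean exit code; 127.1.27.1 and 9090 are again placeholders for a kubefwd-assigned endpoint), the command an exec liveness probe could run might be:

```sh
# Fails (non-zero exit) when the forwarded endpoint is unreachable, so a k8s
# exec liveness probe running this command would restart the kubefwd pod.
# 127.1.27.1 and 9090 are placeholders for a kubefwd-assigned service endpoint.
nc -z -w 2 127.1.27.1 9090
```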
There could be multiple services port-forwarded and exposed on different loopback IP addresses, so for this approach to work I would need to dynamically discover the correct IP address for each service. At the moment I do not have a good way of doing this, but a potentially hacky way would be to grep the modified /etc/hosts file for the relevant IP. I guess I would have to wait until the kubefwd process has modified /etc/hosts before doing the IP extraction and iptables changes (maybe kubefwd could run in an init container and share /etc/hosts with the main container, so that the changes are already in place when the main container starts?).
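For illustration, that discovery step could look roughly like this (a sketch only; my-service is a hypothetical forwarded hostname, and the port has to come from configuration since /etc/hosts only maps hostnames to IPs):

```sh
#!/bin/sh
# Hypothetical example: discover the loopback IP kubefwd assigned to "my-service"
# by grepping the /etc/hosts entries kubefwd wrote, then install the forwarding rule.
SERVICE_HOST="my-service"   # placeholder service hostname
SERVICE_PORT=9090           # placeholder port; /etc/hosts does not contain ports

# Wait until kubefwd has written the entry (e.g. when sharing /etc/hosts with another container).
until LOOPBACK_IP=$(grep -m1 -w "$SERVICE_HOST" /etc/hosts | awk '{print $1}') && [ -n "$LOOPBACK_IP" ]; do
  sleep 1
done

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport "$SERVICE_PORT" \
  -j DNAT --to-destination "${LOOPBACK_IP}:${SERVICE_PORT}"
```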
An alternative approach is to use a k8s ingress controller here to expose the internal services privately.
What do you all think about this workflow regarding kubefwd? I do understand this is not exactly the core use case of kubefwd, but I would greatly appreciate suggestions and ideas about drawbacks and possible pitfalls.