Expected Behavior
The Knative service status should be "Ready" instead of hanging in the "Unknown" state.
Actual Behavior
When I deploy a Knative service in an EKS cluster, it remains in "Unknown" status until the Istio ingress controllers are restarted, even though the application can be reached.
It then switches to "Ready", the next application deployed is stuck in "Unknown" status, and so on.
kubectl get ksvc gsvc-serving-db07943b -n eb7d5189
NAME URL LATESTCREATED LATESTREADY READY REASON
gsvc-serving-db07943b http://test-eb7d5189.serverless-dev.xyz.crashcourse.com gsvc-serving-db07943b-00001 gsvc-serving-db07943b-00001 Unknown Uninitialized
The application is exposed with a load balancer and is reachable.
Here are the details of the Knative service status:
{
"conditions": [
{
"lastTransitionTime": "2025-02-06T13:43:16Z",
"message": "Waiting for load balancer to be ready",
"reason": "Uninitialized",
"status": "Unknown",
"type": "LoadBalancerReady"
},
{
"lastTransitionTime": "2025-02-06T13:43:16Z",
"status": "True",
"type": "NetworkConfigured"
},
{
"lastTransitionTime": "2025-02-06T13:43:16Z",
"message": "Waiting for load balancer to be ready",
"reason": "Uninitialized",
"status": "Unknown",
"type": "Ready"
}
],
"observedGeneration": 1
}
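For reference, these conditions can be pulled straight from the service with something like the following (assuming jq is installed):

kubectl get ksvc gsvc-serving-db07943b -n eb7d5189 -o json | jq '.status.conditions'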
strange findings
Logs from istiod
2025-02-03T10:57:39.622954Z info ads Push debounce stable[112] 1 for config Secret/eb7d5189/gsvc-pull-2f42fda6-serving-f61c3f70: 100.240948ms since last change, 100.240879ms since last push, full=false
2025-02-03T10:57:39.732479Z info model Incremental push, service gsvc-serving-db07943b-00001-private.eb7d5189.svc.cluster.local at shard Kubernetes/Kubernetes has no endpoints
2025-02-03T10:57:39.756497Z info model Full push, new service eb7d5189/gsvc-serving-db07943b-00001.eb7d5189.svc.cluster.local
2025-02-03T10:57:39.924255Z info ads Push debounce stable[113] 5 for config ServiceEntry/eb7d5189/gsvc-serving-db07943b-00001-private.eb7d5189.svc.cluster.local and 1 more configs: 100.548746ms since last change, 200.632268ms since last push, full=true
"outbound|443||gsvc-serving-db07943b-00001-private.eb7d5189.svc.cluster.local": {},
"outbound|8012||gsvc-serving-db07943b-00001-private.eb7d5189.svc.cluster.local": {},
"outbound|8022||gsvc-serving-db07943b-00001-private.eb7d5189.svc.cluster.local": {},
"outbound|80||gsvc-serving-db07943b-00001-private.eb7d5189.svc.cluster.local": {},
"outbound|9090||gsvc-serving-db07943b-00001-private.eb7d5189.svc.cluster.local": {},
"outbound|9091||gsvc-serving-db07943b-00001-private.eb7d5189.svc.cluster.local": {}
2025-02-03T10:58:02.542487Z info model Full push, new service eb7d5189/gsvc-serving-db07943b-00001-private.eb7d5189.svc.cluster.local
2025-02-03T10:58:02.722779Z info ads Push debounce stable[114] 3 for config ServiceEntry/eb7d5189/gsvc-serving-db07943b-00001-private.eb7d5189.svc.cluster.local and 1 more configs: 100.661715ms since last change, 180.224615ms since last push, full=true
2025-02-03T10:58:02.961683Z info ads Push debounce stable[115] 3 for config ServiceEntry/eb7d5189/gsvc-serving-db07943b.eb7d5189.svc.cluster.local and 2 more configs: 100.299834ms since last change, 160.890523ms since last push, full=true
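The "has no endpoints" line above suggests checking whether the revision's private service ever gets endpoints, and what the ingress gateway's Envoy actually sees for the corresponding cluster. A rough check, with the gateway pod name and namespace as placeholders, would be:

# Kubernetes view: does the private service have ready endpoints?
kubectl get endpoints gsvc-serving-db07943b-00001-private -n eb7d5189

# Envoy view from an ingress gateway pod (pod name and namespace are placeholders)
istioctl proxy-config endpoints <istio-ingressgateway-pod> -n istio-system \
  --cluster "outbound|80||gsvc-serving-db07943b-00001-private.eb7d5189.svc.cluster.local"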
Services before ingress-controller restart
kubectl get svc -n eb7d5189
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gsvc-serving-db07943b ExternalName <none> test-eb7d5189.serverless-dev.xyz.crashcourse.com 80/TCP 3h37m
gsvc-serving-db07943b-00001 ClusterIP 172.20.247.29 <none> 80/TCP,443/TCP 3h37m
gsvc-serving-db07943b-00001-private ClusterIP 172.20.132.196 <none> 80/TCP,443/TCP,9090/TCP,9091/TCP,8022/TCP,8012/TCP 3h37m
Services after ingress-controller restart
kubectl get svc -n eb7d5189
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gsvc-serving-db07943b ExternalName <none> knative-local-gateway.istio-system.svc.cluster.local 80/TCP 3h37m
gsvc-serving-db07943b-00001 ClusterIP 172.20.247.29 <none> 80/TCP,443/TCP 3h37m
gsvc-serving-db07943b-00001-private ClusterIP 172.20.132.196 <none> 80/TCP,443/TCP,9090/TCP,9091/TCP,8022/TCP,8012/TCP 3h37m
The EXTERNAL-IP of the ExternalName service changed from test-eb7d5189.serverless-dev.xyz.crashcourse.com to knative-local-gateway.istio-system.svc.cluster.local.
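That target can be read directly from the Service spec, e.g.:

kubectl get svc gsvc-serving-db07943b -n eb7d5189 -o jsonpath='{.spec.externalName}'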
The ExternalName should point to Istio; it is used for different purposes, e.g. traffic splitting.
I haven't checked all the details yet, but is that external name being exposed on the AWS LB directly somehow (due to your ingresses), or is Istio not picking up changes? Could you try a more standard approach, as in the Knative docs, as a smoke test?
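A minimal smoke test of that kind could be a plain hello-world Service; the name, namespace and sample image below are illustrative, not the reporter's actual workload:

# Deploy a minimal Knative Service and watch whether it ever becomes Ready
kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-smoke-test
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: ghcr.io/knative/helloworld-go:latest
EOF
kubectl get ksvc hello-smoke-test -n default -w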
Thanks for your reply!
The Istio gateway mode I use is "simple", and the one set in the ingress controller is apparently mTLS (controlPlaneAuthPolicy: MUTUAL_TLS).
I tried a standard approach by installing knative/istio/net-istio using this piece of documentation and got the exact same result.
When the proxy config is removed from podAnnotations, it works, but we lose the ability to keep the client source IP, which is not desirable.
A couple of combinations have been tested based on this proxy config, but we had no luck.
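For what it's worth, the proxy config the gateway pods actually picked up can be inspected from their annotations; the istio=ingressgateway label and istio-system namespace below are assumptions, since the gateways here are custom:

# Show the proxy.istio.io/config annotation on the ingress gateway pods (assumes jq is installed)
kubectl -n istio-system get pods -l istio=ingressgateway -o json \
  | jq '.items[].metadata.annotations["proxy.istio.io/config"]'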
It seems like a probe, maybe from net-istio, is having issues.
Moreover, we came across this feature request, which looks very much like what we're facing right now.
What version of Knative?
1.17.0
net-istio: 1.17.0
istio: 1.24.2
So, for the load balancing I use an AWS NLB, and everything seems to be OK; all the targets (15021, 443, 80) are healthy.
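That target health can also be checked from the CLI; the target group ARN below is a placeholder:

# Check health of the NLB target groups fronting the ingress gateway
aws elbv2 describe-target-health --target-group-arn <target-group-arn>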
I also noticed a couple of logs, probably related to the issue.
I'd also like to point out that I looked at the Route and the Ingress resources as well.
Some tests
From a "Ready" service
From the "Unknown" service
Steps to Reproduce the Problem
Ingress controllers
My setup has some particularities. I use 3 different ingress controllers configured with the helm values as below:
In case you're wondering, I use the proxy config for matching source IPs and use them in an AuthorizationPolicy afterwards.
Knative
I deploy Knative using the knative-operator as follows:
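A minimal sketch, assuming the operator's KnativeServing API (the domain is taken from the URLs above; everything else is illustrative, not the actual manifest):

# Illustrative KnativeServing resource for the knative-operator
kubectl apply -f - <<EOF
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  ingress:
    istio:
      enabled: true
  config:
    network:
      ingress-class: istio.ingress.networking.knative.dev
    domain:
      serverless-dev.xyz.crashcourse.com: ""
EOF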
The domain-template is linked to an operator we have, so never mind that.
Knative Service
Same for the annotations/labels; they are linked to the operator.