[yugabyte] Use a proxy with restricted ports #1310
base: master
Conversation
Have you investigated using network load balancers targeting node ports directly? My expectation would be that the nodes will always respond on the two services, since the two kinds of ports are expected on each node.
See https://docs.cloud.google.com/kubernetes-engine/docs/concepts/service-load-balancer#effect_of_externaltrafficpolicy, especially externalTrafficPolicy: Local.
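For reference, a minimal sketch of the suggested setup, assuming a plain LoadBalancer Service in front of the tserver pods (the Service name and labels are hypothetical, not taken from this PR):

# Hypothetical Service for the suggestion above: the network load
# balancer targets the node ports directly, and externalTrafficPolicy:
# Local restricts traffic to nodes that actually run a matching pod.
apiVersion: v1
kind: Service
metadata:
  name: yb-tserver-lb  # hypothetical name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: yb-tserver
  ports:
    - name: tserver-rpc
      port: 9100
      targetPort: 9100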
After further research and discussions, my previous comment does not take into account the fact that a cluster may have more nodes than the desired yugabyte replication. In the meantime, could you please split the PR to isolate the public port adjustment?
Please split the documentation formatting improvement as well. Thank you.
I think we can move forward with this option for the moment. We may optimize later. In any case, we want a solution that avoids expensive limitations involving an unreasonable amount of cloud resources. Can you please make it optional for Google deployments, since AWS should not require it?
There's an upside to choosing this option: knowing that some yugabyte services are unsecure, we may use haproxy to secure them if we need to expose them to other participants.
In addition, it could be a solution to #1139.
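A sketch of how the opt-in could look in the chart values; the flag name and structure are assumptions, not part of this PR:

# Hypothetical values.yaml excerpt: the proxy is off by default and
# only enabled where the load balancer misroutes ports (e.g. GKE).
yugabyte:
  proxy:
    enabled: false  # set to true for Google deployments

The proxy templates would then be wrapped in a {{ if .Values.yugabyte.proxy.enabled }} guard so AWS deployments render no proxy resources.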
---
apiVersion: v1
kind: Secret
Would a ConfigMap not be more appropriate?
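A minimal sketch of the ConfigMap alternative, assuming the object only carries a non-sensitive haproxy configuration (the config content shown is illustrative):

# Hypothetical ConfigMap variant: the haproxy configuration holds no
# secrets, so it can be mounted into the proxy pods from a ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: yugabyte-proxy-{{$i}}
data:
  haproxy.cfg: |
    defaults
      mode tcp
      timeout connect 5s
      timeout client 1h
      timeout server 1h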
name: yugabyte-proxy-{{$i}}
spec:
  replicas: 2
Can you please add a comment to highlight the fact that there are two replicas per DSS node, and the rationale behind it?
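For example, something along these lines (the wording is a suggestion; the redundancy rationale is taken from the PR description below):

# Two proxy replicas per DSS node: the proxy sits between the load
# balancer and the yugabyte pods, so a second replica keeps the node
# reachable while one proxy pod restarts or is rescheduled.
replicas: 2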
server yb-tserver-{{$i}} yb-tserver-{{$i}}.yb-tservers.default.svc.cluster.local:9100 check resolvers dns
---
apiVersion: apps/v1
This component shall be documented somewhere, for instance here: https://github.com/interuss/dss/edit/master/docs/architecture.md
The implementation of a single IP for the yugabyte master/tserver in #1259 had a hidden issue: using different ports on different services doesn't work well in GKE (and minikube), as the load balancer may target a tserver pod on a master port or a master pod on a tserver port. This wasn't spotted before because connections are retried and eventually established after a few tries.
This PR fixes the issue by using an intermediate, internal proxy that redirects tserver ports only to the tserver services and master ports only to the master services. Two proxies are used for redundancy; they simply forward TCP connections at layer 4.
The new proxy also forwards only the 'secure' ports and drops the non-encrypted ones, which are used only for yugabyte's web interface, contributing to #1214.
yb-admin, used to manage the cluster, still works as it uses secure ports. Documentation has also been updated.
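As an illustration of the port restriction, a sketch of the proxy Service, assuming yugabyte's default port numbering (7100 for master RPC, 9100 for tserver RPC, as in the haproxy backend above); the actual manifest in the PR may differ:

# Hypothetical proxy Service: only the RPC ports carrying encrypted
# traffic are listed; the plain-HTTP web UI ports (7000/9000) are
# omitted and therefore dropped at the proxy.
apiVersion: v1
kind: Service
metadata:
  name: yugabyte-proxy-{{$i}}
spec:
  type: LoadBalancer
  selector:
    app: yugabyte-proxy-{{$i}}
  ports:
    - name: master-rpc
      port: 7100
    - name: tserver-rpc
      port: 9100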
Tested in an aws|helm <-> gke|tanka cluster, working as expected.