I'm trying to learn Kubernetes. I have a project that uses WebSockets, and I'm trying to apply sticky sessions for that purpose while working with multiple pods.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lct-api-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: lct-api
  template:
    metadata:
      labels:
        app: lct-api
    spec:
      containers:
      - name: lct-api
        image: localhost:7000/lct:latest
        imagePullPolicy: Always
        resources:
          requests:
            memory: "200Mi"
            cpu: "200m"
          limits:
            memory: "300Mi"
            cpu: "350m"
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: lct-api-service
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type: "cookies"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-name: "example"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-ttl: "60"
spec:
  selector:
    app: lct-api
  type: LoadBalancer
  sessionAffinity: ClientIP
  externalTrafficPolicy: Local
  ports:
  - protocol: TCP
    port: 6008
    targetPort: 80
And I have no idea why it's not working. On the client side I'm using SignalR with a React app. The problem occurs when the negotiate request does not land on the same pod, so the WebSocket connection cannot be established.
My question is: is there any way to configure the Kubernetes load balancer to work with sticky sessions?
EDIT
After Kiran Kotturi's comment, my YAML files look like this:
apiVersion: v1
kind: Service
metadata:
  name: lct-api-service
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type: "cookies"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-name: "example"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-ttl: "60"
spec:
  selector:
    app: lct-api
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - protocol: TCP
    port: 6008
    targetPort: 80
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lct-api
spec:
  replicas: 5
  selector:
    matchLabels:
      app: lct-api
  template:
    metadata:
      labels:
        app: lct-api
    spec:
      containers:
      - name: lct
        image: localhost:7000/lct:latest
        imagePullPolicy: Always
        resources:
          requests:
            memory: "200Mi"
            cpu: "200m"
          limits:
            memory: "300Mi"
            cpu: "350m"
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: lct
            topologyKey: kubernetes.io/hostname
And it still doesn't work. The WS connection is established from time to time, but that's just luck, because it lands on the same pod by accident.
As per the documentation, sticky sessions route consistently to the same nodes, not pods, so you should avoid having more than one pod per node serving requests.
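One way to enforce one pod per node is a required podAntiAffinity rule. A minimal sketch (note that the labelSelector must match the labels on the pod template itself, which is app: lct-api in your deployment, or the rule matches nothing and pods can still co-locate):

```yaml
# Sketch: schedule at most one lct-api pod per node.
# The labelSelector must match the pod template's labels (app: lct-api here).
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: lct-api
      topologyKey: kubernetes.io/hostname
```

With a *required* rule and 5 replicas, the cluster needs at least 5 schedulable nodes, otherwise the extra pods stay Pending.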
If user sessions depend on the client always connecting to the same backend, you can send a cookie to the client to enable sticky sessions as mentioned below in the annotations field.
service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type: "cookies"
Sticky sessions send subsequent requests from the same client to the same Droplet by setting a cookie with a configurable name and TTL (Time-To-Live) duration. The TTL parameter defines the duration the cookie remains valid in the client's browser. This option is useful for application sessions that rely on connecting to the same Droplet for each request.
Sticky sessions do not work with SSL passthrough (port 443 to 443). However, they do work with SSL termination (port 443 to 80) and HTTP requests (port 80 to 80).
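If you later need HTTPS, a sketch of SSL termination at the DigitalOcean load balancer (443 forwarded to 80, which keeps cookie-based sticky sessions working) might look like this; "your-certificate-id" is a placeholder you would replace with your own DigitalOcean certificate ID:

```yaml
# Sketch: TLS terminated at the load balancer (443 -> 80).
# "your-certificate-id" is a placeholder, not a real ID.
metadata:
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"
spec:
  ports:
  - name: https
    protocol: TCP
    port: 443
    targetPort: 80
```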
If possible, update the port number to 80 instead of 6008 in the service.yaml file, and also add containerPort: 80 and protocol: TCP in the deployment.yaml file. You can use the GitHub link for reference and make the necessary changes to the service and deployment YAML files accordingly.
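Putting those suggestions together, the relevant parts might look like this sketch (HTTP on port 80 end to end, keeping your existing names and annotations):

```yaml
# service.yaml -- sketch with port 80 -> 80 instead of 6008 -> 80
apiVersion: v1
kind: Service
metadata:
  name: lct-api-service
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type: "cookies"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-name: "example"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-ttl: "60"
spec:
  selector:
    app: lct-api
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - protocol: TCP
    port: 80        # was 6008
    targetPort: 80
---
# deployment.yaml -- container section with the explicit containerPort added
      containers:
      - name: lct
        image: localhost:7000/lct:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
```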