Basic Application Connection Refused on GKE Load Balancer


I have a basic application whose backend connects to MongoDB; the YAML is below. It runs fine locally, but whenever I deploy it to a GKE cluster, public or private, the connection is immediately refused in the web browser. The firewall is open for 0.0.0.0/0 to the cluster, and I have rebuilt everything and recreated the Docker images at least ten times to make sure nothing is wrong with the build. The load balancer passes its health checks, yet it still refuses connections to the frontend (a command-line check equivalent to the browser test is shown after the YAML). Any clue what reason or step I am missing? Here is the YAML I am using:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-back
  labels:
    app: test
    component: back
spec:
  replicas: 1
  selector: 
    matchLabels:
      component: back
  template:
    metadata: 
      labels:
        app: test
        component: back
    spec:
      containers:
        - name: test
          image: **
          ports: 
            - containerPort: 3000
          env: 
            - name: PORT
              value: "3000"
            - name: CONN_STR
              value: **
---
apiVersion: v1
kind: Service
metadata:
  name: test-back
  labels:
    app: test
    component: back
spec:
  type: LoadBalancer
  selector:
    component: back
  ports:
    - port: 3000
      targetPort: 3000
      protocol: TCP
      name: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-front
  labels:
    app: test
    component: front
spec:
  replicas: 1
  selector: 
    matchLabels:
      component: front
  template:
    metadata: 
      labels:
        app: test
        component: front
    spec:
      containers:
        - name: test
          image: **
          ports: 
            - containerPort: 80
          env: 
            - name: BASE_URL
              value: "http://localhost:3000"
---
apiVersion: v1
kind: Service
metadata:
  name: test-front
  labels:
    app: test
    component: front
spec:
  type: LoadBalancer
  selector:
    component: front
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
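
Hitting the service's external IP from the command line is the equivalent of the browser test; it would look something like this (external IP redacted, service name matches the kubectl output below):

kubectl get service test-front
curl -v http://<EXTERNAL_IP>/
# a refusal here is curl's version of what the browser shows:
# curl: (7) Failed to connect to <EXTERNAL_IP> port 80: Connection refused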

I've replaced names and strings in this YAML with test and ** for privacy. Here is what the cluster shows after applying the YAML (the cluster has since been deleted, so the IPs are no longer valid):

kubectl get all
NAME                                  READY   STATUS    RESTARTS   AGE
pod/test-back-787ffd69cc-q4wk2    1/1     Running   0          43s
pod/test-front-78d7d58f5c-f5tz6   1/1     Running   0          42s

NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)          AGE
service/kubernetes   ClusterIP      10.115.128.1     <none>            443/TCP          4m13s
service/test-back    LoadBalancer   10.115.128.225   34.136.79.X     3000:30772/TCP   44s
service/test-front   LoadBalancer   10.115.129.223   104.154.235.X   80:31238/TCP     43s

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/test-back    1/1     1            1           44s
deployment.apps/test-front   1/1     1            1           43s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/test-back-787ffd69cc    1         1         1       44s
replicaset.apps/test-front-78d7d58f5c   1         1         1       43s

As seen above, hopefully I've redacted and filled out the correct spots.
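
I can run further checks if it helps; the ones that would separate a Service/selector problem from a pod problem look like this (resource names taken from the output above):

kubectl get endpoints test-front test-back       # empty ENDPOINTS means the selector matches no pods
kubectl port-forward deploy/test-front 8080:80   # bypasses the load balancer entirely
curl -v http://localhost:8080/                   # if this responds, the pod is fine and the problem is LB/service side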
