Split HTTP traffic with CiliumNetworkPolicies


With a number of k8s clusters connected via Cilium Cluster Mesh, I am looking to split HTTP traffic by verb using L7 CiliumNetworkPolicies, so that write requests (POST, PUT, PATCH, DELETE, HEAD) go to the master Ceph Object Gateway located in one k8s cluster, and read requests (GET) go to a mirror Ceph Object Gateway located in another k8s cluster (a multisite Ceph object store).
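As a starting point, here is a minimal sketch of the verb split on the mirror cluster. It only admits GET requests to the local rgw; the policy name and the `app: harbor` client label are assumptions. Note that a network policy filters traffic rather than redirecting it, so the client still has to send its writes to the master endpoint itself:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: rgw-mirror-read-only   # hypothetical name
  namespace: storage
spec:
  endpointSelector:
    matchLabels:
      app: rook-ceph-rgw
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: harbor            # assumed label on the S3 client pods
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:
        - method: "GET"        # any other verb (POST, PUT, ...) is denied at L7
```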

I am looking for my application to be able to write data into the distributed storage, so that the data gets replicated across connected ceph clusters. Then, once there is a need to read the data, the application will read it from the ceph cluster that is the closest.

Each Ceph Object Gateway (one per k8s cluster) has a Service with an identical name:

apiVersion: v1
kind: Service
metadata:
  annotations:
    io.cilium/global-service: "true"
  labels:
    app: rook-ceph-rgw
    rgw: m-harbor
  name: rook-ceph-rgw-m-harbor
  namespace: storage
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: rook-ceph-rgw
  type: ClusterIP

These are Cilium global services (io.cilium/global-service: "true"). In this particular case that would mean:

  1. The Ceph Object Gateway(s) in the mirror cluster(s) would have to connect to http://rook-ceph-rgw-m-harbor.storage.svc on the master cluster, while still talking to the OSD(s) and MON(s) in their own cluster.

    • How can I ensure this with a (Cilium)NetworkPolicy?
  2. The S3 client would need to write to http://rook-ceph-rgw-m-harbor.storage.svc on the master cluster and read from http://rook-ceph-rgw-m-harbor.storage.svc on the local cluster.

    • How can I ensure this with a (Cilium)NetworkPolicy?
  3. Should I go with a local-redirect-policy instead?
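For reference on point 3, a Local Redirect Policy sketch (field names follow the CiliumLocalRedirectPolicy CRD; the policy name is hypothetical). It steers pod traffic addressed to the global service towards node-local backends, which would keep reads on the local rgw, but it redirects all traffic to the service, so it cannot split by HTTP verb on its own:

```yaml
apiVersion: cilium.io/v2
kind: CiliumLocalRedirectPolicy
metadata:
  name: rgw-local-redirect     # hypothetical name
  namespace: storage
spec:
  redirectFrontend:
    serviceMatcher:
      serviceName: rook-ceph-rgw-m-harbor
      namespace: storage
  redirectBackend:
    # redirect to rgw pods running on the same node as the client
    localEndpointSelector:
      matchLabels:
        app: rook-ceph-rgw
    toPorts:
    - port: "8080"
      protocol: TCP
```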

The mirror ceph-rgw (rhea cluster) pulls the zone and its data from the master ceph-rgw (titan cluster).
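Point 1 could be sketched as an egress policy on the mirror's rgw. With Cluster Mesh, remote endpoints carry the io.cilium.k8s.policy.cluster label, which lets the policy distinguish the master rgw in the titan cluster from local traffic. The policy name and the Rook mon/osd labels here are assumptions:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: rgw-mirror-egress      # hypothetical name
  namespace: storage
spec:
  endpointSelector:
    matchLabels:
      app: rook-ceph-rgw
  egress:
  # MONs and OSDs in the same (rhea) cluster
  - toEndpoints:
    - matchLabels:
        app: rook-ceph-mon     # assumed Rook labels
    - matchLabels:
        app: rook-ceph-osd
  # master rgw in the titan cluster, matched via the cluster-mesh label
  - toEndpoints:
    - matchLabels:
        app: rook-ceph-rgw
        io.cilium.k8s.policy.cluster: titan
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
```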

The client (Harbor registry) pulls an image from the local ceph-rgw.
