Currently, Kubernetes network policies let you control ingress and egress at the pod level. However, if two containers run in the same pod, there is no way to apply a distinct network policy to each container.
I am trying to implement a Kafka consumer that reads messages from a broker hosted in our private subnet and dispatches them to a sidecar container, which runs untrusted code submitted by random users on the web. Since there is no way to restrict container-to-container communication with a policy, this untrusted code can also reach our Kafka broker.
I understand that this can be mitigated by enabling authentication on Kafka. However, the service would still be network-reachable from the untrusted container.
Is there any way to stop this from happening? We have explored Kata Containers, Istio + Envoy, and Cilium, none of which seem to solve this problem.
You would need to enable authentication and encryption between Kafka and its clients, e.g. mutual TLS (SSL) or SASL/GSSAPI (Kerberos). With mutual TLS, you would issue certificates/key-pairs only to trusted clients, so untrusted code would be unable to connect without a valid key-pair. The encryption would also prevent untrusted code from packet-sniffing the network traffic local to that container/pod/host. None of this requires/involves Kubernetes.
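For instance, a trusted consumer configured for mutual TLS might look like the sketch below. The broker address, group id, topic name, and keystore/truststore paths are all hypothetical, and the broker would need an SSL listener configured with `ssl.client.auth=required` for the key-pair requirement to be enforced:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MtlsConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Hypothetical broker address; the broker must expose an SSL listener
        // with ssl.client.auth=required so that clients without a valid
        // key-pair are rejected.
        props.put("bootstrap.servers", "kafka.internal.example:9093");
        props.put("group.id", "trusted-consumer");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // Mutual TLS: only trusted clients are given this keystore (key-pair).
        // Paths and passwords below are placeholders.
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/etc/kafka/secrets/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        props.put("ssl.keystore.location", "/etc/kafka/secrets/client.keystore.jks");
        props.put("ssl.keystore.password", "changeit");
        props.put("ssl.key.password", "changeit");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("jobs")); // hypothetical topic
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}
```

The untrusted sidecar never sees the keystore, so even though it can reach the broker's address over the shared pod network, it cannot complete a TLS handshake.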
Beyond that, run the untrusted code in its container as a limited-access (non-root) user, and follow the regular security best practices you should be applying in containers anyway.
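For example, a hypothetical pod spec fragment that runs the untrusted sidecar under a dedicated non-root UID with a locked-down `securityContext` (image name and UID are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: consumer-with-sandbox
spec:
  containers:
  - name: untrusted-sidecar
    image: sandbox:latest            # hypothetical image
    securityContext:
      runAsNonRoot: true
      runAsUser: 2000                # dedicated UID for the untrusted code
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]                # also removes NET_RAW, blocking packet sniffing
```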
If you need finer-grained network rules inside the pod, you can install iptables rules, for example.
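As a sketch of that approach: containers in a pod share one network namespace, so an initContainer with the `NET_ADMIN` capability can install an iptables rule before the other containers start, and `-m owner --uid-owner` can single out the untrusted container as long as it runs under its own UID. The UID (2000) and broker subnet (10.0.0.0/16) below are assumptions carried over from the previous sketch:

```yaml
  initContainers:
  - name: egress-firewall
    image: alpine:3.19
    command: ["sh", "-c"]
    args:
    - |
      apk add --no-cache iptables &&
      # Drop any traffic from the untrusted sidecar's UID toward the
      # broker subnet; other containers in the pod are unaffected.
      iptables -A OUTPUT -m owner --uid-owner 2000 -d 10.0.0.0/16 -j DROP
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]
```

Because the untrusted container drops all capabilities (including `NET_ADMIN`), it cannot remove the rule once the pod is running.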