Why isn't local mount showing up in k3d?


I am running k3d version:

k3d version v5.6.0
k3s version v1.27.4-k3s1 (default)

I created a new single-node cluster on my Mac by running the following:

k3d cluster create mycluster --volume /Users/gms/development/nlp/nlpie/biomedicus:/data
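
For reference, that bind can be double-checked from the Mac side; assuming the default node container name k3d-mycluster-server-0, the mounted directory should be visible inside the node container:

docker exec k3d-mycluster-server-0 ls /data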

I then imported my Docker image into the cluster:

k3d image import b9 -c mycluster
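
The import can be confirmed with crictl, which ships inside the k3s node image (again assuming the default node container name):

docker exec k3d-mycluster-server-0 crictl images | grep b9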

I then spun up a deployment whose pod spec includes:

  spec:
    containers:
    - image: b9
      volumeMounts:
      - name: workdir
        mountPath: /data
      imagePullPolicy: Never
    volumes:
    - name: workdir
      hostPath:
        path: /Users/gms/development/nlp/nlpie/biomedicus

This creates the deployment just fine, but when I exec into the pod:

kubectl exec -it <pod name> -- /bin/bash

and look at the /data directory, it is empty; my local data is not there.

Not exactly sure what the issue is, especially since the mapping was created successfully in my cluster (per the JSON output of docker inspect on the k3d node container):

"Binds": [
    "/Users/gms/development/nlp/nlpie/biomedicus:/data",
    "k3d-mycluster-images:/k3d/images"
]
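
For reference, the same binds can be listed directly with docker inspect (assuming the default node container name):

docker inspect -f '{{ .HostConfig.Binds }}' k3d-mycluster-server-0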

and from issuing kubectl describe pod <pod name>, the mapping is there as well:

Mounts:
      /data from workdir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8mjmx (ro)

Volumes:
  workdir:
    Type:          HostPath (bare host directory volume)
    Path:          /Users/gms/development/nlp/nlpie/biomedicus
    HostPathType:

Also, if I create a file in /data inside the pod, it does not show up locally.

One caveat is that I am defining the deployment spec in Argo Workflows, but that should not affect the volume mount, since this works perfectly in plain Kubernetes.

NB: the goal is to port this over to a multi-node k3s cluster, where the hostPath will point to a shared CIFS mount on our nodes.
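
As a sketch for that direction, k3d can emulate the multi-node layout locally: its --volume flag accepts a node filter, so something like the following should bind the directory into every node (assuming the share is available on the Docker host at the same path):

k3d cluster create mycluster --agents 2 --volume /Users/gms/development/nlp/nlpie/biomedicus:/data@all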

1 Answer

Accepted answer (zori):

The issue here is that k3d runs the cluster nodes as containers, so when you created the cluster with

k3d cluster create mycluster --volume /Users/gms/development/nlp/nlpie/biomedicus:/data

the Mac directory was bind-mounted into the k3d node container at /data. A hostPath volume resolves its path on the node, i.e. inside that container, so the deployment YAML needs the path as seen from the k3d container. The volumes section should look like:

  spec:
    containers:
    - image: b9
      volumeMounts:
      - name: workdir
        mountPath: /data
      imagePullPolicy: Never
    volumes:
    - name: workdir
      hostPath:
        path: /data
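
A quick way to confirm the whole chain (Mac directory → k3d node container → pod), assuming the default node container name:

docker exec k3d-mycluster-server-0 ls /data
kubectl exec -it <pod name> -- ls /data

Both commands should now list the same files.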

To verify this, I tested with the following complete deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: workdir
          mountPath: /data
      volumes:
      - name: workdir
        hostPath:
          path: /data
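
To test end to end, one can apply the manifest, write a file from inside the pod, and check for it on the Mac; hello.txt here is just an illustrative name, and the manifest is assumed to be saved as nginx-deployment.yaml:

kubectl apply -f nginx-deployment.yaml
kubectl exec deploy/nginx-deployment -- touch /data/hello.txt
ls /Users/gms/development/nlp/nlpie/biomedicus/hello.txt

The file appears locally because the pod's /data is the node container's /data, which is the bind-mounted Mac directory.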