s3proxy in a closed kubernetes network


I'm using s3proxy to expose an S3 API in front of Azure Blob Storage. It's used by MongoDB to store backups.

I'm installing s3proxy with the RADAR-Base/s3proxy Helm chart and this values file:

s3:
  identity: $S3_PROXY_ID
  credential: $S3_PROXY_CRED

target:
  provider: azureblob
  endpoint: $S3_PROXY_AZURE_ENDPOINT
  identity: $S3_PROXY_AZURE_ID
  credential: $S3_PROXY_AZURE_CRED
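
For reference, a minimal sketch of the install, assuming the RADAR-Base Helm repository URL and a chart name inferred from the service name below (adjust the repo URL, chart name, and release name to your setup):

# repo URL and chart name are assumptions, not confirmed from the question
helm repo add radar-base https://radar-base.github.io/radar-helm-charts
helm install s3proxy radar-base/s3-proxy -n s3proxy --create-namespace -f values.yaml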

Everything works fine with a port-forward of the s3proxy service and the AWS CLI:

➜  deployment git:(main) ✗ k port-forward -n s3proxy svc/s3proxy-s3-proxy 8000:80                      
Forwarding from 127.0.0.1:8000 -> 80
Forwarding from [::1]:8000 -> 80
Then, when I query it with the AWS CLI, I get the expected output:
➜  deployment git:(main) ✗ aws s3 ls s3://mycontainer --endpoint http://localhost:8000
2024-02-06 16:20:16         66 fruits.json

But when I try to do the same from inside a pod, I get a 403 Forbidden HTTP error:

sh-5.1# echo $SECRET_KEY
s3proxy
sh-5.1# echo $ACCESS_KEY
s3proxy
sh-5.1# mc alias set my-s3 http://s3proxy-s3-proxy.s3proxy.svc.cluster.local ACCESS_KEY SECRET_KEY

(I tried wget to check whether DNS resolution was the problem, but the name resolves and the request still returns a 403 Forbidden.)

(Yes, I'm using s3proxy as both the access key and the secret key.)

I thought it might be due to CORS origins.


There is 1 answer below.

Answered by lorenzo:

I just had to use --api s3v4.
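
For completeness, a sketch of the full mc command with that flag, reusing the service URL and the $ACCESS_KEY/$SECRET_KEY variables from the question (--api s3v4 makes mc sign requests with AWS Signature Version 4, which resolved the 403 in this setup):

# alias name and bucket/container name are taken from the question
mc alias set my-s3 http://s3proxy-s3-proxy.s3proxy.svc.cluster.local "$ACCESS_KEY" "$SECRET_KEY" --api s3v4
mc ls my-s3/mycontainer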