How to secure the source code and database structure in Container as a Service (caas)?


Container as a Service (CaaS) is a buzzword we hear all around us, but there are some points of confusion about this model that I couldn't find answered anywhere on the internet.

We provide a service to our customers that involves sensitive data (financial documents, etc.), so some of our clients hesitate to share that data with us.

So they demand a solution in the form of a Docker container. In that case, all of the data and our app would be hosted on their own servers, we would have no access to their data, and we would charge them on a monthly basis.

In short, we need to deliver our entire app (source code and database) in a Docker container as a black box, so that the client can interact with the container over the network but cannot get into it to see our source code and DB structure.

That's why I'm confused about how we can secure our source code and DB structure. (The source code is in PHP and the DB is PostgreSQL.)

Secondly, how can we sync that Docker container's code with updated code?

Any help with this question would be highly appreciated.

Answer by David Maze:

How can we secure our source code

Use a compiled language that isn't trivially decompiled (C++, Go). If you use an interpreted language that ships its source inside the container (JavaScript, PHP, Python, Ruby), then once the client has a copy of the image, it's trivial for them to run it or otherwise open it up and look at your source.
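To see how little protection the image itself offers, here is a rough sketch of how a client could pull the source out of an image without ever starting the container (the image name `your-app:latest` and the path `/var/www/html` are hypothetical placeholders for your own values):

```shell
# Create a container from the image without running it
docker create --name peek your-app:latest

# List every PHP file baked into the image's filesystem
docker export peek | tar -tf - | grep '\.php$'

# Or copy the whole application directory out wholesale
docker cp peek:/var/www/html ./extracted-src

docker rm peek
```

None of these steps require the container's process to run, so runtime tricks (disabled shells, read-only filesystems) don't help here.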

and DB structure

There's no specific way to do this, other than securing the database and the application code. Anyone who can connect to the database can query the schema pretty easily.
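As an illustration, any client who holds working database credentials can reconstruct the schema from PostgreSQL's standard catalog views (host, user, and database names below are placeholders):

```shell
# List every table and column in the public schema
psql -h localhost -U appuser -d appdb -c "
  SELECT table_name, column_name, data_type
    FROM information_schema.columns
   WHERE table_schema = 'public'
   ORDER BY table_name, ordinal_position;"

# Or dump the entire DDL (tables, indexes, constraints) in one command
pg_dump --schema-only -h localhost -U appuser appdb
```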

How can we sync that Docker container's code with updated code?

Send the client a new image and have them delete the existing container and create a new one.

This is important, and takes some up-front design. When you do this, anything that was in the container's local filesystem will be lost, which means you never store anything in the container's local filesystem that you can't trivially recreate. You already have a database, so plan to keep most of your actual data there. If you generate logs, either generate them on your process's stdout (so the core Docker log system can collect them) or use a host bind-mount directory to put them somewhere they can be easily reviewed.
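A minimal sketch of that layout, assuming illustrative image, volume, and path names: the database files live in a named volume and logs go to a host bind-mount directory, so both survive when the container is replaced.

```shell
# First deployment: data in a named volume, logs bind-mounted to the host
docker run -d --name app \
  -v pgdata:/var/lib/postgresql/data \
  -v /srv/app-logs:/var/log/app \
  your-app:1.0

# Upgrade: pull the new image, delete the old container, recreate it with
# the same mounts. The old container's local filesystem is discarded.
docker pull your-app:1.1
docker stop app && docker rm app
docker run -d --name app \
  -v pgdata:/var/lib/postgresql/data \
  -v /srv/app-logs:/var/log/app \
  your-app:1.1
```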

This is also the mechanism used by cluster managers like Kubernetes. You can tell a Kubernetes Deployment controller that you want 3 replicas of an image me/abc:123. If you subsequently tell it you want 3 replicas of me/abc:246 instead, it will start new containers with the new image and then delete the old ones.
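A rough kubectl equivalent of that update (the deployment name and image tags mirror the example above and are illustrative):

```shell
# Run 3 replicas of the first image
kubectl create deployment abc --image=me/abc:123 --replicas=3

# Roll out the new tag: Kubernetes starts containers with the new image,
# then removes the old ones
kubectl set image deployment/abc abc=me/abc:246
kubectl rollout status deployment/abc
```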

The flip side of this is that you never need to think about "syncing code" or otherwise logging into containers at all, and docker ps can immediately tell you what version of the system the client is running (by the specific image tags).
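For example, a one-line check on the client's host shows the running image tag without opening any container (the `--format` template just trims the output to the relevant columns):

```shell
# Show each container's name, exact image tag, and status
docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'
```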