I'm learning about Docker and can't seem to wrap my head around the idea of scaling with it. Assume two Docker containers, one for nginx and one for a webapp (PHP/Node.js/etc.), on one very powerful server (multiple cores and threads) with a fixed amount of resources.
If these two containers can already use 100% of the server's resources, then there's no point in spawning another nginx or webapp container on the same server. The solution to scaling here is to spin up another server with its own webapp container, or to increase the existing server's resources. In this scenario, Docker is purely a tool for easy/fast/standardized environment setup.
In the scenario where the webapp alone is unable to fully utilize the available resources on the server, it makes sense that Docker lets you spawn additional webapp containers on the same server, e.g. 1 nginx container and 3 webapp containers on 1 server (something like the compose sketch below).
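Here is a minimal sketch of that layout as a compose file; the image names, ports, and service names are my own assumptions, not anything prescribed:

```yaml
# docker-compose.yml -- minimal sketch of the "1 nginx + 3 webapp" layout.
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"               # only the proxy publishes a host port
    depends_on:
      - webapp
  webapp:
    image: my-webapp:latest   # hypothetical app image, listening on 8000
    expose:
      - "8000"
```

Running `docker compose up --scale webapp=3` would start one nginx container and three webapp replicas; an nginx `upstream` pointing at the `webapp` hostname can then round-robin across them, since Docker's internal DNS returns one address per replica.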
What's the idea of scaling with Docker here, and how does it apply on cloud platforms?
I've tried searching online but can't really wrap my head around it.
That's a concern very closely related to scaling a web app: if you have one server, it doesn't matter much how difficult it is to deploy your software. If you have ten servers, you're going to want a standardized, easy-to-set-up deployment.
I agree, and this is how I deploy Django apps: there is one container running Django, and it uses gunicorn to run as many workers as I have cores (or slightly more; Django spends some time waiting on the database).
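A rough sketch of that setup as a compose service (the project name `mysite`, the image name, and the port are placeholders of mine):

```yaml
# One Django container; gunicorn runs roughly one worker per core, plus
# one ("as many workers as I have cores, or slightly more"). $$ escapes
# $ so the shell inside the container evaluates it; nproc reports the
# core count.
services:
  django:
    image: my-django-app:latest   # hypothetical image
    command: >
      sh -c 'exec gunicorn mysite.wsgi:application
             --bind 0.0.0.0:8000
             --workers $$(( $$(nproc) + 1 ))'
    ports:
      - "8000:8000"
```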
Other commenters have suggested that you look into Kubernetes, but in my opinion, Kubernetes is usually not worth the implementation effort unless you have a very large and heterogeneous workload. Auto-scaling groups, which most cloud providers offer, can provide scalability with less complexity. In this setup, each VM runs Docker, but each Docker daemon is independent (rough sketch below).
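To make that concrete, here is a rough CloudFormation-style sketch of the pattern; the AMI, subnet, instance type, and image name are all placeholder assumptions, and scaling policies are omitted:

```yaml
Resources:
  AppLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: ami-0123456789abcdef0   # hypothetical AMI with Docker preinstalled
        InstanceType: t3.medium
        UserData:
          Fn::Base64: |
            #!/bin/bash
            # Each VM runs its own independent Docker daemon.
            docker run -d -p 80:8000 my-webapp:latest
  AppAsg:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"
      MaxSize: "10"
      LaunchTemplate:
        LaunchTemplateId: !Ref AppLaunchTemplate
        Version: !GetAtt AppLaunchTemplate.LatestVersionNumber
      VPCZoneIdentifier:
        - subnet-0123456789abcdef0       # hypothetical subnet
```

A load balancer in front of the group (also omitted here) would then spread traffic across the VMs as the group grows or shrinks.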