I have an ASP.NET Core 2.1 Web API published as self-contained and hosted as a Windows service in two different environments: a Windows 10 Pro machine and a Windows Server 2012 R2 machine.
I tested the API intensively on my Windows 10 machine using k6 (load, stress, and spike tests, etc.). According to Task Manager, memory consumption got very high during the tests, but after 2 hours of idleness the memory used by the process dropped to around 5 MB, which seems normal and suggests I have no memory leak in my program.
My problem concerns the same Web API, also hosted as a Windows service, on the production Windows Server 2012 R2. In production, the Web API is far less stressed than on my local computer (it receives far fewer requests). I noticed that sometimes the memory footprint increases for no apparent reason (no requests are being made), and then shrinks all at once. I expected that after 2 hours of idleness the memory would shrink too, but it stays at 460 MB from the moment it rose. Moreover, the production server (Windows Server 2012 R2) has only 4 GB of RAM, whereas my computer (Windows 10) has 8 GB.
Can anyone explain this difference in behavior, and why the server does not reduce the memory footprint of my Web API? Since it is a self-contained Web API, why do I get two different behaviors depending on the OS, and how can I investigate what is held in memory? Is there a difference in the way the GC works between Windows 10 and Windows Server 2012 R2?
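In case it matters, one thing I suspect is the GC mode: as far as I know, ASP.NET Core web projects default to server GC, which reserves and holds on to more memory per logical core than workstation GC, so two machines with different core counts could show very different footprints. At runtime the active mode can be checked with `System.Runtime.GCSettings.IsServerGC`. A sketch of how I could force workstation GC in the project file to compare (assuming the standard `Microsoft.NET.Sdk.Web` project layout):

```
<!-- In the .csproj: switch from the web-SDK default (server GC)
     to workstation GC, which returns memory more aggressively. -->
<PropertyGroup>
  <ServerGarbageCollection>false</ServerGarbageCollection>
</PropertyGroup>
```

I have not yet verified whether this actually changes the behavior on the 2012 R2 box; I mention it only as a possible avenue.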