I've created a Verdaccio server in a Kubernetes cluster. I only use it as a cache; no private packages are needed.
The Verdaccio server runs perfectly; however, it becomes slow when the cache expires, as it then pulls the packages again from the uplinks. I am thinking of setting the maxage value very high, so that the cache stays valid long enough.
I am not sure whether that is a good approach, as I don't know what could go wrong if the cache "never" expires.
I have the config as below.
storage: /verdaccio/storage

uplinks:
  npmjs:
    url: https://registry.npmjs.org/
    maxage: 30m

packages:
  '@*/*':
    access: $all
    publish: $all
    proxy: npmjs
  '**':
    proxy: npmjs
    access: $all
    publish: $all

log: { type: stdout, format: pretty, level: http }

web:
  enable: false
What I'd like to know is whether setting maxage to infinity is a good approach.
The answer here is tricky, because the cache is just a snapshot of your dependencies at the present moment, but tomorrow you might need security updates for items in the dependency tree; you never know. So using a high value to boost performance is fine, something like a week. I don't see much value in refreshing the cache very often, because new updates probably won't matter to you, for instance major releases (which have a lower release rate in many projects).
A technique I use a lot in front-end development is bundling vendors into a single file and letting the browser cache it. At the registry level you can do something similar; remember you can add as many uplinks as you want and, even for the same source, apply different rules.
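As a sketch of that idea, here is a hypothetical config (the uplink name `npmjs-slow` and the scope `@mycompany/*` are made-up illustrations) that defines the same upstream registry twice with different maxage values, so fast-moving packages are refreshed often while everything else is cached for a week:

```yaml
uplinks:
  npmjs:
    url: https://registry.npmjs.org/
    maxage: 30m        # refresh cached metadata every 30 minutes
  npmjs-slow:
    url: https://registry.npmjs.org/
    maxage: 7d         # same registry, but cache metadata for a week

packages:
  # Hypothetical fast-moving scope: proxy through the short-lived cache.
  '@mycompany/*':
    access: $all
    proxy: npmjs
  # Everything else: proxy through the long-lived cache.
  '**':
    access: $all
    proxy: npmjs-slow
```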
This example is very granular and might grow very quickly, but it is just an idea to show that you can play around with uplinks and apply different rules even to the same source.
So in my opinion there is no easy answer; it always depends on your needs. But don't set it too high, otherwise you will eventually run into a 404 in your build pipeline for a just-released dependency that someone adds to one of your projects.