Ok, so.
I have avoided using #docker for development (because it's not fit for development).
Now I'm looking at how people work with it and #firebase, and in case you haven't seen that workflow, the best way I can express it is "#nix rebuild whenever you need to do something non-trivial, except with a shitton of side effects".
It's absolutely unfit for purpose.
@johanneskastl I dislike three major things:
1. Wastefulness: your services depend on Java? How about we run 10 separate JVM instances, one per container.
2. Security, docker-specific: the daemon runs with elevated privileges, so container jailbreaks that reach it, and attacks on the daemon itself, hand those privileges to the attacker.
3. Ad-hoc use by many users: containerisation attacks the wrong half of the "works on my machine" problem, by replicating "my machine" to other developers, staging, and production. I hope this point makes sense. Basically, we run side effects until we massage something into a shape where stuff builds, then hold our breath and start shipping. My particular pet peeve is port forwarding to the host; I unironically wish this docker feature was behind a feature flag (see the sketch after this list). Another ad-hoc pet peeve: suppose I need an #LSP to enable advanced #IDE features. How do I provision the correct versions of the system (or user-profile) dependencies? To ensure perfect devX, I'd have to force my devs to connect to the box's LSP, which brings us back to problem 1: now I have 10 Java VMs and 10 Java LSPs. And my image sizes grow proportionally. And I have to optimise it all away again in prod. So what do people do? Ad-hoc solutions and holding their breath!
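To make the port-forwarding peeve concrete, here's a hypothetical compose file (service names and ports invented for illustration): `ports:` publishes a container port on the host of every developer who runs the stack, while `expose:` keeps it on the compose network.

```yaml
# Hypothetical docker-compose.yml; services and ports are made up.
services:
  db:
    image: postgres:16
    # Publishes 5432 on the HOST of whoever runs the stack -
    # the feature I wish were behind a flag:
    ports:
      - "5432:5432"
  api:
    image: example/api:latest
    # Reachable only from other services on the compose network,
    # which is usually all that inter-service traffic needs:
    expose:
      - "8080"
```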
Ironically, given point 2, at #ZeroHR we use #nix to build #docker images of test-task submissions and, in the case of multiplayer submissions, we use docker compose on top of this to join the submissions into a network.
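For the curious, a minimal sketch of that kind of build, using nixpkgs' dockerTools; the package and image names are placeholders, not our actual pipeline:

```nix
# Shape-of-it sketch, not our real build; "submission" is a stand-in.
{ pkgs ? import <nixpkgs> { } }:

let
  submission = pkgs.hello; # placeholder for a built test-task submission
in
pkgs.dockerTools.buildLayeredImage {
  name = "zerohr-submission"; # illustrative image name
  tag = "latest";
  contents = [ submission ];
  config.Cmd = [ "${submission}/bin/hello" ];
}
```

nix-build spits out a tarball you can `docker load`, and since the layers come straight from the Nix store, rebuilding it is deterministic.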
But we deploy our systems on bare metal, and we let production dictate staging and dev systems. Thus, we can rely on reproducible builds and, where absolutely needed, SaaS mocking, to ensure smooth devX.
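If you want the shape of that dev side, here's a hypothetical shell.nix; the package choices are illustrative, not our actual stack, but they show how one pinned toolchain can serve the service and the LSP alike:

```nix
# Hypothetical shell.nix; package choices are illustrative only.
{ pkgs ? import <nixpkgs> { } }:

pkgs.mkShell {
  packages = with pkgs; [
    jdk21               # one JVM for everything, instead of ten images
    jdt-language-server # the IDE's LSP comes from the same pin
    postgresql          # local service deps, no port forwarding needed
  ];
}
```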