Hacker News

Updating your custom registry with new upstream dependency versions, after testing in CI with all the services you care about, is fine. But the OP seems to just blindly pull the newest WordPress images from upstream, or am I missing something? How is this meant to work reliably?

I guess, given WordPress's security record, having your site break from time to time is preferable to having it broken into from time to time.



I think you're mixing up some things. If you run the image "docker.io/wordpress:6.3.1", then the container will be updated whenever the image with that tag (6.3.1) is rebuilt (which is a best practice, because that's the only way to get security updates for the libraries in the base image). The tag is just a pointer to the latest image hash.

Many Docker images also provide "semantic version tags". WordPress does too, so if you run the image "docker.io/wordpress:6.3", you will get the latest 6.3.x version.
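As a rough sketch of what a floating semver tag means: the publisher keeps "6.3" pointing at the highest published 6.3.x release, which is the same ordering `sort -V` implements (the tag list below is illustrative, not fetched from a registry):

```shell
# Illustrative tag list; a real workflow would query the registry instead.
tags='6.3.0
6.3.1
6.3.2'

# sort -V sorts by version number; the last entry is what a
# floating "6.3" tag would conventionally point at.
latest=$(printf '%s\n' "$tags" | sort -V | tail -n 1)
echo "$latest"   # prints 6.3.2
```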

It's up to you (and the image publisher) to decide when to auto-update, and when manual intervention is necessary.

Of course this requires trusting the publisher of that image. But even if you build your own images, you still trust the base image. It's turtles all the way down.


I think they know about that.

But it's basically similar to automatically running an "update" with your distro's package manager. (Okay, it's better, due to the smaller surface and somewhat finer per-package control over update schedules.)

And some people argue that you must not do so, as it might subtly break your system in unexpected ways.

And others say you must, because updates (especially security updates) have to be applied.

And the truth is probably somewhere in between. (Like auto-updates with a self-test and rollback, which in complex systems isn't trivial at all.)
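A minimal sketch of that update-then-verify-then-rollback loop; `apply_update`, `health_check`, and `roll_back` are placeholder functions standing in for whatever your package manager, service probes, and snapshot tooling actually do:

```shell
#!/bin/sh
set -eu

# Placeholder steps; real implementations would invoke the package
# manager, probe the services, and restore a snapshot on failure.
apply_update() { echo "update applied"; }
health_check() { return 1; }   # simulate a failing self-test
roll_back()    { echo "rolled back"; }

apply_update
if health_check; then
  echo "keeping update"
else
  roll_back
fi
```

The hard part in complex systems is making `health_check` meaningful and `roll_back` actually atomic, which is what image-based distros try to solve.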

Anyway, especially for local user-space tooling on my computer, I will 100% enable it. If it stops working I can fix it, but if not (the normal case) it's low maintenance. Perfect.


> But it's basically similar to automatically running an "update" with your distro's package manager.

Which is a thing now. My openSUSE MicroOS/Aeon machines default to running transactional-update every day, and updates take effect on the next reboot. Given that MicroOS is roughly to SUSE what CoreOS is to Red Hat, I suspect the latter has similar defaults.
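For context, that daily run is driven by a systemd timer; a minimal sketch of what such a timer unit looks like (the exact unit MicroOS ships may differ):

```ini
# transactional-update.timer (sketch)
[Unit]
Description=Daily transactional-update

[Timer]
OnCalendar=daily
RandomizedDelaySec=2h
Persistent=true

[Install]
WantedBy=timers.target
```

Since the update lands in a new snapshot and only takes effect on reboot, a broken update can be rolled back by booting the previous snapshot.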


What are you talking about?

> Running containerized workloads in systemd is a simple yet powerful means for reliable and rock-solid deployments.

They say it's reliable and rock-solid. Isn't that enough for you? /s
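To be fair, "containerized workloads in systemd" these days usually means something like a Podman Quadlet unit, where systemd generates a service from a container description; a minimal sketch, with the image, port, and file name purely illustrative:

```ini
# /etc/containers/systemd/wordpress.container (Quadlet sketch)
[Unit]
Description=WordPress container

[Container]
Image=docker.io/wordpress:6.3
PublishPort=8080:80
AutoUpdate=registry

[Service]
Restart=always

[Install]
WantedBy=multi-user.target
```

With AutoUpdate=registry, podman-auto-update pulls newer images for that tag and restarts the unit, which ties back to the auto-update trade-offs discussed above.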

---

Honestly, the number of companies in the container space who don't understand the problems, have forgotten the history, and think we're innovating into new territory because of hyped-up branding is utterly baffling.

I'm not saying they're all bad, and better common tools are a good thing. But I see so many companies operating at [required complexity level] + 1 in the hope of no longer being bothered by simpler problems.



