People asked for the feature. The company gave a reasonable explanation of why they didn't want to do it: that it would increase the burden on them. As a very reasonable compromise they coded up the feature and are offering it to actual customers. That way they can offset the extra burden with money, again from actual customers. Nothing about this is unreasonable except the massively entitled whining.
This is not a feature, though, especially not a new one. This is just asking for payment for the software to not do something to your machine. Until now, this kind of behaviour was only seen in ransomware.
All the code required for this to work was already there. They only spent developer time changing it to only be available to Pro users.
You can rationalise this all you want, but this is just hostile, desperate and lazy behaviour from a company that is too desperate to make money but has continuously proven itself unable to come up with better ideas for attracting paid users. As others have mentioned, they got millions in funding so far but still have no idea how to make money.
Had it been announced a month earlier people would be sure it was an April Fool's joke.
Crazy how "not doing something unwanted to my computer" is considered a feature.
I remember when the user was in control of their own computer, not rando 3rd party companies. I issue a command to the computer, and the computer executes the command. If I want to update a piece of software, I update it. I guess this is a dinosaur mentality now.
This whole "allow the software to do whatever the developer's company wants or stop using the software" trend is pretty nasty.
Honestly, I remember getting into computers because everything was getting better. New capabilities made things faster, easier and smarter.
But sometime around the early 2000s, businesses started exercising more control over the users.
I think it was basically the iPhone that started a lot of bad precedents by removing the ability for the user to run his own software - both through the App Store, and through Apple's strict control of what software could run on the phone.
I think this was initially done for security/piracy/etc, but the power grab self-propagated.
Apple thinks they are responsible for your iPhone, but now they think they own it.
And when control reverted to Apple, they decided it was not important for people to see what was running on their phone, what it was doing, and who it was talking to.
At that point, software companies had more control over your phone than the users.
To me, the promise of computers got sort of tarnished - it's a little bit of a dark age.
When I hear of a new convenient feature, or smart-iot thing, I think that it will benefit the business and compromise the user.
But I still believe -- I think linux phones and tablets will be important checks and balances that can return control to the user and return to a more positive future.
No one is making you run Docker Desktop. It is not holding you hostage. There are alternatives in this very thread. Comparing it to ransomware is absurd.
When I read things like that, it pulls me away even further from continuing in CS. How blind are you that you do not realise that forcing a software execution is not ransomware?
I'll tell you what's wrong here: Docker is a fantastic technology, but it simply couldn't find a suitable market tactic.
Now it resorts to these shitty practices. I don't even know how you can begin to think that it would add ANY burden to them NOT to run the update on your computer.
And here is my problem: it's my computer, I decide what is run on it or not.
The next day some idiot gets hold of the keys to the Docker update system and we have a technological catastrophe on our hands, because suddenly nobody can decide whether that update is run or not.
Keeping control away from the user is the most stupid thing there is.
> It's my computer, I decide what is run on it or not.
Absolutely nothing docker has done contradicts this. You want docker desktop, you install docker desktop. You don’t, you don’t.
Docker has no obligation to provide a piece of software (for free!) with exactly the feature set you want. I can’t understand how anyone could think they do.
I installed the Docker Desktop I wanted. Then you reached into my machine, deleted it, and replaced it with something else I didn’t want. I don’t understand how anyone could think they have the right to do that. A desktop application is not a SaaS website. Docker Desktop is the thing it was when I downloaded it, not a piece of real estate on my machine for you to manage remotely as you see fit.
> I installed the Docker Desktop I wanted. Then you reached into my machine, deleted it, and replaced it with something else I didn’t want.
To clarify, you installed a piece of software with this capability. Thus, you did not install a piece of software you wanted.
Furthermore, Docker has the right to write software that functions and updates as they wish. You have the right to not install it. It's really that simple.
A lot of software today is auto-updating, where updates cannot be disabled, if they can even be delayed. It's the developers' choice whether they want their software to function like this, your choice whether you want to install it.
>Absolutely nothing docker has done contradicts this.
Sure it does. It updates the software automatically with seemingly no way of stopping the behavior without first paying a fee.
Regardless of the intent of the individual installing the software on their machine, once installed it absolutely executes on its own; there's no denying this.
Fair enough, ransomware is certainly an exaggeration, but the absurdity of it comes from the fact that with most software, not upgrading just means the vendor doesn't do anything to you, whereas here they are actively doing something and asking for money not to. Usually, people pay money to other people to solve their problems; in this case the same people asking for money to solve the problem are the people creating the problem in the first place.
No it isn't, not in the way the comparison was made.
Previously, ransomware (and some games/media) were the only software products that asked for a higher price tag to not install extra code the user did not want on their machine.
It is a fundamentally predatory practice, because the developers completely misunderstand consent. Just because I have some version of your software running doesn't mean I want to invite you, the hypothetical software developer, to live on my system, auto-updating things from under me.
It's a pattern that software developers have fought harder and harder for - ignoring the sanctity of other people's computers "for the user's own good" - and it needs to die.
Not comparing it to ransomware is absurd, frankly. The more it's normalized, the more people will reach for it as a business model.
If not ransomware, it's at least malware, as is anything that modifies the user's computer without consent. Also, this all-or-nothing "accept whatever the developer demands or uninstall and go away" mentality is not helpful, and a false dilemma.
Docker doesn't have to support anything they don't want. They can support only the latest version and just refuse to support obsolete versions. They can even deprecate API endpoints for versions they don't want to use anymore and ask users to upgrade. They could even sell support for those obsolete versions. This is how a large part of the industry operates and a great strategy for making money.
Docker Desktop is an app that runs on local computers. Just as everyone doesn't want Google or Apple forcing things on them, I also don't want third-party software developers doing the same and risking breaking my workflow because they didn't give me the opportunity to control and be careful about my upgrades.
This is entirely possible. I also work at a company that sells desktop apps, but we sell to non-technical users. All those users understand that upgrading to the latest version is routine when you have issues.
As far as I’m concerned Docker Desktop, like most desktop software, was finished years ago. No update has ever benefitted me in any way. But many of them have broken things.
I don’t need “support” to slowly triage, troubleshoot, and develop a fix when I know I can always fix the problem right now by downgrading.
I don’t need “bug fixes” or “stability improvements” when the only threat to stability is the updates. I just need it to stop fucking updating.
If I need it to do something new I'll upgrade. But 99% of the time I need it to keep doing what it did yesterday.
Alice gives Bob free stuff, therefore Alice is entitled to kick in Bob’s door at 3am, take back the free stuff, and replace it with something else? And Bob must be grateful to Alice for this?
It's not free, it's open source. They decided to build a company on the backs of a community that is more conducive to open source than closed source.
They got theirs. There wasn't anything free about it. They are the ones that fucked up and failed to build a successful business around an immensely popular tool.
This hostile behaviour towards a community that gave them everything is disgusting.
Companies ignoring social responsibility is pretty bad, but people white knighting dark patterns is so much worse.
This shitty behavior is so typical of docker. Their GitHub repos are absolutely full of shitty responses to the community's needs.
Their poor behavior and failure to work with enterprise is what resulted in k8s, rkt, and everything in between.
People asked for the feature to be disabled. There was a lengthy discussion on GH [0], aptly named "Please don't upgrade docker without asking first", a few months back regarding why this shouldn't happen (auto updates, that is).
Considering that the team that codes the software acknowledged they screwed up [1], having to pay for this means I have to agree with the comment that says this amounts to ransomware. "Pay us or risk getting f*ked up by our updates".
> Nothing you paid for or created is being held hostage here.
Nothing paid for, sure, but what happens when an auto-update removes a feature you relied on previously? It's completely within their rights to remove this feature from their product - it's their product - but it feels very disrespectful to potentially automatically remove working code from my machine unless I pay up.
I actually think the two sides to this argument completely come down to whether or not you see this as a "dick move" or not. I see it as a dick move, they have every right to do it, but now I'm looking for alternatives as dick moves usually predict more dick moves in the future.
Uh, you’ve been happily using docker for years and have your entire infra built on it, only to have the company suddenly decide that auto update is a great idea and break your stuff all the time?
Like, it’s not ransomware, but it’s not as if you can snap your fingers and move to something else.
Even if you can, that seems entirely counterproductive for the Docker company.
Yes it does. It may not run in production, but a lot of us use it locally to replicate production as closely as possible. If they push out yet another breaking change and FORCE me to upgrade from a working version to that broken version, my local machine is broken.
> The solution to your problem is entirely in your hands, just hit that uninstall button
Well, you might be correct here, but still having automatic updates as a paid feature seems a bit excessive.
Applying your logic, having to pay for such a basic feature is madness.
Don't forget people found out about this in the worst way possible.
Imagine starting to use the app, then they introduce a breaking change, and when you go researching you find out that you have to pay to avoid breaking updates.
I grew up with shareware, nagware, and crippleware. It was annoying and we grumbled some but at the end of the day recognized that people were trying to make a living.
This doesn’t seem nearly as bad as that. They don’t want hundreds of different versions out there. As much as people claim otherwise they will get bug reports from old versions. Anyone that’s ever supported users knows that’s going to happen.
> They don’t want hundreds of different versions out there. As much as people claim otherwise they will get bug reports from old versions. Anyone that’s ever supported users knows that’s going to happen.
- Make it clear that only a certain version or a sliding window of versions is supported.
- Close reports from unsupported versions.
- Done.
Other projects/companies do this just fine last I checked.
If that works for you, you should definitely run your business that way. Likewise, if you're a customer of docker I'm sure they'll be very happy to hear from you as to what it will take to keep your business.
Yesterday I grabbed a free book off a stoop in Brooklyn. Should I go on Twitter and trash the homeowner because there are pencil marks in it?
Do the pencil marks randomly appear on pages you needed to read because the homeowner magically changed them remotely? Did the homeowner then tell you that you could make it stop happening if you paid them? I think yeah, you should trash the homeowner for being a dick in that case. Sure, free book, and the homeowner presumably thinks the book is better that way, but at what point does this 'free book' become the responsibility of the new owner? Where's the line between 'as is' and 'as is, but we're gonna randomly make it worse'?
Let's use your analogy. Is it okay for a free ebook to remotely update (in a way that makes the version you previously downloaded gone forever)? How about a free documentary you downloaded? A textbook?
Would it still be okay if the update takes content away or otherwise makes the item less valuable? What if it introduces content you find objectionable, and don't want on your machine? Still okay?
No, it's not. That's ridiculous.
If you agree to give something away for free as part of your business strategy then you can make all the changes you want to future versions. You can even stop offering new downloads or updates for free at all. What you shouldn't be able to do is retroactively change the product you already gave me for free.
You made a business decision. If it later turns out it was a bad one then tough luck. I have rights as a consumer and as a human being. Yes, even if you give me something for free. To suggest otherwise is blatantly anti-consumer and morally bankrupt.
Your whole series of replies has been "Docker can do whatever they want, don't like it? Uninstall!"
People are sharing their opinion on the direction of a popular software product. I'm confident they understand how to uninstall Docker if that was their intention. Continuously pointing out it's free like that's a valid defense is mind boggling.
> Yesterday I grabbed a free book off a stoop in Brooklyn. Should I go on Twitter and trash the homeowner because there are pencil marks in it?
For this to be a valid analogy, the homeowner who left the book out would have to demand you pay them money, otherwise they'll enter YOUR home and update the content of the book the next time you try to read it. Of course this is completely acceptable because you can always throw the book away.
I don't see it that way. The team has released new versions with major issues on a fairly regular basis, and this was exacerbated by having a forced auto-update that could only be resolved by dropping down to v2.5. So they pushed people to an older, stable version they didn't want to support.
To my mind, the decision here shouldn't have been to make people pay protection money to stop a new release breaking their setup. They could make the whole damn thing paid-for instead and I'd be happier with that.
There is clearly something missing in their development and QA process that causes this auto-updating to be so visible and painful. They could have kept the feature as-is and worked on making sure new releases were solid, while also having a mechanism to revert.
I have no problem with my browser updating itself that way, as I never notice if something broke and got rolled back in the canary stage.
To be honest, avoiding criticism by whining about entitlement adds no more to the conversation than pure entitlement itself. I can't see any purpose it serves other than to start a flame war.
They didn't just put out Docker with a sign up that said "free stuff" though. They actively solicited the users; they told the community "Hey, use our software, you can trust us". If you do that you do have an obligation.
It's like, if you beg your friend to let you give them a lift and then don't show up, you can't turn around and call them a "choosy beggar" because the lift was supposed to be free.
The response in that thread isn't _remotely_ reasonable, of course you can't make a developer tool like this auto-update.
Or give away free planks to build houses, but then later say now everyone has to pay for the planks, or you'll magically change the planks to a slightly weaker kind of wood.
I don't understand - people aren't asking for Docker Desktop's developers to support old versions of the software in perpetuity; people are asking to forego an auto-update at their own risk. What burden do you think this places on the Docker Desktop developers, exactly?
That's how I understood it. It would be very entitled to say "Don't force me to update and support my old version until I'm ready". This move seems like the opposite of what we all expect: There's a new version, the one you are on is EOL, upgrade or beware. What was wrong with this, other than it didn't support their revenue goals?
There is a popular modern school of thought that developers should be the ones to decide which versions of their software are running anywhere, and what versions they are; that they are entitled to that level of control, and should exercise it because their users cannot be trusted to make an informed decision about running outdated and insecure software.
I think it's paternalistic and egotistical but the usual counterpoint is grandma getting a virus because Windows is unknowingly out of date. With developer tooling that holds a bit less water IMO.
Sure, it's within their rights to do, and we can't expect to get everything for free, but making "skip update" a paid feature? How tasteless is that - do we really want to defend that? You see that button as a free user and it's a bit of a slap in the face, isn't it. It doesn't make me want to pay; it makes me think the company is completely out of touch. I'd rather uninstall and look for alternatives than pay to skip updates.
> it makes me think the company is completely out of touch
As interest rates come up off the zero bound companies are increasingly going to have to learn how to make profits again. The culture this is out of touch with is not long for this world.
After reading your comment I think I agree with you. Most software nowadays requires you to update if you want it to keep working. You cannot play an online videogame without updating to the latest version. There is typically no way to use a previous version of a website or web app.
We have gotten more entitled and now when we see something that doesn't exactly meet our liking then we cry about it.
Not surprising honestly - docker has had more than $300M of investment and still has no decent stream of revenue to pay their 300+ SV employees.
The entire industry knows it: the Docker devtools brought containers to the masses, but anything beyond that is being taken by cloud vendors / Red Hat etc...
The Open Container Initiative contributed massively to making Docker the standard for containers, but also paved the way for other initiatives to replace it...
I personally believe it will also be its death sentence. Seeing how many competitors are coming to offer a better / simpler container tool chain, plus a lack of leadership after the founder left with a big check from the fundraising, it's just a matter of time before Docker goes bankrupt.
I believe WSL2 and Apple's virtualization framework may eventually kill Docker desktop. Both provide enough underpinnings for an open source solution to match the "ease of use" thing that makes Docker Desktop popular. That leaves Docker Hub as their last revenue stream.
Docker is basically, at its core, a shell around cgroups/chroots and filesystems. An overengineered shell with a confusing cmdline (which got better, to be fair).
There's no "technology" there per se (in Docker "core"). Docker containers and Dockerfiles, maybe? Which got eaten by other orchestration technologies?
> Which got eaten by other orchestration technologies?
Exactly - Kubernetes wants to remove Docker, and the Docker orchestration layer "Docker Compose" is completely dead.
Again, the founder has left the company with millions; there is no leadership whatsoever. It's just a zombie company losing millions every year until it declares bankruptcy.
Google / Red Hat are very well aware of that; that's why they are trying to decouple their products (Kubernetes / OpenShift) from docker before it officially becomes deprecated.
Docker compose still seems like the easiest solution for a quick local dev env, no? Is there an easier turnkey solution for users? (I don't care about the setup as much as the ease of use by devs who didn't set it up.)
Docker compose is still really good for the use case that you want to lump a bunch of services together and go. It's great for local development and it's just fine for deploying a set of services to a single host.
The "issues" with compose are 1) it doesn't make Docker any money, thus doesn't see much in the way of enhancements, and 2) it doesn't even try to do the enterprisey things that Kubernetes does.
Also, you can drop podman-compose in and run most docker-compose files with no or minimal modifications, so it's not like docker-compose itself is even unique.
skaffold and minikube (which runs a simple Kubernetes cluster on your local dev computer) are great - the developer just types “skaffold dev” in their terminal and skaffold handles the entire process of building and deploying code into minikube, with instant live updates whenever files change.
This would be fine if Docker wasn't literally force-fed to me over the past six years. It would be fine if their updates didn't totally fuck up my development environment, or completely break it from time to time... it'd be okay if on MacOS you weren't forced into using the desktop application.
*shrug* In the end I don't particularly care; I still run things like a caveman using processes, systemd units, disk images, and plan for my resource usage. I just find it mighty frustrating and think it's short-sighted. Anyone who gets burned by a Desktop update (which they will - this happened to me within the last two months) isn't going to think "oh, if only I paid them! Duh! Silly me." Instead it's going to seem like extortion: "Oh no! We broke your development environment! Whoopsie! If you upgrade to pro you can revert and continue working immediately!" Docker was originally "magical" and each year it becomes less "magical". I can only hope it is worth it.
I'd probably pay if this is an issue for me, just like I pay for other stuff that has popups for as long as you don't pay (Sublime Text, Little Snitch, etc). You're only getting Docker for free because you help them by upgrading automatically.
In fact, it's pretty cool that the only requirement is to stay up to date. Some companies make you pay to get upgrades, which is less secure in a way.
If someone is using Docker professionally, maybe it's not an issue. I don't mind paying for software, and it could probably be expensed - although I am sure there are cases where that isn't true or is more complicated.
No, what's galling about this is that Docker is used so widely. What about open source projects using Docker? Now you have to apply for an "Open Source Plan"? Great. How does that even work with contributors if they just want to install Docker to check out the project? If a project I was working on depended on Docker, I'm not sure if I'd rather spend time figuring out how to ditch Docker vs fill out their application.
In this case though I agree docker's move is a bit bait-and-switch, but I see things like Sublime differently. In the 90s my uncle paid for WinZip and I, as a dumb teen, started saying how dumb he was. He didn't skip a beat and just said: "they gotta eat, those people". It always stuck with me. Now that I have income, I pay for software.
One of the very few open-minded answers I've seen so far. Of course people will complain, especially on HN. Even though I agree Docker's communication and execution weren't the best, I applaud them, since they're trying to monetize their work with a different strategy than the standard "support plan". In the end, I really hope this ends up working for them, since it'll ultimately be reflected in the quality of the product.
If you ever launch a Linux version I'll happily be a paying customer.
If you use Golang, check out the newly introduced "embed" package, which lets you embed all your files into a single binary. I used to use Docker (and Kubernetes for a short while), got frustrated and switched to Buildah/Podman for quite some time, and have now switched to using embed.
I have a Golang server with a Flutter frontend interacting using gRPC-Web. Now I'm able to embed the Go server code, all the Flutter-generated web code, generated Protobuf code, TLS certs, all the other resources, EVERYTHING into a SINGLE statically linked binary with no external dependencies. I.e., you just run the executable and it just works. Of course it interacts with an external Redis binary, but by itself it's just a single executable that can be deployed with an scp to the cloud - how much easier can you get? You do need your own orchestration mechanisms, but packaging-wise it seems great.
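For anyone curious, a minimal sketch of what this looks like (Go 1.16+; the web/ directory name and port are made up for illustration):

    package main

    import (
        "embed"
        "io/fs"
        "log"
        "net/http"
    )

    // Compile the frontend build output (assumed to live under web/)
    // into the binary itself.
    //
    //go:embed web
    var assets embed.FS

    func main() {
        // Strip the "web" prefix so files are served from "/".
        sub, err := fs.Sub(assets, "web")
        if err != nil {
            log.Fatal(err)
        }
        http.Handle("/", http.FileServer(http.FS(sub)))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

The resulting binary carries every file under web/ and serves it with no filesystem dependencies at runtime.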
You had me until you said TLS certs. As an SRE, I'm fully on board with single-executable deployments, but you should treat your binary as public information - don't include private material; inject it into the environment at runtime.
Part of that is about the possibility of leaking your key, but the other half is that you should be running exactly the same binary in production and staging, and you should have different keys for each. Recompiling can introduce behavioral changes (even in a well managed build environment like Golang's where dependencies are versioned through version control tags and cryptographic hashes of those tags)
I have a local encryption mechanism that hides these keys from view in the binaries (say using grep or strings). And I'm not comfortable with the alternative of the certs being on the server in plain view as files; I feel having them inside the binary in an encrypted form is safer. I keep the keys and certs away from GitHub of course, so only I have them as files locally. Am I missing something?
Aside from improvements with organization and environment separation when you separate the configuration from the code (and also not having to roll your own solution), one of the security risks is that you accidentally mix up binaries. You will think that will never happen until it happens.
A bigger security threat, I think, is that you have the private key both on your server and on your computer. It adds another location where things could get mixed up, hackers now have 2 possible attack targets, and it's more likely that your PC gets infected than your server. Either way, now, if your PC gets infected or your server gets hacked they will have the private key, whereas if you only have it on your server they won't necessarily have it when your PC gets infected.
The safest solution if you don't want to store the private key unencrypted is to generate an encrypted private key with openssl. You would however need to provide the encryption key every time you start the server. You will still have the unencrypted private key in RAM, but that's inevitable and also the case with your current method. The private key (even encrypted) should never leave the server.
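For reference, the openssl side of that suggestion is a one-liner (file name is hypothetical; you'll be prompted for the passphrase, and again whenever the server loads the key):

    # generate a 4096-bit RSA key encrypted under AES-256
    openssl genrsa -aes256 -out server.key 4096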
Thanks for your response, much appreciated. I'm actually not planning to place the private key (or anything really) on the server other than the executable. I was in fact also thinking about using a password like you suggested (it's still under development). You're correct about the vulnerability of my personal computer and the need to take special care to protect the private key.
You are putting the private key on the server; it just happens to be encoded as data in the binary. It would not take an attacker long, looking at your executable in a sandbox, to figure out where it is - no matter how you've obfuscated it.
From a threat modeling perspective, nothing you do will prevent an attacker that is able to run as your application user (or root) on your server. That's fine; The level of obfuscation you've put into place will (possibly) keep some of the script-kiddies who aren't targeting you directly from realizing they've stolen your key.
The attack you should be concerned about is on your distribution side: you can't copy that binary anywhere else without revealing your key. You can't put it in a docker image repository or a java package repository or even an S3 bucket - because those become places where your key can be revealed. And you want to do those things. You want a copy of precisely the binary you deployed.
The key will be encrypted, not obfuscated - it won't be possible to retrieve it even if they have the binary without my password. This is what you also suggested, right? I just agreed with you above. Perhaps you missed the part about using the password? I'm confused.
There are two objections that remain: it's impossible to rotate the key without recompiling the binary (and every recompile is an opportunity to add bugs, though a good compile environment minimizes it), and it's easy to mess up the encryption - are you using a PKCS12 file format with DES? That's trivially crackable (to the point that most modern libraries recommend using a built-in password). And even if you've got the encryption part right, you're left with the password distribution problem, which is exactly the same problem: you've got a small (12 characters or 4k, not a lot by modern standards) bit of sensitive data that needs to be distributed to the app at startup.
Yes, you should generate a new keypair if the server disappears in a fire. No, you should not do whole-disk backups, but if you do secure them properly.
Podman[0][1] is an alternative daemonless containerization software. In my line of work, Docker poses too many security risks and Podman is the approved alternative.
Podman supports Docker images, Dockerfiles, and Docker-compatible CLI interfaces. Simply use alias docker=podman and you’ll never know the difference.
> Simply use alias docker=podman and you’ll never know the difference.
Unless you use Docker Swarm, in which case you'll lose access to simple orchestration that's not overengineered, unlike Kubernetes which you'll now be forced to use.
Or unless you use software that depends on the Docker socket being available (such as for Portainer, a graphical management utility [1]), in which case you'll lose the ability to use such software.
Or maybe you're one of the countless folks who tried using Podman but ran into a variety of issues that still haven't been addressed in many cases [2].
Or maybe you use Docker Compose and now have just lost the ability to run simple single node orchestration with YAML files, because Podman Compose is oftentimes flawed as well [3].
That's not to say that what Podman is attempting to do shouldn't be applauded, yet claiming that exchanging one set of leaky abstractions for another set of leaky abstractions is painless and will work for all of the people out there simply isn't true. I don't see Podman being a backwards compatible alternative to Docker on an API level, it'll simply kill it off eventually by supporting the OCI standard for containers and being a Kubernetes runtime at the lower level. Lots of people will need to re-engineer their approach to running containers and lots of time will be necessary to migrate away from Docker.
I share the same experience. Docker requires a root daemon for full capabilities. Podman claims it can do the same rootlessly; the reality is it almost does it all - the devil is in the details. Podman is to be applauded, yes, but fabulous tools have been coded that rely on the socket listener, so aliasing podman will not work with those.
I think Docker is a fascinating engineering story: an amazing tool that simplified and gave life to cgroups, which got released into the wild and then sat still while competitors offered decent alternatives that got adoption. They could have become the de facto container runtime implementation, and perhaps Swarm the de facto orchestrator, had they actually kept their distance ahead of every competing alternative. They could have charged almost whatever they wanted for enterprise licensing of a brilliant runtime, and even more for the orchestrator. They failed. Seeing that they hired 300+ employees and didn't maintain their pioneering lead is a testament to their inability to run a business.
The hope for docker to survive is that the open source community forks and improves what's now called moby. As a business they missed the boat.
Doesn’t look like it would be easy to replace docker desktop with this? The big convenience part is that you don’t need to manage your own VM to run things in.
So podman-compose is really not where it’s at. This is coming from a long time ex-user. It’s a simple project with 117 open issues and 37 pull requests going back to 2019 (very few/no comments). There are many missing features and incompatibilities. I’m an ex-user because as of podman v2.2 there’s support to emulate the Docker socket which allows one to use Docker-compose.
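For anyone who hasn't tried it, the socket emulation is roughly this (a sketch; the rootless socket path varies by distro):

    # serve podman's Docker-compatible API over a socket
    systemctl --user enable --now podman.socket
    # point docker-compose (or anything expecting dockerd) at it
    export DOCKER_HOST=unix:///run/user/$UID/podman/podman.sock
    docker-compose up -d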
Worrying in this thread how few people seem to realize that docker desktop is not a) just another word for ‘docker’ or b) just an app frontend for running docker’s daemon in the background.
People seem to forget: Docker is a Linux-only technology.
If you’re running a non-Linux desktop OS like MacOS or Windows you need access to a Linux system to run docker.
So docker desktop on MacOS runs a Linux VM for you under the hyperkit hypervisor hosting a Linux OS running containerd.
On windows the equivalent involves managing a dockerd running inside WSL.
So since we are talking here about an app that embeds an entire Linux OS image, the patching surface area is going to be considerable - it doesn't just have to be updated whenever docker releases a new version; it's actually a minimal Linux distribution.
This also means people suggesting ‘alternatives’ to using docker are missing the entire value that docker desktop brings to MacOS and Windows users.
That’s... a little bit of a stretch of what it means to be a ‘docker’ container. It’s more a ‘Windows container services’ image, which requires a host Windows OS to run...
But yes, this further gets to the confusion about what ‘docker’ is vs ‘docker desktop’. Docker desktop for MacOS is certainly not going to just magically make it so you can run “docker run -it microsoft-windows” to get a CMD prompt on your MacBook though (but with a Windows VM and the right environment variables set, maybe?)
So, this is a terrible UI for the feature, but there is an X button which hides the dialog. Apparently it used to be called "snooze", from this blog post [0]. All in all, it's really gross that they put an "upgrade to not update" logo on the button, but the actual behavior and the underlying reasons for it make sense. Supporting older versions is hard, if you are going to indefinitely ban a version, it's going to increase the burden of support on Docker, so they make it a pro feature.
It's a desktop app. There's absolutely no obligation for them to support anything. Server side, they can just deprecate APIs, and whatever chat/forum support they have can refuse to support outdated versions. They can even charge for it! This is how most companies go about it.
This is nothing more than desperation and lack of ideas on how to generate money, which is very unfortunate.
A more honest route would be just to charge for the app, charge for premium support, or come up with new Pro features by collaborating with the existing user base. This here is just lazy.
Or they could have done the same as many paid programs: charge money for updates. Instead they choose to do the opposite, which makes one wonder how they came to the conclusion that not updating is more desirable for their users.
Paying to not update is usually called "extended support." I believe it's not that rare, and I think it's even more common in enterprise than in consumer software.
The reason, I believe, is that upgrading creates cost for the user, while maintaining older versions creates cost for the maintainers.
Extended support implies them having to actually support something: providing security updates for it and taking extra care of the version. Docker could charge for extended support, charge for an LTS version, and even turn features they would otherwise deprecate into premium features.
This here is merely attempting to prevent the user from running an unsupported version on their own machine. Software tends to be about "solving user's problems". The only problem this "anti-feature" is solving is one of its own creation.
1. Pay for updates (or "Enterprise" features), which implies there's extra value to be had moving up (the new leased car every 2 years model)
2. Pay for extended support after official EOL (the "does anyone know Cobol?" model)
3. New version, drop-dead dates for support of older versions, stuff may stop working and you are on your own (that Windows ME box with all your MP3s in the basement)
This situation is some sort of forced #2. I don't know of any company that has done this without still offering #3.
> Paying to not update is usually called "extended support."
I haven't seen anyone demanding extended support, people just want to click "No" without having to pay for that privilege. No support, no extra sauce, just a button for the users who don't want to risk breakage in that moment.
Considering how updates randomly break things and wreck volumes and images, my only assumption is that this is ransomware, similar to how Windows updates operate.
I don’t know how I did it, but I somehow managed to stay in one Docker desktop version (2.1, I believe) for over a year, then I had to update to fix a bug and suddenly Docker started automatically installing upgrades for me. That sounds like a change to me.
I wonder if this was good intentions by some developers getting subverted by an unaligned leadership ethos? (panicked? too $ focused? not enough user advocacy? no idea!)
I can easily imagine a few meetings that went something like this:
1. Security engineer meetings: "Flipping to default-on for automatic updates has been a major win for ~billions of people to help them mitigate the steady stream of 0-days exploiting gnarly web browser internals. Docker adoption isn't at that level but still big, and the individuals are pretty important. We should try it for our community too!"
2. Sales & CEO: "A ha, amazing! We can turn this into a $ win too. We've been needing more $ and conversion triggers for our install base now that we sold the enterprise business. Anyone with docker desktop on for > 2w is effectively a qualified lead with this, and this might finally tip them over to paid! $5/mo is nothing and this is just annoying enough that it might do it! (And if a developer won't pay for that, they're a freeloader / 1% / ... who'd never pay anyways...)"
3. PMs + designers + marketing + other user advocates responsible for trialing & trust: "Wait a minute! For their core workflows, developers need to easily control the versions of their tools in their environment. Messing with that willy-nilly for system version #'s breaks brand trust for our paid conversions!"
4. Sales & CEO: "Maybe. Let's try it and see. Overruled in the name of business experiments, and we can reverse course if it's too damaging!"
That's the majority Hacker News kind of opinion - "Sales or the CEO fucked it up for the engineers" - but another likely explanation is that the main source of frustration (forced updates for everyone) came from the engineering side, because most developers hate maintaining older versions. Then someone said "you can't do that to enterprise customers, they demand to pick their own versions", which resulted in this button for the paid version.
This is a desktop app, so there's no need for engineering to maintain older versions. They can just ask users to upgrade to the latest before opening support inquiries. They can even deprecate old web endpoints if they want, and again ask users to upgrade to continue using.
I'm agreeing with you here for the first part: that's another great well-meaning dev explanation.
The ball-dropping is sales/product/leadership not being aligned on user advocacy. "Our velocity & sales > breaking user environments" means either the former is overly prioritized, or the people making decisions are too far from users (developers) to understand their daily needs. Managing docker versions for dev/prod drift isn't "1% of devs 99% of the time" but "99% of devs 1% of the time", so it's a real miss.
In an alternate org where the user advocates lead might look more like:
* <reject early on the current feature plan proposal as user hostile and instead...>
* free: default to auto-update (or not)
* free: flip that default in settings whenever you want
* pay ("Pro"): one-click to switch to another version, both forwards & backwards
* pay ("Enterprise"): centrally managed update schedule, version tracking, audit logs, etc. for all devs in your org
Folks there are some of the best, so I'm sure most of this was considered, and thus it's an org/process problem
Am I the only one who has seen the pattern of using one Docker container per VM? I.e. the whole VM does nothing except run one container, or multiple containers that are all part of the same app and run as a single unit, not distinct ones. I talked to a few friends and at least one of them told me they've seen the same pattern used in their company too.
I still don't understand why every software shop on earth feels the need to start with docker/k8s/et al.
Build the damn app one time using SQLite on a single box. Then, figure out if the market even gives a shit about it. Building out a multi-cloud application architecture for your first prototype is a massive mistake. You don't need containerization tech if you can clone your application, hit F5 in visual studio, and the whole stack starts up automagically. You will probably find the performance of a single, powerful machine to be more than enough for your expected user base. SQLite can handle a fuckload of 'concurrent' writers if you know what you are doing.
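For what it's worth, the "know what you are doing" part is mostly WAL mode plus a busy timeout (database name made up; note busy_timeout is per-connection, so set it from the app on each connection too):

    # WAL persists once set; readers stop blocking the single writer
    sqlite3 app.db 'PRAGMA journal_mode=WAL;'
    # wait up to 5s on write contention instead of erroring out
    sqlite3 app.db 'PRAGMA busy_timeout=5000;'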
There are now framework features available in some areas that help to obviate the need for containerization. If you are a .NET shop, look into self-contained deployments:
We have been using this deployment approach for ~3 years now and we have yet to have a single situation where a required dependency was missing or broken in production. And, we deploy to a very wide range of customer environments with all sorts of horrible IT practices applied to them.
Imagine being able to unzip a build of your software to a blank windows/linux server and expect that it work flawlessly 100% of the time, regardless of any prior/lack of configuration or other supporting dependencies on that machine.
To me, starting with Docker makes a lot of sense. I can isolate my development machine from the code execution environment. It makes it a lot easier for everyone on my team to run their preferred OS (Mac, Ubuntu, Arch, even Windows) and still be easy to run the built code.
Containerization isn't really that hard. At its most basic level, a Dockerfile is just a list of commands needed to build/run your program. I'd personally much rather have that as a sort of self-documenting build process than dig into "how does Visual Studio build things? Why is my local dev build process (F5 in Visual Studio) so different from our release build process (msbuild commands or whatever)?" Or worse, "why does our release process involve Joe hitting F5 on his machine?"
Deploying a container isn't that hard - I agree that k8s and such is likely overkill for most startups. A basic "docker run" bash script or something in that vein is not terribly difficult to write, maintain, and hook up to CI/CD. Even so, with growing cloud support from GCP/AWS/Azure etc, the leap from "We built an isolated container for local dev" to "Now we run that container in prod via k8s" is shrinking.
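To be concrete, the kind of "docker run" script being described can be as small as this (image name and ports are invented for the example):

    #!/bin/sh
    docker pull registry.example.com/myapp:latest
    docker rm -f myapp 2>/dev/null || true
    docker run -d --name myapp --restart unless-stopped \
        -p 80:8080 registry.example.com/myapp:latest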
As you pointed out, there are other ways to do this, but at least right now Docker is fairly widely used and it's not limited to certain toolchains/languages. Using something just because it's popular seems like a bad idea on the surface, but if two tools are both reasonably sufficient, I'm definitely leaning towards the more widely used one - it means it's more likely new hire will be familiar with it on some level, there's more blog posts with best practices, etc.
> Imagine being able to unzip a build of your software to a blank windows/linux server and expect that it work flawlessly 100% of the time, regardless of any prior/lack of configuration or other supporting dependencies on that machine
> > Imagine being able to unzip a build of your software to a blank windows/linux server and expect that it work flawlessly 100% of the time, regardless of any prior/lack of configuration or other supporting dependencies on that machine
> I mean, that's basically _why_ Docker exists.
It is, but Docker isn't it. At the least, Docker won't run side-by-side with VirtualBox on Windows, so "regardless of prior configuration" is not met. In general, Docker adds an extra dependency on top of a blank system: now you need to get Docker set up and running first, and then you can deploy your app. The alternative in question is about controlling the build to such an extent that you can reliably deploy the artefacts onto any system without having some runtime environment prepared.
I don't view Docker as an "extra" dependency: I view it as the _only_ dependency.
For example, at my last job, before we switched to Docker, a client dev that wanted to run a local backend had to install Postgres, apply the initial schema, and then download and run the app. Not only was this a bunch of steps the client devs weren't familiar with, it led to backend devs being reticent about using the most appropriate tool for new features (e.g. shoving service discovery awkwardly into SQL instead of into something like etcd - and before anyone says "well that's fine" or "just do xyz instead," realize that's just an example of several). It was annoying to client devs to have to add new tools all the time, and annoying to backend devs to have to constantly troubleshoot misinstallation / misconfiguration.
When we switched to Docker, the client devs could run the backend without having to manage anything. Install docker (once), then run a very basic script that basically just ran "docker-compose pull; docker-compose up".
I don't generally want to be able to deploy to arbitrary environments. I want something that is easy to build, builds consistently, and lets all my fellow devs run whatever host they need to be maximally effective.
On Windows, it's true, if you need VirtualBox specifically then you can't use "Docker for Windows". You could either move the VB stuff into Hyper-V, or run Docker directly on a VB VM (a little less turnkey, but not particularly difficult).
> I still don't understand why every software shop on earth feels the need to start with docker/k8s/et. al.
YES YES YES
I am certain at this point they're doing it only to show off and to have something complicated enough to talk about and train new hires on.
One big argument for Docker is no dependencies, but Go and C# already can create fat native binaries that have no dependency on anything else (no .net framework or even VM, all native, same thing in Go). I believe Rust too offers the same thing. There is no excuse with all those different languages all supporting that.
There absolutely are complicated apps that justify Docker and k8s, but the vast majority in the real world do not fall into that, and most certainly that includes small ad-hoc internal services.
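For the Go case that's literally one command (standard toolchain flags, nothing exotic):

    # fully static binary: no libc or runtime needed on the target
    CGO_ENABLED=0 go build -ldflags='-s -w' -o app .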
> One big argument for Docker is no dependencies, but Go and C# already can create fat native binaries that have no dependency on anything else (no .net framework or even VM, all native, same thing in Go). I believe Rust too offers the same thing. There is no excuse with all those different languages all supporting that.
Using Nix, you can build a self-contained deployment for just about any language/runtime you can imagine, and the target machine doesn't need to run Nix.
I've seen it, and I've also seen it in Kubernetes.
It's obviously not the intended use case, but it can have the advantage of plugging in much nicer with your existing tooling.
As an example, say I have a Kubernetes cluster and a new app that needs a full VM.
My options are to containerize it (it likely already comes this way) and let Kubernetes schedule it on its own VM, or to use a different tool to manage the unique VM for this app.
Unless there are reasons to avoid the containerize approach, I'm going to stick with the existing tooling rather than add additional complexity.
In most cases I've seen, the apps don't need more than a couple dozen megabytes out of the gigabytes in the VM. It was crazy seeing how many resources were being wasted for no good reason.
My company does it. Our atomic unit of software is a VM. All of our isolation, resource provisioning, regulatory compliance, lifecycle management, access management are based on VMs. Another way of saying this is “vms are typed.”
So docker is basically just a package manager and supervisord replacement for us, because both ops and devs find it easier to do deploys with containers than with software on the host.
We use no actual features of docker; basically everything is disabled. We could use podman, but that would be more effort for the same thing.
Yup. I feel they try to justify themselves and feel they've done something complicated that takes time for newcomers to understand, when in reality the whole thing could be a single Go or C# binary, and the entire utilized cluster power is less than a single mid-sized VM that could just run said binary, without any of the external dependencies that usually get used to justify Docker.
If people paying the money knew the amount of waste being done...
To add to the other replies, remember that containers do not provide isolation for security, they are simply namespaces.
Rootless mode helps provide some isolation and individual machines also provide some isolation.
Anyone who can launch a container with the `--privileged` flag has the ability to read the host VM's disks, or on a physical machine even update the BIOS etc. Launch a container with that flag and play with mknod and dd against the host's root drive if you don't understand that attack vector.
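If you want to see it for yourself, it's only a couple of commands (a sketch; major 8, minor 0 is typically the host's /dev/sda - obviously don't run this anywhere you care about):

    # on the host
    docker run --rm -it --privileged alpine sh
    # inside the container: recreate the host's first disk as a node
    mknod /dev/host-disk b 8 0
    # read the host's partition table straight off the raw device
    dd if=/dev/host-disk bs=512 count=1 | hexdump -C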
The question you should ask is why you aren't using user namespaces or isolated machines if you are attempting to follow the least privileges and zero trust models.
While the choice of VM vs container is not one of good or bad in the general sense there are several reasons to choose containers.
1) Containers reduce the number of shared dependencies between the VM and the application. A container based on Alpine will even be using musl libc vs glibc. And a VM running on a private cloud or different cloud providers will require different dependencies. As dependency hell is NP-complete, increasing the number of dependencies dramatically increases the effort of testing, troubleshooting, and maintaining a system. The simplified contract between a container and its host, vs a binary on a VM, reduces that cost.
2) Instantiation time is much longer for a VM than a container.
3) Container management systems like k8s etc. tend to have more robust health checking, service registration, and recovery tools. If one is using containers already, the cost of maintaining a parallel infrastructure for VMs is often high.
4) Developers can often run a container on their desktops/laptops and iterate faster as they avoid the challenges and costs in maintaining dependencies called out in #1 or the expense and time of spinning up development instances.
There are companies and groups that use containers and container management systems because they are the hot new thing, but even before the public cloud or even high-density VMs were a reality, I was on a team that leveraged Linux VServers for all core services. It is still the only company I have ever been at where we could actually switch over to a cold DR site, with everything from DNS to Oracle EBS being up, in less than half an hour. All with only rsync and tar, for a publicly traded company.
You can get around package version selection being NP-complete by adding other constraints, like enforcing semantic versioning and auto-updates. But IMHO using namespaces (containers) is one solution one should consider.
Here is a link to a paper that describes how the package dependency problem generalizes to the NP-complete SAT problem.
For me, containers are a way to limit that problem: the container host only needs to satisfy a contract for a far more limited set of needs (kernel ABI, network, storage).
Docker the company seems to be in a pretty tough place. Docker desktop and their hub are the only revenue streams they have. That's after ~$300M of investments.
I find plain Linux namespaces as sufficient for many things, for example:
unshare --map-root-user chroot ~/rootfs /bin/sh
Sure, it might not be able to clone the rootfs for you, but you don't need snapd to install it and it comes as part of util-linux, essentially being part of almost all Linux systems. (https://ilearnedhowto.wordpress.com/tag/unshare/ is a nice introduction.)
Shout out to the docker.io debian/Ubuntu package maintainers, which is what I use on all systems I manage.
After being burnt badly by the "official" docker auto-updates breaking stuff a few times, I'd had enough and sought a better solution. Turns out it was already there - no 3rd-party package repos required.
Linux distros' package managers go to very great lengths not to break users' work environments. They keep things working, secure and stable, and I am very grateful for it.
I hope someone will make a Podman Desktop installer. On Mac it would set up a small VM and the Podman remote CLI. On Windows it would set up a small WSL2 distro. The problem with Podman is that it currently has a lot of bugs when not running on Fedora/RHEL or similar. And there is no free Fedora distro available for WSL that I'm aware of.
I'll note that Docker only does this on the virtual machine infrastructure wrapper needed to run a minimal Linux VM to run Docker on non native platforms.
If you run Docker on Linux, which it was designed for, it is totally open source and free.
If you are on a Mac it is assumed you like paying for GUIs on top of things that are otherwise free. That checks out.
Many comments seem to be critiquing Docker’s monetisation strategy.
If this is accurate, why don’t they just ask users what they would pay for? I’m sure customers understand that things need to be financially sustainable, so why not just ask the userbase their vision of a monetised Docker?
Docker Desktop is really good, at least on Windows (I don't have any macOS experience to comment on it there). I've been using DD on Windows since it was available and use it for full time development.
On Windows Docker Desktop is really fast, rarely crashes (once every couple of months I'll encounter something strange and restarting it always fixes it), it works seamlessly with WSL 2 and it seems to install and run without issues on a huge array of hardware / software combos. The last one is important and I'm making this claim based on the lack of support questions I get on my Dive into Docker course around installing Docker. Prior to Docker Desktop there were so many reports with Docker Toolbox around installation issues / errors, but now I get almost none related to installation and this is with a pretty big sample size too.
It's also a quick way to get a 1 node Kubernetes cluster running on your dev box which works perfectly out of the box, even exposing your services over localhost. Minikube and other solutions still require manual steps to get this behavior and personally I've found there to be a number of installation issues that aren't present with Docker Desktop, all of which were fixable but still required manual intervention to fix.
I think GP is specifically complaining about "Docker Desktop" (the GUI), not "Docker" (the service) itself. Here are some alternative GUIs for managing Docker: https://www.cloudbees.com/blog/docker-guis/
Otherwise, VMs are usually overlooked as an alternative to docker images, most probably because many people entered the industry after docker images became popular and don't know anything else. If you design it properly and stick with established technology like QEMU, you can have the same environment without touching docker at all.
VMs are great. I've been using them for work since hardware virtualisation made it practical to run VMs with little or no performance penalty.
That said, swapping from Docker to VMs for things is nowhere near as simple or straightforward.
The near-frictionless ability to stand up/tear-down a 'known' good image and configuration, with the ability to swap in/out networking, volume or other environmental configuration is what made Docker so popular.
We use Docker containers for a wide range of things, and while we could reproduce all of that as VMs - it would take a lot more effort, and it would introduce a lot more delays waiting to build/replace a VM.
> people entered the industry after docker images became popular
I don't think this is the only reason people use docker. Docker lets you compile an artifact and deterministically recreate that artifact at any point. When you deploy this artifact, everything is hermetically included in it. No network access needed, no startup time. It just starts and it's good.
If you know of VM tooling that lets me do that without repeatedly spawning a VM (as Packer does) and gives me an effective way to modify the filesystem and copy my programs in, let me know. I haven't found anything VM-related that's simple to use.
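Concretely, the workflow being described above is just the following; the image tag is illustrative:
# Bake code and dependencies into one immutable, reproducible artifact...
docker build -t myapp:1.0 .
# ...then run it anywhere the daemon lives, with no network fetches at startup
docker run --rm myapp:1.0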
In addition: VMs still carry a performance hit much more significant than plain processes.
For Docker Desktop, you should be able to manage your own Linux VM, forward the dockerd socket to your host machine, and use the Docker tooling with the appropriate DOCKER_HOST environment variable. As far as I remember, that's basically how Docker Desktop works.
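A minimal sketch of that setup, assuming a VM you can reach over SSH at user@my-vm (a hypothetical host) with dockerd already running inside it:
# Point the local docker CLI at the daemon inside your own VM
export DOCKER_HOST=ssh://user@my-vm
docker ps   # now lists containers running in the VM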
Docker Desktop, unlike the docker command line tool and the dockerd server, is not free software, and isn't even source-available. It also transmits a HUGE amount of sensitive data about your system to Docker without your consent, so it's spyware.
I can't recommend avoiding this software highly enough. If you work on any sort of private codebase or under NDA, this thing is a liability magnet.
Install the free-software command line tool via a package manager (not brew, that one is also spyware) and set DOCKER_HOST to point at the daemon in your own VM.
> It also transmits a HUGE amount of sensitive data about your system to Docker without your consent, so it's spyware. [...] If you work on any sort of private codebases or under NDA, this thing is a liability magnet.
Can you please share a full list of what gets transmitted with and without the "Send usage statistics" option enabled? I'm assuming you have this data available, because the wording of your reply is so specific.
sneak, the commenter didn't ask how to reproduce it, they asked what it sends. What is in the zip file and what exactly does it upload when it crashes? Are you certain that it uploads the whole zip file upon a crash?
The commenter asked for free research, which I'm not about to reproduce for them (at a minimum it takes the time to format/reinstall a machine), so rather than not answer I told them precisely what they need to do to get the data they desire.
> The commenter asked for free research, which I'm not about to reproduce for them
Not really. You specifically wrote that anyone using Docker Desktop is now liable for breaking NDAs and that a "HUGE" amount of sensitive data about your system is being sent to Docker without consent.
I mean, I'm taking for granted here that you didn't wake up yesterday with intent to write those sentences without having ever done the research yourself at least once. You're the one making these claims. If you've done the research, why not just post it here so other folks can verify what you're saying?
What I'm getting from your latest reply is you're trying to make it sound way worse than it is and are using a lack of information to guide folks into thinking the worst case scenario by filling in the gaps with their own interpretation of what you wrote.
Transmitting a crash dump is much different from sending sensitive data without your consent, especially considering you can turn the "send usage stats" option off, which stops crash dumps from being sent, and the dump itself is only sent on a crash. Also, the help text under the option says that it sends crash dumps.
Your original reply made it seem like every time you run a container with a volume, the contents of your source code are sent to Docker, because in a lot of people's minds that's a "HUGE" amount of sensitive data, and it ties into your "private code base" and NDA liability sentence before.
> I'm taking for granted here that you didn't wake up yesterday with intent to write those sentences without having ever done the research yourself at least once. You're the one making these claims. If you've done the research, why not just post it here so other folks can verify what you're saying?
I have provided specific instructions for verifying precisely what I'm saying. I'm not going to spend hours reproducing this simply because I'm being cross-examined in a comment. It is immaterial to me if you believe my reports of the truth or not.
It's been a long time since I had Docker Desktop crash on me (it seems to be very stable these days), but I was sure there was an optional "send crash report" button?
Don't try to dictate to others how they should work, as though you're some kind of oracle on the topic.
I'll decide (together with my employer, when appropriate) what software and tools to use, based on a range of factors. Free-ness may be one of those factors, but it doesn't get to unconditionally veto everything else.
It's just a strong recommendation, GP isn't pretending to have authority over HN commenters. This type of 'command as suggestion' construct is pretty common in modern English.
No, it's not a suggestion, it's definitely a command.
Don't use nonfree tools, or you'll become a sharecropper ripe for abuse, such as the complete and utter "it's our computer now, fucko, even though you paid for it" nonsense demonstrated in TFA.
Free software can't behave like this because the moment it tries we'll simply patch it out.
Wow, I had no idea this existed. Thanks for mentioning it. I block the telemetry with settings and with Little Snitch, but this is what I needed from the beginning.
Sure, but not everyone is a "free software only" person either. VSCodium comes at a price if you've invested time in learning extensions that won't work in it.
It transmits your on-device activity to Google without consent. It includes a unique tracking identifier generated on install that never changes, like a supercookie. Every time you run brew, it transmits this to Google, which allows Google to assemble a city-level tracklog of your device based on client IP geolocation, along with a list of all the packages you have installed, and when.
It does this silently, and without obtaining any sort of consent, which is why most people are unaware that homebrew is spying on them.
> You will be notified the first time you run brew update or install Homebrew. Analytics are not enabled until after this notice is shown, to ensure that you can opt out without ever sending analytics data.
I'm confused- the brew developers say you can run `brew analytics off` to "prevent analytics from ever being sent" [1]. Is this not accurate? Are analytics still being sent? Is your concern with the consent, or are the brew developers lying when they say this command prevents analytics from being sent?
I think what he means is that there isn't explicit consent given for the analytics: consent would mean opting in, whereas this is opt-out. You can disable it, but that's not the same.
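For anyone who wants to opt out, Homebrew does document the switches; a minimal sketch:
# Disable Homebrew analytics on this machine...
brew analytics off
# ...or set this in your shell profile to the same effect
export HOMEBREW_NO_ANALYTICS=1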
When they implemented it, they opted everyone in and buried the notice in a wall of text. I only caught it when Little Snitch notified me that brew was reaching out to Google.
The project still doesn't seem to understand how bad of a mistake this was and how bad their response to it was. But as the project lead told us while playing the victim, if we're not contributors, our opinions on the matter mean nothing.
Every time I've seen anyone question their decision to embed Google spyware in their product, however, the GitHub issues get closed and locked, so I don't know if you'll have much luck. I stopped trying to convince them to behave ethically and simply use nixpkgs now instead (which, incidentally, in my experience works better), and I do my best to inform people about the facts so they can make their own decisions (something I wish homebrew would do, instead of unilaterally deciding to use their computers to spy on them).
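For the curious, the nixpkgs equivalent of a brew install looks like this; the package name is illustrative, and as far as I know the Nix client phones nothing home:
# Install a package from nixpkgs
nix-env -iA nixpkgs.htop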
What is spyware to you? If it's spying on me without consent and sending private information about my computer, it is definitely spyware, regardless of the database they use.
Since they don't ask for consent and use PII, it is illegal under the GDPR, and probably the CCPA and other laws too. It's also Not Nice™.
What private information about your computer is it sending? Browser, OS, screen size, location, ISP. Under the GDPR, the way I see things, none of that is PII.
It generates a unique identifier which it transmits on each invocation. The identifier uniquely identifies the installation of homebrew, linking all of those other bits of data together across time and space.
This should be cross-posted to r/BeggingChoosers.