Is it possible to modify the car to disconnect from the remote Tesla service and keep the 90 firmware on there? What critical functionality of the car would you be looking at losing, other than updates?
Is it normal for unionization votes to have this kind of opposition? 56.6% doesn't seem like an overwhelming majority, so I'm curious to see how it plays out for all of the employees there.
It would be interesting as well to know how/if the vote was drastically different based on role - it sounds like this was a company-wide vote, which would necessarily include many more roles than just software engineers.
I understand that Kubernetes is a complex project, but I struggle to see how this comment adds to the discourse.
For those that do need a solution like Kubernetes, charts like this are helpful, and the knowledge requirements certainly aren't too steep relative to comparable platforms.
What about Kubernetes makes it especially worth going in and commenting that you don't have a use case for it, in a thread dedicated to Kubernetes? Would you make similar comments about other platforms/libraries/technologies that you, for whatever reason, don't have a use case for?
Because echo chambers need their bubbles pierced every so often. Kubernetes is such garbage that no one runs it on their servers, and if they do, they have an army dedicated to managing it 24/7. Go to any company that's running it and ask them how they feel about it.
Because many people (including managers) take HN seriously and then go and try to implement this, or push for it being implemented, because they read it on HN. Unfortunately they cannot go to HN when an outage explodes in their face, caused by an unknown problem in k8s. There is no clear value proposition for using k8s. In most cases it is trading the time to figure out what would make the most sense for a project for black-box complexity, and that is a very bad deal: you pay the price once for the first and all the time for the second. If I do not comment here, then you have a nice echo chamber that is all fine and dandy with k8s, with no downside at all to using it. Btw, it is not only k8s, and HN is supposed to be a discussion site, not a fanboy club for broken tech. I understand that the latter is much more appealing to many people.
I definitely agree with you - my team switched to an "ephemeral cluster" model which allows us to very quickly spin up an entirely new cluster and drain traffic to it as needed.
It's something that we've ended up implementing on our own with a lot of Terraform, but that's had its own obstacles and is something of a small maintenance burden. I'll be taking a look at sugarkube!
Definitely can sympathize with you on this, having spent plenty of time myself fighting some clusters that ended up in a broken state, and trying to get them going again.
I think that this pain is sometimes more severe in the context of automated provisioning tools out there and the trend towards immutable infrastructure - folks tend to not have the know-how to dig in and mutate that state if need be.
It's really important to have a story within teams, though, about either investing in the knowledge needed to make these fixes, or to have the tooling in place to quickly rebuild everything from scratch and cutover to a new, working production cluster in a minimal amount of time.
I'm just beginning my journey into the vanilla Kubernetes world.
As I build my knowledge I am also building Ansible playbooks and task files. After each iteration I shut down my cluster, do an automated rebuild and test, delete the original cluster, and start my next iteration.
I have an admin box with everything I need to persist between builds (Ansible, keys, configuration files, etc) and can deploy whatever size and quantity of workers (VM) needed.
It has been a good process so far. I haven't yet put things in an unrecoverable state, but if that happens I can rebuild the cluster to my most recent save and try again.
I don't see it taking a lot of resources to have a proving ground. I would definitely not feel comfortable going to production without the ability to reproduce the production clusters' exact state.
I anticipate exactly what you describe as a roll back mechanism. At all times I want to be able to automate the deployment of clusters to an exact known state.
I think building a cluster, walking away from it for a year, and then coming back to it for a break fix/update/new deployment is a huge gamble.
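For what it's worth, that rebuild loop could be sketched as a playbook along these lines. This is purely illustrative: the host groups, kubeadm flags, and timeouts are hypothetical placeholders, not anyone's actual setup:

```yaml
# Hypothetical Ansible sketch of the tear-down/rebuild/verify loop.
# Host groups ("cluster", "masters") and flag values are made up.
- name: Tear down the previous cluster
  hosts: cluster
  become: true
  tasks:
    - name: Reset any existing kubeadm state on every node
      command: kubeadm reset --force

- name: Rebuild from scratch
  hosts: masters
  become: true
  tasks:
    - name: Initialize a fresh control plane
      command: kubeadm init --pod-network-cidr=10.244.0.0/16

- name: Smoke-test the new cluster
  hosts: masters
  tasks:
    - name: Wait until all nodes report Ready
      command: kubectl wait --for=condition=Ready nodes --all --timeout=300s
```

The point is less the specific commands than that the whole cycle lives in version-controlled automation, so "rebuild to my most recent save" is one playbook run.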
Hi, I'm not sure if you saw my comment below, but this is 100% the use case Sugarkube [1] was designed for. Depending on where you are in setting things up, it might save you time to give it a try. There are some non-trivial tutorials [2] and sample projects you can use to kickstart your development. It currently only works with EKS, Kops and Minikube though, so it wouldn't be suitable if you're using something else to create your K8s cluster.
This is my thinking too. Build a new cluster and push everything over to it. If you feel like understanding the old one (and can afford it), keep it around and try to figure it out.
Very much agree, but I never managed to reach this point. One reason is that the amount of hardware needed for this is pretty prohibitive. The second is that configuring a new cluster (the last time I did it) was so much work, and I never managed to automate the process, so there was simply no way I could have created a new cluster in time to get our websites back up.
Adopting any piece of technology just because it's a fad or otherwise trendy is rarely a good justification.
It sounds like the primary reason you're deciding to move away from it is because you don't face any of the problems that it's there to solve, rather than it being an operational burden.
To be fair, k8s imposes a large complexity load. So what it gives you has got to be worth the time your developers/devops/sysops will spend learning to work with it.
We're migrating to k8s, and it's fantastic for us, but we have a large complex system that we're moving into the cloud, which really benefits from k8s features - loving the horizontal pod autoscalers, especially when I can expose Kafka topic lag to them (via Prometheus) as a metric to scale certain apps on.
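As a rough sketch of what that looks like (all names and thresholds here are made up, and it assumes the Prometheus adapter is exposing consumer-group lag as an external metric):

```yaml
# Hypothetical HPA scaling a consumer Deployment on Kafka topic lag
# surfaced through the Prometheus adapter. Names/values are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders-consumer
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders-consumer
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: External
      external:
        metric:
          name: kafka_consumergroup_lag
          selector:
            matchLabels:
              topic: orders
        target:
          type: AverageValue
          averageValue: "1000"   # target lag per replica
```

Once the metric pipeline is wired up, the scaling policy itself is just declarative config like this, which is a big part of the appeal.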
I think that the "Even in JavaScript..." comment was written with the idea that race conditions might not be present/possible in single threaded environments, such as JavaScript...I read it as "Even in a single-threaded environment, such as JavaScript..."
I don't think that most folks would hold up JS as some sort of gold standard of language design, and it doesn't seem that the author is doing so here.
On the scale of "understanding imperative semantics", awareness of race conditions is probably near the very high end of difficulty, while pointers are only somewhere in the middle. And that's a problem, because there are plenty of languages, JS included, that free you from understanding pointers while still making it very easy to write racy code. As a result, a lot of programmers are going around with a false sense of whether they are writing robust concurrent code, because they don't think it is concurrent. It doesn't say "concurrent" on it - this is a footgun naturally achieved with any sufficiently complex loop - and often they have made some effort to tuck mutability away in tiny functions, hindering efforts to find and fix the resulting race conditions.
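A minimal sketch of that footgun in plain Node.js - no threads anywhere, just two interleaved async tasks doing read-await-write on shared state:

```javascript
// Even in single-threaded JavaScript, interleaved async tasks can race.
// Both withdrawals read `balance` before either writes it back, so one
// update is silently lost -- no threads required.
let balance = 100;

async function withdraw(amount) {
  const current = balance;                     // read shared state
  await new Promise((r) => setTimeout(r, 10)); // simulate a DB/API call
  balance = current - amount;                  // write back a stale value
}

const done = Promise.all([withdraw(30), withdraw(50)]).then(() => {
  // Naively you'd expect 100 - 30 - 50 = 20; the lost update leaves 50.
  console.log("balance:", balance);
});
```

Nothing here says "concurrent", which is exactly why people ship this pattern without noticing.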
Inversely, it's relatively easy to abstract away the complexity of pointers (so we do), but useful abstractions that make race-y code impossible are horrendously hard to get right (so we don't).
Rust's borrow checker is a great example of the sort of complexity you _have_ to bring in to have your language protect you from data races.
That's true, but the actor model achieves freedom from data races by forcing ALL messages to be synchronized. This can be useful in some situations, but imo it isn't a panacea for concurrency issues.
I'm assuming you mean implementation-wise. Only cross-core communications need synchronization, and since messages are asynchronous you can pay synchronization costs only once for the whole batch.
> Can you give an example of why DNS based service discovery was needed?
Not to be flippant, but any case where things need to connect to other things. At scale they usually need to connect to other things through some sort of load balancer. You get that out of the box with kubernetes for services hosted inside the cluster, and there are straightforward solutions to ingress for clients outside it. Another important feature is pod scheduling. Yes you could wire up a few machines using docker compose and any of a few different networking approaches, but if one of your VMs dies are those workloads going to move to a healthy instance by themselves?
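As a concrete sketch, a plain Kubernetes Service (the names here are made up) is all it takes to give a set of pods a stable, load-balanced DNS name inside the cluster:

```yaml
# Hypothetical Service: pods anywhere in the cluster can now reach the
# matching backends at api.default.svc.cluster.local (or just "api"
# from within the same namespace), load-balanced across healthy pods.
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: default
spec:
  selector:
    app: api        # routes to all pods carrying this label
  ports:
    - port: 80      # the port clients connect to
      targetPort: 8080  # the port the pod actually listens on
```

That stable name keeps working as pods are rescheduled onto healthy nodes, which is the part docker-compose on a handful of VMs doesn't give you.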
Service discovery just makes it easier to link up services if your architecture is truly microservice-based. We used to have an incredible amount of config that tightly coupled our APIs; now we use a combination of service discovery and an API gateway (Ambassador) to decouple the services and cut down on the number of random endpoints in our config, and we also get the added benefit of load balancing, rate limiting, and additional logging.
There's always a tradeoff with scale. If you have four servers, then obviously all of this stuff is overkill.
> If you have four servers, then obviously all of this stuff is overkill.
I disagree, actually. I have four servers at home, and have some pods that have been running little tinkery things, and a bunch of open source software, with ridiculous uptime and little or no effort, even when I reboot one of those "servers" to do some gaming on Windows.
Now, do I need that uptime for all of those services? Not really. But for some of them, I want it, and it'd be annoying if I had to go figure out why they'd stopped running. The reality is, things just keep ticking without me worrying when they're on k8s.
These skills transfer into very in-demand job skills as well, and if I ever build anything that gains traction, I already have all the tools, configs, and knowledge to deploy that app across 500 generic cloud servers.
I agree with you. I think people tend to associate Kubernetes with the other underlying problems they're having with their infrastructure when they start thinking about using it. Just like it's tough when moving from 0 pieces of software in production to 1 piece of software in production, it's just as tough moving from 1 piece to 2 pieces. But if you do that transition correctly, then the 2 to infinity part is easy. I think you will find it just as painful to make that move with any orchestration system. (CloudFormation? Convox? They're not easy, and you get the feeling that nobody else is using them.)
I wouldn't recommend Kubernetes if you only have one application you run in production. Just rsync your production image to production whenever you remember to do a release. But if you have more than 1 thing, it's time to start thinking about it, because the "do whatever" that works great with 1 thing starts to break down when 1 becomes 2. That is not Kubernetes's fault. That's just the nature of the beast.
Because I can run `ping` in a busybox container that knows how to do all the same service discovery as a more complicated fully fledged microservice. No extra libraries. No smattering of supporting services (on top of DNS, which you need anyway, of course). Setup and debugging is 10x simpler than any of the more complicated service discovery solutions.
And it just works. Simply and intuitively. With everything. Because it's been an IETF standard since 1983.
Seems hard to get enough users to make this worthwhile at any given bar - people have to adjust their "out at a bar" time to include checking into a dating app.
The right growth strategy here seems a lot more localized - targeting specific bars/nightlife districts and seeding from the ground up. Can't get any useful signal from 10,000 users if they're all spread out across North America.
I say that after seeing this app promoted on HN a few times now - probably not a super effective growth channel.
The sensor reading could just as easily have been 24 degrees rather than 70 and caused the same crash under the author's proposal.
I'm not excusing the failures that led to this crash, but it seems like an oversimplification of the needed solution to suggest that ridiculous data simply be thrown out, when the benefit of hindsight is being used to determine where the threshold of "ridiculous" lies.