
In my head, the levels are exactly swapped. Connecting two wires together reliably is harder than through-hole work in my experience. Through-hole PCBs are actually designed for soldering, so surface tension basically does the job for you. Also, with a PCB you have a solid surface to push against, whereas with two wires everything's a bit looser. Lastly, if you want the connection between the wires to actually be reliable, you're probably looking at splicing, which takes yet more skill.

Interesting. I guess I see through-hole stuff as more advanced because if a PCB is involved, there's usually something expensive nearby that you can actually damage with (genuinely!) poor soldering technique. Through-hole or pads, that's ESP and drone flight controller territory for me, or building DIY batteries.

A random frayed power cord, well, you can dump a ton of heat into it and start over six times and it won't really matter. Worst case, you replace some insulation or a warped connector.


That's not really the culture of Debian, to be honest. Yes, they run old major and minor versions, but they ship patch updates as fast as they can. Even on Debian stable, you absolutely are supposed to update all the time. The culture of "just don't touch it" is a different one (but it also exists; I've seen it).

Let's Encrypt has to be down for days before people begin to feel the pain. DNS is very different: it breaks stuff immediately, everywhere.

No it doesn't. DNS breaks as soon as TTLs run out. It's your choice to set them so low that stuff breaks immediately.

What do you recommend then? DNS doesn't usually change that often, but if you mess it up when it does, you're in for some pain if TTLs are high!

Not the one you're replying to, but I'd keep TTL high normally and lower it one TTL ahead of a planned change.
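
To make the timing concrete, here's a minimal Python sketch of that rule; the TTL values and change time are made-up placeholders, not recommendations:

    from datetime import datetime, timedelta

    STEADY_TTL = timedelta(hours=24)  # normal, high TTL
    LOW_TTL = timedelta(minutes=5)    # temporary TTL around the change

    def lowering_deadline(change_at: datetime) -> datetime:
        # Lower the TTL at least one full steady-state TTL before the
        # change, so every cached copy of the record with the old, high
        # TTL has expired by the time you actually flip it.
        return change_at - STEADY_TTL

    change_at = datetime(2024, 6, 3, 9, 0)  # hypothetical maintenance window
    print("lower TTL to", LOW_TTL, "no later than", lowering_deadline(change_at))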

I would define "high" as double the time needed to fix a DNS issue, and account for weekends.

This is the way.

Unfortunately you can't set DNS TTL arbitrarily high (or low) without some resolvers ignoring your suggestion and using arbitrary values.

Most historical outages lasted minutes or hours. One arguably lasted much longer, when someone lost control of their servers due to civil war.

I haven't followed this closely, but have there been any... shall we say, plain outages longer than six hours? Or a day? Neither is an outrageous TTL.


This assumes that the host name you want has been recently queried. If it's not cached, good luck...

TL;DR: If it's not cached, does it really matter if it's offline for some time?

Long version:

If you're so popular all around that you really really want a very very short TTL, people will query all the time from all the places that "count", won't they? So it's gonna be cached.

If you're not so popular or not all around, what does it matter even if you had a very very short TTL? You're not losing much.


This is one category of good alerts, but not everything.

I think alerts are to ops what tests are to dev. You have "unit alerts" for some small thing like disk usage on a single host, "integration alerts" like literally "does the page load?", and then what you describe are "regression alerts", trying to prevent something that went wrong once from going wrong again. Those are great, but just as you wouldn't want a test suite that is 100% regression tests, I think it's also smart to get ahead of failures with some common-sense alerts.
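
For illustration, a "unit alert" in this sense can be as simple as a threshold check on one host. A minimal Python sketch, where the 90% threshold is an arbitrary example and a real setup would page rather than print:

    import shutil

    THRESHOLD = 0.90  # alert when a filesystem is more than 90% full

    def check_disk(path: str = "/") -> None:
        usage = shutil.disk_usage(path)
        fraction = usage.used / usage.total
        if fraction > THRESHOLD:
            # A real alerting pipeline would notify/page here.
            print(f"ALERT: {path} is {fraction:.0%} full")

    check_disk("/")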


While I hate suid as much as the next person, it's really not the problem here.

The bug being exploited gives you basically arbitrary page cache poisoning. At that point it's already game over. Patching a suid program is maybe the easiest way to get a root shell from there, but far from the only one.


And the Pentagon has historically gotten away with damn near everything even in the judicial branch by appealing to national security.


What problem is this actually solving? I've deployed DHCP countless times in all sorts of environments, and its "statefulness" was never an issue. Heck, even with SLAAC there's now DAD (duplicate address detection) making it mildly stateful.

Don't get me wrong, SLAAC also works fine, but is it solving anything important enough to justify sacrificing 64 entire address bits?


* privacy addresses are great

* deriving additional addresses for specific functions is great (e.g. 464XLAT/CLAT)

* you don't get collisions when you lose your DHCP lease database

* as Brian says, DHCP wasn't quite there yet when IPv6 was designed

* ability to proactively change things by sending different RAs (e.g. router or prefix failover, though these don't work as well as one would hope)

* ability to encode mnemonic information into those 64 bits (when configuring addresses statically)

* optimization in the routing layers from assuming prefixes mostly won't be longer than /64

… and probably 20 others that don't come to mind immediately. The ones listed here took me only seconds to think of.
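
To make "those 64 bits" concrete, here's a sketch of the classic way SLAAC fills them: the modified EUI-64 interface identifier from RFC 4291 (flip the universal/local bit of the MAC's first byte and insert ff:fe in the middle). Modern stacks usually prefer privacy or stable-privacy addresses instead, so treat this as an illustration only:

    def eui64_from_mac(mac: str) -> str:
        # Modified EUI-64: split the 48-bit MAC in half, insert ff:fe,
        # and flip the universal/local bit of the first byte.
        b = bytearray(int(x, 16) for x in mac.split(":"))
        b[0] ^= 0x02
        iid = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])
        return ":".join(f"{iid[i] << 8 | iid[i + 1]:x}" for i in range(0, 8, 2))

    # Example MAC; prints fe80::5054:ff:fe12:3456 (link-local prefix + IID)
    print("fe80::" + eui64_from_mac("52:54:00:12:34:56"))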


Privacy addresses... Isn't it silly to talk of privacy if the prefix doesn't change?


Absolutely schizo.

"I wish to participate in a global telecommunications network and I wish to connect immediately to all my friends and be available to them 24/7 and I wish to play games with strangers across the country and I wish to receive all my email within 300ms with no spam and I wish to watch the latest news from Iran in 4K streaming Dolby"... but priiiiivacy!


SEND secures NDP by encoding a hash of a public key into those 64 bits, and big, sparse networks render network scanning rather useless at finding vulnerable hosts, so there are reasons to make subnets /64 other than SLAAC.

Also, we can always reduce the standard subnet size in 4000::/3 if we ever somehow run out of space in 2000::/3 (and if we don't run out, then we didn't sacrifice anything by using /64s).
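
Back-of-the-envelope arithmetic for that: 2000::/3 fixes 3 of the 128 bits, so /64 subnets still leave 61 free bits of subnet ID:

    # Number of distinct /64 subnets inside 2000::/3
    subnets = 2 ** (64 - 3)
    print(f"{subnets:.2e}")  # ~2.31e+18, each with 2^64 addresses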


DHCP requires explicit configuration; it needs a range that hopefully doesn't conflict with any VPN you use; it needs changes if your range ever gets too small; and it's just another moving part really.

With SLAAC, it's just another implementation detail of the protocol that you usually don't have to even think about, because it just works. That is a clear benefit to me.


When it fails, you'll find there is no option to tune its behaviour.

Plug in a rogue router and see how quickly you can find it.


What kind of failure are you referring to? What would you want to tune? You can still easily locate all devices on your network, e.g. by pinging the all-nodes multicast address ff02::1.


This doesn't register as corpo talk to me, more tongue-in-cheek nerdy mission control talk. See also "rapid unscheduled disassembly".


There are a bunch of subbrands but there are also a lot of genuine small Android phone companies, especially in China.

Some of these serve some interesting niches that might now disappear due to this DRAM supply issue, e.g. Unihertz for extra small phones or CAT for extra durable worksite phones.


Is there any 'guide' to this ecosystem...because 'odd niche communications gear' is always interesting.


Notably they didn't fully shed it, they compartmentalized it. They proposed to split the standard into two parts: r7rs-small, the more minimal subset closer in spirit to r5rs and missing a lot of stuff from r6rs, and r7rs-large, which would contain all of r6rs plus everyone's wildest feature dreams as well as the kitchen sink.

It worked remarkably well. r7rs-small was done in 2013 and is enjoyed by many. The large variant is still not done and may never be. That's no problem, though; the important point was that it created a place to point people with ideas to, instead of outright telling them "no".

