If QC gets to the point where breaking RSA and ECC in the real world is actually going to happen, I'd imagine you will find a consensus rather quickly.
The argument regarding no certificate pinning seems to miss that just because I might be on a network that MITM's TLS traffic doesn't mean my device trusts the random CA used by the proxy. I'd just get a TLS error, right?
Not if someone can get a certificate issued by a CA your phone already trusts.
Imagine sitting in a cafe near, say, the embassy of a certain North African country known for pervasive, wide-ranging espionage, which decides to hijack traffic in that cafe.
Or imagine living in a country where almost the entire cabinet is literally (officially) being paid by that country's propaganda/lobbying body.
Or living in a country where lawful surveillance can happen without a judge's sign-off, at the whim of any police officer.
> Imagine sitting in a cafe near, say, the embassy of a certain North African country known for pervasive, wide-ranging espionage, which decides to hijack traffic in that cafe.
How would they get your phone to trust their CA? Connecting to a Wi-Fi network doesn’t change which CAs a device trusts.
Because there are well over a hundred trusted root CAs in any device you might use. A good chunk of these CAs have been compromised at one point or another, and rogue certificates are sold on dark markets. Also, any government can coerce a CA domiciled in its jurisdiction to issue certs for its needs.
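To see the point concretely, you can count how many root CAs your own machine hands to a TLS client by default. A minimal sketch using Python's standard `ssl` module; the exact count varies by OS and vendor, but any one of these roots can issue a certificate your client will accept:

```python
# Enumerate the root CAs in the default trust store. The count varies
# by OS/vendor, but each of these roots is fully trusted by the client.
import ssl

ctx = ssl.create_default_context()
ctx.load_default_certs()
roots = ctx.get_ca_certs()
print(f"{len(roots)} trusted root CAs in the default store")
```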
All modern browsers require certificates to be published in the certificate transparency logs in order to be considered valid.
These are monitored, things do get noticed[0], and incidents like this can and have led to CAs being distrusted.
It's not foolproof, and it's reactive rather than proactive... but in general, this is unlikely to be happening on major sites or at any significant scale.
I'd wholeheartedly recommend that people take some time and read through the CA Compliance issues on Bugzilla. The entire CA program there, in my opinion, does a fantastic and largely thankless job of keeping this whole thing on the rails. It's one of the few things I can say I had _more_ trust in the more I looked into it.
China Telecom regularly makes BGP announcements that conflict with Level 3's ASNs.
Just as a hint in case you want to dig more into the topic, RIR data is publicly available, so you can verify yourself who the offenders are.
Also check out the Geedge leaked source code, which also implements TLS overrides and inspection on a country scale. A lot of countries are customers of Geedge's tech stack, especially in the Middle East.
Just sayin' it's more common than you're willing to acknowledge.
Well yes, CAs and the ICANN model of DNS are intertwined and fundamentally broken in multiple ways. However the system as a whole is largely "good enough" as can be seen from its broad success under highly adversarial conditions in the real world.
That's not really how security works. Either it's broken, or it's not. Security is only as good as the weakest link in the chain. Whether it's good enough or not... hard to say.
That sort of reasoning only applies to algorithms - those shatter the way glass does. Other stuff is more pliable. It's entirely possible to shoplift but there's a nonzero chance you'll get caught. Is the supermarket's security broken? There are many known attacks against it so I'd say that it is.
Notice my wording above - fundamentally broken in multiple ways - by which I mean that there are clear and articulable flaws with the model. Nonetheless it's clearly quite functional in practice.
This is stopped by certificate transparency logs. Your software should refuse to accept a certificate which hasn’t been logged in the transparency logs, and if a rogue CA issues a fraudulent certificate, it will be detected.
Certificate transparency doesn't prevent misissuance; it only makes detection easier after the fact. Someone still needs to be monitoring CT and revoke the cert. I actually believe most HTTP stacks on Android don't even check certificate revocation by default.
I'm not too sure what the detection process is like, but being found to have signed fraudulent certificates results in your CA being distrusted, which is the end of your business. So it's not going to be done lightly, even if there aren't automated systems to catch it instantly (which there likely are, at least for major websites).
The detection process basically boils down to 'server admins need to check CT themselves'. A CA also doesn't have to be malicious; a non-CA malicious actor could also exploit a vulnerability in the verification process of an honest CA. Depending on the severity of the situation that's unlikely to get them removed from the root stores.
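That "server admins need to check CT themselves" step can be partially automated. A minimal sketch, assuming crt.sh's public JSON endpoint (`https://crt.sh/?q=<domain>&output=json`) and its `issuer_name` field; the authorized issuer string below is hypothetical, and you'd substitute the issuers you actually use:

```python
# Sketch of CT self-monitoring: fetch log entries for your domain and
# flag any certificate whose issuer you didn't authorize. The crt.sh
# endpoint and the "issuer_name" field are assumptions based on its
# current public API; verify before relying on them.
import json
from urllib.request import urlopen

def find_unexpected_issuers(entries, authorized_issuers):
    """Return CT entries whose issuer is not in the authorized set."""
    return [e for e in entries if e["issuer_name"] not in authorized_issuers]

def fetch_ct_entries(domain):
    with urlopen(f"https://crt.sh/?q={domain}&output=json") as resp:
        return json.load(resp)

if __name__ == "__main__":
    entries = fetch_ct_entries("example.com")
    authorized = {"C=US, O=Example CA Inc."}  # hypothetical issuer string
    for hit in find_unexpected_issuers(entries, authorized):
        print("unexpected issuer:", hit["issuer_name"])
```

In practice you'd run something like this on a schedule and alert on any hit, which is exactly the reactive posture described above.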
Interesting example: last year Cloudflare found out that a CA had been (incorrectly) issuing certs for 1.1.1.1. They only found out 1.5 years after the first cert had been issued. The CA didn't do it with malicious intent, and as far as I know they're still in business. https://blog.cloudflare.com/unauthorized-issuance-of-certifi...
I don't believe it's supposed to proactively check the logs as that would inevitably break in the presence of properly configured MITM middleboxes which are present on many (most?) corporate networks.
The point of the logs as I understand it is to surface events involving official CAs after the fact.
Clients are supposed to check. For example, Apple requires a varying number of SCTs in order for Safari to trust server certificates. https://support.apple.com/en-us/103214
So how does that work with middleboxes? Corporate isn't about to forgo egress security (nor should they).
I don't currently MITM my LAN but my general attitude is that if something won't accept my own root certificate from the store then it's broken, disrespecting my rights, and I want nothing to do with it. Trust decisions are up to me, not some third party.
Corporate-managed machines can control the software running on the computer to do anything. I'm not sure of the details, but Chrome certainly can support corporate MITM; there's likely some setting you have to configure first.
The default should be to reject certificates which aren't being logged, and if you as a user or corporation have a reason to use private certificates, you just configure your computer to do that. Which fully protects against the risk of normal CAs signing fraudulent certificates.
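At the application level, "configure your computer to do that" can look like the following sketch: a TLS client context that trusts only your private root for internal services, leaving the system store untouched for everything else. The `corp-root.pem` path is hypothetical.

```python
# Build a TLS client context that trusts ONLY a private root CA,
# instead of installing that root system-wide. "corp-root.pem" is a
# hypothetical path to your organization's root certificate.
import ssl

def private_trust_context(ca_file):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = True            # still verify hostnames
    ctx.verify_mode = ssl.CERT_REQUIRED  # still require a valid chain
    ctx.load_verify_locations(cafile=ca_file)
    return ctx
```

Scoping the private root to the contexts that need it means a compromise of that root can't be used to impersonate arbitrary public sites to the rest of your software.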
The entire point of transparency logs is to detect a cert issued by a different root CA despite both being trusted. The corporate MITM cert won't be present in the logs by design.
Ok, fair point. However, I would consider any MDM-enabled device fully "compromised" in the sense that the org can see and modify everything I do on it.
An MDM org cannot install a trusted CA on devices that aren't supervised (i.e., company-owned). By default on BYOD devices, such CAs are untrusted and require manual trust. It also cannot see everything on your device; certainly not your email, notes, files, or app data.
As someone who has an MDM-managed device, I beg to differ. Although this one uses the newer style of Android MDM, which involves factory resetting and doing special things during OOBE. Even if it used the older style, nothing's stopping the app from requesting file access, notification access, etc., and refusing to work until you grant the permissions.
Nothing is stopping any app from the Play Store from requesting any particular permission, not just MDM apps, right? And yet no app can read arbitrary filesystem data, including other apps' data, without your device being rooted first.
If anything, one of MDM's many purposes is to prevent orgs from enrolling rooted devices in their fleet.
Certificate pinning can be useful, especially in particularly sensitive areas, but I wouldn't expect it as a standard security practice. If anything, I appreciate that it isn't done, so that reverse engineers can thoroughly study the traffic on their own devices. I agree that it was odd that the article mentioned it as more than a quick note, let alone made a big deal out of it.
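For reference, the core of pinning is tiny: compare a hash of the server's DER-encoded certificate (or its public key) against a value shipped with the app. A minimal sketch of the comparison step; in a real client the `der` bytes would come from `ssl.SSLSocket.getpeercert(binary_form=True)` after the handshake, and the pin would be recorded out of band:

```python
# Core of certificate pinning: accept the connection only if the
# SHA-256 of the server's DER-encoded certificate matches a pin
# shipped with the app.
import hashlib

def matches_pin(der: bytes, pinned_sha256_hex: str) -> bool:
    return hashlib.sha256(der).hexdigest() == pinned_sha256_hex
```

This is also why pinned apps break under corporate MITM and under reverse-engineering proxies alike: any substitute certificate, however "trusted", has a different hash.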
Bubblewrap is exactly what the Claude sandbox uses.
> These restrictions are enforced at the OS level (Seatbelt on macOS, bubblewrap on Linux), so they apply to all subprocess commands, including tools like kubectl, terraform, and npm, not just Claude’s file tools.
Oh wow, I'd have expected them to vibe-code it themselves. Props to them; bubblewrap is really solid. Despite all my issues with the things built on top of it (Flatpak with its infinite xdg portals, all for some reason built on D-Bus, which extremely unluckily became the primary, and only really viable, IPC protocol on Linux), bwrap itself makes a great foundation, and I've never had a problem with it in particular. I use it a bunch with NixOS, and I often see Steam invoking it to support all of its runtimes. It's containers, but actually good.
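For the curious, a typical bubblewrap invocation looks something like the sketch below: read-only system directories, one writable directory, no network. This is illustrative only, not Anthropic's actual configuration; the flags are standard `bwrap` options, and `writable_dir` is whatever directory you want to leave writable.

```python
# Illustrative bubblewrap sandbox argv: read-only system dirs, one
# writable directory, no network. NOT the actual Claude configuration;
# the flags are standard bwrap options (see bwrap(1)).
import subprocess

def bwrap_argv(cmd, writable_dir):
    return [
        "bwrap",
        "--ro-bind", "/usr", "/usr",           # system files, read-only
        "--symlink", "usr/bin", "/bin",
        "--symlink", "usr/lib", "/lib",
        "--proc", "/proc",
        "--dev", "/dev",
        "--bind", writable_dir, writable_dir,  # the only writable path
        "--unshare-net",                       # no network inside
        "--die-with-parent",                   # reap sandbox with caller
        *cmd,
    ]

def run_sandboxed(cmd, writable_dir):
    return subprocess.run(bwrap_argv(cmd, writable_dir),
                          capture_output=True, text=True)
```

Because the kernel enforces the mount and network namespaces, everything the sandboxed command spawns (kubectl, terraform, npm, ...) inherits the same restrictions, which matches the quoted claim above.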
> This would only apply if they were distributing the GPL licensed code alongside their own code.
As far as I understand the FSF's interpretation of their license, that's not true. Even if you only dynamically link to GPL-licensed code, you create a combined work which has to be licensed, as a whole, under the GPL.
I don't believe that this extends to calling an external program via its CLI, but that's not what the code in question seems to be doing.
(This is not an endorsement, but merely my understanding on how the GPL is supposed to work.)
I fully agree. The original comment and the other replies to it are bewildering. There was nothing to gain here, yet people are throwing ad hominem attacks left and right.
> However, The Guardian reported that Lidden’s solicitor, John Sutton, had criticised the Border Force for how it had handled the incident, describing it as a ‘massive over-reaction’ because the quantities of material were so small they were safe to eat. He reportedly said that he had been contacted by scientists all around the world saying that the case was ‘ridiculous’.
At least for the "to the right" part, I think the monotonicity of time has that one covered unless someone invents a time machine to improve metrics in the past.