They weren't vulnerable to it in anything but an academic sense. They call that out up front: "There was no impact to the Cloudflare environment, no customer data was at risk, and no services were disrupted at any point."
This was probably written by their security team. Security teams are paranoid. They want everything patched everywhere all at once at severity level zero. Also, PR. Also, also, if through some lack of imagination, this was somehow involved in an exploit of their services, it would look really, really bad. So, CYA.
Yeah I think what I'm trying to clarify here is: are they doing a threat hunting exercise out of concern for multitenant exposures, or out of concern for internal privilege escalation?
Cross-tenant would be very surprising! But I don't know enough about their architecture.
It's weird, right? The underlying CNE primitive here, for CopyFail, is not novel. These happen all the time. Why the announcement? Is it just because CopyFail got so much attention?
I can upload arbitrary code to Cloudflare workers, which they run on their systems. It's sandboxed, but in the big bad Internet, if you were Cloudflare, how much would you really trust that sandbox?
That's a straw man and not what he asked. Literally, he asked: "why they would have been vulnerable to CopyFail?"
I've been a sysadmin/programmer since the mid-90s. Local root exploits are a dime a dozen. If your infrastructure relies upon the tenuous difference between root and non-root accounts, you've already lost. Cloudflare isn't an ISP handing out shell accounts on Unix machines.
So again, yes, of course you should patch your Linux machines. Defense in depth and all that. But the question remains: "why would Cloudflare have been vulnerable to CopyFail?" (in anything but an academic sense). Because I do not believe that they can possibly be relying on the difference between root and non-root accounts.
I mean, in some sense, Cloudflare simply accepts the security posture of "already lost", right? They run workloads for multiple users within the same process, separated by nothing more than V8 boundaries. Even Chrome, which had always claimed to run tabs in separate processes but didn't quite manage it due to various edge cases, finally stopped relying on V8 alone because it was so risky (now, AFAIK, they do fence origins within processes). Cloudflare's best lines of defense past "we patch often" are that they sort-of KYC at least most of their users, so they can log everything users run under their identity, and that they group users of similar trust levels (age of account, level of KYC, amount of usage, etc.) into the same processes. But, at the end of the day, they rely on something that I would certainly never consider reasonable to ship in production.
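To make the grouping idea concrete, here's a toy Python sketch of "score tenants by trust signals, then co-locate similar scores in one process". Every field, threshold, and name below is a hypothetical illustration of that approach, not Cloudflare's actual scheduler:

    # Toy sketch of trust-bucketing tenants into shared processes.
    # All fields and thresholds are made up for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Tenant:
        account_id: str
        account_age_days: int
        kyc_verified: bool
        monthly_requests: int

    def trust_score(t: Tenant) -> int:
        # Crude score: older, verified, heavier-usage accounts rank higher.
        return (2 * t.kyc_verified
                + (t.account_age_days > 365)
                + (t.monthly_requests > 1_000_000))

    @dataclass
    class ProcessGroup:
        # One OS process hosting many V8 isolates of similar trust.
        trust_level: int
        tenants: list[Tenant] = field(default_factory=list)

    def assign(tenants: list[Tenant]) -> dict[int, ProcessGroup]:
        groups: dict[int, ProcessGroup] = {}
        for t in tenants:
            level = trust_score(t)
            groups.setdefault(level, ProcessGroup(level)).tenants.append(t)
        return groups

Note the sketch only changes who you share a process with; the boundary between any two tenants in the same ProcessGroup is still just V8.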
I don't care about your credentials. It doesn't take a genius to realize that having known major security holes is not ideal.
It is pretty clear they aren't too concerned about this being an issue for their business, given the first paragraph in bold on the blog:
"There was no impact to the Cloudflare environment, no customer data was at risk, and no services were disrupted at any point. Read on to learn how our preparedness paid off."
As mentioned, you never want to hand options to a potential attacker by leaving known vulnerabilities present in your system. You cannot predict every single avenue an attacker could leverage.
Imagine a data center with barbed-wire fences, guard posts, security staff, and cameras covering every square meter of the facility. You wouldn't leave a door propped open just because, in theory, nobody should be able to walk right in. So why would you willingly leave one open, even if the odds of it mattering are 0.000001%?
People like you would be the first to turn around and say "Cloudflare are morons for not patching this!!! Me and my 1 billion years of experience and GOAT status would have prevented this" when some major Cloudflare hack occurs and it turns out that phishing 30 different people and chaining 9 different exploits (including CopyFail) let the attacker bring down Cloudflare.
"Consumer preferences have shifted away from preservative-laden canned food in favor of healthier alternatives," said Sarah Foss, global head of legal and restructuring at Debtwire, a financial consultancy.
Grocery inflation also caused consumers to seek out cheaper store brands. And President Donald Trump's 50% tariff on imported steel, which went into effect in June, will also push up the prices Del Monte and others must pay for cans.
Del Monte Foods, which is owned by Singapore's Del Monte Pacific, was also hit with a lawsuit last year by a group of lenders that objected to the company's debt restructuring plan. The case was settled in May with a loan that increased Del Monte's interest expenses by $4 million annually, according to a company statement.
During the coronavirus pandemic, when more people were eating at home, demand rose to record highs, Del Monte said in the filing, and the company committed to higher production levels. Once demand began to ease, Del Monte was left with too much inventory that it was forced to store, write off and “sell at substantial losses.”
The company also said it had carried a large amount of debt since it was acquired in 2014 by Del Monte Pacific Limited, which borrowed to finance the acquisition. Interest rates continued to increase, and the company’s annual cash interest expense has nearly doubled since 2020.
If you're up for a 12-minute video, it reiterates the points above (particularly underscoring the debt issue) and also points out that the company has changed hands many times in its history.
I'm glad to see PyInfra is still under active development. I don't currently use PyInfra, but I previously used it for a couple of years to manage a build farm of about 100 Mac Pros. Those machines had previously been partially managed by Chef, to ill effect.
I found PyInfra to be a great tool for the job at hand. Even though it didn't have many of the operations I needed, I found it easy to write new operations specific to macOS management tasks.
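For a flavor of what that looks like: a pyinfra operation is a plain Python function decorated with @operation() that yields the shell commands to run on each host. A sketch of a hypothetical macOS-flavored operation (the name and arguments are mine, not a pyinfra builtin):

    # Sketch of a custom pyinfra operation for a macOS task. The
    # decorator-plus-yield pattern is pyinfra's documented way to
    # define operations; everything else here is hypothetical.
    from pyinfra.api import operation

    @operation()
    def macos_defaults(domain, key, type_, value):
        # Yield the shell command pyinfra should run on the host.
        # A production version would first check the current value via
        # a fact and yield nothing if it's already correct (idempotency).
        yield f"defaults write {domain} {key} -{type_} {value}"

    # In a deploy file:
    # macos_defaults(
    #     name="Auto-hide the Dock",
    #     domain="com.apple.dock", key="autohide",
    #     type_="bool", value="true",
    # )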
I recently looked at it again to help build EC2 Mac AMIs in combination with Packer, but I ended up with pydoit this time instead.
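For contrast, pydoit tasks are just functions named task_* in a dodo.py that return a dict of actions. A minimal sketch of driving Packer from pydoit (the template filename is made up):

    # dodo.py -- minimal sketch of driving Packer from pydoit.
    # The template filename is hypothetical; adjust to your setup.
    def task_build_mac_ami():
        """Build an EC2 Mac AMI via Packer."""
        return {
            "actions": ["packer build macos.pkr.hcl"],
            "file_dep": ["macos.pkr.hcl"],  # re-run only when the template changes
            "verbosity": 2,                 # stream Packer's output
        }

Run it with `doit build_mac_ami`; doit skips the task when the file_dep hasn't changed since the last successful run.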
GHEC is a terrible fucking product too and for the life of me I don't understand why they didn't use subdomains to namespace customers from each other and from github.com.
It should be mycompany.github.com, because the way it is now, we have to rename all our damn repo orgs as we move from GHES to GHEC ("github.com/mycompany-org/repo"), which is no guarantee either, because anyone could create that org before us. All sorts of terrible UX falls out from not having namespaced the GHEC customers.
Sibling comments have made their point. I'll just add:
“But the book was on the shelf…”
“On the shelf? I eventually had to go down to the cellar to find it.”
“That’s the display department.”
“With a flashlight.”
“Ah, well, the lights had probably gone.”
“So had the stairs.”
“But look, you found the book, didn’t you?”
“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard’.”
Ah, so it could be used in the daytime. I read the whole article assuming it was only useful at night. (When else would you be flying a bomber and need high accuracy?)