This really doesn't say much though. What specific measures are in place to ensure user privacy and data protection?
Does personal information get sent to OpenAI or Claude as part of the functionality? Can users request deletion of their data, and if so, what is the process? Are there specific protocols in place to ensure security? (e.g., do you use encryption at rest?)
> What specific measures are in place to ensure user privacy and data protection?
Unless you intend to personally audit their code, I'd argue it couldn't possibly matter. Even businesses like Apple publish all kinds of documentation that belies the reality of their infrastructure. The iMessage Security Overview doesn't mention the NSA's retention period for encrypted communiqués; the push notification documentation doesn't tell you about the government middleman processing each alert.
You either trust people blindly, or you validate them personally. Getting a pinkie-promise about privacy from the CEO is worth absolutely nothing in real-world security terms.
> We want to ensure that security and privacy researchers can inspect Private Cloud Compute software, verify its functionality, and help identify issues — just like they can with Apple devices.
So... in Apple's own words, they get to cherry-pick who's allowed to read their code and audit their privacy claims, in the same way they strategically deny researchers the ability to audit certain iOS features.
Microsoft and OpenAI aren't even offering users services with any actual confidential-compute architecture. You have to trust them outright, but they don't even claim to do what Apple does, so you would need to hallucinate promises they never made and believe THAT, and also hope they aren't hacked or served with a warrant. It's a different matter with what Apple is doing.
A different matter without distinction. Apple is just as unaccountable as OpenAI and Microsoft; their only difference is their usual marketing strategy, which the industry never took seriously in the first place. If it came out that they were sending the NSA all of your "private" LLM requests (like what happened with push notifications[0]), Apple would just sheepishly admit it and continue advertising the same security-oriented shtick. They're shameless.
We don't know what those logs will tell us, and if the system was designed with privacy in mind, they shouldn't say much. Binary software images also don't tell us what the binary is doing, just as having all of the iOS files doesn't give you insight into how the OS was programmed. If the server source were fully open and each machine could attest that it was running an unmodified build of that source, then we might have some level of accountability. As-is, this is no better than the "trust me, bro" mindset Apple exercises in securing iOS and macOS.
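To be concrete about what "attest it was running an unmodified binary" would mean: a client checks a measurement (e.g., a SHA-256 of the server image) carried in a signed attestation document against a value published somewhere independently auditable, and refuses to send data on a mismatch. Here's a minimal sketch of just that comparison step in Python; the names are made up and a hard-coded dict stands in for a real transparency log:

    import hashlib

    # Hypothetical published measurements; a real deployment would pull these
    # from a public transparency log, not a hard-coded dict.
    TRUSTED_MEASUREMENTS = {
        "server-image-v1": hashlib.sha256(b"unmodified server image").hexdigest(),
    }

    def verify_attestation(image_name: str, attested_sha256: str) -> bool:
        # Accept the node only if the hash it attests to matches a published one.
        expected = TRUSTED_MEASUREMENTS.get(image_name)
        return expected is not None and attested_sha256 == expected

    # The attested hash would arrive inside a signed attestation document from
    # the server's secure hardware; here we just simulate both outcomes.
    good = hashlib.sha256(b"unmodified server image").hexdigest()
    bad = hashlib.sha256(b"tampered server image").hexdigest()
    print(verify_attestation("server-image-v1", good))  # True
    print(verify_attestation("server-image-v1", bad))   # False

None of which helps, of course, unless the published measurement corresponds to source you can actually read and rebuild, which is exactly the accountability gap described above.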