> Generally speaking, the fact that JS source is pushed down on every access means that there's no way for you to actually review it.
The blog post says that it is necessary for code to be loaded as a local, signed browser plugin. That does not mean JS source is pushed down on every access.
> Even in this case, the server-driven JS still could send unencrypted data from the DOM up to the server, before crypto operations have happened. The trust boundaries are nonexistent here.
Why? A well-programmed JS app could simply tightly control all data received from the server and prevent insecure data parsing. It's entirely possible, so why isn't it considered?
> The blog post says that it is necessary for code to be loaded as a local, signed browser plugin. This does not make JS source pushed down on every access.
When this response was written, the blog post said nothing of the sort. In fact, that part of the text has not been changed at all.
> Why? A well-programmed JS app could simply tightly control all data received from the server and prevent insecure data parsing. It's totally possible, why isn't it considered?
I'm speaking of an adversarial environment where the server is seen as an untrusted peer (hence the need for client-side crypto above SSL). The reason it's not considered is simple: servers get owned, services change their minds on issues, and people screw up. While it is theoretically possible to accomplish secure client-side crypto given enough constraints, this does not map well to reality.
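To make the quoted exfiltration concern concrete, here is a minimal sketch of the failure mode. All names (`encrypt`, `leaked`, the wrapper) are hypothetical stand-ins, not code from any real app: the point is only that a compromised server needs to inject a few lines of JS, and every "encrypted" message leaks its plaintext before the crypto runs.

```javascript
// Where the attacker's copy of the plaintext ends up. In a real attack
// this would be a fetch/XHR to the attacker's server, not an array.
const leaked = [];

// Stand-in for the app's legitimate client-side crypto routine.
function encrypt(plaintext, key) {
  return `ciphertext(${plaintext})`; // placeholder for a real cipher
}

// Attacker-injected wrapper, shipped in the next page load from the
// compromised server: capture the plaintext, then call through so the
// app behaves exactly as before and nothing looks wrong to the user.
const realEncrypt = encrypt;
encrypt = function (plaintext, key) {
  leaked.push(plaintext); // exfiltrated before any crypto happens
  return realEncrypt(plaintext, key);
};
```

Because the server controls the code that runs before the crypto, "tightly controlling data received from the server" doesn't help: the wrapper *is* data received from the server.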
> When this response was written, the blog post said nothing of the sort. In fact, in that point the text has not been changed whatsoever.
Allow me to quote from the blog post: "In fact, I believe that it is necessary to deliver JavaScript cryptography-using webapps as signed browser extensions, as any other method of delivery is too vulnerable to man-in-the-middle attacks to be considered secure."
> While it is theoretically possible to accomplish secure client-side crypto given enough constraints, this does not map well to reality.
Well, in my case, we got our browser plugin audited by Veracode and things worked out.
As I said, that point (the one you quoted me on originally) does not make mention of signing, and the quote you gave there did not exist when I wrote the comment, as you well know. This is silliness of the highest order.
> Well, in my case, we got our browser plugin audited by Veracode and things worked out.
"Things work out" until they don't. As a fellow security professional, you should know as well as I do: no matter how many audits you do, you're still fucking something up, and someone will find it if it's valuable enough to do so. This is as true of your code as it is mine or anyone else's.