It's ironic that this update plays up how Briar "hides metadata" when the audit found that the application deanonymizes its users by exposing DNS lookups during RSS updates.
Indeed. On the plus side, I found the audit very readable, and a great source for some good Android security advice.
I do wonder what plans are in place for migrating user data and identities - of all electronic devices, the phone is the one most likely to be lost, stolen, or broken - and it's not great if loss of the device means loss of access to the network and one's built-up web of trust.
I see there's a mechanism to introduce contacts to each other - perhaps that could be implemented (technically) similarly to PGP key signing and the web of trust. That would still require a means to back up one's secret key in order to regain access, though.
This is the first public beta, so presumably anyone testing the software was well aware of the risks, and the vulnerabilities found would be fixed before the release.
As someone who does professional security audits, I would just like to say that there is no such thing as "passing" a security audit. In fact, most pen testing shops will carefully dance around actually making that claim in writing for a customer, because they know they are going to look bad when a bug is inevitably found in code they reviewed (and it's probably a dumb idea for liability reasons too).
There are certain certifications with falsifiable conditions that can be marked pass/fail. But, as I'm sure many folks here are aware, these are incomplete and often completely dubious. They don't purport to be "security audits".
What a real security audit tells you is that of the (probably 2-4) consultants that looked at a product for a few weeks (probably 2-6), these were the security bugs they found.
That alone contains little information, because the skill level and domain expertise varies greatly among consultants and companies. I can guarantee that if these results were withheld, and they gave the same codebase to another reputable outfit, the set of findings would be very different. There would likely be some overlap, particularly in the most obvious types of bugs, but bug hunting is way closer to art than science.
I know nothing about this project, and my intent is not to create doubt, but users of secure messaging apps should understand what an audit is and what it isn't.
Like other commenters, I was surprised to see 3 days of looking at crypto. It could be that the crypto is extremely simple and uses a few well understood APIs in a straightforward way, so this isn't a guaranteed red flag by any means, but it's a bit unusual.
And like any software, this is a 1 line patch away from being blown wide open. With every commit, an audit becomes increasingly meaningless. Just ask cperciva!
And perhaps I'm being cynical, but I always felt like the "conclusions" section of the audit report has an unspoken purpose of walking back from calling their baby ugly and keeping a decent rapport to ensure the possibility of future business. Not that I think what Cure53 wrote was not genuine, but there are natural incentives to be a little generous there. Again, I'm speaking from experience writing those sections as well.
I haven't looked at the audit yet (and agree with your comments), but I can say a bit about what Briar is doing with crypto.
The focus is on time-window-based hash derivation of keys for symmetric cryptography and of tags to recognize streams. It currently uses BLAKE2s and XSalsa20/Poly1305. Bouncy Castle is used for the core algorithm implementations where possible.
Connections are made via QR code and use ECDH with cofactor multiplication. There is also a simple BitTorrent-inspired synchronization layer that is new, and an encrypted storage layer for data storage (I'm not sure, but I think this may use pre-existing code).
So there is some amount of crypto to look at but it is fairly basic and not doing anything exotic. The layering and heavy use of symmetric crypto makes the crypto simpler than might be expected based on the features (and battery use heavier).
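To make the time-window idea concrete, here is a minimal sketch using BLAKE2s as a keyed hash. The labels, window length, and function names are my own illustrative guesses, not Briar's actual scheme:

```python
# Sketch of time-window-based key/tag derivation from a shared master
# secret. Everything here (labels, window size, structure) is assumed
# for illustration, not taken from Briar's protocol.
import hashlib
import time

PERIOD = 24 * 60 * 60  # assume one rotation window per day

def derive(master: bytes, label: bytes, window: int) -> bytes:
    # Keyed BLAKE2s; domain-separate each purpose with a label and
    # bind the output to the current time window.
    h = hashlib.blake2s(key=master)
    h.update(label + window.to_bytes(8, "big"))
    return h.digest()

def current_keys(master: bytes):
    window = int(time.time()) // PERIOD
    stream_key = derive(master, b"stream-key", window)
    tag = derive(master, b"stream-tag", window)  # recognizes incoming streams
    return stream_key, tag
```

Because the derivation is deterministic, both peers compute the same key and tag for a given window without any handshake, which is part of why the design leans so heavily on symmetric crypto.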
Version 1 of anything is likely to have issues and hopefully even the release will have a disclaimer to that effect, but there is always a tradeoff between needing some amount of support for further development and trying to make the best app possible before releasing. Briar has been in development for years and they are aware of that tradeoff and trying to both be cautious and not allow the project to die from lack of usable result.
As for the audit[1], how would HTML sanitization on the sender side protect the reader? On page 12 they suggest adding "HTML sanitization" in the onSendClick function. That is as lame as protecting against XSS with client-side JavaScript: an attacker will simply remove this code and recompile the app.
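The right place to sanitize is where untrusted input is rendered. A minimal sketch (Python for brevity; Briar itself is Java/Android):

```python
# Escape untrusted content on the receiving/rendering side, never on
# the sending side -- the sender's code is attacker-controlled.
import html

def render_message(untrusted_text: str) -> str:
    # html.escape neutralizes <, >, & (and quotes) before display.
    return html.escape(untrusted_text)
```

The same principle applies in any language: treat every incoming message as hostile, regardless of what the sending client claims to have done.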
This looks interesting, but I wonder how safe it is in the stated use case of journalists and activists in an authoritarian country. It can use Tor, which hides whom you are communicating with, but the fact that you are using Tor at all sticks out like a sore thumb.
The authorities probably just have to flip a switch to put you under closer surveillance if they see you use Tor. Or they'll just send someone to your registered address and see what's going on.
What I really think would be cool would be a protocol based on massive steganography and obfuscation. You would have kernels which tell it how to wrap data in an innocent-looking container (HTTPS traffic, SMTP, IRC, cat pictures and recipes over plain HTTP, DNS, ICMP pings, ...). Ideally, you would have dozens. And they would be shareable between nodes. You could define them in a DSL, and make them sandboxed and provable (that they round-trip, i.e. can decode what they encode, and terminate properly - that restricts what you can do in them, though). You could even autogenerate the kernels. The last two points would require a bit of R&D, of course.
The goal would be to be able to create new "protocols" faster than authorities can learn to detect them. Then wrap a regular encrypted protocol in this obfuscation layer.
Not really, the idea would be to hide data by using different amounts of spaces in text files, in the least significant bits of pixels in images, or in the access pattern to a certain service. The data looks like legitimate traffic. You could run the tool on absolutely all traffic, but that would be computationally intensive. And the data you get out is still encrypted, so ideally you can't tell if it is random (from extracting data where none is hidden) or real encrypted data.
Also, you would have dozens or hundreds of kernels, and you could generate them by analyzing innocent traffic, or hiring a bunch of students to write them quickly. My idea is that the kernels are not part of the source code per se, but rather distributed by the protocol. To contact somebody you need to speak a common kernel, but then they can send you new kernels automatically. You could come up with a measure of how well kernels survive censorship and use that to decide which to pass on.
It's a bit like auto updating malware, but for good :-). My only novel idea is to make a DSL or bytecode for the kernels, so that you can prove that they are benign and correct, and autogenerate them or use kernels from strangers. I don't know at all if this is feasible or not, but I have a couple of ideas how to make it work. No where near a POC yet so this is all still wishful thinking though.
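To make the kernel idea concrete, here is a toy whitespace kernel that hides payload bits as single vs. double spaces between words. Purely illustrative - a real kernel would need far better statistical cover:

```python
# Toy steganography "kernel": encode each payload bit as the width of
# the gap between consecutive cover words (double space = 1, single = 0).

def encode(cover_words, payload: bytes) -> str:
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(cover_words) - 1:
        raise ValueError("cover text too short for payload")
    out = []
    for i, word in enumerate(cover_words[:-1]):
        out.append(word)
        out.append("  " if i < len(bits) and bits[i] else " ")
    out.append(cover_words[-1])
    return "".join(out)

def decode(stego: str, n_bytes: int) -> bytes:
    bits = []
    i = 0
    while i < len(stego):
        if stego[i] == " ":
            if i + 1 < len(stego) and stego[i + 1] == " ":
                bits.append(1)
                i += 2
            else:
                bits.append(0)
                i += 1
        else:
            i += 1
    out = bytearray()
    for b in range(n_bytes):
        byte = 0
        for j in range(8):
            byte |= bits[b * 8 + j] << j
        out.append(byte)
    return bytes(out)
```

The round-trip property (decode(encode(x)) == x) is exactly the kind of thing a restricted DSL could prove mechanically for every kernel before distributing it.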
"the idea would be to hide data by using different amounts of spaces in text files, in the least significant bits of pixels in images, or in the access pattern to a certain service" is not appropriate for the claimed use case, i.e. activists in totalitarian regimes.
In such an environment, the traffic of suspected activists will be analyzed.
Assuming the kernels are open, it's possible to see in analysis of certain data that "amounts of spaces in text files, in the least significant bits of pixels in images, or in the access pattern to a certain service" have encoded information, even if the extracted information looks like random/encrypted data. At this point you don't have plausible deniability, and rubber-hose cryptanalysis can be used.
Switching to new kernels happens too late, since you don't know they've identified a kernel until they start arresting people - it's not as if they'll simply block it immediately.
That is, the described service is resistant to mass censorship and automated filtering, but these use cases actually need to resist attribution and retaliation, which are quite different problems.
I downloaded the beta and installed it, but I guess I need to physically find a friend who also installed it. I'm not in Silicon Valley, so I doubt that will happen soon....
When I see that one of the requirements for privacy-preserving software is to have been in the same physical location as the person I need to connect with, while running said software, I immediately stop reading and move on to other things.
I've done this for roughly five years.
Assuming I've never wanted to become a Debian developer, is there any important piece of privacy-preserving software I've missed out on? Is there likely to be any important privacy-preserving software I will miss out on in the next five years?
Bitcoin - used PKI to download it. Now Bitcoin is reproducibly buildable, so you can read the forum to see if any zealots notice different hashes (which they certainly would, unless you personally are being targeted while testing out the software). Make some small transactions to see if it works.
Bitmessage - downloaded it a few days after the initial release. Sent a message over Bitmessage to the Bitmessage author. Got one back from the author.
Tor - trust that the directory servers are doing their jobs.
Signal - haven't used it, but if I did I'd piggy-back on phone numbers to message people I already know.
git - used PKI to initially grab the code, trust my own dev machine as I've made commits, occasionally posted commit hashes over various secure/insecure mediums for various reasons (may have done this in person wrt a bug, can't remember).
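The hash check mentioned for Bitcoin above is trivial to do yourself; the expected digest would come from a reproducible-build announcement or a forum post (the one shown in the test below is just for a known test vector):

```python
# Verify a downloaded binary against a published digest.
import hashlib

def sha256_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream in chunks so large binaries don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()
```

Of course, this only shifts trust to wherever you got the expected digest, which is the whole point of the list above: every bootstrap method trusts something.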
Notice that in all these cases, trying out the software (at least in the U.S.) does not at all imply that you trust it. You could practice installing Tor 20 different times, on 20 different untrustworthy Windows machines and simply use it to search for cat pictures. Then, the 21st time, you could take all kinds of precautions and build a special box just for running Tor, armed with all the first-hand knowledge about how it works and what its trade-offs are.
I can also completely fuck up something in git and get so frustrated I just clone it again from the repo I don't have to trust because I just check the hashes and go on working.
Requiring physical proximity and a formal key exchange before I can even use the software simply cannot work, IMO. It a) requires special planning, coincidence, or proselytization just to try out a working version of the app, b) balloons the length of the engineering cycles and makes it hard to just start over, and c) relies on in-person meetings, implying a level of trust between you, your keys, and your smartphones that neither party should take for granted.
As far as the audit, I feel like 13 days is surprisingly short. I base this on my experience getting new jobs and familiarizing myself with new code bases. Maybe I'm slow.
13 days (let's call it two weeks, assuming full person/weeks of time) is not atypical for an assessment. If you have multiple people working simultaneously on a two week assessment, you can "comfortably" assess fairly complex applications.
What is surprising to me is that so little of that time was devoted to cryptography. For a secure messenger that time should be ratcheted up a bit (though the security infrastructure and general software implementation stuff is also very important).
It depends heavily on how much code there is and what language it's written in. Also, code auditors can often eliminate large swaths of the codebase with high confidence when it's clear that there is no attack surface, so it isn't always necessary to grok the whole codebase.
It is a term used by media and seen by general public when talking about many unethical or illegal things like child pornography and drug marketplaces, so perhaps not the best thing to associate with in your marketing material.
So should we stop saying "hackers" because it is also a term used by media and seen by the general public when talking about many unethical or illegal things?
It is a fair point, and I don't necessarily agree with the perception or misuse of either word.
But you have to admit there are connotations and that leads to bias when you start talking to laypeople who don't understand the other meanings or usages of 'darknet' or 'hacker'. There may be better ways to communicate that won't immediately stop the general public in their tracks when they hear a particular 'tainted' word.
Who knows? The whole problem is that "darknet" is a label that doesn't mean anything definite, a buzzword but not a term.
You may say that a tool/network/protocol is decentralized and/or secure and/or anonymous and/or censorship-resistant and/or routed through Tor and/or doesn't leak identity and/or tamper-resistant and/or has plausible deniability etc etc and all these labels would mean something - "darknet" does not.
A "darknet messenger" might tick any set of these boxes, but the meaning is completely different depending on which of these labels apply.
Wow, I had never heard of this project. I'm very much a fan of the Matrix protocol and the associated Riot app... but - besides differences in protocol - I really like the addition of blog posting and RSS feed reading. I mean, this could sort of take off and become a new basis for social interaction - beyond just "texting" securely with your contacts. I wish the makers of Briar and Matrix would somehow combine superpowers into one perfect, unified stack!
Side note: is there a way to export (basically archive offline) the content - to be clear, only one's own content, not someone else's? I'd hate to have some important messages lost if I were to lose my phone. I'm not saying I want a central server... simply some method to archive my own stuff for safekeeping - encrypted, of course.
I'm glad I took a peek. This is actually interesting to me.
...Briar is a secure messaging app for Android.
Unlike other popular apps, Briar does not require servers to work. It connects users directly using a peer-to-peer network. This makes it resistant to censorship and allows it to work even without internet access.
The app encrypts all data end-to-end and also hides metadata about who is communicating. This is the next step in the evolution of secure messaging. No communication ever enters the public internet. Everything is sent via the Tor anonymity network or local networks.
Maybe because tox is developed by people who don't know what they're doing.
Money quote from a tox dev:
"Tox provides some strong security guarantees. We haven't got to the point where we can enumerate them properly, given the general lack of understanding of the code and specification."
I once tried to read their "protocol documentation" and realised that it was effectively non-existent, and the only way to understand what was going on was to read the toxcore code, which was written by 4chan.
I'm not a crypto expert, but I also personally wouldn't put much stock in the security of their protocol or implementation.
The app hosts a Tor hidden service which other peers can connect to. No NAT punch through required as Tor will relay messages instead of a direct P2P connection.
I wonder how it compares to Threema. Signal's servers are located in the US, and the service requires you to provide a phone number, which is a deal breaker for me.
Purely naively I would guess that it means during whatever audit they ran, no signs of insecurity were observed. Maybe it would be better to say that it didn't "fail" the audit?
You can't really fail an audit though. The point of an audit is to make your application more secure. Using terms like pass/fail just reinforces a sense of fear where there shouldn't be any.
A pentest consists of an analysis period, typically about a week. Then any flaws in your app are communicated to you, along with steps to reproduce them. When you feel you've fixed the issues, a retest is scheduled and the pentesters verify that each flaw has been fixed.
A healthy application is one that's pentested on a regular basis. Ideally after every release, though only big companies can afford that.
>You can't really fail an audit though. The point of an audit is to make your application more secure. Using terms like pass/fail just reinforces a sense of fear where there shouldn't be any.
Yes, a security audit is an examination of an application and the processes around it. In this case passing means the application "is able to offer a good level of privacy and security. In other words, the Briar secure messenger can be recommended for use."
All it means to "pass" an audit is that at the conclusion of the audit there were no outstanding vulnerabilities that the "auditor" had found.
"Audits" and "passing" make some sense for network security, where you can run a checklist of best practices and known vulnerabilities. But you can't really "audit" source code in the same sense, any more than you can contract someone to spend 2 weeks finding all the sev:hi crashers or data loss bugs in your database.
It would be good if organizations could stop pretending that "passing" a software security assessment was meaningful.
What you really want to know is how many person/days Cure53 spent on Briar, who Cure53 had staffing the engagement, what the scope of the engagement was (what components were off limits), and whether they found anything that was subsequently fixed (it's an industry secret that one of the reasons you do an audit is so you don't have to publish the "real" findings).
From the report, it looks like they spent 13 calendar days testing (it's not clear how many person/days were spent), of which only 3 were dedicated to cryptography, and the audit was constrained to the Android clientside code.
For perspective: the "industry standard" software pentest of a reasonably complicated web application is 2 people, 2 weeks.
Does it? I know it has that meaning on the evening news, but even on network television it means something like "the secret internet where crazy stuff is".
I mostly hear it used to mean what it is supposed to mean, but that is rarely from non-technical people.
The problem is that as long as the name has that association at all, it's going to be a way to attack Briar.
Darknet now: an anonymous place where drug deals sometimes happen.
Darknet if it becomes bigger: that place where terrorists and criminals hide; the FBI and NSA say it's a risk to national security and we have to stop it at all costs! What do you mean it's just a secure messaging app? I'm not a terrorist, I don't need anything like that!
You could also see it the other way around: your application is not secure enough if terrorists, dissidents, drug dealers, whistleblowers, and child pornographers don't feel confident using it.
You could but I doubt the general public and policy makers are going to see it this way. They're still looking for a backdoor that only the "good guys" can access.
From an ethical point of view building this software must be difficult. On one hand you are building something that advances technology and could be used to help free people from an oppressive government, on the other hand you are also building something that could be used (and if it works most likely will be used) to aid acts that we all agree are morally wrong.
Yes but there's also a number of existing applications of encryption that are widely deemed acceptable, like securing bank transactions and medical data. Anonymity is much harder to defend, as it doesn't have such clearly worthy purposes.
Yeah, but no government is happy with having people operate entirely freely. It makes corruption difficult, and every government has corruption to some degree.
https://briarproject.org/building.html