So just to be clear on what is being alleged, because the write-ups are omitting this detail: from what I can tell FB paid SC users to participate in “market research” and install the proxy.
The way most of the writeups make it sound is that it’s some sort of hack, but this doesn’t seem to be the case. (I’d love to get more detail on exactly what the participants were told they were getting paid for, but I’d be surprised if they did not know their actions were being monitored.)
The accusation that it’s wiretapping if one party in the communication channel is actively breaking the encryption (even with a tool provided by a third party) seems tenuous to me, but IANAL. If this is wiretapping, is it also wiretapping for me to use a local SSL proxy to decrypt and analyze traffic to a service’s API?
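For reference, this is essentially all a local TLS-terminating proxy (e.g. mitmproxy or Charles, whose CA certificate you install yourself) does after decryption: it holds plaintext HTTP. A toy stdlib-only sketch of the "analyze" step, with a hypothetical host in the example:

```python
# Toy sketch: once a local TLS-terminating proxy you chose to trust has
# decrypted a connection, "analyzing traffic to a service's API" is just
# parsing plaintext HTTP like this.

def parse_request(raw: bytes) -> dict:
    """Parse the request line and headers of a decrypted HTTP/1.1 request."""
    head = raw.split(b"\r\n\r\n", 1)[0].decode("latin-1")
    request_line, *header_lines = head.split("\r\n")
    method, path, _version = request_line.split(" ", 2)
    headers = dict(line.split(": ", 1) for line in header_lines if ": " in line)
    return {"method": method, "path": path, "host": headers.get("Host")}
```

Whether doing this to your own traffic differs legally from doing it to paid participants' traffic is exactly the question being debated here.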
> Note this is a new case, different from the one that TechCrunch also covered in which Facebook were paying teenagers to gather data on usage habits. That resulted in the Onavo app being pulled from the app stores and fines.
This has since been edited in OP, and I think the full quote supports my claim more:
> Note this is different to what TechCrunch had revealed in 2019 in which Facebook were paying teenagers to gather data on usage habits. That resulted in the Onavo app being pulled from the app stores and fines. With the new MITM information revealed: what is currently unclear is if all app users had their traffic "intercepted" or just a subset of users.
It was fully clear and fully remunerated. It was in no way a hack, and that's disingenuous wording so you can hate on FB. If you install a VPN then you are affirmatively giving control of your traffic to the VPN. FB isn't under any obligation to explain how networks work, in the same way we don't explain DNS or routing. Is your boss obligated to tell you ACH transactions are in the clear and anyone can watch settlement? No. You're not being hacked when your bank sends payments via ACH.
No, the writeup isn’t omitting anything, you’re mixing things up, which this article explicitly called out.
This article is about Onavo Protect[1], “Free VPN + Data Manager”, which was not paying anyone. There was a separate program where Facebook paid teenagers money to install their Facebook Research VPN through their enterprise distribution channel, bypassing the App Store and its rules, so that paid version was even more invasive.[2]
So no, this Onavo bullshit isn’t defensible at all.
This is a bit tangled. I think this is new information but it’s all about Onavo. From OP:
> Note this is different to what TechCrunch had revealed in 2019 in which Facebook were paying teenagers to gather data on usage habits. That resulted in the Onavo app being pulled from the app stores and fines. With the new MITM information revealed: what is currently unclear is if all app users had their traffic "intercepted" or just a subset of users.
So this seems to be new information about the Onavo Android app, but it’s not clear to me if the “install cert” button described was exactly the implementation of the previously reported research cert, or a new vector where people other than market research participants were MiTM’d. The analysis is just a bunch of circumstantial observations that _it is possible_ FB was doing more skeezy stuff than was previously known. But nothing here is incompatible with the previously reported stuff being all that happened, AFAICT.
The TechCrunch article clearly states that Onavo was the method they used to get the FB Research cert onto devices (presumably they distributed a different build of Onavo through their enterprise distribution channel). It quotes:
> “We now have the capability to measure detailed in-app activity” from “parsing snapchat [sic] analytics collected from incentivized participants in Onavo’s research program,” read another email.
This sounds to me like there was one Onavo research program, but who knows; we have multiple project codenames.
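To make the "parsing snapchat analytics" phrase concrete: once flows are decrypted, attributing each request to a competitor app is a trivial host-matching step. A hedged sketch — the hostnames and labels below are invented for illustration, not taken from the reporting:

```python
from urllib.parse import urlsplit

# Illustrative competitor-host table; the real targets and codenames
# are not public, so these names are made up.
COMPETITOR_HOSTS = {
    "app.snapchat.example": "snapchat",
    "api.competitor.example": "other-app",
}

def classify(url: str):
    """Return the competitor label for a decrypted request URL, if any."""
    host = (urlsplit(url).hostname or "").lower()
    for known, label in COMPETITOR_HOSTS.items():
        # Match the host itself or any subdomain of it.
        if host == known or host.endswith("." + known):
            return label
    return None
```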
> The analysis is just a bunch of circumstantial observations that _it is possible_ FB was doing more skeezy stuff than was previously known.
No, it was already well known way back in 2018, which is why that piece of shit app was withdrawn from the App Store in the first place. Facebook's enterprise account later got suspended in 2019 for distributing the paid piece of shit through enterprise MDM.
The claim in the OP is that they might have been MiTM'ing arbitrary users; I believe the previously reported claims were that they only MiTM'd paid research participants. (Please share some links if you have evidence to the contrary, I'd love to get to the bottom of this.)
That doesn't mean that the MITM traffic interception would be enabled for regular users that have downloaded the app from the store. As stated both in the article and in the comments here, both "free" VPN and "paid market research" VPN used the same codebase. Is there any evidence (other than "facebook bad") that the MITM part was enabled for anyone other than consenting/getting paid participants?
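On the "same codebase, different behavior" point: it's technically trivial for one shipped binary to gate interception behind a server-side flag or a deterministic percentage rollout, so shared code alone proves nothing either way. A purely illustrative sketch (the function name and salt are invented):

```python
import hashlib

def in_research_cohort(user_id: str, percent: float, salt: str = "research-flag") -> bool:
    """Deterministically hash a user id into a bucket in [0, 100)
    and gate a feature on whether it falls under the rollout percent."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % 10000 / 100  # 0.00 .. 99.99
    return bucket < percent
```

Whether all users or only a consenting subset were intercepted is exactly the kind of thing such a flag would decide, and it isn't visible from the client binary alone.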
Why do people work on such projects? I mean specifically the engineers. You're still paid the same engineer salary, except now you expose yourself to criminal prosecution. The corpo is at least getting some extra returns for the risk; you as an engineer are not. So dumb.
Maybe you're on H1B and if you get let go you have to go back to Sri Lanka, whose government collapsed 2 years ago and left the country in political disarray. Some people have better choices than others.
Like I wouldn't work on this project, but I have US citizenship. In college I slept over at some of my Indian friends' apartments and often they had like 8-12 guys sleeping in one bedroom, it was just a bunch of mattresses all laid together with no specific sleeping arrangement. Generally they made a giant pot of stew/daal/whatever once a week and ate the same thing for every meal all week, some even long after graduating with PhD's and getting low-tier visa-mill jobs. This was not a T10 school, our international students rarely came from wealthy families. One of my Saudi classmates came from a poor family in a remote village near the Iraq border and brushed his teeth with a twig from the Salvadora persica tree.
I couldn't really blame them if they didn't have another good option readily available.
I can't resist annotating the Sri Lanka comment, it was responsible for some of the most absurd headlines I've ever read; completely beyond parody. Typical example:
- Fertiliser ban decimates Sri Lankan crops as government popularity ebbs
If you have other good options, that's just greed. Sure, it's painful to turn down $200k in RSUs, but if you can jump ship and still get paid a respectable $160k, I don't have much sympathy for your choice to fuck over millions of people just so you can buy a house two years sooner.
I don't either. It seems to me that a lot of CS people unfortunately don't share our values. A sad mix of computers seeming apolitical ('I just wanna hack') and, this being a well-paid industry, the same money maximizers who in previous years would have been business majors.
>> Maybe you're on H1B and if you get let go you have to go back to Sri Lanka...
I mean that's there too, but in this case, the guy who ran this spyware op was a former IDF turned chief of Facebook in Israel, later promoted to CISO for all of Meta.
> I blame the engineers when its obvious they had other good options
Their manager was promoted to the C-suite for running a covert worldwide spyware op (one that also informed the company's M&A strategy). I'd reserve most of my blame for the corporate culture that incentivized and rewarded such orgs and their management.
Shame on them if they continue to be associated with an institution found guilty by international courts of occupation, torture, sexual assault, dispossession, and crimes against humanity.
Your scenario describes real people, but Facebook was not built by vulnerable visa holders.
Facebook hired and retained engineers over its entire company history by offering enormous amounts of stock. They successfully demonstrated there are a lot of engineers willing to build unethical products when offered 2-3x their previous salary.
holy fuck can we please stop letting circumstances be the excuse we continuously fall back on when enabling and reinforcing behavior with long-term impact and consequences.
imagine all of the times in history where this type of enabling reached an extreme, and now ask yourself where you draw the line.
are you really asking me to accept the growing consequences of corporate overreach in the name of data, and all the sketchy-ass, unethical, and invasive work all these foreign engineers are getting paid ridiculous salaries to propagate, and to feel good about being held hostage because said engineers.. don't have a home?
so we are supposed to enable them to wreck mine (ours)?
> so we are supposed to enable them to wreck mine (ours)?
No, we’re supposed to attack it from a different direction. Whether these people are H1B or outsourced overseas, U.S. corporations will always be able to find people in desperate enough situations (civil war-torn country with a literal famine going on). We can absolutely blame and shame the engineers who have other options for sustenance and medicine for their families, but if you want to solve this problem, it can only be solved through the legislative and executive branches.
I was talking about this with friends the other night. If you've been in the industry long enough, you've probably been party to creating something horrible. It takes a while for the reality of horribleness to crack the glamour of creation and monetary reward, but once it does, everyone I personally know has quit and lived with the regret.
I know people who have worked for adtech, gambling and HFT industries who now try to convince younger devs to avoid them. I personally worked briefly for a private prison corp, and I feel dirty and remorseful that I had anything to do with that industry.
Due to an incarcerated family member, I had to deal with privately run prison telecom software, which was as awful and exploitative as you would expect, so I can see where someone might feel guilty for working in this area. Evil business model.
But one of the worst things about the software was all the bugs. Silent failures so we couldn't tell what was happening, if it was a software problem or if our loved one was being prevented from communicating with us. The messaging and video call system failed us at some crucial moments and created a lot of emotional stress.
In fact I think this is part of the awful business model -- cut costs even if it hurts people.
Bad software can really make the lives of incarcerated people much worse. So if you were able to do a decent job on that software, whether it was prison telecom or internal tools for a prison contractor, you may have still had a more positive impact than you think, despite the broader business model being totally evil.
I was involved in the internal reporting. The clincher was when I saw the P&L for the internal sales of "luxury" items. Literally selling to a captive customer base.
It's weird that such a data point was the final straw, but eventually these small details build up and the whole edifice comes crashing down.
It's especially tragic as the company seemed to be full of talented, intelligent and nice people. Such seems to be the typical makeup of faceless evil.
The software on those "Temu" quality Android tablets they sell in prison is the worst I've ever encountered. And I've never seen an update of any note in the years they have been running them. If there is even a dev anywhere that could fix a bug and deploy it...
Anyone know the best way to pull an image off a locked-down Android tablet? I have a prison tablet here and I want to see what is inside the APKs.
Sadly I can't get into the settings. It boots into a custom menu, which might just be an app running over the default shell. I also can't get it into recovery mode. I need to do more digging.
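No pointers on imaging a device with a locked bootloader, but for the "see what is inside the APKs" part: if you can ever get a file off the device (adb pull with USB debugging enabled, or a copy obtained elsewhere), an APK is just a zip archive. A stdlib-only sketch:

```python
import zipfile

def list_apk(apk) -> list:
    """List entries in an APK (a zip archive). classes*.dex holds the
    Dalvik bytecode; AndroidManifest.xml is binary XML (decode it with
    tools like apktool or androguard, not a text editor)."""
    with zipfile.ZipFile(apk) as z:
        return z.namelist()
```

`apk` can be a filesystem path or any file-like object.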
> I know people who have worked for adtech, gambling and HFT industries who now try to convince younger devs to avoid them. I personally worked briefly for a private prison corp, and I feel dirty and remorseful that I had anything to do with that industry.
Sounds like getting to feel good after grabbing the bag. Particularly the first three, considering how much they pay (even more so if the gambling was crypto-related).
> everyone I personally know has quit and lived with the regret.
Quit for a significantly lower wage job? Or quit in 2021 when they could trivially get another job likely with a raise?
I sound aggressive but these are serious, not rhetorical, questions. I don't know your friends, maybe they're the real deal, in which case massive kudos to them, I'm very happy to see others doing the same and I wish more were like us. But "living with the regret" is empty words if meaningful sacrifices haven't been made to atone for those sins.
FWIW, I left a job that paid more than twice what I'm able to get anywhere else without moving across the globe, for ethics reasons. And the industry wasn't as bad as the ones you've named, besides HFT, which is IMO pretty average when it comes to negative societal externalities for a tech company.
Trying to bring an open mind, I could see a number of plausible scenarios where an engineer could do this, with various degrees of legitimacy.
It's certainly a complicated subject, but I think companies in general, especially big ones, are really good at getting people to work on things they might not otherwise be comfortable with. This thread has been talking about the extremes, like immigration status, but there are all kinds of subtle pressures as well. Some people might not believe they have the political capital to outright refuse a project (especially a pet project of the CEO) versus accepting it and trying to nudge it onto more solid footing. And I suspect many engineers are terrified of being labelled not a team player, which aids the creation of groupthink but makes it very difficult to foster a healthy culture of discussion that would bring forward the serious concerns about this work. And there is almost always some room of uncertainty as the last convincer... is it unethical to work on the project if the consumer is fully informed and consents to the invasion of privacy?
If there is an extreme where it's justifiable for any reasonable engineer to accept the project, then it gets really muddy where exactly the line is, and when it should be drawn.
I also suspect many of us envision ourselves having much more fortitude than we really do, imagining the heroic efforts we'd put into changing a company's mind about a bad idea... where the more likely outcome for most of us is to fall silently into the background.
When I was in the music biz I pushed back hard against DRM. I lost, but being on the inside I could swing the needle to the least restrictive DRM as possible (e.g. it let you burn a CD for instance). Most of the other devs I worked with would have simply taken the ultra-restrictive spec, coded it and gone home happy each night. (I did code some shitty ActiveX object for Sony to put on one of their unrippable CDs though... it let you download a DRM-hobbled version of the song)
I can count on one hand though the number of devs I've worked with that saw coding as anything more than a 9-5 grind and would have spoken up if asked to do something shady.
Or a person with a sick kid, or who is about to be evicted, or who made some bad financial decisions or for some other reason is about to run out of food money. In those situations it's very easy to rationalize that the good outweighs the bad.
I've only been in a similar situation once. I could barely sleep at night for a week before I finally told them that I couldn't do it. In my situation I would have taken a financial hit if they decided to let me go, but my wife works and I have savings and there was no immediate threat, and it still was a difficult decision.
Why would you diminish all those silent heroes who do decline the morally bankrupt job despite not making rent, or having to carry bad financial decisions?
The truth is that in the US we do have some very expensive social safety nets, and it always comes back to the morals of the individual. You can rationalize just about anything against all kinds of situations, but in the end we are talking about someone morally corrupt, or morally steadfast.
Don't justify the unjustifiable.
Instead, judge character in the hard times and use that opportunity to elevate the heroes who do the right thing in the face of adversity.
I'm not diminishing anything. I'm just not willing to condemn people without taking into account extenuating circumstances.
People regularly justify things that are not justified. When there's a lot of pressure, rationalizing is very easy. It's not even easy to realize that something is being rationalized.
I'm not justifying the unjustifiable. I'm saying that a person doesn't have to be morally "bankrupt" to do something bad. Condemning people as morally bankrupt without taking into account extenuating circumstances is certainly not justified.
You really think the engineers working on this will be personally liable? That would honestly surprise me; the worst I can imagine is punishment for the company as an entity.
> from what I can tell FB paid SC users to participate in “market research” and install the proxy.
The app was available on both the Google Play and Apple App stores for anyone to download.
> The way most of the writeups make it sound is that it’s some sort of hack, but this doesn’t seem to be the case.
It could be that you are confused with a previous case. From the blog post:
> The wiretapping claim is new and perhaps not to be confused with the prior controversy and litigation: In 2023, two subsidiaries of Facebook was ordered to pay a total of $20M by the Australian Federal Court for "engaging in conduct liable to mislead in breach of the Australian Consumer Law", according to the ACCC ... Facebook had shutdown Onavo in 2019 after an investigation revealed they had been paying teenagers to use the app to track them. Also that year, Apple went as far as to revoke Facebook's developer program certificates, sending a clear message.
> If this is wiretapping, is it also wiretapping for me to use a local SSL proxy to decrypt and analyze traffic to a service’s API
If by "local" on your own network/machine with your own traffic then obviously no.
The email snippets are impressive on multiple levels, mainly for how fucking stupid/arrogant people at FB must be. Openly talking about MITM, and then getting multiple other companies to include this kit in their products as well, is just beyond stupid to put in writing. "Hey Zuck, I have an idea on your proposal. We should get together to discuss in person" would be suspect, but at least it's not incriminating. It's like these people have never seen a movie, or read a news article about other companies getting caught.
A piece of advice I've taken to heart is whenever I'm sending something in writing, to think about how I would feel if I needed to repeat the same things in court or if I found those messages in the news. Not that I've ever said anything near that egregious but it still helps.
Whenever I'm discussing something in person I think about how I would feel if it turned out my employer was breaking the law and me not putting it in writing stopped the injured parties from obtaining just compensation.
When putting something down in writing, you should also remember Cardinal Richelieu's quote: "If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him."
Trouble at many orgs regarding the Israel vs Palestine conflict. No matter on which side of that clusterfuck you are - unless you are in a lobbyist group for either side, someone at your org will be offended and raise a stink.
No, just that there's no reason that your instant messages can't be used against you, and in some organisations, regulations mean that the chat history has to be logged and kept.
This is excellent advice. Another thing I will add is if something is not ethical, misleading, or dishonest, just do not do it. The world will be a better place if people behave ethically. Also, I strongly suspect that long term success in business requires ethical conduct.
So you'd rather they were smarter and able to hide the traces of their malicious behaviour?
The real problem here is the complete absence of any kind of ethics. It sounds like the kind of place where if you consider ethics to be a blocker you'd be laughed out of the room, or fired. Corporate culture is to chase profit above anything else. It's especially bad in software, though, as so many people don't even seem to think about the ethical implications of their actions ever.
Billionaire bosses are all surrounded by opportunists and flatterers. Over time, like the Great Pacific Garbage Patch, this group grows to unmanageable dimensions, because anyone acting moderately sane will be treated as an existential threat to their lives of fantasy, domination, manipulation, luxury, and leisure, and pushed out.
> acting moderately sane will be treated as an existential threat [...] and pushed out.
Or converted, by making them take actions so that "if we go down you're going down with us."
Organized crime works that way too, come to think of it. They may call it "loyalty", but it really means "give us a way to coerce you into compliance."
Thankfully our fearless American regulators would never shy away from hanging these scoundrels out to dr- hey wait, where are the lawyers going off to?
To paraphrase Clarke's three laws, a sufficiently advanced quantity of yes-men and tech industry bro "move fast and break things" types is indistinguishable from a hostile malware actor.
Their contribution to the genocide in Myanmar has said everything about Meta you'll ever need to know. It's a tragedy that working for Meta is generally seen as neutral whereas working at any defense-related companies is often met with scorn, despite the overwhelmingly greater negative impact that working at the former has.
And this doesn't even touch upon Instagram.
I guess they pay too much and employ too much of our industry, greatly reducing criticism, because we all have a friend who has worked at Meta, or we may even have applied ourselves at some point. Whereas we don't know anyone who has been at, e.g., Anduril and the like.
I have several extremely talented friends at Meta, and the one constant is that they left any attachment to the output product when they entered the workplace, whereas at other top tech companies they did take pride in their employer's output. Meta is “success at all costs” and heavily metrics-driven.
I think that's what contributes to things like Myanmar and hate-speech proliferation in other countries. When you don't care about how your product is used, and can focus on just the technical aspect, you lose any sense of responsibility.
Conversely, we’ve hired many ex meta people, and they’ve always almost all unanimously said how much they NOW like having pride in the products they create, after jumping ship.
Imho it’s an issue of top down culture from Zuckerberg, and previously Thiel.
> Conversely, we’ve hired many ex meta people, and they’ve always almost all unanimously said how much they NOW like having pride in the products they create, after jumping ship.
Just curious, did the ethics of their prior projects ever come up during the interview? I think I would have a problem hiring someone who worked on a product despite having ethical misgivings about how the product affected end users. Unless they could explain the extenuating circumstances that forced them to work on that product (sick family to care for, work visa being held hostage, and so on). If their response was simply, "I made metric X go up and got paid Y to do it," I don't think I could hire them in good conscience.
It’s a very easy question to dodge so doesn’t add much value.
Most of them just bury their heads in the sand and say that the negative effects weren't made known to them, and that they started looking for new work after they found out. Short of interrogating them, you can't really suss out if it's true.
Instead we ask what they’d do in hypothetical scenarios around our own products.
I'll take that bet. Of course, you have no idea where I work and I do, so you're not a good gambler. The stench of social companies is noticeable by people that do not have their heads in the sand. Companies that still believe that ex-FAANG are automatically gawds deserve what they get.
How much does your company pay IC8 or equivalent per year? $2M liquid? $3M? Hard for anyone to feel moral qualms when they’re earning generational wealth.
Plenty of people in our democracy choose to work in noble professions that don’t pay well but are the bedrock for society.
Excusing people’s ethics because of money makes it worse not better IMO. Especially in the context of tech where there are plenty of well paying jobs that don’t make money through increased misery.
Not to downplay it but at least this requires users to download the Onavo app, which isn’t so common.
The one that I wonder about a lot is this: there are two (non-deprecated) types of webview you can use in iOS: WKWebview and SFSafariViewController. They’re intended for very different uses.
When you tap on a link in the Facebook app, they should use SFSafariViewController. It's private (app code has no visibility into it), it shares cookies with Safari, and it's literally intended for “load some external web content within the context of this app”.
Instead, FB still uses WKWebView. With that you can inject arbitrary JS into any page you want. Track navigations, resources loaded, the works. Given the revelations we’ve seen in this article and many others I shudder to imagine what FB is doing with those capabilities. They’re probably tracking user behavior on external sites down to every tap on every pixel. It seems insane to think they might be tracking every username and password entered in their in-app webviews but they have the technical capability to. And do we really trust that they wouldn’t?
I wasn’t aware that WKWebView granted the app such power. Is there a way for me as a user to figure out if WKWebView or SFSafariViewController is being used if I have a web page open? Although I don’t use FB, I do use the web view of other apps and don’t want them to be able to do this either.
SFSafariViewController is less customizable visually, so the standard "sheet coming up within the app" that always looks the same regardless of the app (in most apps, and of course not Meta's) is that one.
Having said that, since WKWebView is just a view that can be customized visually, nothing stops someone from creating a WKWebView-wrapping view controller that looks exactly like the "safe" Safari one anyway.
Yes, there are ways to distinguish between them as a user, for example you can check to see if your browser plugins are available. I also went through some of the most popular iOS apps and created a list of which app uses the correct SFSafariViewController vs the potentially malicious WKWebView.
> Not to downplay it but at least this requires users to download the Onavo app, which isn’t so common.
10 million installs on Android, according to AndroidRank[1]. What we don't know (yet) is what % of those installs had the FB competitor traffic MITM'd.
I don't have Instagram, but I have Facebook; when people send me links to Instagram videos on Messenger, the view doesn't let me watch them unless I log in (in fact, create an account). I can only watch them by loading the link externally in Safari.
I don't know why, but Facebook is the one tech company that I just can't have a good opinion about. I like and dislike Google, Microsoft, Apple, Nvidia, AMD, Intel, and the rest for different things, but I just hate Facebook. I closed my Facebook account about 10-11 years back and put a filter in place to keep Facebook out of my search results. And I have to say it works: I rarely see anything about Facebook in my Google news feeds. I still use WhatsApp, though, as it is the biggest communication app in Asia outside China.
- they’ve had a long history of trying to undermine privacy to extend profits: from stuff like in the article, to tracking pixels, alleged ghost accounts, and fighting anything that hampers tracking. Of the companies you listed, only Google has any crossover, and it doesn’t come anywhere close.
- they’re irresponsible with the effects of their algorithm to amplify hate speech. None of your other companies have anything like that.
- they are dishonest in their marketing. Almost all their Quest ads and feature reveals use concept visualization to deceive users about what is possible, for example. Mark often speaks in doublespeak when addressing issues. Doublespeak isn’t unique to them, but they definitely take dishonest advertising to the limit versus the other companies on your list.
I know Meta are having a popularity renaissance with their open-weight (not open-source) models in this AI cycle, as is Mark with his recent PR blitz to reinvent his image.
However, I think they’re culturally the only one of the companies you listed that lacks a moral core to its work. I think culture is top-down, and both Zuckerberg and Thiel have instilled a culture of “success at all costs” in the way Meta operates.
The other companies on your list are definitely capitalist too, but have some sense of responsibility with their output.
With what is happening outside the US, Facebook and even YouTube (i.e. Google) are deplatforming people fighting and raising awareness against totalitarian regimes, while Twitter is not. I get that you hate Musk, but I don't agree: Twitter is not in the same realm as Facebook.
> Disagreeing publicly does nothing if I'm the one empowering my opposition in the first place.
Of course it does. It does spread the word. That’s important.
You can be an activist and have a real life. You can despise Meta but have acquaintances on WhatsApp you can’t or don’t want to move. You can be an anticapitalist and still agree to join a group of friends inviting you to McDonald’s. You can be an environmentalist and have a car because you live somewhere without car-free infrastructure.
You have the right to be critical of your own life while still acknowledging you can’t control everything.
Having WhatsApp may be wrong for you but it may be less wrong than leaving your friends groups.
> If friends can't install another app to talk to you, maybe they don't really wanna keep in touch.
Interestingly, your assumption works the other way too: if you can't install another app (one you don't want), maybe you don't really wanna keep in touch.
I prefer to see it from the other side: I value my friend more than which app they want to use. I do encourage my friends to use Signal, but I can't force them.
> When I moved to Telegram I told family and friends where I'd be and they all installed Telegram and actually liked it so they stayed.
Same. And now I realize that Telegram is just a little less shitty than WhatsApp, and that it doesn't even do E2E by default, and I have no more willpower to migrate everyone to Signal.
> It's been 3 years now and I haven't touched WhatsApp.
Looks like you weren't as pure as me 3 years ago, since I never touched WhatsApp until 3 months ago. And you know why? I've been integrated into a new group of friends which was on WhatsApp. The thing is, making your current friends migrate to another app is difficult. But you can't know where your future friends will be, you won't be able to make them use your app, and making new friendships is hard enough in life that I'm not going to filter them by the app they use.
Oh, and I have the magic power to be able to use all 3 apps on my phone! And with Beeper, it's now a superpower.
Anyway, FWIW, I find this messaging apps situation totally pathetic. If it were up to me, we would all be using an open and decentralized messaging protocol where everybody owns their own data. But since I can't convince all my friends to reach me via e-mail or Matrix, I will suffer the shitty apps and do my best to use Signal as much as possible (though Signal is shitty in my eyes too, since it's centralized).
"There is a current class action lawsuit against Meta in which court documents include claims that the company had breached the Wiretap Act."
This is not a wiretapping case. The claims are all for violations of the Sherman Act. Plaintiffs' attorneys _incidentally_ found evidence during discovery that Facebook may have breached the Wiretap Act. There are no wiretapping claims. It is an antitrust case.
Doesn't this violate the DMCA too? This is circumventing an encrypted system.
Does the DMCA not have enough teeth for something on this scale? Maybe an issue of standing or provable-damages? Did the plaintiffs forget about it? Curious and confused.
I think a relative of mine once almost signed up for another market research thing that would have done essentially this: redirecting all their phone's internet traffic through a VPN and proxy controlled by the market research company, including installing their cert. They would have received some small compensation for it, and of course consented to having it installed. I don't recall the company being misleading about anything, exactly. That being said, while I'm generally not in favor of overly paternalistic policies, I wonder how meaningful the consent of someone with relatively little technical knowledge really is for something like this. The company wasn't misleading about things, but it also didn't fully spell them out in a way that would really drive home, for someone unaware, what was going on.
Just because some market research companies do informed disclosure says nothing at all about how Onavo did it (and Onavo didn't advertise itself to users as a "market research company", just as some neat free app that would categorize your internet data usage).
Cars now come with Google services / Android baked into the damn infotainment system, with no possible way to pull it out. What could possibly go wrong with an advertising company seeing everywhere you go, and everyone who rides in your car?
This is true, but so far there are ways to disable much of this.
For example, on a Ford you can literally pull the fuse for the GSM modem. On a GM you can pull the antenna from OnStar and put a resistor there in its place, thus rendering it unable to communicate with home base.
This doesn't solve everything, but it at least stops the immediate phone home.
Buying used/old doesn't work here, at least not for long. Lots of salt on the roads, cars older than 8 to 10 years are rust covered, and it doesn't help components either.
Not to mention, if everyone bought used, there'd be no used cars to buy.
Yeah... They are all connected to the "three letter agencies", in Google's case it was very early, but I believe nobody can stay popular and not have all of these agencies infiltrate then take control of them.
Apple, Google, Facebook, Twitter, Alexa: they are a gold mine for agencies, but so are news sites, movie studios, and YouTubers. This is why they've been after TikTok for so long; they know how useful that app/network is.
Unfortunately this is unsurprising; with bad actors like Meta there are likely many potential "dark patterns" put in place.
I can imagine e.g. security risks involving sensor data exfiltration where accelerometers and gyroscopes etc are monitored to infer audio information. By covertly relaying and processing the collected data externally it would be possible to reconstruct sensitive information without direct access to the device's microphone.
It's not unlikely that they pull off something like that.
Meta and other pernicious companies and government bodies are probably employing many more, even worse and much simpler eavesdropping techniques in the wild.
Snapchat does certificate pinning for its main API domain. I'm not exactly sure why the analytics domains are different and why they don't have certificate pinning. (I thought analytics went through the same API domain, but that must be wrong then.)
I used to work for a startup that collected data by using MITM attack with a VPN server, and other means. The users got paid a small sum of money to participate.
If you or I did this, we would already be in jail for phishing plus whatever add-on charges the Feds could file.
Meta has Washington in their pocket so this will never leave civil court. The penalty will be less than the money made, meaning somebody gets a bonus for being creative.
That's one possible read. The other possible read is that Apple and Microsoft both agreed to let the CCP decrypt all user data, which makes them less trustworthy in my book. You really gonna believe they couldn't have a similar arrangement with the US TLAs after that?
> Looks like this was the real reason Facebook could not comply with China's data sovereignty laws and had to abandon the market.
How so?
> The fact Apple and Microsoft services both work in China shows they are a little more trustworthy.
Absolutely not. Companies apply different policies in different countries they operate in. This tells you nothing more than those companies came to a mutually beneficial agreement with the Chinese Communist Party.
Indeed. Even McDonalds has different menus, local workers, local employee standards, and even how their business signage looks, depending upon country/location.
seriously, how does this not violate wiretapping laws? does agreeing to a ToS mean you also agree to being spied on in a way that protects them? you are deliberately circumventing encryption for malicious purposes. if people got in trouble over DeCSS for circumventing encryption, how is this okay?
pithy "because they have all the monies" replies not wanted.
> This wasn’t simply Facebook hijacking random people’s traffic because they accepted the ToS or used the Facebook app
Do you have further insights or references on what the "trigger condition" was? This is a new case, separate from the previous litigation related to the VPN app.
Big tech and telecommunications companies are effectively miniature arms of the U.S. government at this point.
As seen with the "Protect America Act" of 2007[0], the government will retroactively cover its own ass and your company's ass if it's deemed important enough to the intelligence apparatus. There isn't a chance in hell that Meta would face criminal charges for wiretapping.
Which is clearly a false flag operation, so that whenever someone serious tries to tout this, they'll be rebuffed as if it were an article in The Onion. Those clever bastards!
I'm assuming they were doing it for the federal government at this point. There's no reason for them to spy on another app; they can hire almost any developer they want.
What is described in the article is not some elaborate scheme or novel work of software engineering. Rather, it's exactly what 99% of corporate networks do (proxy server with SSL inspection using a custom root certificate) "to combat cyber threats".
As coincidence would have it, this is the perfect alibi provided by a snake oil "cybersecurity" app by one of the world's largest companies.
Every tech company that has promulgated the lie that a VPN operated by a third party provides added security is indirectly responsible for this. Funneling all your traffic through a shady intermediary does no such thing, and in fact often does the opposite.
I do know that this is done. In fact, I worked at a pretty major smartphone manufacturer and never logged in to any personal account on work devices. It was pretty obvious, even just looking at the security info in Chrome/Firefox, that the certificate used was a root signed by the company itself. I used to shout at the top of my lungs to my friends that hey, _this_ is how your information is vulnerable to the corporate overlords, but I guess they weren't as paranoid as I.
The first thing I checked when moving to my next employer was if they were intercepting SSL traffic like this. (They weren't - they used Falcon)
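For anyone who wants to do the same check themselves, here is a rough Python sketch of issuer-based interception detection. The issuer names and the `fetch_peercert` helper are made up for illustration; the dict shape matches what `ssl.SSLSocket.getpeercert()` returns, so a corporate proxy's self-signed root shows up as an unfamiliar issuer.

```python
import socket
import ssl

# Hypothetical issuer CNs that would indicate an interception proxy rather
# than a public CA -- these are not real issuer strings.
SUSPECT_ISSUERS = {"ExampleCorp Internal CA", "Example Research CA"}

def issuer_common_name(peercert: dict) -> str:
    """Extract the issuer commonName from the dict returned by
    ssl.SSLSocket.getpeercert(): a tuple of RDNs, each a tuple of pairs."""
    for rdn in peercert.get("issuer", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return ""

def looks_intercepted(peercert: dict) -> bool:
    return issuer_common_name(peercert) in SUSPECT_ISSUERS

def fetch_peercert(host: str, port: int = 443) -> dict:
    """How you'd grab the presented cert for a live host (needs network)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

On a network doing SSL inspection, `fetch_peercert` would return a cert chained to the corporate root instead of a public CA, which is exactly what the browser's certificate viewer reveals.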
> does agreeing to ToS mean you also agree to being spied on in a way that protects them?
This relates to a much bigger problem of courts upholding contracts even when nobody actually believes they represent an informed and voluntary agreement.
We aren't quite at the Looney-Tunes step of enforcing extra clauses that were hidden in invisibly small print, but things are drifting in that direction.
It isn't because they have the money, it's because they have given the government access to whatever data they want. When it comes to three letter agencies it really isn't about money, it's about power and in today's digital world data is power.
To answer your specific question, this isn't okay. Both the government and large corporations have been given way too much power and we really have no hope of making any meaningful change until the people reclaim this power and put those in charge out on their ass.
My work puts a big banner on the login screen that says up front that they can and will record and monitor everything on this machine. And IMO that's fine, because it's their machine. If they wanted to do that to my machine it would be a problem.
No place I’ve worked has ever told their employees that they do this, but most of them do. Some employees I’ve spoken to are quite surprised that their “encrypted” connections are being monitored.
> there's nothing wrong with corporations tracking use of their hardware.
As written, that means they can secretly enable the camera and microphone to surveil my house, supposedly to check the usage (or non-usage) of the hardware.
Surely that's very "wrong", if not also illegal in most places. Not everything about or near the hardware is fair game.
I wrote one sentence about how "there are ways for companies to go too far", which I think is pretty dang uncontroversial and trivially-true. However that user replied with what is clearly a disagreement, with corporate justifications and placing sole responsibility on employees to avoid the hardware.
This leads to two competing options:
(A) They simply can't imagine any scenario where a company might "go too far" and be at fault.
(B) Their stance is much milder, but for some reason they are replying to a straw-man argument that isn't what I actually wrote.
Of those two ambiguities, I went with (A), but if you think (B) is a more-charitable reading...
Or that the discussion was about information on, and being transmitted through, the devices, and I was limiting my opinion on "there being nothing wrong with corporations tracking use of their hardware" to that scope, not extending it to include spying on people in their homes using the device peripherals.
No, they shouldn't be flicking on your laptop camera or mic remotely, as these are pretty obviously violations of your privacy.
My rights are not subordinate to my company's, if anything it should be the reverse. My employment contract is intended for mutual benefit and the company also reserves the right to privacy from me in some things, even things in the scope of my employment. It should be acceptable to do things outside the scope of your employment using corporate devices, and you should retain a reasonable expectation of privacy when doing so.
There are some places where I don't really have an opinion; if you work in, I dunno, pet grooming, and you and your employer agree to... "shared custody" of a device then sure. But I'm not sure that's possible in some fields. As I understand it, financial companies are legally mandated to record every single message sent/received on the machine so they can prove that nobody's doing insider trading or whatever. I would kind of expect something similar for medical field companies. I'm open to suggestions, but I can't personally see a way to uphold that obligation while giving the employees privacy if they want to use the company device to ex. check their personal email.
I signed a contract with my employer that when I'm using the computer they give me to conduct their business on their behalf, they have the right to observe my usage of that computer.
The situation in this article is completely different.
None of my employers have done this to my knowledge. Some of them have had the ability to run commands on my computer, so they could in theory install such a thing without me noticing, but the default OOTB experience was not that.
tl;dr: They acquired an app called Onavo, with 10 million customers, and used it to install a CA certificate, thus allowing them to act as a MITM proxy.
tl;dr: If you install and fully trust a root CA on your client device, of course your TLS traffic can be MITMed.
edit: the problem, obviously, is that this app tricked the non-technical people into installing/trusting the root CA for malicious purposes. Clearly this was malware.
That's great for someone reading this forum to be aware of, but moms have no idea what any of the words you just wrote mean. So they'd do it if they were told they'd get a coupon for installing it, or some other ridiculous bait malware devs use. And yes, I'm calling FB software malware. All of it. Messenger, FB.app, everything. If it's from Meta, it's malicious.
That's a very good point. I have within recent memory installed my own internal CA that I run on Android devices that I own and trust, and the process on Android 11+ is sufficiently daunting that 99.5% of people's moms could not do it in one or two clicks. You have to go deep into system settings and manually import the CA. This requires first file-transferring the CA file somewhere onto local /sdcard storage and possibly having a file system explorer app installed to be able to view its location on "disk" and pick it.
As is pointed out in the article, I would presume that Google saw the threat from allowing an app to install and trust a root CA as well, and removed the ability for a "one click" install of a root CA:
"KeyChain.createInstallIntent() stopped working in Android 7 (Nougat). A user would have to manually install the certificate. It would no longer be possible to have Facebook's CA cert installed directly in the app."
I would argue that everyone over the age of 8 can do it with sufficient motivation and quality documentation. $10-20 and the promise of more money doing some low-effort "consumer survey" or providing "extra analytics" is pretty enticing to a massive number of people really struggling in this country.
Despite being hard-up I don't think the vast majority of these low-income individuals would agree to being so egregiously wiretapped and data mined for future political ads on youtube or bundled into some other product without better compensation.
So I mean, just taking a quick look at the contents of /etc/ssl/certs and what Firefox shows me when I hit its View Certificates button, I see among dozens of other actors, Amazon, Microsoft, GoDaddy, and the Beijing Certificate Authority. No software has ever asked me if I want to trust any of these guys, they've been silently trusted during a software install I suppose. Does this mean they can all MITM my TLS traffic if they so choose?
Theoretically, yes, they could, I think. However, with Certificate Transparency, any fraudulent certificates these Certificate Authorities created would have to be published in CT logs to be valid, where they would be quickly noticed, and the CA would (hopefully) lose credibility and be removed from devices' trusted CA lists.
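If you're curious what your own trust store contains, here's a quick sketch using Python's stdlib `ssl` module. What it prints depends entirely on your OS trust store, so the count and names will vary per machine.

```python
import ssl

# create_default_context() already loads the OS trust store on most
# platforms; call load_default_certs() explicitly to be sure.
ctx = ssl.create_default_context()
ctx.load_default_certs(ssl.Purpose.SERVER_AUTH)

roots = ctx.get_ca_certs()  # one dict per trusted root certificate
print(f"{len(roots)} trusted root CAs")
for cert in roots[:5]:
    # 'subject' is a tuple of RDNs; flatten to a plain dict for display
    subject = {k: v for rdn in cert["subject"] for k, v in rdn}
    print(" -", subject.get("organizationName") or subject.get("commonName", "?"))
```

Every entry in that list can vouch for any hostname, which is why CT logging and CA distrust (as happened to Symantec) are the main checks on abuse.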
HSTS causes your browser to pin the first cert that it sees (from sites opting in to this scheme), so nobody (even the legitimate operator) can swap it out before it expires.
And specifically to the scenario in OP, app clients these days do not use the OS cert store, they will ship a single well-known server cert and only accept that one. This doesn’t help with your Firefox usecase though.
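A minimal sketch of what app-side pinning amounts to, assuming the app ships a SHA-256 fingerprint of the server's DER-encoded certificate (the helper name is my own):

```python
import hashlib

def cert_matches_pin(der_bytes: bytes, pinned_sha256_hex: str) -> bool:
    """Compare the SHA-256 fingerprint of a DER-encoded certificate against
    a pin baked into the app. A forged cert from an interception proxy has a
    different fingerprint even if the client trusts the proxy's root CA."""
    return hashlib.sha256(der_bytes).hexdigest() == pinned_sha256_hex.lower()

# An app would get der_bytes via tls.getpeercert(binary_form=True) and
# refuse to send anything if the pin check fails.
```

This is why Onavo-style interception works against plain TLS but not against a properly pinned endpoint: the proxy can mint a cert the OS trusts, but it can't forge the fingerprint the app is checking for.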
When HSTS is enabled, browsers don't pin the specific cert, just that HTTPS is required. Pinning the cert would mean users would experience outages (because you can't swap the cert early), which would be a terrible experience.
HSTS means HTTPS is required and the cert needs to validate: issued by a trusted CA and not expired (and presumably also not used before its notBefore date). And the usual ignore-it-and-move-on button is gone.
It doesn't help if you're worried about a trusted CA issuing a cert for your domain without your approval, though. Certificate Transparency helps a bit with that; Chrome requires certs with a notBefore after April 30, 2018 to be in CT logs[1], so at least you'll be able to know a certificate was issued for your domain. If that happens, you can ask the CA/Browser Forum to investigate, and there's a good chance the CA will get kicked out if there's no good explanation of what happened. That's not perfect, but it's better than before CT, when you could only learn about an unauthorized cert if you happened to see it.
[1] I think max validity was two years back then, so all current certs need logs
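For reference, HSTS directives are just `name=value` pairs in a single header; a small sketch of parsing one (the function name is my own):

```python
def parse_hsts(header_value: str) -> dict:
    """Split a Strict-Transport-Security header value into its directives.
    Valueless directives like includeSubDomains map to True."""
    directives = {}
    for part in header_value.split(";"):
        part = part.strip()
        if not part:
            continue
        name, sep, value = part.partition("=")
        directives[name.strip().lower()] = value.strip() if sep else True
    return directives

print(parse_hsts("max-age=31536000; includeSubDomains; preload"))
```

Note that the policy only takes effect after the browser has seen the header once over a valid HTTPS connection, which is the gap the preload list exists to close.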
That’s not sufficient. You also need to intercept traffic somehow, which they accomplished by buying this VPN company and using it to proxy victims’ traffic through their infra.
Edit: Not excusing Facebook here, but I feel like this whole thing is in a weird grey area. It's like getting paid to have a Nielsen box monitoring your TV and then complaining when you find out it also knew what you watched on your DVD player.
Read the wording on the apk[0] - while it does mention they collect data to improve fb product it sure doesn’t mention the data includes telemetry for competitors’ apps.
I think what is missing is a timeline and clarity about the actual steps users had to take.
1) Onavo was a (free?) VPN app acquired by FB in 2014. Facebook used it to collect “market research data.” People chose to download this, but thought it was a security product.
2) At some point (it looks like 2016?) they launched an iOS app called Research, using the same tech, which required users to install a certificate meant for internal Facebook employees. They paid these users to monitor their traffic.
Are you saying that the MITM was happening for users of (1) or (2) or both?
Blockchain-based DNS would allow people to actually own their domain names instead of just renting them for an annual fee.
Domain transfers could be effected on-chain for a fee. Spam prevention is built-in because any action recorded on the blockchain incurs a fee. The fee is determined by the free markets and nobody holds a monopoly over the market.
People who trade domains would end up subsidizing those who hold domains; allowing them to hold domains for free, permanently (once the domain is bought and initial transfer is made).
It removes the need for an authority like ICANN who decide who gets to control what.
You don't have to limit yourself to one blockchain, new blockchains could launch and be treated as distinct gTLDs.
You don't need Certificate Authorities and the complex, trust-based infrastructure to implement certificate verification. It can all be done on-chain, anyone can sync their own nodes to verify who owns what domains.
Blockchain is naturally good for high-read scenarios. It can scale in terms of number of reads without limit; just add more nodes. The writes are limited, however, transaction fees serve as a natural regulating factor; it can always meet the demand, for the right price; which is determined entirely by the markets and based on usage of computational resources, not based on monopoly pricing.
Bitcoin, with a measly maximum of 4 transactions per second, has already proven that transaction fees can stay reasonable, even with extreme levels of hype on a global scale.
BTW, if you managed to read my comment, you can consider yourself lucky because this perspective I'm sharing is consistently suppressed and heavily down-voted... You can be sure that there are financial interests behind the current DNS system which do not want to leave room for any tech which might liberate the internet from the clutches of the incumbents and which might force them to compete on a free market.
Where did I mention a specific blockchain or coin? How can I monetize my previous comment?
It is deceptive to imply that there is a financial benefit. My idea is purely practical. It doesn't align with any mainstream financial interest, unlike yours, which only serves incumbents.