FastHTML has a NotStr(X) component that renders X as HTML.
I just copied a big HTML Tailwind component to a NotStr() and it worked fine.
I then split it in two, a before part and an after part, so I could build the dynamic bit from native FastHTML components, and returning Div(before, dynamic_parts, after) worked fine.
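The split-and-sandwich pattern described here can be sketched roughly like this. Note this is a self-contained toy: the NotStr, Div, and P classes below are minimal stand-ins for the real FastHTML components (which you'd normally import from the library), kept local only so the example runs on its own:

```python
# Toy stand-ins for FastHTML's NotStr and Div, just to illustrate the
# "static HTML before / dynamic components / static HTML after" pattern.
# In real FastHTML code these come from the library, not hand-rolled classes.

class NotStr:
    """Wraps a raw HTML string so it is rendered verbatim, not escaped."""
    def __init__(self, html): self.html = html
    def render(self): return self.html

class Div:
    """Renders its children inside a <div>."""
    def __init__(self, *children): self.children = children
    def render(self):
        inner = "".join(c.render() for c in self.children)
        return f"<div>{inner}</div>"

class P:
    """A dynamic component built 'natively' rather than pasted as raw HTML."""
    def __init__(self, text): self.text = text
    def render(self): return f"<p>{self.text}</p>"

# A big copied Tailwind component, split at the spot where
# dynamic content should go.
before = NotStr('<section class="p-4 bg-gray-100">')
after = NotStr('</section>')

dynamic = P("Hello from a native component")

page = Div(before, dynamic, after)
print(page.render())
# → <div><section class="p-4 bg-gray-100"><p>Hello from a native component</p></section></div>
```

The key point is that NotStr output is interleaved verbatim with rendered components, so the copied markup doesn't need to be valid on its own, only once the sandwich is assembled.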
I plan to convert most of my smaller websites to FastHTML in the next few days, since it's much more enjoyable for me.
Most people don't know what their home is worth, and if you get two appraisers or realtors to give you an estimate, the estimates will likely be quite different.
Similarly, if you think a house is worth $500k but you list it for $600k, you don't know if someone will decide they love that house and they're willing to pay that amount or not.
That's one reason Opendoor tends to list houses at a fair premium to what they paid, as sometimes someone decides it is worth it and they make 20%+ on homes like that.
Yeah, and by the same token, you don't know if the buyer offering the top amount will really follow through, which gives an offer from Opendoor a certain appeal.
I think Opendoor still does poorly in new markets, but then it improves as they work out the quirks of the local market and start asking the right questions and collecting the right information.
A big part of Opendoor is creating the right apps and processes to collect this information to feed their models. The machine learning part is important, but can give the false impression it's just about data scientists crunching numbers at head office, when in reality there's a huge real-world operational machine that's driving it.
Zoom claimed they had to remove Chinese participants from the US-hosted meeting but didn't have the functionality to do that so (wrongly) banned the US hosts.
They said it was wrong to do, reinstated those accounts, and are building the functionality to enforce those Chinese laws without ever impacting users outside China.
I'll ignore for a second the fact that they refused to even acknowledge Tiananmen Square in that post, despite the fact that, as was pointed out, they're a US company that isn't beholden to China, posting in English on their US-based website.
They are actually admitting that they're going to prevent people IN CHINA from connecting to a meeting that is presumably hosted IN THE US. That doesn't make it better, it makes it WORSE. You're basically telling the world that China will dictate how you operate WORLDWIDE, not just in China.
I'm sure that if the US had as strict control over our/their internet as China does over theirs, non-US sites would be forced to operate differently for peers in the US too.
Yeah - I'm pretty sure this is the real concern, and verifying a phone number is a reasonable trade-off.
I know this argument is often quickly dismissed on HN since people see child abuse or 'going dark' as an easy excuse for the government to leverage to get more control (and it has been used for this), but that doesn't mean the problem isn't serious or doesn't exist.
The people carrying out the abuse are sophisticated.
I have a friend who works at WhatsApp, and their entire team is focused on trying to remove groups that exist to share child abuse imagery (via metadata, since the content is encrypted).
I fall on the side that secure encryption is critical, for all the reasons technical people normally argue it's critical and that breaking it doesn't work/is a bad idea. But I also understand and empathize with the difficulty that encryption by default causes for the organizations fighting this abuse.
That said, I have serious disagreements with Zoom unrelated to this particular e2ee issue (https://zalberico.com/essay/2020/06/13/zoom-in-china.html), I think they don't actually care about protecting the speech of their users or securing content from authoritarian governments. It's still good to avoid them for that reason alone.
Indeed, E2EE will enable criminals to go undetected. And this is a real problem. However, it’s an arms race that will end with criminals having proper, strong E2EE anyways. Trying to reverse this is like trying to reverse entropy: the toothpaste does not go back into the tube. It may seem like it is still doable now, but I’d be willing to place bets that feeling will evaporate shortly.
Of course, criminals are ordinary people too. They care about convenience and network effects as much as anyone. Which is why I think it’s insane that governments want to jeopardize the trust people have in proprietary, huge E2EE platforms that actually have the means to aid them in investigations. Yes, breaking the crypto may not be an option, but at least collecting useful metadata for use in investigating, and potentially ethical hacking, is an option.
I fear the day when the trust is gone, because there is a very real possibility that some day many will be using decentralized E2EE chats, maybe even P2P. It’s not just conjecture, of course: Matrix exists today and is already very impressive (in my opinion) in terms of usability.
The internet is opening up the concept of having nearly private communication with pretty much any individual in the world. It isn’t free of implications, but as more of our lives move online I feel it’s absolutely crucial that everyday people can feel confident they’re not being monitored. The problem of CSA and other criminal behavior existed before the internet and it will certainly exist after. It’s absolutely past time to re-evaluate laws surrounding child protection, which seem to me to be mostly reactionary at this point (in that many of them are spawned as a result of a specific incident).
> Indeed, E2EE will enable criminals to go undetected. And this is a real problem. However, it’s an arms race that will end with criminals having proper, strong E2EE anyways.
Individual child abusers aren’t part of a monolithic organization with training on how to secure their comms and practice OpSec.
The number of criminals who still create evidence against themselves on unencrypted platforms (SMS, phone, etc) is significant, despite E2EE options already being available. People are even being arrested for rioting after admitting on public TikTok videos to participating.
I think the only way criminals will standardize on E2EE is if every platform and communication mechanism is E2EE by default. Otherwise they will continue to make mistakes or think they can slip under the radar.
> I think the only way criminals will standardize on E2EE is if every platform and communication mechanism is E2EE by default. Otherwise they will continue to make mistakes or think they can slip under the radar.
FWIW, I believe this is the future if lawmakers don’t prevent it. A look at some E2EE software today:
- WhatsApp
- Matrix
- Signal
- iMessage
- Firefox Send
- MEGA
- ...
The list will grow.
In my opinion, E2EE today is like TLS 10 years ago. TLS was once a nice-to-have when it came to communication that was not strictly necessary to encrypt. Today, TLS is more sophisticated, stronger, and easier to implement than ever, and damn near a necessity for anything, even toys.
Granted... E2EE is necessarily harder, since it requires application-level implementation of crypto primitives, so things definitely get complicated. Still, I believe the state of the art will continue to improve, and the tooling with it. Eventually there will probably be de facto libraries and maybe even OS frameworks to deal with E2EE key management, trust, etc.
To be clear, I view this as strictly a good thing and an inevitability. I don’t think transport encryption and encryption-at-rest are good enough anymore for private communication. Of course for public sites like Twitter or Tiktok it’s all you would logically get, but for any group or direct communication I now believe E2EE is slowly becoming the new baseline, and it’s mostly the complexity of it that hampers adoption.
Now that iMessage and WhatsApp are E2EE, though, there are a lot of messages flowing that, exploits notwithstanding, are “truly” private today, and I think the number will only go up. The only real question in my mind is: who’s next?
As far as criminals making slip-ups, this is guaranteed; even the best make mistakes, obviously. But assuming all criminals are foolish and stupid is a mistake; I believe there’s a lot of selection bias here, since we never hear about those who truly never get caught. Time will tell if any of this really matters, or if, as usual, it’s just another panic with no tangible effects. I vote for the latter, but I still believe the proliferation of E2EE will change the game in ways we can’t fully anticipate.
If you think this strengthens the case against encryption laws, I suggest you rethink. There are plenty of valid arguments against banning strong encryption, and this isn’t one. You can’t simultaneously argue that E2EE keeps people’s conversations private from eavesdropping and then suggest that it doesn’t prevent eavesdropping for law enforcement purposes: at face value it does, and image hash databases to prevent the spread of known CSAM exist today; see, for example, Project Arachnid. And yes, law enforcement eavesdrops for law enforcement purposes. That’s why wiretap warrants exist. Whether it’s a good thing is another argument entirely, but it is indeed the status quo.
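As a rough illustration of how such hash databases work, here's a minimal sketch. It uses plain SHA-256, which only flags byte-identical files; real systems like Project Arachnid rely on perceptual hashing that survives re-encoding and resizing, so treat this purely as a conceptual toy:

```python
import hashlib

# Toy database of known hashes (hex digests). Real services maintain large
# curated databases of perceptual hashes, not plain SHA-256 digests.
known_hashes = {
    hashlib.sha256(b"known-bad-file-bytes").hexdigest(),
}

def is_known(file_bytes: bytes) -> bool:
    """Return True if the file's hash appears in the known-hash database."""
    return hashlib.sha256(file_bytes).hexdigest() in known_hashes

print(is_known(b"known-bad-file-bytes"))   # True: matches the database
print(is_known(b"some-other-upload"))      # False: unknown content
```

Matching against a hash list is what lets a platform block known material without a human ever viewing uploads; the open question in the E2EE debate is where that check runs, since a server can't hash content it can't decrypt.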
Put plainly, there will always be crimes you won't be able to catch. You prioritise resources on the most pressing ones and build up resources in the real world to tackle them in other ways. Dystopian lists on the client that control what you're allowed to say or think, or that report your thoughts back to the government, still violate the principle E2EE is built upon.
There is no middle-ground. You either are secure or you are not. The genie is out of the bottle either way.
> The people carrying out the abuse are sophisticated.
In this case wouldn't they build their own solutions (potentially based on existing open-source solutions like Asterisk + Linphone or Jitsi Meet) or they might've built them already?
Phone numbers are also very easy to obtain anonymously, so I am not sure SMS verification would help track down abusers when it'll lead to a prepaid SIM or some innocent user's phone that happened to be compromised by malware.
> Phone numbers are also very easy to obtain anonymously, so I am not sure SMS verification would help track down abusers when it'll lead to a prepaid SIM or some innocent user's phone that happened to be compromised by malware.
It depends on which country really. In some places in Europe it became almost impossible to do that (sadly).
I agree that these reasons are why it's not a good idea to break or outlaw encryption, since bad actors can still use it and good people who need it are blocked. But that doesn't mean that making it the default doesn't enable more abusers to get away with it who might otherwise be caught.
There's a spectrum of sophistication; if it's harder, more of them will make mistakes that make them easier to catch.
So how do you decide that giving away phone numbers is the right trade-off on that "spectrum of sophistication"? It effectively means a lack of anonymous communication for everyone, i.e. global surveillance (personally identifiable metadata in the hands of Zoom).
I didn't say it was 'right', I said it was 'reasonable' - and there aren't easy answers to this.
Also to clarify, specifically a reasonable trade-off for Zoom (I don't think there should be a general law that requires IDs for video software use or something).
If you read the rest of my comment beyond the first line (particularly my blog link), you'd see that I agree with you when it comes to companies taking an ethical stand against authoritarian governments.
What you're arguing is a strawman, we agree more than we disagree.
> If you read the rest of my comment beyond the first line (particularly my blog link),
I read it, and I think your argument is hollow; assuming your goodwill, you are not understanding the matter at all, and if not, I see an ill intent.
I do not appreciate what you say at all. Any argument against encryption must be quashed without exception or second thoughts.
Just since the start of the 21st century, the experience akin to "legs broken, skin flayed alive, and head cut off" has been a grim reality for far more than a million people, mostly for, really, nothing. That is what we are talking about here! And what do you talk about?
Attack that argument, not something without even a passing genuine relation to the matter.
As you’ve responded here and elsewhere, calling an argument “hollow” is not a substantive disagreement.
It seems any argument that you don’t already agree with (basically only your exact position) is classified this way.
The rest of your comment is basically incoherent, and the parts that do make sense are obviously wrong. It’s also a willful misinterpretation of my position.
People were flayed before the 21st century. Acknowledging the issues with encryption is a critical requirement in making an effective defense of it. I am not arguing against encryption.
If this is an issue you actually care about (which it sounds like it is), learning how to build consensus and honestly consider the positions of others would be a valuable skill to develop.
As it stands you’re doing more harm to the pro-encryption position (which is also my position) with how you’re attempting to defend it.
People would give more support to government efforts to fight child abuse videos, if the government stopped using child abuse control tech to violently suppress human rights.
> I know this argument is often quickly dismissed on HN since people see child abuse or 'going dark' as an easy excuse for the government to leverage to get more control (and it has been used for this), but that doesn't mean the problem isn't serious or doesn't exist.
When a company says they want your phone number in order to use their resources, so they can take steps to avoid having their resources used for (certain) crimes, that's well within the bounds of reasonable.
The problem most people have is when the government takes away the use of _super important feature_ from the populace as a whole (even using their own resources), because it _can_ be used for crimes.
Are we talking about recirculation of existing content or new cases of abuse? How much of it is new? How much of it is duplicates? How much of it involves the platform facilitating crimes to produce it? One article noted something very alarming, that resources are diverted from more serious crimes to chase these ones.
Only 4 comments in and we hit one of the four boogymen of the civil rights apocalypse. How many comments until we get to domestic terrorism or illegal drugs?
> one of the four boogymen of the civil rights apocalypse
The public is willing to trade away privacy in exchange for protection from certain categories of risk. Instead of denying that, one can lean into it by ensuring strict definitions and enforcement options within those categories while preserving full privacy outside them. Arguing that pedophile rings and terrorism are a cost of a privacy policy is a good way to sink that policy.
What if the only practical way to 100% stop all crime is to shutdown the internet?
Now, I'm not saying there is nothing that can be done to reduce it. I very much hope there can be, especially if counsellors can find warning signs and we can better figure out how to spot the danger signs, both online and off.
Facebook took a good step forward by putting warnings up to minors when someone outside of their social circles has contacted many others, although there are other things which could be done.
Should they be allowed to contact them through onion routing in such situations? Where do you draw the line of when such technologies can be used? Is it better not to open this can of worms and risk a slippery descent? What are the chances of false positives, and will it unfairly impact relatives? Will it give a black mark to privacy technologies and civil liberties to be associated with automatic blocks? What if minors want to engage in activism; should this be limited? At what point does all this pushing just restart the lie-about-your-age shenanigans?
This is about Facebook here but it ties back to arguments about doing this or that for the greater good.
Is a more grounded approach better? Ensure minors are well-educated of the risks and dangers online? Invest in mental health services to avoid minors falling into depressive slumps where they might be susceptible to such criminals? In the rare event they drag anyone back home, whether they think they're of a similar age or not, they bring them before the parents first?
I would make a cogent argument to rebuff your straw man, but it's not worth my time if you don't share a priori assumptions with me about E2EE being uncrackable. It's just math. I don't see why the talk of trade-offs is even relevant to the discussion. People will use secure tools with E2EE or they will suffer the consequences of not doing so. Doing illegal things is already illegal. Banning or watering down E2EE so that it is no longer E2EE is throwing the baby out with the bathwater.
Your mistake is bringing a technical argument to a political question.
My personal political answer to "how to have end-to-end encryption and prevent its use for child rape" would be to tax the companies which profit from E2EE, and use that money to fund death squads, which livestream dragging child rapists out of their home, anywhere in the world, and beating them to death with truncheons.
I'm joking, of course (or am I?) but I do consider this the general shape of a viable solution. E2EE is essential for a modern life which isn't a hellish surveillance dystopia, and the detection and prosecution of child rape is criminally underfunded.
> E2EE is essential for a modern life which isn't a hellish surveillance dystopia, and the detection and prosecution of child rape is criminally underfunded.
This is creeping a little close to populist rhetoric. The crimes you've described are obviously awful but angry politics will only lead to knee-jerk solutions.
It's clearly underfunded relative to the difficulty of prosecuting these cases. Banning E2EE is a way of lowering that bar of difficulty. The crime is reprehensible and worthy of enforcement given the heinous nature of the abuse, but curtailing it by violating the human right to encrypt is not the way to end it. More funding is likely justified if it leads to an end to abuse; that social benefit of reducing and eliminating abuse should not come at the expense of human rights and E2EE.
I see, so your main concern here is prosecution. It is indeed true that prosecution is understaffed and underfunded. I feel there are other problems at play too.
CPS should be able to spot children in abusive homes and respond to reports of unusual activity. They should be able to spot clearly unstable caretakers.
Counsellors and teachers should be able to spot unusual behaviour from children. Mental health services can help someone escape falling into such a situation in the first place by keeping them from falling into depression which leads them to rely on such a person.
Local police shouldn't dismiss leads so readily. This is the "it is impossible for him or her to do such a thing" mindset which prevails so frequently.
Parents shouldn't trust their relatives so readily and should keep an eye out. 90% of cases happen at home.
If they stopped showing off their crimes online, would the entire system come to a crawl? I'm worried by how much of a reliance there is on divining crimes off the internet.
If they can enter the meeting, then either they got confirmation from the host, who would send the keys to the person entering, or they already have the keys and can enter the meeting and decrypt the stream.
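A toy model of the point being made here: whoever holds the per-meeting key can decrypt the stream, and nobody else can, so admission comes down to key distribution. Everything below is illustrative only (the XOR "cipher" built from SHA-256 is NOT a real construction; real systems use vetted AEAD ciphers), but it shows why holding the key, not the transport, is what gates access:

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    """Toy keystream: SHA-256 of key||counter. Illustrative only, NOT a
    real cipher -- real systems use vetted AEAD constructions."""
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; applying it twice with the same key
    recovers the original, so the same function encrypts and decrypts."""
    ks = keystream(key)
    return bytes(b ^ next(ks) for b in data)

meeting_key = b"per-meeting secret"

# The host encrypts the media stream with the meeting key.
ciphertext = xor_crypt(meeting_key, b"video frame bytes")

# An admitted participant (the host sent them the key) decrypts the frame...
print(xor_crypt(meeting_key, ciphertext))

# ...while anyone without the key recovers only garbage.
print(xor_crypt(b"wrong key", ciphertext))
```

In this model "Zoom letting someone in" means the host (or someone who already has the key) handing over `meeting_key`; there is no third path where the server decrypts on its own, which is exactly the property genuine E2EE is supposed to guarantee.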
They apparently 'define it differently' from every other company, organization, and infosec professional. This sort of thing used to be called lying, but it's essentially an 'alternative fact' now:
Zoom, however, denies that it’s misleading users. The company told The Intercept, “When we use the phrase ‘End to End’ in our other literature, it is in reference to the connection being encrypted from Zoom end point to Zoom end point,” and that “content is not decrypted as it transfers across the Zoom cloud.”
Whether the paper is any different is sort of irrelevant if they're starting from a place of bad faith. Time after time this company has 'accidents' like this, while removing CCP-designated nonpersons from the platform. A sense of skepticism is certainly justified.
Zoom has said that employees can enter a meeting, but there's no way to do that without being seen on the participant list and there is no way to record a meeting secretly. They've also said they wouldn't build these things.
>We also do not have a means to insert our employees or others into meetings without being reflected in the participant list. We will not build any cryptographic backdoors to allow for the secret monitoring of meetings.
They bought Keybase to bring on a strong security team as they try to build end-to-end encryption into 1,000-person meetings, which is currently not possible with any solution.[1]
You're absolutely right that past decisions focused on ease-of-use over security.
For evidence that they've changed their focus you can see their April 1 blog post[1] and the weekly video AMAs they do that are summarised in their "90-Day Security Plan Progress Report" blog posts.[2]
They're making a lot of progress.
The Keybase acquisition is about building out a strong security team that will help them implement end-to-end encryption in 1,000-person meetings, which currently isn't possible anywhere.[3]