To what end though? 4 billion addresses is not enough on its own, even if they were reallocated from hoarders. I think that NAT and especially CGNAT have been very detrimental to the shape of the internet, where it's nearly impossible to self-host a public service without a VPN of some kind. Needing to pay some company for the ability to host a server that isn't behind NAT is a barrier that doesn't need to exist when IPv6 has a nearly limitless number of addresses.
You're not wrong, but practically speaking, hosting a VM is so cheap and comes with the advantage of serving from a datacenter that I would never want to host anything off my residential connection anyway.
The $1 to $5 a month gets me excellent, reliable connectivity (which no residential connection provides), DDoS protection, and an address that isn't tied to my home IP; that outweighs any home hosting benefit in my experience.
> If a rebase conflict occurs, the operation pauses and prints the conflicted files with line numbers. Resolve the conflicts, stage with git add, and continue with --continue. To undo the entire rebase, use --abort to restore all branches to their pre-rebase state.
So it tries to replay commits in the stack and will stop halfway through that individual stack (layer?) to let you fix it if there's a conflict.
I hope that GitHub continues copying Graphite's homework in terms of functionality, because from what I can see they don't have equivalents to `gt split`, absorb, and so on. Those can be very useful in the right contexts.
It appears the CLI is only half-baked so far. Given how many things they've borrowed from Graphite (a tool which adds this type of workflow), it should only be a matter of time until they add a `split` command. Graphite lets you split a large set of changes by commit or by hunk which is very handy.
That is genuinely horrifying. I wonder what the stats would be for an average "artisan, hand-typed" project if it got as much attention as OpenClaw has. But 1.8 CVEs a day should scare any rational person away from the software... right? Surely?
I’m not an OpenClaw user or a vibe coder, but the use case of OpenClaw is “give me access to all of your data, programs and information, and I will make decisions and do stuff without asking you permission”. It’s the MO of the project. Even if it were perfectly designed, I think it would have more RCEs simply because the Venn diagram of the app’s use cases and high-risk areas is a perfect circle.
> the use case of OpenClaw is “give me access to all of your data, programs and information, and I will make decisions and do stuff without asking you permission”. It’s the MO of the project.
You say that, but you also say
> I’m not an openclaw user
Your first statement makes the second one rather obvious.
As I said some weeks ago, I've given up pointing out on HN that "well, you could just not give it your data", since I'm repeatedly told (by non-users) that the whole point is to give it all your data.
> This is the "you're holding it wrong"[1] argument
But isn't that what you're doing?
Every single submission on HN has threads where people point out how it's useful to them without giving it access to much/any data. What is the benefit of pointing out what the homepage is saying other than to imply that we are holding it wrong?
And what does it say about you that you're going off marketing copy on a website rather than actual, competent tech users who actually wield the tool?
Until recently Gentoo boasted performance as a reason to use it. Yet as someone who's been in the community for over 20 years, I can assure you the majority of users didn't care about the performance and aren't optimizing their builds for it. Who cares what the site says?
I'm an OpenClaw user and I would never do that. You can use OpenClaw that way, but it is definitely not the only use case, and I would argue not even the one that makes the most sense. Most people want to be careful about which decisions they outsource and which they don't, and you can direct the AI to work however you prefer. Personally I have developed some projects with OpenClaw, and it has very limited permissions.
> and that's why heavily regulated industries like healthcare, education, and transportation have seen basically no innovation in 50 years.
Not to get distracted, but aren't these three all incredible examples of innovation over time? Healthcare alone is significantly better than it was 50 years ago, and it's not really close. 50 years ago, this hip new treatment called electroshock therapy was being used to "treat" being gay. You were also within touching distance of getting a lobotomy for depression or anything else your husband thought was a problem.
The rates of depression in the US are at an all time high [1]. The primary theory behind the cause of depression and mechanism of most antidepressants has been abandoned [2]. Not treating homosexuality as a disease isn't an innovation, it's a cultural change.
You could maybe argue mRNA vaccines or semaglutides are big innovations; I think we've made a ton of progress against HIV, and it seems like we've made progress against cancer. But when you factor in how much government money goes into this research and compare it against the advancements we've seen in computational technology, it's a lot less impressive. You can buy a Raspberry Pi for like $50 today that outperforms every computer made 50 years ago, whereas the cost of most medical imaging has actually increased [3]. Likewise, the inflation-adjusted cost of college degrees and of building new rail lines or really any infrastructure has risen precipitously since 1970.
Returning `impl Trait` is useful when you can't name the type you're trying to return (e.g. a closure), when the type is annoyingly long (e.g. a long iterator chain), and when you want to avoid the heap allocation of returning a `Box<dyn Trait>`.
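A minimal sketch of both cases (helper names are hypothetical, for illustration only):

```rust
// `evens_up_to` returns an iterator whose concrete type (a Filter over
// a range, capturing a closure) is unnameable; `impl Iterator` keeps it
// on the stack, where `Box<dyn Iterator>` would add a heap allocation
// and dynamic dispatch.
fn evens_up_to(n: u32) -> impl Iterator<Item = u32> {
    (0..=n).filter(|x| x % 2 == 0)
}

// A closure's type can't be written out at all, so without `impl Fn`
// the only option is boxing.
fn adder(k: i32) -> impl Fn(i32) -> i32 {
    move |x| x + k
}

fn main() {
    let evens: Vec<u32> = evens_up_to(8).collect();
    println!("{:?}", evens); // [0, 2, 4, 6, 8]
    println!("{}", adder(10)(5)); // 15
}
```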
Async/await is just fundamental to making efficient programs, I'm not sure what to mention here. Reading a file from disk, waiting for network I/O, etc are all catastrophically slow in CPU time and having a mechanism to keep a thread doing useful other work is important.
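As a toy, std-only sketch of the mechanism: a single thread polling two fake "I/O" futures, making progress on one whenever the other is pending instead of blocking on either. Everything here (`FakeIo`, `drive_both`, the pending counts) is hypothetical scaffolding, not how a real runtime like tokio is implemented.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Toy future standing in for an I/O operation: it reports Pending
// `remaining` times before producing its value.
struct FakeIo {
    remaining: u32,
    value: &'static str,
}

impl Future for FakeIo {
    type Output = &'static str;
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<&'static str> {
        if self.remaining == 0 {
            Poll::Ready(self.value)
        } else {
            self.remaining -= 1;
            Poll::Pending
        }
    }
}

// A waker that does nothing; a real executor would use it to re-queue
// a task once its I/O is actually ready.
fn noop_waker() -> Waker {
    fn clone(p: *const ()) -> RawWaker { RawWaker::new(p, &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// One thread drives both operations: whenever one is pending it polls
// the other. Results come back in completion order.
fn drive_both() -> Vec<&'static str> {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut a = Box::pin(FakeIo { remaining: 2, value: "disk read" });
    let mut b = Box::pin(FakeIo { remaining: 1, value: "network read" });
    let (mut done, mut a_done, mut b_done) = (Vec::new(), false, false);
    while !(a_done && b_done) {
        if !a_done {
            if let Poll::Ready(v) = a.as_mut().poll(&mut cx) { done.push(v); a_done = true; }
        }
        if !b_done {
            if let Poll::Ready(v) = b.as_mut().poll(&mut cx) { done.push(v); b_done = true; }
        }
    }
    done
}

fn main() {
    println!("{:?}", drive_both()); // the shorter "wait" finishes first
}
```

The point is just that `Pending` is a cooperative yield: the thread goes off and does other useful work instead of sitting in a blocking syscall.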
Actively writing code for the others you mentioned generally isn't required in the average program (e.g. you don't need to create your own proc macros, but it can help cut down boilerplate). To be fair though, I'm not sure how someone would know that if they weren't already used to the features. I imagine it must be what I feel like when I see probably average modern C++ and go "wtf is going on here"
> Reading a file from disk, waiting for network I/O, etc are all catastrophically slow in CPU time and having a mechanism to keep a thread doing useful other work is important.
Curious if you have benchmarks for "catastrophically slow".
Also, on Linux, the mainstream implementation translates async file I/O into blocking calls on a thread pool at the kernel level anyway.
Could you elaborate on the error handling part? To me, Rust is the only sane language I've worked with in terms of error propagation, in that functions must explicitly state what they can return, so you don't get some bizarre runtime error thrown because the data was invalid 15 layers deeper.
I don't know what quotemstr was specifically talking about, but here's my own take.
The ideal error handling is inferred algebraic effects like in Koka[1]. This allows you to add a call to an error-throwing function 15 layers down the stack and it's automatically propagated into the type signatures of all functions up the stack (and you can see the inferred effects with a language server or other tooling, similar to Rust's inferred types).
Now, how do you define E4, E5 and E6? The "correct" way is to use sum types, i.e., `enum E4 {E1(E1), E2(E2)}`, `enum E5 {E1(E1), E3(E3)}` and `enum E6 {E1(E1), E2(E2), E3(E3)}`, with the appropriate From impls. The problem is that this involves a ton of boilerplate, even with thiserror handling some of it, like the From impls.
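Here's what that boilerplate looks like written out by hand for just E4 (hypothetical `f1`/`f2` and leaf errors, no thiserror):

```rust
// Leaf errors raised by f1 and f2 (hypothetical stand-ins).
#[derive(Debug)]
struct E1;
#[derive(Debug)]
struct E2;

// E4 is the sum of everything f4's callees can fail with.
#[derive(Debug)]
enum E4 {
    E1(E1),
    E2(E2),
}

// These From impls are what `?` uses to widen E1/E2 into E4;
// thiserror's #[from] attribute generates them for you.
impl From<E1> for E4 {
    fn from(e: E1) -> Self { E4::E1(e) }
}
impl From<E2> for E4 {
    fn from(e: E2) -> Self { E4::E2(e) }
}

fn f1(fail: bool) -> Result<u32, E1> {
    if fail { Err(E1) } else { Ok(1) }
}

fn f2(fail: bool) -> Result<u32, E2> {
    if fail { Err(E2) } else { Ok(2) }
}

// `?` converts each callee's error into E4 automatically.
fn f4(fail_f2: bool) -> Result<u32, E4> {
    Ok(f1(false)? + f2(fail_f2)?)
}

fn main() {
    println!("{:?}", f4(true)); // Err(E2(E2))
}
```

Multiply this by E5, E6, and every new leaf error and the maintenance cost becomes clear.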
Since this is such a massive pain, Rust programs tend to instead either define a single error enum type that has all possible errors in the crate, or just use opaque errors like the anyhow crate. The downside is that these approaches lose type information: you no longer know that a function can't return some specific error (unless it returns no errors at all, which is rare), which is ultimately not so different from those languages where you have to guard against bizarre runtime errors.
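For comparison, the opaque style can be sketched with only the standard library, with `Box<dyn Error>` playing the role of `anyhow::Error` (function and type names are hypothetical):

```rust
use std::error::Error;
use std::fmt;

#[derive(Debug)]
struct ParseFailed(String);

impl fmt::Display for ParseFailed {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "could not parse flag: {:?}", self.0)
    }
}
impl Error for ParseFailed {}

// Every fallible function returns the same boxed trait object, so `?`
// always composes -- but the signature no longer tells you *which*
// errors can come out, only that something might.
fn parse_flag(s: &str) -> Result<bool, Box<dyn Error>> {
    match s {
        "on" => Ok(true),
        "off" => Ok(false),
        other => Err(Box::new(ParseFailed(other.to_string()))),
    }
}

fn main() {
    println!("{:?}", parse_flag("on")); // Ok(true)
    println!("{}", parse_flag("maybe").unwrap_err()); // could not parse flag: "maybe"
}
```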
Worse yet, if f1 has to be changed such that it returns 2 new errors, then you need to go through all error types in the call stack and flatten the new errors manually into E4, E5 and E6. If you don't flatten errors, then you end up rebuilding the call stack in error types, which is a whole different can of worms.
Algebraic effects just handle all of this more conveniently. That said, an effect system like Koka's isn't viable in a systems programming language like Rust, because optimizing user-defined effects is difficult. But you could have a special compiler-blessed effect for exceptions; algebraic checked exceptions, so to speak. Rust already does this with async.
Serde is maintained by dtolnay, who is a very influential figure in Rust, mainly through his library work. Serde, syn, anyhow, etc. end up being pulled in as dependencies by nearly every Rust crate. If his account were compromised, the attack surface would be essentially every other Rust crate... not ideal.
That locks users into an ecosystem that may never evolve, which can be fine but doesn't really solve one of the core issues the author was describing. It forces the ecosystem to depend on the oldest and most incumbent crates, rather than newer ones which might be better in some ways.