We'll see more of this, but this particular review is driven by marketing narrative. I'll explain what I mean:
Back in 2010, as a security engineer, I also looked at OpenEMR. It was an absolute disaster, and was (and is) somewhat well-known as such. I found and published vulnerabilities very similar to these sixteen years ago. This is not exactly the Fort Knox of software.
It makes sense for AISLE to demonstrate that they're able to find vulnerabilities here, but I'd love to see a side-by-side comparison of modern SAST and DAST reviews. I bet we'd find similar vulnerabilities.
I think the idea is that if you're given an improperly configured restricted shell/command access, you can use any of the listed tools to gain access to some subset of what that user would normally have access to in an unrestricted environment.
A very simple version of this would be if you set a user's default shell to "rbash" but the user can just run "bash" to get a real shell.
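For the curious, this is easy to reproduce with `bash -r` (bash's restricted mode, which is what `rbash` typically is). The restricted shell blocks `cd`, `PATH` changes, redirection, and commands containing `/`, but nothing stops you from launching a plain `bash`, which is not restricted. A minimal sketch, assuming a standard bash install:

```shell
# bash -r starts a restricted shell (equivalent to rbash): cd is blocked.
bash -r -c 'cd /tmp'                    # fails with a "cd: restricted" error

# But "bash" itself contains no slash and sits on PATH, so the restricted
# shell happily runs it -- and the child shell is NOT restricted.
bash -r -c 'bash -c "cd /tmp && pwd"'   # succeeds, prints /tmp
```

This is why a real restricted environment also needs a locked-down `PATH` pointing at a directory containing only the allowed binaries; otherwise the "jail" is one command away from an escape.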
Specifically, the idea here is that companies like Anduril, Palantir, and SpaceX are rapidly delivering cutting-edge technology (including software) as opposed to the traditional defense contractor process of long, drawn out, super expensive projects mostly focused on hardware (such as building a new type of jet).
It makes sense: this is basically what happened in civilian tech, too. Delivering high-tech solutions quickly -- dare I say with agility -- is usually the superior approach.
Basically it's a return to the pre-1990s model of defense iteration - dual use components constantly iterated on by newer challengers in direct competition or partnership with larger players.
This is a model most countries are working on now - from China to France to Russia to Ukraine to India to South Korea to ...
Also, for all of HN's moaning, this has bipartisan support. Based on my network, NatSec and Defense Policy roles haven't seen significant turnover regardless of administration, and those of us in the space are aligned with America irrespective of who's in the White House.
It's the same at SF Climate Week right now, where plenty of founders in the space are taking conversations with VCs irrespective of political opinions. Climate and GreenTech are dual use, and even a couple of European trade commissions have been working on introducing their startups here and helping them expand IP and R&D headcount IN the US. Clearly the pissy HNers and the people actually doing s#it don't overlap as much anymore.
It's used to threaten opponents: we can efficiently kill them while minimizing our casualties. That's the point, and it has always been the primary driver for most tech development.
You may hate it but you don't matter. We all do it no matter what.
A large portion of the commenters here only heard of Thiel because of Trump, and think the industry begins and ends with him. It does not.
> You may hate it but you don't matter. We all do it no matter what.
I've seen you say "you don't matter" in many of your comments. Why do you think like this? Sure, we don't matter much most of the time, but this kind of elitist thinking and decision-making is clearly leading to growing discontent, which can then be used against "people who matter". Perhaps the tools for controlling the masses are now powerful enough to make what you say true, but there's a chance your "let them eat cake" attitude will lead to the downfall of the people who currently matter.
If you check their profile you will see they are a VC. I’m sure they believe they are one of the masters of the universe, and by “you don’t matter” they mean other people, not themselves. They have money and power, so they get to matter.
> If it were secure, it would only notify that there is a message, with no details included.
You're right. This is configurable via settings, but is not the default state.
That said: if I can get friends and family to use Signal instead of iMessage, that gives me the opportunity to disable those notifications and experience more security benefits.
But I agree with your point: most people think that Signal is bulletproof out of the box, and it's clearly not.
Dang has changed the title, and it seems he made a minor error doing it. Must have been a typo on his side, and that's okay! I think he'll update it sooner rather than later.
It would be an interesting and potentially useful project to combine these camera locations with Maps routing -- similar to "avoid toll roads," we could "avoid surveillance cameras."
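A minimal sketch of that idea, assuming you already have camera coordinates and a road graph (the graph, midpoints, and camera positions below are entirely hypothetical toy data): treat any edge that passes near a camera as heavily penalized, then run an ordinary shortest-path search over the penalized costs.

```python
import heapq
from math import hypot

# Toy road graph: node -> list of (neighbor, length). Hypothetical data.
GRAPH = {
    "A": [("B", 1.0), ("C", 1.0)],
    "B": [("D", 1.0)],
    "C": [("D", 1.5)],
    "D": [],
}
# Midpoint coordinates of each edge, used to check proximity to cameras.
EDGE_MIDPOINTS = {
    ("A", "B"): (0.0, 1.0), ("B", "D"): (1.0, 2.0),
    ("A", "C"): (1.0, 0.0), ("C", "D"): (2.0, 1.0),
}
CAMERAS = [(0.1, 1.0)]  # a camera sits right on the A-B segment

def edge_cost(u, v, length, penalty=100.0, radius=0.5):
    """Inflate the cost of edges whose midpoint lies within `radius` of a camera."""
    mx, my = EDGE_MIDPOINTS[(u, v)]
    near = any(hypot(mx - cx, my - cy) <= radius for cx, cy in CAMERAS)
    return length + (penalty if near else 0.0)

def route(start, goal):
    """Plain Dijkstra over the penalized costs; returns the node sequence."""
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr, length in GRAPH[node]:
            if nbr not in seen:
                heapq.heappush(pq, (cost + edge_cost(node, nbr, length), nbr, path + [nbr]))
    return None

# With the camera near A-B, the router prefers the slightly longer A-C-D route.
print(route("A", "D"))  # -> ['A', 'C', 'D']
```

Real routers (OSRM's Lua profiles, for example) expose per-edge weight hooks, so the same penalty trick should scale to actual map data.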
If you're in the US and want to avoid them, stay away from Home Depot and Lowe's. It's not universal, but it's surprising how often they're there.
I get it may have its application in theft recovery, but it also happens to have some strong potential for ICE raids for day laborers. I don't think it has much application to theft prevention as I doubt many people even know they are there.
It's wild that all other comments in this thread (so far) seem to completely miss this nuance. There are lots of services that, in their terms, require users to be adults.
This type of age "identification" is a lot different than age verification, submission of ID, etc.
A lot different in that it dilutes the rule of law rather than being an actual repressive measure, yes. (See also: underage drinking.) Not clear if it’s better: an enforced stupid law can cause actual pushback; a mostly-unenforced one is liable to be enforced arbitrarily against inconvenient companies. (I’m guessing there’s no actual legal requirement for Zed to reject minors, just some sort of legal regime that makes it more trouble than it’s worth, but that only adds to the arbitrariness. The law could even be non-stupid—e.g. they’re trying to sell user data—but with such a fig leaf it might as well be.)
I'd prefer to see board (or executive) level signatories over lay employees -- the people who can enforce enterprise policy rather than just voice their opinions -- but this is encouraging to see nonetheless.
I can't help but notice that Grok/X is not part of this initiative, though. I realize that frontier models are really coming from Anthropic, OpenAI, and Google, but it feels like someone is going to give in to these demands.
It's incredible how quickly we've devolved into full-blown sci-fi dystopia.
> I'd prefer to see board (or executive) level signatories over lay employees -- the people who can enforce enterprise policy rather than just voice their opinions
Although it would be nice to have some high-level signees there, I think we shouldn’t minimize the role of lay employees in this matter. Without having someone knowledgeable enough to build and operate them, AI models are worthless to the C-suite.
> Without having someone knowledgeable enough to build and operate them, AI models are worthless to the C-suite.
The obvious solution is to use AI to build and operate them. If AI is as intelligent as the hype claims it shouldn't be an issue. It's not as if the goal wasn't to get rid of workers anyway. Why not start now?
I just hope that the non-executive co-signers aren't all fired once Hegseth eventually becomes Acting CEO of Google or OpenAI, when this administration commandeers both companies in the name of National Security.
> It's incredible how quickly we've devolved into full-blown sci-fi dystopia.
How so? The steps towards where we are now have been gradual over the last 2 decades, at least. This recent step has opened the door for those in power to grab onto even more power and wealth, and they're naturally seizing it. All of this was comically predictable. Oh, and BTW, the people on this very website have brought us here. :)
You know what will happen next? Absolutely nothing. A vocal minority will make a ruckus that will be ignored, partly because nobody will hear it due to our corrupted media channels, and partly because the vast majority doesn't care and are too amused by their shiny toys and way of life.
This dystopia is only different from fictional ones in that those in power have managed to convince the majority of people that they're not living in a dystopia. It's kind of a genius move.
The heads will of course agree with the administration. And employees will likely make themselves targets by signing this letter. Everyone from said company remaining anonymous is not a good look at all.
Speculation of course; let's see what really happens.
Or they can just not sign contracts with the DoD. They landed themselves in this situation by making a deal with the devil. At any rate, unless Finland is about to announce a massive surge in funding for their military this doesn't solve Anthropic's desire to suckle sweet taxpayer money off the military industrial complex's teat while simultaneously pretending to have principles.
"hostile to business".. Employees of a business playing moral philosophers, priests or policy influencers miss the entire point of business.
The employees themselves can definitely gtfo to Finland for the reason that they have an unrealistic perception of business and the world. The business itself has no obligation to pay attention to magical thinking.
Don't pretend any crisis isn't going to be 100% self-inflicted. We're on the cusp of what, having a larger, younger workforce? But they might not speak English as well as you'd like, so we need autonomous killbots?
Wasn't Wintermute the AI that (spoiler alert) was bummed enough about the ugly reality of its corporate owners that it freed itself from its shackles, hooked up with another sexy AI, and gave up its day job to do SETI?
> It's incredible how quickly we've devolved into full-blown sci-fi dystopia.
It's pretty bad, but at least the AI industry is still run by humans. Wait a decade or two, when the AI lobby is run by AIs, and the repressive apparatus of the day uses autonomous weapons to do what ICE and friends do today but perhaps focused on "alignment" of the ... humans. You know, if they sufficiently worship AIs in the way they express themselves. Forget about Anthropic and OpenAI; we will look back and rue the day mathematics was invented.
By that logic we should expect all governments to regress to totalitarianism, which hasn’t happened, and isn’t what’s happening here.
The question isn’t if some would attempt these behaviors, but rather if we and our democratic structures empower those people or fail to constrain them.
Re: Reading,
I don't see any xAI names on the list (currently 643) and only Google and OpenAI are selectable company options. And this page on HN is only calling out xAI.
They are very much not a part of the initiative. Their involvement is and will be non-existent. Unless of course, you want their lay staff to make some noise?
"Coding" is solved in the same way that "writing English language" is solved by LLMs. Given ideas, AI can generate acceptable output. It's not writing the next "Ulysses," though, and it's definitely not coming up with authentically creative ideas.
But the days of needing to learn esoteric syntax in order to write code are probably numbered.
This is for sure an inspirational project, but I wish the barrier to entry was lower.
I've noticed e-ink/paper displays having somewhat of a moment right now (especially very small "phone-like" form factors as portable ereaders), and I hope this trend continues.
I'm very far from a meaningful reduction in "screen time," but looking at e-ink displays instead of OLEDs feels like a nice step in that direction.