I have an open source project and started receiving a lot of security vulnerability reports in the last few months. A lot of them are extremely corner cases, but there were some legit ones. They're all fixed now. Closed source software won't receive any reports, but it will be exploited with AI. So I definitely agree with the message of this article.
As someone who has worked on closed source software for a couple of decades: most companies won't even know about that, and of those who do, only a fraction give enough of a shit to do anything until they are caught with their pants down.
Having worked in quite a few agency/consultancy situations, it is far more productive to smash your head against a wall until it bleeds than to get a client to pay for security. The regular answer: "This is table stakes, we pay you for this." Combined with: "Why has velocity gone down? We don't pay you for that security or documentation crap."
There are unexploited security holes in enterprise software you can drive a boring machine through. There is a well paid "security" (aka employee surveillance) company using Python 2.7 (no, not patched) on each and every machine their software runs on. At some of the biggest companies in this world. They just don't care about updating it, because why should they? There is no incentive. None.
Yeah, it's fundamentally an issue of asymmetric economics.
Running AI scanners internally costs money, dev time, and management buy-in to actually fix the mountain of tech debt the scanners uncover. As you said, there is no incentive for that.
But for bad actors, the cost of pointing an LLM at an exposed endpoint or a reverse-engineered binary has dropped to near zero. The attacker's tooling just got exponentially cheaper and faster, while the enterprise defender's budget remained at zero.
In theory, though, there is now a new way for the community to support open source: running vulnerability scans in white-hat mode, reporting and patching. That way they burn tokens for a project they love, even if they couldn't actually contribute code before.
There should be a way to donate your unused tokens every cycle to open source, like rounding up at the checkout!
That sounds like a great idea. I'd love to be able to contribute the remainder of my monthly AI subscriptions for something like this, especially since some of them bill and refresh their quotas by calendar month.
Hang on, why is it costly for in-house to run AI scanners but near zero for threat actors to do the same?
I've seen multiple proprietary places now including a routine AI scan of their code, because it's so cheap and they may as well use up unused tokens at the end of the week.
I mean, it's literally zero because they already paid for CC for every developer. You can't get cheaper than that.
As I mentioned above, we actually do run these AI scanners on our code, but the problem is it's simply not enough. These AI scanners, including STRIX, don't find everything. Each scanning tool actually finds different results from the other, and so it's impossible to determine a benchmark of what's secure and what's not.
I think it makes it all the more apparent that writing EAL4 code with as little design competence as possible was taking advantage of some strange scarcity economics. It's now even easier to build something with endless technical debt and security-vs-backwards-compatibility liability, but will anyone keep paying for things that aren't correct and to the point, once some market participants steer their agent usage toward verifiable quality and no longer pay more for it?
The point being that there are always going to be more eyes, and more knowledge of available tools (i.e. including "D, E and F"), and more experience using them, with open source than with a single in-house dev team.
If true then logically it will be sufficient to run this "master model" once before any code release for the level playing field to be restored. After all, even open-source software is private until it is released.
Because they're a company. Even if the bar to entry is low enough for a normal-sized American to step over, that doesn't mean they will do it, or do it in a systematic way. We know very well that nothing about AI is naturally systematic, so why would you assume it'll happen systematically?
> Closed source software won't receive any reports
Not from the automated repo scanners, but bug bounty programs can generate a lot of reports in my experience. AI tools are becoming a problem there, too, because amateurs are drawn to the bounties and will submit anything the AI hallucinates.
Closed source companies can (and should!) also run their own security audits rather than passively waiting for volunteers to spend their tokens on it.
Those bug bounty programs now have to compete against the market for 0-days. I suppose they always did, but it seems the economics have shifted in favour of the bad actors - at least from my uninformed standpoint.
That still exists in the OSS world too, having your code out there is no panacea. I think we'll see a real swarm of security issues across the board, but I would expect the OSS world to fare better (perhaps after a painful period).
Of course everyone should do their own due diligence, but my point is mostly that open source will have many more eyes and more effort put into it, both by owners, but also community.
That's absolutely our plan. We have bug bounty programs, we have internal AI scanners, we have manual penetration testing, and a number of other things that enable us to push really hard to find this stuff internally rather than relying on either the good people in the open source community or hackers to find our vulnerabilities.
+1, at this point all companies need to be continuously testing their whole stack. The dumb scanners are a thing of the past; the second your site goes live it will get slammed by the latest AI hackers.
> Not from the automated repo scanners, but bug bounty programs can generate a lot of reports in my experience. AI tools are becoming a problem there, too, because amateurs are drawn to the bounties and will submit anything the AI hallucinates
You don't even need a bug bounty program. In my experience there's an army of individuals running low-quality security tools, spamming every endpoint they can think of (webmaster@, support@, contact@, gdpr@, etc.) with silly non-vulnerabilities and asking for $100. They suck now, but they will get more sophisticated over time.
I don't follow. It seems obvious that there's more to gain for attackers using AI agents to exploit open source repositories, than there is for good samaritan defenders. In this new closed-source world (for Cal.com), there's nothing stopping them from running their own internal security agent audits, all whilst at least blocking the easiest method of finding zero-days - that is, being open source.
This really just seems like Strix marketing. Which is totally fair, but let's be reasonable here, any open-source business stands to lose way more by continuing to be open-source vs. relying on the benevolence of people scanning their code for them.
> It seems obvious that there's more to gain for attackers using AI agents to exploit open source repositories, than there is for good samaritan defenders.
Actually the opposite is obvious - the comment you replied to talked about an abundance of good Samaritan reports - it's strange to speculate on some nebulous "gain" when responding to facts about more than enough reports concerning open source code.
> In this new closed-source world (for Cal.com), there's nothing stopping them from running their own internal security agent audits
That's one good Samaritan for a closed source app vs many for an open source one. Open source wins again.
> any open-source business stands to lose way more
That doesn't make any sense - why would it lose more when it has many more good Samaritans working for it for free?
You seem to forget that the number of vulnerabilities in a given app is finite; an open source app will reach a secure state much faster than a closed source one, in addition to gaining from a shorter time to market.
In fact, open source will soon be much better and more capable due to new and developing technological and organizational advancements which are next to impossible to happen under a closed source regime.
The main drawback is that you will need to be able to patch quickly in the next 3-5 years. We are already seeing a few solutions getting attention from various AI-driven security efforts, and our previous stance of letting fixes "ripen" on the shelf for a while - a minor version or two - is most likely turning problematic. Especially if attackers start exploiting faster and botnets start picking up vulnerabilities faster.
But at that point, "fighting fire with fire" is still a good option. Assuming tokens are available, we could just dump the entire code base, changesets and all, the configuration that depends on the code base, company-internal domain knowledge, and previous upgrade failures into a folder and tell the AI to figure out upgrade risks. Bonus points if you have decent integration tests or test setups to back all of that up.
It won't be perfect, but combine that with a good tiered rollout and increasing rollout velocity is entirely possible.
It's kinda funny to me: a lot of the agentic hype seems to hugely reward good practices - cooperation, documentation, unit testing, integration testing, local test setups.
Some users might be tech-savvy and have the capacity to check the codebase.
If a company wants to use your platform, it can run an audit with its own staff.
These are people genuinely concerned about the code, not "good samaritans".
A new user is much more likely to scan the codebase and report vulnerabilities so they can be fixed than to illegally exploit them, since most people aren't criminals.
I’ve recently set up nightly automated pentest for my open-source project. I’m considering starting to publish these reports as proof of security posture.
If the cost of security audit becomes marginal, it would seem reasonable to expect projects to publish results of such audits frequently.
There’s probably a quite hefty backlog of medium- and low-severity issues in existing projects for maintainers to suffer through first though.
> Closed source software won't receive any reports, but it will be exploited with AI.
This is what worries me about companies sleeping on using AI to, at a bare minimum, run routine code audits and security evaluations. I suspect as models get better we're going to see companies hacked at a level never seen before.
Right now we've seen a few different maintainers of open source packages get hacked. Who knows how many companies have someone infiltrating their internal systems with the help of AI, because nobody wants to do the due diligence of having a company run security audits on their systems.
We actually run AI scanners on our code internally, so we get the benefit of security through obscurity while also layering on AI vulnerability scanning, manual human penetration testing, and a huge array of other defence mechanisms.
"Security through obscurity" is a term popularized precisely because of the long-standing consensus among security researchers (and any expert not being paid to say otherwise) that it is a bad idea that doesn't work.
absolutely agree with you if we're talking about clean room reverse engineering; but in the context of finding vulnerabilities it's a completely different story
I mean, to an LLM is there really any difference between the actual source and disassembled source? Informative names and comments probably help it too, but it's not clear that they're necessary.
Which models have you had good luck with when working with ghidra?
I analyze crash dumps for a Windows application. I haven't had much luck using Claude, OpenAI, or Google models when working with WinDbg. None of the models are very good at assembly and don't seem to be able to remember the details of different calling conventions or even how some of the registers are typically used. They are all pretty good at helping me navigate WinDbg though.
Assembly is still source code, so really it comes down to whether the copy protection obscures the executable code to the point where the LLM is not able to retrieve it on its own. And if it can't, someone motivated could give it the extra help it needs to start tracing how outside input gets handled by the application.
Yes exactly! I'm so glad I took this route with my startup. We can't bury our heads in the sand and think the vulnerabilities don't exist just because we don't know about them.
> Closed source software won't receive any reports, but it will be exploited with AI
How so? AI won't have access to the source code. In some cases AI may have access to deployed binaries (if your business deploys binaries), but I am not aware that it has the same capabilities against compiled code as against source code.
But in a SaaS world, all AI has access to is your API. It might still be up to no good, but surely you will be several orders of magnitude less exposed than with access to the source code.
Claude is already shockingly good at reverse engineering. Try it – it's really a step change. It has infinite patience which was always the limited resource in decompiling/deobfuscating most software.
It's SaaS though. You don't have access to the binary to decompile. There's only so much you can reverse-engineer through public URLs and APIs, especially if the SaaS uses any form of automatic detection of bot traffic.
Thank you. This is what the parent post was trying to say. I don't know why it is down-voted. AI or not, if the API endpoints are well secured - for example, using UUIDv7 - then there is little the AI can gain from these endpoints alone.
The opposite is true. Open source barely matters to attackers, especially attacks that can be automated. It mostly enables more people (or agents, or people with agents) to notice and fix your vulnerabilities. Secrecy and other asymmetries in the information landscape disproportionately benefit attackers, and the oft-repeated corporate claim that proprietary software is more secure is summarily discounted by most cybersecurity professionals, whether in industry or academic research. Security is also seldom the real motivation for making products proprietary, but it's more PR-friendly to claim that closing your source code is for security reasons than to say it's for competitive advantage or control over your customers.
Most or all existing solutions are universal (not just reverse geocoding) and rely on a database. The purpose of this project is to make one thing super fast. The result is 100x-1000x the speed of Pelias and other universal tools like that.
Self-hosted reverse geocoder with sub-millisecond query latency. C++ builder parses OSM PBF files into a compact binary index using S2 geometry cells. Rust server memory-maps the index and serves a Nominatim-compatible API. Docker support with automatic HTTPS.
It took about 8-10 hours for me on a 192GB Hetzner cloud machine. The resulting index was just 18GB, so once the index is created it's really efficient and you can easily run it on a small VM.
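For readers curious how the lookup side can be sub-millisecond: S2 maps nearby points to numerically nearby 64-bit cell ids along a space-filling curve, so the index can be a sorted array of fixed-size records that the server memory-maps and binary-searches. A toy sketch of that idea in Python; the record layout here is invented for illustration, not the project's actual format:

```python
import struct
from bisect import bisect_left

# Hypothetical layout: sorted fixed-size records of
# (uint64 S2 cell id, uint32 offset into a name blob).
RECORD = struct.Struct("<QI")

def build_index(entries):
    """entries: iterable of (cell_id, place_name) -> (index, names) bytes."""
    index, names = bytearray(), bytearray()
    for cell_id, name in sorted(entries):
        index += RECORD.pack(cell_id, len(names))
        names += name.encode() + b"\0"
    return bytes(index), bytes(names)

def nearest_name(index, names, query_id):
    """Find the record whose cell id is numerically closest to query_id.

    Because the S2 curve keeps nearby points numerically close, the
    closest id is a good first candidate; a real server would re-check
    true distance for a few neighbours on either side.
    """
    ids = [RECORD.unpack_from(index, i * RECORD.size)[0]
           for i in range(len(index) // RECORD.size)]
    i = bisect_left(ids, query_id)
    # Pick whichever neighbour of the insertion point is closer.
    best = min((j for j in (i - 1, i) if 0 <= j < len(ids)),
               key=lambda j: abs(ids[j] - query_id))
    off = RECORD.unpack_from(index, best * RECORD.size)[1]
    return names[off:names.index(b"\0", off)].decode()
```

A real implementation would bisect directly on the mmap'd buffer instead of materializing the id list, which is what keeps the hot path allocation-free.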
Russia has been slowly cracking down on popular communication and media platforms. First they slow down connection to unusable speeds. This happened to YouTube at some point last year. At first they even said that it's something wrong with Google and it's not them. I think the intention is to slowly get people off the platform without completely blocking it. Then eventually they block access completely. Same happened to messaging apps, like WhatsApp and Telegram. Telegram is still working for messaging, but not calls. It's kind of funny because Telegram is used by Russian military to coordinate a lot of things, so they complain a lot about the block.
I have family in Russia and it's a sad state of affairs. Our ability to communicate with them is slowly degrading to the point where now I am looking into self-hosted communications.
To my surprise, even sophisticated means of traffic masking like Amnezia and Xray get disrupted frequently, requiring hopping between self-hosted solutions and updating one's setup periodically. That's waaay beyond what most people are capable of. I am fortunate to have a tech-worker acquaintance who lives next to my family members; otherwise there'd be no way for me to, for example, guide them through setup and re-configuration remotely. Still, this setup gets disrupted every month or so, requiring manual intervention.
Try to get a middle hop somewhere in a Russian datacenter. Sometimes these have DPI censorship boxes disabled (?). I know one that lets me forward plain WireGuard from mobile routers to an EU server with a few SNAT/DNAT rules, even though ordinarily that would get blocked on sight.
(Sadly, it's just Mikrotik gear that can't use any fancy censorship evasion protocols).
I would say they are trying to block every public VPN, and if some VPN tries to hide behind CloudFlare, thinking it has taken all CloudFlare-hosted sites hostage, then the whole of CloudFlare gets nuked, and the hostages don't save the VPN from being blocked.
I have a similar situation and Amnezia (either in WG mode or Xray mode) works well with a self-hosted server. Also SSH tunnel as proxy so far also works.
I'm considering even creating a dial-up (yes, V.34 modem!) line somewhere near Russia, to offer a side channel with text browsing, news, IRC and email. For when things get really, really bad (they will...).
Before you ask: yes, dial-up works on modern networks if the codec is G.711 (uncompressed). Most of the public phone network works this way because fax is still a thing, but some bulk carriers and some enterprises use compressed codecs.
Nationalistic flamewar is not allowed on Hacker News, regardless of nation. Personal attacks aren't allowed either. We ban accounts that post like this, so please don't.
I'm sure you have good reason to feel the way you do, but please, no more of this here.
Edit: you've unfortunately been breaking the site guidelines in other places as well, and we've already warned you once. If you'd please review the guidelines and stick to them when posting here, we'd appreciate it.
I feel your take will be taken down soon as it’s not the place for such discussion.
Just food for thought: what makes you feel so entitled to judge where people live? You know we don't choose where we're born, and our mobility is often heavily restricted? Does your separation of men and women come from religion, or some other bias for categorizing them?
This is the literal job of state security. Besides, most of those $#@$###s get in not by pretending they've been persecuted, but simply because they already have deep cover and hold passports of other countries.
And I sincerely hope that you will never have to know what it's like to flee your own country, first hand. Peace.
Oh look at you, how easy it is for you to demand that others put themselves in danger before you deem them worthy of protection. "You must have proof you've been arrested by Putin's police before we let you in here!" So they must risk immediate imprisonment in a freezing Siberian dungeon before you open your generous doors...
And graduates working for the Belarusian state - why is that acceptable and not considered "conspiring with the war criminals" in your eyes? What other barriers are there in your mind that we don't know about, for someone who worked as expected and fled the country afterwards?
Fuck you. I know many people here (many of them queer) who had to leave everything and become forced emigres or asylum seekers.
Have a bit of compassion, would ya?
My childhood crush is in Ukraine (I mean, he's Ukrainian), my dear friend (Ukrainian) had to leave everything and seek refuge in NL. My friends (Russian) are under a constant threat of getting imprisoned for 10+ years because they still help support queer and trans people in Russia.
Compared to them I feel very privileged, because I was able to GTFO on my own. But if you think that all Russian citizens must be deported, you're either a troll or a madman. Besides, this is exactly what, for example, Stalin did to the Chechens, or think about what the USA did to Japanese Americans.
Did it help someone? No. Did it ruin millions of lives? You fucking guess.
upd: made it all clearer, and sorry for all the profanity
>It's kind of funny because Telegram is used by Russian military to coordinate a lot of things, so they complain a lot about the block.
If that's true, then it was really stupid of them to allow things to get to that point. Look at the US -- they had no tolerance for a major social media app (TikTok) to be outside their own control, and they weren't even in a major war at the time. It seems obvious that if you ARE in a major war, you wouldn't want your main social media and messaging app to be under the control of somebody (Pavel Durov) who was recently arrested by a member (France) of the military alliance you're fighting against (NATO), when it is unclear what deal he may have made with that government to be released from prison. It seems obvious to suspect that the price of his freedom may have been a backdoor that allows the opposing military to read all the messages your own people are sending.
Russia's real failure is that, unlike the US, it has been systematically unable to keep its own top tech talent supportive of its government. The top US tech companies have been only too eager to do almost anything their government asks of them, with only some rare and tepid pushback (such as that by Anthropic recently), which seems to get severely punished when it does happen. So there has been no need for the US government to go to the lengths that Russia is going to now, simply because it was able to co-opt its top talent into working for and with the state (with some rare exceptions like Snowden, and I'd say the "damage" from that has been pretty successfully contained).
The Chinese government may have had some issues with that as well, considering what happened with Jack Ma (though I don't know much about it).
> unable to keep its own top tech talent supportive of their own government
The government did much to turn them away. And with regard to the Makh messenger: patriotic tech talent is supposed to be interested in the Elbrus 2000 PC and the Aurora mobile OS. Does the Makh messenger work on anything Russian? No, Makh does not work on anything Russian. So what makes Makh Russian? We don't get it. It's some other Russia that we don't belong to. Our Russia is the Elbrus 2000 PC and the Aurora mobile OS. And software from the Astra group. The people behind Elbrus 2000 support Orthodox Christianity, and the people behind the Astra group are for the great Soviet past. They call one of the Astra Linux releases "Leningrad", the proper name of what is currently known as "Saint-Petersburg".
Makh is from a commercial group that does not care about our values, and virtually openly violates traditional values. They are from the VK group, and VK hosts VKFest, an open-air festival for youth with rotten words, fornication songs, all that stuff.
Our Russia and their Russia don't mix, like water and oil.
For the military there is another communication network called Свод (Svod, "Arch"). It was 4 years late to the party, but at least it's rolling out now.
> It's kind of funny because Telegram is used by Russian military to coordinate a lot of things, so they complain a lot about the block.
This plus the Starlink cutoff blinded them so badly that Ukraine was able to counterattack and retake a bit of area north of Huliaipole with armored vehicles (which normally attract an immediate drone response these days). Last I checked, operations are still ongoing, so it'll be a bit before we know the extent of what they were able to do.
That might satisfy message-privacy and connectivity, but it seems it'd be vulnerable when it comes to identity-privacy and detection.
I suppose you could use an LLM on each end to write superficially plausible messages and use steganography, although then there's still the problem of "Weird, this user types at 500WPM without sleeping."
YouTube is easily accelerated via DPI bypass. It used to work with CloudFlare too, but not anymore, so CloudFlare is blocked harder than YouTube. With DPI bypass, YouTube is very fast.
While 7700 per hour sounds big, pretty much any dinky server can handle it. So I don't think it's a matter of DDoS. At this point it's just... odd behaviour.
Especially for a txt file. I don't know much about webdev, but serving 7700 plaintext files of roughly 10 lines each per hour (barely two requests a second) isn't that demanding.
I think it's a really poor argument that AGI won't happen because the model doesn't understand the physical world. That can be trained the same way as everything else.
I think the biggest issue we currently have is with proper memory. But even that is because it's not feasible to post-train an individual model on its experiences at scale, not because of a fundamental architectural limitation.
When people move the goalposts for AGI toward physical embodiment, they are usually doing it so they can continue to raise funding rounds at higher valuations. Not saying the author is doing that.
I recommend checking the history of deregulation of the agricultural industry in New Zealand. It didn't lose the industry; actually, the opposite happened.
Persistent government subsidies are almost never a good idea long term. I understand that some temporary support might make sense in some cases, but not permanent support. It prevents innovation and optimization, and in the long run it usually does more damage.
Having been in the NZ ag-tech industry for the last 25+ years: US subsidies and tariffs drove a lot of innovation in NZ (also Europe), and then US manufacturers in the spaces I've been in pretty much collapsed when faced with better tech, as farmers switched to using our (or the European) products.
Meat-cutting (and packaging) robotics and dairy automation are the flashy ones. Softer tech like crop and orchard management, cultivar creation, stock breeding/selection, and logistics all came a long way too. So did the development of uses for byproducts, i.e. chemical refineries that turn milk into something like protein or milk powder and use the secondary products from those processes to produce alcohols or fertilizer.
It would appear that to remain competitive they had massive consolidation, and with that an increase in animal density leading to major issues with water pollution.
Downvoting without engaging in a discussion kind of directly violates both the spirit and rules around here.
I've posted pretty solid evidence that deregulation did not, in fact, improve the agricultural situation for New Zealand. It absolutely made a subset of corporations and mega-farmers extremely rich at the expense of the natural resources the rest of the country shares. I would LOVE to hear the arguments for how that's a good thing for the people of New Zealand or our planet as a whole.
But then again, that would require thoughtful discourse...
Just to expand on this idea with more historical context: part of the reason agriculture is regulated like it is in the US is because it used to be much more deregulated. And then speculation and profiteering in agriculture in the 1920s contributed to the great depression and caused the dust bowl. Then, it became a national food security issue. The New Deal is where a lot of the regulation and subsidies originate, but we didn't just do it for kicks. We have, actually, tried the alternative, and it was a disaster.
Because it goes against the urban popular groupthink: "Blue States subsidizing Red States", "NZ did it, so the US can too."
Present any real or partial evidence that this isn't the whole story, and it's still difficult to change someone's mind on something fundamental to their beliefs. So they downvote and move on to the next post that validates their beliefs. Happens to everyone, including me.