@janoelze -- that was my thought too, though less so that they wouldn't share a claim of not being notified at all with a third party, and more that those kinds of things need to go through legal/comms/etc., not whoever runs the security mailbox. If the person running the mailbox is not the CISO, surely they at least need the CISO's approval to say anything beyond a thank-you or follow-up questions? (And if they are the CISO, then they have bigger things to worry about than replying...)
(Technically, I guess that doesn't prove anything other than that it's in my Sent folder? It has a message ID, but I guess only the Purelymail admin could confirm that.)
In any event, this should never have required an outside reminder. The indexing issue may be something non-obvious, but the core decision not to use signed/expiring URLs is nothing less than good old security by obscurity.
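For context on why this is table stakes: a signed/expiring URL can be as simple as an HMAC over the path plus an expiry timestamp. This is a minimal sketch, not Fiverr's or AWS's actual scheme (S3 presigned URLs use the more involved Signature Version 4 process); `SECRET`, `sign_url`, and `verify_url` are illustrative names:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical key; never leaves the server

def sign_url(path: str, ttl: int = 300) -> str:
    """Return the path with an expiry and an HMAC signature appended."""
    expires = int(time.time()) + ttl
    msg = f"{path}?expires={expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&sig={sig}"

def verify_url(path: str, expires: int, sig: str) -> bool:
    """Reject expired links and links whose signature doesn't match."""
    if time.time() > expires:
        return False  # link has expired
    msg = f"{path}?expires={expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)  # constant-time comparison
```

With something like this in place, a leaked or indexed URL stops working after the TTL, and guessing valid URLs requires forging the HMAC rather than just enumerating paths.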
I've contacted Fiverr before about obvious fraud being conducted through their platform, and they just sent me in endless loops of "open a ticket." "No, email us about it." "No, email us at our security contact about it." Crickets, and then a response saying to please open a ticket.
Basically, they aren't set up for anyone to actually contact them and expect a resolution.
They may be part of it, but as a publicly traded company, there have got to be at least a few people there with a fancy pedigree (not that that actually means they're good at their jobs or care). But if such a test existed, they presumably would have passed it.
They also have an ISO 27001 certificate (they try to claim a bunch of AWS's certs by proxy on their security page, which is ironic given they say AWS stores most of their data while apparently all uploads live on this).
A while ago, a customer came to me who had a simple Shopify site and had fallen for a phishing-type attack: someone with an email like "shopify_security at gmail" kept telling her she needed to apply all kinds of changes. They laundered the payments through Fiverr.
Then they would install WordPress plugins to make the site worse and claim even more "work" was needed.
I documented the entire thing, including my own credentials, and sent it off to Fiverr. Fiverr's response was that everything was fine and there was nothing they could do about it, even though it was obvious fraud.
Google never did anything about it either, nor did Shopify.
Given how they handled a minor situation like that... I guess it shouldn't be surprising that they're asleep at the switch for a major one like this.
They probably wouldn't act immediately, as there's no way for them to enable signing without breaking their clients' sites. The only cleanup you could do without that would be having Google pull that subdomain, I guess?
(Fiverr itself uses Bugcrowd, but the program is private; you have to first email their SOC, as I did.)
Everyone is acting like this is obviously wrong, and they clearly should have communicated the change and made it visible in the exclusion settings.
However, there is a very good reason for not backing up what is in effect network-attached storage, particularly for OneDrive, as it often adds company SharePoint sites you open files from as mount points under your OneDrive folder (business OneDrive is basically a personal SharePoint site under the hood). Trying to back them up would mean downloading potentially hundreds of gigabytes of files to the desktop only to then reupload them to the backup service. That would also likely trigger data-exfiltration flags at your corporate IT.
A Dropbox/OneDrive/Drive/etc. folder is a network mount point by another name. (Many of them are implemented as FUSE mounts or an equivalent OS API, not as plain folders on disk.) It's fundamentally reasonable for software that promises to back up the local disk not to back up whatever network drives you happen to have signed in to or mounted.
Surely at least part of the issue here is that even an LLM operates at double-digit tokens per second, not to mention the extra tokens for "thinking/reasoning" mode, while a real autopilot probably has response times in the tens of milliseconds. Plus the network latency versus a local LLM.
> Who uses 11ty? NASA, CERN, the TC39 committee, W3C, Google, Microsoft, Mozilla, Apache, freeCodeCamp, to name a few.
> Imagine if Build Awesome actually reached out to people who regularly make static sites. You know, the userbases on NeoCities or MelonLand or 32-bit Cafe?
One minute you're saying large companies use the product; the next, that it was always for hobbyists and shouldn't target corporate features?
> In truth, I myself have started a business that has a near identical concept to Build Awesome. Berry House is my independent web studio
> The difference is though that my model is pay-what-you-can, or pro bono. I developed Calgary Groups for a client and charged $5/hour for my dev work.
That is not a business -- there's no profit motive. (Working for less than minimum wage, even.) Not a good benchmark for what an actual business like Font Awesome should do.
> One minute you are saying large companies use the product, the next that it was always for hobbyists and shouldn't target corporate features?
You are conflating 11ty with Build Awesome (Pro).
> That is not a business -- no profit motive.
It is most definitely a business, even if you don't think it will make a lot of money. Also, the whole point of the comparison is claiming that people will not pay that much money for Build Awesome.
Everyone is commenting that this doesn't count because they pointed it at the specific files that Mythos had already found vulnerable.
But sometimes you do know where vulnerabilities are and still don't know what they are. For example, an update may be released in beta changing part of the macOS or Windows kernel or some app before the CVE is published. If locally runnable LLMs (even at significant compute cost) can find and exploit the bug based on either the location of the changed file or the actual diff of the compiled output, we could see exploits before the update ever goes to production.
I doubt that. Someone who doesn't like reading wouldn't think of "spoiling a book" as a prank category that comes to mind, or understand it to be a serious upset rather than just slightly annoying. Also, they'd likely feel that going to a bookstore and shouting things about a book series is "cringe," or whatever you want to call it, if they aren't the type to go to a bookstore in the first place.
All it takes is one person to go "I did this," and then the others have a good troll/joke to use. It doesn't take a lot of effort, and people were more outgoing back then.
Because if you didn't already know that, like an immature, deprived, and desperate kid, being able to easily find out is really, really bad...
Plenty of lazy AI apps just throw messages into history despite the known risks of context rot and lack of compaction for long chat threads. Should a company not be held liable when something goes wrong due to lazy engineering around known concerns?
No, because that would indicate there should be some sort of regulatory standard for what does/does not constitute "lazy engineering". Creating this standard in turn creates regulatory/compliance overhead for every software engineering organization. This in turn slows everything right down and destroys the startup ethos. "Move fast and break things" is a thing for a reason. The whole point of the free market is to avoid this kind of burdensome regulation at all costs.
If customers want to buy "lazily-engineered" products, from where do you derive the authority to tell them they can't?
If airplanes used this logic, hundreds more people would likely have died over the last few decades. Accident rates are even going up because of logic like yours. Yes, planes are fine most of the time, but when the long tail involves safety concerns (that wouldn't otherwise have happened), making money off people using your product becomes unethical without mutually agreed-upon safety regulation, ideally motivated by voters instead of special interest groups.
That implies that it is already illegal to provide this information. But is it? If a human did so with intent to further a crime, it would be conspiracy; but if you were discussing it without such intent (e.g., red-teaming or creating scenarios with someone working in chemistry or law enforcement), it isn't. An AI has no intent when it answers questions, so it's not clear how that could count as conspiracy. Calling it "lazy engineering" implies there was a duty to prevent that info from being released in the first place.
Very simply, if you provide a service for money, you have a duty to ensure that service is safe. There's a reason you have to sign a waiver to jump on a trampoline, but companies are so rich that court cases have become parking tickets.
It went way beyond that. Nerve agents such as VX are heavy and linger for a long time; just having a small amount placed in any metro (while trying to stay alive yourself) means the deaths of thousands of people. I'm not even sure it's legal to mention some of the uncategorized chemical solutions it either hallucinated or figured out from related knowledge.