It is definitely not appropriate. If you break the chopsticks apart and use them correctly, your fingers will never touch the surface where there are splinters.
I always do it under the table; something I instinctively do without ever being told to. Now I wonder if I might have picked up on nonverbal cues at some point in the past. If I were someplace where chopsticks were the norm, I would probably just carry my own, as I find the disposable wooden ones very off-putting. I have to wonder if there is a rule about using your own chopsticks, though.
The exact quote is "Thanks for the submission! We have reviewed your report and validated your findings. After internally assessing your report based on factors including the complexity of successfully exploiting the vulnerability, the potential data and information exposure, as well as the systems and users that would be impacted, we have determined that they do not present a significant security risk to be eligible under our rewards structure." The funny thing is, they actually gave me $500 and a lifetime GitHub Pro for the submission.
Tangential, but that's quite interesting, I had no idea you could get GitHub Pro for life, and certainly not through something as "accessible" as bug bounties.
> an LLM can ingest unstructured data and turn it into a feed.
An LLM can try to do that, yes. But LLMs are lossy compression. RSS feeds are accurate, predictable, and follow a pre-defined structure. Using LLMs to ingest data that can easily be turned into a parseable data structure seems strange: use the LLM to do the "next part" of the formula (comprehension, decision making, etc.).
I mean that your RSS feed can basically be "Go to https://techcrunch.com/latest/ and use each non-video item as a feed item" or "Go to x.com/some_user and make each tweet a feed item", and the LLM can do a perfect extraction of links from HTML response blobs.
The only thing you have to do is ensure it can reliably fetch the response HTML. Maybe an MCP browser plus a proxy, or a mirror, to seem more human.
I built this for myself. The idea is that each feed is a url + title + a prompt to tell the LLM how to extract the links you want so that it generalizes over all websites.
And each feed item is a canonicalized url + title + a local copy of the content at that url, which is an improvement over RSS since so many RSS feeds don't even contain the content.
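The data model described above could be sketched roughly like this; the class and field names, and the tracking-parameter list used for canonicalization, are my assumptions, not the author's actual code:

```python
from dataclasses import dataclass
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Query parameters commonly stripped during URL canonicalization (assumed list).
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "ref"}

def canonicalize(url: str) -> str:
    """Lowercase the host and drop tracking query params and fragments."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc.lower(), parts.path,
                       urlencode(query), ""))

@dataclass
class Feed:
    url: str      # page to scrape, e.g. a "latest news" listing
    title: str
    prompt: str   # tells the LLM which links count as feed items

@dataclass
class FeedItem:
    url: str      # canonicalized link
    title: str
    content: str  # local copy of the page content, unlike many RSS feeds
```

Storing the content locally is what makes this richer than a typical RSS feed, at the cost of having to fetch and snapshot each linked page yourself.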
Use After Free Use After Free Use After Free Use After Free Use After Free Use After Free Use After Free.
I would be more satisfied if they gave a proper explanation of what these could have led to, rather than just "well, maybe a 0.001% chance to exploit this". They did vaguely go over how "two" exploits managed to drop a file, but how impactful is that? Dropping a file in abcd with custom contents in some folder relative to the user profile is not that impactful, other than corrupting data, poisoning a cache, or injecting some JavaScript. Now, reading session data from other sites, that I would find interesting.
You should generally assume that in a web browser any memory corruption bug can, when combined with enough other bugs and a lot of clever engineering, be turned into arbitrary code execution on your computer.
The most important bit being the difficulty: an AI finding 21 easily exploitable bugs is a lot more interesting than 21 that need all the planets to align to work.
Just wait until one of these companies demands an email from the registered email address of your account!