skipants's comments | Hacker News

Huh. I have the opposite opinion. I'm monolingual English for all intents and purposes but I gathered that opinion from quite a few sources, including:

- We had to take spelling tests in school

- English speakers make (generally light) fun of others' spelling or grammar mistakes in casual settings

- In a professional setting, a lot of time is spent proofreading our own emails

- There are de jure spellings for every word

- Some online communities are really weird about pointing out grammar and spelling mistakes (namely Reddit)

Language is meant to be a fluid, evolving thing, but I always felt like English was treated the opposite way. Maybe that's also why it's the de facto lingua franca.

I do think, and hope, that this rigidity will change thanks to AI. I've started to embrace my mistakes. I care a lot less about capitalization and punctuation in my Slack messages, for example.


A bit of a tangent, but I just want to say how, as a Canadian, I'm getting a lot of joy reading about this restaurant. It's a hilarious facsimile of a Canadian restaurant for a couple reasons:

- There's nothing Canadian about a pancake house. We love pancakes but they aren't really ingrained in our identity. Maple syrup, on the other hand, is EXTREMELY important to a lot of Canadians. Serving table syrup instead of real maple syrup is an affront. I found a Reddit thread[1] where a user espouses the "tons of free syrup" you were given at RCPH. That's NOT a good thing if you ask me!

- In Canada (and, I assume, other British Commonwealth countries) you aren't legally allowed to have "Royal" in the name of your business without royal consent from the Governor General of Canada[2].

Just a bit of Canadiana, sparked by your comment, that I thought I'd share. I always get a kick out of the small but conspicuous cultural differences between Canada and the USA. They give me that Inglourious Basterds "number 3" moment.

[1] https://www.reddit.com/r/newyorkcity/comments/1ajujhi/who_re...

[2] https://www.canada.ca/en/canadian-heritage/services/royal-sy...


If "Royal" is protected, the bar is pretty low:

https://www.canadacompanyregistry.com/catagory/Royal/


HA! I guess it's not as enforced as I expected.


I agree.

This:

> I suspect that removing half of the bus stops in a city will piss people off and cause even less ridership.

is thrown out, but how do we know it's true? The commenter offers it as their opinion, but my opinion is the opposite: the stated preference will be that people think it's bad, but the revealed preference will show even more ridership as travel times improve.


I suspect the evidence here would fall mostly on the side of "it increases ridership," though it's probably hard to study: it's rarely done in isolation, and more commonly as part of a route redesign.


It wasn’t always this way: “Ask not what your country can do for you — ask what you can do for your country”


Perusing the code, it seems the translation is quite complex.

Shout out to https://github.com/vosen/ZLUDA which is also in this space and quite popular.

I got ZLUDA to generally work well enough with ComfyUI.


This, this and this! Was really inspired by ZLUDA when I made this.


I like that a lot -- going to start using it


I'm pretty sure the OP is talking about this thread. I have it top of mind because I participated and was extremely frustrated, not just by the AI slop, but by how much the author claimed not to use AI when they obviously did.

You can read it yourself if you'd like: https://news.ycombinator.com/item?id=46589386

It was not just the em dashes and the "absolutely right!" It was everything together, including the robotic clarifying question at the end of their comments.


Did you paste the wrong link? While the OP of that thread was accused of using LLMs, the thread doesn't really match what the article describes.

I think this one is a much closer fit: https://news.ycombinator.com/item?id=46661308


> The concept is good

Unfortunately, it's not. Once you read through the slop the implementation is still getting a pass/fail security response from the LLM, which the premise of OP's article is railing against.


A couple small things:

1. As many have harped on, the LLM writing is so fluffed up it's borderline unreadable. Please just write in your own voice. It's more interesting and would probably be easier to grok.

2. That repo is obviously vibe-coded, but I suppose it gets the point across. It doesn't give me much confidence in the code itself, however.

And a big thing:

Unless I'm misunderstanding, I feel like you are reinventing the wheel when it comes to authorization via MCP, as well as trying to get away with not having extra logic at the app layer, which is impossible here.

MCP servers can use OIDC to connect to your auth server right now: https://modelcontextprotocol.io/docs/tutorials/security/auth...

You give the following abstractions, which I think are interesting thought experiments but unconventional and won't work at all:

    Ring 0 (Constitutional): System-level constraints. Never overridable.
        Example: "Never self-replicate" "Never exfiltrate credentials"

    Ring 1 (Organizational): Policy-level constraints. Requires admin authority to change.
        Example: "No PII in outputs" "Read-only database access"
    
    Ring 2 (Session): User preferences. Freely changeable by user.
        Example: "Explain like I'm five" "Focus on Python examples"
In Rings 0 and 1 you're still asking the LLM to determine whether the action is blocked, which opens it up to jailbreaking. Literally what your whole article is about. This won't work:

    # Generate (Pass filtered tools to LLM)
    response_text, security_blocked = self._call_llm(
        query, history, system_prompt, allowed_tools, tools
    )
Rings 0 and 1 MUST be enforced via authorization and logic at the application layer. MCP Authorization helps with that, somewhat. Ring 2 can simply be part of your system prompt.
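To make that concrete, here's a minimal sketch of what "application layer" means: a deterministic allow-list check that runs before any tool call executes, completely outside the LLM. The permission table, role names, and tool names here are all invented for illustration, not taken from the repo.

```python
# Hypothetical sketch: enforcing "Ring 0/1"-style constraints in plain
# application code, where no amount of prompt injection can reach.

RING0_BLOCKED = {"self_replicate", "exfiltrate_credentials"}  # never allowed, for anyone
ORG_POLICY = {                                                # admin-controlled policy
    "sql_execute": {"admin"},
    "read_docs": {"admin", "analyst"},
}

def authorize(tool_name: str, user_roles: set) -> bool:
    """Deterministic check the LLM cannot talk its way around."""
    if tool_name in RING0_BLOCKED:
        return False
    allowed_roles = ORG_POLICY.get(tool_name)
    if allowed_roles is None:
        return False  # default-deny tools with no policy entry
    return bool(user_roles & allowed_roles)

print(authorize("read_docs", {"analyst"}))      # True
print(authorize("sql_execute", {"analyst"}))    # False
print(authorize("self_replicate", {"admin"}))   # False: Ring 0 binds admins too
```

The point is that this function never consults the model, so "jailbreaking" it isn't a meaningful concept.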

     Standard RBAC acts as a firewall: it catches the model’s illegal action after the model attempts it.
That's the point. It's the same reason you have mirrored implementations of RBAC on the client and server: you can't trust the client. An LLM can't do RBAC. It can pretend it does, but it can't.

The best you can do is inject the user's roles and permissions into the prompt to help with this, if you'd like. But it's kind of a waste of time -- just feed the response back into the LLM so it sees "401 Unauthorized" and either tries something else or lets the user know they aren't allowed.

I'm sorry, but as a resident of Ontario and a developer, this whole posting just enrages me. I don't want to discourage OP, but you should know there's a lot that's just incorrect here. I'd be much more relaxed about that if it all hadn't just been one-shotted by AI.


I appreciate the feedback. Let me address the key technical point:

On enforcement mechanism: You've misunderstood what the system does. It's not asking the LLM to determine security.

The Capacity Gate physically removes tools before the LLM sees them:

    user_permissions = ledger.get_effective_permissions()
    allowed_tools = [t for t in tools if (user_permissions & t['x-rosetta-capacity']) == t['x-rosetta-capacity']]
If READ_ONLY is active, sql_execute gets filtered out. The LLM can't see or call tools that don't make it into allowed_tools.

    response = client.messages.create(tools=allowed_tools)
This isn't RBAC checking after the fact. It's capability control before reasoning begins. The LLM doesn't decide permissions—the system decides what verbs exist in the LLM's vocabulary.
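A self-contained version of that filtering step can be sketched as below. The bitmask values and tool definitions are made up for illustration; only the `x-rosetta-capacity` field name mirrors the snippet above.

```python
# Illustrative capacity-gate sketch: a tool survives only if the user
# holds every capability bit the tool requires.

READ, WRITE = 0b01, 0b10

tools = [
    {"name": "read_table",  "x-rosetta-capacity": READ},
    {"name": "sql_execute", "x-rosetta-capacity": READ | WRITE},
]

def filter_tools(tools, user_permissions):
    return [
        t for t in tools
        if (user_permissions & t["x-rosetta-capacity"]) == t["x-rosetta-capacity"]
    ]

# A READ_ONLY user never sees sql_execute in their tool list.
print([t["name"] for t in filter_tools(tools, READ)])          # ['read_table']
print([t["name"] for t in filter_tools(tools, READ | WRITE)])  # ['read_table', 'sql_execute']
```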

On Ring 0/1: These are enforced at the application layer via the Capacity Gate. The rings define who can change constraints, not how they're enforced.

On MCP: MCP handles who you are. This pattern handles what you can do based on persistent organizational policies. They're complementary.

The contribution isn't "LLMs can do RBAC" (they can't). It's "here's a pattern for making authority constraints persistent and mechanically enforceable through tool filtering."

Does this clarify the enforcement mechanism?


Really? Even with your AI-generated article, I took my own time to read and reply sans AI, and you can't even respond to my comment without it? Thanks.


As someone who has thought about, planned, and implemented a lot of RBAC... I would never trust the security of a system with RBAC at that level.

And to elaborate on that -- for RBAC to have properly defined roles for the right people and ensure that there's no unauthorized access to anything someone shouldn't have access to, you need to know exactly which user has which access. And I mean all of them. Full stop. I don't think I'm being hyperbolic here. Everyone's needs are so different, and the risk associated with overprovisioning a role is too high.

When it's every LEO at the national level, that's way too many people -- it is pretty much impossible without dedicated people whose job it is to constantly audit that access. And I guarantee no institution or corporation would ever make a role for that position.

I'm not even going to lean into the trustworthiness and computer literacy of those users.

And that's just talking about auditing roles, never mind the constant bug fixes/additions/reductions to the implementation. It's a nightmare.

Funny enough, just this past week I was looking at how my company's roles are defined in admin for a thing I was working on. It's a complete mess, and roles are definitely overprovisioned. The difference is it's a low-stakes admin app with only ~150 corporate employees who access it. And there were only like 8 roles!

Every time you add a different role, assign it to each different feature, and then give that role to a different user, it compounds.
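The compounding is easy to put numbers on. This toy calculation (with invented figures in the spirit of the ~150-user, 8-role example above) counts every role-to-feature grant plus every user-to-role assignment as one item someone has to audit:

```python
# Toy illustration of the audit surface: each (role, feature) grant and
# each (user, role) assignment is another line item to review.

def audit_items(num_roles, num_features, num_users):
    role_feature_grants = num_roles * num_features
    user_role_grants = num_users * num_roles
    return role_feature_grants + user_role_grants

# Even a "small" app: 8 roles, 20 features, 150 users.
print(audit_items(8, 20, 150))   # 1360 items
# Add two roles and the surface jumps again.
print(audit_items(10, 20, 150))  # 1700 items
```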

I took your comment at face value, but I hope to god that Flock at least has some sort of data/application partitioning that would make overprovisioning roles impossible. Was your Texas cop tracking an abortion a real example? Because that would be bad. So so bad.


>Was your Texas cop tracking an abortion a real example? Because that would be bad. So so bad.

https://www.eff.org/deeplinks/2025/05/she-got-abortion-so-te...


It always starts with "we'll just give developers in the project access to things in the project and it will all be nice and secure; we'll also have a separate role for deploys so only Senior Competent People can do them."

Then the Senior Competent Person goes on vacation and some junior needs to run a deploy, so they get the role.

Then the other project needs a dev from a different project to help them.

Then some random person needs something that has no role for it, so they "temporarily" get some role unrelated to their job.

Then the project changes managers, but the old one is still there for the transition.

And nobody ever makes a ticket to rescind that access.

And everything is a mess.


...and "the fix" that companies usually resort to is "use it or lose it" policies (e.g. you lose your role/permission after 30 days of non-use). So if you only do deployments for any given thing like twice a year, you end up having to submit a permissions request every single time.

No big deal, right? Until something breaks in production and now you have to wait for multiple approvals before you can even begin to troubleshoot. "I guess it'll have to stay down until tomorrow."

The way systems like this usually get implemented is there's an approval chain: First, your boss must approve the request and then the owner of the resource. Except that's only the most basic case. For production systems, you'll often have a much more complicated approval chain where your boss is just one of many individuals that need to approve such requests.

The end result is a (compounding) inefficiency that slows down everything.

Then there's AI: Management wants to automate as much as possible—which is a fine thing and entirely doable!—except you have this system where making changes requires approvals at many steps. So you actually can't "automate all the things" because the policy prevents it.


To add to that, the roles also need to be identified.

When some obscure thing breaks you either need to go on a quest to understand which are all the roles involved in fixing it, or send a much vaguer "let me do X and Y" request to the approval chain and have them figure it out on their end.

And as the approval agents aren't the ones fixing the issue, it's a back and forth of "can you do X?" "no, I'm locked at Y" "ok, then how about now?"

Overprovisioning at least some key people ends up being inevitable.

