Hi HN, I built Memrail, an open-source governance layer for OpenClaw workflows.
The recurring issue I kept seeing: when agents write directly into memory or task stores, quality drifts quickly, and it's hard to answer what changed, who approved it, or how to roll it back safely.
Memrail treats writes like pull requests:
- dry-run first
- diff preview
- human approve/reject
- commit
- audit trail + undo
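The lifecycle above can be sketched in plain Python. To be clear, the names and shapes below are illustrative assumptions, not Memrail's actual API; the point is just the flow: a write becomes a pending change, the diff is reviewed before anything is stored, and every commit lands on an audit trail that makes undo trivial.

```python
# Hypothetical sketch of a governed write lifecycle (not Memrail's real API).
import difflib

class ChangeRequest:
    def __init__(self, key, old, new):
        self.key, self.old, self.new = key, old, new
        self.status = "pending"

    def diff(self):
        # Dry-run: show what would change without writing anything.
        return "\n".join(difflib.unified_diff(
            (self.old or "").splitlines(), self.new.splitlines(),
            fromfile="current", tofile="proposed", lineterm=""))

class GovernedStore:
    def __init__(self):
        self.data = {}    # committed state
        self.audit = []   # append-only audit trail

    def propose(self, key, new_value):
        # Nothing is written yet; the agent only gets a pending change.
        return ChangeRequest(key, self.data.get(key), new_value)

    def commit(self, change, approved_by):
        # Only a human-approved change reaches the store.
        change.status = "committed"
        self.audit.append((change.key, change.old, change.new, approved_by))
        self.data[change.key] = change.new

    def undo(self):
        # Roll back the most recent committed change from the audit trail.
        key, old, _, _ = self.audit.pop()
        if old is None:
            del self.data[key]
        else:
            self.data[key] = old

store = GovernedStore()
cr = store.propose("runbook", "step 1: drain traffic")
print(cr.diff())                       # human reviews the unified diff...
store.commit(cr, approved_by="alice")  # ...then approves and commits
store.undo()                           # rollback is one call, not archaeology
```

Memrail's actual surface routes this through HTTP endpoints and persists state via SQLAlchemy, but the shape of the gate is the same.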
Current surface:
- `/changes`: review inbox (commit/reject/undo)
- `/tasks`: execution workspace
- `/knowledge`: governed knowledge CRUD
Stack:
- FastAPI + SQLAlchemy
- Next.js
- SQLite default, PostgreSQL optional
Repo:
https://github.com/zhuamber370/memrail
I would value feedback on:
1) Where this governance gate should sit in an agent stack
2) Which diff/audit details are non-negotiable for real ops
3) What would block you from trying this