i am working on an offline weights harness for non-technical people, writers mainly. it's designed to work forever but also be adaptable as more weights get released etc.
it enforces very few paradigms, runs in the browser, and allows users to view and edit agent config files within the UI.
it's kind of a nightmare to try to figure out how to do this appropriately, but it's an interesting challenge and i have seen very few (~0?) projects with an approach like this ...
all the offline harnesses are optimized towards coding, vs. general text manipulation aka "writing."
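a minimal sketch of what a browser-editable agent config plus its loader might look like — every field name here is hypothetical, not taken from the actual project:

```typescript
// Hypothetical agent config shape; field names are illustrative only.
interface AgentConfig {
  name: string;
  modelPath: string;    // path to the local weights file
  systemPrompt: string; // editable by the writer in the UI
  temperature: number;
}

// Parse and validate a config string the user edited in the browser UI,
// filling in safe defaults for optional fields.
function parseAgentConfig(raw: string): AgentConfig {
  const cfg = JSON.parse(raw);
  if (typeof cfg.name !== "string" || typeof cfg.modelPath !== "string") {
    throw new Error("config missing required string fields: name, modelPath");
  }
  return {
    name: cfg.name,
    modelPath: cfg.modelPath,
    systemPrompt: typeof cfg.systemPrompt === "string" ? cfg.systemPrompt : "",
    temperature: typeof cfg.temperature === "number" ? cfg.temperature : 0.7,
  };
}
```

validating on every edit like this is one way to let non-technical users touch config files safely: a bad edit fails loudly in the UI instead of silently breaking the agent.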
True, though I have found that forcing an LLM agent (via an agent skill) to document the reasoning behind each "decision" it makes seems to lead to better decision-making. Or at least to more justifiable decisions (even if the justification is bad).
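One way to enforce that kind of "document every decision" rule is to make the justification a required field, so an undocumented decision is simply rejected. A minimal sketch under that assumption — the names here are hypothetical, not the commenter's actual skill:

```typescript
// Hypothetical record for an agent's documented decision; names are illustrative.
interface DecisionRecord {
  decision: string;       // what the agent chose to do
  reasoning: string;      // why, in the agent's own words
  alternatives: string[]; // options it considered and rejected
}

const decisionLog: DecisionRecord[] = [];

// Refuse to log a decision that carries no justification -- the whole point
// of the skill is that a blank rationale is not allowed.
function recordDecision(
  decision: string,
  reasoning: string,
  alternatives: string[] = []
): DecisionRecord {
  if (!reasoning.trim()) {
    throw new Error(`decision "${decision}" has no documented reasoning`);
  }
  const rec: DecisionRecord = { decision, reasoning, alternatives };
  decisionLog.push(rec);
  return rec;
}
```

Even a bad justification is useful here: it gives a human reviewer something concrete to push back on.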
This is a genuine question, and I will be honest: I do not really dislike JS. I have even worked on large TypeScript projects and appreciated it.
What I do not like is the strange mix of technologies you have to cope with in order to work with Python on the web: your project is often a mix of python / html / css / react / js / node.
Many very nice frameworks try to abstract this and present you only the python side; but they rely on this stack internally.
Once you want to reach complex use cases (such as refreshing at a reasonable rate), you will have to "open the engine" and wade into this mix.
> where you fully give in to the vibes, embrace exponentials, and forget that the code even exists [...] It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.
So clearly we need a term for what happens when experienced, professional software engineers use LLM tooling as part of a responsible development process, taking full advantage of their existing expertise and with a goal to produce good, reliable software.
"Agentic engineering" is a good candidate for that.
> as part of a responsible development process, taking full advantage of their existing expertise and with a goal to produce good, reliable software
It's shifted so much for me. I used to think I had a solemn duty to read every line and understand it, or to write all the test cases myself. Then I started noticing that tools like CodeRabbit or Cursor would find things in my code that I would rarely find myself.
I think right now it's shifted my perception of my role to one where I am responsible for "tilting" the agentic coding loop; ultimately the goal is to ensure the agent learns from its mistakes, self-organizes, and embraces a spirit of Kaizen.
Btw, thank you for your work on Django; the last 20 years with it were life-changing (I did .NET before).
Yeah, Andrej thinks "agentic engineering" is a good candidate too. Note that he's not claiming to have coined it there:
> Many people have tried to come up with a better name for this to differentiate it from vibe coding, personally my current favorite "agentic engineering"
Andrej posted that on 4th of February. I first saw the term "agentic engineering" used by OpenClaw creator Peter Steinberger in October 2025 (a month before he wrote the first line of code for OpenClaw) https://steipete.me/posts/just-talk-to-it
Originally a metric to drive prioritization and constraints on the project (amongst others, like zero deps, etc) -- but clearly ended up getting abused
hoping to publish v0.1.0 by the end of may.