I wonder if the child safety section "leaks" behavior into other risky topics, like malware analysis. I see overlap in how the reports mention that once the safety has been tripped it becomes even more reluctant to work, which seems to match the instructions here for child safety.
That's not how I interpreted it in this instance, but it could certainly be that way.
I guess that'd be like keeping all correspondence in a shoe box (to be reviewed later -- or maybe never), or maybe the automated recording of my phone calls with others (which is completely legal where I am; I don't even have to tell them).
And I suppose whether I felt that would be creepy or not depends a lot upon intent, and consent.
If the intent were pure and good, and the consent both informed and granted, then I'd have no problem with any of this at all -- whether a shoebox, a tape recorder, or a bot is involved in taking the notes.
I called my parents and told them about the idea. They had never even used Telegram before we started this project, but they joined enthusiastically once they learnt that I was trying to build a family history. They are native Nepali speakers, so the system prompt ensured that the bot always responds to their questions and answers in Nepali.
It is really easy to way over think, or over feel, AI.
Sometimes it's just a really good interface that matches the task well.
Think of all the people that still avoided getting a computer a decade or two ago, because "online" was so unnatural and creepy to them. Obviously, the internet had and has those places. And frankly a lot of social media still is.
But it can also just be wikipedia, making flight reservations, etc. When that is all it is doing, what you want it to do, that is all it is.
An automated language interface can just be a really good note collector/collator.
Personally, I look forward to the wise, well dressed, well spoken, waist-up robot bartenders we have been promised by movies for decades. Not creepy at all!
True, it can't compete with AWS/GCP/Azure if you're large scale. But most of us are not large scale, we just need a no frills experience instead of dealing with 27 nested panels just to spin up a VM.
And they're as deterministic as the underlying thing they're abstracting... which is kinda what makes an abstraction an abstraction.
I get that people love saying LLMs are just compilers from human language to $OUTPUT_FORMAT but... they simply are not except in a stretchy metaphorical sense.
That's only true if you reduce the definition of "compiler" to a narrow `f = In -> Out`. But that is _not_ a compiler. We have a word for that: function. And in an LLM's case, an impure one.
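To make the distinction concrete, here's a toy sketch (everything here is invented for illustration, not any real compiler or LLM API): a compiler is a pure function, so the same input always maps to the same output, while an LLM call is impure because sampling can produce a different output for the same input.

```python
import random

def compile_expr(src: str) -> str:
    # Toy "compiler": pure and deterministic -- the same input
    # always yields the same output.
    return src.replace("PLUS", "+")

def llm_translate(src: str) -> str:
    # Toy stand-in for an LLM call: impure -- sampling means the
    # same input can yield different outputs on different runs.
    synonyms = {"PLUS": ["+", "plus", "added to"]}
    return " ".join(random.choice(synonyms.get(tok, [tok]))
                    for tok in src.split())

# Determinism holds for the compiler...
assert compile_expr("1 PLUS 2") == compile_expr("1 PLUS 2")
# ...but llm_translate("1 PLUS 2") may differ between calls.
```

That impurity is exactly why "LLMs are compilers" only works as a stretchy metaphor.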
> this project, even if somewhat spaghettified, will likely take orders of magnitude less time to perfect than it would for someone to create the whole thing from scratch without AI
That's a big leap of faith and... kinda contradicts the article as I understood it.
My experience is entirely opposite (and matches my understanding of the article): vibing from the start makes you take orders of magnitude more time to perfect. AI is a multiplier as an assistant, but a divisor as an engineer.
Neither of these is really the right way to code with AI. There are two basic ways to code with AI that work:
1. Autocomplete. Pretty simple; you only accept auto-completes you actually want, as you manually write code.
2. Software engineering design and implementation workflow. The AI makes a plan, with tasks. It commits that plan to files. It starts sub-agents to tackle the tasks. The sub-agents create tests to validate the code, then write code to pass the tests. The sub-agents finish their tasks, and the AI agent reviews the work to see if it's accurate. Multiple passes find more bugs and fix them in a loop, until there is nothing left to fix.
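The second workflow can be sketched roughly like this. To be clear, every name here (`make_plan`, `run_subagent`, `review`) is hypothetical and just models the control flow described above; a real agent framework looks nothing this simple.

```python
# Hypothetical sketch of the plan -> sub-agent -> review loop.
# All functions are invented stand-ins for illustration only.

def make_plan(goal: str) -> list[str]:
    # The lead agent breaks the goal into tasks (committed to files).
    return [f"task: {part}" for part in goal.split(" and ")]

def run_subagent(task: str) -> dict:
    # Each sub-agent writes tests first, then code to pass them.
    test = f"test for {task}"
    return {"task": task, "test": test,
            "code": f"code passing {test}", "bugs": 1}

def review(results: list[dict]) -> int:
    # The lead agent reviews the work; returns remaining bug count.
    return sum(r["bugs"] for r in results)

def workflow(goal: str, max_passes: int = 3) -> list[dict]:
    results = [run_subagent(t) for t in make_plan(goal)]
    for _ in range(max_passes):
        if review(results) == 0:
            break  # nothing left to fix
        for r in results:
            r["bugs"] = 0  # stand-in for "fix bugs found in review"
    return results

done = workflow("parse input and emit report")
```

The point of the loop structure is that review-and-fix runs repeatedly, not once, which is what makes the multi-pass part of the workflow matter.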
I'm amazed that nobody thinks the latter is a real thing that works, when Claude fucking Code has been produced this way for like 6 months. There's tens of thousands of people using this completely vibe-coded software. It's not a hoax.
I have worked at companies from startups to Fortune 500s. They all have garbage code. Who cares? It works anyway. The world is held together with duct tape, and it's unreasonably effective. I don't believe "code quality" can be measured by how it looks. The only meaningful measure of its quality is whether it runs and solves a user's problem.
Get the best programmer in the world. Have them write the most perfect source code in the world. In 10 years, it has to be completely rewritten. Why? The designer chose some advanced design that is conceptually superior, but did not survive the normal and constant churn of advancing technology. Compare that to some junior sysadmin writing a solution in Perl 5.x. It works 30 years later. Everyone would say the Perl solution was of inferior quality, yet it provides 3x more value.
I hear you about "it just works" mattering infinitely more than some arbitrary code quality metric
but I'm not judging Claude Code by how it looks. I kinda like the aesthetics. I'm talking about how slow, resource-hungry and finicky/flickery it is. it's objectively sloppy
> when Claude fucking Code has been produced this way for like 6 months
And people can look at the results (illegally) because that whole bunch of code has been leaked. Let's just say it's not looking good. These are the folks who actually made and trained Claude to begin with, they know the model better than anyone else, and the code is still absolute garbage tier by sensible human-written code quality standards.
Human code quality standards are built around the knowledge that humans prefer polished products that work consistently. You can get away without code quality in the short term, especially if you have no real competitors - to a lot of people, there just aren't any models other than Anthropic's which are particularly useful for software development. But in the long term it gets you into a poor quality trap that's often impossible to escape without starting over from scratch.
(Anthropic, of course, believes that advances in AI capability over the next few years will so radically reshape society that there's no point worrying about the long term.)