It would be nice to start with what this actually is from the user’s point of view.
Forking, paths, JSON, decentralized, encryption, key rotation, etc., and I still have no idea why I would bother or who else could use it (a decentralized social network is only so much fun if you are the only one on it).
I can think of at least a couple of dozen fairly technical friends who'd be capable enough to set this up themselves, and who're at least adjacently interested in recreational paranoia. And probably another dozen or two who're definitely into recreational (or possibly delusional, and/or fully deserved) paranoia who'd be willing to learn, or to get help setting this up.
Right now, those circles of friends are _reasonably_ well served with some combination of Mastodon (effectively zero security but with decent findability) and Signal (much more limited, mostly to people you'd be OK with having your phone number).
I will definitely take this for a spin, and start having discussions with particular groups of friends to see if I get any traction.
Yeah, my colleague recently said "hey, I've burnt through $200 in Claude in 3 days". And he was prompting. Max 8hrs/day. Imagine what would happen if AI was prompting.
I really like this analogy: AI is (or should be) like an exoskeleton; it should help people do things. If you put your car in drive, step out, and go to sleep, the next day it will be farther along, but the question is whether it's still on the road.
Agreed. The spec file is context. Writing acceptance criteria before you prompt provides the context the agent needs to not go off in the wrong direction. Human leverage just moved up and the plan/spec is the most important step.
Parallelism on top of bad context just gets you more wrong answers faster
Sorry, but isn't the bottleneck then simply finding relevant things to do? Like, how big a qualified backlog do you have that your pipeline doesn't run dry?
The cognitive architecture, so to speak, for the LLM can make a huge difference - triggers and skills go a long way when combined with shell scripts that dual-write.
Reminds me of when I was looking for Obsidian note management workflows and every single person who posted about theirs used it to take notes on... note taking workflows.
Ownership and responsibility are useless when a YouTuber tells their million followers that GitHub contributions are valued by companies and this is how you can create a pull request with AI in three minutes, and you get a hundred low-value noise PRs opened by university students from the other side of the globe. It’s Hacktoberfest on steroids.
There's a difference between "I've read some LGPL code once, maybe I could do something similar" and "I've been reading this LGPL code for 12 years and now I'm going to do exactly the same thing".
It’s nice to have a break from AI FUD. It reminds me of a time when I could browse HN without immediately getting anxiety, because nowadays you can’t open a comment section without finding a comment about how you’re ngmi.
Man... I spent the last 6 months writing code by voice chat with multiple concurrent Claude Code agents and an orchestration system, because I felt like that was the new required skill set.
In the past few weeks I've started opening neovim again and just writing code. It's still 50/50 with a Claude code instance, but fuck I don't feel a big productivity difference.
I just write my own code and then ask AI to find any issues and correct them if I feel it is good advice. What AI is amazing at is writing most of my test cases. Saves me a lot of time.
To be fair, many human-written tests I've read do the same.
Especially when folks are trying to push %-based test metrics and have types (and thus the tests assert types where the types can't really be wrong).
I use AI to write tests. Many of the e2e ones fell into the pointless niche, but I was able to scope my API tests well enough to get a very high hit rate.
The value of said API tests isn't unlimited. If I had to hand-roll them, I'm not sure I would have written as many, but they test a multitude of 400, 401, 402, 403, and 404s, and the tests themselves have absolutely caught issues such as a validator not mounting correctly, or the wrong error status code due to check ordering.
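To give a concrete feel for it, here's roughly the shape of those tests, as a minimal sketch assuming FastAPI + pytest; the endpoint, header name, and key are made up for illustration, not from my actual project:

```python
# Minimal sketch: a toy endpoint plus status-code tests.
# Endpoint, header name, and key are hypothetical.
from fastapi import FastAPI, Header, HTTPException
from fastapi.testclient import TestClient
import pytest

app = FastAPI()

@app.get("/items/{item_id}")
def get_item(item_id: int, x_api_key: str | None = Header(default=None)):
    # Check ordering matters: auth first, then authorization, then existence.
    if x_api_key is None:
        raise HTTPException(status_code=401, detail="missing API key")
    if x_api_key != "secret":
        raise HTTPException(status_code=403, detail="bad API key")
    if item_id != 1:
        raise HTTPException(status_code=404, detail="no such item")
    return {"id": item_id}

client = TestClient(app)

@pytest.mark.parametrize(
    "path, headers, expected",
    [
        ("/items/1", {}, 401),                         # no key at all
        ("/items/1", {"X-API-Key": "wrong"}, 403),     # wrong key
        ("/items/999", {"X-API-Key": "secret"}, 404),  # authed, missing item
        ("/items/abc", {"X-API-Key": "secret"}, 422),  # validator rejects non-int id
        ("/items/1", {"X-API-Key": "secret"}, 200),    # happy path
    ],
)
def test_status_codes(path, headers, expected):
    assert client.get(path, headers=headers).status_code == expected
```

Parametrizing like this is what makes it cheap to cover a whole row of status codes, and it's exactly the kind of table the AI is happy to fill in once the scope is pinned down.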
It's good at writing/updating tedious test cases and fixtures when you're directing it more closely. But yes, it's not as great at coming up with what to test in the first place.
The assertion here is not about implementation logic. GP presumably has in mind unit tests, specifically in a framework where the test logic is implemented with such assertions. (For the Python ecosystem, pytest is pretty much standard, and works that way.)
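For anyone unfamiliar with the style being referenced: a pytest test really is just plain `assert` statements. A minimal sketch, with a made-up function under test:

```python
# Sketch of pytest's plain-assert style; `slugify` is a hypothetical
# function under test, not something from the thread.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

def test_slugify_collapses_whitespace():
    assert slugify("Hello   World") == "hello-world"

def test_slugify_preserves_single_words():
    assert slugify("Pytest") == "pytest"
```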
The majority of the data in typical message-passing plumbing code is a combination of opaque IDs, nominal strings, a few enums, and floats. It's mostly OK for these cases, I have found, especially in typed languages.
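As a rough illustration of the kind of message shape I mean (the field names and enum are made up):

```python
# Hypothetical "plumbing" message: opaque IDs, nominal strings,
# a small enum, and a float payload.
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    DONE = "done"

@dataclass(frozen=True)
class TransferEvent:
    transfer_id: str    # opaque ID, never parsed
    account_name: str   # nominal string, only compared or displayed
    status: Status      # one of a few enum values
    amount: float       # plain numeric payload
```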
Right. If AI actually made you more productive, there would be more good software around, and we wouldn't have the METR study showing it makes you 20% slower.
AI delivers the feeling of productivity and the ability to make endless PoCs. For some tasks it's actually good, of course, but writing high quality software by itself isn't one.
Ah, yes. LLM-assisted development. That thing that is not at all changing, that thing that different people aren’t doing differently, and that thing that some people aren’t definitely way better at than others.
I swear that some supposedly “smart” people on this website throw their ability to think critically out the window when they want to weigh in on the AI culture war.
B-but the study!
I can say with certainty that:
1. LLM-assisted development has gotten significantly, materially better in the past 12 months.
2. I would be incredibly skeptical of any study that’s been designed, executed, analysed, written about, published, and talked about here, within that period of time.
This is the equivalent of a news headline starting with “science says…”.
Nobody is interested in your piece of anecdata, and asserting that something has gotten better without doing any studies on it is the exact opposite of critical thinking.
You are displaying the exact same thing that you were complaining about.
Really? The past two weeks I've been writing code with AI and feel a massive productivity difference. I ended up with 22k loc, which is probably around as many as I'd have written manually for the featureset at hand, except it would have taken me months.
My work involves fixing/adding stuff in legacy systems. Most of the solutions AI comes up with are horrible. I've reverted back to putting problems on my whiteboard and just letting it percolate. I still let AI write most of the code once I know what I want. But I've stopped delegating any decision making to it.
Well at least for what I do, success depends on having lots of unit tests to lean on, regardless of whether it is new or existing code. AI plus a hallucination-free feedback loop has been a huge productivity boost for me, personally. Plus it’s an incentive to make lots of good tests (which AI is also good at)
> Your personal data will be processed and information from your device (cookies, unique identifiers, and other device data) may be stored by, accessed by and shared with 210 partners, or used specifically by this site.
I’m not sure it is just that; I don’t even see positions listed where I would like to work. As for salary ranges, I see lower upper limits than my second-best offer three and a half years ago. Considering the high inflation, that’s crazy.
I would not mind switching, but 1. I don’t see interesting positions, 2. they don’t pay well, and only then 3. they might not even want me.
It might also be just my niche, but finding a good position feels completely impossible for me.
I am doing cross-platform mobile development, and I’m wondering how I could transition into backend development; I’ve even started considering decentralized finance…
Yeah, I don't know if I'd call myself competent (I'm late intermediate/early senior, so the worst of the curve here). But there's a difference between "interviews have gotten a lot harder now" and "I can't even get a response back". It's far, far more the latter.
My resume isn't bad on paper either. It's not FAANG coded, but it's decent experience.
Same experience. I heard about them in some podcasts and it all sounded good: performance, built from the ground up, Rust, collaboration, whatever.
Tried it, and it was the worst experience I’ve had with an editor since I started my career… then I tried it again, because maybe they’d figured it out… it’s still bad.
I am not sure I have the same definition of “love my editor again” (from their landing page) as the Zed team… my definition is PG. I don’t see a reason that I need to be 18 to use a code editor.