Maine has multiple distinct accents, though like the parent said, it's not worth making the distinction unless it's for a project like this.
In southern Maine, the accent is moderate and is more of a general northern New England accent. Yahd = yard, that kind of thing.
The iconic Maine accent is the Downeast accent and is still kicking up/down there. It's kind of nasally and has a lilt to it. You have to dig through a morass of influencer content on youtube to find an authentic example of it, but this is a good one: https://www.youtube.com/watch?v=FZDpx1aLovc
But there are a number of different accents throughout Maine. My favorite without a doubt is the accent in way northern Maine, from the Allagash Valley. It's just a pleasant accent. This is a good example: https://soundcloud.com/mpbn/troy-jackson-allagash-logging
Just an update to this. I realize I misspoke and misnamed where that last accent comes from. It's too late for me to edit the original comment, so I'm just going to drop it in here. There's no such thing as the Allagash Valley, only the St. John Valley.
> You could say many of the broad and widespread mental issues we have in the US is the result of automobiles leading to suburbanization and thus isolation of people.
Yes, you could say that, though I'm not sure who would actually say that seriously.
Respectfully, without judgement, your perspective may be wildly skewed because you’re American (going by your post history). I suspect the negative externalities in a society built around cars don’t register with you because to you it is the normal state of the world. As a Dutchman, I grew up in a built world that is based around the human scale and to me your parent’s claim comes across as astonishingly obvious.
I didn't really say what my perspective is on whether the suburbs are good or bad or cars are good or bad. I think there are plenty of reasonable arguments as to whether they are or not. What I am dubious about is that they are somehow the source of some hand-wavy "widespread" mental health issue in America.
I wouldn't be surprised if it contributed significantly because of the lack of (access to) third places [0] it breeds, but that is conjecture on my part, so fair enough.
I would be hesitant to draw that correlation. IMO cars give you more access to third places, not less. In probably every city in the world, a car covers far more ground in a 30-minute drive (once rush hour has died down) than you can cover in 30 minutes of walking plus a transit ride, especially when transit schedules tend to favor commutes into the central part of town over off-peak trips to a random corner of it.
Say what you will about the ills of the car, but it takes a lot of specific context for them to emerge as the worst transport option from an individual perspective. Really, most of the car's ills are collective harms, something most people can't appreciate, as a tragedy-of-the-commons sort of failing.
Yes, cars mean you can cover more ground in 30 minutes, but they also push EVERYTHING further apart. And what about parking? I can get very far on foot, by bike, or by train in 30 minutes, especially in an environment that hasn't been made artificially sparse by accommodating cars.
There's no shortage of third places in the American suburbs, you just have to drive to them. I'm sympathetic to the argument that walkable third places are better third places because I lived car-free in New York City for a decade and enjoyed many of them. But living in the suburbs or exurbs doesn't inherently mean you don't have access to shared communal spaces.
If I believed there is a crisis of isolation in the United States and degradation of community, I would first focus on more recent technologies, say ones introduced around 2007, than on technologies introduced in the early 1900s.
If anything, the golden age of third places coincided with the golden age of suburbanization, which was obviously heavily car dependent. Their death almost certainly has more to do with financialization making it harder for small businesses to stay afloat, a drop in demand due to competition for attention, and decreasing work-life balance eroding people's ability to socialize.
In my grandfather's day, one income was enough to support a household, and there was less free work being done on the job, which meant fewer hours and being less drained at the end of the day. And yes, people spent less time commuting, meaning they had more time and energy for socializing after work. But communities were also more decentralized, and population centers had fewer people in general. A big part of the problem is that modern cities can be massive, and invariably funnel people to a handful of work districts, which just doesn't scale. When you double the distance to the CBD, you quadruple the number of people coming in (give or take; it's not exact, because we tend to increase density close to the CBD as a response to this). Take it from someone who's lived in a place where cars aren't really necessary: the logistics of urbanization are still a crap experience when you're crammed into a train carriage during rush hour. It's common for people to commute for 90 minutes on public transport in Asian megacities, for example.
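The quadrupling arithmetic is just area scaling: treating the commute shed as a disc of roughly uniform density $\rho$ around the CBD,

```latex
N(r) = \rho \pi r^2
\quad\Rightarrow\quad
\frac{N(2r)}{N(r)} = \frac{\rho \pi (2r)^2}{\rho \pi r^2} = 4
```

hence doubling the radius quadruples the population funneled in, absent the density gradient mentioned above.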
The Netherlands has 513 cars per 1000 people compared to the US rate of 779. A significant difference, certainly, and it's plausible that there's a threshold effect where a society built around 50% more cars faces unique problems. But this doesn't at all seem consistent with the original idea that automobile technology itself is bad.
Car ownership is not a good proxy for how important cars are to living well in a particular place, when the places you're comparing have completely different design philosophies. If you look at how many trips the average Dutch car owner takes by car vs. how many trips the average American car owner takes by car, I guarantee you there will be a much larger difference.
I'm also not sure that anyone was claiming automobile technology itself was bad, just that in many places at many times it has been used in suboptimal and harmful ways.
I definitely agree that merely having automobiles doesn't require adopting characteristically American urban design philosophy, and that this philosophy isn't very compatible with dense walkable urbanism. But I don't see how to interpret
> The upsides of automobiles generally all exist outside of the 'personal automobile', i.e. logistics. These upsides and downsides don't need to coexist. We could reap the benefits without needing to suffer for it, but here we are.
other than as a claim we should not have personal automobiles.
You might think so, but a flat number comparison doesn't do justice to the vast differences in urban planning. Check out this video, it describes Dutch urban planning pretty well: https://www.youtube.com/watch?v=d8RRE2rDw4k
I suppose in the Netherlands they use carts and horses to stock the supermarkets? To transport coal to the power plant (or the wind turbine blades to where the turbine will be built)? Surely a bicycle isn't enough for that.
You might be only talking about personal cars, but you've got to remember that trucks share the same infrastructure cars use. Modern city wealth wouldn't be possible without engine-driven vehicles on roads (maybe if you went really crazy with rail, that could be an exception). Take away personal cars and either the infrastructure stays or your city isn't possible anymore either.
But even beyond that - personal cars provide a level of freedom and capability to the general population that no other technology can match. Trains suck, buses suck, passenger ships suck, planes are uncomfortable (but otherwise pretty good). Bikes don't work with long distances, multiple people, the infirm, winter (riding in the winter is a great way to get injured, two-wheeled vehicles don't do well with ice), bad weather, if you need to be presentable when you arrive. Oh, and bikes get stolen. Constantly.
There's a lot of people in this comment thread interpreting the post's analogy as "ban all cars forever" rather than "consider how to use them as part of a wider societal strategy that makes places better for everyone".
You can implement all kinds of transport badly. Trains can suck if they don't take you where you want to go, bicycles suck if wherever you live doesn't provide acceptable parking methods.
Cars are great in a vacuum, but once a city decides it's going all in on cars and bulldozes the place, they provide problems for anyone else. Buses will suck because they're stuck in traffic and walking will suck when you're getting around on the side of 3 lane highways or vast surface parking lots. Most importantly, driving will suck, because everyone has to drive everywhere, and that creates more traffic for the rest of us. You get in a doom loop where you build more lanes, which drives more vehicle traffic. If you make the alternatives more viable, people take up those alternatives and vehicle traffic eases.
It seems like a hard argument to make that bikes can suck more than cars because of parking. As a bicycle enthusiast, I can provide you with some better reasons. You'll get rained on. You'll get sweaty. The helmet will mess up your fancy hair. You can't go as fast.
Parking is one of the biggest upsides of bikes IMO.
The point I was engaging with was how urban spaces can discourage certain kinds of transport users if their needs haven't been considered. If you get to your destination and have to hunt for a nearby fence post to lock your bike to, that's a bit of friction that makes me less willing to cycle. If I know there's a nice safe, quiet route for me to take, and a sturdy rack at my favourite cafe, it's a much easier decision.
Parking is one of the biggest downsides of bikes IMO.
Bikes are great, I ride mine whenever I can. But most places lack secure bike parking and the police don't take bike theft seriously. So sometimes I drive my car even to places where I could easily ride a bike just because I'm confident the car will still be there when I get out.
Yeah, that's a real problem. For practical urban riding, I use a beater fixie that I can replace for less than a car payment. I've had a few stolen, but that's across decades. This is probably highly dependent on your particular location. But I've also had cars broken into.
Replacing the bike is actually a lot easier than getting the windows fixed IME.
Fwiw the only place I had a bike stolen was the secured underground garage in my apartment complex. Never had issues just parking it out front while running errands or other such stuff, or parking outside work during the day. I'd figure foot traffic keeps angle grinding down; I've personally never seen it done that brazenly. It seems more likely overnight, when the thief has time to work and can assume no one is awake to hear the grinder (which is what happened in the case of my apartment).
If I can't find a good spot to actually lock up the bike though I will just bring it in to wherever I'm going. Shops or restaurants don't seem to care if a bike is parked in the corner and you can thread your ulock through the wheels and make it useless to ride off with.
By that point there will be more infrastructure, like more racks (and more eyes on the street as a result). Chances are you will be the only one doing this. But if 10 people start doing it at once, awesome stuff is coming for your city, I'm sure.
> Parking is one of the biggest upsides of bikes IMO.
I think that's true at the moment, but only because there's so little demand for it. You can always find a sign post or something because no one else is snatching them up.
At the end of the day bikes are still private vehicles and, though they're smaller than cars, they aren't that small and the infrastructure to secure them (which is integrated into cars) isn't small either. So you get the same problem writ small.
Writ very small, though. You can easily fit a dozen bikes into the space of one parking spot, if not more (double-decker racks exist!), and it is a lot easier to contrive a spot for your bike in the absence of bike racks than it is to park a car when there's no parking.
Heck—if you have a car & your building doesn't have parking, you're basically screwed. If you have a bike & it doesn't have a bike rack, you can just carry it up & put it on your balcony. At that point, I don't think you can really compare the two.
Buses are only workable because of cars. We build roads for cars first and trucks second. Buses are at most 3rd in the list and getting to use them is an incidental side benefit.
No one builds enough roads for buses. They have to use the roads built for cars.
Horse-drawn busses predate private automobiles by almost a hundred years.[0] The movement to pave roads was started by bicyclists decades before the rise of the automobile.[1] Cars usurped preexisting infrastructure and drove out other road users, like trolleybusses and streetcars.
We have so thoroughly remade society in the service of cars that it can be difficult to recognize any possible alternative.
Paved roads have been around for thousands of years longer than the bicycle.
> Horse-drawn busses predate private automobiles by almost a hundred years.
And they used roads that already existed for transit and transport. People have always built roads.
> Cars usurped preexisting infrastructure and drove out other road users, like trolleybusses and streetcars.
This is some significant historical revisionism. You’re making it sound like all the roads were built for buses and streetcars.
The good roads movement is certainly interesting history. But I don’t think it changes the reality that buses are only workable because they are mostly piggybacking on infrastructure built for other vehicles.
Of course, that’s rather the point of roads, that they are infrastructure that benefits many forms of transit and transportation.
That’s cool but one counterexample does not negate the general trend. Most places have few dedicated bus lanes. Most cities have approximately zero dedicated bus roads.
Even the cited system seems to be limited and exists to connect with trains as well as buses that use normal streets. Wikipedia says that they chose buses for this expansion instead of trains specifically because there was already a strong bus system, which uses the same city streets as cars and trucks.
Sure, industrial scale transport and personal transport share a rolling platform with an engine, but they're different platforms with different requirements, different economics and different lifecycles.
However, you're making my point for me. If you fail to invest in good public transport it will suck. That is downstream from designing your society around cars instead of transportation for everyone. Bikes do not work for extremely long distances (although school children here will happily pedal 10km to school and back on the daily), but those long distances are a requirement precisely because infrastructure is designed around cars. Even so you can take bicycles on trains and use them for last mile transport at your destination, or store a bicycle at your destination train station (most have lockers or guarded storage) if it's a commute.
Regarding bad weather; if winter is bad enough for bicycles to fail, then certainly it is not safe to drive either, and lethality is orders of magnitude higher. Generally though people here ride bike paths that are shovelled and brined just as the roadways are.
Bikes have their own infrastructure that they do not share with trucks. It is for human beings only.
> Regarding bad weather; if winter is bad enough for bicycles to fail, then certainly it is not safe to drive either
This is a big claim with no justification.
Cars have dynamic traction control, internal temperature control, etc. You may get frostbite on your bicycle, but almost certainly not in your car. Having four wide wheels makes the vehicle radically more stable.
Add seat belts, air bags, etc. cars have far more safety features than a bike can.
Of course, cars go faster and going faster increases lethality at the limit. No argument there, far more people die in cars in general. But specifically concerning weather, cars allow people to do many things that a bicycle cannot.
Not to mention general comfort. Being in a bike in a snow storm is very unpleasant!
There’s probably very little weather that is safe for cars but unsafe for bikes. Uncomfortable, yes, possibly extremely so. But you can bike in a downpour so severe that it’s unsafe to drive, specifically because you’re not in a 2-ton death machine.
Maybe a severe enough snow storm? Even then we’re in Goldilocks territory for the storm to be unsafe for bikes but safe(ish) for cars.
The biggest factor is that people simply will not get on their bikes in severe enough weather. At least not in most places. Maybe in the Netherlands they’ll bike in a blizzard.
Safe for cars/bikes, or the passengers vs the bicyclist?
Hail comes to mind. Lightning possibly (I believe cars are much better protected against lightning strikes). High winds could easily push bikes around / knock them over where cars just keep going.
We drove our van through a forest fire (Cedar Creek Fire - a BIG one) and got a bit of smoke, but otherwise, just fine. No way would I have attempted that on a bike - the increased aerobic activity alone (to say nothing of embers / ashes / etc) would have probably caused crazy amounts of smoke inhalation / death.
And there is a reason drivers hate SOME bikers - here in CA, many simply refuse to follow the rules of the road. My light turns green, and 5 seconds later, some biker comes rolling along in the perpendicular direction - I almost hit him. This kind of stuff happens over and over. I am very fond of bikers when they follow the rules - I bike sometimes too.
>No way would I have attempted that on a bike - the increased aerobic activity alone (to say nothing of embers / ashes / etc) would have probably caused crazy amounts of smoke inhalation / death.
Riding a bicycle while wearing an unpowered respirator/face mask is surprisingly easy, especially if it has an exhalation valve. It does restrict breathing somewhat, but breathing isn't usually the bottleneck when you're cycling. This might even be the optimal way to escape a fire if the roads are congested.
> There’s probably very little weather that is safe for cars but unsafe for bikes.
Any weather where the wind is >15mph will be safer in a car. Hail. 100 F days. Thunderstorms. I love walking and public transportation but holy hell the thought of biking in some of our Texas weather is horrifying.
Not to mention that my 6yo and 9yo are much safer in my car than cycling through inclement weather! Not everyone is a single individual with no children! Holy hell, the trip from a kid's bday party to my house two weekends ago would've been deadly for my kids, but in a car, the weather wasn't an issue.
> Regarding bad weather; if winter is bad enough for bicycles to fail, then certainly it is not safe to drive either, and lethality is orders of magnitude higher. Generally though people here ride bike paths that are shovelled and brined just as the roadways are.
Extreme hot weather and pollution are both a much bigger health risk on a bike than in a car.
> industrial scale transport and personal transport share a rolling platform with an engine, but they're different platforms with different requirements, different economics and different lifecycles.
What does this mean? This feels a bit like a distinction without a difference, as the infrastructure built is shared by both.
> although school children here will happily pedal 10km to school and back on the daily
How flat is it there? I can’t imagine a typical kid biking 10km each way around me. I feel like the average kid at my kids’ school would take 45 minutes or more to bike that distance.
>What does this mean? This feels a bit like a distinction without a difference, as the infrastructure built is shared by both.
I guess I wasn't clear in implying my doubts as to whether that's a hard requirement. Trucks are much larger and heavier which takes its toll on the road surface itself. They don't need access to suburban environments. Even in the inner city here trucks are banned outside of loading and unloading hours to foster a walk-able environment. So yes, in part they do, but it's not that black and white.
>How flat is it there? I can’t imagine a typical kid biking 10km each way around me. I feel like the average kid at my kids’ school would take 45 minutes or more to bike that distance.
Famously pretty flat, but with e-bikes gaining ground, elevation changes don't make much of a difference anymore. And yeah a 45 minute commute by bike is not unusual, but remember, we have the safe infrastructure for it. Kids bike in from villages surrounding towns and cites.
> They don't need access to suburban environments.
How are suburban environments stocked then? Surely village grocery stores are not stocked with milk one bike load at a time.
> Even in the inner city here trucks are banned outside of loading and unloading hours to foster a walk-able environment.
Sure. But they use the same infrastructure. The fact that the vehicles are built for different purposes and may have different regulations doesn’t mean the cost of infrastructure isn’t shared. Pervasiveness of roads makes it easy for cars, trucks, ambulances, buses, and even bikes to get around more easily.
Just like the pervasiveness of the Internet make it easy to scroll TikTok, purchase goods from Amazon, and read books through Project Gutenberg, even though those are very different use cases.
That's a really rude and dismissive take - the impact of cars has been immense, in particular the ways in which they've been given primacy as a mode of transport and the ways in which that necessity has interacted with our laws and infrastructure development (sabotaging of public rail transport, parking regulations and the creation of car-dependent suburbia, pedestrian safety, highway projects decimating communities of color, etc. etc. etc.).
To blithely state that nobody could make such a claim seriously is an attitude which actually has a really fitting term: carbrained.
It's a turn of phrase. The belief isn't being called unserious. The holders of the belief are. It's the "white collar speak" approved way of saying those people are dumb or otherwise not worthy of consideration.
"I don't know anyone who seriously thinks that stone applied to fibrous asphalt is not a fine roofing material"
"I do not know anyone who seriously thinks that 4000kcal/day is healthy in normal circumstances"
"I don't know anyone who seriously thinks that women are incapable of working outside the home"
"I do not know anyone who seriously thinks a bright red suit is appropriate for a funeral"
And on and on and on.
But we both already knew that. So if you're gonna be obtuse and not understand it I'm gonna be obtuse and explain it.
I don't know anyone who seriously thinks that one could just say "I don't know anyone who seriously thinks" something, and that would constitute a persuasive argument. :)
On top of that, the APIs/Tools/Function Calls into the real world don't exist yet. But consumer products are going to start eventually exposing functionality to these LLMs. By that time, I wonder if we'll all have an edge-inference box sitting in every one of our houses that we buy from a consumer products company like Apple or from Amazon, or directly from OpenAI or Anthropic. These little brains will be the low latency central nervous system of a lot of things in our homes, and gateways to the larger models in the cloud. Or at least that's how I imagine it sorting out in the future.
Previous generations of technological change on the scale we are told AI will reach also required major changes to the real world and new products to be built: new cell towers constructed, fibre cables laid, data centers built, personal computers produced, warehouses established. And software needed to be fundamentally rewritten to support each of those generations too. Yet the companies doing that work in those previous generations managed to produce huge profits significantly faster than generative AI has.
That's my biggest concern with it: I don't see the business case closing anywhere, and without businesses that actually make money, all the technology in the world doesn't actually do anything.
> And yet the companies doing that in those previous generations managed to produce huge profits significantly faster than Generative AI has.
Have you considered a simple answer to this inconsistency? The market and investors do not demand that these AI companies make a profit. The only reason companies are expected to make profits is that either those who own shares in the company expect it, or those willing to invest in the company expect it.
Comparing the IPO market today to the IPO market in the late 90s is not very instructive. You could have IPO'd a lemonade stand in 1998 and raised $10 million.
I'm using that only for AMZN because they seem to have chosen not to turn a profit and instead to expand their business. The other companies I mentioned were directly profitable by this point in their respective revolutions; for Amazon, I'm using the IPO as proof that they had a sustainable business, even if it wasn't precisely profitable - they were generating enough cash to be profitable, they just chose to reinvest it in the business. I don't see any evidence that any of the major generative AI companies are in that position, or the position that Apple, Netscape, Motorola, etc. were in.
And that's the weird one, all of the other examples I provided were booking real profits by this point in their technology cycle.
> The lead singer caught my eye and gave me a wide grin
Daft Punk doesn't have a singer and unless it was a very early show they wouldn't have seen them smile. Most big beat shows wouldn't have a dedicated vocalist. I'd guess Underworld or Prodigy, but lean toward Underworld.
I used Emacs for about a decade and then switched to VS Code about eight years ago. I was curious about the state of Claude Code integration with Emacs, so I installed it to try out a couple of the Claude packages. My old .emacs.d that I toiled many hours to build is somewhere on some old hard drive, so I decided to just use Claude code to configure Emacs from scratch with a set of sane defaults.
I proceeded to spend about 45 minutes configuring Emacs. Not because Claude struggled with it, but because Claude was amazing at it and I just kept pushing it well beyond sane default territory. It was weirdly enthralling to have Claude nail customizations that I wouldn't have even bothered trying back in the day due to my poor elisp skills. It was a genuinely fun little exercise. But I went back to VS Code.
Came to post exactly this, except it’s got me using Emacs again. I led myself into some mild psychosis where I attempted to mimic the Acme editor’s windowing system, but I recovered.
Yeah, and all the little quirks I had with Emacs here and there, or things I wished I had in my workflow, I can now just fix or add without worrying about spending too much time (well, except sometimes). The full Emacs potential I felt I wasn't using, I'm finally using, and now I get why Emacs is so awesome.
E.g. I work on a huge monorepo at this new company, and Emacs TRAMP was super slow to work with. With Claude's help, I figured out which packages were making it worse, added some optimizations (Magit, project find-file), hot-loaded caching onto some heavyweight operations (e.g. listing all files in the project) without making any changes to the packages themselves, and, for file listing, added keybindings to my minibuffer map to quickly filter down to the subproject I'm in. I could probably have done all this earlier as well, but it would definitely have taken much longer, since I was never deep into the elisp ecosystem.
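For flavor, "hot-loading caching without touching the package" is typically just an `:around` advice. A minimal sketch, assuming the heavyweight operation is `project-files` (the `my/` names are made up for illustration; a real version would also fold the extra arguments into the cache key):

```elisp
;; Memoize `project-files' per project root via advice,
;; without modifying project.el itself.
(defvar my/project-files-cache (make-hash-table :test #'equal))

(defun my/project-files-cached (orig-fn project &rest args)
  "Call ORIG-FN once per PROJECT root, then serve the cached file list."
  (let ((key (project-root project)))
    (or (gethash key my/project-files-cache)
        (puthash key (apply orig-fn project args)
                 my/project-files-cache))))

(advice-add 'project-files :around #'my/project-files-cached)

;; Invalidate by hand when the tree changes:
(defun my/project-files-cache-clear ()
  (interactive)
  (clrhash my/project-files-cache))
```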
> Emacs TRAMP was super slow to work with. With help of Claude, I figured out [...]
Out of curiosity, did it advise you to configure auto-save and backup such that they write their files under ~/.emacs.d, rather than in the same directory alongside the (with Tramp, potentially remote) file they're about? Especially with vanilla Emacs, that's always the first place you want to look when you see freezes doing file operations on a remote host over a slow or flaky link.
I believe I first added that change to my .emacs in 2010 or 2011, and as far as I can recall, it was the only change I ever needed to make to address Tramp being slow sometimes.
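For reference, the change is a few lines of standard Emacs configuration (the variables are real; the subdirectory names are just a choice):

```elisp
;; Make sure the local target directories exist.
(dolist (dir '("backups/" "auto-saves/" "tramp-auto-save/"))
  (make-directory (expand-file-name dir user-emacs-directory) t))

;; Write backup and auto-save files under ~/.emacs.d instead of next to
;; the visited file -- with Tramp, "next to the file" means extra remote
;; round trips on every save.
(setq backup-directory-alist
      `(("." . ,(expand-file-name "backups/" user-emacs-directory))))
(setq auto-save-file-name-transforms
      `((".*" ,(expand-file-name "auto-saves/" user-emacs-directory) t)))

;; Tramp also honors its own dedicated local auto-save directory.
(setq tramp-auto-save-directory
      (expand-file-name "tramp-auto-save/" user-emacs-directory))
```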
Hooking up Emacs to ECA with the emacs-eval MCP is fantastic - Claude can make changes in my active Emacs session, run the profiler, unload/reload things, log some computation or embark-export search results and show them in a buffer. It can play Tetris and drive itself crazy with M-x doctor - it's complete and utter bonkers. I can tell it to make some face color brighter or darker on the spot; the other day I fixed a posframe scaling issue that had bugged me for a long time. It's not even about "I don't know elisp" - this specific thing requires you to sit down and calculate the geometry of things - mechanical, boring stuff. AI did it in minutes. VS Code, IntelliJ, any other shit that has no Lisp REPL? What are you even talking about? It's like a different world.
I want to say this with the caveat that I am generally a person who always contends with the contradictions of living in a capitalist-imperialist country and my own distaste for it. So this doesn't come from a place of American exceptionalism writ large, but I am a firm believer that we did get this part right:
Public lands and a culture of access to wild places, whether for hunting, fishing, hiking, or camping - just generally an affordance of access to wilderness that is codified into the laws of the country. In Europe they have the concept of "Right to Roam," which is a powerful concept that I appreciate (and in ways is superior to our systems for just walking in the woods), but it is also fundamentally different from the almost legalistic systems we have in this country toward public lands.
My surface understanding of China is that there is no such broad remit given to the people of China and there aren't designated places where the people of China can just go and exist in wilderness. Such places might exist by convention but they don't have the sort of legal framework that we have in America to recreate in these places.
> As of 2022, the 42,826 protected areas covered 1,235,486 km2 (477,024 sq mi), or 13 percent of the land area of the United States.
Can you be more specific? China has areas of protected wilderness, and you can in fact go to many of them and be in nature. What's the practical difference?
Another comment said it, but that's basically land protected from most use, with some exceptions that are more akin to our national park system, right? I'm talking more about BLM lands in the west, or national forests in the east. Also, there are states with significant public lands holdings that are in the same spirit.
With our public lands, I can usually go to them anytime I want, I don't have to reserve anything. I can park my car, I can get out, and I can begin just walking into the woods or grasslands, sometimes on trail, sometimes off. I can basically camp wherever I want in many of these places. If there's a stream, I can fly fish. If it's hunting season, I can hunt. I can basically disappear into a place that feels wild for a bit.
I don't think that people who don't want to use these tools, or who cling to the old ways, are incurious. But I think those developers should face the fact that the skills and ways of working they are reticent to give up are more or less obviated at this point. Not in the future, but now. It's just that the adoption of these tools isn't evenly distributed yet.
I think there's a place for thoughtful dialogue around what this means for software engineering, but I don't think that's going to change anything at this point. If developers just don't want to participate in this new world, for whatever reason, I'm not judging them, but I also don't think the genie is going back in the bottle. There will be no movement to organize labor to protect us, and there will be no deus ex machina to reverse course on this stuff.
> I think these developers should face the fact that those skills and those ways they are reticent to give up are more or less obviated at this point.
Yes. We are this generation's highly skilled artisans, facing our own industrial revolution.
Just as the skilled textile workers and weavers of early 19th-century Britain were correct when they argued the new automated product was vastly inferior, it matters not at all. And just as they were also correct that the government of the day was doing nothing to protect the lives and livelihoods of those who had spent decades mastering a difficult set of professional skills (the middle class of the day), the government of this day will also do nothing.
And it doesn’t end with “IT”; anything that can be turned into a factory process with our new “thinking engines” will be. Perhaps we can do better in society this time around. I am not hopeful.
I think the analogy is directionally good, but it shortchanges the abstract and recursive nature of software.
We were already writing code that was automating not only manual work but also simpler programs. LLMs essentially just move us one more (large) hop up the abstraction ladder. And yes I get that it’s a different type of hop (non-deterministic, extremely leaky, etc), but it’s still a hop.
So if the only thing you want to do is manually write code in the traditional way (perhaps with vim instead of IntelliJ) then yeah I think you’re cooked. On the other hand, if you are willing to work with LLM-assisted tooling and learn how to compensate for its shortcomings then I think you’ll have a bright future.
A new technology comes out — admittedly one that’s extraordinarily capable at some things — and suddenly conventional software engineering is “more or less obviated at this point”? I’m sorry, but that’s really fucking dumb. Do you think LLMs are actually intelligent? Do you think their capabilities exceed the quality of their training corpus? Is there no longer any need to think about new software paradigms, build new frameworks, study computer science, because the regurgitated statistical version of programming is entirely good enough? After all, what’s code but a bunch of boring glue and other crap that’s used to prop up a product idea until a few bucks can be extracted from it?
Of course, there’s nothing wiser than tying the entirety of your career to a $20/month subscription (that will jump 10x in price as soon as the market is captured).
Is writing solved because LLMs can make something decently readable? Why say anything at all when LLMs can glob your ideas into a glitzy article in a couple of seconds?
I swear, some people in this field see no value in their programming work — like they’ve been dying to be product managers their entire lives. It is honestly baffling to me. All I see is a future full of horrifying security holes, heisenbugs, and performance regressions that absolutely no one understands. The Idiocracy of software. Fuck!
> Is there no longer any need to think about new software paradigms, build new frameworks, study computer science, because the regurgitated statistical version of programming is entirely good enough?
All I'm saying is you're gonna have to figure out how to do this with an agent. It's not that I don't see value in the craft; it's just that the value matters less now. As for new paradigms, new frameworks, and new studies in computer science: they'll still exist, it's just that they're going to focus on how to mitigate heisenbugs, performance regressions, and security holes in agent-written code. Who knows, in five years most of the code written may not even be readable. I'm not saying it's going to be like that, but it's entirely possible.
In the meantime, there's nothing stopping you from using the agent to write code that is every bit as high quality as if you sat down and typed it in yourself. And right now there is a category of engineers who exclusively use agents to create quality software, and they are more efficient at it than anybody who just does it themselves. That category is growing every day.
I may be out of a job in five years because of all of this. But I can see where this is going, and it's clear, so I'm going to have to change with it.
> you're gonna have to figure out how to do this with an agent
I'm really not, though, any more than I "had to" learn JavaScript 20 years ago or blockchains 5 years ago (neither of which I did). Hell, I still use Perl day-to-day.
Good for you! Most people will, though. If I hadn't learnt JavaScript, I couldn't work on a large chunk of the projects that put bread on the table for the past 5-10 years.
If most folks don't learn AI (or rather, its shortcomings and practicalities), then they will not be as competitive in the job market. Corpos don't care about the flaws.
That's not really true yet; most companies are being a lot more hesitant than individual developers (companies have legal and TCO worries the devs themselves don't). It's possible what you're describing will become true, but it's not the present.
> In the meantime, there's nothing stopping you from using the agent to write the code that is every bit as high quality as if you sat down and typed it in yourself.
“When you're in Hollywood and you're a comedian, everybody wants you to do things besides comedy. They say, ‘OK, you're a stand-up comedian — can you act? Can you write? Write us a script?’ It's as though if I were a cook and I worked my ass off to become a good cook, they said, ‘All right, you're a cook — can you farm?’”
—Mitch Hedberg
Agentic programming isn’t engineering: it’s a weird form of management where your workers don’t grow or learn and nobody really understands the system you’re building. That sounds like a hellish, pointless career and it’s not what I got into the field to do. So no thanks: I’ll just keep doing the kind of monkey engineering I find invaluable. Especially while most available models are owned and trained by authoritarian, billionaire, misanthropic cultists.
Fortunately, I am not beholden to some AI-pilled corporation for salary.
So how will progress be made in our field, in your scenario of the future?
It's both an obvious consequence of how they operate, and an easily observable reality, that even the best models utterly fail at any task that is even slightly outside of the space spanned by their training set. In this future, software in 2022 was as good and as capable as it was ever going to get. In which case -- fuck, I had higher hopes for what computers and software would be able to achieve when I started in this business 20 years ago, than this sorry state of affairs. We were finally getting some traction on the idea that we urgently need to work on security, reliability, stability, etc., and suddenly we're all supposed to be excited about heading full speed the other way.
I'm using Claude every day, and it definitely makes me faster, but I'm also able to give it a lot of very specific instructions and correct a lot of mistakes quickly, because I look at the code and understand what it's doing, and I'm asking it to write code in domains I understand. So I don't think these skills are obsolete at all. If anything, keeping them sharp is the only differentiator we have. "Agentic Engineering" is as much a joke as "Vibe Coding" in my mind. The tools are powerful, but they don't make up for knowing how to code, and if you're just blindly trusting them, it's going to end badly.
>I'm using Claude every day, and it definitely makes me faster but..
I see a lot of posts about this, and I see a lot of studies, also on HN, that show that this isn't the case.
Now of course, the "this isn't the case" finding is statistical, so there can be individual developers who are faster. It can also be that an individual developer is sometimes faster and sometimes not, but the times they are faster are so clearly faster that they hide the times they're not. Statistics of performance over a number of developers can flatten things out. But I don't know that that is the case.
So my question for you, and everyone who claims it makes them so perceptibly and clearly faster: how do you know? Given all the studies showing that it doesn't make you faster, how are you so sure it does?
It's incredibly frustrating arguing these same points, over and over, every time that this comes up. You're asking people who are experienced developers absolutely chewing through checklists and peeking at HN while compiling/procrastinating/eating a sandwich/waiting for a prompt to finish to not just explain but quantify what is plainly obvious to those people, every day. You want us to bring paper receipts, like we have some incentive to lie to you.
From our perspective, the gains are so obvious that it really does feel like you must just be doing something fundamentally wrong not to see the same wins.
So when someone says "I can't make it do the magic that you're seeing" it makes me wonder why you don't have a long list of projects that you've never gotten around to because life gets in the way.
Because... if you don't have that list, to us that translates as painfully incurious. It's inconceivable that you don't have such a list because just being a geek in this moment should be enough that you constantly notice things that you'd like to try. If you don't have that, it's like when someone tells you that they don't have an inner monologue. You don't love them any less, but it's very hard not to look at them a bit differently.
>It's incredibly frustrating arguing these same points, over and over,
Quite frankly, there seems to be something incredibly frustrating going on in your life, but I'm not sure the underlying cause of whatever is weighing on your mind at the moment is that I asked "how do you know that what you are feeling is actually true, compared to what studies show should be true?" (rephrased, as it's not reasonable to quote the whole post)
>From our perspective, the gains are so obvious that it really does feel like you must just be doing something fundamentally wrong not to see the same wins.
From my perspective, when I think I am experiencing something that data from multiple sources tell me is not actually happening, I try to figure out how I can prove what I am experiencing. I reflect on myself: have I somehow deluded myself? No? Then how do I prove it, when analysis of many situations similar to my own shows a different result?
You seem to think what I mean is people saying "Claude didn't help me, it wasn't worth it." No. Just to clarify (although I thought it was really clear), I am talking about the numerous studies posted on HN, which I'm sure you must have seen, where productivity gains from coding agents do not seem to actually show up in the work of those who use them. Studies conducted by third parties observing the work, not claims made by the people performing the work.
I'm not going to go through the rest of your post. I get the urge to be insulting, especially as a stress release if you've had a particularly bad time recently. But frankly, statistically speaking, my life is almost certainly significantly worse than yours, and for that reason (though not that reason alone) I will quite confidently state, with hardly any knowledge of you specifically but with knowledge of my own life and of the people I've met throughout it, that my list dwarfs yours.
Putting it succinctly, these kinds of conversations feel weird because it's like asking whether carpenters are faster using power tools or hand tools. If you've used power tools, it's obvious they make work a lot faster. Maybe there were studies around the time power tools were introduced looking at the productivity of carpenters. If those studies said the productivity gains weren't visible in the data, that means there's a problem with the study and the data collected (which is understandable: accurately measuring imprecise things like productivity is really hard). You have to look at the evidence in front of you, though. Try telling the guy with a chainsaw that he's actually no more productive than when he was using an axe, and he'll laugh at you.
This takes the cake for one of the strangest replies I've ever received on here.
I'm not sure how or indeed why you draw lines from what I said to my life situation... which is relevant how?
What I apparently did not do a good enough job of conveying is that those "data from multiple sources" get cited and then people immediately reply with "those are old studies". It's circular in the same way that arguing with anti-vax people is circular.
The difference is that unlike vaccines, it's very easy for someone to see how productive they are when using LLMs properly. It's not a subtle difference.
Hence the frustration with people who keep insisting that we're imagining our own productivity. It's not a good faith inquiry.
OK, glad to hear I was mistaken, but it certainly seemed like about halfway through your first response you went off the rails and decided to take my question as some sort of personal affront. It was not the strangest response I've had on HN, but it was one of the strangest. I could go through a full analysis of why I thought "this guy is having problems," but that would take a long time, and since you say you aren't, I guess it isn't particularly useful.
I guess we aren't going to get anywhere meaningful between us on this subject, because you seem to think it is like arguing with an anti-vaxxer, which, funnily enough, is what I thought too.
So fine, you experience a gain, you just do, and it is so clear and evident that you don't need to guard against being deluded, despite studies suggesting the gain is not there. That seems crazy to me; I would doubt and want to verify my gain if I read a study suggesting it was illusory. No meaningful convergence seems possible between needing verification and not needing it.
I like remus' comment to your previous message; you're telling a guy with a chainsaw who is busy chopping down trees at lightning speed that he should stop and defend his daily experience against some studies that suggest tree chopping speeds are not what they seem.
At some point you just have to shrug and get back to work chopping down 3-5x more trees than you did last year.
For instance, there is a lot of evidence (and intuition, frankly) for the argument that while LLMs increase superficial, short-term productivity, they also cause an extreme accumulation of technical debt that may more than wipe out any initial fast progress down the line.
> It's incredibly frustrating arguing these same points, over and over, every time that this comes up. You're asking people who are experienced developers absolutely chewing through checklists and peeking at HN while compiling/procrastinating/eating a sandwich/waiting for a prompt to finish to not just explain but quantify what is plainly obvious to those people, every day. You want us to bring paper receipts, like we have some incentive to lie to you.
This puts what I have been feeling in the recent months into words pretty concisely!
Of course, I still have to pay attention to what the AI is doing, and figure out ways to automate more code checks, but the gradual trend in my own life is more AI, not less: https://blog.kronis.dev/blog/i-blew-through-24-million-token... (though letting it run unconstrained/unsupervised is a mess; I generally like to make Claude Code create a plan and iterate on it with Opus 4.6, then fire off a review. Since getting the Max subscription I don't really need Cerebras or other providers, though I still appreciate them.)
At the same time I've seen people get really bad results with AI, often on smaller models, or just expecting to give it vague instructions and get good results, with no automated linters or prebuild checks in place, or just copying snippets with no further context in some random chat session.
Who knows, maybe there's a learning curve and a certain mindset you need to have to get a benefit from the technology, such that like 80% of developers will see marginal gains or even detriment, which will show up in most of the current studies. A bit like how, for a while, microservices and serverless were all the architectural rage and most people did an absolutely shit job of implementing them, before (hopefully) enough collective wisdom was gained about HOW to use the technology and when.
Totally! Though I maintain that the only good aspect to microservices is that krazam video. You know the one.
I do get frustrated when I see people not using Plan steps, copy/pasting from web front-ends or expecting to one-shot their entire codebase from a single dense prompt. It's problematic because it's not immediately obvious whether someone is still arguing like it's late 2024, you know what I mean?
Also, speaking for myself I can't recommend that anyone use anything but Opus 4.5 right now. 4.6 has a larger context window, but it's crazy expensive when that context window gets actually used even while most agree that these models get dumber when they have a super-large context. 4.5 actually scores slightly better than 4.6 on agentic development, too! But using less powerful models is literally using tools that are much more likely to produce the sorts of results that skeptics think apply across the board.
Haven't looked into 4.5 vs 4.6 in depth (since the latter seems good for my needs), but
> but it's crazy expensive
was something I struggled with until just going for the Max subscription and cancelling my other ones.
I'm not sure what Anthropic is doing, but they're either making truckloads of money from those paying per-token (especially since you're not supposed to use subscriptions for server use cases --> devs can use Claude Code, but not code review bots etc.), or heavily subsidizing subscriptions.
100 USD is worth it for me, I've only hit the 5 hour limits a few times, and haven't hit 100% of the weekly limits once. I fear to think how much comparable usage with any of the Opus models would have been, if I were to pay per token - even Sonnet could get similarly expensive.
I don't get/like/want Claude Code. I do everything in Cursor, and I am very happy. I recommend it! And there's no time-based limits. You get deeply discounted API calls included in your monthly subscription, and then overage is billed at the same discounted rate. It's essentially committing to an "at least" amount per month in exchange for a preferred rate.
I have a USD$200/month Cursor plan, and I do hundreds of hours worth of Opus 4.5 prompting with it every month. I tend to pay $250-300 a month after overages, and I consider myself a heavy user. During Opus 4.1 days, one month I paid $700. 4.5 got substantially cheaper and smarter, and I consider that the real moment agentic coding got real.
I don't know your financial situation and I recognize that $300/month is more than much of the world makes in a month. I am just saying that for me, what I'm working on is important enough that I am absolutely willing to pay a premium for access to the best tooling available, because every dollar I spend represents literally an hour of my time. Maybe more? It's so incredibly cheap compared to hiring an unreliable human who needs to sleep.
You can't pay someone $3600/year to lick stamps, much less pair program application development.
That's pretty cool! I haven't really been a heavy user of Cursor, but found Cline/RooCode/KiloCode in VSC to be pretty good, while letting me preserve my existing setup and also easily switch between multiple providers, sometimes in the middle of some work, to let another model check the output of the first one!
I think the most I ever spent per month was 300 USD, but I had to cut down on that, and Anthropic's subscription is way more affordable than paying per token (alongside GitHub Copilot, which also has multiple-model support and pretty generous limits, plus unlimited autocomplete). I'm also helping a friend with expenses during their chemo, and some other friends with meds and such, and even though policemen and teachers and others have way worse financial circumstances than software devs in Latvia, the economy here doesn't give much breathing room for that kind of thing.
Oh for a while I was also using Cerebras Code which gives you really generous token limits (like 24M per day on the 50 USD per month tier), though the GLM 4.7 model I tried out still made me go back and work on fixing its output more often than I'd like. Eventually I kinda settled on SOTA.
That said, I do remember a post here on HN where some founders were thinking whether they should throw something like over 1000 USD at Anthropic (the API variety) per month and they realized that for them that amount of money was totally reasonable, compared to getting some junior devs or whatever.
I read that same post, and for me it wasn't just something I remember; it had a profound impact on how I came to be typing at you casually about how I have spent up to $700 a month on Opus tokens in Cursor (which absolutely lets you switch between providers... I just really like Opus 4.5!)
To me, all of the switching between dev environments, plus all of the time spent undoing errors caused by less powerful models, has a huge time cost. Not to be cliche, but that means it's very expensive to use error-prone models and obsess over trying all of the new half-baked things (I've never even heard of most of the stuff you mentioned, lol). Like, if I spend an hour of my time mucking around with some tool, that's a good chunk of the $200/month I commit to Cursor.
Anyhow, at the real risk of sounding like an unpaid Cursor salesman, IMO it's worth every penny. For me, the jury is still out on whether people find Opus 4.6's 5x context to be valuable enough to pay significantly more for it over 4.5, which again is rated as being slightly better at agentic coding than 4.6. Since agentic coding is what I do....
I'm a principal engineer, been working on the same set of codebases for almost 10 years. I handle the 20% or so of my time that constitutes inbound faster than ever and I know because that inbound volume has clearly increased and yet I have, for the first time ever, begun chipping away at the "nice to have" backlog. My biggest time sink now is interviewing and code reviews -- the latter being directly proportional to the velocity increase across the teams I work with. Actually that's my biggest concern -- we are approaching a breaking point for code review volume.
Sorry I don't have DX stats or token usage stats I can share, but based on the directives from on high, those stats are highly correlated (in the positive).
I assume "inbound volume clearly increased" means something like "we've been handling more tickets than ever before over the last few quarters."
I've read this code review point before, and it tends to show up in those studies suggesting the whole process takes the same amount of time. But for that to be the case, individual code reviews would have to take longer, whereas for you it is just a volume increase from more tickets being pushed through.
Is there anything particular about your ticketing strategy? For example, do you make your tickets much more atomic than many teams who say they do, but end up with things that could be split into two or three tickets? How much time do you spend preparing tickets to be ready for development / ready for AI?
Just trying to identify behavioral patterns in your successful usage that would explain the success. Given the example of throughput of tickets over long time I suppose we can assume that the gain is not illusory.
> everyone that claims it makes them so perceptibly and clearly faster - how do you know?
For me, AI tools act like supercharged code search and auto complete. I have been able to make changes in complex components that I have rarely worked on. It saved me a week of effort to find the exact API calls that will do what I needed. The AI tool wrote the code and I only had to act as a reviewer. Of course I am familiar with the entire project and I knew the shape of the code to expect. But it saved me from digging out the exact details.
Fair question! I've wondered that myself; there is always the possibility that the productivity gain is in my head. I'm not AI-pilled. If these things disappeared tomorrow I would probably just shrug; I'm just trying to keep up to date.
Where I find it makes me faster is in dealing with writing low value code that's repetitive, which I might normally procrastinate. Like, the thing I'm working on is a data editor that generates a lot of fields, so having it churn out a lot of samey react code is useful to me in that context. There's already an obvious pattern for the tools to follow.
I also find it useful for "rubber ducking". Bouncing ideas I might previously have bugged a colleague about.
By faster I'm not suggesting a fanciful number for me. Maybe like 10 to 20 percent if I were to guess.
> I see a lot of posts about this, and I see a lot studies, also on HN, that show that this isn't the case.
Most of these studies were done one or more years ago, and predate the deployment and adoption of RLHF-based systems like Claude. Add to that, the AI of today is likely as bad as it's ever going to be (i.e., it's only going to get better). Though I do think the 10x claims are probably unfounded.
I mean, obviously studies will always be a little bit behind the models one reads about, so this is one of the claims I sometimes see about these studies: that they are out of date, and that with the new models they would find otherwise. But then that is one of the recurring claims about LLMs in general, that the newest model fixes whatever issue one is complaining about. And then the claim gets reiterated.
The thing is, when I use an AI I sort of feel these gains, but no greatness. It's like: wow, it would have taken me days to write all this reasonable, albeit sort of mediocre, code. That is definitely a productivity gain, because a lot of the time you need to write just mediocre code. But there are parts I would not have written like that. So if I go through fixing all those parts, how much of a gain did I actually get?
Like most posters on HN, I am a conceited jerk, so I can claim that I have worked with lots of mediocre programmers (while ignoring the times I was mediocre, by thinking "oh, that didn't count, I followed the documentation and used the API the way it was suggested, and that was a stupid thing to do") and I certainly didn't fix everything they did, because there just weren't enough hours in the day.
And they did build stuff that worked, much of the time, and now I've got an automated version of that. Sweet. But how do I quantify the productivity, when there are claims, put forth with statistical backing, that the productivity is illusory?
This is just one of those things that tend to affect me badly, I think X is happening, study shows X does not happen. Am I drinking too much Kool-Aid here or is X really happening!!? How to prove it!!? It is the kind of theoretical, logical problem seemingly designed to drive me out of my gourd.
Well, no, not with that attitude there won’t! I am not trying to insinuate that there is a conspiracy, or that posts like yours are part of it, but there has been a huge wave of posts and comments since February which narrow the Overton window to the distance between “it’s here and it’s great” and “I’m sad but it’s inevitable”.
Humanity has possessed nuclear weapons for 80 years and has used them exactly twice in anger, at the very beginning of that span. We can in fact just NOT do things! Not every world-beating technology takes off, for one reason or another. Supersonic airliners. Eugenics. Betamax.
The best time to air concerns was yesterday. The next best time is today. I think we technologists wildly overestimate public understanding and underestimate public distrust of our work and of “AI” specifically. We’ve got CEOs stating that LLMs are a bigger deal than nuclear weapons or fire(!) and yet getting upset that the government wants control of their use. We’ve got giddy thinkpieces from people (real example from LinkedIn!) who believe we’ll hit 100% white collar unemployment in 5 years and wrap up by saying they’re “5% nervous and 95% excited”. If that’s what they really think, and how they really feel, it’s psychopathic! Those numbers get you a social scene that’ll make the French Revolution look like a tea party. (“And honestly? I’m here for it.”)
So no, while I _think_ you’re correct, I don’t accept the inevitability of it all. There are possibilities I don’t want to see closed off (maybe data finally really is the new oil, and that’s the basis for a planetary sovereign wealth fund. Maybe every man, woman, and child who ever wrote a book or a program or an internet comment deserves a royalty check in the mail each month!) just yet.
I agree with you on that. Not just on AI, but on a lot of things that suck about this world, and in particular the United States. But capital is too powerful. And these tools are legitimately transformative for business. And business pays our bills and, more importantly, provides the healthcare insurance for our families. The wheel is a real fucking drag, isn't it?
I don't see anything short of a larger revolution against capital stopping or even stemming this. For that to really happen we would need a lot more people and interests than just those of software practitioners. Which may come yet when trucking jobs collapse and customer service jobs disappear. I don't know. I do know that I'm taking part in something that will potentially (likely?) seed the end of my career as I know it but it's just one of many contradictions that I live with. In the meantime the tools are impressive and I'm just figuring out how to live with them and do good work with them and as you can probably tell, I'm pretty convinced that's the best we can make of the situation right now.
If it helps, I would consider learning about and deploying local LLMs (llama.cpp etc), if you have the hardware (and it doesn't take all that much hardware at all). They're nowhere near as fast as LLM providers, but they come fairly close in overall quality (including agentic workflows).
The reason that I suggest this is that not only does running locally democratize AI (while AI democratizes software), it solves a lot of the discomfort surrounding the new reliance, dependence, data, and trust given to LLM providers. Basically it removes a lot of the "ick" surrounding LLMs, while also hedging any bets that the investment schemes of LLM providers are unsustainable. You may also hedge bets surrounding "owning" the maintenance of the machines that replace you.
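To make the suggestion concrete, here's a minimal sketch of standing up a local model with llama.cpp's bundled server. This assumes you've already downloaded a quantized GGUF model; the model filename and directory here are placeholders, and build options vary by hardware:

```shell
# Build llama.cpp from source (CPU-only by default;
# pass e.g. -DGGML_CUDA=ON to cmake for NVIDIA GPU offload)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Serve a local model with an OpenAI-compatible HTTP API on port 8080
# (./models/model.gguf is a placeholder for your downloaded model)
./build/bin/llama-server -m ./models/model.gguf --port 8080

# From another terminal: quick smoke test against the chat endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```

Because the server speaks the OpenAI chat-completions format, many existing tools and editor integrations can be pointed at `http://localhost:8080` instead of a hosted provider, which is most of what makes this a practical hedge.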
I don't know, it's a small thing that makes me feel a little better during this tumultuous time.
100% this. I don't know why we think that pouring trillions of dollars into something we barely understand, to create an economic revolution that is almost certainly awful, is at all "inevitable". We just need leaders who aren't complete idiots. I'm generally cynical, but I do see that normies (i.e., not in tech) are waking up a bit. I don't think the technology is inherently a bad thing, but the people who think we should just do this as fast as possible to win "the race" should be shot into space as far as I'm concerned. To start with, we need a working SEC that can actually punish the grifting CEOs who are using fear to manipulate markets.
I'm still going to need at least one of my vendors to speed up their release pace before I'll believe that. I'm seeing a ton of churn and no actual new product.
Things are changing so fast and so chaotically with this technology. I'm also writing everything now using Claude Code, and I've been thinking a lot about what this means for my work moving forward. One thing I've noticed is that I will just keep hammering and hammering on my work until I force myself to quit. Even on the weekend I feel the pull to go work on it. I'm just less mentally exhausted by work, I suppose, but I don't think that's particularly healthy if it leads to me working way more than I should. On one hand, I think that's a reflection of how powerful and exciting this technology is; on the other hand, I think it triggers some different kind of reward function in my mind that I'm not used to.
In any case, I think if one wants to continue to have a career in this industry for years to come, it's basically table stakes to become fluent in using these tools.
If you are talking about a consumer product, one of these is not like the others.