Hacker News | hyperpape's comments

That's a real question; maybe the changes are useful, though I think I'd like to see some examples. I do not trust cognitive complexity metrics, but it is a little interesting that the changes seem to reliably increase cognitive complexity.

I haven't previously thought about this, but I think words over a commutative monoid are equivalent to a vector of non-negative integers, at which point you have vector addition systems, and I believe those are decidable, though still computationally incredibly hard: https://www.quantamagazine.org/an-easy-sounding-problem-yiel....
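The word-to-vector correspondence above is the Parikh image of a word: over a commutative monoid, a word is determined by how many times each generator occurs, so concatenation becomes vector addition. A minimal sketch (the three-letter alphabet here is hypothetical):

```python
from collections import Counter

def parikh(word: str, alphabet: str = "abc") -> tuple:
    # Map a word to its Parikh vector: the count of each symbol.
    # Over a commutative monoid, the word is determined by these counts alone.
    counts = Counter(word)
    return tuple(counts[ch] for ch in alphabet)

# Concatenation of words corresponds to component-wise vector addition:
u, v = "abca", "bcc"
assert parikh(u + v) == tuple(x + y for x, y in zip(parikh(u), parikh(v)))
```

This is why questions about such words reduce to questions about tuples of non-negative integers, i.e. the setting of vector addition systems.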

Thanks, that's an interesting tidbit!

(The whole thing made me think about applications to SQL query optimizers, although I'm not sure if it's practically useful for anything.)


With all due respect, this is completely wrong.

There is a difference in that someone smoking nearby automatically harms the people around them. With alcohol, the effect is more unpredictable, but it is equally real.

Alcohol is a factor in many automobile crashes, and in a significant proportion of violent crime, especially domestic violence (https://www.cato-unbound.org/2008/09/17/mark-kleiman/taxatio... edit: this source isn't as great, Kleiman has written elsewhere about the subject, but google is failing me). If we could wave a magic wand and cause drinking to cease to exist, many lives would be saved.

Note: I do in fact drink, I am not a teetotaler. But what I said above is factual. I personally believe that prohibition would be worse, and it's reasonable for individuals to make their own choices. But that does not entail denying that it goes very badly for many.


Second-hand smoke does affect people around you. It is how people get addicted to nicotine. It is how new smokers are created.

And there are some people who are more sensitive to temporary exposure to smoke (and pollution in general) than others. That is why smoking tends to be banned around hospitals and day care centers: those are places where you will find those people. My father was one of them, after he had his larynx removed for throat cancer after having smoked for decades. He could not bear being subjected to even small amounts of second-hand smoke, because the breathing hole in his throat would get irritated, fill up with mucus, and have to be cleaned with a suction device.

And if you drink alcohol next to me, it does not make my clothes and my hair stink so much afterwards that I will want to wash my hair and change my clothes before going to bed.


No, but the person drinking next to you can suddenly decide you gave them a bad look and decide to pick a fight.

Why are you replying as if I denied second hand smoke harms people? I very clearly said it did.

This is a great piece of data, but only a piece of the actual question that we need to answer, which is:

For a given input, how many tokens will be used for an answer, and how high quality will that answer be?

Measuring the tokenizer is just one input into the cost-benefit tradeoff.
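As a rough sketch of that cost-benefit framing (the token counts and per-million-token prices below are made up for illustration, not any real model's):

```python
def answer_cost(input_tokens: int, output_tokens: int,
                price_in: float, price_out: float) -> float:
    """Dollar cost of one answer, given per-million-token prices."""
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# Hypothetical: a model with a cheaper tokenizer can still cost more per
# answer if it spends many more output tokens to reach the same quality.
terse = answer_cost(2_000, 500, 3.0, 15.0)
verbose = answer_cost(1_500, 4_000, 3.0, 15.0)
```

The tokenizer only affects the token counts; the per-answer cost, and whether the answer is any good, is what actually matters.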


This is an interesting analysis, but "are the costs of AI agents also rising exponentially?" is a very bad question that this doesn't answer.

What's rising exponentially is the price of the most ambitious thing cutting edge agents can do.

But to answer whether the cost of AI agents is rising in general, you would take a fixed set of problems, and for each of them, ask "once it's solvable, how does the price change?"

For that latter question, there isn't a lot of data in these charts because there aren't enough curves for models of the same family over time, but it does look like there are a number of points where newer models solve the same problems at lower prices. Look at GPT5 vs. the older GPT models--the curve for GPT5 is shifted left.


The cost of models is decreasing almost exponentially with time.

The author commits a non sequitur by muddling two concepts of time. They say costs are getting “unsustainable”, which is not a conclusion that follows.

What is true is that at a given point in time, the cost to perform a task is exponentially related to the human time taken. But that does not mean it will remain that way; far from it.


His book appears to have been published in 1942 or earlier: https://time.com/archive/6786636/books-biography-in-pictures....

I uploaded the book here, though I can't find that quote or the photo in it:

https://transfer.it/t/wCLoeh9XEZrZ


> Founders who started pre-2025 typically have built a technical stack optimized for a world where software development was bespoke and expensive.

Of all the things that AI has changed, tech stacks aren't one of them. The bots will gladly write Typescript, Java, Python, Rust, what have you. They could not give less of a shit.


I caught that too.

What is he getting at? How does the code and infra stack differ at all between a company that is using AI, vs one that is not?


Here's my take on what he was getting at:

Build vs. buy is an eternal question in enterprises. I remember many in-house data teams trying to build tools for "digital transformation" and cloud migration about 10 years ago. The challenge was, building those tools was more expensive than those enterprises could budget for (IT as cost center), so a startup like Snowflake would easily outcompete in-house solutions with their custom, cloud-based tech stack that was necessarily complex because it needed to serve the needs of thousands of customers.

If he's right, the build vs. buy equation has shifted more towards build, at least as far as enterprise software is concerned. IT is still a cost center, but in theory an internal team can now handle more requests for custom tools without looking to outside vendors. Essentially the cost of building in-house might be collapsing and therefore enterprise software startups will be serving fewer customers (who would all pay you more because if solving the problem was cheap they'd do it).

If you had to build a stack for dozens of customers paying huge amounts of money, how would that stack differ from the stack you'd build to serve thousands of customers? Certainly it wouldn't need to be as scalable! And that's probably what he's getting at. I think what you'd do instead, to capture those higher price point customers, is solve their problems more specifically, in a higher value manner.

Many companies already do this, investing far more in field engineers than they do in their tech stack, since customization is essential.


Thanks, this is a good explanation, though I would not have phrased it the way he did.

As blind as my belief that Asia exists, because I haven't personally navigated there. Hell, I've used electricity (using it right now), but I couldn't do the experiments you need to do to get myself to an 1850s level of understanding of how it works, much less our current level.

I trust that Linux has a process. I do not believe it is perfect. But it gives me a better assurance than downloading random packages from PyPi (though I believe that the most recent release of any random package on PyPi is still more likely safe than not--it's just a numbers game).


I get what you are saying, but as you said, if you are already under attack you can't trust your own computer; you just hope that you aren't downloading another exploit or bogus update. Real software, I imagine, is not so easy to pwn so completely, but I don't know.



1. They maintain and sell one of the largest relational databases.

2. They're the primary maintainer of one of the largest programming languages.

3. They do tons of HR/ERP type software.

4. They have a supply chain division (my company is a direct competitor, and we have 2000 employees--it's a drop in the bucket, but a few thousand here, a few thousand there and it starts to add up. Afaik, their supply chain org is bigger than ours).

5. Other things I probably don't know about.

Many of these things come with swarms of consultants who implement the software for companies that don't have any internal technical competency, which swells the number of workers by a lot.

Don't get me wrong, I'm not remotely a fan, I like to quote Bryan Cantrill's rant. However, they do a lot of things.


>> Many of these things come with swarms of consultants who implement the software for companies that don't have any internal technical competency,

I have some anecdotal evidence for this. I worked at a medium sized family owned business. They were going through a massive ERP upgrade/replacement. One of the bids was from Oracle. The company was able to essentially test drive each vendor they were reviewing to see if the software was going to be a good fit.

Oracle's sales team was like having a football team on site. They sent over no fewer than 20 people to swarm our pretty small office, barge into the dev spaces, and generally annoy the fuck out of everybody for several months. The other vendors? They sent one, maybe two people to work alongside us as we test drove their software.

It was funny being in those meetings listening to people talk about the Oracle people. Nobody even remembered how good or bad their software was. Every single comment was about how overbearing and pushy their sales people were.

Needless to say, we went with a different company.


That sales process is directly tied to the type of customer they're aiming for, which is larger than a "medium-sized family-owned business".

They misaligned here, but for someone like Boeing or United, they'd go gaga over the footy crowd.


They also own multiple other huge companies that have tens of thousands of their own employees working in completely different areas (NetSuite, Cerner, Acme, etc.).


6. Lawyers


"The first thing we do, let's AI all the lawyers" ?


Also their cloud

And all the supporting legal team of course.


No better proof that they're a huge company than that I could forget about an entire public cloud offering. Good point.


If I did my Python right, from 2010 to 2020 they grew by 2.5% annually; from 2020 to 2025, they grew headcount by 3.7% annually.

After the layoffs, they'll apparently now have grown by 1.0% annually since 2020.

So yes, from 2021 to 2023, they had a huge spike, but overall, it's a net slowdown in growth relative to the 2010-2020 period.

If this were about reversion to the old pattern, they'd have done a smaller set of layoffs or simply waited for a few years of zero growth.
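For anyone who wants to redo the arithmetic: the growth rates above are compound annual growth rates. A minimal sketch (the headcounts below are placeholders, not Oracle's actual figures):

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two headcounts."""
    return (end / start) ** (1 / years) - 1

# Placeholder figures: 21% total growth over two years is 10% annually.
rate = cagr(100_000, 121_000, 2)  # roughly 0.10
```

Note that the result is sensitive to the endpoints you pick, which is exactly the dispute in the replies below.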


Or a pickup from 2015-2021, which was 0% growth.

It's tricky to pick an end-of-decade year also - recessions tend to happen within +/- 2 years of the end of each decade in the USA, or at least have done since records began in the 19th century. For example, 2010 was recovery over 2008/2009's bust. It's not like comparing March to March for a crude seasonal adjustment.


You did the Python right but the analysis wrong. Looking at it on a graph, you can see that fitting a single growth rate to the entire period (even if you stop pre-COVID) doesn't make sense.

You can see linear growth from 2010-2017. Then slow decline or at best a flatline from 2018-2021. Then they went crazy in 2022-2025.

Now if we just do 162k - 30k, we are back to 132k, basically the same ballpark as pre-COVID.


That's not how stocks are measured on Wall Street. They picked the dumb metric.

