Hacker News | dangus's comments

I’ll go further: there should be laws addressing account consolidation. Getting banned from an Apple or Google account is an incredibly wide blast radius. It would be like being banned from buying Unilever or Nestle food from your grocery store.

Email providers should be utilities and also legally require a warrant before disclosing any information whatsoever to the government.

Unfortunately the government is full of corrupt geriatrics who do not understand technology and are paid to continue not understanding technology as they sign bills prepared for them by ALEC.


Per your last paragraph, I also think we are in an awkward middle period where developers are embarrassed to admit how much code is vibes with very little review before they submit.

The embarrassment is understandable. It feels wrong, because in many ways it is wrong.

The only way I’ve had this feel any better is by using it on a non-critical internal tool. I can confidently say “I didn’t write any of this code because it’s a quality of life tool that only lives on developer machines and is not required at any point in our workflow.”

I also agree with the article that, unless computer science departments maintain some pretty strict discipline, this idea of a seniority collapse could be very real.

Will we need those senior engineers if AI keeps getting better? I don’t know. Maybe one day the AI systems are going to just be trusted to be able to untangle complex architectural problems.

If it wasn’t for leaded gasoline, rudimentary cancer treatment, and a good section of my modern video game catalog, I might be wishing I was born earlier.


Ads are imminent, TOS just changed to allow them, and free users will get trash models that are net positive profitable after ads. Better to just leave now.

I disagree with this idea on some level.

I don’t think AI is very good at “plan” compared to someone who is actually experienced with a toolset.

I think the more pertinent “falling behind” aspect for most people is the assumption that the models can’t do complex work. I.e., many people using the AI tools limit what they ask for because they are afraid of getting back bad code they have to fix.

It’s also important to be incredibly specific on what you want wherever possible.

I’ve also tried multiple agent workflows and have found them to generally be tiring and cluttered more than helpful.

Here’s a real life pro tip for you: don’t rush to become amazingly more efficient when a new tool comes along. The only benefactor of that attitude is your employer. I’d rather my employer think that AI is giving them a ~10% boost at best while my workload stays the same. I have a family, I don’t live to work, I work because I have to. Crazy brain-melting shit like multi-agent workflows is antithetical to that.

My last bit of feedback is that this reads too much like a LinkedIn post.

I see now that you’re a director of engineering, and so I now understand the LinkedIn influencer style going on here. Since you’re in a position of leadership, take my advice: don’t expect your ICs to pick up these insane thought leadership workflows that sound amazing on paper but end up causing pain, burnout, and low product quality for the engineers on your team who are actually in the trenches doing the work.

No, you won’t magically get 10x engineers and get to make your CTO happy. Don’t treat AI like a magic pill.

When the Covid-era tech overhiring correction ends, your best employees who have spent 2023-2026 getting squeezed and burned out but haven’t quit due to the job market will be the first to leave when the job market inevitably rebounds. These are the engineers whose dumbass bosses shove AI down their throats and tell them they aren’t agentmaxxing sloperator code enough.


I appreciate you taking the time to write this. There are a few fair points here.

I agree that AI is not automatically better at planning than an experienced engineer. In fact, I would never outsource planning blindly. My point is not that AI replaces thinking. It is that planning becomes a collaborative loop. The engineer still owns the judgment. I also strongly believe that human engineers are not going to be replaced by AI.

I also agree that many people under-ask AI because they assume it cannot handle complex work. That hesitation is real. In my experience, the bigger unlock is not complexity, but clarity. The more specific and constrained you are, the better the output. That part is 100 percent true.

On multi-agent workflows feeling tiring and cluttered, I understand that too. If it feels like mental overload, it is probably poorly designed. Multi-agent setups are not meant to increase cognitive stress. They are meant to reduce context switching and batch certain types of work. If they create chaos, the workflow needs redesign, not more pressure.

I also want to be very clear: this is not about squeezing more output from engineers or chasing 10x productivity. I do not believe in “AI as a magic pill” thinking.

Efficiency gains should create space, not pressure. I believe that if we can offload more things to AI, we have more space to do more innovative things.

If AI gives a team leverage, that leverage should go into better design, stronger testing, less firefighting, more sustainable pacing. Not into compressing people’s lives.

The last thing I want is engineers feeling like they burn out chasing some productivity narrative. That is not healthy, and it is not sustainable.

My article is about discipline in workflow, not about forcing intensity. The goal is long-term system quality and personal clarity, not squeezing hours out of people.

Thanks again for sharing, it is a good perspective and I really appreciate it.



I think you’re just describing how it’s circular.

It’s like Toys R Us not having enough money to pay Mattel for Barbie dolls and telling Mattel they can have partial ownership of the company if they just supply them with some more toys.

But the problem is that Toys R Us is spending $15, $20, or maybe even $50 (who knows?) to sell a $10 toy.

Toys R Us continues selling toys faster and faster despite a lack of profit, making Mattel even more dependent on Toys R Us as a customer. It blows up the bubble where a more natural course of action would be for Toys R Us to go bankrupt or scale back ambitions earlier.

Because it’s circular like this, it lends toward bigger crashing and burning. If OpenAI fails, all these investors that are deeply integrated into their supply chains lose both their investment and customer.


> But the problem is that Toys R Us is spending $15, $20, or maybe even $50 (who knows?) to sell a $10 toy.

It's like how Uber and Airbnb in the early days were burning loads of cash to build market share. People went to these services because they were cheaper. Then they would increase prices once they had a comfortable position.

OpenAI is also in a rapidly transforming field where there are a lot of cost reductions, efficiency gains, etc. happening. Compare that to, say, Uber, which didn't provide a lot of efficiency gains.


A little bit, but the scale is another magnitude higher. I just saw a chart yesterday showing Uber burning $18B, Tesla burning $9B, and Netflix burning $11B before reaching profitability. OpenAI has so far spent $218 billion.

The opportunity is disproportionately greater as well though.

Unfortunately that doesn't change the fact even a small miscalculation could have an enormous impact. We are approaching levels of risk comparable in size to the subprime crisis of 2008.


> It's like how Uber and Airbnb [...]

I disagree. It's like Uber and Airbnb in how they try to gain market share, but with a big difference: for Uber (and when it got big, basically everybody I know had used it once in a while) and Airbnb, you paid for each transaction. With OpenAI, most people are on the free tier. And if there is something incredibly hard, it's converting free users to paid users. That will, IMHO, be the thing that blows up (many of) the AI companies. They won't ever reach profit/loss equality.


I agree with this. For the casual user, I feel AI is only a "nice to have".

Uber and Airbnb have network effects. You can't increase prices when there is no cost to switching.

I don't see how network effects apply to Uber/Airbnb, because nothing stops drivers/hosts from listing their property on multiple such apps.

People continue using Airbnb because that's where the properties are listed. And owners keep listing properties because that's where the users are.

My point was that nothing stops hosts from listing their properties on Airbnb as well as a competitor. Unless Airbnb penalizes delisting or enforces price parity, I guess?

> OpenAI is also in a rapidly transforming field where there are a lot of cost reductions happening, efficiency gains etc.

But also ever increasing quality requirements. So we can't possibly know at this point if this is a market with high margins or not.


And unlike Uber and Airbnb, OpenAI has no way to maintain marketshare. It’s a domain name with no moat.

Google has to pay Apple billions of dollars to make Google.com the default search engine. I just looked it up, over 15% of search revenue goes to pay to be the default search engine.

Every Android device defaults to Gemini.

Every Microsoft device defaults to Copilot.

I’d love to see where these cost reductions are. If costs are going to decrease rapidly why does OpenAI’s spending plan look so insane?


> Every Android device defaults to Gemini.

> Every Microsoft device defaults to Copilot.

I don't think it's right to say that these devices "default" to their vendors' AI software when it's impossible to replace it with something else. Yes I can install Claude as a standalone app but I don't have the OS-wide integration that Gemini does for Android for example.


Where are the cost reductions exactly? Except for using AI hype as an excuse for layoffs. Can you show a reference? Genuinely interested.

This is a common misconception

OpenAI and others are already profitable on inference (inference is really really cheap)

They are just heavily investing into the latest frontier

The biggest risk is whether they can stay cutting edge, or if open source or others will catch up quickly.


> OpenAI and others are already profitable on inference (inference is really really cheap)

If it's that cheap I'll soon be doing it self-hosted, or switching to a local provider.

It's a race to the bottom for tokens-providers.


It is that cheap. Look at Deepseek or GLM pricing.

> It is that cheap. Look at Deepseek or GLM pricing.

Then it's a race to the bottom.
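The race-to-the-bottom dynamic here is easy to see with back-of-the-envelope arithmetic. A minimal sketch in Python, where every price is a made-up placeholder for illustration and not an actual OpenAI/DeepSeek/GLM rate:

```python
# Hypothetical per-million-output-token prices (USD) for a few provider archetypes.
# These numbers are illustrative placeholders, not quoted rates.
providers = {
    "frontier-lab": 10.00,
    "deepseek-style": 1.10,
    "glm-style": 0.60,
    "self-hosted": 0.25,  # rough electricity + amortized GPU guess
}

def monthly_cost(price_per_million: float, tokens_per_month: int) -> float:
    """Monthly spend in USD for a given output-token volume."""
    return price_per_million * tokens_per_month / 1_000_000

# E.g., a team generating 500M output tokens a month:
usage = 500_000_000
costs = {name: monthly_cost(price, usage) for name, price in providers.items()}
cheapest = min(costs, key=costs.get)
```

If tokens really are this commoditized, the spread between the top and bottom rows is the margin that competition squeezes out, which is the "race to the bottom" point.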


Yep.

And unlike competitors, OpenAI has no ecosystem. Just a website and a domain name. Even a VSCode fork like Cursor is an improvement over that state.

Google pays over 15% of search revenue to be the default search engine on various browsers.


If you need to do the latter to be able to make money on the former, then you're not making money. Because if the latter requirement would disappear, inference margins would also drop.

>inference is really really cheap

cough Sora cough


At the end of the day, they're still burning cash. Even if inference is cheap, it's also not hard to compete on. They aren't going to be a trillion dollar inference company.

Eventually there will be a race to the bottom on inference price to the customer by companies that aren't trying to subsidize their GPU investments.

OpenAI is spending money because they think they need to for their business to survive. They're hoping that the next big breakthrough just requires more compute and, somehow, that'll build them a moat.


OpenAI and quite honestly the others think they are in a race to AGI not the bottom. That's why they aren't concerning themselves with moats or cost. This is quite simply a massive bet that we've already cracked AGI and the rest is just funding the engineering to make it happen.

I personally think we haven't cracked AGI yet but it doesn't change their calculus.


OK, so absolutely good faith here what is the end game?

Obviously, there’s a scenario of super power AI and then it’s a matter of continuing course. Electricity and silicon.

What if you are right and the scaling doesn’t work? It is too much power, time, and hardware to improve… does OpenAI fold?

Do they just actually use the models they have?

Does everyone just decide that AI didn’t work and go back 5 years like it didn’t happen?

Does the price change so that they have to be profitable making AI services expensive and rare instead of today where they are everywhere pointlessly?

Or does this insane valuation only make sense with information you don’t have like insider scaling or efficiency news?

Does China’s strategy of undercutting US value of models pay off bigly?


Why so extreme? Most likely just an AI winter for a while; then, when tech and society have caught up, the advances begin again.

It is not like we threw away the dotcom advances, they were just put on hold for a while.


Growth decoupled from labor costs

The people running these companies have a perverse incentive to keep the ball rolling as long as possible so that they can extricate as much personal wealth and influence as possible. Maybe AGI makes all the problems go away. But, failing that, they get out relatively scot-free when it all collapses. And they don't owe anything to the public. And no one is going to bring them up on fraud charges or any other kind of criminal charges. So, while the world is burning around them (including their former companies), they have the money and connections to acquire property and businesses that are actually productive. It's the Russian oligarch playbook. They're the kings of a struggling society on the brink of failure, but they heard "kings" and said, "Let's go."

"so that they can extricate as much personal wealth and influence as possible"

I've always thought this. If you're running something like OpenAI, it really doesn't matter to you if the company fails because you're already comfortably wealthy. But, it sure would be nice to be worth another 10x billion - though I'm not totally sure why.

So these individuals perceive a large upside and no downside. It's more of a hobby than a job. Like learning to play piano. It would be amazing to be a badass pianist...but not a big deal if that never happens.


I generally agree with the sentiment, but it's not the Russian oligarch playbook. The playbook is some variation of buying out a productive asset in a legacy industry under its market price (because everything is on fire already), then using political or monopoly power to funnel (tax) money through it and into your pockets (the asset has to function, but doesn't have to provide a good quality of service, since proper maintenance is never allocated). The sovereign AI fund and Microsoft are very close to that setup. If the NYC subway were sold to a certain Elon, who then jacked up prices and had city hall keep subsidizing it while quality of service stayed the same, that would be more or less it.

The other variation goes in reverse: using the legacy asset and its captive labor force to output some kind of commodity that is sold below market price to a controlled company in a different jurisdiction, where it's resold at a small discount to market price. The company still has to function here too.

Bonus points for not even owning the asset in question, but having effective control over it through corrupt management; this way the government still pays the bills to keep it running at a loss.

What you are describing is actually a very Western thing, because it assumes you can exchange the asset for cash directly and then buy something with that liquidity, which assumes solid property rights. I'm not even talking about OpenAI being an actual tech company that just wasn't there before. That's not how oligarchy works in those places.

Since the US is slowly moving in a direction of oligarchy, I think the actual reference will be helpful.


Please read Sarah Kendzior. What's happening under Trump is different from what's happened under other admins precisely because he's drawing from the Russian quasi-state/mob playbook, and not from the normal "socially-caustic Capitalism" one. The difference is that one seeks to maintain a state, and one seeks to dismantle it and replace it with a quasi-state, which exists mainly to interface with the other entities that are still playing in the nation-state system, but which internally functions almost completely as a projection of the power of the elites.

You're conflating the assets the elites own before the state collapse with the ones they seek to acquire afterwards. They don't care if the ones from before function, because their only purpose is to be maximally extractive. Afterwards, there's no need to funnel tax money through the functional businesses they acquire; they are the company and the state, and the company is the service or product, so anyone interfacing with the product or service within the state is handing them their money. No laundering games necessary.


>replace it with a quasi-state, which exists mainly to interface with other the entities

I don't exactly disagree with that assessment, and I think you should stay vigilant for that indeed. What I'm saying is that selling a hot potato to get cash is the opposite of what oligarchs are known to do. It could be that it's but a step toward buying something else with oligarchic intentions in mind, but alternatively it could be normal Western money-handling behavior.

>they are the company and state and the company is the service or product, so anyone interfacing with the product or service within the state is handing them their money.

That doesn't contradict what I wrote or at least meant. The asset in question is not the means of laundering, but a pretext for extracting money from everyone unfortunate enough to live in the forsaken place.

The laundering part usually comes when the oligarch wants to safeguard their own money from political risks, which they do by keeping the funds in a place that is outside of their (and their potential rivals') political influence. Otherwise, once the political balance shifts, the money is just gone, because no laws exist to guard it anymore. I'm not sure what this "outside" place could be for Americans, but I could guess (with no confidence in the answer at all) that it's either Swiss or Gulf banks. Maybe the UK or whatnot. Some structure with a combination of impartiality toward their disputes and strong enough property and privacy regimes, but with zero ethical constraints about walking away from it.


Kagi has been an upgrade compared to DuckDuckGo for me.

It’s hard to describe but the results are just better, and it loads incredibly fast.

With DDG I always had this 20% wish to have Google back and frequently re-queried with !g bangs; not so much with Kagi.


Ditto. Basically the only things I !g for now are maps and other geo-specific queries like the names of local restaurants or stores. Google still outperforms Kagi on those, but for nearly everything else I prefer the Kagi ad-free, ai-summary-only-if-requested results.

Same! I tried to switch to DDG at first (5-6 years ago now), but all too often the results were poor and I had to use !g to search Google to get somewhere. Since I started using Kagi, I've never once had that issue.

Could have been helpful for the article to include a tutorial:

In the filters, there’s one for “completed” and “sold,” you want both checked.


Why would you want "Completed" if you want to know how much an item sells for? This would be telling you how much it didn't sell for.

If both are checked, it only displays “sold” items. For some reason the eBay UI used to auto-check “completed” whenever “sold” was selected.

Maybe this is too much of a side topic or tangent, but I think that mergers shouldn’t be legal by default once a company is a certain size, regardless of the level of competition.

I understand that there is a lot of competition and that a merger at this stage probably won’t harm competition significantly.

But that shaky justification can be used until suddenly there isn’t sufficient competition.

This reminds me of a recent Wendover Productions video talking about antitrust waivers for airline alliances flying transatlantic flights. In recent waiver applications, the ability to compete with other airline alliance conglomerates that have received antitrust exceptions is the justifying reason for requesting an antitrust exception, and that keeps happening until the industry becomes wildly consolidated.

I think our antitrust system should be able to say: while you have a lot of competitors, and under that criterion you would be allowed to merge, your company revenue/market cap/employee count/majority owner wealth is too high for you to be eligible to merge. You’re sufficiently large and prosperous; there is no need to grow your company larger through M&A. If you don’t like the business environment you are in, change your operations. You have the money to pursue your goals.


>but your company revenue/market cap/employee count/majority owner wealth is too high to be eligible to merge

Can you posit numbers for these 4 figures (although I don't know how you could calculate "majority owner wealth" when the majority owner is usually 401Ks and pension funds)? Also, if the goal is to stop a business from getting "bigger", then shouldn't there be a strict cap on the measures, rather than just preventing mergers?

If 1M employees is too many, then the business simply should not be allowed to hire more. If $x market cap is too high, then the business should be forced to issue dividends. If the revenue is too high, then it should be forced to stop selling whatever it is selling once it reaches that revenue.


I didn’t intend to flesh out these details thoroughly, but present the idea in spirit, so I wouldn’t be surprised if it needs tweaking as you mention.

I totally get the point of the article, but the analogy isn’t a good one. It has the vibe of being written by someone who hasn’t followed the 3D printing/maker scene in a long time; that scene is more popular than ever.

I realize that the wildest promises of 3D printing and maker stuff like Arduino never came to fruition, but maker spaces have matured greatly. If that is the analogy we are making, it means that vibecoding won’t necessarily reach “the masses,” but it will be popular beyond the present audience.

