> Why is that inherently bad? Should I be able to buy fire insurance on pre-existing embers?
What if someone gets Type 1 diabetes as a child and can no longer get insurance because of that "pre-existing" condition: if they later get cancer for unrelated reasons, should they just be saddled with medical debt? Or your Type 1 means you can't get coverage, and then you get t-boned in your car by a drunk driver.
Certainly it sounds 'unfair' that someone who smokes (a personal choice) gets similar cancer coverage as someone who does not smoke. But it also means that if your ((great-)grand-)mother had cancer, and you get it through no fault/choice of your own (i.e. genetics), you can also get coverage. (This latter affects a cousin of mine: her aunt (mom's sister) died of cancer at 37, her mom at 63; so now she's wondering when her number will come up. We're in Canada, so we have universal care, but it's still something in her DNA.)
There are many circumstances in which you suffer through no fault of your own, and universal health coverage is present in many societies because it was decided to protect those people—even if it allows some 'free-riding' by others making poor choices.
People make all sorts of crazy decisions to prevent the "wrong" people from getting what they "don't deserve":
Pre-existing conditions also continue to frame healthcare as 'insurance' against a bad thing happening to you, when it should just be a regular service like any other.
You don't need 'insurance' in order to get your vehicle serviced, but that is what the US does with healthcare.
When one of my kids was 4, they had an unexplained seizure. Hospital workup, whole nine yards, never recurred; it was probably a medication reaction. Years later we were denied coverage from all the private insurers over it (more accurately: we were denied any coverage for that child).
Similarly, insurers would as a matter of course exclude from coverage any woman with one of several extremely common conditions, including endometriosis, PCOS, fibroids, and adenomyosis.
Prior to Obamacare, insurers were free to deny coverage wholesale for these conditions. It would have been fucked up to extend coverage to my kid while excluding any neurological conditions, but the actual outcome was worse: under the law, they were entitled to withhold any coverage at all.
If you live long enough, you will have a pre-existing condition.
The way it was supposed to work with the original mandate is that everyone had to be insured, either through their employer or the exchange, so you couldn't just buy insurance when you were sick. The mandate was later gutted when Congress zeroed out its penalty.
If you lost your job, before the ACA, you could not get health insurance outside of working for someone and having group insurance at any cost.
But you do realize that the entire idea of not being able to get insurance because of pre-existing conditions is completely unique to the US?
Costa Rica, for instance (where I am right now for a month and a half), allows anyone to become a resident as long as you have a guaranteed income of around $2000 a month, or you deposit $60K into a local bank account and they arrange monthly disbursements, and you pay 15% of your stated income to CAJA. Healthcare is both better and more affordable here.
The same is true for Panama. Why can’t the US figure this out?
It interacts badly with insurance being offered as workplace benefit. If you quit or lose your job, you'd lose your health insurance. And any plan you signed up for after that would then treat you as "pre-existing embers" and expect you to pay accordingly. The bundling of health insurance with workplace seems like the healthcare original sin to me.
Obama couldn't change that, so the ACA redesigned the system to work with it. Despite being called insurance, health insurance is no longer really viewed or designed to be any kind of insurance. Instead, it's supposed to be Netflix for healthcare. You pay a flat rate, and then get unlimited healthcare. Obviously, the issue with this is that if you don't need healthcare you can just not sign up for the subscription. So the ACA tried to solve this by requiring everyone to sign up. Once everyone is required to sign up, it's not right to discriminate against preexisting conditions. It may not be an especially good system, but it is coherent.
The US is allergic to taxes. Maybe it's a marketing thing. Benefits paid for by society.
Maybe a department of Return on Investment. See what those taxes pay for. Contrast to buying private versions of the services at the same SLA or better.
It’s more that the US is more like a collection of 50 little countries, and it’s supposed to be hard to accomplish much at a federal level. That separation has eroded a bit in the last 50 years but it’s still very much a part of our political ideology.
It's bad for the person, obviously. The point of society-wide policies is not to maximize economic efficiency; they're supposed to make society a good place to live. Of course, if you only look at them through an economic lens, they're going to seem bad. Economically, the best policy would be to kill all the sick people.
Not really, because whereas before things were bad for people with pre-existing conditions, now they are really bad for everyone.
People are paying exorbitant prices either for insurance, for routine health care stuff, or for both.
There was no free lunch, so we traded slightly less healthcare for everyone else in exchange for some healthcare for the chronically ill. The insurance companies make sure it's an extractive zero-sum game in terms of actual healthcare provided.
> Instead use an inside source, an employee you know at the company you are interested in
I have been reading this advice for a decade, and I have been working as a software engineer for a decade, and I don't know anyone who got a job this way.
I'm not doubting it happens. It's just interesting that this obviously seems very common in some software engineering circles, but is virtually unheard of in others.
A Human Interviewer can be held responsible for their actions, a machine, so far, cannot. Outside of the potential for cutting costs, abdication of responsibility is the number one reason we're looking to adopt these systems.
Humans have a much greater diversity in bias because we have all lived our own unique lives. LLMs are incredibly limited, by contrast. Even if you were somehow to simulate bias by exposing subsets of LLMs to subsets of human knowledge corpuses, you would need billions of subsets to simulate the diversity of human bias.
Wisdom of the crowd also implies that diversity of human bias is a good thing, in aggregate.
To more closely address your point: if all companies use the same LLM they’ll all have the same hiring bias. But if Company Foo has Hiring Manager Bob that’s biased against me, I can shoot my shot with Company Bar with Hiring Manager Alice who might not be.
LLMs have no awareness of their own bias, and no incentive or ability to mitigate it. A human can, in theory, realize "hey, I tend to be a little harsh on <demographic>, is this negative judgement just that?" while an LLM could never.
In practice I doubt many people are aware of their biases either, or think "it's not bias if it's true" or something. But at least on the less "internally" biased end of humans there will be less external manifestation of it.
They don't have any concept of their "personal" bias, so they'd imitate whatever training data they received that was tagged as not being biased, if there even was any.
So you might think, but no. The LLM contains a large number of biases, coming from different training texts. Depending on how you structure the question, you can get biased statements.
For instance, if I discuss audio electronics with Google Gemini, depending on what kinds of questions I ask, I can get audiophile crackpot quackery out of it, or I can get solid electronic engineering statements.
The training data contains a vast number of narratives that are filled with different points of view. Generally speaking, you get the ones that resonate with your own narrative threaded through your prompts.
One way is if you ask loaded questions: questions which assume that some statements hold true, and are seeking clarification within that context. If the AI hasn't been system-prompted or fine-tuned to push back on that topic, it may just take those assumptions at face value, and then produce token predictions out of narratives which express similar assumptions.
I've been working on React and React Native applications professionally for over ten years, and I have never worked on a project with any kind of meaningful test coverage.
I have not seen tests in any code base I worked on in the past 20 years. I have noticed that there is some kind of sanctimonious demeanor to quite a few people that advocate for tests (on comment boards). I find the reactions to discussions on tests fascinating because it seems to elicit very strong opinions, sort of a "do you put your shopping cart back" kind of topic, but for programmers.
I find that fascinating, because interacting with the tests in our codebase (both Python and JS) answers a _lot_ about "how is this meant to work", or "why do we have this". I won't say I do test-driven development, at least not very rigorously, but any time I am trying to make a small change in a thing I'm not 100% familiar with, it's been helpful to have tests that cover those edge cases. :)
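To make that concrete, here's a minimal, entirely hypothetical sketch of a test doubling as documentation (the function and names are made up for illustration, not from any real codebase):

```python
# Hypothetical example of a test answering "why do we have this": the test
# name and assertion capture a design decision a code comment might not.

def normalize_username(name: str) -> str:
    """Lowercase and strip; blank input maps to 'anonymous' by design."""
    cleaned = name.strip().lower()
    return cleaned if cleaned else "anonymous"

def test_blank_username_becomes_anonymous():
    # Documents the intent: blank signups are deliberately mapped to a
    # placeholder rather than rejected.
    assert normalize_username("   ") == "anonymous"

def test_whitespace_and_case_are_normalized():
    assert normalize_username("  Alice ") == "alice"
```

Six months later, the test names alone tell you how the thing is meant to work.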
I've checked the stats: the previous app I worked on had 31% reported coverage, and I think the actual value is higher, with most of the critical paths covered. But it's been a lot of work, and the engineering hierarchy has been supportive of setting aside time to maintain the existing tests and test the new features.
Have you ever worked at a place where you were put on an existing codebase, and that code has no tests? Have you ever worked at a place where, when you try to fix that, management tells you that they don't have the time to do so, they have to crank out new features?
Is ipsento606 working at such a place? I don't know, and neither do you. Why do you jump to the conclusion that it's their personal failing?
> Have you ever worked at a place where you were put on an existing codebase, and that code has no tests?
Yes.
Then I added tests. Now the codebase has tests.
Funny how that works.
> Have you ever worked at a place where, when you try to fix that, management tells you that they don't have the time to do so, they have to crank out new features?
Yes.
I then added tests that covered my features. Now the project has tests.
> Software engineers are scared of designing things themselves.
When I use a framework, it's because I believe that the designers of that framework are i) probably better at software engineering than I am, and ii) have encountered all sorts of problems and scaling issues (both in terms of usage and actual codebase size) that I haven't encountered yet, and have designed the framework to ameliorate those problems.
Those beliefs aren't always true, but they're often true.
Starting projects is easy. You often don't get to the really thorny problems until you're already operating at scale and under considerable pressure. Trying to rearchitect things at that point sucks.
To be blunt, I think it's a form of mania that drives someone to reject human-written code in favor of LLM-generated code. Every time I read writing from this perspective that exceeds a paragraph, I quickly realize the article itself was written by an LLM. When they automate this much writing, it makes me wonder how much of their own reading they automate away too.
The below captures this perfectly. The author is trying to explain that vibe-coding their own frameworks lets them actually "understand" the code, while not noticing that the LLM-generated text they used to make this point is talking about cutting and sewing bricks.
> But I can do all of this with the experience on my back of having laid the bricks, spread the mortar, cut and sewn for twenty years. If I don’t like something, I can go in, understand it and fix it as I please, instructing once and for all my setup to do what I want next time.
I think the bit you quoted is a tie in with an earlier bit:
“ I can be the architect without the wearing act of laying every single brick and spreading the mortar. I can design the dress without the act of cutting and sewing each individual piece of fabric”
To me, this text doesn’t read as being entirely written by an LLM, there is definitely an air of LLM about it though, so maybe the first draft was.
Correct. History is rife with examples of manias taking hold of societies. I recommend "Memoirs of Extraordinary Popular Delusions and the Madness of Crowds" by Charles Mackay[1]; it's an absolutely fascinating book.
Yeah, the “not invented here” syndrome was considered an anti-pattern before the agentic coding boom, and I don’t see how these tools make it irrelevant. If you’re starting a business, it’s still likely a distraction to write all of the components of your stack from scratch. Agentic tools have made development less expensive, but it’s still far from zero. By the author’s admission, they still need to think through all these problems critically, architect them, and pick the right patterns. You also have to maintain all this code. That’s a lot of energy that’s not going towards the core of your business.
What I think does change is that now you can more easily write components that are tailor-made to your problem and situation. Some of these frameworks are meant to solve problems at varying levels of complexity and need to worry about avoiding breaking changes. It’s nice to have the option to develop alternatives that are as sophisticated as your problem needs and no more. But I’m not convinced that it’s always the right choice to build something custom.
The cost of replacement-level software drops a lot with agentic coding, and maintenance tasks become much smaller time sinks. When you combine that with the long-standing benefits of in-house software (customizable to your exact problem, tweakable, often cleaner code because the feature set can be a lot smaller), I think a lot of previously obvious dependencies become viable to write in house.
It's going to vary a lot by the dependency and scope - obviously owning your own React is a lot different than owning your own leftpad - but to me it feels like there's no way that agentic coding doesn't shift the calculus somewhat. Particularly when agentic coding makes a lot of nice-to-have mini-features trivial to add, so the developer-experience gap between a maintained library and a homegrown solution is smaller than it used to be.
my problem with frameworks has always been that the moment I want to do something the framework writers aren't interested in, I now have three problems: my problem, how to implement it in the underlying platform and how to work around the framework to not break my feature.
Yes this happens in every framework I've ever used. My approach used to be to try to work around it, but now I've got these local exceptions to what the framework does and that is inevitably where problems/bugs pop up. Now I simply say "we can't implement the feature that way in this framework, we need to rework the specification." I no longer try to work against the framework, it's just a massive time sink and creates problems down the road.
It's like designing a kitchen and you don't make all the spaces some multiple of three inches. Now, standard cabinets and appliances will not fit. You will be using filler panels or need custom cabinetry. And anyone who later wants countertops or different cabinets will be working around this design too. Just follow the established standard practices.
I'm so glad software engineering isn't my job. I love solving problems, and I'm somewhat better at using code to do it than my peers (fellow scientists), but I would hate to have a boss/client that says "it needs to do X" and the framework writer (or SDK, ala Android/Xcode) say "no, that hurts my profits/privacy busting".
I've never found something that was impossible to implement in any framework or SDK. Even in Android SDK land, you can easily get access to an OpenGL surface and import the whole world via the NDK. There's nothing limiting other than the OS itself and its mechanism.
Same with web frameworks. Even React (a library) has its escape hatches to let in the rest of the world.
Where is your copy of the Android source code for the device you’re manufacturing? Because that’s how you get the full feature set. Otherwise you will be restricted by Android’s aggressive suspending and killing policies.
> I would hate to have a boss/client that says "it needs to do X" and the framework writer (or SDK, ala Android/Xcode) say "no, that hurts my profits/privacy busting".
An answer to such a request should be: "We would need to ship a custom version of Android." Just like if you need to set up a web server on a Linux system, you would need to be root. You don't choose shared hosting and then complain about the lack of permissions.
that's amazing, shared hosting on the device I bought. no thank you. I'll root the damn thing and do as I please. If future devices don't allow that, I won't have a reason to carry them in my pocket.
Yeah, I'm huge on using LLMs for coding, but one of the biggest wins for me is that the LLM already knows the frameworks. I no longer need to learn whatever newest framework there is. I'll stick to my frameworks, especially when using an LLM to code.
after 3 decades as SWE I mostly found both i) and ii) to not be true, for the most part. a lot of frameworks are not built from the ground up as “i am building a thing to solve x” but “i had a thing and built something that may (or may not) be generally useful.” so a lot of them carry weight from what they were originally built from. then people start making requests to mold the framework to their needs, some get implemented, some don’t. those that don’t good teams will build extensions/plugins etc into the framework and pretty soon you got a monster thing inside of your codebase you probably did not need to begin with. i think every single ORM that i’ve ever used fits this description.
Totally. Frameworks also make it a lot easier for new team members to contribute. React, for example, makes it a lot easier to hire. Any project with moderate size will require some kind of convention to keep things consistent and choosing a framework makes this easier.
Now look at the cross team collaboration and it gets even harder without frameworks. When every team has their own conventions, how would they communicate and work together? Imagine a website with React, Vue, Angular all over the place, all fighting for the same DOM.
And there was a time when using libraries and frameworks was the right thing to do, for that very reason. But LLMs have the equivalent of way more experience than any single programmer, and can generate just the bit of code that you actually need, without having to include the whole framework.
As someone who’s built a lot of frontend frameworks, this isn’t what I’ve found. Instead, I’ve found that you end up with a middle-ground choice which, while effective, is no better than the externally maintained library of choice. The reason to build your own framework is so it’s tailored to your use cases. LLMs can help with the architecting required to do that, but you have to guide them, and to guide them you need expertise.
I would like a more reliable way to activate this "way more experience."
What I see in my own domain I often recognize as superficially working but flawed in various ways. I have to assume the domains I am less familiar are the same.
> can generate just the bit of code that you actually need
Design is the key. Codebases (libraries and frameworks not exempt) have a designed uniformity to them. How does a beginner learn to do this sort of design? Can it be acquired completely by the programmer who uses LLMs to generate their code? Can it be beneficial to recognize opinionated design in the output of an LLM? How do you come to recognize opinion?
In my personal history, I've worked alongside many programmers who only ever used frameworks. They did not have coding design sensibilities deeper than a social populist definition of "best practice." They looked to someone else to define what they can or cannot do. What is right to do.
Reducing ambiguity by definition increases effective communication. Any number of social experts would undoubtedly herald an increase in effective communication an unequivocal boon to human relationships.
Despite the name, many people use "credit cards" simply for rewards and enhanced purchase protections, with only incidental use of the credit facility.
In the US market, it is surprising that someone would choose to use a debit card over a credit card (if they have the choice) because they are giving up the rewards and enhanced purchase protections, which are available at effectively zero cost.
If I used a debit card over a credit card, I'd effectively be paying ~2% more for most things I buy, for no benefit.
Not to mention the grace period. Especially with high interest rates, it's another perk to have thousands of my dollars stay in the bank all month while my credit card bill piles up. This matters less when rates are super low.
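Back-of-the-envelope math on what that float is worth (the spend, rate, and float window below are illustrative assumptions, not anyone's actual numbers):

```python
# Rough value of the grace-period float: spend that would leave a checking
# account instantly with a debit card instead earns interest in savings
# until the credit card bill is due. All figures are illustrative.

def float_interest(monthly_spend: float, apy: float, float_days: int = 30) -> float:
    """Approximate interest earned while the spend sits in the bank."""
    daily_rate = apy / 365
    return monthly_spend * daily_rate * float_days

# e.g. $5,000/month of card spend at a 5% high-yield savings rate,
# floated for roughly a 30-day cycle:
monthly_benefit = float_interest(5_000, 0.05)  # about $20.55/month
```

Small on its own, but it stacks on top of the ~2% rewards, and at near-zero rates it shrinks to nothing, which is exactly the point about low-rate environments.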
One thing I didn't truly appreciate until my wife and I consolidated our spending and had children - having nearly every expense flow through a credit card puts total spending into perspective without having to look through bank statements or keep up a spreadsheet. Getting a $10k bill when you're expecting $8k (or a $30k bill when you're expecting $20k) can be a pretty jarring event and is a built-in monthly touch point to review budgeting and spending.
It wouldn't be quite the same impact spread out over 5 cards paid out of multiple checking accounts with slightly different billing cycles.
> One thing I didn't truly appreciate until my wife and I consolidated our spending and had children - having nearly every expense flow through a credit card puts total spending into perspective without having to look through bank statements or keep up a spreadsheet.
This can work amazingly well for some folks. And can be a spiral of debt for others. This is generally good advice if you can and do actually pay off your credit cards every month. This gets quickly out of control as soon as you don't or won't for one reason or another.
Better fraud protection, too. Depending on the bank it can be a real battle to get fraudulent charges dropped and funds restored, but credit card companies go out of their way to make that process easy. Some even offer it as a function of their site/app so you don’t even need to make a call to get things resolved.
I have several cards and don’t keep a balance on any of them. They’re a tool with several uses, and one of mine is to be able to pay for things without exposing my debit card/bank account.
The problem with the housing issue is that real solutions to it are extremely unpopular, even among people who agree about the scale and intensity of the problem.
The regular voting public doesn't even agree that there's a connection between increasing the supply of housing and housing becoming more affordable.
Their position is, roughly, "there's plenty of housing already - it just needs to be more affordable for regular people". Sometimes this even manifests in support for self-defeating demand subsidies like help-to-buy schemes for new homeowners.
This is a position that can never be satisfied because it is fundamentally disconnected from reality. It is equivalent to the meme of the dog with the stick in its mouth who wants you to throw the stick for them, but not take the stick from them.
Before the ACA, insurers could deny coverage for pre-existing conditions.
People have forgotten how bad things used to be.