Hacker News: jerkstate's comments

This means the patient makes up their own strategy and the doctor says “we are checking”

Must be the water.

was not expecting a HN crossover with /r/formula1 today, but here we are

This is really cool. I vibe coded almost exactly the same app, mine also tracks saturated fat, cholesterol, sodium, and fiber (I'm getting older so these macros are pretty important to me). One of the really cool things you can do if you have tool calling hooked up is to have the LLM analyze your diet and tell you what you can do better to hit your targets - swap pizza for pasta, decrease the amount of cheese you put on your sandwich, if you're gonna have fast food don't get the fries and eat low fat/low sodium the rest of the day, etc. What model are you using? I have found Qwen Flash to be really good for my app - smart enough, tool calling works really well, and very cheap.
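For anyone curious what "tool calling hooked up" might look like, here is a minimal sketch using an OpenAI-style function-calling schema. The tool name `get_daily_totals`, its fields, and the stubbed numbers are all made up for illustration; a real app would query its own food log.

```python
# Hypothetical tool definition (OpenAI-style function calling).
# The model can call this to see today's macros, then suggest swaps
# like "skip the fries" or "less cheese on the sandwich".
get_daily_totals_tool = {
    "type": "function",
    "function": {
        "name": "get_daily_totals",
        "description": "Return today's logged macros for diet analysis.",
        "parameters": {
            "type": "object",
            "properties": {
                "date": {"type": "string", "description": "ISO date, e.g. 2024-05-01"},
            },
            "required": ["date"],
        },
    },
}

def get_daily_totals(date: str) -> dict:
    # Stub: a real implementation would query the food-log DB for `date`.
    return {"kcal": 2100, "sat_fat_g": 18, "sodium_mg": 2600, "fiber_g": 24}
```

The LLM never touches the database directly; it just receives the totals as a tool result and reasons about them in the chat.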

Nice. I vibe coded a similar kind of system, you can dump a recipe into the chat window and it will use tool-calling to lookup macros for any foods it doesn't have in the DB and put them in, estimate raw -> cooked changes in nutrition and weight (if needed), estimate total weight of the cooked product, and macros per gram (e.g. writes a 100 gram serving to the db, you can scale it up and down and it scales the macros linearly). Similar to you I have used this app to alter my macro mix from high-fat to high-carb (for workout performance) and cut my sodium from ~4g/day to ~2.4g/day by interrogating the DB about what foods I should eat more and less of. Found some surprising wins in my habitual diet that were easy to change to hit my health targets, and looking up and logging these things by hand without LLM assistance would have been too tedious and time-consuming for me to continue to do it for as long as I have been (maybe 3 months now)
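The per-gram linear scaling described above (store a 100 g reference serving, scale macros up and down with portion size) can be sketched like this; the field names and numbers are illustrative, not the actual app's schema:

```python
from dataclasses import dataclass

@dataclass
class Food:
    """Macros stored per reference serving (here, 100 g), scaled linearly."""
    name: str
    serving_g: float
    kcal: float
    sodium_mg: float

    def scaled(self, grams: float) -> dict:
        # Linear scaling: a 250 g portion of a 100 g reference
        # serving gets 2.5x the macros.
        factor = grams / self.serving_g
        return {
            "kcal": self.kcal * factor,
            "sodium_mg": self.sodium_mg * factor,
        }

# Example: a 250 g portion of a dish logged as a 100 g reference serving.
chili = Food("chili", serving_g=100, kcal=130, sodium_mg=320)
portion = chili.scaled(250)  # {'kcal': 325.0, 'sodium_mg': 800.0}
```

Storing per-reference-serving rather than per-portion is what makes the "scale it up and down" part trivial.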

Curious, what model are you using? I have found Qwen Flash to be really great for this - tool calling works well, it's smart enough, and very cheap.


Maybe people who choose blue do so because they assume there's some kind of monkey's paw involved in choosing the red option, like your wife and kids die or something like that.

I wonder if red choosers really don’t understand that they are choosing to live in a world where half of all people, the more selfless half, are dead. It’s like living through a nuclear war except all of the nice people are gone, not just a random sample

I guess you base that surprising "half" on the transparent analogy with politics, and so you think real-world Democrats will press blue to assert their Democrattiness. This is probably true. However, since this is the internet, and being a Democrat is associated with being online and living in a city, there are probably more than 50% blue-pressers, as shown in the poll. Just like in the real world, you won't change that ratio whichever way you vote in the poll. If swaying opinion is within a voter's control, then your "choosing" is meaningful but the "half" becomes meaningless. If swaying opinion isn't within the voter's control, it's "choosing" that becomes meaningless, and the fate of the half is already sealed by cultural forces beyond our control.

I disagree that the hypothetical maps onto politics. Voting for democrats is mostly a vote for someone else (billionaires, etc.) to shoulder additional burdens to achieve some positive end. By contrast, this game involves serious risk to one's self and family. Lots of people would vote to raise corporate taxes to increase funding for schools in Baltimore. But those people aren't going to move their kids to Sandtown to help increase the property tax base.

I think if you played the game for real, blue would get maybe 5% of the vote, tops.


Technically for red to win the number of dead people will be between 0% and 49.999% of the population.

The entire reason to campaign for red is to reduce the dead percentage.


If your goal is to reduce the number of dead, is red really the one to campaign for?

If we imagine people will ignore their real-world political tribalism:

Voting blue is voting to possibly die, either because you want death or in risky solidarity with others who voted to possibly die, who may have chosen by mistake. Voting red is voting for those interested in death to die, along with those who chose blue by mistake, and along with anybody who voted blue in support of those who voted blue by mistake.

So we can have a blue campaign that says "we must not allow even one voter to die, we must all pull together and vote blue", and a red campaign that says "please don't be a giant crowd of idiots who risk death, just accept that maybe two voters aren't going to make it because one was depressed and the other had an involuntary hand movement, and everybody else play it safe and vote red".

This is a ridiculous situation, and Jonathan Swift unfortunately died in 1745, so the best commentary I can offer is "I don't know".


I don't see them as selfless; I see them as unintelligent.

The means of production are for sale, they can own them if they want!

But we don't pay for coding tools, we want them for free!

That's fine; the cost for me to re-implement your code is nearly zero now, and I don't have to cajole you into fixing problems anymore.

This is obviously in an open source environment. You never needed to cajole them into fixing problems, you could just fix it yourself. That was always an option. That's literally the entire point of open source.

People doing work that you can take for free to make money off of is another big point of open source that you can't ignore.

It seems like quite a Tower of Babel just waiting to happen. All those libraries that once had thought put into the tangled consequences of supporting similar new features, and once had clear ways to identify which security updates were needed, will all just become defective clones with 5%-95% compatibility, ripe for security exploits, with support for integrations that are mostly right but a little hallucinated?

I think it's more likely that libraries will give way to specified interfaces. Good libraries that provide clean interfaces with a small surface area will be much less affected by this compared to frameworks that like to be a part of everything you do.

The JavaScript ecosystem is a good demonstration of a platform encumbered with layers that can only ever perform the abilities provided by the underlying platform, while adding additional interfaces that, while easier for some to use, frequently provide a lot of functionality a program might not need.

Adding features as a superset of a specification preserves compatibility between users of the base specification; failure to interoperate would require violating the base spec, and then they're just making a different thing.

Bugs are still bugs, whether a human or AI made them, or fixed them. Let's just address those as we find them.


I could certainly see that direction earlier in some communities, but reaching agreement on specs seems like the opposite of where distributed low-cost code writing is headed. I.e., I like 20% of your OSS library and have one different opinion, so I pull part of it in directly, change something, and ask an LLM to freshen it, where "freshen" should mean whatever the LLM thinks I usually mean, which is kind of like what some other people mean.

Given the supposed quality of top flight models there ought to be a lot more people forking open source projects, implementing missing features and releasing "xyz software that can do a and b".

Somehow it's not really happening.


I've actually been doing this for my own purposes - an ad hoc, buggy, half-implemented low-latency version of Project Wyoming from Home Assistant.

Repo, for those interested: https://github.com/jaggederest/pronghorn/

I find that the core issues really revolve around the audience. Getting it good enough that I can use it for my own purposes - where I know the bugs and issues, understand how to use it, and run it on my specific hardware - is fabulous. Getting it from there to "anyone with relatively low technical knowledge beyond the ability to set up Home Assistant", and "compatible with all the various RPi/small-board computers", is a pretty enormous amount of work. So I suspect we'll see a lot of "homemade" software that is definitely not salable, but is definitely valuable and useful for the individual.

I hope, over the medium to long term, that these sorts of things will converge in a "rising tide lifts all boats" way so that the ecosystem is healthier and more vibrant, but I worry that what we may see instead is a resurgence of shovelware.


I have already forked open source software to fix issues or enhance it via coding agents. I put it on github publicly, so other people can use it if they see it, but I don't announce it anywhere. I don't want to deal with user complaints any more than the current maintainers do. (I'm also not going to post my github profile here since it has my legal name and is trivially linked to my home address.)

Because it still requires the desire to do it.

The cost of forking open source code was always effectively zero.

It's not really, because you now have the cost of maintaining that fork, even if it's just for yourself.

Which is still true in our brave new llm world.

That may be part of the issue. Perhaps LLMs are just causing people to reveal how much they consider a maintainer to be providing a service for them. Maintainers don't work for you; they let you benefit from the service they perform.

That workload of maintaining a fork doesn't come from nowhere; it's just work someone else was doing before the fork occurred.


I'm talking about the literal process of forking an open source project. You're just making a copy of a set of files.

This is an unethical take, and long-term and at scale, an unsustainable/impractical one. This kind of mindset results in tool fragmentation, erosion of trust, and ultimately worse quality in software.

So you're saying people forking open source software is "unethical"? What is open source then? Just a polite offer that it is rude to accept?

As a sidenote: what's with the usage of "take" to designate an opinion instead of the word "opinion" or "view"?


Open-source is heavily community-oriented, and yes, I think that subverting the contributions of the community like this (and honestly, just kind of being a dick about it) is unethical, yeah. It erodes the fabric of open source, and will be detrimental not just to OSS, but to the field of software in the medium and long term for the reasons I stated earlier.

To your side note: "take" is a very common synonym for "thought"/"opinion"/"view" in the version (dialect? I guess?) of English I grew up with. If you're unfamiliar with it, that might be a regional or generational effect. I don't know. I'm not a linguist.


Nobody actually understands what they're doing. When you're learning electronics, you first learn about the "lumped element model", which allows you to simplify Maxwell's equations. I think it is a mistake to think that solving problems with a programming language is "knowing how to do things" - at this point, we've already abstracted assembly language -> machine instructions -> logic gates and buses -> transistors and electronic storage -> lumped matter -> quantum mechanics -> ???? - so I simply don't buy the argument that things will suddenly fall apart by abstracting one level higher. The trick is to get this new level of abstraction to work predictably, which admittedly it doesn't yet, but look how far it's come in a short couple of years.

This article first says that you give juniors well-defined projects and let them take a long time because the process is the product. Then it goes on to lament that they will no longer have to debug Python code, as if debugging Python code were the point of it all. The thing that LLMs can't yet do is pick a high-level direction for a novel problem and iterate until the correct solution is reached. They absolutely can and do iterate until a solution is reached, but it's not necessarily correct. Previously, guiding the direction was the job of the professor. Now, in a smaller sense, the grad student needs to be guiding the direction and validating the details, rather than implementing the details with the professor guiding the direction. This is an improvement - everybody levels up.

I also disagree with the premise that the primary product of astrophysics is scientists. Like any advanced science, it requires a lot of scientists to make the breakthroughs that trickle down into technology that improves everyday life; without them, those breakthroughs would be impossible. Gauss discovered the normal distribution while trying to understand the measurement error of his telescope. Without general relativity we would not have GPS or precision timekeeping. Astrophysics uncovers the rules that will make interplanetary travel possible. Understanding the composition and behavior of stars informs nuclear physics, reactor design, and solar panel design. The computation systems used by advanced science prototyped many commercial advances in computing (HPC, cluster computing, AI itself).

So not only are we developing the tools to improve our understanding of the universe faster, we're leveling everybody up. Students will take on the role of professors (badly, at first, but are professors good at first? Probably not; they need time to learn under the guidance of other faculty). Professors will take on the role of directors. Everybody's scope will widen because the tiny details will be handled by AI, but the big picture will still be in the domain of humans.


> as if debugging python code is the point of it all.

You have a good point, but I would argue that debugging itself is a foundational skill. Like imagine Sherlock Holmes being able to use any modern crime-fighting technology, and using it extensively. If Sherlock is not using his deductive reasoning, then he's not a 'detective'. He's just some schmuck who has a cool device to find the right/wrong person to arrest.

Debugging is "problem-solving" in a specific domain. Sure, if the problem is solved, then I guess that's the point of it all and you don't have to solve the problem. But we're all looking towards a world in which people have to solve problems, but their only problem-solving skill is trying to get an AI to find someone to arrest. We need more Sherlocks to use their minds to get to the bottom of things, not more idiot cops who arrest the wrong person because the AI told them to.


With AI, VR is even more promising. I have been working on a Gaussian splat renderer for the Quest 3, and by having Claude and ChatGPT read state-of-the-art papers, I have been able to build a training and rendering pipeline that is getting >50 fps for large indoor scenes on the Quest 3. I started with an (AI-driven) port of a desktop renderer, which got less than 1 fps, but I've integrated both training and rendering improvements from research and added a bunch of quality and performance improvements and now it's actually usable. Applying research papers to a novel product is something that used to take weeks or months of a person's time and can now be measured in minutes and hours (and tokens).


You might be interested in a new experimental 3D scene learning and rendering approach called Radiant foam [1], which is supposed to be better suited for GPUs that don't have hardware ray tracing acceleration.

[1]: https://radfoam.github.io/


Cool! I'll definitely check it out. The great thing about LLMs is I can probably have a trainer and renderer using this technology up and running for my platform in a day or two, OR I can just pick and choose parts that would work well for my implementation and merge them in.


Sorry if this is a basic question, but what's your workflow for feeding the papers into the LLM and getting the implementation done? The coding agents that I've used are not able to read PDFs, so I've been wondering how to do it.


This is actually a great question - I just extract the text with PyPDF, but I did a brief search on the functionality I'd like to have (convert math equations to LaTeX, extract images, reformat in markdown, extract data from charts) and it looks like there are a couple of promising Python libs like Docling and Marker. I should really improve this part of my workflow.


After looking into it for a little while, Docling and Marker work pretty well but are very slow. I haven't found anything else that extracts math suitably. It takes 10+ minutes per PDF, so I'm going to run it on a batch of these papers overnight and create my own little Gaussian splatting RAG database. It's really too bad PDF is so terrible.


What's your take on WorldLabs and Apple's splat models? Are there other open source alternatives?

How would editing work?

Do you think these will win over video world models like Genie?

Have you played with DiamondWM and other open source video world models?


My understanding is that those models create Gaussian splats from a text prompt, kinda like a 3D version of nano banana. I'm not doing that (yet); what I'm doing is creating splats from a set of photos - aka "splat training" - and then rendering the splat as a static scene (working on dynamism) on the Quest headset. This is pretty well-worn territory with a lot of good implementations, but I have my own implementation of a trainer in C++/CUDA (originally based on SpeedySplat, which was written in Python, but now completely rewritten, with not much of SpeedySplat left) and a renderer in C++/OpenXR for the Quest (originally based on an LLM-made port of 3DGS.cpp to OpenXR, but 100% rewritten now), and I can easily integrate techniques from research.


I bought a copy of Office this year to do my taxes with the excel1040.com template and Claude for Excel, and dropped my 1099s and stuff into the chat window and Claude just transferred the numbers to the correct cells and Excel did the calculations. It was super easy (also easy to check because my tax picture didn’t change much from last year). It got some things right that TurboTax always got wrong (like cost basis for ESPPs). The only part that was difficult was getting Claude to transfer the data to the IRS fillable PDFs - I probably spent longer iterating on that than it would have taken to copy-paste the data from Excel. Other than that, it worked great, highly recommend.


I couldn't find an equivalent of this for Canadians but thanks for the tip.

The copy-pasting of data from my markdowns to the tax software was the hardest path for me as well.

