Hacker News

I saw someone saying 80% of doctors believe that LLMs are trustworthy consultation partners.

Code created by LLMs often doesn't compile: hallucinated APIs, invalid syntax, and completely broken logic. Why would you trust it with someone's life?



I wonder if the exact phrasing has varied from the source, and even then whether "consultation partners" is doing the heavy lifting there. If it were something like "useful consultation partners", I can absolutely see value as an extra opinion that is easy to override. "Oh yeah, I hadn't thought about that option - I'll look into it further."

I imagine we're talking about it as an extra resource rather than trusting it as final in a life or death decision.


> I imagine we're talking about it as an extra resource rather than trusting it as final in a life or death decision.

I'd like to think so. Trust is also one of those non-concrete terms that mean different things to different people. I'd like to think that doctors use their own judgement when incorporating the output of these models; I just wonder how long it is until that output becomes the default judgement once humans get lazy.


I think that's a fair assessment on trust as a term, and incorporating via personal judgement. If this was any public story, I'd also factor in breathless reporting about new tech.

Black-box decisions I absolutely have a problem with. But an extra resource considered by people with an understanding of risks is fine by me. Like I've said in other comments, I understand what it is and isn't good at, and have a great time using ChatGPT for feedback or planning or extrapolating or brainstorming. I automatically filter out the "Good point! This is a fantastic idea..." response it inevitably starts with...


I'll see if I can dig it up; it was from a real-life meeting, though I tossed the printed notes a while back in disgust.


Because LLMs, with something like a 20% hallucination rate, are more reliable than overworked, tired doctors who can spend only an ounce of their brainpower on the patient they're currently helping?


Yeah, I'm gonna need really strong evidence for that claim before I entrust my life to an AI.


Apologies, but have you noticed that if your entrusted (the "doctor") trusted the unentrustable (the "LLM"), then your entrusted is not trustworthy?


Yes, I have noticed... and I am concerned.


"Quis custodiet ipsos custodes" ("Who will watch the watchmen?"). The old problem.

In fact, the phenomenon of pseudo-intelligence scares those who were hoping for tools that would limit the original problem, rather than potentially amplify it.


> I saw someone saying 80% of doctors believe that LLMs are trustworthy consultation partners.

See, now that is something I don't know why I should trust: a random person on the internet citing a statistic they saw someone else mention.


The claim seems plausible because it doesn't say there was any formal evaluation, just that some doctors (who may or may not understand how LLMs work) hold an opinion.


I wish I could cite the actual study, but my feeble mind only remembers the anger I felt at the statistic.

Unlike the LLM, I'm willing to be truthful about my memory.


Luckily, we programmers can fix things like syntax errors.


> I saw someone saying

The irony...



