A generative system, be it a neural network or a human, needs a way to test its ideas against reality. Where testing is available, it is possible to advance the state of the art. Ideas are cheap; results matter.
Sure, but that doesn't seem to square with the topic at hand: "why does an infinite truth-and-lies machine feel less trustworthy than another human?" It just isn't a question that needs a high degree of abstraction to answer.
It is sometimes a lie machine because it lacks grounding in verification. Humans get more grounding than language models, but even we are not 100% there; remember the antivax hysteria. The most grounded field is science, yet even there many published results fail to replicate. Verification is hard at every level and requires extensive work. In particle physics, almost all experimentalists cluster around the CERN accelerator because it is nearly the only source of verification they have.
It's going to be important to develop AI methods to test and verify; I think unverified model outputs are worthless verbiage. Verification can be based on references, code execution, physical simulations, lab experiments, and even language-based simulations.
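The code-execution case can be sketched concretely: accept a model-proposed solution only if it passes real test cases. This is a minimal illustration, not a production harness; the function names, the toy task, and the test format are all hypothetical.

```python
# Execution-based verification sketch: a model's proposed code string
# counts as "grounded" only when it runs and passes concrete tests.

candidate = """
def sort_desc(xs):
    return sorted(xs, reverse=True)
"""

def verify(code: str, tests: list[tuple[list[int], list[int]]]) -> bool:
    namespace: dict = {}
    try:
        exec(code, namespace)              # run the proposed definition
        fn = namespace["sort_desc"]        # hypothetical required name
        return all(fn(inp) == out for inp, out in tests)
    except Exception:
        return False                       # any crash counts as failure

tests = [([3, 1, 2], [3, 2, 1]), ([], [])]
print(verify(candidate, tests))  # True: output matches every test case
```

A wrong candidate (say, one that returns its input unsorted) fails the same check, which is the whole point: the verdict comes from execution, not from the model's own confidence.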
In a few years the situation is going to flip: AI will become more reliable than humans. Tested on millions of cases, it will be more trustworthy than any of us; no human can be tested to that extent. It's going to be interesting to see how we react to super-valid AI. Our guiding role will shrink more and more; we will become the children.