> To avoid that, the bot would have to be a high-ranking professional psychologist with an explicit purpose not to trigger destructive reactions.
Sounds fairly doable, TBH, given what else they're doing reasonably well at.
> And that would fail the nature of a "consultant", which is something "under best effort to speak the truth" - incompatible direction with "something reassuring".
I don't think they're as incompatible as you appear to think.
With reference to the case I mentioned (which seems like a decent borderline case for pointing out difficulties): what you want from a consultant is something that tells you "the likelihood of the event is bounded by this and that", not something that goes "Oh, everything will be all right, dear". Put non-figuratively: truth (say, facts, or the results of computation) and comfort may not always be compatible.
> reasonably well
But I wrote «high-ranking professional». «Reasonably well» does not cover the cases relevant to full safety.
And, by the way, such an attitude will backfire heavily on false positives.
Anyway: the case presented is there to point out a few of the difficulties involved in the idea of «not giving out dangerous answers».