Hacker News

There is simply some randomization in the paths it goes down: the way the language model is run isn't deterministic. It could conceivably also depend on current resource consumption; if the computational budget for an answer runs out, the system may fall back to a shallower response.
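To make the randomization concrete: most GPT-style systems sample the next token from a temperature-scaled softmax over the model's logits, so identical prompts can yield different continuations on different runs. This is a minimal illustrative sketch (toy logits, tiny vocabulary, pure Python), not the actual decoding code of any particular product:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Sample a token index from raw logits via temperature-scaled softmax.

    With an unseeded RNG the choice is stochastic, so repeated runs on
    identical input can diverge -- the nondeterminism described above.
    Real systems also see nondeterminism from floating-point reduction
    order on parallel hardware.
    """
    rng = random.Random(seed)
    # Scale logits by temperature (lower temperature -> sharper distribution).
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: walk the cumulative distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Two unseeded calls over the same logits can pick different tokens:
logits = [2.0, 1.5, 0.3]
print(sample_next_token(logits))
print(sample_next_token(logits))
```

Setting the temperature close to zero makes the sampler approach greedy argmax decoding, which is one way such a system could be made (nearly) deterministic.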


Some of its responses show a consistent repetitiveness I wouldn't expect from a GPT-style model, which should give more variable outputs. Sometimes it really gives the impression of regurgitating a collection of fixed scripts – closer to old-school Eliza than to a GPT-style system.

I'm not expecting the underlying GPT-based system to have perfect conversational pragmatics, but I suspect the Eliza-like component makes its pragmatics a lot worse than those of a purely GPT-based chat system.




