Hacker News

Sounds like it is trained to avoid answering questions, and instead tries to bait you into giving it more information so that it can tell you what you told it.

It answers some things directly, but so many things are just avoidance. And then people say "see, it understood after a bit of back and forth, it is smart!", even though it is basically iterating through Google responses and reformulating those to fit the conversation until you say it got it right. Google v1 used pure logic and got you what you wanted, Google v2 tries to do natural language and sometimes misses, ChatGPT is the next step and tries to do full language but misses most of the time.



So one should use chatGPT as a frontend to Google v1!
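The "frontend to Google v1" idea could be sketched, very roughly, as a layer that rewrites a natural-language question into the bare keyword query old keyword search expected. A toy version (no real LLM or search API involved; the stopword list and function name are made up for illustration) just strips the conversational filler:

```python
# Toy sketch: turn a natural-language question into a keyword query,
# the way a language-model frontend to a keyword search engine might.
# The stopword list is a small illustrative sample, not exhaustive.
STOPWORDS = {
    "how", "do", "i", "the", "in", "a", "an", "is", "are",
    "what", "to", "of", "my", "can", "you", "please",
}

def to_keyword_query(question: str) -> str:
    # Strip punctuation, lowercase, and drop filler words.
    words = [w.strip("?.,!").lower() for w in question.split()]
    return " ".join(w for w in words if w and w not in STOPWORDS)

print(to_keyword_query("How do I revert the last commit in git?"))
# → revert last commit git
```

A real version would of course use the model itself to do the rewriting, then hand the result to the search index instead of answering from its own weights.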



