
> Combining two broken systems - compromised search engines and unreliable LLMs - seems unlikely to yield that vision

Counterpoint: with a chain-of-thought process running atop search, you can potentially avoid much of the meta-search / epistemic hygiene work currently required. If your “search” verb actually reads the top-100 results, runs analyses for a suite of cognitive biases such as partisanship, and gives you error bars / warnings on claims that are uncertain, the quality could be dramatically improved.
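To make the idea concrete, here's a minimal toy sketch of that "read the results, score them for bias, attach confidence" loop. Everything here is hypothetical: the partisanship scorer is a keyword heuristic standing in for an LLM bias pass, and the corpus stands in for real top-100 results.

```python
# Toy sketch of search-then-evaluate: filter out results that score high
# on a (crude, keyword-based) partisanship check, then report how many
# surviving sources support each claim as a rough confidence measure.
from dataclasses import dataclass

# Hypothetical stand-in for an LLM-based bias classifier.
PARTISAN_MARKERS = {"outrageous", "disaster", "radical", "corrupt"}

@dataclass
class SearchResult:
    url: str
    text: str

def partisanship_score(text: str) -> float:
    """Fraction of words that are charged partisan markers (toy heuristic)."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!") in PARTISAN_MARKERS for w in words) / len(words)

def aggregate(results, claim_keywords):
    """Count how many low-bias sources support each claim; fewer supporting
    sources means lower confidence (a wider 'error bar')."""
    trusted = [r for r in results if partisanship_score(r.text) < 0.05]
    report = {}
    for claim, kw in claim_keywords.items():
        support = sum(kw in r.text.lower() for r in trusted)
        report[claim] = {
            "support": support,
            "confidence": support / max(len(trusted), 1),
        }
    return report

# Toy corpus standing in for the top-100 results.
results = [
    SearchResult("a.example", "The vaccine reduced hospitalizations by 40%."),
    SearchResult("b.example", "Study: hospitalizations fell after the rollout."),
    SearchResult("c.example", "Outrageous! A radical, corrupt disaster."),
]
report = aggregate(results, {"reduced hospitalizations": "hospitalizations"})
print(report)
```

In this sketch the third result is dropped for loaded language, and the claim's confidence reflects how many of the remaining sources corroborate it; a real system would replace both heuristics with model calls.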

There are already custom retrieval/evaluation systems doing this; it's only a matter of a year or two before it's commoditized.

The concern is around OpenAI's monetization: do they eventually start offering paid ads? That could be fine if the unpaid results are great; a big part of why the web is perceived to be declining is link spam that Google doesn't count as ads.

My prediction would be a more subtle monetization channel: companies that can afford to RAG their products well and share those indexes with AI search providers will get better results. RAG APIs will be the new SEO.


