The one thing I've found AI is good at is parsing through the hundreds of ad-ridden, barely usable websites for answers to my questions. I use the DuckDuckGo AI a lot to answer questions. I trust it about as far as I can throw the datacenter it resides in, but it's useful for quickly verifiable things, especially stuff like syntax and command-line options for various programs.
Nope, this only applies to a small percentage of content: content where a relatively small number of people need access to it and the incentive to create derivative work based on it is low, or where there's a huge amount of content that's frequently changing (think airfares). But yes, they will protect it more.
For content that doesn't change frequently and is used by a lot of people, it will be hard to control access to it or to derivative works based on it.
I don't think you're considering the enshittification route here. I'm sure it will be: ask ChatGPT a question -> "While I'm thinking, here's something from our sponsor, tailored to your question" -> a lame answer that requires you to ask another question. And on and on. While you're asking these questions, a profile of you is built and sold on the market.
Almost every big tech company is an ad company. Google sells ads, Meta sells ads, Microsoft sells ads, Amazon sells ads, Apple sells ads; only Nvidia doesn't, because they sell hardware components.
It's practically inevitable for a tech company offering content, and everyone who thinks otherwise should set a reminder for 5 years from now.
> The next year, Google began selling advertisements associated with search keywords against Page and Brin's initial opposition toward an advertising-funded search engine.
Those ships of motive have long since sailed into some very brown, foul-smelling waters for many different companies, and more ships will keep sailing the same way.