People assume training on past data means no novelty, but novelty comes from recombination. No one has written your exact function, with your exact inputs, constraints, and edge cases, yet an LLM can generate a working version from a prompt. That’s new output. The real limitation isn’t novelty; it’s grounding.
Just because many tasks can be accomplished without anything novel doesn't mean LLMs can match a human. It just means humans spend a lot of time doing really boring stuff.