A bit off topic, but that’s pretty much a result of “prompt stuffing”. Your prompt is processed into a good old-fashioned search query, and the search results are then appended to the prompt. From the LLM’s perspective, it looks like a request to rework that source material in a manner consistent with your prompt. The LLM is fed the correct answer, so it doesn’t have to answer anything; it just has to reword the input.
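A minimal sketch of that flow, with the search step stubbed out (the function names and prompt template here are illustrative, not any vendor’s actual API):

```python
# Sketch of "prompt stuffing" (retrieval-augmented prompting).
# The search backend is a stub; a real system would hit a web
# search engine or vector index here.

def search(query: str) -> list[str]:
    # Stub: pretend these snippets came back from a search engine.
    return [
        "Snippet A: the correct answer, verbatim, from some web page.",
        "Snippet B: a second source saying roughly the same thing.",
    ]

def build_stuffed_prompt(user_prompt: str) -> str:
    # The user's prompt doubles as the search query.
    snippets = search(user_prompt)
    context = "\n".join(f"- {s}" for s in snippets)
    # From the LLM's perspective, the task is now "reword this
    # source material", not "answer from scratch".
    return (
        "Use the sources below to answer the question.\n"
        f"Sources:\n{context}\n"
        f"Question: {user_prompt}\n"
    )

print(build_stuffed_prompt("What is the answer?"))
```

The model never has to recall anything: the retrieved text carries the answer, and the generation step is mostly paraphrase.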
But I’ve seen AI results that are basically extracts of sources. They’ll even give a link to them.
So?