Hi Andy, thanks for your comment. For complex, multi-part questions I would suggest a MRKL/ReAct approach, which combines really well with RAG. The idea is that the LLM breaks the complex question down into sub-questions, searches for the answers individually, and compiles everything before responding to the user. In my blog post from earlier this year there is an example where the LLM receives a multi-part question and combines several RAG calls into a final response. See https://medium.com/mlearning-ai/supercharging-large-language-models-with-langchain-1cac3c103b52
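
To make the idea a bit more concrete, here is a minimal sketch in plain Python of the decompose-then-compile pattern (a simplified plan-and-execute variant rather than a full ReAct loop, and not the exact code from the post). `call_llm` and `retrieve_documents` are hypothetical placeholders you would wire up to your own LLM client and vector store:

```python
def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your LLM of choice and return its text reply."""
    raise NotImplementedError

def retrieve_documents(query: str, k: int = 3) -> list[str]:
    """Placeholder: return the top-k passages for `query` from your vector store."""
    raise NotImplementedError

def answer_multipart_question(question: str) -> str:
    # 1. Ask the LLM to split the question into self-contained sub-questions.
    sub_questions = call_llm(
        "Break the following question into independent sub-questions, "
        f"one per line:\n{question}"
    ).splitlines()

    # 2. Run a separate RAG call for each sub-question.
    partial_answers = []
    for sq in sub_questions:
        context = "\n".join(retrieve_documents(sq))
        partial_answers.append(call_llm(
            f"Answer using only this context:\n{context}\n\nQuestion: {sq}"
        ))

    # 3. Compile the partial answers into one final response.
    return call_llm(
        "Combine these partial answers into a single coherent reply:\n"
        + "\n".join(partial_answers)
        + f"\n\nOriginal question: {question}"
    )
```

In the blog post this orchestration is handled by LangChain's agent/tooling machinery instead of hand-rolled prompts, but the flow is the same: decompose, retrieve per part, then synthesize.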