System Info
The SQL-based Agent (using GeneratePythonCodeWithSQLPrompt) fails to maintain conversation context across multiple turns. While the Python class correctly extracts the conversation history from memory, the corresponding Jinja2 template fails to render it.
As a result, neither agent.chat() (in multi-turn scenarios) nor agent.follow_up() provides the LLM with the previous Q&A pairs. The LLM effectively operates in zero-shot mode for every query: it sees only the current question and forgets everything discussed before.
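A minimal multi-turn sketch of the symptom (the dataframe, the questions, and the Agent/LLM construction are illustrative assumptions, not taken from this report):

```python
# Illustrative reproduction sketch; dataframe contents, questions, and the way
# the Agent/LLM is configured are assumptions and depend on your pandasai setup.
import pandas as pd
from pandasai import Agent

df = pd.DataFrame({"country": ["Spain", "France"], "revenue": [5000, 3200]})
agent = Agent([df])  # LLM configuration omitted for brevity

agent.chat("Which country has the highest revenue?")
# The follow-up only makes sense given the previous answer, but because the SQL
# prompt template never renders the conversation history, the LLM sees only
# this question, with no memory of the turn above:
agent.follow_up("And what is its revenue as a share of the total?")
```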
🐛 Describe the bug

Data Preparation is Correct (Python Side)
In pandasai/core/prompts/sql.py (class GeneratePythonCodeWithSQLPrompt), the to_json method correctly prepares the history:
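A simplified paraphrase of that preparation step (the method body and key names below are assumptions for illustration, not a copy of the pandasai source):

```python
# Paraphrase of the data-preparation step described above; key names and the
# exact body are assumptions, not the actual pandasai/core/prompts/sql.py code.
def to_json(self):
    memory = self.props["context"].memory
    return {
        "conversation": memory.to_json(),  # previous Q&A pairs ARE extracted here
        "prompt": self.to_string(),        # but to_string() renders the Jinja2 template,
                                           # which ignores that history (excerpt below)
    }
```

The Jinja2 template, by contrast, only ever renders the current message: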
```jinja
...
{# Logic for last_code_generated is present #}
{% if last_code_generated ... %} ... {% endif %}

{# Logic for previous conversation is MISSING here #}
{{ context.memory.get_last_message() }} <--- Only the CURRENT query is rendered
```
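One way to confirm the effect end to end is to inspect the prompt that actually reaches the LLM on the second turn. This continues the reproduction sketch above; `last_prompt` is an assumed inspection attribute, not a documented pandasai API:

```python
# Continues the reproduction sketch above (after agent.follow_up was called).
# `last_prompt` is an assumed hook for the rendered prompt text; the exact
# attribute may differ between pandasai versions.
rendered = agent.last_prompt
print("highest revenue" in rendered)      # False: the first turn never reached the LLM
print("share of the total" in rendered)   # True: only the current question is rendered
```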