When automatic function calling is enabled (https://github.com/googleapis/python-genai?tab=readme-ov-file#automatic-python-function-support), or when an MCP server is passed to `tools` (https://github.com/googleapis/python-genai?tab=readme-ov-file#model-context-protocol-mcp-support-experimental), the Gen AI SDK automatically invokes functions on the user's behalf when the LLM requests them, then calls the LLM again with the result (and this can repeat multiple times). See the sketch below.
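For reference, a minimal sketch of automatic function calling, adapted from the README linked above (the model name and the weather function are just illustrative):

```python
from google import genai
from google.genai import types


def get_current_weather(location: str) -> str:
    """Returns the current weather for the given location."""
    return "sunny"


client = genai.Client()

# A single SDK call: the model asks for get_current_weather, the SDK runs it,
# sends the result back to the model, and only then returns the final
# response. Several LLM round trips can happen inside this one call.
response = client.models.generate_content(
    model="gemini-2.0-flash-001",
    contents="What is the weather like in Boston?",
    config=types.GenerateContentConfig(tools=[get_current_weather]),
)
print(response.text)
```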
In this case the instrumentation writes only 1 span / log, and we miss every intermediate request/response the Gen AI SDK makes, because we only monkey patch the top-level call to / response from the SDK (roughly as in the sketch below).
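Roughly, the current approach amounts to something like this (a hypothetical simplification; the real instrumentation uses wrapper utilities and records more attributes, but the effect is the same):

```python
from opentelemetry import trace
from google.genai import models

tracer = trace.get_tracer(__name__)

_original_generate_content = models.Models.generate_content


def _instrumented_generate_content(self, *args, **kwargs):
    # One span around the whole top-level call. The automatic
    # function-calling loop (tool execution plus follow-up LLM requests)
    # runs entirely inside the original method, so none of those
    # intermediate calls get their own span or log.
    with tracer.start_as_current_span("generate_content"):
        return _original_generate_content(self, *args, **kwargs)


models.Models.generate_content = _instrumented_generate_content
```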
Ideally we would get 1 span / log for each call to the LLM, but that would likely require monkey patching deeper into the Gen AI SDK, which might be hard to do.