Python: Bug: Kernel Function plugin not working with AzureAssistantAgent #10141

Open · vslepakov opened this issue Jan 9, 2025 · 5 comments
Labels: bug (Something isn't working), python (Pull requests for the Python Semantic Kernel)

@vslepakov (Member)

Describe the bug
Testing the setup described here with the bug fix released in 1.18.0.

To Reproduce
See the setup here.

Expected behavior
AzureAssistantAgent with a kernel function plugin works as part of AgentGroupChat

Platform

  • OS: Windows
  • IDE: VS Code
  • Language: Python
  • Source: semantic-kernel==1.18.0

Additional context

ERROR:

semantic_kernel.exceptions.service_exceptions.ServiceResponseException: ("<class 'semantic_kernel.connectors.ai.open_ai.services.azure_chat_completion.AzureChatCompletion'> service failed to complete the prompt", BadRequestError('Error code: 400 - {\'error\': {\'message\': "An assistant message with \'tool_calls\' must be followed by tool messages responding to each \'tool_call_id\'. The following tool_call_ids did not have response messages: call_74vVFw3smVjsnsoCwcbrUNaN", \'type\': \'invalid_request_error\', \'param\': \'messages.[3].role\', \'code\': None}}'))

According to this, the tool_call_id should be included in messages with AuthorRole.TOOL. I believe this should be handled in Semantic Kernel.
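For reference, here is a minimal sketch (plain dicts, not Semantic Kernel objects) of the ordering the OpenAI Chat Completions API expects: every tool_call id on an assistant message must be answered by a role "tool" message carrying the same tool_call_id. The function name and contents are made up for illustration; only the id is taken from the 400 above.

```python
# Sketch of the message ordering the OpenAI Chat Completions API requires.
# The function name and contents are hypothetical.
messages = [
    {"role": "user", "content": "What's on the menu?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_74vVFw3smVjsnsoCwcbrUNaN",
                "type": "function",
                "function": {"name": "get_menu", "arguments": "{}"},
            }
        ],
    },
    # This is the message the 400 says is missing: a role "tool" message
    # echoing the assistant message's tool_call id.
    {
        "role": "tool",
        "tool_call_id": "call_74vVFw3smVjsnsoCwcbrUNaN",
        "content": "Clam chowder",
    },
]

# Every requested tool_call id must appear among the tool responses.
answered = {m["tool_call_id"] for m in messages if m["role"] == "tool"}
requested = {
    tc["id"]
    for m in messages
    if m["role"] == "assistant"
    for tc in m.get("tool_calls") or []
}
assert requested <= answered  # no unanswered tool_call_ids
```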

Part of the stack trace:

...
 File "c:\Users\<snip>\Projects\semantic_kernel_agents\.venv\Lib\site-packages\semantic_kernel\agents\group_chat\agent_group_chat.py", line 144, in invoke
    async for message in super().invoke_agent(selected_agent):
  File "c:\Users\<snip>\Projects\semantic_kernel_agents\.venv\Lib\site-packages\semantic_kernel\agents\group_chat\agent_chat.py", line 144, in invoke_agent
    async for is_visible, message in channel.invoke(agent):
  File "c:\Users\<snip>\Projects\semantic_kernel_agents\.venv\Lib\site-packages\semantic_kernel\agents\channels\chat_history_channel.py", line 71, in invoke
    async for response_message in agent.invoke(self):
  File "c:\Users\<snip>\Projects\semantic_kernel_agents\.venv\Lib\site-packages\semantic_kernel\agents\chat_completion\chat_completion_agent.py", line 111, in invoke
    messages = await chat_completion_service.get_chat_message_contents(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\<snip>\Projects\semantic_kernel_agents\.venv\Lib\site-packages\semantic_kernel\connectors\ai\chat_completion_client_base.py", line 142, in get_chat_message_contents
    return await self._inner_get_chat_message_contents(chat_history, settings)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\<snip>\Projects\semantic_kernel_agents\.venv\Lib\site-packages\semantic_kernel\utils\telemetry\model_diagnostics\decorators.py", line 83, in wrapper_decorator
    return await completion_func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\<snip>\Projects\semantic_kernel_agents\.venv\Lib\site-packages\semantic_kernel\connectors\ai\open_ai\services\open_ai_chat_completion_base.py", line 88, in _inner_get_chat_message_contents
    response = await self._send_request(settings)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\<snip>\Projects\semantic_kernel_agents\.venv\Lib\site-packages\semantic_kernel\connectors\ai\open_ai\services\open_ai_handler.py", line 59, in _send_request
    return await self._send_completion_request(settings)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\<snip>\Projects\semantic_kernel_agents\.venv\Lib\site-packages\semantic_kernel\connectors\ai\open_ai\services\open_ai_handler.py", line 99, in _send_completion_request
    raise ServiceResponseException(
semantic_kernel.exceptions.service_exceptions.ServiceResponseException: ("<class 'semantic_kernel.connectors.ai.open_ai.services.azure_chat_completion.AzureChatCompletion'> service failed to complete the prompt", BadRequestError('Error code: 400 - {\'error\': {\'message\': "An assistant message with \'tool_calls\' must be followed by tool messages responding to each \'tool_call_id\'. The following tool_call_ids did not have response messages: call_74vVFw3smVjsnsoCwcbrUNaN", \'type\': \'invalid_request_error\', \'param\': \'messages.[3].role\', \'code\': None}}'))
@vslepakov vslepakov added the bug Something isn't working label Jan 9, 2025
@markwallace-microsoft markwallace-microsoft added python Pull requests for the Python Semantic Kernel triage labels Jan 9, 2025
@moonbox3 (Contributor)

moonbox3 commented Jan 9, 2025

Hi @vslepakov, it looks like one of the tool calls may be failing and we're not sending back a result for that particular tool call? Are you able to enable logging so we can get some more information about the number of tool calls being made, and what else could be going on?
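One way to turn on that logging is a standard `logging` setup; I'm assuming the package's root logger is named "semantic_kernel" (the usual Python convention of naming loggers after the package):

```python
import logging

# Send log output to the console with timestamps and logger names.
logging.basicConfig(
    level=logging.WARNING,
    format="%(asctime)s %(name)s %(levelname)s: %(message)s",
)
# Raise verbosity only for the Semantic Kernel package, so tool-call
# activity shows up without flooding the console with other libraries.
logging.getLogger("semantic_kernel").setLevel(logging.DEBUG)
```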

@moonbox3 moonbox3 self-assigned this Jan 10, 2025
@moonbox3 moonbox3 removed the triage label Jan 10, 2025
@vslepakov (Member, Author)

Hi @moonbox3, sure, here you go. Let me know if you need anything else:

https://gist.github.com/vslepakov/715e7eb0a85688564da987d1633ccbf6

@moonbox3 (Contributor)

moonbox3 commented Jan 10, 2025

Thanks for sending, @vslepakov. I'm not able to reproduce the tool calling issue with an AzureAssistantAgent. Are you able to share some code that I'd be able to use to reproduce it?

As a baseline, could you run this sample, please? https://github.com/microsoft/semantic-kernel/blob/main/python/samples/getting_started_with_agents/step7_assistant.py. It makes several tool calls. I'd like to know if you can run that sample, as well, or if you experience failures. Thanks.

As a note, I have the AZURE_OPENAI_API_VERSION in my .env file as 2024-09-01-preview.

@vslepakov (Member, Author)

Thanks @moonbox3. Just added you to my private repo playground.
It's on this branch: bug-repro-10141

I'm using the same AZURE_OPENAI_API_VERSION.

Not sure if it makes a difference, but I am using AgentGroupChat, whereas the sample you provided does not.

@moonbox3 (Contributor)

Thanks, @vslepakov. I will take a look at your repo soon. I just adjusted the mixed_agent_chats sample here so that there is an AzureAssistantAgent, and it uses the menu plugin as part of writing its copy. I am still not able to get it to fail. I dug into the chat history sent from the AzureAssistantAgent to the AzureChatCompletion service, and it does contain a message with FunctionCallContent from the assistant, followed by a message with the FunctionResultContent from the tool. This is acceptable ordering when sending to an OpenAI model:

[Image: screenshot of the chat history showing the FunctionCallContent message followed by the matching FunctionResultContent message]

I see in your logs, though, that right when you send a message to the AzureChatCompletion agent, after first running the AzureAssistantAgent, it fails with the 400. This is puzzling.
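To narrow down where the history goes bad, a small checker over the OpenAI-style message list can report exactly which tool_call ids are missing responses: the same condition the 400 complains about. This is a hypothetical helper, not part of Semantic Kernel; the id below is the one from the error above.

```python
def unanswered_tool_calls(messages):
    """Return tool_call ids with no matching role='tool' response.

    Expects OpenAI-style chat messages as plain dicts. Hypothetical
    debugging helper, not a Semantic Kernel API.
    """
    requested, answered = [], set()
    for m in messages:
        if m.get("role") == "assistant":
            requested += [tc["id"] for tc in m.get("tool_calls") or []]
        elif m.get("role") == "tool":
            answered.add(m.get("tool_call_id"))
    return [cid for cid in requested if cid not in answered]

# A history like the failing one: an assistant tool call with no response.
history = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "tool_calls": [
        {"id": "call_74vVFw3smVjsnsoCwcbrUNaN", "type": "function",
         "function": {"name": "menu", "arguments": "{}"}}]},
]
print(unanswered_tool_calls(history))  # -> ['call_74vVFw3smVjsnsoCwcbrUNaN']
```

Running this over the history just before it is handed to the AzureChatCompletion service should show whether the FunctionResultContent message is being dropped between agents.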
