dotnet/website/articles/Function-call-with-ollama-and-litellm.md
This example shows how to use function call with local LLM models, where Ollama serves as the local model provider and the LiteLLM proxy server provides an OpenAI-API-compatible interface.
To run this example, the following prerequisites are required:
- A local model that supports function call; in this example, `dolphincoder:latest` is used.

## Install Ollama and pull the `dolphincoder:latest` model

First, install Ollama by following the instructions on the Ollama website.

After installing Ollama, pull the `dolphincoder:latest` model by running the following command:
```bash
ollama pull dolphincoder:latest
```
## Install LiteLLM and start the proxy server

You can install LiteLLM by following the instructions on the LiteLLM website:
```bash
pip install 'litellm[proxy]'
```
Then, start the proxy server by running the following command:
```bash
litellm --model ollama_chat/dolphincoder --port 4000
```
This will start an OpenAI-API-compatible proxy server at `http://localhost:4000`. You can verify that the server is running by observing the following output in the terminal:
```bash
#------------------------------------------------------------#
#                                                             #
#       'The worst thing about this product is...'            #
#        https://github.com/BerriAI/litellm/issues/new        #
#                                                             #
#------------------------------------------------------------#

INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:4000 (Press CTRL+C to quit)
```
## Install AutoGen and AutoGen.SourceGenerator

In your project, install the AutoGen and AutoGen.SourceGenerator packages using the following commands:
```bash
dotnet add package AutoGen
dotnet add package AutoGen.SourceGenerator
```
The `AutoGen.SourceGenerator` package automatically generates type-safe `FunctionContract` definitions so you don't have to write them by hand. For more information, please check out Create type-safe function.
Also, in your project file, enable structural xml document support by setting the `GenerateDocumentationFile` property to `true`:
```xml
<PropertyGroup>
    <!-- This enables structural xml document support -->
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
</PropertyGroup>
```
## Define the `WeatherReport` function and create @AutoGen.Core.FunctionCallMiddleware

Create a `public partial` class to host the methods you want to use in AutoGen agents. Each method must be a `public` instance method whose return type is `Task<string>`. After the methods are defined, mark them with the @AutoGen.Core.FunctionAttribute attribute.
[!code-csharp[Define WeatherReport function](...)]
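The referenced sample is not reproduced on this page, so here is a minimal sketch of what such a class can look like. The class name `WeatherFunctions` is a placeholder, and the method returns a canned answer instead of calling a real weather service:

```csharp
using System.Threading.Tasks;
using AutoGen.Core;

// `partial` lets AutoGen.SourceGenerator add the generated
// FunctionContract and wrapper members to this class.
public partial class WeatherFunctions
{
    /// <summary>
    /// Get the weather report for a city.
    /// </summary>
    /// <param name="city">The city to get the weather report for.</param>
    [Function]
    public async Task<string> GetWeatherAsync(string city)
    {
        // A canned answer stands in for a real weather lookup.
        return await Task.FromResult($"The weather in {city} is 72 degrees and sunny.");
    }
}
```

The XML doc comments are picked up by the source generator, which is why `GenerateDocumentationFile` was enabled above; they become the function and parameter descriptions in the generated `FunctionContract`.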
Then create a @AutoGen.Core.FunctionCallMiddleware and add the `WeatherReport` function to the middleware. The middleware will pass the `FunctionContract` to the agent when generating a response, and process the tool call response when receiving a `ToolCallMessage`.
[!code-csharp[Create FunctionCallMiddleware](...)]
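As a sketch of this step, assuming the `WeatherFunctions` class above: the source generator emits a `GetWeatherAsyncFunctionContract` property and a `GetWeatherAsyncWrapper` method (following its `<MethodName>FunctionContract` / `<MethodName>Wrapper` naming pattern), which can be wired into the middleware roughly like this:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using AutoGen.Core;

var functions = new WeatherFunctions();

// The contract advertises the tool to the model; the function map
// tells the middleware how to execute an incoming tool call.
var functionCallMiddleware = new FunctionCallMiddleware(
    functions: new[] { functions.GetWeatherAsyncFunctionContract },
    functionMap: new Dictionary<string, Func<string, Task<string>>>
    {
        { functions.GetWeatherAsyncFunctionContract.Name!, functions.GetWeatherAsyncWrapper },
    });
```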
## Create @AutoGen.OpenAI.OpenAIChatAgent with the `GetWeatherReport` tool and chat with it

Because the LiteLLM proxy server is OpenAI-API compatible, we can use @AutoGen.OpenAI.OpenAIChatAgent to connect to it as a third-party OpenAI-API provider. The agent is also registered with the @AutoGen.Core.FunctionCallMiddleware that contains the `WeatherReport` tool, so the agent can call the `WeatherReport` tool when generating a response.
[!code-csharp[Create an agent with tools](...)]
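Constructor shapes differ between AutoGen.OpenAI releases; the sketch below assumes a version built on the official OpenAI .NET SDK, where the client's `Endpoint` can simply be pointed at the LiteLLM proxy. The API key value is a placeholder (LiteLLM does not validate it, but the client requires a non-empty one):

```csharp
using System;
using System.ClientModel;
using AutoGen.Core;
using AutoGen.OpenAI;
using AutoGen.OpenAI.Extension;
using OpenAI;

// Point the OpenAI client at the LiteLLM proxy instead of api.openai.com.
var openAIClient = new OpenAIClient(
    new ApiKeyCredential("api-key"),
    new OpenAIClientOptions { Endpoint = new Uri("http://localhost:4000") });

var agent = new OpenAIChatAgent(
        chatClient: openAIClient.GetChatClient("dolphincoder:latest"),
        name: "assistant",
        systemMessage: "You are a helpful AI assistant")
    .RegisterMessageConnector()                 // translate between OpenAI and AutoGen message types
    .RegisterMiddleware(functionCallMiddleware) // attach the weather tool
    .RegisterPrintMessage();                    // echo every reply to the console

var reply = await agent.SendAsync("What is the weather in new york?");
```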
The reply from the agent will be similar to the following:
```bash
AggregateMessage from assistant
--------------------
ToolCallMessage:
ToolCallMessage from assistant
--------------------
- GetWeatherAsync: {"city": "new york"}
--------------------

ToolCallResultMessage:
ToolCallResultMessage from assistant
--------------------
- GetWeatherAsync: The weather in new york is 72 degrees and sunny.
--------------------
```