docs/decisions/0017-openai-function-calling.md
The function calling capability of OpenAI's Chat Completions API allows developers to describe functions to the model, and have the model decide whether to output a JSON object specifying a function and appropriate arguments to call in response to the given prompt. This capability is enabled by two new API parameters to the /v1/chat/completions endpoint:
- `function_call` - `auto` (default), `none`, or a specific function to call
- `functions` - JSON descriptions of the functions available to the model

Functions provided to the model are injected as part of the system message and are billed/counted as input tokens.
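For illustration, a request using these two parameters might be assembled as follows. The weather function and its schema are hypothetical examples, written in the JSON Schema format the `functions` parameter expects:

```python
import json

# Hypothetical function description, in the JSON Schema format expected
# by the `functions` parameter.
get_weather = {
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name, e.g. Seattle"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}

# Request body for POST /v1/chat/completions.
request_body = {
    "model": "gpt-3.5-turbo-0613",
    "messages": [{"role": "user", "content": "What's the weather in Seattle?"}],
    "functions": [get_weather],
    "function_call": "auto",  # or "none", or {"name": "get_current_weather"}
}

print(json.dumps(request_body, indent=2))
```

Because the function descriptions are injected into the prompt, every entry in `functions` adds to the input token count for the request.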
We have received several community requests to provide support for this capability when using SK with the OpenAI chat completion models that support it.
Chosen option: "Support sending/receiving functions via chat completions endpoint without modifications to interfaces"
With this option, we utilize the existing request settings object to send functions to the model. The app developer controls which functions are included and is responsible for validating and executing the function call returned by the model.
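To sketch what that developer responsibility looks like, the snippet below parses a function-call message from the model, validates the requested name against a local registry, and executes it. The response shape follows OpenAI's documented format; the registry and dispatch logic are assumptions for illustration, not SK code:

```python
import json

# Hypothetical local function the app developer has described to the model.
def get_current_weather(location: str, unit: str = "celsius") -> str:
    return f"22 degrees {unit} in {location}"

# Registry of callable functions; anything not listed here is rejected.
registry = {"get_current_weather": get_current_weather}

# An assistant message in the shape the API returns when the model decides
# to call a function: content is null and function_call is populated.
message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"location": "Seattle"}',
    },
}

call = message.get("function_call")
if call is not None:
    name = call["name"]
    if name not in registry:
        raise ValueError(f"Model requested unknown function: {name}")
    # Arguments arrive as a JSON string and are model-generated, so they
    # must be parsed and validated before use.
    args = json.loads(call["arguments"])
    result = registry[name](**args)
    print(result)  # "22 degrees celsius in Seattle"
```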
An alternative option considered was to update the IChatCompletion and IChatResult interfaces to expose parameters and methods for providing and accessing function information.
Orchestrating external function calls fits within SK's concept of planning. With this approach, we would implement a planner that would take the function calling result and produce a plan that the app developer could execute (similar to SK's ActionPlanner).
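The planner idea above can be sketched as a translation from the model's function-call result into an inspectable plan that the app developer decides whether to execute. The `PlanStep` type and function names here are hypothetical, intended only to mirror the shape of the approach (similar in spirit to ActionPlanner):

```python
import json
from dataclasses import dataclass

@dataclass
class PlanStep:
    function_name: str
    arguments: dict

def plan_from_function_call(message: dict) -> list[PlanStep]:
    """Turn a model function_call message into a one-step plan that the
    app developer can review before executing (hypothetical names)."""
    call = message.get("function_call")
    if call is None:
        return []  # model answered directly; nothing to execute
    return [PlanStep(call["name"], json.loads(call["arguments"]))]

plan = plan_from_function_call({
    "role": "assistant",
    "content": None,
    "function_call": {"name": "send_email", "arguments": '{"to": "a@b.com"}'},
})
print(plan)
```

The key design point is that producing the plan is separate from executing it: the developer retains the decision of whether and how to run the function the model selected.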
There has been much discussion and debate over the pros and cons of automatically invoking a function returned by the OpenAI model if it is registered with the kernel. As there are still many open questions around this behavior and its implications, we have decided not to include this capability in the initial implementation. We will continue to explore this option and may include it in a future update.