docs/integrations/ai-engines/langchain.mdx
This documentation describes the integration of MindsDB with LangChain, a framework for developing applications powered by language models. The integration allows for the deployment of LangChain models within MindsDB, providing the models with access to data from various data sources.
Before proceeding, ensure the following prerequisites are met:
Create an AI engine from the LangChain handler.
```sql
CREATE ML_ENGINE langchain_engine
FROM langchain
USING
    serper_api_key = 'your-serper-api-key'; -- optional; if provided, the model uses serper.dev search to enhance its output
```
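To confirm that the engine was registered, you can list all ML engines available in your MindsDB instance:

```sql
SHOW ML_ENGINES;
```

The newly created `langchain_engine` should appear in the output.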
Create a model using langchain_engine as an engine and a selected model provider.
```sql
CREATE MODEL langchain_model
PREDICT target_column
USING
    engine = 'langchain_engine', -- engine name as created via CREATE ML_ENGINE
    <provider>_api_key = 'api-key-value', -- replace <provider> with one of the available values (openai, anthropic, google, litellm)
    model_name = 'model-name', -- optional; the model to be used (for example, 'gpt-4' if 'openai_api_key' is provided)
    prompt_template = 'message to the model that may include some {{input}} columns as variables',
    max_tokens = 4096; -- maximum number of tokens in the model's response
```
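Model creation runs asynchronously, so the model may not be usable immediately. You can check its state with `DESCRIBE`:

```sql
DESCRIBE langchain_model;
```

Once the `STATUS` column reads `complete`, the model is ready to be queried; an `error` status indicates that creation failed.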
<Info>
To trace the executions of LangChain models with Langfuse, provide the following parameters when creating the model: `langfuse_host`, `langfuse_public_key`, and `langfuse_secret_key`.
</Info>
<Info>
Each tool exposes the internal MindsDB executor in a different way to perform its tasks, effectively enabling the agent model to read from (and potentially write to) data sources or models available in the active MindsDB project.
</Info>
Create a conversational model using langchain_engine as an engine and a selected model provider.
The following usage examples utilize langchain_engine to create a model with the CREATE MODEL statement.
Create a model that will be used to ask questions.
```sql
CREATE ML_ENGINE langchain_engine_google
FROM langchain;
```

```sql
CREATE MODEL langchain_google_model
PREDICT answer
USING
    engine = 'langchain_engine_google',
    provider = 'google',
    google_api_key = 'api-key-value',
    model_name = 'gemini-1.5-flash',
    mode = 'conversational',
    user_column = 'question',
    assistant_column = 'answer',
    verbose = True,
    prompt_template = 'Answer the users input in a helpful way: {{question}}',
    max_tokens = 4096;
```
Ask questions.
```sql
SELECT question, answer
FROM langchain_google_model
WHERE question = 'How many planets are in the solar system?';
```
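The query above answers a single question at a time. To generate answers for many questions in one query, you can join the model with a table from a connected data source; the data source and table names below are assumptions for illustration:

```sql
SELECT t.question, m.answer
FROM my_datasource.questions AS t  -- hypothetical connected data source and table
JOIN langchain_google_model AS m;  -- the model generates one answer per input row
```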
<Tip>
Go to the Use Cases section to see more examples.
</Tip>