
Language Model


import Icon from "@site/src/components/icon"; import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; import PartialGlobalModelProviders from '@site/docs/_partial-global-model-providers.mdx';

Language model components in Langflow generate text using a specified Large Language Model (LLM). These components accept inputs such as chat messages, files, and instructions to generate a text response.

Langflow includes a Language Model core component that has built-in support for many LLMs. Alternatively, you can use any additional language model component in place of the Language Model core component.

Use language model components in flows

Use language model components anywhere you would use an LLM in a flow.

<Tabs> <TabItem value="chat" label="Chat" default>

One of the most common use cases of language model components is to chat with LLMs in your flows.

The following example uses a language model component in a chatbot flow similar to the Basic Prompting template.

  1. <PartialGlobalModelProviders />

    :::tip My preferred provider or model isn't listed
    If you want to use a provider or model that isn't built into Langflow's global <Icon name="BrainCog" aria-hidden="true" /> Models, you can replace the Language Model component with any additional language model component.

    Browse <Icon name="Blocks" aria-hidden="true" /> Bundles or <Icon name="Search" aria-hidden="true" /> Search for your preferred provider to find additional language models.

    Alternatively, you can use Ollama to host your preferred model, and then configure your Ollama service in Langflow's global <Icon name="BrainCog" aria-hidden="true" /> Models. Or, create your own custom component to support any provider and model of your choice, and then use your custom component in place of the Language Model core component. As a shortcut, use an existing language model component as the basis for your custom component.
    :::

  2. Add the Language Model core component to your flow, and then select your model from the Language Model field.

    Optionally, to configure API keys and enable or disable models, click Manage Model Providers to open the Model Providers pane.

  3. In the component inspection panel, enable the System Message parameter.

  4. Add a Prompt Template component to your flow.

  5. In the Template field, enter some instructions for the LLM, such as You are an expert in geography who is tutoring high school students.

  6. Connect the Prompt Template component's output to the Language Model component's System Message input.

  7. Add Chat Input and Chat Output components to your flow. These components are required for direct chat interaction with an LLM.

  8. Connect the Chat Input component to the Language Model component's Input, and then connect the Language Model component's Message output to the Chat Output component.

  9. Open the Playground, and then test the flow by asking the LLM a question, such as What is the capital of Utah?

    <details> <summary>Result</summary>

    The following response is an example of an OpenAI model's response. Your actual response may vary based on the model version at the time of your request, your template, and your input.

    The capital of Utah is Salt Lake City. It is not only the largest city in the state but also serves as the cultural and economic center of Utah. Salt Lake City was founded in 1847 by Mormon pioneers and is known for its proximity to the Great Salt Lake and its role in the history of the Church of Jesus Christ of Latter-day Saints. For more information, you can refer to sources such as the U.S. Geological Survey or the official state website of Utah.
    
    </details>
  10. Optional: Try a different model or provider to see how the response changes.

    If you enabled multiple models in Langflow's global Model Providers pane, select a different model in the Language Model field. To open the Model Providers pane, click your profile icon, select Settings, and then click <Icon name="Brain" aria-hidden="true"/> Model Providers. Then, open the Playground, ask the same question as you did before, and then compare the content and format of the responses.

    This helps you understand how different models handle the same request so you can choose the best model for your use case. You can also learn more about different models in each model provider's documentation.

    <details> <summary>Result</summary>

    The following response is an example of an Anthropic model's response. Your actual response may vary based on the model version at the time of your request, your template, and your input.

    Note that this response is shorter and includes sources, whereas the previous OpenAI response was more encyclopedic and didn't cite sources.

    The capital of Utah is Salt Lake City. It is also the most populous city in the state. Salt Lake City has been the capital of Utah since 1896, when Utah became a state.
    Sources:
    Utah State Government Official Website (utah.gov)
    U.S. Census Bureau
    Encyclopedia Britannica
    
    </details>
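The steps above wire the system message, chat input, and chat output visually. If it helps to see what the Language Model component does conceptually, the following is a minimal Python sketch of an equivalent call using LangChain's ChatOpenAI. The model name is only an example, and reading the API key from the OPENAI_API_KEY environment variable is an assumption, not part of the flow.

```python
# Minimal sketch of the chat flow above, outside Langflow.
# Assumes the OpenAI provider, an example model name, and an OPENAI_API_KEY
# environment variable; this is not the component's actual implementation.
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.1)  # example model and default temperature

response = llm.invoke([
    # Prompt Template connected to the System Message input
    SystemMessage(content="You are an expert in geography who is tutoring high school students."),
    # Chat Input connected to the Input port
    HumanMessage(content="What is the capital of Utah?"),
])

# The Message output that the Chat Output component would display
print(response.content)
```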
</TabItem> <TabItem value="drivers" label="Drivers">

Some components use a language model component to perform LLM-driven actions. Typically, these components prepare data for further processing by downstream components, rather than emitting direct chat output. For an example, see the Smart Transform component.

To use a language model component as a driver, the downstream component must accept a LanguageModel input, and you must set the language model component's output type to Language Model. For more information, see Language Model output types.
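As a rough illustration, a driver-style custom component can declare a LanguageModel handle input and invoke the connected model internally. The following sketch uses hypothetical class and field names; it is not the Smart Transform component's actual source.

```python
# Hypothetical driver-style component; names and structure are illustrative only.
from langflow.custom import Component
from langflow.io import HandleInput, MessageTextInput, Output
from langflow.schema.message import Message


class SummarizeDriver(Component):
    display_name = "Summarize Driver"
    description = "Uses a connected LLM to prepare data for downstream components."

    inputs = [
        # Accepts the LanguageModel output of any language model component
        HandleInput(name="llm", display_name="Language Model", input_types=["LanguageModel"]),
        MessageTextInput(name="text", display_name="Text to summarize"),
    ]
    outputs = [
        Output(name="summary", display_name="Summary", method="summarize"),
    ]

    def summarize(self) -> Message:
        # The connected LanguageModel is a chat model instance, so it can be invoked directly.
        result = self.llm.invoke(f"Summarize in one sentence: {self.text}")
        return Message(text=result.content)
```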

</TabItem> <TabItem value="agents" label="Agents">

If you don't want to use the Agent component's built-in LLM, you can use a language model component to connect your preferred model:

  1. Add a language model component to your flow.

    You can use the Language Model core component or browse <Icon name="Blocks" aria-hidden="true" /> Bundles to find additional language models. Components in bundles may not have language model in the name. For example, Azure OpenAI LLMs are provided through the Azure OpenAI component.

  2. Select your preferred model from the Language Model dropdown. The model must be configured in Langflow's global Model Providers pane.

  3. Change the language model component's output type from Model Response to Language Model. The output port changes to a LanguageModel port. This is required to connect the language model component to the Agent component. For more information, see Language Model output types.

  4. Add an Agent component to the flow.

  5. Connect the language model component's output to the Agent component's Language Model input. The Agent component now inherits the language model settings from the connected language model component instead of using any of the built-in models.
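Conceptually, this wiring is the same as constructing an agent around an externally configured chat model. The following sketch uses LangGraph's prebuilt ReAct agent purely as an illustration; it is not how the Agent component is implemented, and the model name and empty tool list are assumptions.

```python
# Conceptual sketch only: Langflow does this wiring visually in the flow editor.
# Assumes an example OpenAI model and no tools.
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.1)  # the connected Language Model output
agent = create_react_agent(llm, tools=[])               # the Agent consuming that model

result = agent.invoke({"messages": [("user", "What is the capital of Utah?")]})
print(result["messages"][-1].content)
```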

</TabItem> </Tabs>

Language model parameters

The following parameters are for the Language Model core component. Other language model components can have additional or different parameters.

import PartialParams from '@site/docs/_partial-hidden-params.mdx';

<PartialParams />
| Name | Type | Description |
|------|------|-------------|
| provider | String | Input parameter. The model provider to use. Options depend on your global <Icon name="BrainCog" aria-hidden="true" /> Models configuration. |
| model_name | String | Input parameter. The name of the model to use. Options depend on the selected provider and your global <Icon name="BrainCog" aria-hidden="true" /> Models configuration. |
| input_value | String | Input parameter. The input text to send to the model. |
| system_message | String | Input parameter. A system message that helps set the behavior of the assistant. |
| stream | Boolean | Input parameter. Whether to stream the response. Default: false. |
| temperature | Float | Input parameter. Controls randomness in responses. Range: [0.0, 1.0]. Default: 0.1. |
| model | LanguageModel | Output parameter. Alternative output type to the default Message output. Produces an instance of Chat configured with the specified parameters. See Language Model output types. |

Language model output types

Language model components, including the core component and bundled components, can produce two types of output:

  • Model Response: The default output type emits the model's generated response as Message data. Use this output type when you want a typical LLM interaction where the LLM produces a text response based on the given input.

  • Language Model: Change the language model component's output type to LanguageModel when you need to attach an LLM to another component in your flow, such as an Agent or Smart Transform component.

    With this configuration, the language model component supports an action completed by another component, rather than a direct chat interaction. For example, the Smart Transform component uses an LLM to create a function from natural language input. For a rough illustration of how a component can expose both output types, see the sketch after this list.
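To make the distinction concrete, the following sketch shows how a component might expose both outputs. It is a simplified, hypothetical example, not the core component's actual source; the method names, field names, and model are assumptions.

```python
# Simplified, hypothetical component that exposes both output types.
from langchain_openai import ChatOpenAI
from langflow.custom import Component
from langflow.io import MessageTextInput, Output
from langflow.schema.message import Message


class SimpleLanguageModel(Component):
    display_name = "Simple Language Model"

    inputs = [
        MessageTextInput(name="input_value", display_name="Input"),
        MessageTextInput(name="system_message", display_name="System Message"),
    ]
    outputs = [
        # "Model Response": runs the model and emits the generated text as Message data.
        Output(name="text_output", display_name="Model Response", method="text_response"),
        # "Language Model": emits the configured model itself for components like Agent.
        Output(name="model_output", display_name="Language Model", method="build_model"),
    ]

    def build_model(self):
        # Returns a LanguageModel instance that another component can drive.
        return ChatOpenAI(model="gpt-4o-mini", temperature=0.1)  # example model name

    def text_response(self) -> Message:
        # Runs the model and returns the generated text for chat-style flows.
        result = self.build_model().invoke(
            [("system", self.system_message), ("user", self.input_value)]
        )
        return Message(text=result.content)
```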

Additional language models

If your provider or model isn't supported by the Language Model core component, additional language model components are available in <Icon name="Blocks" aria-hidden="true" /> Bundles.

You can use these components in the same way that you use the core Language Model component, as explained in Use language model components in flows.

Pair models with vector stores

import PartialVectorRagBlurb from '@site/docs/_partial-vector-rag-blurb.mdx';

<PartialVectorRagBlurb /> <details> <summary>Example: Vector search flow</summary>

import PartialVectorRagFlow from '@site/docs/_partial-vector-rag-flow.mdx';

<PartialVectorRagFlow /> </details>