DevExpress AI-powered Extensions for Blazor

  • Mar 18, 2026
  • 13 minutes to read

Use the following links for details on how to add AI-powered functionality to DevExpress Blazor components:

How it Works

DevExpress AI APIs leverage the Microsoft.Extensions.AI libraries for integration and interoperability with a wide range of AI services. These libraries establish a unified C# abstraction layer for standardized interaction with language models.

This architecture decouples your application code from specific AI SDKs. You can seamlessly switch the underlying AI model or provider with minimal code modifications. For example, you can build a prototype with a locally deployed AI model and then quickly transition to an enterprise-grade online LLM provider. These changes only involve adjustments to the app’s startup logic and the installation of necessary NuGet packages.
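
For illustration only, the following sketch (not part of the original listings) contrasts a local prototype registration with a cloud registration; the provider choice is confined to a single IChatClient registration. Type and model names are examples and assume the corresponding Microsoft.Extensions.AI adapter packages.

csharp
using Microsoft.Extensions.AI;

// Prototype: a locally hosted model served by Ollama (assumes the
// Microsoft.Extensions.AI.Ollama adapter package and Ollama's default port).
IChatClient chatClient = new OllamaChatClient(new Uri("http://localhost:11434"), "llama3.1");

// Later: switch to a cloud provider by replacing only the line above, for example
// an Azure OpenAI deployment (see the full registration example later in this article):
// IChatClient chatClient = azureOpenAIClient.GetChatClient(deploymentName).AsIChatClient();

builder.Services.AddChatClient(chatClient);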

The IChatClient interface serves as the central mechanism for interaction with language models. Currently supported AI providers include:

The Microsoft.Extensions.AI framework allows developers to integrate support for AI language models and services without modifying the core library. This means you can leverage third-party libraries for new AI providers or create your own custom implementation for in-house language models.

Note

DevExpress AI-powered extensions operate on a “bring your own key” (BYOK) model. We do not provide a proprietary REST API or bundled language models (LLMs/SLMs).

You can either deploy a self-hosted model or connect to a cloud AI provider and obtain necessary connection parameters (endpoint, API key, language model identifier, and so on). These parameters must be configured at application startup to register an AI client and enable extension functionality.

Prerequisites

  • .NET 8 SDK or above
  • AI language model (choose one of the following):

AI Project Templates

The DevExpress Template Kit is the fastest way to register AI services in a DevExpress Blazor project:

  1. Create an ASP.NET Core Blazor Application.
  2. Select an AI provider (Azure OpenAI, OpenAI, or Ollama) from the Add AI Resources list. The Template Kit automatically installs the necessary NuGet packages and adds the corresponding AI resources to your project.
  3. Specify your API key, endpoint, and model/deployment in the project’s appsettings.json file.
  4. Optional. Select the AI Chat view to add a sample page featuring the AI chat component powered by your selected AI service.

Manual AI Services Integration

Follow the instructions below to register an AI model and enable DevExpress AI-powered Extensions in your application.

Important

Never hardcode AI provider access keys, credentials, or API endpoints directly in your source code. Refer to the following help topic for additional information: Secret Management for Blazor AI Components.
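
As a minimal illustration of this rule, the sketch below reads connection parameters from environment variables or the ASP.NET Core configuration system instead of source code; the variable and key names are placeholders, not values required by DevExpress.

csharp
// Placeholder names; store real values in environment variables, user secrets, or a vault.
string? azureOpenAiEndpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT");
string? azureOpenAiKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY");
string? azureOpenAiModel = builder.Configuration["AzureOpenAI:Deployment"];

// Fail fast if configuration is missing (see the Troubleshooting section below).
if (string.IsNullOrEmpty(azureOpenAiEndpoint) || string.IsNullOrEmpty(azureOpenAiKey))
    throw new InvalidOperationException("Environment variable is not set.");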

OpenAI

  1. Install the following NuGet packages to your project:

  2. Register the OpenAI model in the project’s entry point class:

  3. Register DevExpress Blazor services and DevExpress AI-powered extensions in the project’s entry point class:
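
Because the original code listings are not reproduced here, the sketch below outlines a typical implementation of steps 1–3. It assumes the official OpenAI .NET SDK with the Microsoft.Extensions.AI.OpenAI adapter; the model name is an example, and the AddDevExpressAI() call is an assumption to verify against current DevExpress documentation.

csharp
using Microsoft.Extensions.AI;
using OpenAI;

string openAiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")!;  // never hardcode keys
IChatClient chatClient = new OpenAIClient(openAiKey)
    .GetChatClient("gpt-4o-mini")   // example model identifier
    .AsIChatClient();
builder.Services.AddChatClient(chatClient);

// Step 3: register DevExpress Blazor services and AI-powered extensions.
// AddDevExpressAI() is assumed here; the exact call may differ by DevExpress version.
builder.Services.AddDevExpressBlazor();
builder.Services.AddDevExpressAI();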

Azure OpenAI

  1. Install the following NuGet packages to your project:

  2. Register the Azure OpenAI model in the project’s entry point class:

  3. Register DevExpress Blazor services and DevExpress AI-powered extensions in the project’s entry point class:
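
The sketch below outlines a typical implementation of these steps and mirrors the configuration example later in this article. It assumes the Azure.AI.OpenAI and Microsoft.Extensions.AI.OpenAI packages; the endpoint, key, and deployment name come from secure configuration.

csharp
using System.ClientModel;
using Azure.AI.OpenAI;
using Microsoft.Extensions.AI;

AzureOpenAIClient azureOpenAIClient = new AzureOpenAIClient(
    new Uri(azureOpenAiEndpoint),
    new ApiKeyCredential(azureOpenAiKey));
IChatClient chatClient = azureOpenAIClient.GetChatClient(azureOpenAiModel).AsIChatClient();
builder.Services.AddChatClient(chatClient);
// Then register DevExpress Blazor services and AI extensions as in the OpenAI sketch above.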

Ollama

  1. Install the following NuGet packages to your project:

  2. Register the self-hosted AI model in the project’s entry point class:

  3. Register DevExpress Blazor services and DevExpress AI-powered extensions in the project’s entry point class:
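
The sketch below outlines a typical implementation of these steps, assuming the OllamaChatClient type from the Microsoft.Extensions.AI.Ollama adapter package and a local Ollama instance on its default port; the model name is an example.

csharp
using Microsoft.Extensions.AI;

// Ollama listens on http://localhost:11434 by default; the model must already be pulled.
IChatClient chatClient = new OllamaChatClient(new Uri("http://localhost:11434"), "llama3.1");
builder.Services.AddChatClient(chatClient);
// Then register DevExpress Blazor services and AI extensions as in the OpenAI sketch above.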

Foundry Local

Note

Foundry Local is available as a public preview. Features, behaviors, and processes may change or offer limited capabilities before General Availability (GA).

  1. Select a target platform for your app. The Foundry Local SDK ships as two NuGet packages: a Windows-specific package and a cross-platform package. Both expose the same API surface but are optimized for different platforms.

  2. Install the following NuGet packages to your project:

  3. Add a method that registers a Foundry Local client in the project’s entry point class:

  4. Add a cleanup service that unloads the model and releases resources during application shutdown. Without a cleanup service, the Foundry model can remain loaded and the local web service can continue running.

  5. Register the Foundry Local AI model and a cleanup service in the project’s entry point class:

  6. Register DevExpress Blazor services and DevExpress AI-powered extensions in the project’s entry point class:
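
The original listings are not reproduced here. As a rough illustration of steps 4 and 5, the sketch below uses a standard ASP.NET Core IHostedService for shutdown cleanup; the foundryManager handle and its disposal stand in for the actual Foundry Local SDK calls and are assumptions, not the documented API.

csharp
using Microsoft.Extensions.Hosting;

public sealed class FoundryCleanupService : IHostedService {
    // Hypothetical handle to the locally hosted model and its web service.
    private readonly IAsyncDisposable _foundryManager;

    public FoundryCleanupService(IAsyncDisposable foundryManager) => _foundryManager = foundryManager;

    public Task StartAsync(CancellationToken cancellationToken) => Task.CompletedTask;

    // On graceful shutdown, unload the model and stop the local inference service.
    public async Task StopAsync(CancellationToken cancellationToken) =>
        await _foundryManager.DisposeAsync();
}

// Step 5 (registration): pass the manager instance created when the model was registered.
// builder.Services.AddHostedService(_ => new FoundryCleanupService(foundryManager));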

First Run

On the first run, Foundry Local SDK detects your machine’s capabilities and selects the model optimized for your hardware. If the selected model is not available locally, the SDK contacts the Microsoft AI Foundry registry and downloads the model weights (for Phi-4-mini, typically 2–4 GB). The SDK stores the downloaded model in a local cache. Subsequent runs load the model from cache without re-downloading.

Because the first run requires downloading several gigabytes, your Blazor app can take longer to start. To reduce startup delays, use the Foundry Local command-line interface (CLI) to download the selected model in advance.

ONNX Runtime

  1. Install the following NuGet packages to your project:

  2. Register the self-hosted AI model in the project’s entry point class:

  3. Register DevExpress Blazor services and DevExpress AI-powered extensions in the project’s entry point class:
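
The sketch below outlines a possible implementation of these steps, assuming the OnnxRuntimeGenAIChatClient adapter from the Microsoft.ML.OnnxRuntimeGenAI package (constructor signatures vary between package versions); the model folder path is a placeholder.

csharp
using Microsoft.Extensions.AI;
using Microsoft.ML.OnnxRuntimeGenAI;

// Path to a local ONNX generative model folder (placeholder).
string modelPath = @"C:\Models\phi-3-mini-4k-instruct-onnx";
IChatClient chatClient = new OnnxRuntimeGenAIChatClient(modelPath);
builder.Services.AddChatClient(chatClient);
// Then register DevExpress Blazor services and AI extensions as in the OpenAI sketch above.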

Semantic Kernel

The Semantic Kernel SDK provides a common interface to interact with different AI services. The Kernel communicates with AI services through AI Connectors, which expose multiple AI service types from different providers.

Semantic Kernel works with an ecosystem of ready-to-use connectors which support leading AI models from OpenAI, Google, Anthropic, DeepSeek, Mistral AI, Hugging Face, and more. You can also build custom connectors for any other service, such as your in-house language models.

The following example connects DevExpress AI-powered Extensions for Blazor to Google Gemini through the Semantic Kernel SDK:

Note

The Google chat completion connector is currently experimental. To acknowledge this and use the feature, you must explicitly suppress the compiler warnings with the #pragma warning disable directive.

  1. Sign in to Google AI Studio.

  2. Create an API key.

  3. Install the following NuGet packages to your project:

  4. Register the Gemini model in the project’s entry point class:

  5. Register DevExpress Blazor services and DevExpress AI-powered extensions in the project’s entry point class:
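
The sketch below outlines a typical implementation of steps 3–5, assuming the Microsoft.SemanticKernel and Microsoft.SemanticKernel.Connectors.Google packages; the suppressed diagnostic IDs, extension method names, and model identifier are assumptions that may differ by Semantic Kernel version.

csharp
#pragma warning disable SKEXP0070, SKEXP0001  // experimental Google connector and interop APIs
using Microsoft.Extensions.AI;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

string geminiApiKey = Environment.GetEnvironmentVariable("GOOGLE_AI_API_KEY")!;  // never hardcode keys

Kernel kernel = Kernel.CreateBuilder()
    .AddGoogleAIGeminiChatCompletion(modelId: "gemini-1.5-flash", apiKey: geminiApiKey)
    .Build();

// Bridge the Semantic Kernel chat completion service to a Microsoft.Extensions.AI IChatClient.
IChatClient chatClient = kernel.GetRequiredService<IChatCompletionService>().AsChatClient();
builder.Services.AddChatClient(chatClient);
#pragma warning restore SKEXP0070, SKEXP0001
// Then register DevExpress Blazor services and AI extensions as in the OpenAI sketch above.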

Configure Inference Parameters

To control the AI model’s behavior and creativity, set inference parameters using IChatClient options. These parameters are configured once when you register the IChatClient service in the project’s entry point class. The settings then apply to all DevExpress AI-powered features, ensuring a consistent tone and style across your app.

The following code snippet configures an Azure OpenAI client that is moderately creative, avoids repeating itself, and produces reasonably detailed but not excessively long responses:

csharp
using System.ClientModel;
using Azure.AI.OpenAI;
using Microsoft.Extensions.AI;

// azureOpenAiEndpoint, azureOpenAiKey, and azureOpenAiModel are read from secure configuration
// (see Secret Management for Blazor AI Components above).
AzureOpenAIClient azureOpenAIClient = new AzureOpenAIClient(
    new Uri(azureOpenAiEndpoint),
    new ApiKeyCredential(azureOpenAiKey)
);
IChatClient azureOpenAIChatClient = azureOpenAIClient.GetChatClient(azureOpenAiModel).AsIChatClient();
IChatClient chatClient = new ChatClientBuilder(azureOpenAIChatClient)
    .ConfigureOptions(options => {
        options.Temperature = 0.7f;        // moderate creativity
        options.MaxOutputTokens = 1200;    // detailed but bounded responses
        options.PresencePenalty = 0.5f;    // discourages repetition
    })
    .Build();
builder.Services.AddChatClient(chatClient);

Note

A specific IChatClient implementation might have its own internal representation of options. It may use a subset of options or ignore the provided options entirely.

Verify AI Service Connectivity

To verify connectivity with the configured AI service, add the DxAIChat component to your application.

razor
@using DevExpress.AIIntegration.Blazor.Chat
@page "/"
@rendermode InteractiveServer

<PageTitle>DevExpress Blazor AI Chat</PageTitle>

<DxAIChat />

Send a test prompt and confirm a response is received.

Examples

See the following examples for different ways to use AI features in Blazor apps:

Troubleshooting

This section describes common AI integration issues and steps you can follow to diagnose and resolve these issues. If the solutions listed here do not help, create a ticket in our Support Center and attach a reproducible sample project.

The AI chat responds with an “Internal Server Error” message.

  • Verify that the model name, API key, endpoint, and other AI service registration parameters are correct.
  • For cloud AI providers, make sure you are online and that your firewall allows access to the provider’s endpoint.
  • Confirm that the self-hosted language model service (for example, Ollama) is active and responsive.

“Environment variable is not set” exception in Visual Studio.

  • If you store AI service registration parameters in environment variables, confirm that all necessary environment variables are set.
  • Restart Visual Studio so that it picks up newly created environment variables.

See Also

Grid - AI Semantic Search