The links below provide instructions on how to add AI-powered functionality to DevExpress ASP.NET Core components.
The DevExpress.AIIntegration.AspNetCore.Reporting NuGet package contains classes and methods required for AI-powered Extensions for ASP.NET Core Reporting Controls. Call the AddWebReportingAIIntegration method to configure AI-powered Extensions in your application.
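The registration call can be sketched as follows in the application's startup code. This is a minimal illustration only: the exact overloads of `AddWebReportingAIIntegration` depend on your DevExpress version, and the assumption here is that it extends `IServiceCollection`:

```csharp
// Program.cs (sketch) — requires the DevExpress.AIIntegration.AspNetCore.Reporting package.
var builder = WebApplication.CreateBuilder(args);

// Enable AI-powered Extensions for web Reporting controls.
builder.Services.AddWebReportingAIIntegration();

var app = builder.Build();
app.Run();
```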
The following examples integrate an AI Assistant (based on dxChat) into ASP.NET Core Dashboard and Reporting controls:
- View Example: Reporting for ASP.NET Core — Integrate Azure OpenAI-based AI Assistant
- View Example: BI Dashboard for ASP.NET Core — Integrate Azure OpenAI-based AI Assistant
Review the following Reporting AI Extensions demos:
- Run Demo: Report Viewer AI Extensions
- Run Demo: Report Designer AI Extensions
DevExpress AI APIs leverage the Microsoft.Extensions.AI libraries for integration and interoperability with a wide range of AI services. These libraries establish a unified C# abstraction layer for standardized interaction with language models.
This architecture decouples your application code from specific AI SDKs. You can seamlessly switch the underlying AI model or provider with minimal code modifications. For example, you can build a prototype with a locally deployed AI model and then quickly transition to an enterprise-grade online LLM provider. These changes only involve adjustments to the app’s startup logic and the installation of necessary NuGet packages.
The IChatClient interface serves as the central mechanism for interaction with language models. Currently supported AI providers include:

- OpenAI
- Azure OpenAI
- Self-hosted AI models
- Semantic Kernel AI Connectors
- A custom IChatClient implementation for unsupported providers or private language models

The Microsoft.Extensions.AI framework allows developers to integrate support for AI language models and services without modifying the core library. This means you can leverage third-party libraries for new AI providers or create your own custom implementation for in-house language models.
Note
DevExpress does not provide a REST API or include built-in LLMs/SLMs. To use AI services, you need an active Azure/OpenAI subscription to obtain the necessary REST API endpoint and key. This information must be provided at application startup to register AI clients and enable DevExpress AI-powered Extensions.
Follow the instructions below to register an AI model and enable DevExpress AI Services in your application.
**OpenAI**

Install the following NuGet packages to your project:
Register the OpenAI model in the project’s entry point class:
Register DevExpress AI Service in the project’s entry point class:
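Taken together, the OpenAI steps above might look like the sketch below in Program.cs. The model name, the environment-variable name, and the `AddDevExpressAI` registration call are assumptions based on the typical Microsoft.Extensions.AI pattern; adjust them to your installed packages and account:

```csharp
using Microsoft.Extensions.AI;
using OpenAI;

var builder = WebApplication.CreateBuilder(args);

// 1. Create an IChatClient backed by OpenAI (key and model name are placeholders).
string openAiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")!;
IChatClient chatClient = new OpenAIClient(openAiKey)
    .GetChatClient("gpt-4o-mini")   // assumed model name
    .AsIChatClient();

// 2. Register the chat client and the DevExpress AI Service.
builder.Services.AddChatClient(chatClient);
builder.Services.AddDevExpressAI();  // assumed DevExpress registration method
```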
**Azure OpenAI**

Install the following NuGet packages to your project:
Register the Azure OpenAI model in the project’s entry point class:
Register DevExpress AI Service in the project’s entry point class:
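Combined, the Azure OpenAI steps can be sketched as follows. The endpoint, key, and deployment-name variables are placeholders, and `AddDevExpressAI` is the assumed DevExpress registration call:

```csharp
using System.ClientModel;
using Azure.AI.OpenAI;
using Microsoft.Extensions.AI;

var builder = WebApplication.CreateBuilder(args);

// Create an IChatClient backed by an Azure OpenAI deployment.
AzureOpenAIClient azureClient = new AzureOpenAIClient(
    new Uri(azureOpenAiEndpoint),          // your Azure OpenAI endpoint
    new ApiKeyCredential(azureOpenAiKey)); // your API key
IChatClient chatClient = azureClient
    .GetChatClient(azureOpenAiModel)       // your deployment name
    .AsIChatClient();

builder.Services.AddChatClient(chatClient);
builder.Services.AddDevExpressAI();        // assumed DevExpress registration method
```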
**Self-hosted AI model**

Install the following NuGet packages to your project:
Register the self-hosted AI model in the project’s entry point class:
Register DevExpress AI Service in the project’s entry point class:
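As one concrete self-hosted scenario, the sketch below assumes a local Ollama instance exposing its default port and the `OllamaChatClient` from the Microsoft.Extensions.AI Ollama package; the model name and registration call are likewise assumptions:

```csharp
using Microsoft.Extensions.AI;

var builder = WebApplication.CreateBuilder(args);

// Point the chat client at a locally hosted model (Ollama's default port).
IChatClient chatClient = new OllamaChatClient(
    new Uri("http://localhost:11434"),
    "llama3");                           // assumed local model name

builder.Services.AddChatClient(chatClient);
builder.Services.AddDevExpressAI();      // assumed DevExpress registration method
```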
The Semantic Kernel SDK provides a common interface to interact with different AI services. The Kernel communicates with AI services through AI Connectors, which expose multiple AI service types from different providers.
Semantic Kernel works with an ecosystem of ready-to-use connectors which support leading AI models from OpenAI, Google, Anthropic, DeepSeek, Mistral AI, Hugging Face, and more. You can also build custom connectors for any other service, such as your in-house language models.
The following example connects DevExpress AI Service to Google Gemini through the Semantic Kernel SDK:
Note
The Google chat completion connector is currently experimental. To acknowledge this and use the feature, you must explicitly suppress the compiler warnings with the #pragma warning disable directive.
Sign in to Google AI Studio.
Create an API key.
Install the following NuGet packages to your project:
Register the Gemini model in the project’s entry point class:
Register DevExpress AI Service in the project’s entry point class:
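Put together, the Gemini connection might look like the sketch below. The `SKEXP0070` suppression id, the model id, and the `AsChatClient` adapter are assumptions based on current Semantic Kernel previews; verify them against your installed connector version:

```csharp
#pragma warning disable SKEXP0070 // acknowledge the experimental Google connector

using Microsoft.Extensions.AI;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var builder = WebApplication.CreateBuilder(args);

// Build a kernel with the Google Gemini chat completion connector.
Kernel kernel = Kernel.CreateBuilder()
    .AddGoogleAIGeminiChatCompletion(
        modelId: "gemini-1.5-flash",     // assumed model id
        apiKey: geminiApiKey)            // API key from Google AI Studio
    .Build();

// Adapt the Semantic Kernel service to Microsoft.Extensions.AI's IChatClient.
IChatClient chatClient = kernel
    .GetRequiredService<IChatCompletionService>()
    .AsChatClient();

builder.Services.AddChatClient(chatClient);
builder.Services.AddDevExpressAI();      // assumed DevExpress registration method
```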
To control the AI model’s behavior and creativity, set inference parameters using IChatClient options. These parameters are configured once when you register the IChatClient service in the project’s entry point class. The settings then apply to all DevExpress AI-powered features, ensuring a consistent tone and style across your app.
The following code snippet configures an Azure OpenAI client that is moderately creative, avoids repeating itself, and produces reasonably detailed but not excessively long responses:
```csharp
AzureOpenAIClient azureOpenAIClient = new AzureOpenAIClient(
    new Uri(azureOpenAiEndpoint),
    new ApiKeyCredential(azureOpenAiKey)
);
IChatClient azureOpenAIChatClient = azureOpenAIClient.GetChatClient(azureOpenAiModel).AsIChatClient();
IChatClient chatClient = new ChatClientBuilder(azureOpenAIChatClient)
    .ConfigureOptions(options => {
        options.Temperature = 0.7f;
        options.MaxOutputTokens = 1200;
        options.PresencePenalty = 0.5f;
    })
    .Build();
builder.Services.AddChatClient(chatClient);
```
Note
A specific IChatClient implementation might have its own internal representation of options. It may use a subset of options or ignore the provided options entirely.