# Testing AI features
This document highlights AI-specific testing considerations that complement GitLab's standard testing guidelines. It focuses on the challenges that AI features bring to testing, such as non-deterministic responses from third-party providers. Examples are included for each testing level.
AI-powered features depend on system components outside the GitLab monolith, such as the AI Gateway and IDE extensions. In addition to these guidelines, consult any testing guidelines documented in each component project.
## Unit tests

Follow standard unit testing guidelines. For AI features, always mock third-party AI provider calls to ensure fast, reliable tests.
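As an illustration of the mocking approach, here is a minimal, self-contained sketch using dependency injection. `CodeCompletionService` and `FakeAiClient` are hypothetical names for this example, not real GitLab classes:

```ruby
# Hypothetical service wrapping a third-party AI provider client.
class CodeCompletionService
  def initialize(client:)
    @client = client
  end

  def complete(prefix)
    response = @client.complete(prefix: prefix)
    response.fetch(:choices, []).first.to_s
  end
end

# The fake returns a canned, deterministic response:
# no network call, no non-deterministic model output.
class FakeAiClient
  def complete(prefix:)
    { choices: ["#{prefix}; end"] }
  end
end

service = CodeCompletionService.new(client: FakeAiClient.new)
service.complete("def hello") # => "def hello; end"
```

Because the provider client is injected, the unit test exercises only the service's own logic and stays deterministic regardless of what the real model would return.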
Examples:

- `ee/spec/lib/code_suggestions/tasks/code_completion_spec.rb`
- `code_suggestions/code_suggestions.test.ts`

## Integration tests

Use integration tests to verify request construction and response handling for AI providers. Mock AI provider responses to ensure predictable, fast tests that cover various responses, errors, and status codes.
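A hedged sketch of the response-handling side, with stubbed responses standing in for the provider. `FakeResponse` and `handle_provider_response` are illustrative names, not GitLab code:

```ruby
# Stand-in for an HTTP response object from the AI provider.
FakeResponse = Struct.new(:code, :body)

# Map provider status codes to outcomes the application can act on.
def handle_provider_response(response)
  case response.code
  when 200 then { ok: true, body: response.body }
  when 429 then { ok: false, error: :rate_limited }
  when 500..599 then { ok: false, error: :provider_error }
  else { ok: false, error: :unexpected_status }
  end
end

handle_provider_response(FakeResponse.new(200, '{"completion":"x"}'))
handle_provider_response(FakeResponse.new(429, ""))
handle_provider_response(FakeResponse.new(503, ""))
```

Stubbing at this boundary lets one integration test sweep through success, throttling, and server-error paths in milliseconds, which would be slow and flaky against a real provider.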
Examples:

- `ee/spec/requests/api/code_suggestions_spec.rb`
- `main/test/integration/chat.test.js`

## Feature tests

Use frontend feature tests to validate AI features from an end-user perspective. Mock AI providers to maintain speed and reliability. Focus on happy paths, with selective negative-path testing for high-risk scenarios.
Example:

- `ee/spec/features/duo_chat_spec.rb`

### Duo Agent Platform (DAP)

To test that DAP features are functional on a core feature page, and that core features remain functional with DAP components, use the following shared context and shared examples in a feature spec:
- `include_context 'with duo features enabled and agentic chat available for group on SaaS'` to load DAP components in a feature page by default.
- `it_behaves_like 'user can use agentic chat'` to test DAP features in a feature page.

For instance, `ee/spec/features/epic_boards/epic_boards_spec.rb` asserts the following scenario:
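A hedged sketch of how such a spec might combine the two (the setup code and route helper are illustrative assumptions; the actual spec contents differ):

```ruby
# Illustrative sketch only -- not the actual contents of
# ee/spec/features/epic_boards/epic_boards_spec.rb.
RSpec.describe 'epic boards', :js do
  include_context 'with duo features enabled and agentic chat available for group on SaaS'

  before do
    # Assumed setup: sign in and open the core feature page.
    sign_in(user)
    visit group_epic_boards_path(group) # hypothetical route helper
  end

  # Core feature assertions run with DAP components loaded by default...

  # ...and the shared examples verify DAP features work on this page.
  it_behaves_like 'user can use agentic chat'
end
```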
These feature tests also run when a change is made to the AI Gateway repository, to verify that an MR does not accidentally break DAP features, for example through the `aigw/test-branch` test branch. This branch points to the same SHA as `master`.

> [!note]
> The `aigw/test-branch` branch is unprotected by default, to allow AIGW and DWS maintainers to trigger downstream pipelines in the GitLab project.
To run these feature tests locally, run `gdk start` to start the services, including DWS. Then go to `<gdk-root>/gitlab` and use one of the following options:
- `export TEST_AI_GATEWAY_REPO_REF=<your-remote-feature-branch>` and delete the `<gitlab-rails-root>/tmp/tests/gitlab-ai-gateway/` cache directory, OR
- `export TEST_DUO_WORKFLOW_SERVICE_ENABLED="false" && export TEST_DUO_WORKFLOW_SERVICE_PORT=<your-local-dws-port>`.
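For the second option, the setup might look like the following in your shell; the port value `50052` is an assumption, so use whichever port your local DWS actually listens on:

```shell
# Skip the bundled test DWS and point the suite at a locally running one.
export TEST_DUO_WORKFLOW_SERVICE_ENABLED="false"
# Assumption: replace 50052 with your local DWS port.
export TEST_DUO_WORKFLOW_SERVICE_PORT=50052
```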
The second option makes the feature tests send requests to your local DWS instance. Make sure your local DWS is running with the following configuration:
- `AIGW_MOCK_MODEL_RESPONSES` set to `true`
- `AIGW_USE_AGENTIC_MOCK` set to `true`

Then run the spec, for example `bundle exec rspec ee/spec/features/epic_boards/epic_boards_spec.rb`.

DAP consists of multiple services and API calls. To debug a test failure, you may need to examine service logs to identify the root cause. Here are a couple of pointers:
- GitLab-Rails REST API: `log/api_json.log`
- GitLab-Rails GraphQL API: `log/graphql_json.log`
- GitLab-Workhorse: `log/workhorse-test.log`
- DWS: either stdout, or `DUO_WORKFLOW_LOGGING__TO_FILE` in the `gitlab-ai-gateway` repository.
You can also examine the state of the Vue.js app by printing the browser console output:
```ruby
it 'runs a test' do
  ...

  # This prints the browser logs. Combine with `console.log()` in JavaScript.
  browser_logs.each do |log|
    puts "#{log.level}: #{log.message}"
  end

  ...
end
```
## End-to-end tests

Use end-to-end tests sparingly to verify AI features work with real provider responses. Key considerations:
- Use the `gitlab-qa` orchestrator with AI Gateway scenarios to test AI features on GitLab Self-Managed instances.

Examples:

- `specs/features/ee/browser_ui/3_create/web_ide/code_suggestions_in_web_ide_spec.rb`
- `test/kotlin/com/gitlab/plugin/e2eTest/tests/CodeSuggestionTest.kt`

## Exploratory testing

Perform exploratory testing before significant milestones to uncover bugs outside expected workflows, as well as UX issues. This is especially important for AI features as they progress through the experiment, beta, and GA phases.
## Dogfooding

We dogfood everything. This is especially important for AI features, given the rapidly changing nature of the field. See the dogfooding process for details.