# Logging
In addition to standard logging in the GitLab Rails Monolith instance, specialized logging is available for features based on large language models (LLMs).
For a list of the logged events, see the logged events documentation.
To implement LLM-specific logging, use the `Gitlab::Llm::Logger` class.
Important: User inputs and complete prompts containing user data must not be logged unless explicitly permitted.
Logging of sensitive data is controlled by the `expanded_ai_logging` feature flag and by the `::Ai::Setting.instance.enabled_instance_verbose_ai_logs` instance setting.
Use the `conditional_info` helper method for conditional logging based on the status of the feature flag or the instance setting:
- If enabled, the information is logged on `info` level (logs are accessible in Kibana).
- If disabled, the information is logged on `info` level, but without optional parameters (logs are accessible in Kibana, but only obligatory fields).

When implementing logging for LLM features, consider the following:
- Use the `conditional_info` helper method to respect the `expanded_ai_logging` feature flag.

For example:

```ruby
# Include the concern that handles logging
include Gitlab::Llm::Concerns::Logger

# Logging potentially sensitive information
log_conditional_info(user, message: "User prompt processed", event_name: 'ai_event', ai_component: 'abstraction_layer', prompt: sanitized_prompt)

# Logging application error information
log_error(user, message: "System application error", event_name: 'ai_event', ai_component: 'abstraction_layer', error_message: sanitized_error_message)
```
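To illustrate the enabled versus disabled behavior described above, the following is a simplified, self-contained sketch. The method and constant names are hypothetical and not the actual `Gitlab::Llm` implementation; it only demonstrates the idea of dropping optional fields when verbose AI logging is off:

```ruby
# Hypothetical sketch: when verbose AI logging is disabled, only the
# obligatory fields of the payload are kept before the log line is emitted.
OBLIGATORY_FIELDS = [:message, :event_name, :ai_component].freeze

def log_conditional_info(payload, verbose_enabled:)
  # Drop optional parameters (such as the prompt) unless verbose logging is on.
  payload = payload.slice(*OBLIGATORY_FIELDS) unless verbose_enabled
  payload
end

full_payload = {
  message: "User prompt processed",
  event_name: "ai_event",
  ai_component: "abstraction_layer",
  prompt: "potentially sensitive user text"
}

log_conditional_info(full_payload, verbose_enabled: false)
# The :prompt key is dropped; only the obligatory fields remain.
```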
Important: Familiarize yourself with our Data Retention Policy, and make sure that user input and LLM-generated output are not logged unless explicitly permitted.
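One way to honor this rule when logging errors is to avoid passing exception messages through verbatim, because they can embed user input or LLM output. The helper below is a hypothetical illustration (not part of the GitLab codebase) of keeping only the exception class:

```ruby
# Hypothetical helper: log only the exception class, because the exception
# message may contain user input or LLM-generated output.
def sanitized_error_message(error)
  error.class.to_s
end

begin
  raise ArgumentError, "user typed: some secret prompt"
rescue StandardError => e
  sanitized_error_message(e) # => "ArgumentError"
end
```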