LLM Configuration

plugins/ruflo-ruvllm/skills/llm-config/SKILL.md

Configure RuVLLM for local inference and fine-tuning.

When to use

When you need to configure local LLM inference, create MicroLoRA adapters for task-specific fine-tuning, or set up SONA for real-time adaptation.

Steps

  1. Check status — call mcp__claude-flow__ruvllm_status to see current model and adapter state
  2. Generate config — call mcp__claude-flow__ruvllm_generate_config with model parameters
  3. Create MicroLoRA — call mcp__claude-flow__ruvllm_microlora_create for task-specific adapters
  4. Adapt MicroLoRA — call mcp__claude-flow__ruvllm_microlora_adapt with training data
  5. Create SONA — call mcp__claude-flow__ruvllm_sona_create for real-time neural adaptation
  6. Adapt SONA — call mcp__claude-flow__ruvllm_sona_adapt with feedback signals
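
The six steps above can be sketched as an ordered sequence of MCP tool calls. The tool names come from the steps; every parameter name below (`model`, `task`, `examples`, `signals`) is an illustrative assumption, not the tools' documented schemas:

```python
# Hypothetical sketch of the LLM-configuration workflow as MCP tool calls.
# Only the tool names are from the skill doc; all argument names are
# assumptions for illustration.

def build_workflow(model: str, task: str) -> list[dict]:
    """Return the ordered tool calls for configuring RuVLLM."""
    return [
        # 1. Inspect current model and adapter state
        {"tool": "mcp__claude-flow__ruvllm_status", "args": {}},
        # 2. Generate an inference config for the chosen model
        {"tool": "mcp__claude-flow__ruvllm_generate_config", "args": {"model": model}},
        # 3-4. Create a task-specific MicroLoRA adapter, then train it
        {"tool": "mcp__claude-flow__ruvllm_microlora_create", "args": {"task": task}},
        {"tool": "mcp__claude-flow__ruvllm_microlora_adapt", "args": {"task": task, "examples": []}},
        # 5-6. Create SONA, then feed it real-time feedback signals
        {"tool": "mcp__claude-flow__ruvllm_sona_create", "args": {}},
        {"tool": "mcp__claude-flow__ruvllm_sona_adapt", "args": {"signals": []}},
    ]

calls = build_workflow("example-model", "code-review")
print(len(calls))  # → 6
```

The ordering matters: status and config come first so that adapter creation targets a known model, and each `*_adapt` call follows its matching `*_create`.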

MicroLoRA vs SONA

| Feature | MicroLoRA | SONA |
| --- | --- | --- |
| Speed | Minutes to train | <0.05 ms adaptation |
| Scope | Task-specific fine-tuning | Real-time micro-adjustments |
| Persistence | Saved as adapter weights | Session-scoped |
| Use case | Specialized domain tasks | Continuous feedback loops |
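
The table's decision criteria can be condensed into a small chooser. This helper is purely a sketch of the trade-off, not part of RuVLLM; the function name and parameters are invented for illustration:

```python
# Illustrative decision helper encoding the comparison table:
# SONA for fast, session-scoped micro-adjustments; MicroLoRA when the
# adaptation must persist as saved adapter weights.

def choose_adapter(needs_persistence: bool, realtime: bool) -> str:
    if realtime and not needs_persistence:
        return "SONA"       # <0.05 ms adaptation, session-scoped
    return "MicroLoRA"      # minutes to train, saved as adapter weights

print(choose_adapter(needs_persistence=True, realtime=False))   # → MicroLoRA
print(choose_adapter(needs_persistence=False, realtime=True))   # → SONA
```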