plugins/ruflo-ruvllm/skills/llm-config/SKILL.md
Configure RuVLLM for local inference and fine-tuning.
Use this skill when you need to configure local LLM inference, create MicroLoRA adapters for task-specific fine-tuning, or set up SONA for real-time adaptation.
- `mcp__claude-flow__ruvllm_status` to see current model and adapter state
- `mcp__claude-flow__ruvllm_generate_config` with model parameters
- `mcp__claude-flow__ruvllm_microlora_create` for task-specific adapters
- `mcp__claude-flow__ruvllm_microlora_adapt` with training data
- `mcp__claude-flow__ruvllm_sona_create` for real-time neural adaptation
- `mcp__claude-flow__ruvllm_sona_adapt` with feedback signals

| Feature | MicroLoRA | SONA |
|---|---|---|
| Speed | Minutes to train | <0.05ms adaptation |
| Scope | Task-specific fine-tuning | Real-time micro-adjustments |
| Persistence | Saved as adapter weights | Session-scoped |
| Use case | Specialized domain tasks | Continuous feedback loops |
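To make the tool list concrete, here is an illustrative sketch of what a MicroLoRA adapter-creation call might look like. The tool name is taken from the list above, but every argument name and value below is a hypothetical placeholder, not the confirmed schema — check `mcp__claude-flow__ruvllm_status` output and the tool's actual parameter documentation before relying on any field.

```json
{
  "tool": "mcp__claude-flow__ruvllm_microlora_create",
  "arguments": {
    "task": "sql-generation",
    "base_model": "llama-3.1-8b",
    "rank": 8
  }
}
```

A SONA session would differ in lifecycle: per the table above, adapters created this way persist as saved weights, while a `ruvllm_sona_create` session is scoped to the current session and adapts continuously from feedback signals.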