📡 RuView


Beta Software — Under active development. APIs and firmware may change. Known limitations:

  • ESP32-C3 and original ESP32 are not supported (single-core, insufficient for CSI DSP)
  • Single ESP32 deployments have limited spatial resolution — use 2+ nodes or add a Cognitum Seed for best results
  • Camera-free pose accuracy is limited (2.5% PCK@20) — camera-labeled data significantly improves accuracy

Contributions and bug reports welcome at Issues.

See through walls with WiFi

Turn ordinary WiFi into a sensing system. Detect people, measure breathing and heart rate, track movement, and monitor rooms — through walls, in the dark, with no cameras or wearables. Just physics.

π RuView is a WiFi sensing platform that turns radio signals into spatial intelligence.

Every WiFi router already fills your space with radio waves. When people move, breathe, or even sit still, they disturb those waves in measurable ways. RuView captures these disturbances using Channel State Information (CSI) from low-cost ESP32 sensors and turns them into actionable data: who's there, what they're doing, and whether they're okay.

What it senses:

  • Presence and occupancy — detect people through walls, count them, track entries and exits
  • Vital signs — breathing rate and heart rate, contactless, while sleeping or sitting
  • Activity recognition — walking, sitting, gestures, falls — from temporal CSI patterns
  • Environment mapping — RF fingerprinting identifies rooms, detects moved furniture, spots new objects
  • Sleep quality — overnight monitoring with sleep stage classification and apnea screening

Built on RuVector and Cognitum Seed, RuView runs entirely on edge hardware — an ESP32 mesh (as low as $9 per node) paired with a Cognitum Seed for persistent memory, cryptographic attestation, and AI integration. No cloud, no cameras, no internet required.

The system learns each environment locally using spiking neural networks that adapt in under 30 seconds, with multi-frequency mesh scanning across 6 WiFi channels that uses your neighbors' routers as free radar illuminators. Every measurement is cryptographically attested via an Ed25519 witness chain.

RuView also supports pose estimation (17 COCO keypoints via the WiFlow architecture), trained entirely without cameras using 10 sensor signals — a technique building on the original DensePose From WiFi research at Carnegie Mellon University.

Built for low-power edge applications

Edge modules are small programs that run directly on the ESP32 sensor — no internet needed, no cloud fees, instant response.

| What | How | Speed |
|---|---|---|
| Pose estimation | CSI subcarrier amplitude/phase → 17 COCO keypoints | 171K emb/s (M4 Pro) |
| Breathing detection | Bandpass 0.1-0.5 Hz → zero-crossing BPM (see the sketch below) | 6-30 BPM |
| Heart rate | Bandpass 0.8-2.0 Hz → zero-crossing BPM | 40-120 BPM |
| Presence sensing | Trained model + PIR fusion — 100% accuracy | 0.012 ms latency |
| Through-wall | Fresnel zone geometry + multipath modeling | Up to 5m depth |
| Edge intelligence | 8-dim feature vectors + RVF store on Cognitum Seed | $140 total BOM |
| Camera-free training | 10 sensor signals, no labels needed | 84s on M4 Pro |
| Multi-frequency mesh | Channel hopping across 6 bands, neighbor APs as illuminators | 3x sensing bandwidth |
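
The breathing and heart-rate rows share one recipe: band-limit a subcarrier's amplitude trace, then convert zero crossings of the filtered signal into a rate. A minimal sketch of that idea, assuming a 20 Hz single-subcarrier stream — the function name and filter order are illustrative, not the repo's API:

```python
# Hedged sketch: bandpass + zero-crossing rate → BPM (not the firmware's code).
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_bpm(amplitude, fs=20.0, lo=0.1, hi=0.5):
    """amplitude: 1-D CSI amplitude trace sampled at fs Hz.
    lo/hi: passband in Hz — 0.1-0.5 for breathing, 0.8-2.0 for heart rate."""
    b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    x = filtfilt(b, a, amplitude - np.mean(amplitude))
    crossings = np.sum(np.diff(np.signbit(x).astype(int)) != 0)
    minutes = len(x) / fs / 60.0
    return crossings / 2.0 / minutes  # two zero crossings per oscillation
```
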
```bash
# Option 1: Docker (simulated data, no hardware needed)
docker pull ruvnet/wifi-densepose:latest
docker run -p 3000:3000 ruvnet/wifi-densepose:latest
# Open http://localhost:3000

# Option 2: Live sensing with ESP32-S3 hardware ($9)
# Flash firmware, provision WiFi, and start sensing:
python -m esptool --chip esp32s3 --port COM9 --baud 460800 \
  write_flash 0x0 bootloader.bin 0x8000 partition-table.bin \
  0xf000 ota_data_initial.bin 0x20000 esp32-csi-node.bin
python firmware/esp32-csi-node/provision.py --port COM9 \
  --ssid "YourWiFi" --password "secret" --target-ip 192.168.1.20

# Option 3: Full system with Cognitum Seed ($140)
# ESP32 streams CSI → bridge forwards to Seed for persistent storage + kNN + witness chain
node scripts/rf-scan.js --port 5006           # Live RF room scan
node scripts/snn-csi-processor.js --port 5006  # SNN real-time learning
node scripts/mincut-person-counter.js --port 5006  # Correct person counting
```

> [!NOTE]
> CSI-capable hardware recommended. Presence, vital signs, through-wall sensing, and all advanced capabilities require Channel State Information (CSI) from an ESP32-S3 ($9) or research NIC. The Docker image runs with simulated data for evaluation. Consumer WiFi laptops provide RSSI-only presence detection.

Hardware options for live CSI capture:

| Option | Hardware | Cost | Full CSI | Capabilities |
|---|---|---|---|---|
| ESP32 + Cognitum Seed (recommended) | ESP32-S3 + Cognitum Seed | ~$140 | Yes | Pose, breathing, heartbeat, motion, presence + persistent vector store, kNN search, witness chain, MCP proxy |
| ESP32 Mesh | 3-6x ESP32-S3 + WiFi router | ~$54 | Yes | Pose, breathing, heartbeat, motion, presence |
| Research NIC | Intel 5300 / Atheros AR9580 | ~$50-100 | Yes | Full CSI with 3x3 MIMO |
| Any WiFi | Windows, macOS, or Linux laptop | $0 | No | RSSI-only: coarse presence and motion |

No hardware? Verify the signal processing pipeline with the deterministic reference signal: `python v1/data/proof/verify.py`


Pre-Trained Models (v0.6.0) — No Training Required

<details open> <summary><strong>Download from HuggingFace and start sensing immediately</strong></summary>

Pre-trained models are available on HuggingFace:

https://huggingface.co/ruv/ruview (primary) | mirror

Trained on 60,630 real-world samples from an 8-hour overnight collection. Just download and run — no datasets, no GPU, no training needed.

| Model | Size | What it does |
|---|---|---|
| `model.safetensors` | 48 KB | Contrastive encoder — 128-dim embeddings for presence, activity, environment |
| `model-q4.bin` | 8 KB | 4-bit quantized — fits in ESP32-S3 SRAM for edge inference |
| `model-q2.bin` | 4 KB | 2-bit ultra-compact for memory-constrained devices |
| `presence-head.json` | 2.6 KB | 100% accurate presence detection head |
| `node-1.json` / `node-2.json` | 21 KB | Per-room LoRA adapters (swap for new rooms) |
```bash
# Download and use (Python)
pip install huggingface_hub
huggingface-cli download ruv/ruview --local-dir models/

# Or use directly with the sensing pipeline
node scripts/train-ruvllm.js --data data/recordings/*.csi.jsonl  # retrain on your own data
node scripts/benchmark-ruvllm.js --model models/csi-ruvllm       # benchmark
```

Benchmarks (Apple M4 Pro, retrained on overnight data):

| What we measured | Result | Why it matters |
|---|---|---|
| Presence detection | 100% accuracy | Never misses a person, never false alarms |
| Inference speed | 0.008 ms per embedding | 125,000x faster than real-time |
| Throughput | 164,183 embeddings/sec | One Mac Mini handles 1,600+ ESP32 nodes |
| Contrastive learning | 51.6% improvement | Strong pattern learning from real overnight data |
| Model size | 8 KB (4-bit quantized) | Fits in ESP32 SRAM — no server needed |
| Total hardware cost | $140 | ESP32 ($9) + Cognitum Seed ($131) |
</details>

17 Sensing Applications (v0.6.0)

<details> <summary><strong>Health, environment, security, and multi-frequency mesh sensing</strong></summary>

All applications run from a single ESP32 + optional Cognitum Seed. No camera, no cloud, no internet.

Health & Wellness:

| Application | Script | What it detects |
|---|---|---|
| Sleep Monitor | `node scripts/sleep-monitor.js` | Sleep stages (deep/light/REM/awake), efficiency, hypnogram |
| Apnea Detector | `node scripts/apnea-detector.js` | Breathing pauses >10s, AHI severity scoring |
| Stress Monitor | `node scripts/stress-monitor.js` | Heart rate variability, LF/HF stress ratio |
| Gait Analyzer | `node scripts/gait-analyzer.js` | Walking cadence, stride asymmetry, tremor detection |

Environment & Security:

| Application | Script | What it detects |
|---|---|---|
| Person Counter | `node scripts/mincut-person-counter.js` | Correct occupancy count (fixes #348) |
| Room Fingerprint | `node scripts/room-fingerprint.js` | Activity state clustering, daily patterns, anomalies |
| Material Detector | `node scripts/material-detector.js` | New/moved objects via subcarrier null changes |
| Device Fingerprint | `node scripts/device-fingerprint.js` | Electronic device activity (printer, router, etc.) |

Multi-Frequency Mesh (requires `--hop-channels` provisioning):

| Application | Script | What it detects |
|---|---|---|
| RF Tomography | `node scripts/rf-tomography.js` | 2D room imaging via RF backprojection |
| Passive Radar | `node scripts/passive-radar.js` | Neighbor WiFi APs as bistatic radar illuminators |
| Material Classifier | `node scripts/material-classifier.js` | Metal/water/wood/glass from frequency response |
| Through-Wall | `node scripts/through-wall-detector.js` | Motion behind walls using lower-frequency penetration |

All scripts support `--replay data/recordings/*.csi.jsonl` for offline analysis and `--json` for programmatic output.

</details>

What's New in v0.5.5

<details> <summary><strong>Advanced Sensing: SNN + MinCut + WiFlow + Multi-Frequency Mesh</strong></summary>

v0.5.5 adds five new sensing capabilities built on the ruvector ecosystem:

| Capability | What it does | ADR |
|---|---|---|
| Spiking Neural Network | Adapts to your room in <30s with STDP online learning — no labels, no batches, 16-160x less compute (sketch below) | ADR-074 |
| MinCut Person Counting | Stoer-Wagner min-cut on subcarrier correlation graph — fixes #348 (was always 4, now correct; sketch at the end of this section) | ADR-075 |
| CNN Spectrogram Embeddings | Treat CSI as a 64×20 image → 128-dim embedding for environment fingerprinting (0.95+ similarity) | ADR-076 |
| WiFlow SOTA Architecture | TCN + axial attention + pose decoder → 17 COCO keypoints, 1.8M params (881 KB at 4-bit) | ADR-072 |
| Multi-Frequency Mesh | Channel hopping across 6 bands, neighbor WiFi as passive radar illuminators | ADR-073 |
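
The SNN row is the only online learner in the table. Its core update is spike-timing-dependent plasticity: a synapse strengthens when a presynaptic spike precedes the postsynaptic one. A minimal sketch of exponential STDP — the constants are illustrative assumptions, not ADR-074's values:

```python
# Hedged STDP sketch; constants and function name are illustrative.
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Return the updated synaptic weight for one pre/post spike pairing."""
    dt = t_post - t_pre                       # ms; positive = causal pairing
    if dt > 0:
        w += a_plus * np.exp(-dt / tau_ms)    # pre before post → potentiate
    else:
        w -= a_minus * np.exp(dt / tau_ms)    # post before pre → depress
    return float(np.clip(w, 0.0, 1.0))        # keep weights bounded
```
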
```bash
# Live RF room scan (spectrum visualization)
node scripts/rf-scan.js --port 5006 --duration 30

# Correct person counting (fixes #348)
node scripts/mincut-person-counter.js --port 5006

# SNN real-time adaptation
node scripts/snn-csi-processor.js --port 5006

# CNN spectrogram embeddings
node scripts/csi-spectrogram.js --replay data/recordings/*.csi.jsonl

# WiFlow 17-keypoint pose training
node scripts/train-wiflow.js --data data/recordings/*.csi.jsonl

# Enable channel hopping on ESP32
python firmware/esp32-csi-node/provision.py --port COM9 --hop-channels "1,6,11"
```

Validated benchmarks:

| Metric | v0.5.4 | v0.5.5 |
|---|---|---|
| Person counting | Broken (always 4) | Correct (MinCut, 24/24) |
| WiFi channels | 1 | 6 (multi-freq hopping) |
| Null subcarriers | 19% blocked | 16% (frequency diversity) |
| Pose model | 16K params (FC only) | 1.8M params (WiFlow) |
| Online adaptation | None | <30s (SNN STDP) |
| Fingerprint dims | 8 | 128 (CNN spectrogram) |
| Multi-node fusion | Average | GATv2 attention |
| New scripts | 0 | 15+ |
| New ADRs | 3 | 8 (069-076) |
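
For the person-counting fix, ADR-075 runs Stoer-Wagner min-cut on a subcarrier correlation graph. A sketch of that idea using networkx — the cut threshold and the recursive-splitting rule are assumptions for illustration, not the shipped algorithm:

```python
# Hedged sketch of MinCut-based counting (see ADR-075 for the real method).
import numpy as np
import networkx as nx

def count_people(corr, cut_threshold=2.0):
    """corr: (n, n) matrix of absolute subcarrier correlations in [0, 1]."""
    g = nx.complete_graph(corr.shape[0])
    for i, j in g.edges:
        g[i][j]["weight"] = float(abs(corr[i, j]))

    def split(graph):
        if graph.number_of_nodes() < 2:
            return 1
        cut_value, (a, b) = nx.stoer_wagner(graph)
        if cut_value > cut_threshold:   # tightly coupled → one moving source
            return 1
        return split(graph.subgraph(a).copy()) + split(graph.subgraph(b).copy())

    return split(g)
```
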
</details>

What's New in v0.5.4

<details> <summary><strong>Cognitum Seed Integration + Camera-Free Pose Training</strong></summary>

v0.5.4 transforms RuView from a real-time sensing tool into a persistent edge AI system. Your ESP32 now remembers what it senses, learns without cameras, and proves its data cryptographically.

| Capability | Details | Hardware |
|---|---|---|
| Persistent vector store | Every sensing event stored as searchable 8-dim vector in RVF format | ESP32 + Cognitum Seed ($140) |
| kNN similarity search | "Find the 10 most similar states to right now" — anomaly detection, fingerprinting (see the sketch below) | Cognitum Seed |
| Witness chain | SHA-256 tamper-evident audit trail for every measurement (1,747 entries validated) | Cognitum Seed |
| Camera-free pose training | 17 COCO keypoints from 10 sensor signals — PIR, RSSI triangulation, subcarrier asymmetry, vibration, BME280 | 2x ESP32 + Seed |
| Pre-trained model | 82.8 KB (8 KB at 4-bit quantization), 100% presence accuracy, 0 skeleton violations | Download from release |
| Sub-ms inference | 0.012 ms latency, 171,472 embeddings/sec on M4 Pro | Any machine with Node.js |
| SONA adaptation | Adapts to new rooms in <1ms without retraining | ruvllm runtime |
| LoRA room adapters | Per-node fine-tuning with 2,048 parameters per adapter | Automatic |
| 114-tool MCP proxy | AI assistants (Claude, GPT) query sensors directly via JSON-RPC | Cognitum Seed |
| Multi-frequency mesh | Channel hopping across ch 1/3/5/6/9/11 — neighbor WiFi as passive radar | 2x ESP32 ($18) |
| RF room scanner | Real-time spectrum visualization: nulls, reflectors, movement, multipath | `node scripts/rf-scan.js` |
| Security hardened | Bearer tokens, TLS, source IP filtering, NaN rejection, credential rotation | All components |
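
The kNN row boils down to a nearest-neighbor query over stored 8-dim sensing vectors. A brute-force numpy stand-in for that query (the Seed itself uses its RVF store and index, not this code):

```python
# Hedged sketch of the "10 most similar states" query.
import numpy as np

def nearest_states(query, store, k=10):
    """query: (8,) current feature vector; store: (n, 8) past sensing events."""
    dists = np.linalg.norm(store - query, axis=1)
    order = np.argsort(dists)[:k]
    return order, dists[order]  # indices + distances of the k closest states

# Anomaly heuristic: if even the closest stored state is far away,
# the current state is novel — the anomaly-detection use in the table above.
```
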

Training pipeline (ruvllm, no PyTorch needed):

```bash
# Collect data (2 min, ESP32s must be streaming)
python scripts/collect-training-data.py --port 5006 --duration 120

# Train — contrastive pretraining + task heads + LoRA + quantization + EWC
node scripts/train-ruvllm.js --data data/recordings/pretrain-*.csi.jsonl

# Camera-free 17-keypoint pose (uses PIR + RSSI + vibration + subcarrier asymmetry)
node scripts/train-camera-free.js --data data/recordings/pretrain-*.csi.jsonl

# Benchmark
node scripts/benchmark-ruvllm.js --model models/csi-ruvllm
```

Benchmarks — validated on real hardware (Apple M4 Pro + ESP32-S3 + Cognitum Seed):

| What we measured | Result | Why it matters |
|---|---|---|
| Presence detection | 100% accuracy | Never misses a person, never false alarms |
| Person counting | 24/24 correct (MinCut) | Fixed the #1 user-reported issue |
| Inference speed | 0.012 ms per embedding | 83,000x faster than real-time |
| Throughput | 171,472 embeddings/sec | One Mac Mini handles 1,700+ ESP32 nodes |
| Training time | 84 seconds | From zero to trained model in under 2 minutes |
| Contrastive learning | 33.9% improvement | Model learns meaningful patterns from CSI |
| Model size | 8 KB (4-bit quantized) | Fits in ESP32 SRAM — no server needed |
| Skeleton physics | 0 violations in 100 frames | Every pose is anatomically valid |
| Pose keypoints | 17 COCO keypoints | Full body pose, no camera required |
| WiFi channels | 6 simultaneous | 3x more sensing data than single-channel |
| Online adaptation | <30 seconds (SNN) | Learns a new room without retraining |
| Witness chain | 2,547 entries verified (verification sketch below) | Cryptographic proof every measurement is real |
| Test suite | 1,463 tests passed | Rock-solid foundation |
| Total hardware cost | $140 | ESP32 ($9) + Cognitum Seed ($131) |

See ADR-069, ADR-071, and the Cognitum Seed tutorial for full details.
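
The witness chain is an append-only hash chain: each entry commits to the hash of the previous one, so tampering anywhere invalidates every later link. A minimal verification sketch — the record layout here is an assumption for illustration, not the Seed's on-disk format:

```python
# Hedged sketch of SHA-256 witness-chain verification.
import hashlib, json

def verify_chain(entries):
    """entries: list of dicts with 'payload' and 'prev' (hex of previous hash)."""
    prev = "0" * 64  # genesis sentinel (assumed)
    for e in entries:
        if e["prev"] != prev:
            return False  # a link was edited or reordered
        prev = hashlib.sha256(
            (e["prev"] + json.dumps(e["payload"], sort_keys=True)).encode()
        ).hexdigest()
    return True
```
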

</details>

📖 Documentation

| Document | Description |
|---|---|
| User Guide | Step-by-step guide: installation, first run, API usage, hardware setup, training |
| Build Guide | Building from source (Rust and Python) |
| Architecture Decisions | 62 ADRs — why each technical choice was made, organized by domain (hardware, signal processing, ML, platform, infrastructure) |
| Domain Models | 7 DDD models (RuvSense, Signal Processing, Training Pipeline, Hardware Platform, Sensing Server, WiFi-Mat, CHCI) — bounded contexts, aggregates, domain events, and ubiquitous language |
| Desktop App | WIP — Tauri v2 desktop app for node management, OTA updates, WASM deployment, and mesh visualization |
| Medical Examples | Contactless blood pressure, heart rate, breathing rate via 60 GHz mmWave radar — $15 hardware, no wearable |


<em>Real-time pose skeleton from WiFi CSI signals — no cameras, no wearables</em>

<a href="https://ruvnet.github.io/RuView/"><strong>▶ Live Observatory Demo</strong></a>  |  <a href="https://ruvnet.github.io/RuView/pose-fusion.html"><strong>▶ Dual-Modal Pose Fusion Demo</strong></a>

The server is optional for visualization and aggregation — the ESP32 runs independently for presence detection, vital signs, and fall alerts.

Live ESP32 pipeline: Connect an ESP32-S3 node → run the sensing server → open the pose fusion demo for real-time dual-modal pose estimation (webcam + WiFi CSI). See ADR-059.

🚀 Key Features

Sensing

See people, breathing, and heartbeats through walls — using only WiFi signals already in the room.

| Feature | What It Means |
|---|---|
| 🔒 Privacy-First | Tracks human pose using only WiFi signals — no cameras, no video, no images stored |
| 💓 Vital Signs | Detects breathing rate (6-30 breaths/min) and heart rate (40-120 bpm) without any wearable |
| 👥 Multi-Person | Tracks multiple people simultaneously, each with independent pose and vitals — no hard software limit (physics: ~3-5 per AP with 56 subcarriers, more with multi-AP) |
| 🧱 Through-Wall | WiFi passes through walls, furniture, and debris — works where cameras cannot |
| 🚑 Disaster Response | Detects trapped survivors through rubble and classifies injury severity (START triage) |
| 📡 Multistatic Mesh | 4-6 low-cost sensor nodes work together, combining 12+ overlapping signal paths for full 360-degree room coverage with sub-inch accuracy and no person mix-ups (ADR-029) |
| 🌐 Persistent Field Model | The system learns the RF signature of each room — then subtracts the room to isolate human motion, detect drift over days, predict intent before movement starts, and flag spoofing attempts (ADR-030) |

Intelligence

The system learns on its own and gets smarter over time — no hand-tuning, no labeled data required.

| Feature | What It Means |
|---|---|
| 🧠 Self-Learning | Teaches itself from raw WiFi data — no labeled training sets, no cameras needed to bootstrap (ADR-024) |
| 🎯 AI Signal Processing | Attention networks, graph algorithms, and smart compression replace hand-tuned thresholds — adapts to each room automatically (RuVector) |
| 🌍 Works Everywhere | Train once, deploy in any room — adversarial domain generalization strips environment bias so models transfer across rooms, buildings, and hardware (ADR-027) |
| 👁️ Cross-Viewpoint Fusion | AI combines what each sensor sees from its own angle — fills in blind spots and depth ambiguity that no single viewpoint can resolve on its own (ADR-031) |
| 🔮 Signal-Line Protocol | A 6-stage processing pipeline transforms raw WiFi signals into structured body representations — from signal cleanup through graph-based spatial reasoning to final pose output (ADR-033) |
| 🔒 QUIC Mesh Security | All sensor-to-sensor communication is encrypted end-to-end with tamper detection, replay protection, and seamless reconnection if a node moves or drops offline (ADR-032) |
| 🎯 Adaptive Classifier | Records labeled CSI sessions, trains a 15-feature logistic regression model in pure Rust, and learns your room's unique signal characteristics — replaces hand-tuned thresholds with data-driven classification (ADR-048) |

Performance & Deployment

Fast enough for real-time use, small enough for edge devices, simple enough for one-command setup.

| Feature | What It Means |
|---|---|
| Real-Time | Analyzes WiFi signals in under 100 microseconds per frame — fast enough for live monitoring |
| 🦀 810x Faster | Complete Rust rewrite: 54,000 frames/sec pipeline, multi-arch Docker image, 1,031+ tests |
| 🐳 One-Command Setup | `docker pull ruvnet/wifi-densepose:latest` — live sensing in 30 seconds, no toolchain needed (amd64 + arm64 / Apple Silicon) |
| 📡 Fully Local | Runs completely on a $9 ESP32 — no internet connection, no cloud account, no recurring fees. Detects presence, vital signs, and falls on-device with instant response |
| 📦 Portable Models | Trained models package into a single .rvf file — runs on edge, cloud, or browser (WASM) |
| 🔭 Observatory Visualization | Cinematic Three.js dashboard with 5 holographic panels — subcarrier manifold, vital signs oracle, presence heatmap, phase constellation, convergence engine — all driven by live or demo CSI data (ADR-047) |
| 📟 AMOLED Display | ESP32-S3 boards with built-in AMOLED screens show real-time presence, vital signs, and room status directly on the sensor — no phone or PC needed (ADR-045) |

🔬 How It Works

WiFi routers flood every room with radio waves. When a person moves — or even breathes — those waves scatter differently. WiFi DensePose reads that scattering pattern and reconstructs what happened:

```
WiFi Router → radio waves pass through room → hit human body → scatter
    ↓
ESP32 mesh (4-6 nodes) captures CSI on channels 1/6/11 via TDM protocol
    ↓
Multi-Band Fusion: 3 channels × 56 subcarriers = 168 virtual subcarriers per link
    ↓
Multistatic Fusion: N×(N-1) links → attention-weighted cross-viewpoint embedding
    ↓
Coherence Gate: accept/reject measurements → stable for days without tuning
    ↓
Signal Processing: Hampel, SpotFi, Fresnel, BVP, spectrogram → clean features
    ↓
AI Backbone (RuVector): attention, graph algorithms, compression, field model
    ↓
Signal-Line Protocol (CRV): 6-stage gestalt → sensory → topology → coherence → search → model
    ↓
Neural Network: processed signals → 17 body keypoints + vital signs + room model
    ↓
Output: real-time pose, breathing, heart rate, room fingerprint, drift alerts
```
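
As a concrete taste of the Signal Processing stage above, here is a minimal Hampel filter — the classic despiking step named in the pipeline. The window length and threshold are illustrative defaults, not the pipeline's tuned values:

```python
# Hedged Hampel-filter sketch: replace samples more than n_sigma
# scaled-MADs away from the local median.
import numpy as np

def hampel(x, window=5, n_sigma=3.0):
    x = x.astype(float).copy()
    k = 1.4826  # scales MAD to a Gaussian sigma estimate
    for i in range(len(x)):
        lo, hi = max(0, i - window), min(len(x), i + window + 1)
        med = np.median(x[lo:hi])
        mad = k * np.median(np.abs(x[lo:hi] - med))
        if mad > 0 and abs(x[i] - med) > n_sigma * mad:
            x[i] = med  # replace the spike with the local median
    return x
```
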

No training cameras required — the Self-Learning system (ADR-024) bootstraps from raw WiFi data alone. MERIDIAN (ADR-027) ensures the model works in any room, not just the one it trained in.


🏢 Use Cases & Applications

WiFi sensing works anywhere WiFi exists. No new hardware in most cases — just software on existing access points or an $8 ESP32 add-on. Because there are no cameras, deployments avoid privacy regulations (GDPR video, HIPAA imaging) by design.

Scaling: Each AP distinguishes ~3-5 people (56 subcarriers). Multi-AP multiplies linearly — a 4-AP retail mesh covers ~15-20 occupants. No hard software limit; the practical ceiling is signal physics.

| Why WiFi sensing wins | Traditional alternative |
|---|---|
| 🔒 No video, no GDPR/HIPAA imaging rules | Cameras require consent, signage, data retention policies |
| 🧱 Works through walls, shelving, debris | Cameras need line-of-sight per room |
| 🌙 Works in total darkness | Cameras need IR or visible light |
| 💰 $0-$8 per zone (existing WiFi or ESP32) | Camera systems: $200-$2,000 per zone |
| 🔌 WiFi already deployed everywhere | PIR/radar sensors require new wiring per room |
<details> <summary><strong>🏥 Everyday</strong> — Healthcare, retail, office, hospitality (commodity WiFi)</summary>

| Use Case | What It Does | Hardware | Key Metric | Edge Module |
|---|---|---|---|---|
| Elderly care / assisted living | Fall detection, nighttime activity monitoring, breathing rate during sleep — no wearable compliance needed | 1 ESP32-S3 per room ($8) | Fall alert <2s | Sleep Apnea, Gait Analysis |
| Hospital patient monitoring | Continuous breathing + heart rate for non-critical beds without wired sensors; nurse alert on anomaly | 1-2 APs per ward | Breathing: 6-30 BPM | Respiratory Distress, Cardiac Arrhythmia |
| Emergency room triage | Automated occupancy count + wait-time estimation; detect patient distress (abnormal breathing) in waiting areas | Existing hospital WiFi | Occupancy accuracy >95% | Queue Length, Panic Motion |
| Retail occupancy & flow | Real-time foot traffic, dwell time by zone, queue length — no cameras, no opt-in, GDPR-friendly | Existing store WiFi + 1 ESP32 | Dwell resolution ~1m | Customer Flow, Dwell Heatmap |
| Office space utilization | Which desks/rooms are actually occupied, meeting room no-shows, HVAC optimization based on real presence | Existing enterprise WiFi | Presence latency <1s | Meeting Room, HVAC Presence |
| Hotel & hospitality | Room occupancy without door sensors, minibar/bathroom usage patterns, energy savings on empty rooms | Existing hotel WiFi | 15-30% HVAC savings | Energy Audit, Lighting Zones |
| Restaurants & food service | Table turnover tracking, kitchen staff presence, restroom occupancy displays — no cameras in dining areas | Existing WiFi | Queue wait ±30s | Table Turnover, Queue Length |
| Parking garages | Pedestrian presence in stairwells and elevators where cameras have blind spots; security alert if someone lingers | Existing WiFi | Through-concrete walls | Loitering, Elevator Count |
</details> <details> <summary><strong>🏟️ Specialized</strong> — Events, fitness, education, civic (CSI-capable hardware)</summary>

| Use Case | What It Does | Hardware | Key Metric | Edge Module |
|---|---|---|---|---|
| Smart home automation | Room-level presence triggers (lights, HVAC, music) that work through walls — no dead zones, no motion-sensor timeouts | 2-3 ESP32-S3 nodes ($24) | Through-wall range ~5m | HVAC Presence, Lighting Zones |
| Fitness & sports | Rep counting, posture correction, breathing cadence during exercise — no wearable, no camera in locker rooms | 3+ ESP32-S3 mesh | Pose: 17 keypoints | Breathing Sync, Gait Analysis |
| Childcare & schools | Naptime breathing monitoring, playground headcount, restricted-area alerts — privacy-safe for minors | 2-4 ESP32-S3 per zone | Breathing: ±1 BPM | Sleep Apnea, Perimeter Breach |
| Event venues & concerts | Crowd density mapping, crush-risk detection via breathing compression, emergency evacuation flow tracking | Multi-AP mesh (4-8 APs) | Density per m² | Customer Flow, Panic Motion |
| Stadiums & arenas | Section-level occupancy for dynamic pricing, concession staffing, emergency egress flow modeling | Enterprise AP grid | 15-20 per AP mesh | Dwell Heatmap, Queue Length |
| Houses of worship | Attendance counting without facial recognition — privacy-sensitive congregations, multi-room campus tracking | Existing WiFi | Zone-level accuracy | Elevator Count, Energy Audit |
| Warehouse & logistics | Worker safety zones, forklift proximity alerts, occupancy in hazardous areas — works through shelving and pallets | Industrial AP mesh | Alert latency <500ms | Forklift Proximity, Confined Space |
| Civic infrastructure | Public restroom occupancy (no cameras possible), subway platform crowding, shelter headcount during emergencies | Municipal WiFi + ESP32 | Real-time headcount | Customer Flow, Loitering |
| Museums & galleries | Visitor flow heatmaps, exhibit dwell time, crowd bottleneck alerts — no cameras near artwork (flash/theft risk) | Existing WiFi | Zone dwell ±5s | Dwell Heatmap, Shelf Engagement |
</details> <details> <summary><strong>🤖 Robotics & Industrial</strong> — Autonomous systems, manufacturing, android spatial awareness</summary>

WiFi sensing gives robots and autonomous systems a spatial awareness layer that works where LIDAR and cameras fail — through dust, smoke, fog, and around corners. The CSI signal field acts as a "sixth sense" for detecting humans in the environment without requiring line-of-sight.

| Use Case | What It Does | Hardware | Key Metric | Edge Module |
|---|---|---|---|---|
| Cobot safety zones | Detect human presence near collaborative robots — auto-slow or stop before contact, even behind obstructions | 2-3 ESP32-S3 per cell | Presence latency <100ms | Forklift Proximity, Perimeter Breach |
| Warehouse AMR navigation | Autonomous mobile robots sense humans around blind corners, through shelving racks — no LIDAR occlusion | ESP32 mesh along aisles | Through-shelf detection | Forklift Proximity, Loitering |
| Android / humanoid spatial awareness | Ambient human pose sensing for social robots — detect gestures, approach direction, and personal space without cameras always on | Onboard ESP32-S3 module | 17-keypoint pose | Gesture Language, Emotion Detection |
| Manufacturing line monitoring | Worker presence at each station, ergonomic posture alerts, headcount for shift compliance — works through equipment | Industrial AP per zone | Pose + breathing | Confined Space, Gait Analysis |
| Construction site safety | Exclusion zone enforcement around heavy machinery, fall detection from scaffolding, personnel headcount | Ruggedized ESP32 mesh | Alert <2s, through-dust | Panic Motion, Structural Vibration |
| Agricultural robotics | Detect farm workers near autonomous harvesters in dusty/foggy field conditions where cameras are unreliable | Weatherproof ESP32 nodes | Range ~10m open field | Forklift Proximity, Rain Detection |
| Drone landing zones | Verify landing area is clear of humans — WiFi sensing works in rain, dust, and low light where downward cameras fail | Ground ESP32 nodes | Presence: >95% accuracy | Perimeter Breach, Tailgating |
| Clean room monitoring | Personnel tracking without cameras (particle contamination risk from camera fans) — gown compliance via pose | Existing cleanroom WiFi | No particulate emission | Clean Room, Livestock Monitor |
</details> <details> <summary><strong>🔥 Extreme</strong> — Through-wall, disaster, defense, underground</summary>

These scenarios exploit WiFi's ability to penetrate solid materials — concrete, rubble, earth — where no optical or infrared sensor can reach. The WiFi-Mat disaster module (ADR-001) is specifically designed for this tier.

| Use Case | What It Does | Hardware | Key Metric | Edge Module |
|---|---|---|---|---|
| Search & rescue (WiFi-Mat) | Detect survivors through rubble/debris via breathing signature, START triage color classification, 3D localization | Portable ESP32 mesh + laptop | Through 30cm concrete | Respiratory Distress, Seizure Detection |
| Firefighting | Locate occupants through smoke and walls before entry; breathing detection confirms life signs remotely | Portable mesh on truck | Works in zero visibility | Sleep Apnea, Panic Motion |
| Prison & secure facilities | Cell occupancy verification, distress detection (abnormal vitals), perimeter sensing — no camera blind spots | Dedicated AP infrastructure | 24/7 vital signs | Cardiac Arrhythmia, Loitering |
| Military / tactical | Through-wall personnel detection, room clearing confirmation, hostage vital signs at standoff distance | Directional WiFi + custom FW | Range: 5m through wall | Perimeter Breach, Weapon Detection |
| Border & perimeter security | Detect human presence in tunnels, behind fences, in vehicles — passive sensing, no active illumination to reveal position | Concealed ESP32 mesh | Passive / covert | Perimeter Breach, Tailgating |
| Mining & underground | Worker presence in tunnels where GPS/cameras fail, breathing detection after collapse, headcount at safety points | Ruggedized ESP32 mesh | Through rock/earth | Confined Space, Respiratory Distress |
| Maritime & naval | Below-deck personnel tracking through steel bulkheads (limited range, requires tuning), man-overboard detection | Ship WiFi + ESP32 | Through 1-2 bulkheads | Structural Vibration, Panic Motion |
| Wildlife research | Non-invasive animal activity monitoring in enclosures or dens — no light pollution, no visual disturbance | Weatherproof ESP32 nodes | Zero light emission | Livestock Monitor, Dream Stage |
</details>

Edge Intelligence (ADR-041)

Small programs that run directly on the ESP32 sensor — no internet needed, no cloud fees, instant response. Each module is a tiny WASM file (5-30 KB) that you upload to the device over-the-air. It reads WiFi signal data and makes decisions locally in under 10 ms. ADR-041 defines 60 modules across 13 categories — all 60 are implemented with 609 tests passing.

| Category | Examples |
|---|---|
| 🏥 Medical & Health | Sleep apnea detection, cardiac arrhythmia, gait analysis, seizure detection |
| 🔐 Security & Safety | Intrusion detection, perimeter breach, loitering, panic motion |
| 🏢 Smart Building | Zone occupancy, HVAC control, elevator counting, meeting room tracking |
| 🛒 Retail & Hospitality | Queue length, dwell heatmaps, customer flow, table turnover |
| 🏭 Industrial | Forklift proximity, confined space monitoring, structural vibration |
| 🔮 Exotic & Research | Sleep staging, emotion detection, sign language, breathing sync |
| 📡 Signal Intelligence | Cleans and sharpens raw WiFi signals — focuses on important regions, filters noise, fills in missing data, and tracks which person is which |
| 🧠 Adaptive Learning | The sensor learns new gestures and patterns on its own over time — no cloud needed, remembers what it learned even after updates |
| 🗺️ Spatial Reasoning | Figures out where people are in a room, which zones matter most, and tracks movement across areas using graph-based spatial logic |
| ⏱️ Temporal Analysis | Learns daily routines, detects when patterns break (someone didn't get up), and verifies safety rules are being followed over time |
| 🛡️ AI Security | Detects signal replay attacks, WiFi jamming, injection attempts, and flags abnormal behavior that could indicate tampering |
| ⚛️ Quantum-Inspired | Uses quantum-inspired math to map room-wide signal coherence and search for optimal sensor configurations |
| 🤖 Autonomous & Exotic | Self-managing sensor mesh — auto-heals dropped nodes, plans its own actions, and explores experimental signal representations |

All implemented modules are no_std Rust, share a common utility library, and talk to the host through a 12-function API. Full documentation: Edge Modules Guide. See the complete implemented module list below.

<details id="edge-module-list"> <summary><strong>🧩 Edge Intelligence — <a href="docs/edge-modules/README.md">All 60 Modules Implemented</a></strong> (ADR-041 complete)</summary>

All 60 modules are implemented, tested (609 tests passing), and ready to deploy. They compile to wasm32-unknown-unknown, run on ESP32-S3 via WASM3, and share a common utility library. Source: crates/wifi-densepose-wasm-edge/src/

Core modules (ADR-040 flagship + early implementations):

| Module | File | What It Does |
|---|---|---|
| Gesture Classifier | `gesture.rs` | DTW template matching for hand gestures |
| Coherence Filter | `coherence.rs` | Phase coherence gating for signal quality |
| Adversarial Detector | `adversarial.rs` | Detects physically impossible signal patterns |
| Intrusion Detector | `intrusion.rs` | Human vs non-human motion classification |
| Occupancy Counter | `occupancy.rs` | Zone-level person counting |
| Vital Trend | `vital_trend.rs` | Long-term breathing and heart rate trending |
| RVF Parser | `rvf.rs` | RVF container format parsing |

Vendor-integrated modules (24 modules, ADR-041 Category 7):

📡 Signal Intelligence — Real-time CSI analysis and feature extraction

| Module | File | What It Does | Budget |
|---|---|---|---|
| Flash Attention | `sig_flash_attention.rs` | Tiled attention over 8 subcarrier groups — finds spatial focus regions and entropy | S (<5ms) |
| Coherence Gate | `sig_coherence_gate.rs` | Z-score phasor gating with hysteresis: Accept / PredictOnly / Reject / Recalibrate (see the sketch below) | L (<2ms) |
| Temporal Compress | `sig_temporal_compress.rs` | 3-tier adaptive quantization (8-bit hot / 5-bit warm / 3-bit cold) | L (<2ms) |
| Sparse Recovery | `sig_sparse_recovery.rs` | ISTA L1 reconstruction for dropped subcarriers | H (<10ms) |
| Person Match | `sig_mincut_person_match.rs` | Hungarian-lite bipartite assignment for multi-person tracking | S (<5ms) |
| Optimal Transport | `sig_optimal_transport.rs` | Sliced Wasserstein-1 distance with 4 projections | L (<2ms) |
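
To make the Coherence Gate row concrete, here is an illustrative z-score gate with hysteresis. The thresholds, running statistics, and recalibration rule are assumptions mirroring the table row, not the module's actual code:

```python
# Hedged sketch of z-score gating with hysteresis (states from the table row).
import numpy as np

class CoherenceGate:
    def __init__(self, z_reject=3.0, z_accept=2.0, recal_after=100):
        self.z_reject, self.z_accept = z_reject, z_accept  # hysteresis pair
        self.recal_after = recal_after
        self.mu, self.var, self.n = 0.0, 1.0, 0
        self.accepting, self.reject_run = True, 0

    def classify(self, coherence):
        # Running mean/variance of the phase-coherence metric.
        self.n += 1
        delta = coherence - self.mu
        self.mu += delta / self.n
        self.var += (delta * (coherence - self.mu) - self.var) / self.n
        z = abs(coherence - self.mu) / max(np.sqrt(self.var), 1e-9)
        # Hysteresis: once rejecting, require a lower z before re-accepting.
        if self.accepting and z > self.z_reject:
            self.accepting = False
        elif not self.accepting and z < self.z_accept:
            self.accepting = True
        self.reject_run = 0 if self.accepting else self.reject_run + 1
        if self.reject_run >= self.recal_after:
            return "Recalibrate"  # baseline no longer describes the room
        if self.accepting:
            return "Accept"
        return "PredictOnly" if z < 2 * self.z_reject else "Reject"
```
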

🧠 Adaptive Learning — On-device learning without cloud connectivity

| Module | File | What It Does | Budget |
|---|---|---|---|
| DTW Gesture Learn | `lrn_dtw_gesture_learn.rs` | User-teachable gesture recognition — 3-rehearsal protocol, 16 templates | S (<5ms) |
| Anomaly Attractor | `lrn_anomaly_attractor.rs` | 4D dynamical system attractor classification with Lyapunov exponents | H (<10ms) |
| Meta Adapt | `lrn_meta_adapt.rs` | Hill-climbing self-optimization with safety rollback | L (<2ms) |
| EWC Lifelong | `lrn_ewc_lifelong.rs` | Elastic Weight Consolidation — remembers past tasks while learning new ones | S (<5ms) |

🗺️ Spatial Reasoning — Location, proximity, and influence mapping

| Module | File | What It Does | Budget |
|---|---|---|---|
| PageRank Influence | `spt_pagerank_influence.rs` | 4x4 cross-correlation graph with power iteration PageRank | L (<2ms) |
| Micro HNSW | `spt_micro_hnsw.rs` | 64-vector navigable small-world graph for nearest-neighbor search | S (<5ms) |
| Spiking Tracker | `spt_spiking_tracker.rs` | 32 LIF neurons + 4 output zone neurons with STDP learning | S (<5ms) |

⏱️ Temporal Analysis — Activity patterns, logic verification, autonomous planning

| Module | File | What It Does | Budget |
|---|---|---|---|
| Pattern Sequence | `tmp_pattern_sequence.rs` | Activity routine detection and deviation alerts | S (<5ms) |
| Temporal Logic Guard | `tmp_temporal_logic_guard.rs` | LTL formula verification on CSI event streams | S (<5ms) |
| GOAP Autonomy | `tmp_goap_autonomy.rs` | Goal-Oriented Action Planning for autonomous module management | S (<5ms) |

🛡️ AI Security — Tamper detection and behavioral anomaly profiling

| Module | File | What It Does | Budget |
|---|---|---|---|
| Prompt Shield | `ais_prompt_shield.rs` | FNV-1a replay detection, injection detection (10x amplitude), jamming (SNR) | L (<2ms) |
| Behavioral Profiler | `ais_behavioral_profiler.rs` | 6D behavioral profile with Mahalanobis anomaly scoring | S (<5ms) |

⚛️ Quantum-Inspired — Quantum computing metaphors applied to CSI analysis

| Module | File | What It Does | Budget |
|---|---|---|---|
| Quantum Coherence | `qnt_quantum_coherence.rs` | Bloch sphere mapping, Von Neumann entropy, decoherence detection | S (<5ms) |
| Interference Search | `qnt_interference_search.rs` | 16 room-state hypotheses with Grover-inspired oracle + diffusion | S (<5ms) |

🤖 Autonomous Systems — Self-governing and self-healing behaviors

| Module | File | What It Does | Budget |
|---|---|---|---|
| Psycho-Symbolic | `aut_psycho_symbolic.rs` | 16-rule forward-chaining knowledge base with contradiction detection | S (<5ms) |
| Self-Healing Mesh | `aut_self_healing_mesh.rs` | 8-node mesh with health tracking, degradation/recovery, coverage healing | S (<5ms) |

🔮 Exotic (Vendor) — Novel mathematical models for CSI interpretation

| Module | File | What It Does | Budget |
|---|---|---|---|
| Time Crystal | `exo_time_crystal.rs` | Autocorrelation subharmonic detection in 256-frame history | S (<5ms) |
| Hyperbolic Space | `exo_hyperbolic_space.rs` | Poincaré ball embedding with 32 reference locations, hyperbolic distance | S (<5ms) |

🏥 Medical & Health (Category 1) — Contactless health monitoring

| Module | File | What It Does | Budget |
|---|---|---|---|
| Sleep Apnea | `med_sleep_apnea.rs` | Detects breathing pauses during sleep | S (<5ms) |
| Cardiac Arrhythmia | `med_cardiac_arrhythmia.rs` | Monitors heart rate for irregular rhythms | S (<5ms) |
| Respiratory Distress | `med_respiratory_distress.rs` | Alerts on abnormal breathing patterns | S (<5ms) |
| Gait Analysis | `med_gait_analysis.rs` | Tracks walking patterns and detects changes | S (<5ms) |
| Seizure Detection | `med_seizure_detect.rs` | 6-state machine for tonic-clonic seizure recognition | S (<5ms) |

🔐 Security & Safety (Category 2) — Perimeter and threat detection

| Module | File | What It Does | Budget |
|---|---|---|---|
| Perimeter Breach | `sec_perimeter_breach.rs` | Detects boundary crossings with approach/departure | S (<5ms) |
| Weapon Detection | `sec_weapon_detect.rs` | Metal anomaly detection via CSI amplitude shifts | S (<5ms) |
| Tailgating | `sec_tailgating.rs` | Detects unauthorized follow-through at access points | S (<5ms) |
| Loitering | `sec_loitering.rs` | Alerts when someone lingers too long in a zone | S (<5ms) |
| Panic Motion | `sec_panic_motion.rs` | Detects fleeing, struggling, or panic movement | S (<5ms) |

🏢 Smart Building (Category 3) — Automation and energy efficiency

| Module | File | What It Does | Budget |
|---|---|---|---|
| HVAC Presence | `bld_hvac_presence.rs` | Occupancy-driven HVAC control with departure countdown | S (<5ms) |
| Lighting Zones | `bld_lighting_zones.rs` | Auto-dim/off lighting based on zone activity | S (<5ms) |
| Elevator Count | `bld_elevator_count.rs` | Counts people entering/leaving with overload warning | S (<5ms) |
| Meeting Room | `bld_meeting_room.rs` | Tracks meeting lifecycle: start, headcount, end, availability | S (<5ms) |
| Energy Audit | `bld_energy_audit.rs` | Tracks after-hours usage and room utilization rates | S (<5ms) |

🛒 Retail & Hospitality (Category 4) — Customer insights without cameras

| Module | File | What It Does | Budget |
|---|---|---|---|
| Queue Length | `ret_queue_length.rs` | Estimates queue size and wait times | S (<5ms) |
| Dwell Heatmap | `ret_dwell_heatmap.rs` | Shows where people spend time (hot/cold zones) | S (<5ms) |
| Customer Flow | `ret_customer_flow.rs` | Counts ins/outs and tracks net occupancy | S (<5ms) |
| Table Turnover | `ret_table_turnover.rs` | Restaurant table lifecycle: seated, dining, vacated | S (<5ms) |
| Shelf Engagement | `ret_shelf_engagement.rs` | Detects browsing, considering, and reaching for products | S (<5ms) |

🏭 Industrial & Specialized (Category 5) — Safety and compliance

| Module | File | What It Does | Budget |
|---|---|---|---|
| Forklift Proximity | `ind_forklift_proximity.rs` | Warns when people get too close to vehicles | S (<5ms) |
| Confined Space | `ind_confined_space.rs` | OSHA-compliant worker monitoring with extraction alerts | S (<5ms) |
| Clean Room | `ind_clean_room.rs` | Occupancy limits and turbulent motion detection | S (<5ms) |
| Livestock Monitor | `ind_livestock_monitor.rs` | Animal presence, stillness, and escape alerts | S (<5ms) |
| Structural Vibration | `ind_structural_vibration.rs` | Seismic events, mechanical resonance, structural drift | S (<5ms) |

🔮 Exotic & Research (Category 6) — Experimental sensing applications

| Module | File | What It Does | Budget |
|---|---|---|---|
| Dream Stage | `exo_dream_stage.rs` | Contactless sleep stage classification (wake/light/deep/REM) | S (<5ms) |
| Emotion Detection | `exo_emotion_detect.rs` | Arousal, stress, and calm detection from micro-movements | S (<5ms) |
| Gesture Language | `exo_gesture_language.rs` | Sign language letter recognition via WiFi | S (<5ms) |
| Music Conductor | `exo_music_conductor.rs` | Tempo and dynamic tracking from conducting gestures | S (<5ms) |
| Plant Growth | `exo_plant_growth.rs` | Monitors plant growth, circadian rhythms, wilt detection | S (<5ms) |
| Ghost Hunter | `exo_ghost_hunter.rs` | Environmental anomaly classification (draft/insect/wind/unknown) | S (<5ms) |
| Rain Detection | `exo_rain_detect.rs` | Detects rain onset, intensity, and cessation via signal scatter | S (<5ms) |
| Breathing Sync | `exo_breathing_sync.rs` | Detects synchronized breathing between multiple people | S (<5ms) |
</details>
<details> <summary><strong>🧠 Self-Learning WiFi AI (ADR-024)</strong> — Adaptive recognition, self-optimization, and intelligent anomaly detection</summary>

Every WiFi signal that passes through a room creates a unique fingerprint of that space. WiFi-DensePose already reads these fingerprints to track people, but until now it threw away the internal "understanding" after each reading. The Self-Learning WiFi AI captures and preserves that understanding as compact, reusable vectors — and continuously optimizes itself for each new environment.

What it does in plain terms:

  • Turns any WiFi signal into a 128-number "fingerprint" that uniquely describes what's happening in a room
  • Learns entirely on its own from raw WiFi data — no cameras, no labeling, no human supervision needed
  • Recognizes rooms, detects intruders, identifies people, and classifies activities using only WiFi
  • Runs on an $8 ESP32 chip (the entire model fits in 55 KB of memory)
  • Produces both body pose tracking AND environment fingerprints in a single computation

Key Capabilities

| What | How it works | Why it matters |
|---|---|---|
| Self-supervised learning | The model watches WiFi signals and teaches itself what "similar" and "different" look like, without any human-labeled data (see the sketch below) | Deploy anywhere — just plug in a WiFi sensor and wait 10 minutes |
| Room identification | Each room produces a distinct WiFi fingerprint pattern | Know which room someone is in without GPS or beacons |
| Anomaly detection | An unexpected person or event creates a fingerprint that doesn't match anything seen before | Automatic intrusion and fall detection as a free byproduct |
| Person re-identification | Each person disturbs WiFi in a slightly different way, creating a personal signature | Track individuals across sessions without cameras |
| Environment adaptation | MicroLoRA adapters (1,792 parameters per room) fine-tune the model for each new space | Adapts to a new room with minimal data — 93% less than retraining from scratch |
| Memory preservation | EWC++ regularization remembers what was learned during pretraining | Switching to a new task doesn't erase prior knowledge |
| Hard-negative mining | Training focuses on the most confusing examples to learn faster | Better accuracy with the same amount of training data |
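
The "similar vs different" objective is contrastive: two views of the same CSI window should embed close together, views of different windows far apart. A numpy sketch of an InfoNCE-style loss — the names and temperature are illustrative, not ADR-024's exact formulation:

```python
# Hedged InfoNCE-style contrastive loss sketch (paired views on the diagonal).
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """z_a, z_b: (batch, 128) embeddings of two augmented views, row-aligned."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature            # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z_a))
    return -log_prob[idx, idx].mean()             # positives on the diagonal
```
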

Architecture

```
WiFi Signal [56 channels] → Transformer + Graph Neural Network
                                  ├→ 128-dim environment fingerprint (for search + identification)
                                  └→ 17-joint body pose (for human tracking)
```

Quick Start

```bash
# Step 1: Learn from raw WiFi data (no labels needed)
cargo run -p wifi-densepose-sensing-server -- --pretrain --dataset data/csi/ --pretrain-epochs 50

# Step 2: Fine-tune with pose labels for full capability
cargo run -p wifi-densepose-sensing-server -- --train --dataset data/mmfi/ --epochs 100 --save-rvf model.rvf

# Step 3: Use the model — extract fingerprints from live WiFi
cargo run -p wifi-densepose-sensing-server -- --model model.rvf --embed

# Step 4: Search — find similar environments or detect anomalies
cargo run -p wifi-densepose-sensing-server -- --model model.rvf --build-index env
```

Training Modes

| Mode | What you need | What you get |
|---|---|---|
| Self-Supervised | Just raw WiFi data | A model that understands WiFi signal structure |
| Supervised | WiFi data + body pose labels | Full pose tracking + environment fingerprints |
| Cross-Modal | WiFi data + camera footage | Fingerprints aligned with visual understanding |

Fingerprint Index Types

| Index | What it stores | Real-world use |
|---|---|---|
| `env_fingerprint` | Average room fingerprint | "Is this the kitchen or the bedroom?" |
| `activity_pattern` | Activity boundaries | "Is someone cooking, sleeping, or exercising?" |
| `temporal_baseline` | Normal conditions | "Something unusual just happened in this room" |
| `person_track` | Individual movement signatures | "Person A just entered the living room" |

Model Size

| Component | Parameters | Memory (on ESP32) |
|---|---|---|
| Transformer backbone | ~28,000 | 28 KB |
| Embedding projection head | ~25,000 | 25 KB |
| Per-room MicroLoRA adapter | ~1,800 | 2 KB |
| Total | ~55,000 | 55 KB (of 520 KB available) |

The self-learning system builds on the AI Backbone (RuVector) signal-processing layer — attention, graph algorithms, and compression — adding contrastive learning on top.

See docs/adr/ADR-024-contrastive-csi-embedding-model.md for full architectural details.

</details>

📦 Installation

<details> <summary><strong>Guided Installer</strong> — Interactive hardware detection and profile selection</summary>
```bash
./install.sh
```

The installer walks through 7 steps: system detection, toolchain check, WiFi hardware scan, profile recommendation, dependency install, build, and verification.

| Profile | What it installs | Size | Requirements |
|---|---|---|---|
| verify | Pipeline verification only | ~5 MB | Python 3.8+ |
| python | Full Python API server + sensing | ~500 MB | Python 3.8+ |
| rust | Rust pipeline (~810x faster) | ~200 MB | Rust 1.70+ |
| browser | WASM for in-browser execution | ~10 MB | Rust + wasm-pack |
| iot | ESP32 sensor mesh + aggregator | varies | Rust + ESP-IDF |
| docker | Docker-based deployment | ~1 GB | Docker |
| field | WiFi-Mat disaster response kit | ~62 MB | Rust + wasm-pack |
| full | Everything available | ~2 GB | All toolchains |
```bash
# Non-interactive
./install.sh --profile rust --yes

# Hardware check only
./install.sh --check-only
```
</details> <details> <summary><strong>From Source</strong> — Rust (primary) or Python</summary>
```bash
git clone https://github.com/ruvnet/RuView.git
cd RuView

# Rust (primary — 810x faster)
cd rust-port/wifi-densepose-rs
cargo build --release
cargo test --workspace

# Python (legacy v1)
pip install -r requirements.txt
pip install -e .

# Or via pip
pip install wifi-densepose
pip install wifi-densepose[gpu]   # GPU acceleration
pip install wifi-densepose[all]   # All optional deps
```
</details> <details> <summary><strong>Docker</strong> — Pre-built images, no toolchain needed</summary>
```bash
# Rust sensing server (132 MB — recommended)
docker pull ruvnet/wifi-densepose:latest
docker run -p 3000:3000 -p 3001:3001 -p 5005:5005/udp ruvnet/wifi-densepose:latest

# Python sensing pipeline (569 MB)
docker pull ruvnet/wifi-densepose:python
docker run -p 8765:8765 -p 8080:8080 ruvnet/wifi-densepose:python

# Both via docker-compose
cd docker && docker compose up

# Export RVF model
docker run --rm -v $(pwd):/out ruvnet/wifi-densepose:latest --export-rvf /out/model.rvf
```
| Image | Tag | Platforms | Ports |
|---|---|---|---|
| `ruvnet/wifi-densepose` | latest, rust | linux/amd64, linux/arm64 | 3000 (REST), 3001 (WS), 5005/udp (ESP32) |
| `ruvnet/wifi-densepose` | python | linux/amd64 | 8765 (WS), 8080 (UI) |
</details> <details> <summary><strong>System Requirements</strong></summary>
  • Rust: 1.70+ (primary runtime — install via rustup)
  • Python: 3.8+ (for verification and legacy v1 API)
  • OS: Linux (Ubuntu 18.04+), macOS (10.15+), Windows 10+
  • Memory: Minimum 4GB RAM, Recommended 8GB+
  • Storage: 2GB free space for models and data
  • Network: WiFi interface with CSI capability (optional — installer detects what you have)
  • GPU: Optional (NVIDIA CUDA or Apple Metal)
</details> <details> <summary><strong>Rust Crates</strong> — Individual crates on crates.io</summary>

The Rust workspace consists of 15 crates, all published to crates.io:

```bash
# Add individual crates to your Cargo.toml
cargo add wifi-densepose-core       # Types, traits, errors
cargo add wifi-densepose-signal     # CSI signal processing (6 SOTA algorithms)
cargo add wifi-densepose-nn         # Neural inference (ONNX, PyTorch, Candle)
cargo add wifi-densepose-vitals     # Vital sign extraction (breathing + heart rate)
cargo add wifi-densepose-mat        # Disaster response (MAT survivor detection)
cargo add wifi-densepose-hardware   # ESP32, Intel 5300, Atheros sensors
cargo add wifi-densepose-train      # Training pipeline (MM-Fi dataset)
cargo add wifi-densepose-wifiscan   # Multi-BSSID WiFi scanning
cargo add wifi-densepose-ruvector   # RuVector v2.0.4 integration layer (ADR-017)
```
| Crate | Description | RuVector | crates.io |
|---|---|---|---|
| `wifi-densepose-core` | Foundation types, traits, and utilities | - | - |
| `wifi-densepose-signal` | SOTA CSI signal processing (SpotFi, FarSense, Widar 3.0) | mincut, attn-mincut, attention, solver | |
| `wifi-densepose-nn` | Multi-backend inference (ONNX, PyTorch, Candle) | - | - |
| `wifi-densepose-train` | Training pipeline with MM-Fi dataset (NeurIPS 2023) | All 5 | |
| `wifi-densepose-mat` | Mass Casualty Assessment Tool (disaster survivor detection) | solver, temporal-tensor | |
| `wifi-densepose-ruvector` | RuVector v2.0.4 integration layer — 7 signal+MAT integration points (ADR-017) | All 5 | |
| `wifi-densepose-vitals` | Vital signs: breathing (6-30 BPM), heart rate (40-120 BPM) | - | - |
| `wifi-densepose-hardware` | ESP32, Intel 5300, Atheros CSI sensor interfaces | - | - |
| `wifi-densepose-wifiscan` | Multi-BSSID WiFi scanning (Windows, macOS, Linux) | - | - |
| `wifi-densepose-wasm` | WebAssembly bindings for browser deployment | - | - |
| `wifi-densepose-sensing-server` | Axum server: UDP ingestion, WebSocket broadcast | - | - |
| `wifi-densepose-cli` | Command-line tool for MAT disaster scanning | - | - |
| `wifi-densepose-api` | REST + WebSocket API layer | - | - |
| `wifi-densepose-config` | Configuration management | - | - |
| `wifi-densepose-db` | Database persistence (PostgreSQL, SQLite, Redis) | - | - |

All crates integrate with RuVector v2.0.4 — see AI Backbone below.

rUv Neural — A separate 12-crate workspace for brain network topology analysis, neural decoding, and medical sensing. See rUv Neural in Models & Training.

</details>

🚀 Quick Start

<details open> <summary><strong>First API call in 3 commands</strong></summary>

1. Install

```bash
# Fastest path — Docker
docker pull ruvnet/wifi-densepose:latest
docker run -p 3000:3000 ruvnet/wifi-densepose:latest

# Or from source (Rust)
./install.sh --profile rust --yes
```

2. Start the System

```python
from wifi_densepose import WiFiDensePose

system = WiFiDensePose()
system.start()
poses = system.get_latest_poses()
print(f"Detected {len(poses)} persons")
system.stop()
```

3. REST API

```bash
# Health check
curl http://localhost:3000/health

# Latest sensing frame
curl http://localhost:3000/api/v1/sensing/latest

# Vital signs
curl http://localhost:3000/api/v1/vital-signs

# Pose estimation
curl http://localhost:3000/api/v1/pose/current

# Server info
curl http://localhost:3000/api/v1/info
```

4. Real-time WebSocket

```python
import asyncio, websockets, json

async def stream():
    async with websockets.connect("ws://localhost:3001/ws/sensing") as ws:
        async for msg in ws:
            data = json.loads(msg)
            print(f"Persons: {len(data.get('persons', []))}")

asyncio.run(stream())
```
</details>

📋 Table of Contents

<details open> <summary><strong>📡 Signal Processing & Sensing</strong> — From raw WiFi frames to vital signs</summary>

The signal processing stack transforms raw WiFi Channel State Information into actionable human sensing data. Starting from 56-192 subcarrier complex values captured at 20 Hz, the pipeline applies research-grade algorithms (SpotFi phase correction, Hampel outlier rejection, Fresnel zone modeling) to extract breathing rate, heart rate, motion level, and multi-person body pose — all in pure Rust with zero external ML dependencies.

| Section | Description | Docs |
|---|---|---|
| Key Features | Sensing, Intelligence, and Performance & Deployment capabilities | |
| How It Works | End-to-end pipeline: radio waves → CSI capture → signal processing → AI → pose + vitals | |
| ESP32-S3 Hardware Pipeline | 20 Hz CSI streaming, binary frame parsing, flash & provision | ADR-018 · Tutorial #34 |
| Vital Sign Detection | Breathing 6-30 BPM, heartbeat 40-120 BPM, FFT peak detection | ADR-021 |
| WiFi Scan Domain Layer | 8-stage RSSI pipeline, multi-BSSID fingerprinting, Windows WiFi | ADR-022 · Tutorial #36 |
| WiFi-Mat Disaster Response | Search & rescue, START triage, 3D localization through debris | ADR-001 · User Guide |
| SOTA Signal Processing | SpotFi, Hampel, Fresnel, STFT spectrogram, subcarrier selection, BVP | ADR-014 |
</details> <details> <summary><strong>🧠 Models & Training</strong> — DensePose pipeline, RVF containers, SONA adaptation, RuVector integration</summary>

The neural pipeline uses a graph transformer with cross-attention to map CSI feature matrices to 17 COCO body keypoints and DensePose UV coordinates. Models are packaged as single-file .rvf containers with progressive loading (Layer A instant, Layer B warm, Layer C full). SONA (Self-Optimizing Neural Architecture) enables continuous on-device adaptation via micro-LoRA + EWC++ without catastrophic forgetting. Signal processing is powered by 5 RuVector crates (v2.0.4) with 7 integration points across the Rust workspace, plus 6 additional vendored crates for inference and graph intelligence.

SectionDescriptionDocs
RVF Model ContainerBinary packaging with Ed25519 signing, progressive 3-layer loading, SIMD quantizationADR-023
Training & Fine-Tuning8-phase pure Rust pipeline (7,832 lines), MM-Fi/Wi-Pose pre-training, 6-term composite loss, SONA LoRAADR-023
RuVector Crates11 vendored Rust crates from ruvector: attention, min-cut, solver, GNN, HNSW, temporal compression, sparse inferenceGitHub · Source
rUv Neural12-crate brain topology analysis ecosystem: neural decoding, quantum sensor integration, cognitive state classification, BCI outputREADME
AI Backbone (RuVector)5 AI capabilities replacing hand-tuned thresholds: attention, graph min-cut, sparse solvers, tiered compressioncrates.io
Self-Learning WiFi AI (ADR-024)Contrastive self-supervised learning, room fingerprinting, anomaly detection, 55 KB modelADR-024
Cross-Environment Generalization (ADR-027)Domain-adversarial training, geometry-conditioned inference, hardware normalization, zero-shot deploymentADR-027
</details> <details> <summary><strong>🖥️ Usage & Configuration</strong> — CLI flags, API endpoints, hardware setup</summary>

The Rust sensing server is the primary interface, offering a comprehensive CLI with flags for data source selection, model loading, training, benchmarking, and RVF export. A REST API (Axum) and WebSocket server provide real-time data access. The Python v1 CLI remains available for legacy workflows.

| Section | Description | Docs |
|---|---|---|
| CLI Usage | `--source`, `--train`, `--benchmark`, `--export-rvf`, `--model`, `--progressive` | |
| REST API & WebSocket | 6 REST endpoints (sensing, vitals, BSSID, SONA), WebSocket real-time stream | |
| Hardware Support | ESP32-S3 ($8), Intel 5300 ($15), Atheros AR9580 ($20), Windows RSSI ($0) | ADR-012 · ADR-013 |
</details> <details> <summary><strong>⚙️ Development & Testing</strong> — 542+ tests, CI, deployment</summary>

The project maintains 542+ pure-Rust tests across 7 crate suites with zero mocks — every test runs against real algorithm implementations. Hardware-free simulation mode (`--source simulate`) enables full-stack testing without physical devices. Docker images are published on Docker Hub for zero-setup deployment.

| Section | Description | Docs |
|---|---|---|
| Testing | 7 test suites: sensing-server (229), signal (83), mat (139), wifiscan (91), RVF (16), vitals (18) | |
| Deployment | Docker images (132 MB Rust / 569 MB Python), docker-compose, env vars | |
| Contributing | Fork → branch → test → PR workflow, Rust and Python dev setup | |
</details> <details> <summary><strong>📊 Performance & Benchmarks</strong> — Measured throughput, latency, resource usage</summary>

All benchmarks are measured on the Rust sensing server using cargo bench and the built-in --benchmark CLI flag. The Rust v2 implementation delivers 810x end-to-end speedup over the Python v1 baseline, with motion detection reaching 5,400x improvement. The vital sign detector processes 11,665 frames/second in a single-threaded benchmark.

| Section | Description | Key Metric |
|---|---|---|
| Performance Metrics | Vital signs, CSI pipeline, motion detection, Docker image, memory | 11,665 fps vitals · 54K fps pipeline |
| Rust vs Python | Side-by-side benchmarks across 5 operations | 810x full pipeline speedup |
</details> <details> <summary><strong>📄 Meta</strong> — License, changelog, support</summary>

WiFi DensePose is MIT-licensed open source, developed by ruvnet. The project has been in active development since March 2025, with 3 major releases delivering the Rust port, SOTA signal processing, disaster response module, and end-to-end training pipeline.

| Section | Description | Link |
|---|---|---|
| Changelog | v3.0.0 (AETHER AI + Docker), v2.0.0 (Rust port + SOTA + WiFi-Mat) | CHANGELOG.md |
| License | MIT License | LICENSE |
| Support | Bug reports, feature requests, community discussion | Issues · Discussions |
</details>
<details> <summary><strong>🌍 Cross-Environment Generalization (ADR-027 — Project MERIDIAN)</strong> — Train once, deploy in any room without retraining</summary>
| What | How it works | Why it matters |
|---|---|---|
| Gradient Reversal Layer | An adversarial classifier tries to guess which room the signal came from; the main network is trained to fool it (see the sketch below) | Forces the model to discard room-specific shortcuts |
| Geometry Encoder (FiLM) | Transmitter/receiver positions are Fourier-encoded and injected as scale+shift conditioning on every layer | The model knows where the hardware is, so it doesn't need to memorize layout |
| Hardware Normalizer | Resamples any chipset's CSI to a canonical 56-subcarrier format with standardized amplitude | Intel 5300 and ESP32 data look identical to the model |
| Virtual Domain Augmentation | Generates synthetic environments with random room scale, wall reflections, scatterers, and noise profiles | Training sees 1000s of rooms even with data from just 2-3 |
| Rapid Adaptation (TTT) | Contrastive test-time training with LoRA weight generation from a few unlabeled frames | Zero-shot deployment — the model self-tunes on arrival |
| Cross-Domain Evaluator | Leave-one-out evaluation across all training environments with per-environment PCK/OKS metrics | Proves generalization, not just memorization |
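
The gradient reversal trick is small enough to sketch directly: identity on the forward pass, sign-flipped and scaled gradient on the backward pass. A conceptual sketch with a hand-written backward step — in a real framework this is a custom autograd op, and λ follows the schedule shown in the diagram below:

```python
# Hedged conceptual gradient-reversal layer (not ADR-027's actual code).
class GradientReversal:
    def __init__(self, lam=0.0):
        self.lam = lam  # λ ramps 0→1 via a cosine/exponential schedule

    def forward(self, features):
        return features  # identity: the domain classifier sees real features

    def backward(self, grad_from_domain_classifier):
        # Flip the sign so the encoder is trained to *confuse* the room
        # classifier — discarding room-specific shortcuts.
        return -self.lam * grad_from_domain_classifier
```
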

Architecture

```
CSI Frame [any chipset]
    │
    ▼
HardwareNormalizer ──→ canonical 56 subcarriers, N(0,1) amplitude
    │
    ▼
CSI Encoder (existing) ──→ latent features
    │
    ├──→ Pose Head ──→ 17-joint pose (environment-invariant)
    │
    ├──→ Gradient Reversal Layer ──→ Domain Classifier (adversarial)
    │         λ ramps 0→1 via cosine/exponential schedule
    │
    └──→ Geometry Encoder ──→ FiLM conditioning (scale + shift)
              Fourier positional encoding → DeepSets → per-layer modulation
```

Security hardening:

  • Bounded calibration buffer (max 10,000 frames) prevents memory exhaustion
  • adapt() returns Result<_, AdaptError> — no panics on bad input
  • Atomic instance counter ensures unique weight initialization across threads
  • Division-by-zero guards on all augmentation parameters

See docs/adr/ADR-027-cross-environment-domain-generalization.md for full architectural details.
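
To make the FiLM conditioning concrete, here is a minimal std-only Rust sketch of feature-wise scale-and-shift modulation. It illustrates the idea only; `film_modulate` and the toy values are hypothetical, not the ADR-027 API.

rust
// FiLM (feature-wise linear modulation): y[i] = gamma[i] * x[i] + beta[i].
// In ADR-027 gamma/beta would come from the geometry encoder; here they
// are made-up values purely for demonstration.

/// Apply per-feature scale (gamma) and shift (beta) to a latent vector.
fn film_modulate(features: &[f32], gamma: &[f32], beta: &[f32]) -> Vec<f32> {
    features
        .iter()
        .zip(gamma.iter().zip(beta.iter()))
        .map(|(x, (g, b))| g * x + b)
        .collect()
}

fn main() {
    let latent = [0.5, -1.2, 0.8];  // CSI encoder output (toy values)
    let gamma = [1.1, 0.9, 1.0];    // scale produced by the geometry encoder
    let beta = [0.05, -0.02, 0.0];  // shift produced by the geometry encoder
    println!("{:?}", film_modulate(&latent, &gamma, &beta));
}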

</details> <details> <summary><strong>🔍 Independent Capability Audit (ADR-028)</strong> — 1,031 tests, SHA-256 proof, self-verifying witness bundle</summary>

A 3-agent parallel audit independently verified every claim in this repository — ESP32 hardware, signal processing, neural networks, training pipeline, deployment, and security. Results:

Rust tests:     1,031 passed, 0 failed
Python proof:   VERDICT: PASS (SHA-256: 8c0680d7...)
Bundle verify:  7/7 checks PASS

33-row attestation matrix: 31 capabilities verified YES, 2 not measured at audit time (benchmark throughput, Kubernetes deploy).

Verify it yourself (no hardware needed):

bash
# Run all tests
cd rust-port/wifi-densepose-rs && cargo test --workspace --no-default-features

# Run the deterministic proof
python v1/data/proof/verify.py

# Generate + verify the witness bundle
bash scripts/generate-witness-bundle.sh
cd dist/witness-bundle-ADR028-*/ && bash VERIFY.sh

| Document | What it contains |
| --- | --- |
| ADR-028 | Full audit: ESP32 specs, signal algorithms, NN architectures, training phases, deployment infra |
| Witness Log | 11 reproducible verification steps + 33-row attestation matrix with evidence per row |
| generate-witness-bundle.sh | Creates self-contained tar.gz with test logs, proof output, firmware hashes, crate versions, VERIFY.sh |
</details> <details> <summary><strong>📡 Multistatic Sensing (ADR-029/030/031 — Project RuvSense + RuView)</strong> — Multiple ESP32 nodes fuse viewpoints for production-grade pose, tracking, and exotic sensing</summary>

A single WiFi receiver can track people, but has blind spots — limbs behind the torso are invisible, depth is ambiguous, and two people at similar range create overlapping signals. RuvSense solves this by coordinating multiple ESP32 nodes into a multistatic mesh where every node acts as both transmitter and receiver, creating N×(N-1) measurement links from N devices.
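
The link count is plain combinatorics: each of the N nodes transmits during its TDM slot while the other N-1 receive. A small Rust sketch (the helper name and slot assignment are illustrative, not project code) enumerates the resulting directed links:

rust
// Sketch: enumerate the N*(N-1) directed TX->RX links of an N-node mesh.
// Slot assignment is simplified to "node k transmits in slot k".

fn mesh_links(n: usize) -> Vec<(usize, usize, usize)> {
    let mut links = Vec::new();
    for tx in 0..n {
        for rx in 0..n {
            if tx != rx {
                links.push((tx, rx, tx)); // (tx, rx, tdm_slot)
            }
        }
    }
    links
}

fn main() {
    let links = mesh_links(4);
    assert_eq!(links.len(), 4 * 3); // 12 links from 4 nodes, as in the text
    for (tx, rx, slot) in &links {
        println!("slot {slot}: node {tx} -> node {rx}");
    }
}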

What it does in plain terms:

  • 4 ESP32-S3 nodes ($48 total) provide 12 TX-RX measurement links covering 360 degrees
  • Each node hops across WiFi channels 1/6/11, tripling effective bandwidth from 20→60 MHz
  • Coherence gating rejects noisy frames automatically — no manual tuning, stable for days
  • Two-person tracking at 20 Hz with zero identity swaps over 10 minutes
  • The room itself becomes a persistent model — the system remembers, predicts, and explains

Three ADRs, one pipeline:

| ADR | Codename | What it adds |
| --- | --- | --- |
| ADR-029 | RuvSense | Channel hopping, TDM protocol, multi-node fusion, coherence gating, 17-keypoint Kalman tracker |
| ADR-030 | RuvSense Field | Room electromagnetic eigenstructure (SVD), RF tomography, longitudinal drift detection, intention prediction, gesture recognition, adversarial detection |
| ADR-031 | RuView | Cross-viewpoint attention with geometric bias, viewpoint diversity optimization, embedding-level fusion |

Architecture

4x ESP32-S3 nodes ($48)     TDM: each transmits in turn, all others receive
        │                    Channel hop: ch1→ch6→ch11 per dwell (50ms)
        ▼
Per-Node Signal Processing   Phase sanitize → Hampel → BVP → subcarrier select
        │                    (ADR-014, unchanged per viewpoint)
        ▼
Multi-Band Frame Fusion      3 channels × 56 subcarriers = 168 virtual subcarriers
        │                    Cross-channel phase alignment via NeumannSolver
        ▼
Multistatic Viewpoint Fusion  N nodes → attention-weighted fusion → single embedding
        │                    Geometric bias from node placement angles
        ▼
Coherence Gate               Accept / PredictOnly / Reject / Recalibrate
        │                    Prevents model drift, stable for days
        ▼
Persistent Field Model       SVD baseline → body = observation - environment
        │                    RF tomography, drift detection, intention signals
        ▼
Pose Tracker + DensePose     17-keypoint Kalman, re-ID via AETHER embeddings
                             Multi-person min-cut separation, zero ID swaps
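
The coherence gate in the diagram has four outcomes. A minimal Rust sketch of the decision logic follows, with illustrative thresholds rather than the shipped values:

rust
// Sketch of the four coherence-gate outcomes named in the diagram above.
// The numeric thresholds are made-up illustrations.

#[derive(Debug)]
enum GateDecision {
    Accept,      // frame is clean: update the model
    PredictOnly, // borderline: keep tracking on predictions, don't learn
    Reject,      // too noisy: drop the frame
    Recalibrate, // coherence collapsed: re-estimate the baseline
}

fn gate(coherence: f32) -> GateDecision {
    match coherence {
        c if c >= 0.8 => GateDecision::Accept,
        c if c >= 0.5 => GateDecision::PredictOnly,
        c if c >= 0.2 => GateDecision::Reject,
        _ => GateDecision::Recalibrate,
    }
}

fn main() {
    for c in [0.9, 0.6, 0.3, 0.1] {
        println!("coherence {c:.1} -> {:?}", gate(c));
    }
}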

Seven Exotic Sensing Tiers (ADR-030)

| Tier | Capability | What it detects |
| --- | --- | --- |
| 1 | Field Normal Modes | Room electromagnetic eigenstructure via SVD |
| 2 | Coarse RF Tomography | 3D occupancy volume from link attenuations |
| 3 | Intention Lead Signals | Pre-movement prediction 200-500 ms before action |
| 4 | Longitudinal Biomechanics | Personal movement changes over days/weeks |
| 5 | Cross-Room Continuity | Identity preserved across rooms without cameras |
| 6 | Invisible Interaction | Multi-user gesture control through walls |
| 7 | Adversarial Detection | Physically impossible signal identification |

Acceptance Test

| Metric | Threshold | What it proves |
| --- | --- | --- |
| Torso keypoint jitter | < 30 mm RMS | Precision sufficient for applications |
| Identity swaps | 0 over 10 minutes (12,000 frames) | Reliable multi-person tracking |
| Update rate | 20 Hz (50 ms cycle) | Real-time response |
| Breathing SNR | > 10 dB at 3 m | Small-motion sensitivity confirmed |

New Rust modules (9,000+ lines)

| Crate | New modules | Purpose |
| --- | --- | --- |
| wifi-densepose-signal | ruvsense/ (10 modules) | Multiband fusion, phase alignment, multistatic fusion, coherence, field model, tomography, longitudinal drift, intention detection |
| wifi-densepose-ruvector | viewpoint/ (5 modules) | Cross-viewpoint attention with geometric bias, diversity index, coherence gating, fusion orchestrator |
| wifi-densepose-hardware | esp32/tdm.rs | TDM sensing protocol, sync beacons, clock drift compensation |

Firmware extensions (C, backward-compatible)

| File | Addition |
| --- | --- |
| csi_collector.c | Channel hop table, timer-driven hop, NDP injection stub |
| nvs_config.c | 5 new NVS keys: hop_count, channel_list, dwell_ms, tdm_slot, tdm_node_count |

DDD Domain Model — 6 bounded contexts: Multistatic Sensing, Coherence, Pose Tracking, Field Model, Cross-Room Identity, Adversarial Detection. Full specification: docs/ddd/ruvsense-domain-model.md.

See the ADR documents for full architectural details, GOAP integration plans, and research references.

</details> <details> <summary><b>🔮 Signal-Line Protocol (CRV)</b></summary>

6-Stage CSI Signal Line

Maps the CRV (Coordinate Remote Viewing) signal-line methodology to WiFi CSI processing via ruvector-crv:

| Stage | CRV Name | WiFi CSI Mapping | ruvector Component |
| --- | --- | --- | --- |
| I | Ideograms | Raw CSI gestalt (manmade/natural/movement/energy) | Poincare ball hyperbolic embeddings |
| II | Sensory | Amplitude textures, phase patterns, frequency colors | Multi-head attention vectors |
| III | Dimensional | AP mesh spatial topology, node geometry | GNN graph topology |
| IV | Emotional/AOL | Coherence gating — signal vs noise separation | SNN temporal encoding |
| V | Interrogation | Cross-stage probing — query pose against CSI history | Differentiable search |
| VI | 3D Model | Composite person estimation, MinCut partitioning | Graph partitioning |

Cross-Session Convergence: When multiple AP clusters observe the same person, CRV convergence analysis finds agreement in their signal embeddings — directly mapping to cross-room identity continuity.

rust
use wifi_densepose_ruvector::crv::{WifiCrvConfig, WifiCrvPipeline};

let mut pipeline = WifiCrvPipeline::new(WifiCrvConfig::default());
pipeline.create_session("room-a", "person-001")?;

// Process CSI frames through 6-stage pipeline
let result = pipeline.process_csi_frame("room-a", &amplitudes, &phases)?;
// result.gestalt = Movement, confidence = 0.87
// result.sensory_embedding = [0.12, -0.34, ...]

// Cross-room identity matching via convergence
let convergence = pipeline.find_cross_room_convergence("person-001", 0.75)?;

Architecture:

  • CsiGestaltClassifier — Maps CSI amplitude/phase patterns to 6 gestalt types
  • CsiSensoryEncoder — Extracts texture/color/temperature/luminosity features from subcarriers
  • MeshTopologyEncoder — Encodes AP mesh as GNN graph (Stage III)
  • CoherenceAolDetector — Maps coherence gate states to AOL noise detection (Stage IV)
  • WifiCrvPipeline — Orchestrates all 6 stages into unified sensing session
</details>

📡 Signal Processing & Sensing

<details> <summary><a id="esp32-s3-hardware-pipeline"></a><strong>📡 ESP32-S3 Hardware Pipeline (ADR-018)</strong> — 28 Hz CSI streaming, flash & provision</summary>

A single ESP32-S3 board (~$9) captures WiFi signal data 28 times per second and streams it over UDP. A host server can visualize and record the data, but the ESP32 can also run on its own — detecting presence, measuring breathing and heart rate, and alerting on falls without any server at all.

ESP32-S3 node                    UDP/5005        Host server (optional)
┌────────────────────────┐      ──────────>      ┌──────────────────────┐
│ Captures WiFi signals  │      binary frames    │ Parses frames        │
│ 28 Hz, up to 192 sub-  │      or 32-byte       │ Visualizes poses     │
│ carriers per frame     │      vitals packets   │ Records CSI data     │
│                        │                       │ REST API + WebSocket │
│ On-device (optional):  │                       └──────────────────────┘
│  Presence detection    │
│  Breathing + heart rate│
│  Fall detection        │
│  WASM custom modules   │
└────────────────────────┘

| Metric | Measured on hardware |
| --- | --- |
| CSI frame rate | 28.5 Hz (channel 5, BW20) |
| Subcarriers per frame | 64 / 128 / 192 (depends on WiFi mode) |
| UDP latency | < 1 ms on local network |
| Presence detection range | Reliable at 3 m through walls |
| Binary size | 990 KB (8MB flash) / 773 KB (4MB flash) |
| Boot to ready | ~3.9 seconds |

Flash and provision

Download a pre-built binary — no build toolchain needed:

| Release | What's included | Tag |
| --- | --- | --- |
| v0.6.0 (latest) | Pre-trained models on HuggingFace, 17 sensing apps, 51.6% contrastive improvement, 0.008 ms inference | v0.6.0-esp32 |
| v0.5.5 | SNN + MinCut (#348 fix) + CNN spectrogram + WiFlow + multi-freq mesh + graph transformer | v0.5.5-esp32 |
| v0.5.4 | Cognitum Seed integration (ADR-069), 8-dim feature vectors, RVF store, witness chain, security hardening | v0.5.4-esp32 |
| v0.5.0 | mmWave sensor fusion (ADR-063), auto-detect MR60BHA2/LD2410, 48-byte fused vitals, all v0.4.3.1 fixes | v0.5.0-esp32 |
| v0.4.3.1 | Fall detection fix (#263), 4MB flash (#265), watchdog fix (#266) | v0.4.3.1-esp32 |
| v0.4.1 | CSI build fix, compile guard, AMOLED display, edge intelligence (ADR-057) | v0.4.1-esp32 |
| v0.3.0-alpha | Alpha — adds on-device edge intelligence and WASM modules (ADR-039, ADR-040) | v0.3.0-alpha-esp32 |
| v0.2.0 | Raw CSI streaming, multi-node TDM, channel hopping | v0.2.0-esp32 |
bash
# 1. Flash the firmware to your ESP32-S3 (8MB flash — most boards)
python -m esptool --chip esp32s3 --port COM7 --baud 460800 \
  write_flash --flash-mode dio --flash-size 8MB --flash-freq 80m \
  0x0 bootloader.bin 0x8000 partition-table.bin \
  0xf000 ota_data_initial.bin 0x20000 esp32-csi-node.bin

# 1b. For 4MB flash boards (e.g. ESP32-S3 SuperMini 4MB) — use the 4MB binaries:
python -m esptool --chip esp32s3 --port COM7 --baud 460800 \
  write_flash --flash-mode dio --flash-size 4MB --flash-freq 80m \
  0x0 bootloader.bin 0x8000 partition-table-4mb.bin \
  0xF000 ota_data_initial.bin 0x20000 esp32-csi-node-4mb.bin

# 2. Set WiFi credentials and server address (stored in flash, survives reboots)
python firmware/esp32-csi-node/provision.py --port COM7 \
  --ssid "YourWiFi" --password "secret" --target-ip 192.168.1.20

# 3. (Optional) Start the host server to visualize data
cargo run -p wifi-densepose-sensing-server -- --http-port 3000 --source auto
# Open http://localhost:3000

Multi-node mesh

For better accuracy and room coverage, deploy 3-6 nodes with time-division multiplexing (TDM) so they take turns transmitting:

bash
# Node 0 of a 3-node mesh
python firmware/esp32-csi-node/provision.py --port COM7 \
  --ssid "YourWiFi" --password "secret" --target-ip 192.168.1.20 \
  --node-id 0 --tdm-slot 0 --tdm-total 3

# Node 1
python firmware/esp32-csi-node/provision.py --port COM8 \
  --ssid "YourWiFi" --password "secret" --target-ip 192.168.1.20 \
  --node-id 1 --tdm-slot 1 --tdm-total 3

Nodes can also hop across WiFi channels (1, 6, 11) to increase sensing bandwidth — configured via ADR-029 channel hopping.

Cognitum Seed integration (ADR-069)

Connect an ESP32 to a Cognitum Seed ($131) for persistent vector storage, kNN search, cryptographic witness chain, and AI-accessible MCP proxy:

ESP32-S3 ($9)  ──UDP──>  Host bridge  ──HTTPS──>  Cognitum Seed ($131)
  CSI capture              seed_csi_bridge.py         RVF vector store
  8-dim features @ 1 Hz                              kNN similarity search
  Vitals + presence                                  Ed25519 witness chain
                                                     114-tool MCP proxy
bash
# 1. Provision ESP32 to send features to your laptop
python firmware/esp32-csi-node/provision.py --port COM9 \
  --ssid "YourWiFi" --password "secret" --target-ip 192.168.1.20 --target-port 5006

# 2. Run the bridge (forwards to Seed via HTTPS)
export SEED_TOKEN="your-pairing-token"
python scripts/seed_csi_bridge.py \
  --seed-url https://169.254.42.1:8443 --token "$SEED_TOKEN" --validate

# 3. Check Seed stats
python scripts/seed_csi_bridge.py --token "$SEED_TOKEN" --stats

The 8-dim feature vector captures: presence, motion, breathing rate, heart rate, phase variance, person count, fall detection, and RSSI — all normalized to [0.0, 1.0]. See ADR-069 for the full architecture.
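
As a sketch of what such a feature vector might look like in code, here is a hypothetical Rust struct; the field order and normalization formulas are assumptions for illustration, not the firmware's actual wire layout.

rust
// Hypothetical layout of the 8-dim feature vector described above.
// All values are normalized to [0.0, 1.0] before being forwarded to the
// Seed's RVF store; the specific scaling choices here are illustrative.

struct EdgeFeatures {
    presence: f32,
    motion: f32,
    breathing_rate: f32, // e.g. BPM / 30.0, clamped to [0, 1]
    heart_rate: f32,     // e.g. (BPM - 40) / 80.0, clamped to [0, 1]
    phase_variance: f32,
    person_count: f32,
    fall_detected: f32,  // 0.0 or 1.0
    rssi: f32,
}

impl EdgeFeatures {
    fn to_vector(&self) -> [f32; 8] {
        [
            self.presence, self.motion, self.breathing_rate, self.heart_rate,
            self.phase_variance, self.person_count, self.fall_detected, self.rssi,
        ]
        .map(|v| v.clamp(0.0, 1.0)) // enforce the documented [0, 1] range
    }
}

fn main() {
    let f = EdgeFeatures {
        presence: 1.0, motion: 0.4, breathing_rate: 0.5, heart_rate: 0.35,
        phase_variance: 0.2, person_count: 0.1, fall_detected: 0.0, rssi: 0.7,
    };
    println!("{:?}", f.to_vector());
}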

On-device intelligence (v0.3.0-alpha)

The alpha firmware can analyze signals locally and send compact results instead of raw data. This means the ESP32 works standalone — no server needed for basic sensing. Disabled by default for backward compatibility.

| Tier | What it does | RAM used |
| --- | --- | --- |
| 0 | Off — streams raw CSI only (same as v0.2.0) | 0 KB |
| 1 | Cleans up signals, picks the best subcarriers, compresses data (saves 30-50% bandwidth) | ~30 KB |
| 2 | Everything in Tier 1 + detects presence, measures breathing and heart rate, detects falls | ~33 KB |
| 3 | Everything in Tier 2 + runs custom WASM modules (gesture recognition, intrusion detection, and 63 more) | ~160 KB/module |

Enable without reflashing — just reprovision:

bash
# Turn on Tier 2 (vitals) on an already-flashed node
python firmware/esp32-csi-node/provision.py --port COM7 \
  --ssid "YourWiFi" --password "secret" --target-ip 192.168.1.20 \
  --edge-tier 2

# Fine-tune detection thresholds (fall-thresh in milli-units: 15000 = 15.0 rad/s²)
python firmware/esp32-csi-node/provision.py --port COM7 \
  --edge-tier 2 --vital-int 500 --fall-thresh 15000 --subk-count 16

When Tier 2 is active, the node sends a 32-byte vitals packet once per second containing: presence, motion level, breathing BPM, heart rate BPM, confidence scores, fall alert flag, and occupancy count.
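
A hedged sketch of consuming that packet on the host side; the byte offsets below are hypothetical, since the authoritative layout lives in the firmware (ADR-039/044).

rust
// Parses a 32-byte vitals packet into host-side fields. The offsets and
// encodings used here are assumptions for illustration only.

#[derive(Debug)]
struct VitalsPacket {
    presence: bool,
    motion_level: u8,
    breathing_bpm: f32,
    heart_bpm: f32,
    fall_alert: bool,
    occupancy: u8,
}

fn parse_vitals(buf: &[u8; 32]) -> VitalsPacket {
    VitalsPacket {
        presence: buf[0] != 0,
        motion_level: buf[1],
        fall_alert: buf[2] != 0,
        occupancy: buf[3],
        breathing_bpm: f32::from_le_bytes([buf[4], buf[5], buf[6], buf[7]]),
        heart_bpm: f32::from_le_bytes([buf[8], buf[9], buf[10], buf[11]]),
    }
}

fn main() {
    let mut buf = [0u8; 32];
    buf[0] = 1; // presence
    buf[4..8].copy_from_slice(&14.5f32.to_le_bytes()); // breathing BPM
    println!("{:?}", parse_vitals(&buf));
}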

See firmware/esp32-csi-node/README.md, ADR-039, ADR-044, and Tutorial #34.

</details> <details> <summary><strong>🦀 Rust Implementation (v2)</strong> — 810x faster, 54K fps pipeline</summary>

Performance Benchmarks (Validated)

| Operation | Python (v1) | Rust (v2) | Speedup |
| --- | --- | --- | --- |
| CSI Preprocessing (4x64) | ~5 ms | 5.19 µs | ~1000x |
| Phase Sanitization (4x64) | ~3 ms | 3.84 µs | ~780x |
| Feature Extraction (4x64) | ~8 ms | 9.03 µs | ~890x |
| Motion Detection | ~1 ms | 186 ns | ~5400x |
| Full Pipeline | ~15 ms | 18.47 µs | ~810x |
| Vital Signs | N/A | 86 µs | 11,665 fps |

| Resource | Python (v1) | Rust (v2) |
| --- | --- | --- |
| Memory | ~500 MB | ~100 MB |
| Docker Image | 569 MB | 132 MB |
| Tests | 41 | 542+ |
| WASM Support | No | Yes |
bash
cd rust-port/wifi-densepose-rs
cargo build --release
cargo test --workspace
cargo bench --package wifi-densepose-signal
</details> <details> <summary><a id="vital-sign-detection"></a><strong>💓 Vital Sign Detection (ADR-021)</strong> — Breathing and heartbeat via FFT</summary>

| Capability | Range | Method |
| --- | --- | --- |
| Breathing Rate | 6-30 BPM (0.1-0.5 Hz) | Bandpass filter + FFT peak detection |
| Heart Rate | 40-120 BPM (0.8-2.0 Hz) | Bandpass filter + FFT peak detection |
| Sampling Rate | 20 Hz (ESP32 CSI) | Real-time streaming |
| Confidence | 0.0-1.0 per sign | Spectral coherence + signal quality |
bash
./target/release/sensing-server --source simulate --http-port 3000 --ws-port 3001 --ui-path ../../ui
curl http://localhost:3000/api/v1/vital-signs

See ADR-021.
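
The "bandpass + FFT peak" recipe can be illustrated with a self-contained Rust sketch that scans the breathing band of a 20 Hz amplitude stream with a naive DFT. A real implementation would use an FFT; `dominant_bpm` is illustrative, not the crate API.

rust
// Find the dominant frequency in [lo_hz, hi_hz] of a sampled signal by
// evaluating DFT bins directly (slow but dependency-free), then convert
// the peak frequency to breaths per minute.

fn dominant_bpm(samples: &[f32], fs: f32, lo_hz: f32, hi_hz: f32) -> f32 {
    let n = samples.len() as f32;
    let mut best = (0.0f32, lo_hz); // (power, frequency)
    let mut f = lo_hz;
    while f <= hi_hz {
        let (mut re, mut im) = (0.0f32, 0.0f32);
        for (i, s) in samples.iter().enumerate() {
            let phase = 2.0 * std::f32::consts::PI * f * i as f32 / fs;
            re += s * phase.cos();
            im += s * phase.sin();
        }
        let power = (re * re + im * im) / n;
        if power > best.0 {
            best = (power, f);
        }
        f += 0.01; // 0.01 Hz resolution = 0.6 BPM
    }
    best.1 * 60.0 // Hz -> breaths per minute
}

fn main() {
    // 60 s of synthetic CSI amplitude with a 0.25 Hz (15 BPM) breathing tone.
    let fs = 20.0;
    let samples: Vec<f32> = (0..1200)
        .map(|i| (2.0 * std::f32::consts::PI * 0.25 * i as f32 / fs).sin())
        .collect();
    println!("{:.1} BPM", dominant_bpm(&samples, fs, 0.1, 0.5)); // ~15.0
}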

</details> <details> <summary><a id="wifi-scan-domain-layer"></a><strong>📡 WiFi Scan Domain Layer (ADR-022/025)</strong> — 8-stage RSSI pipeline for Windows, macOS, and Linux WiFi</summary>

| Stage | Purpose |
| --- | --- |
| Predictive Gating | Pre-filter scan results using temporal prediction |
| Attention Weighting | Weight BSSIDs by signal relevance |
| Spatial Correlation | Cross-AP spatial signal correlation |
| Motion Estimation | Detect movement from RSSI variance (see the sketch below) |
| Breathing Extraction | Extract respiratory rate from sub-Hz oscillations |
| Quality Gating | Reject low-confidence estimates |
| Fingerprint Matching | Location and posture classification via RF fingerprints |
| Orchestration | Fuse all stages into unified sensing output |
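
The Motion Estimation stage reduces to a simple statistic: RSSI variance over a sliding window separates a still room from a moving person. A minimal Rust sketch, with an illustrative threshold:

rust
// Variance of an RSSI window (dBm values). High variance suggests motion.

fn rssi_variance(window: &[f32]) -> f32 {
    let n = window.len() as f32;
    let mean = window.iter().sum::<f32>() / n;
    window.iter().map(|x| (x - mean).powi(2)).sum::<f32>() / n
}

fn main() {
    let still = [-52.0, -52.5, -52.2, -52.1, -52.4f32];
    let moving = [-52.0, -58.0, -49.5, -55.0, -61.0f32];
    let threshold = 2.0; // dBm^2, illustrative value
    for (name, w) in [("still", &still), ("moving", &moving)] {
        println!("{name}: var={:.2}, motion={}", rssi_variance(w), rssi_variance(w) > threshold);
    }
}
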
bash
cargo test -p wifi-densepose-wifiscan

See ADR-022 and Tutorial #36.

</details> <details> <summary><a id="wifi-mat-disaster-response"></a><strong>🚨 WiFi-Mat: Disaster Response</strong> — Search & rescue, START triage, 3D localization</summary>

WiFi signals penetrate non-metallic debris (concrete, wood, drywall) where cameras and thermal sensors cannot reach. The WiFi-Mat module (wifi-densepose-mat, 139 tests) uses CSI analysis to detect survivors trapped under rubble, classify their condition using the START triage protocol, and estimate their 3D position — giving rescue teams actionable intelligence within seconds of deployment.

| Capability | How It Works | Performance Target |
| --- | --- | --- |
| Breathing Detection | Bandpass 0.07-1.0 Hz + Fresnel zone modeling detects chest displacement of 5-10 mm at 5 GHz | 4-60 BPM, <500 ms latency |
| Heartbeat Detection | Micro-Doppler shift extraction from fine-grained CSI phase variation | Via ruvector-temporal-tensor |
| 3D Localization | Multi-AP triangulation + CSI fingerprint matching + depth estimation through rubble layers | 3-5 m penetration |
| START Triage | Ensemble classifier votes on breathing + movement + vital stability → P1-P4 priority | <1% false negative |
| Zone Scanning | 16+ concurrent scan zones with periodic re-scan and audit logging | Full disaster site |

Triage classification (START protocol compatible):

| Status | Color | Detection Criteria | Priority |
| --- | --- | --- | --- |
| Immediate | Red | Breathing detected, no movement | P1 |
| Delayed | Yellow | Movement + breathing, stable vitals | P2 |
| Minor | Green | Strong movement, responsive patterns | P3 |
| Deceased | Black | No vitals for >30 min continuous scan | P4 |

Deployment modes: portable (single TX/RX handheld), distributed (multiple APs around collapse site), drone-mounted (UAV scanning), vehicle-mounted (mobile command post).

rust
use wifi_densepose_mat::{DisasterResponse, DisasterConfig, DisasterType, ScanZone, ZoneBounds};

let config = DisasterConfig::builder()
    .disaster_type(DisasterType::Earthquake)
    .sensitivity(0.85)
    .max_depth(5.0)
    .build();

let mut response = DisasterResponse::new(config);
response.initialize_event(location, "Building collapse")?;
response.add_zone(ScanZone::new("North Wing", ZoneBounds::rectangle(0.0, 0.0, 30.0, 20.0)))?;
response.start_scanning().await?;

Safety guarantees: fail-safe defaults (assume life present on ambiguous signals), redundant multi-algorithm voting, complete audit trail, offline-capable (no network required).

</details> <details> <summary><a id="sota-signal-processing"></a><strong>🔬 SOTA Signal Processing (ADR-014)</strong> — 6 research-grade algorithms</summary>

The signal processing layer bridges the gap between raw commodity WiFi hardware output and research-grade sensing accuracy. Each algorithm addresses a specific limitation of naive CSI processing — from hardware-induced phase corruption to environment-dependent multipath interference. All six are implemented in wifi-densepose-signal/src/ with deterministic tests and no mock data.

| Algorithm | What It Does | Why It Matters | Math | Source |
| --- | --- | --- | --- | --- |
| Conjugate Multiplication | Multiplies CSI antenna pairs: H₁[k] × conj(H₂[k]) | Cancels CFO, SFO, and packet detection delay that corrupt raw phase — preserves only environment-caused phase differences | CSI_ratio[k] = H₁[k] * conj(H₂[k]) | SpotFi (SIGCOMM 2015) |
| Hampel Filter | Replaces outliers using running median ± scaled MAD | Z-score uses mean/std, which are corrupted by the very outliers it detects (masking effect). Hampel uses median/MAD, resisting up to 50% contamination | σ̂ = 1.4826 × MAD | Standard DSP; WiGest (2015) |
| Fresnel Zone Model | Models signal variation from chest displacement crossing Fresnel zone boundaries | Zero-crossing counting fails in multipath-rich environments. Fresnel predicts where breathing should appear based on TX-RX-body geometry | ΔΦ = 2π × 2Δd / λ, A = \|sin(ΔΦ/2)\| | FarSense (MobiCom 2019) |
| CSI Spectrogram | Sliding-window FFT (STFT) per subcarrier → 2D time-frequency matrix | Breathing = 0.2-0.4 Hz band, walking = 1-2 Hz, static = noise. 2D structure enables CNN spatial pattern recognition that 1D features miss | S[t,f] = \|Σₙ x[n] w[n-t] e^{-j2πfn}\|² | Standard since 2018 |
| Subcarrier Selection | Ranks subcarriers by motion sensitivity (variance ratio) and selects top-K | Not all subcarriers respond to motion — some sit in multipath nulls. Selecting the 10-20 most sensitive improves SNR by 6-10 dB | sensitivity[k] = var_motion / var_static | WiDance (MobiCom 2017) |
| Body Velocity Profile | Extracts velocity distribution from Doppler shifts across subcarriers | BVP is domain-independent — same velocity profile regardless of room layout, furniture, or AP placement. Basis for cross-environment recognition | BVP[v,t] = Σₖ \|STFTₖ[v,t]\| | Widar 3.0 (MobiSys 2019) |

Processing pipeline order: Raw CSI → Conjugate multiplication (phase cleaning) → Hampel filter (outlier removal) → Subcarrier selection (top-K) → CSI spectrogram (time-frequency) → Fresnel model (breathing) + BVP (activity)
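
As a concrete example of the Hampel row above, a self-contained Rust sketch that replaces any sample deviating more than k · 1.4826 · MAD from the running median (illustrative code, not the wifi-densepose-signal implementation):

rust
// Hampel filter: for each sample, compute the median and MAD of its
// window; if the sample deviates more than k robust sigmas, replace it
// with the median.

fn median(v: &mut Vec<f32>) -> f32 {
    v.sort_by(|a, b| a.partial_cmp(b).unwrap());
    v[v.len() / 2]
}

fn hampel(x: &[f32], half_window: usize, k: f32) -> Vec<f32> {
    let mut out = x.to_vec();
    for i in half_window..x.len() - half_window {
        let mut w: Vec<f32> = x[i - half_window..=i + half_window].to_vec();
        let med = median(&mut w);
        let mut dev: Vec<f32> = w.iter().map(|v| (v - med).abs()).collect();
        let sigma = 1.4826 * median(&mut dev); // robust sigma estimate
        if (x[i] - med).abs() > k * sigma {
            out[i] = med; // outlier: replace with the running median
        }
    }
    out
}

fn main() {
    let csi = [1.0, 1.1, 0.9, 9.0, 1.0, 1.2, 0.8f32]; // 9.0 is impulse noise
    println!("{:?}", hampel(&csi, 2, 3.0)); // the 9.0 spike is suppressed
}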

See ADR-014 for full mathematical derivations.

</details>

🧠 Models & Training

<details> <summary><a id="ai-backbone-ruvector"></a><strong>🤖 AI Backbone: RuVector</strong> — Attention, graph algorithms, and edge-AI compression powering the sensing pipeline</summary>

Raw WiFi signals are noisy, redundant, and environment-dependent. RuVector is the AI intelligence layer that transforms them into clean, structured input for the DensePose neural network. It uses attention mechanisms to learn which signals to trust, graph algorithms that automatically discover which WiFi channels are sensitive to body motion, and compressed representations that make edge inference possible on an $8 microcontroller.

Without RuVector, WiFi DensePose would need hand-tuned thresholds, brute-force matrix math, and 4x more memory — making real-time edge inference impossible.

Raw WiFi CSI (56 subcarriers, noisy)
    |
    +-- ruvector-mincut ---------- Which channels carry body-motion signal? (learned graph partitioning)
    +-- ruvector-attn-mincut ----- Which time frames are signal vs noise? (attention-gated filtering)
    +-- ruvector-attention ------- How to fuse multi-antenna data? (learned weighted aggregation)
    |
    v
Clean, structured signal --> DensePose Neural Network --> 17-keypoint body pose
                         --> FFT Vital Signs -----------> breathing rate, heart rate
                         --> ruvector-solver ------------> physics-based localization

The wifi-densepose-ruvector crate (ADR-017) connects all 7 integration points:

| AI Capability | What It Replaces | RuVector Crate | Result |
| --- | --- | --- | --- |
| Self-optimizing channel selection | Hand-tuned thresholds that break when rooms change | ruvector-mincut | Graph min-cut adapts to any environment automatically |
| Attention-based signal cleaning | Fixed energy cutoffs that miss subtle breathing | ruvector-attn-mincut | Learned gating amplifies body signals, suppresses noise |
| Learned signal fusion | Simple averaging where one bad channel corrupts all | ruvector-attention | Transformer-style attention downweights corrupted channels |
| Physics-informed localization | Expensive nonlinear solvers | ruvector-solver | Sparse least-squares Fresnel geometry in real time |
| O(1) survivor triangulation | O(N³) matrix inversion | ruvector-solver | Neumann series linearization for instant position updates |
| 75% memory compression | 13.4 MB breathing buffers that overflow edge devices | ruvector-temporal-tensor | Tiered 3-8 bit quantization fits 60 s of vitals in 3.4 MB |

See issue #67 for a deep dive with code examples, or cargo add wifi-densepose-ruvector to use it directly.

</details> <details> <summary><a id="rvf-model-container"></a><strong>📦 RVF Model Container</strong> — Single-file deployment with progressive loading</summary>

The RuVector Format (RVF) packages an entire trained model — weights, HNSW indexes, quantization codebooks, SONA adaptation deltas, and WASM inference runtime — into a single self-contained binary file. No external dependencies are needed at deployment time.

Container structure:

┌────────────────────────────────────────────────────────┐
│ RVF Container (.rvf)                                   │
│                                                        │
│  ┌──────────────┐  64-byte header per segment          │
│  │ Manifest     │  Magic: 0x52564653 ("RVFS")          │
│  ├──────────────┤  Type + content hash + compression   │
│  │ Weights      │  Model parameters (f32/f16/u8)       │
│  ├──────────────┤                                      │
│  │ HNSW Index   │  Vector search index                 │
│  ├──────────────┤                                      │
│  │ Quant        │  Quantization codebooks              │
│  ├──────────────┤                                      │
│  │ SONA Profile │  LoRA deltas + EWC++ Fisher matrix   │
│  ├──────────────┤                                      │
│  │ Witness      │  Ed25519 training proof              │
│  ├──────────────┤                                      │
│  │ Vitals Config│  Breathing/HR filter parameters      │
│  └──────────────┘                                      │
└────────────────────────────────────────────────────────┘
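
A hedged Rust sketch of validating the segment magic when reading a container. Everything past the magic check is a hypothetical layout; the authoritative format lives in the rvf-wire crate.

rust
// Check the 64-byte segment header's magic (0x52564653 = "RVFS").
// Field offsets beyond the magic are illustrative assumptions.

const RVF_MAGIC: u32 = 0x5256_4653; // "RVFS" in big-endian byte order

#[derive(Debug)]
struct SegmentHeader {
    magic: u32,
    seg_type: u8,
    // ... remaining header bytes: content hash, compression, length
}

fn parse_header(buf: &[u8]) -> Result<SegmentHeader, &'static str> {
    if buf.len() < 64 {
        return Err("segment header is 64 bytes");
    }
    let magic = u32::from_be_bytes([buf[0], buf[1], buf[2], buf[3]]);
    if magic != RVF_MAGIC {
        return Err("bad magic: not an RVF segment");
    }
    Ok(SegmentHeader { magic, seg_type: buf[4] })
}

fn main() {
    let mut buf = [0u8; 64];
    buf[..4].copy_from_slice(&RVF_MAGIC.to_be_bytes());
    println!("{:?}", parse_header(&buf));
}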

Deployment targets:

| Target | Quantization | Size | Load Time | Use Case |
| --- | --- | --- | --- | --- |
| ESP32 / IoT | int4 | ~0.7 MB | <5 ms (Layer A) | Presence + breathing only |
| Mobile / WebView | int8 | ~6 MB | ~200 ms (Layer B) | Pose estimation on phone |
| Browser (WASM) | int8 | ~10 MB | ~500 ms (Layer B) | In-browser demo |
| Field (WiFi-Mat) | fp16 | ~62 MB | ~2 s (Layer C) | Full DensePose + disaster triage |
| Server / Cloud | f32 | ~50+ MB | ~3 s (Layer C) | Training + full inference |

| Property | Detail |
| --- | --- |
| Format | Segment-based binary, 20+ segment types, CRC32 integrity per segment |
| Progressive Loading | Layer A (<5 ms): manifest + entry points → Layer B (100 ms-1 s): hot weights + adjacency → Layer C (seconds): full graph |
| Signing | Ed25519 training proofs for verifiable provenance — chain of custody from training data to deployed model |
| Quantization | Per-segment temperature-tiered: f32 (full), f16 (half), u8 (int8), int4 — with SIMD-accelerated distance computation |
| CLI | --export-rvf (generate), --load-rvf (config), --save-rvf (persist), --model (inference), --progressive (3-layer load) |
bash
# Export model package
./target/release/sensing-server --export-rvf wifi-densepose-v1.rvf

# Load and run with progressive loading
./target/release/sensing-server --model wifi-densepose-v1.rvf --progressive

# Export via Docker
docker run --rm -v $(pwd):/out ruvnet/wifi-densepose:latest --export-rvf /out/model.rvf

Built on the rvf crate family (rvf-types, rvf-wire, rvf-manifest, rvf-index, rvf-quant, rvf-crypto, rvf-runtime). See ADR-023.

</details> <details> <summary><a id="training--fine-tuning"></a><strong>🧬 Training & Fine-Tuning</strong> — MM-Fi/Wi-Pose pre-training, SONA adaptation</summary>

The training pipeline implements 8 phases in pure Rust (7,832 lines, zero external ML dependencies). It trains a graph transformer with cross-attention to map CSI feature matrices to 17 COCO body keypoints and DensePose UV coordinates — following the approach from the CMU "DensePose From WiFi" paper (arXiv:2301.00250). RuVector crates provide the core building blocks: ruvector-attention for cross-attention layers, ruvector-mincut for multi-person matching, and ruvector-temporal-tensor for CSI buffer compression.

Three-tier data strategy:

| Tier | Method | Purpose | RuVector Integration |
| --- | --- | --- | --- |
| 1. Pre-train | MM-Fi + Wi-Pose public datasets | Cross-environment generalization (multi-subject, multi-room) | ruvector-temporal-tensor compresses CSI windows (114→56 subcarrier resampling) |
| 2. Fine-tune | ESP32 CSI + camera pseudo-labels | Environment-specific multipath adaptation | ruvector-solver for Fresnel geometry, ruvector-attn-mincut for subcarrier gating |
| 3. SONA adapt | Micro-LoRA (rank-4) + EWC++ | Continuous on-device learning without catastrophic forgetting | SONA architecture (Self-Optimizing Neural Architecture) |

Training pipeline components:

| Phase | Module | What It Does | RuVector Crate |
| --- | --- | --- | --- |
| 1 | dataset.rs (850 lines) | MM-Fi .npy + Wi-Pose .mat loaders, subcarrier resampling (114→56, 30→56), windowing | ruvector-temporal-tensor |
| 2 | graph_transformer.rs (855 lines) | COCO BodyGraph (17 kp, 16 edges), AntennaGraph, multi-head CrossAttention, GCN message passing | ruvector-attention |
| 3 | trainer.rs (881 lines) | 6-term composite loss (MSE, CE, UV, temporal, bone, symmetry), SGD+momentum, cosine+warmup, PCK/OKS | ruvector-mincut (person matching) |
| 4 | sona.rs (639 lines) | LoRA adapters (A×B delta), EWC++ Fisher regularization, EnvironmentDetector (3-sigma drift) | sona |
| 5 | sparse_inference.rs (753 lines) | NeuronProfiler hot/cold partitioning, SparseLinear (skip cold rows), INT8/FP16 quantization | ruvector-sparse-inference |
| 6 | rvf_pipeline.rs (1,027 lines) | Progressive 3-layer loader, HNSW index, OverlayGraph, RvfModelBuilder | ruvector-core (HNSW) |
| 7 | rvf_container.rs (914 lines) | Binary container format, 6+ segment types, CRC32 integrity | rvf |
| 8 | main.rs integration | --train, --model, --progressive CLI flags, REST endpoints | |

SONA (Self-Optimizing Neural Architecture) — the continuous adaptation system:

| Component | What It Does | Why It Matters |
| --- | --- | --- |
| Micro-LoRA (rank-4) | Trains small A×B weight deltas instead of full weights (see the sketch below) | 100x fewer parameters to update → runs on ESP32 |
| EWC++ (Fisher matrix) | Penalizes changes to important weights from previous environments | Prevents catastrophic forgetting when moving between rooms |
| EnvironmentDetector | Monitors CSI feature drift with 3-sigma threshold | Auto-triggers adaptation when the model is moved to a new space |
| Best-epoch snapshot | Saves best validation loss weights, restores before export | Prevents shipping overfit final-epoch parameters |
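
A minimal Rust sketch of the rank-4 LoRA update W' = W + A·B; the function and shapes are illustrative, not the sona crate API.

rust
// LoRA: instead of updating a full d_out x d_in matrix W, train a rank-r
// delta A (d_out x r) and B (r x d_in) and apply W' = W + A*B. With r = 4
// the trainable parameter count drops from d_out*d_in to r*(d_out + d_in).

fn apply_lora(
    w: &[Vec<f32>], // frozen base weights, d_out x d_in
    a: &[Vec<f32>], // d_out x r
    b: &[Vec<f32>], // r x d_in
) -> Vec<Vec<f32>> {
    let (d_out, d_in, r) = (w.len(), w[0].len(), b.len());
    let mut out = w.to_vec();
    for i in 0..d_out {
        for j in 0..d_in {
            for k in 0..r {
                out[i][j] += a[i][k] * b[k][j]; // low-rank delta
            }
        }
    }
    out
}

fn main() {
    let w = vec![vec![0.0f32; 6]; 6]; // toy 6x6 layer
    let a = vec![vec![0.1f32; 4]; 6]; // rank-4 factors
    let b = vec![vec![0.1f32; 6]; 4];
    println!("{:?}", apply_lora(&w, &a, &b)[0]);
    // The saving appears at realistic layer sizes: a 256x256 layer has
    // 65,536 full parameters vs 4*(256+256) = 2,048 LoRA parameters.
}
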
bash
# Pre-train on MM-Fi dataset
./target/release/sensing-server --train --dataset data/ --dataset-type mmfi --epochs 100

# Train and export to RVF in one step
./target/release/sensing-server --train --dataset data/ --epochs 100 --save-rvf model.rvf

# Via Docker (no toolchain needed)
docker run --rm -v $(pwd)/data:/data ruvnet/wifi-densepose:latest \
  --train --dataset /data --epochs 100 --export-rvf /data/model.rvf

See ADR-023 · SONA crate · arXiv:2301.00250

</details> <details> <summary><a id="ruvector-crates"></a><strong>🔩 RuVector Crates</strong> — 11 vendored signal intelligence crates from <a href="https://github.com/ruvnet/ruvector">github.com/ruvnet/ruvector</a></summary>

5 directly-used crates (v2.0.4, declared in Cargo.toml, 7 integration points):

| Crate | What It Does | Where It's Used in WiFi-DensePose | Source |
| --- | --- | --- | --- |
| ruvector-attention | Scaled dot-product attention, MoE routing, sparse attention | model.rs (spatial attention), bvp.rs (sensitivity-weighted velocity profiles) | crate |
| ruvector-mincut | Subpolynomial dynamic min-cut O(n^1.5 log n) | metrics.rs (DynamicPersonMatcher — multi-person assignment), subcarrier_selection.rs (sensitive/insensitive split) | crate |
| ruvector-attn-mincut | Attention-gated spectrogram noise suppression | model.rs (antenna attention gating), spectrogram.rs (gate noisy time-frequency bins) | crate |
| ruvector-solver | Sparse Neumann series solver O(sqrt(n)) | fresnel.rs (TX-body-RX geometry), triangulation.rs (3D localization), subcarrier.rs (sparse interpolation 114→56) | crate |
| ruvector-temporal-tensor | Tiered temporal compression (8/7/5/3-bit) | dataset.rs (CSI buffer compression), breathing.rs + heartbeat.rs (compressed vital sign spectrograms) | crate |

6 additional vendored crates (used by training pipeline and inference):

| Crate | What It Does | Source |
| --- | --- | --- |
| ruvector-core | VectorDB engine, HNSW index, SIMD distance functions, quantization codebooks | crate |
| ruvector-gnn | Graph neural network layers, graph attention, EWC-regularized training | crate |
| ruvector-graph-transformer | Proof-gated graph transformer with cross-attention | crate |
| ruvector-sparse-inference | PowerInfer-style hot/cold neuron partitioning, skip cold rows at runtime | crate |
| ruvector-nervous-system | PredictiveLayer, OscillatoryRouter, Hopfield associative memory | crate |
| ruvector-coherence | Spectral coherence monitoring, HNSW graph health, Fiedler connectivity | crate |

The full RuVector ecosystem includes 90+ crates. See github.com/ruvnet/ruvector for the complete library, and vendor/ruvector/ for the vendored source in this project.

</details> <details> <summary><a id="ruv-neural"></a><strong>🧠 rUv Neural</strong> — Brain topology analysis ecosystem for neural decoding and medical sensing</summary>

rUv Neural is a 12-crate Rust ecosystem that extends RuView's signal processing into brain network topology analysis. It transforms neural magnetic field measurements from quantum sensors (NV diamond magnetometers, optically pumped magnetometers) into dynamic connectivity graphs, using minimum cut algorithms to detect cognitive state transitions in real time. The ecosystem includes crates for signal processing (ruv-neural-signal), graph construction (ruv-neural-graph), HNSW-indexed pattern memory (ruv-neural-memory), graph embeddings (ruv-neural-embed), cognitive state decoding (ruv-neural-decoder), and ESP32/WASM edge targets. Medical and research applications include early neurological disease detection via topology signatures, brain-computer interfaces, clinical neurofeedback, and non-invasive biomedical sensing, bridging RuView's RF sensing architecture with the emerging field of quantum biomedical diagnostics.

</details>
<details> <summary><strong>🏗️ System Architecture</strong> — End-to-end data flow from CSI capture to REST/WebSocket API</summary>

End-to-End Pipeline

mermaid
graph TB
    subgraph HW ["📡 Hardware Layer"]
        direction LR
        R1["WiFi Router 1
<small>CSI Source</small>"]
        R2["WiFi Router 2
<small>CSI Source</small>"]
        R3["WiFi Router 3
<small>CSI Source</small>"]
        ESP["ESP32-S3 Mesh
<small>20 Hz · 56 subcarriers</small>"]
        WIN["Windows WiFi
<small>RSSI scanning</small>"]
    end

    subgraph INGEST ["⚡ Ingestion"]
        AGG["Aggregator
<small>UDP :5005 · ADR-018 frames</small>"]
        BRIDGE["Bridge
<small>I/Q → amplitude + phase</small>"]
    end

    subgraph SIGNAL ["🔬 Signal Processing — RuVector v2.0.4"]
        direction TB
        PHASE["Phase Sanitization
<small>SpotFi conjugate multiply</small>"]
        HAMPEL["Hampel Filter
<small>Outlier rejection · σ=3</small>"]
        SUBSEL["Subcarrier Selection
<small>ruvector-mincut · sensitive/insensitive split</small>"]
        SPEC["Spectrogram
<small>ruvector-attn-mincut · gated STFT</small>"]
        FRESNEL["Fresnel Geometry
<small>ruvector-solver · TX-body-RX distance</small>"]
        BVP["Body Velocity Profile
<small>ruvector-attention · weighted BVP</small>"]
    end

    subgraph ML ["🧠 Neural Pipeline"]
        direction TB
        GRAPH["Graph Transformer
<small>17 COCO keypoints · 16 edges</small>"]
        CROSS["Cross-Attention
<small>CSI features → body pose</small>"]
        SONA["SONA Adapter
<small>LoRA rank-4 · EWC++</small>"]
    end

    subgraph VITAL ["💓 Vital Signs"]
        direction LR
        BREATH["Breathing
<small>0.1–0.5 Hz · FFT peak</small>"]
        HEART["Heart Rate
<small>0.8–2.0 Hz · FFT peak</small>"]
        MOTION["Motion Level
<small>Variance + band power</small>"]
    end

    subgraph API ["🌐 Output Layer"]
        direction LR
        REST["REST API
<small>Axum :3000 · 6 endpoints</small>"]
        WS["WebSocket
<small>:3001 · real-time stream</small>"]
        ANALYTICS["Analytics
<small>Fall · Activity · START triage</small>"]
        UI["Web UI
<small>Three.js · Gaussian splats</small>"]
    end

    R1 & R2 & R3 --> AGG
    ESP --> AGG
    WIN --> BRIDGE
    AGG --> BRIDGE
    BRIDGE --> PHASE
    PHASE --> HAMPEL
    HAMPEL --> SUBSEL
    SUBSEL --> SPEC
    SPEC --> FRESNEL
    FRESNEL --> BVP
    BVP --> GRAPH
    GRAPH --> CROSS
    CROSS --> SONA
    SONA --> BREATH & HEART & MOTION
    BREATH & HEART & MOTION --> REST & WS & ANALYTICS
    WS --> UI

    style HW fill:#1a1a2e,stroke:#e94560,color:#eee
    style INGEST fill:#16213e,stroke:#0f3460,color:#eee
    style SIGNAL fill:#0f3460,stroke:#533483,color:#eee
    style ML fill:#533483,stroke:#e94560,color:#eee
    style VITAL fill:#2d132c,stroke:#e94560,color:#eee
    style API fill:#1a1a2e,stroke:#0f3460,color:#eee

Signal Processing Detail

mermaid
graph LR
    subgraph RAW ["Raw CSI Frame"]
        IQ["I/Q Samples
<small>56–192 subcarriers × N antennas</small>"]
    end

    subgraph CLEAN ["Phase Cleanup"]
        CONJ["Conjugate Multiply
<small>Remove carrier freq offset</small>"]
        UNWRAP["Phase Unwrap
<small>Remove 2π discontinuities</small>"]
        HAMPEL2["Hampel Filter
<small>Remove impulse noise</small>"]
    end

    subgraph SELECT ["Subcarrier Intelligence"]
        MINCUT["Min-Cut Partition
<small>ruvector-mincut</small>"]
        GATE["Attention Gate
<small>ruvector-attn-mincut</small>"]
    end

    subgraph EXTRACT ["Feature Extraction"]
        STFT["STFT Spectrogram
<small>Time-frequency decomposition</small>"]
        FRESNELZ["Fresnel Zones
<small>ruvector-solver</small>"]
        BVPE["BVP Estimation
<small>ruvector-attention</small>"]
    end

    subgraph OUT ["Output Features"]
        AMP["Amplitude Matrix"]
        PHASE2["Phase Matrix"]
        DOPPLER["Doppler Shifts"]
        VITALS["Vital Band Power"]
    end

    IQ --> CONJ --> UNWRAP --> HAMPEL2
    HAMPEL2 --> MINCUT --> GATE
    GATE --> STFT --> FRESNELZ --> BVPE
    BVPE --> AMP & PHASE2 & DOPPLER & VITALS

    style RAW fill:#0d1117,stroke:#58a6ff,color:#c9d1d9
    style CLEAN fill:#161b22,stroke:#58a6ff,color:#c9d1d9
    style SELECT fill:#161b22,stroke:#d29922,color:#c9d1d9
    style EXTRACT fill:#161b22,stroke:#3fb950,color:#c9d1d9
    style OUT fill:#0d1117,stroke:#8b949e,color:#c9d1d9

Deployment Topology

mermaid
graph TB
    subgraph EDGE ["Edge (ESP32-S3 Mesh)"]
        E1["Node 1
<small>Kitchen</small>"]
        E2["Node 2
<small>Living room</small>"]
        E3["Node 3
<small>Bedroom</small>"]
    end

    subgraph SERVER ["Server (Rust · 132 MB Docker)"]
        SENSE["Sensing Server
<small>:3000 REST · :3001 WS · :5005 UDP</small>"]
        RVF["RVF Model
<small>Progressive 3-layer load</small>"]
        STORE["Time-Series Store
<small>In-memory ring buffer</small>"]
    end

    subgraph CLIENT ["Clients"]
        BROWSER["Browser
<small>Three.js UI · Gaussian splats</small>"]
        MOBILE["Mobile App
<small>WebSocket stream</small>"]
        DASH["Dashboard
<small>REST polling</small>"]
        IOT["Home Automation
<small>MQTT bridge</small>"]
    end

    E1 -->|"UDP :5005
ADR-018 frames"| SENSE
    E2 -->|"UDP :5005"| SENSE
    E3 -->|"UDP :5005"| SENSE
    SENSE <--> RVF
    SENSE <--> STORE
    SENSE -->|"WS :3001
real-time JSON"| BROWSER & MOBILE
    SENSE -->|"REST :3000
on-demand"| DASH & IOT

    style EDGE fill:#1a1a2e,stroke:#e94560,color:#eee
    style SERVER fill:#16213e,stroke:#533483,color:#eee
    style CLIENT fill:#0f3460,stroke:#0f3460,color:#eee

| Component | Crate / Module | Description |
| --- | --- | --- |
| Aggregator | wifi-densepose-hardware | ESP32 UDP listener, ADR-018 frame parser, I/Q → amplitude/phase bridge |
| Signal Processor | wifi-densepose-signal | SpotFi phase sanitization, Hampel filter, STFT spectrogram, Fresnel geometry, BVP |
| Subcarrier Selection | ruvector-mincut + ruvector-attn-mincut | Dynamic sensitive/insensitive partitioning, attention-gated noise suppression |
| Fresnel Solver | ruvector-solver | Sparse Neumann series O(sqrt(n)) for TX-body-RX distance estimation |
| Graph Transformer | wifi-densepose-train | COCO BodyGraph (17 kp, 16 edges), cross-attention CSI→pose, GCN message passing |
| SONA | sona crate | Micro-LoRA (rank-4) adaptation, EWC++ catastrophic forgetting prevention |
| Vital Signs | wifi-densepose-signal | FFT-based breathing (0.1-0.5 Hz) and heartbeat (0.8-2.0 Hz) extraction |
| REST API | wifi-densepose-sensing-server | Axum server: /api/v1/sensing, /health, /vital-signs, /bssid, /sona |
| WebSocket | wifi-densepose-sensing-server | Real-time pose, sensing, and vital sign streaming on :3001 |
| Analytics | wifi-densepose-mat | Fall detection, activity recognition, START triage (WiFi-Mat disaster module) |
| Web UI | ui/ | Three.js scene, Gaussian splat visualization, signal dashboard |
</details>

🖥️ CLI Usage

<details> <summary><strong>Rust Sensing Server</strong> — Primary CLI interface</summary>
bash
# Start with simulated data (no hardware)
./target/release/sensing-server --source simulate --ui-path ../../ui

# Start with ESP32 CSI hardware
./target/release/sensing-server --source esp32 --udp-port 5005

# Start with Windows WiFi RSSI
./target/release/sensing-server --source wifi

# Run vital sign benchmark
./target/release/sensing-server --benchmark

# Export RVF model package
./target/release/sensing-server --export-rvf model.rvf

# Train a model
./target/release/sensing-server --train --dataset data/ --epochs 100

# Load trained model with progressive loading
./target/release/sensing-server --model wifi-densepose-v1.rvf --progressive

| Flag | Description |
| --- | --- |
| --source | Data source: auto, wifi, esp32, simulate |
| --http-port | HTTP port for UI and REST API (default: 8080) |
| --ws-port | WebSocket port (default: 8765) |
| --udp-port | UDP port for ESP32 CSI frames (default: 5005) |
| --benchmark | Run vital sign benchmark (1000 frames) and exit |
| --export-rvf | Export RVF container package and exit |
| --load-rvf | Load model config from RVF container |
| --save-rvf | Save model state on shutdown |
| --model | Load trained .rvf model for inference |
| --progressive | Enable progressive loading (Layer A instant start) |
| --train | Train a model and exit |
| --dataset | Path to dataset directory (MM-Fi or Wi-Pose) |
| --epochs | Training epochs (default: 100) |
</details> <details> <summary><a id="rest-api--websocket"></a><strong>REST API & WebSocket</strong> — Endpoints reference</summary>

REST API (Rust Sensing Server)

bash
GET  /api/v1/sensing              # Latest sensing frame
GET  /api/v1/vital-signs          # Breathing, heart rate, confidence
GET  /api/v1/bssid                # Multi-BSSID registry
GET  /api/v1/model/layers         # Progressive loading status
GET  /api/v1/model/sona/profiles  # SONA profiles
POST /api/v1/model/sona/activate  # Activate SONA profile

WebSocket: ws://localhost:3001/ws/sensing (real-time sensing + vital signs)

Default ports (Docker): HTTP 3000, WS 3001. Binary defaults: HTTP 8080, WS 8765. Override with --http-port / --ws-port.

</details> <details> <summary><a id="hardware-support-1"></a><strong>Hardware Support</strong> — Devices, cost, and guides</summary>

| Hardware | CSI | Cost | Guide |
| --- | --- | --- | --- |
| ESP32-S3 | Native | ~$8 | Tutorial #34 |
| Intel 5300 | Firmware mod | ~$15 | Linux iwl-csi |
| Atheros AR9580 | ath9k patch | ~$20 | Linux only |
| Any Windows WiFi | RSSI only | $0 | Tutorial #36 |
| Any macOS WiFi | RSSI only (CoreWLAN) | $0 | ADR-025 |
| Any Linux WiFi | RSSI only (iw) | $0 | Requires iw + CAP_NET_ADMIN |
</details> <details> <summary><strong>QEMU Firmware Testing (ADR-061) — 9-Layer Platform</strong></summary>

Test ESP32-S3 firmware without physical hardware using Espressif's QEMU fork. The platform provides 9 layers of testing capability:

| Layer | Capability | Script / Config |
| --- | --- | --- |
| 1 | Mock CSI generator (10 physics-based scenarios) | firmware/esp32-csi-node/main/mock_csi.c |
| 2 | Single-node QEMU runner + UART validation (16 checks) | scripts/qemu-esp32s3-test.sh, scripts/validate_qemu_output.py |
| 3 | Multi-node TDM mesh simulation (TAP networking) | scripts/qemu-mesh-test.sh, scripts/validate_mesh_test.py |
| 4 | GDB remote debugging (VS Code integration) | .vscode/launch.json |
| 5 | Code coverage (gcov/lcov via apptrace) | firmware/esp32-csi-node/sdkconfig.coverage |
| 6 | Fuzz testing (libFuzzer + ASAN/UBSAN) | firmware/esp32-csi-node/test/fuzz_*.c |
| 7 | NVS provisioning matrix (14 configs) | scripts/generate_nvs_matrix.py |
| 8 | Snapshot regression (sub-second VM restore) | scripts/qemu-snapshot-test.sh |
| 9 | Chaos testing (fault injection + health monitoring) | scripts/qemu-chaos-test.sh, scripts/inject_fault.py, scripts/check_health.py |
bash
# Quick start: build + run + validate
cd firmware/esp32-csi-node
idf.py -D SDKCONFIG_DEFAULTS="sdkconfig.defaults;sdkconfig.qemu" build

# Single-node test (builds, merges flash, runs QEMU, validates output)
bash scripts/qemu-esp32s3-test.sh

# Multi-node mesh test (3 QEMU instances with TDM)
sudo bash scripts/qemu-mesh-test.sh 3

# Fuzz testing (60 seconds per target)
cd firmware/esp32-csi-node/test && make all CC=clang && make run_serialize FUZZ_DURATION=60

# Chaos testing (fault injection resilience)
bash scripts/qemu-chaos-test.sh --faults all --duration 120

10 test scenarios: empty room, static person, walking, fall, multi-person, channel sweep, MAC filter, ring overflow, boundary RSSI, zero-length frames.

14 NVS configs: default, WiFi-only, full ADR-060, edge tiers 0/1/2, TDM mesh, WASM signed/unsigned, 5GHz, boundary max/min, power-save, empty-strings.

CI: GitHub Actions workflow runs 7 NVS matrix configs, 3 fuzz targets, and NVS binary validation on every push to firmware/.

See ADR-061 for the full architecture.

</details> <details> <summary><strong>QEMU Swarm Configurator (ADR-062)</strong></summary>

Test multiple ESP32-S3 nodes simultaneously using a YAML-driven orchestrator. Define node roles, network topologies, and validation assertions in a config file.

bash
# Quick smoke test (2 nodes, 15 seconds)
python3 scripts/qemu_swarm.py --preset smoke

# Standard 3-node test (coordinator + 2 sensors)
python3 scripts/qemu_swarm.py --preset standard

# See all presets
python3 scripts/qemu_swarm.py --list-presets

# Preview without running
python3 scripts/qemu_swarm.py --preset standard --dry-run

Topologies: star (sensors → coordinator), mesh (fully connected), line (relay chain), ring (circular).

Node roles: sensor (generates CSI), coordinator (aggregates), gateway (bridges to host).

7 presets: smoke, standard, ci-matrix, large-mesh, line-relay, ring-fault, heterogeneous.

9 swarm assertions: boot check, crash detection, TDM collision, frame production, coordinator reception, fall detection, frame rate, boot time, heap health.

See ADR-062 and the User Guide for step-by-step instructions.

</details> <details> <summary><strong>Python Legacy CLI</strong> — v1 API server commands</summary>
bash
wifi-densepose start                    # Start API server
wifi-densepose -c config.yaml start     # Custom config
wifi-densepose -v start                 # Verbose logging
wifi-densepose status                   # Check status
wifi-densepose stop                     # Stop server
wifi-densepose config show              # Show configuration
wifi-densepose db init                  # Initialize database
wifi-densepose tasks list               # List background tasks
</details> <details> <summary><strong>Documentation Links</strong></summary> </details>

🧪 Testing

<details> <summary><strong>542+ tests across 7 suites</strong> — zero mocks, hardware-free simulation</summary>
bash
# Rust tests (primary — 542+ tests)
cd rust-port/wifi-densepose-rs
cargo test --workspace

# Sensing server tests (229 tests)
cargo test -p wifi-densepose-sensing-server

# Vital sign benchmark
./target/release/sensing-server --benchmark

# Python tests
python -m pytest v1/tests/ -v

# Pipeline verification (no hardware needed)
./verify

| Suite | Tests | What It Covers |
| --- | --- | --- |
| sensing-server lib | 147 | Graph transformer, trainer, SONA, sparse inference, RVF |
| sensing-server bin | 48 | CLI integration, WebSocket, REST API |
| RVF integration | 16 | Container build, read, progressive load |
| Vital signs integration | 18 | FFT detection, breathing, heartbeat |
| wifi-densepose-signal | 83 | SOTA algorithms, Doppler, Fresnel |
| wifi-densepose-mat | 139 | Disaster response, triage, localization |
| wifi-densepose-wifiscan | 91 | 8-stage RSSI pipeline |
</details>

🚀 Deployment

<details> <summary><strong>Docker deployment</strong> — Production setup with docker-compose</summary>
bash
# Rust sensing server (132 MB)
docker pull ruvnet/wifi-densepose:latest
docker run -p 3000:3000 -p 3001:3001 -p 5005:5005/udp ruvnet/wifi-densepose:latest

# Python pipeline (569 MB)
docker pull ruvnet/wifi-densepose:python
docker run -p 8765:8765 -p 8080:8080 ruvnet/wifi-densepose:python

# Both via docker-compose
cd docker && docker compose up

# Export RVF model
docker run --rm -v $(pwd):/out ruvnet/wifi-densepose:latest --export-rvf /out/model.rvf

Environment Variables

bash
RUST_LOG=info                    # Logging level
WIFI_INTERFACE=wlan0             # WiFi interface for RSSI
POSE_CONFIDENCE_THRESHOLD=0.7    # Minimum confidence
POSE_MAX_PERSONS=10              # Max tracked individuals
</details>

📊 Performance Metrics

<details> <summary><strong>Measured benchmarks</strong> — Rust sensing server, validated via cargo bench</summary>

Rust Sensing Server

| Metric | Value |
| --- | --- |
| Vital sign detection | 11,665 fps (86 µs/frame) |
| Full CSI pipeline | 54,000 fps (18.47 µs/frame) |
| Motion detection | 186 ns (~5,400x vs Python) |
| Docker image | 132 MB |
| Memory usage | ~100 MB |
| Test count | 542+ |

Python vs Rust

| Operation | Python | Rust | Speedup |
| --- | --- | --- | --- |
| CSI Preprocessing | ~5 ms | 5.19 µs | 1000x |
| Phase Sanitization | ~3 ms | 3.84 µs | 780x |
| Feature Extraction | ~8 ms | 9.03 µs | 890x |
| Motion Detection | ~1 ms | 186 ns | 5400x |
| Full Pipeline | ~15 ms | 18.47 µs | 810x |
</details>

🤝 Contributing

<details> <summary><strong>Dev setup, code standards, PR process</strong></summary>
bash
git clone https://github.com/ruvnet/RuView.git
cd RuView

# Rust development
cd rust-port/wifi-densepose-rs
cargo build --release
cargo test --workspace

# Python development
python -m venv venv && source venv/bin/activate
pip install -r requirements-dev.txt && pip install -e .
pre-commit install
  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes
  4. Push and open a Pull Request
</details>

📄 Changelog

<details> <summary><strong>Release history</strong></summary>

v3.2.0 — 2026-03-03

Edge intelligence: 24 hot-loadable WASM modules for on-device CSI processing on ESP32-S3.

  • ADR-041 Edge Intelligence Modules — 24 no_std Rust modules compiled to wasm32-unknown-unknown, loaded via WASM3 on ESP32; 8 categories covering signal intelligence, adaptive learning, spatial reasoning, temporal analysis, AI security, quantum-inspired, autonomous systems, and exotic algorithms
  • Vendor Integration — Algorithms ported from midstream (DTW, attractors, Flash Attention, min-cut, optimal transport) and sublinear-time-solver (PageRank, HNSW, sparse recovery, spiking NN)
  • On-device gesture learning — User-teachable DTW gesture recognition with 3-rehearsal protocol and 16 template slots
  • Lifelong learning (EWC++) — Elastic Weight Consolidation prevents catastrophic forgetting when learning new tasks
  • AI security modules — FNV-1a replay detection, injection/jamming detection, 6D behavioral anomaly profiling with Mahalanobis scoring
  • Self-healing mesh — 8-node mesh with health tracking, degradation/recovery hysteresis, and coverage redistribution
  • Common utility library — vendor_common.rs shared across all 24 modules: CircularBuffer, EMA, WelfordStats, DTW, FixedPriorityQueue, vector math
  • 243 tests passing — All modules include comprehensive inline tests; 0 failures
  • Security audit — 15 findings addressed (1 critical, 3 high, 6 medium, 5 low)

v3.1.0 — 2026-03-02

Multistatic sensing, persistent field model, and cross-viewpoint fusion — the biggest capability jump since v2.0.

  • Project RuvSense (ADR-029) — Multistatic mesh: TDM protocol, channel hopping (ch1/6/11), multi-band frame fusion, coherence gating, 17-keypoint Kalman tracker with re-ID; 10 new signal modules (5,300+ lines)
  • RuvSense Persistent Field Model (ADR-030) — 7 exotic sensing tiers: field normal modes (SVD), RF tomography, longitudinal drift detection, intention prediction, cross-room identity, gesture classification, adversarial detection
  • Project RuView (ADR-031) — Cross-viewpoint attention with geometric bias, Geometric Diversity Index, viewpoint fusion orchestrator; 5 new ruvector modules (2,200+ lines)
  • TDM Hardware Protocol — ESP32 sensing coordinator: sync beacons, slot scheduling, clock drift compensation (±10ppm), 20 Hz aggregate rate
  • Channel-Hopping Firmware — ESP32 firmware extended with hop table, timer-driven channel switching, NDP injection stub; NVS config for all TDM parameters; fully backward-compatible
  • DDD Domain Model — 6 bounded contexts, ubiquitous language, aggregate roots, domain events, full event bus specification
  • ruvector-crv 6-stage CRV signal-line integration (ADR-033) — Maps Coordinate Remote Viewing methodology to WiFi CSI: gestalt classification, sensory encoding, GNN topology, SNN coherence gating, differentiable search, MinCut partitioning; cross-session convergence for multi-room identity continuity
  • ADR-032 multistatic mesh security hardening — HMAC-SHA256 beacon auth, SipHash-2-4 frame integrity, NDP rate limiter, coherence gate timeout, bounded buffers, NVS credential zeroing, atomic firmware state
  • ADR-032a QUIC transport layer — midstreamer-quic TLS 1.3 AEAD for aggregator nodes, dual-mode security (ManualCrypto/QuicTransport), QUIC stream mapping, connection migration, congestion control
  • ADR-033 CRV signal-line sensing integration — Architecture decision record for the 6-stage CRV pipeline mapping to ruvector components
  • Temporal gesture matching — midstreamer-temporal-compare DTW/LCS/edit-distance gesture classification with quantized feature comparison
  • Attractor drift analysis — midstreamer-attractor Takens' theorem phase-space embedding with Lyapunov exponent regime detection (Stable/Periodic/Chaotic)
  • v0.3.0 published — All 15 workspace crates published to crates.io with updated dependencies
  • 28,000+ lines of new Rust code across 26 modules with 400+ tests
  • Security hardened — Bounded buffers, NaN guards, no panics in public APIs, input validation at all boundaries

v3.0.0 — 2026-03-01

Major release: AETHER contrastive embedding model, AI signal processing backbone, cross-platform adapters, Docker Hub images, and comprehensive README overhaul.

  • Project AETHER (ADR-024) — Self-supervised contrastive learning for WiFi CSI fingerprinting, similarity search, and anomaly detection; 55 KB model fits on ESP32
  • AI Backbone (wifi-densepose-ruvector) — 7 RuVector integration points replacing hand-tuned thresholds with attention, graph algorithms, and smart compression; published to crates.io
  • Cross-platform RSSI adapters — macOS CoreWLAN and Linux iw Rust adapters with #[cfg(target_os)] gating (ADR-025)
  • Docker images published — ruvnet/wifi-densepose:latest (132 MB Rust) and :python (569 MB)
  • Project MERIDIAN (ADR-027) — Cross-environment domain generalization: gradient reversal, geometry-conditioned FiLM, virtual domain augmentation, contrastive test-time training; zero-shot room transfer
  • 10-phase DensePose training pipeline (ADR-023/027) — Graph transformer, 6-term composite loss, SONA adaptation, RVF packaging, hardware normalization, domain-adversarial training
  • Vital sign detection (ADR-021) — FFT-based breathing (6-30 BPM) and heartbeat (40-120 BPM), 11,665 fps
  • WiFi scan domain layer (ADR-022/025) — 8-stage signal intelligence pipeline for Windows, macOS, and Linux
  • 700+ Rust tests — All passing, zero mocks

v2.0.0 — 2026-02-28

Complete Rust sensing server, SOTA signal processing, WiFi-Mat disaster response, ESP32 hardware, RuVector integration, guided installer, and security hardening.

  • Rust sensing server — Axum REST API + WebSocket, 810x speedup over Python, 54K fps pipeline
  • RuVector integration — 11 vendored crates for HNSW, attention, GNN, temporal compression, min-cut, solver
  • 6 SOTA signal algorithms (ADR-014) — SpotFi, Hampel, Fresnel, spectrogram, subcarrier selection, BVP
  • WiFi-Mat disaster response — START triage, 3D localization, priority alerts — 139 tests
  • ESP32 CSI hardware — Binary frame parsing, $54 starter kit, 20 Hz streaming
  • Guided installer — 7-step hardware detection, 8 install profiles
  • Three.js visualization — 3D body model, 17 joints, real-time WebSocket
  • Security hardening — 10 vulnerabilities fixed
</details>

📄 License

MIT License — see LICENSE for details.

📞 Support

GitHub Issues | Discussions | PyPI


WiFi DensePose — Privacy-preserving human pose estimation through WiFi signals.