docs/adr/ADR-031-ruview-sensing-first-rf-mode.md
| Field | Value |
|---|---|
| Status | Proposed |
| Date | 2026-03-02 |
| Deciders | ruv |
| Codename | RuView -- RuVector Viewpoint-Integrated Enhancement |
| Relates to | ADR-012 (ESP32 Mesh), ADR-014 (SOTA Signal), ADR-016 (RuVector Integration), ADR-017 (RuVector Signal+MAT), ADR-021 (Vital Signs), ADR-024 (AETHER Embeddings), ADR-027 (MERIDIAN Cross-Environment) |
Current WiFi DensePose operates with a single transmitter-receiver pair (or a single receiving node), a topology with fundamental observability limitations.
The ESP32 mesh (ADR-012) partially addresses this via feature-level fusion across 3-6 nodes, but feature-level fusion cannot learn optimal fusion weights -- it uses hand-crafted aggregation (max, mean, coherent sum).
RuView is NOT a new WiFi standard. It is a sensing-first protocol that rides on existing silicon, bands, and regulations. The key insight: instead of upgrading the RF hardware, upgrade the observability by coordinating multiple commodity receivers.
| Component | ADR | Current State |
|---|---|---|
| ESP32 mesh with feature-level fusion | ADR-012 | Implemented (firmware + aggregator) |
| SOTA signal processing (Hampel, Fresnel, BVP, spectrogram) | ADR-014 | Implemented |
| RuVector training pipeline (5 crates) | ADR-016 | Complete |
| RuVector signal + MAT integration (7 points) | ADR-017 | Accepted |
| Vital sign detection pipeline | ADR-021 | Partially implemented |
| AETHER contrastive embeddings | ADR-024 | Proposed |
| MERIDIAN cross-environment generalization | ADR-027 | Proposed |
RuView fills the gap: cross-viewpoint embedding fusion using learned attention weights.
Introduce RuView as a cross-viewpoint embedding fusion layer that operates on top of AETHER per-viewpoint embeddings. RuView adds a new bounded context (ViewpointFusion) and extends three existing crates.
```
+-----------------------------------------------------------------+
|                   RuView Multistatic Pipeline                   |
+-----------------------------------------------------------------+
|                                                                 |
|  +----------+  +----------+  +----------+  +----------+         |
|  |  Node 1  |  |  Node 2  |  |  Node 3  |  |  Node N  |         |
|  | ESP32-S3 |  | ESP32-S3 |  | ESP32-S3 |  | ESP32-S3 |         |
|  |          |  |          |  |          |  |          |         |
|  |  CSI Rx  |  |  CSI Rx  |  |  CSI Rx  |  |  CSI Rx  |         |
|  +----+-----+  +----+-----+  +----+-----+  +----+-----+         |
|       |             |             |             |               |
|       v             v             v             v               |
|  +--------------------------------------------------------+     |
|  |           Per-Viewpoint Signal Processing              |     |
|  |  Phase sanitize -> Hampel -> BVP -> Subcarrier select  |     |
|  |          (ADR-014, unchanged per viewpoint)            |     |
|  +----------------------------+---------------------------+     |
|                               |                                 |
|                               v                                 |
|  +--------------------------------------------------------+     |
|  |            Per-Viewpoint AETHER Embedding              |     |
|  |  CsiToPoseTransformer -> 128-d contrastive embedding   |     |
|  |             (ADR-024, one per viewpoint)               |     |
|  +----------------------------+---------------------------+     |
|                               |                                 |
|                  [emb_1, emb_2, ..., emb_N]                     |
|                               |                                 |
|                               v                                 |
|  +--------------------------------------------------------+     |
|  |          * RuView Cross-Viewpoint Fusion *             |     |
|  |                                                        |     |
|  |   Q = W_q * X,  K = W_k * X,  V = W_v * X              |     |
|  |   A = softmax((QK^T + G_bias) / sqrt(d))               |     |
|  |   fused = A * V                                        |     |
|  |                                                        |     |
|  |   G_bias: geometric bias from viewpoint pair geometry  |     |
|  |   (ruvector-attention: ScaledDotProductAttention)      |     |
|  +----------------------------+---------------------------+     |
|                               |                                 |
|                        fused_embedding                          |
|                               |                                 |
|                               v                                 |
|  +--------------------------------------------------------+     |
|  |              DensePose Regression Head                 |     |
|  |          Keypoint head:  [B,17,H,W]                    |     |
|  |          Part/UV head:   [B,25,H,W] + [B,48,H,W]       |     |
|  +--------------------------------------------------------+     |
+-----------------------------------------------------------------+
```
The geometric bias G_bias encodes the spatial relationship between viewpoint pairs:
G_bias[i,j] = w_angle * cos(theta_ij) + w_dist * exp(-d_ij / d_ref)
where:
- `theta_ij` = angle between viewpoint i and viewpoint j (from room center)
- `d_ij` = baseline distance between node i and node j
- `w_angle`, `w_dist` = learnable weights
- `d_ref` = reference distance (room diagonal / 2)

This allows the attention mechanism to learn that widely-separated, orthogonal viewpoints are more complementary than clustered ones.
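The G_bias computation and the biased attention step can be sketched together in plain Rust. This is an illustrative toy, not the ruvector-attention implementation: the learned projections W_q/W_k/W_v are elided (treated as identity), and the `Viewpoint` struct and weight values below are assumptions for the example. The N output rows would be mean-pooled into the single fused_embedding shown in the diagram.

```rust
/// Hypothetical per-node geometry for this sketch (not the ADR's ArrayGeometry).
struct Viewpoint {
    azimuth: f32,         // theta_i, radians from room center
    position: (f32, f32), // node position in metres
}

/// G_bias[i][j] = w_angle * cos(theta_ij) + w_dist * exp(-d_ij / d_ref)
fn geometric_bias(vps: &[Viewpoint], w_angle: f32, w_dist: f32, d_ref: f32) -> Vec<Vec<f32>> {
    let n = vps.len();
    let mut g = vec![vec![0.0f32; n]; n];
    for i in 0..n {
        for j in 0..n {
            let theta_ij = vps[i].azimuth - vps[j].azimuth;
            let (dx, dy) = (
                vps[i].position.0 - vps[j].position.0,
                vps[i].position.1 - vps[j].position.1,
            );
            let d_ij = (dx * dx + dy * dy).sqrt();
            g[i][j] = w_angle * theta_ij.cos() + w_dist * (-d_ij / d_ref).exp();
        }
    }
    g
}

/// A[i][j] = softmax_j((x_i . x_j + G_bias[i][j]) / sqrt(d)); output rows = A * X.
fn fuse(x: &[Vec<f32>], g_bias: &[Vec<f32>]) -> Vec<Vec<f32>> {
    let n = x.len();
    let d = x[0].len();
    let scale = (d as f32).sqrt();
    let mut fused = vec![vec![0.0f32; d]; n];
    for i in 0..n {
        // Raw attention scores for row i, then a numerically stable softmax.
        let scores: Vec<f32> = (0..n)
            .map(|j| {
                let dot: f32 = x[i].iter().zip(&x[j]).map(|(a, b)| a * b).sum();
                (dot + g_bias[i][j]) / scale
            })
            .collect();
        let max = scores.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
        let exps: Vec<f32> = scores.iter().map(|s| (s - max).exp()).collect();
        let sum: f32 = exps.iter().sum();
        // Weighted sum of value vectors (values = inputs here).
        for j in 0..n {
            let a_ij = exps[j] / sum;
            for k in 0..d {
                fused[i][k] += a_ij * x[j][k];
            }
        }
    }
    fused
}
```

Note how the bias rewards orthogonal bearings (cos term) while the distance term decays with baseline separation, so the softmax can favour complementary viewpoints before any training signal arrives.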
```rust
/// Only update environment model when phase coherence exceeds threshold.
pub fn coherence_gate(
    phase_diffs: &[f32], // delta-phi over T recent frames
    threshold: f32,      // typically 0.7
) -> bool {
    // Complex mean of unit phasors
    let (sum_cos, sum_sin) = phase_diffs.iter()
        .fold((0.0f32, 0.0f32), |(c, s), &dp| (c + dp.cos(), s + dp.sin()));
    let n = phase_diffs.len() as f32;
    let coherence = ((sum_cos / n).powi(2) + (sum_sin / n).powi(2)).sqrt();
    coherence > threshold
}
```
| Path | Hardware | Bandwidth | Per-Viewpoint Rate | Target Tier |
|---|---|---|---|---|
| ESP32 Multistatic | 6x ESP32-S3 ($84) | 20 MHz (HT20) | 20 Hz | Silver |
| Cognitum + RF | Cognitum v1 + LimeSDR | 20-160 MHz | 20-100 Hz | Gold |
ESP32 path: commodity, achievable today, targets Silver tier (tracking + pose quality). Cognitum path: higher fidelity, targets Gold tier (tracking + pose + vitals).
Aggregate Root: MultistaticArray
```rust
pub struct MultistaticArray {
    /// Unique array deployment ID
    id: ArrayId,
    /// Viewpoint geometry (node positions, orientations)
    geometry: ArrayGeometry,
    /// TDM schedule (slot assignments, cycle period)
    schedule: TdmSchedule,
    /// Active viewpoint embeddings (latest per node)
    viewpoints: Vec<ViewpointEmbedding>,
    /// Fused output embedding
    fused: Option<FusedEmbedding>,
    /// Coherence gate state
    coherence_state: CoherenceState,
}
```
Entity: ViewpointEmbedding
```rust
pub struct ViewpointEmbedding {
    /// Source node ID
    node_id: NodeId,
    /// AETHER embedding vector (128-d)
    embedding: Vec<f32>,
    /// Geometric metadata
    azimuth: f32,   // radians from array center
    elevation: f32, // radians
    baseline: f32,  // meters from centroid
    /// Capture timestamp
    timestamp: Instant,
    /// Signal quality
    snr_db: f32,
}
```
Value Object: GeometricDiversityIndex
```rust
pub struct GeometricDiversityIndex {
    /// GDI = (1/N) sum min_{j!=i} |theta_i - theta_j|
    value: f32,
    /// Effective independent viewpoints (after correlation discount)
    n_effective: f32,
    /// Worst viewpoint pair (most redundant)
    worst_pair: (NodeId, NodeId),
}
```
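The GDI formula in the doc comment can be computed directly from node azimuths. A minimal sketch under two assumptions not spelled out in the ADR: angle differences are wrapped to circular distance (so 350 deg and 10 deg count as close), and the `n_effective` correlation discount is ignored.

```rust
use std::f32::consts::PI;

/// GDI = (1/N) * sum_i min_{j != i} |theta_i - theta_j|, with circular wrapping
/// (an assumption for this sketch). Higher values mean better-spread viewpoints.
fn geometric_diversity_index(azimuths: &[f32]) -> f32 {
    let n = azimuths.len();
    if n < 2 {
        return 0.0;
    }
    let mut total = 0.0f32;
    for i in 0..n {
        let mut nearest = f32::INFINITY;
        for j in 0..n {
            if i == j {
                continue;
            }
            // Wrap the absolute angular difference into [0, pi].
            let raw = (azimuths[i] - azimuths[j]).abs() % (2.0 * PI);
            nearest = nearest.min(raw.min(2.0 * PI - raw));
        }
        total += nearest;
    }
    total / n as f32
}
```

Four nodes spread evenly around the room score GDI = pi/2, while two clustered nodes score near zero, matching the intent of the `worst_pair` field (flagging the most redundant pair).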
Domain Events:
```rust
pub enum ViewpointFusionEvent {
    ViewpointCaptured { node_id: NodeId, timestamp: Instant, snr_db: f32 },
    TdmCycleCompleted { cycle_id: u64, viewpoints_received: usize },
    FusionCompleted { fused_embedding: Vec<f32>, gdi: f32 },
    CoherenceGateTriggered { coherence: f32, accepted: bool },
    GeometryUpdated { new_gdi: f32, n_effective: f32 },
}
```
Signal (`wifi-densepose-signal`):

- `CrossViewpointSubcarrierSelection`

Hardware (`wifi-densepose-hardware`):

- `TdmSensingProtocol`
- `TdmSlotCompleted { node_id, slot_index, capture_quality }`

Training (`wifi-densepose-train`):

- `ruview_metrics.rs`
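This section does not specify the TdmSensingProtocol slot policy. A minimal round-robin sketch, where the `TdmSchedule` fields and `slot_ms` value are assumptions for illustration (the real schedule lives in the MultistaticArray aggregate), shows the intended shape of slot-to-node assignment:

```rust
/// Illustrative round-robin TDM schedule: each node owns one fixed-length
/// sensing slot per cycle, so receivers never transmit-probe concurrently.
struct TdmSchedule {
    n_nodes: usize,
    slot_ms: u64, // duration of one sensing slot (assumed value)
}

impl TdmSchedule {
    /// Which node owns the sensing slot at time t (ms since cycle start)?
    fn active_node(&self, t_ms: u64) -> usize {
        ((t_ms / self.slot_ms) as usize) % self.n_nodes
    }

    /// Full cycle period: every node gets exactly one slot per cycle.
    fn cycle_period_ms(&self) -> u64 {
        self.slot_ms * self.n_nodes as u64
    }
}
```

A `TdmCycleCompleted` event would then fire every `cycle_period_ms()`, carrying the count of viewpoints actually received in that cycle.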
| File | Purpose | RuVector Crate |
|---|---|---|
| `crates/wifi-densepose-ruvector/src/viewpoint/mod.rs` | Module root, re-exports | -- |
| `crates/wifi-densepose-ruvector/src/viewpoint/attention.rs` | Cross-viewpoint scaled dot-product attention with geometric bias | ruvector-attention |
| `crates/wifi-densepose-ruvector/src/viewpoint/geometry.rs` | GeometricDiversityIndex, Cramer-Rao bound estimation | ruvector-solver |
| `crates/wifi-densepose-ruvector/src/viewpoint/coherence.rs` | Coherence gating for environment stability | -- (pure math) |
| `crates/wifi-densepose-ruvector/src/viewpoint/fusion.rs` | MultistaticArray aggregate, orchestrates fusion pipeline | ruvector-attention + ruvector-attn-mincut |
| File | Purpose | RuVector Crate |
|---|---|---|
| `crates/wifi-densepose-signal/src/cross_viewpoint.rs` | Cross-viewpoint subcarrier consensus via min-cut | ruvector-mincut |
| File | Purpose | RuVector Crate |
|---|---|---|
| `crates/wifi-densepose-hardware/src/esp32/tdm.rs` | TDM sensing protocol coordinator | -- (protocol logic) |
| File | Purpose | RuVector Crate |
|---|---|---|
| `crates/wifi-densepose-train/src/ruview_metrics.rs` | Three-metric acceptance test (PCK/OKS, MOTA, vital sign accuracy) | ruvector-mincut (person matching) |
| Criterion | Threshold |
|---|---|
| [email protected] (all 17 keypoints) | >= 0.70 |
| [email protected] (torso: shoulders + hips) | >= 0.80 |
| Mean OKS | >= 0.50 |
| Torso jitter RMS (10s window) | < 3 cm |
| Per-keypoint max error (95th percentile) | < 15 cm |
| Criterion | Threshold |
|---|---|
| Subjects | 2 |
| Capture rate | 20 Hz |
| Track duration | 10 minutes |
| Identity swaps (MOTA ID-switch) | 0 |
| Track fragmentation ratio | < 0.05 |
| False track creation | 0/min |
| Criterion | Threshold |
|---|---|
| Breathing detection (6-30 BPM) | +/- 2 BPM |
| Breathing band SNR (0.1-0.5 Hz) | >= 6 dB |
| Heartbeat detection (40-120 BPM) | +/- 5 BPM (aspirational) |
| Heartbeat band SNR (0.8-2.0 Hz) | >= 3 dB (aspirational) |
| Micro-motion resolution | 1 mm at 3 m |
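The band-SNR criteria above can be checked with a direct per-bin DFT over the displacement signal. This is a hypothetical helper, not pipeline code: the out-of-band range (capped here at 2 Hz), the bin spacing, and the exclusion of DC are assumptions for the sketch.

```rust
use std::f32::consts::PI;

/// Ratio (dB) of power inside [f_lo, f_hi] to power in the rest of (0, f_max],
/// using one direct DFT evaluation per bin (Goertzel-style). For the breathing
/// criterion: f_lo = 0.1, f_hi = 0.5, with f_max = 2.0 Hz assumed here.
fn band_snr_db(signal: &[f32], fs: f32, f_lo: f32, f_hi: f32, f_max: f32) -> f32 {
    let n = signal.len() as f32;
    // Power of one DFT bin at frequency f (normalised by n^2).
    let bin_power = |f: f32| -> f32 {
        let (mut re, mut im) = (0.0f32, 0.0f32);
        for (k, &x) in signal.iter().enumerate() {
            let phase = 2.0 * PI * f * k as f32 / fs;
            re += x * phase.cos();
            im -= x * phase.sin();
        }
        (re * re + im * im) / (n * n)
    };
    let step = fs / n; // natural DFT bin spacing for this window length
    let (mut in_band, mut out_band) = (0.0f32, 1e-12f32); // avoid div-by-zero
    let mut f = step; // start above DC: slow trend is neither signal nor noise here
    while f <= f_max {
        let p = bin_power(f);
        if f >= f_lo && f <= f_hi {
            in_band += p;
        } else {
            out_band += p;
        }
        f += step;
    }
    10.0 * (in_band / out_band).log10()
}
```

A clean 0.25 Hz (15 BPM) breathing component sampled at the 20 Hz capture rate clears the >= 6 dB threshold by a wide margin; real CSI-derived signals sit much closer to it.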
| Tier | Requirements | Deployment Gate |
|---|---|---|
| Bronze | Metric 2 | Prototype demo |
| Silver | Metrics 1 + 2 | Production candidate |
| Gold | All three | Full deployment |
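The tier table above reduces to a small pass/fail mapping. A sketch, with metric numbering following the acceptance tables (1 = pose quality, 2 = tracking, 3 = vital signs); the `Tier` enum and function name are assumptions, not types from the codebase:

```rust
/// Acceptance tier reached for a given set of passed metrics.
#[derive(Debug, PartialEq)]
enum Tier {
    None,
    Bronze, // metric 2 only
    Silver, // metrics 1 + 2
    Gold,   // all three
}

/// Tracking (metric 2) gates every tier; pose adds Silver; vitals add Gold.
fn acceptance_tier(pose_ok: bool, tracking_ok: bool, vitals_ok: bool) -> Tier {
    match (pose_ok, tracking_ok, vitals_ok) {
        (true, true, true) => Tier::Gold,
        (true, true, false) => Tier::Silver,
        (_, true, _) => Tier::Bronze,
        _ => Tier::None,
    }
}
```

Note that passing pose and vitals without tracking yields no tier at all: metric 2 is the deployment gate common to every row of the table.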
| ADR | Interaction |
|---|---|
| ADR-012 (ESP32 Mesh) | RuView extends the aggregator from feature-level to embedding-level fusion; TDM protocol replaces simple UDP collection |
| ADR-014 (SOTA Signal) | Per-viewpoint signal processing is unchanged; cross-viewpoint subcarrier consensus is new |
| ADR-016/017 (RuVector) | All 5 ruvector crates get new cross-viewpoint operations (see Section 4) |
| ADR-021 (Vital Signs) | Multi-viewpoint SNR improvement directly benefits vital sign extraction (Gold tier target) |
| ADR-024 (AETHER) | Per-viewpoint AETHER embeddings are the input to RuView fusion; AETHER is required |
| ADR-027 (MERIDIAN) | Cross-environment (MERIDIAN) and cross-viewpoint (RuView) are orthogonal; MERIDIAN handles room transfer, RuView handles within-room geometry |