rust-port/wifi-densepose-rs/crates/wifi-densepose-nn/README.md


# wifi-densepose-nn

Multi-backend neural network inference for WiFi-based DensePose estimation.

## Overview

`wifi-densepose-nn` provides the inference engine that maps processed WiFi CSI features to DensePose body-surface predictions. It supports three backends -- ONNX Runtime (the default), PyTorch via tch-rs, and Candle -- so models can run on the CPU, on CUDA GPUs, or through TensorRT, depending on the deployment target.

The crate implements two key neural components:

- **DensePose Head** -- predicts segmentation masks for 24 body parts and per-part UV coordinate regression.
- **Modality Translator** -- translates CSI feature embeddings into visual feature space, bridging the domain gap between WiFi signals and image-based pose estimation.
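
The two components above form a pipeline: CSI embeddings are first translated into visual feature space, then decoded into part masks. The sketch below illustrates that data flow with placeholder types and shapes; the struct fields, function names, and the `[N, C, H, W]` layout are assumptions for illustration, not the crate's actual API.

```rust
// Illustrative two-stage pipeline: ModalityTranslator-like step, then a
// DensePoseHead-like step producing 24 body-part mask channels.

/// A tensor stand-in: flat data plus an (N, C, H, W) shape.
struct Tensor {
    data: Vec<f32>,
    shape: [usize; 4],
}

impl Tensor {
    fn zeros(shape: [usize; 4]) -> Self {
        let len: usize = shape.iter().product();
        Tensor { data: vec![0.0; len], shape }
    }
}

/// Stage 1: CSI embedding -> visual feature space (identity placeholder).
fn translate_modality(csi: &Tensor) -> Tensor {
    Tensor { data: csi.data.clone(), shape: csi.shape }
}

/// Stage 2: visual features -> `num_parts` body-part mask channels.
fn densepose_head(visual: &Tensor, num_parts: usize) -> Tensor {
    let [n, _c, h, w] = visual.shape;
    Tensor::zeros([n, num_parts, h, w])
}

fn main() {
    let csi = Tensor::zeros([1, 256, 64, 64]);
    let visual = translate_modality(&csi);
    let masks = densepose_head(&visual, 24);
    // Spatial size is preserved; the channel axis becomes the 24 parts.
    assert_eq!(masks.shape, [1, 24, 64, 64]);
}
```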

## Features

- **ONNX Runtime backend** (default) -- load and run `.onnx` models with CPU or GPU execution providers.
- **PyTorch backend** (`tch-backend`) -- native PyTorch inference via libtorch FFI.
- **Candle backend** (`candle-backend`) -- pure-Rust inference with `candle-core` and `candle-nn`.
- **CUDA acceleration** (`cuda`) -- GPU execution for supported backends.
- **TensorRT optimization** (`tensorrt`) -- INT8/FP16-optimized inference via ONNX Runtime.
- **Batched inference** -- process multiple CSI frames in a single forward pass.
- **Model caching** -- memory-mapped model weights via `memmap2`.
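
Batched inference works by packing several per-frame feature maps into one contiguous buffer before the forward pass. The sketch below shows that packing under an assumed `[N, C, H, W]` layout; `pack_batch` and the fixed dimensions are illustrative, not part of the crate's API.

```rust
// Pack multiple CSI frames into a single batch buffer so the backend
// can run one forward pass instead of N separate ones.

const C: usize = 256; // feature channels (illustrative)
const H: usize = 64;  // feature-map height
const W: usize = 64;  // feature-map width

/// Concatenate flat [C*H*W] frames into one contiguous [N, C, H, W] buffer.
fn pack_batch(frames: &[Vec<f32>]) -> (Vec<f32>, [usize; 4]) {
    let n = frames.len();
    let mut batch = Vec::with_capacity(n * C * H * W);
    for frame in frames {
        assert_eq!(frame.len(), C * H * W, "frame has wrong size");
        batch.extend_from_slice(frame);
    }
    (batch, [n, C, H, W])
}

fn main() {
    let frames = vec![vec![0.0f32; C * H * W]; 4];
    let (batch, shape) = pack_batch(&frames);
    assert_eq!(shape, [4, 256, 64, 64]);
    assert_eq!(batch.len(), 4 * 256 * 64 * 64);
}
```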

## Feature flags

| Flag | Default | Description |
| --- | --- | --- |
| `onnx` | yes | ONNX Runtime backend |
| `tch-backend` | no | PyTorch (tch-rs) backend |
| `candle-backend` | no | Candle pure-Rust backend |
| `cuda` | no | CUDA GPU acceleration |
| `tensorrt` | no | TensorRT via ONNX Runtime |
| `all-backends` | no | Enable onnx + tch + candle together |
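
For example, selecting a non-default backend in `Cargo.toml` might look like the following (the version number is illustrative; check the published crate for the real one):

```toml
[dependencies]
# Swap the default ONNX backend for Candle, with CUDA acceleration.
wifi-densepose-nn = { version = "*", default-features = false, features = ["candle-backend", "cuda"] }
```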

## Quick Start

```rust
use wifi_densepose_nn::{InferenceEngine, DensePoseConfig, OnnxBackend};

// Create an inference engine with the ONNX backend
let config = DensePoseConfig::default();
let backend = OnnxBackend::from_file("model.onnx")?;
let engine = InferenceEngine::new(backend, config)?;

// Run inference on a CSI feature tensor (batch, channels, height, width)
let input: ndarray::Array4<f32> = ndarray::Array4::zeros((1, 256, 64, 64));
let output = engine.infer(&input)?;

println!("Body parts: {}", output.body_parts.shape()[1]); // 24
```
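
A common way to consume the per-part scores is a per-pixel argmax over the 24 body-part channels to get a label map. The sketch below shows that post-processing step on a flat buffer; the `[P, H, W]` layout and `part_labels` helper are assumptions for illustration, not the crate's documented output format.

```rust
// Reduce raw part scores to a per-pixel body-part label map via argmax.

const NUM_BODY_PARTS: usize = 24;

/// `scores` is a flat [P, H, W] buffer; returns a flat [H, W] label map.
fn part_labels(scores: &[f32], h: usize, w: usize) -> Vec<u8> {
    assert_eq!(scores.len(), NUM_BODY_PARTS * h * w);
    let mut labels = vec![0u8; h * w];
    for pix in 0..h * w {
        let (mut best_p, mut best_s) = (0usize, f32::NEG_INFINITY);
        for p in 0..NUM_BODY_PARTS {
            let s = scores[p * h * w + pix];
            if s > best_s {
                best_s = s;
                best_p = p;
            }
        }
        labels[pix] = best_p as u8;
    }
    labels
}

fn main() {
    let (h, w) = (2, 2);
    let mut scores = vec![0.0f32; NUM_BODY_PARTS * h * w];
    scores[5 * h * w + 3] = 1.0; // pixel 3 scores highest on part 5
    let labels = part_labels(&scores, h, w);
    assert_eq!(labels[3], 5);
    assert_eq!(labels[0], 0);
}
```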

## Architecture

```text
wifi-densepose-nn/src/
  lib.rs          -- Re-exports, constants (NUM_BODY_PARTS = 24), prelude
  densepose.rs    -- DensePoseHead, DensePoseConfig, DensePoseOutput
  inference.rs    -- Backend trait, InferenceEngine, InferenceOptions
  onnx.rs         -- OnnxBackend, OnnxSession (feature-gated)
  tensor.rs       -- Tensor, TensorShape utilities
  translator.rs   -- ModalityTranslator (CSI -> visual space)
  error.rs        -- NnError, NnResult
```
Related crates and key dependencies:

| Crate | Role |
| --- | --- |
| `wifi-densepose-core` | Foundation types and the `NeuralInference` trait |
| `wifi-densepose-signal` | Produces the CSI features consumed by inference |
| `wifi-densepose-train` | Trains the models this crate loads |
| `ort` | ONNX Runtime Rust bindings |
| `tch` | PyTorch Rust bindings |
| `candle-core` | Hugging Face pure-Rust ML framework |
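
The multi-backend design hinges on the `Backend` trait in `inference.rs`. Its real signature is not shown in this README, so the following is a hypothetical sketch of how a pluggable backend abstraction can be wired to a generic engine; `EchoBackend`, the flat `&[f32]` interface, and the error type's shape are all assumptions.

```rust
// Sketch of a pluggable backend abstraction behind a generic engine.

/// Minimal stand-in for the crate's error type.
#[derive(Debug)]
struct NnError(String);

trait Backend {
    /// Run one forward pass on a flat input buffer, returning a flat output.
    fn forward(&self, input: &[f32]) -> Result<Vec<f32>, NnError>;
}

/// A trivial backend that echoes its input, useful for wiring tests.
struct EchoBackend;

impl Backend for EchoBackend {
    fn forward(&self, input: &[f32]) -> Result<Vec<f32>, NnError> {
        Ok(input.to_vec())
    }
}

/// Engine generic over the backend, mirroring the multi-backend design:
/// swapping ONNX Runtime for tch or Candle changes only the type parameter.
struct Engine<B: Backend> {
    backend: B,
}

impl<B: Backend> Engine<B> {
    fn infer(&self, input: &[f32]) -> Result<Vec<f32>, NnError> {
        self.backend.forward(input)
    }
}

fn main() {
    let engine = Engine { backend: EchoBackend };
    let out = engine.infer(&[1.0, 2.0]).unwrap();
    assert_eq!(out, vec![1.0, 2.0]);
}
```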

## License

MIT OR Apache-2.0