Motion Detection via Communication Signals

Overview

This project was completed for the Signals and Systems lab course. It explored Passive Coherent Location (PCL) — using ambient communication signals (cellular base station transmissions) as a radar illuminator to detect and characterize human motion, without emitting any dedicated radar signal. The approach leverages Doppler shift analysis and cross-correlation of reference and surveillance signals to estimate target velocity and range.

Doppler frequency–time heatmap for motion detection

Results

  • Successfully recovered Doppler frequency shift vs. time heatmaps from 20 field data files, clearly showing the presence and motion pattern of a human target.
  • Estimated target velocity from the ambiguity-function peak in each time window, using the geometry of the base-station–target–receiver triangle to resolve the projection angle.
  • Extended analysis to micro-Doppler effects as a bonus investigation.

Technical Details

Signal Model:

  • Reference signal: y_ref(t) = α·x(t − τ_r) (direct path from base station)
  • Surveillance signal: y_sur(t) = β·x(t − τ_s)·e^(j2πf_D t) (reflected off the target)
  • Goal: estimate the delay difference Δτ = |τ_s − τ_r| (proportional to the bistatic range difference) and the Doppler frequency shift f_D

Processing Pipeline (implemented in MATLAB):

  1. Load data: Read I/Q baseband samples from 20 data files (0.5 s segments, 25 MHz sample rate).
  2. Digital Down Convert (DDC): Frequency-shifted the 2110–2130 MHz band to baseband to isolate the signal of interest.
  3. Low-Pass Filtering: Butterworth LPF (order 10, cutoff 8.8 MHz) to reject the adjacent 2130–2135 MHz band and allow the sample rate to be reduced.
  4. Ambiguity Function:
    $$\text{Cor}(\tau, f_D) = \sum_{n=0}^{N-1} y_{\text{sur}}[nT_s] \cdot y_{\text{ref}}^*[nT_s - \tau] \cdot e^{-j2\pi f_D n T_s}$$
    • Swept over 6 range bins (0–5 samples) and 41 Doppler bins (−40 to +40 Hz).
    • Peak of the ambiguity function gives the best-fit (Δτ, f_D) estimate.
  5. Velocity estimation: Computed from v = λ·f_D / (2·cos(β/2)), where β was determined from the known geometry (base station at 247 m).
  6. Full heatmap: Stacked results across all 20 files to produce a Doppler frequency vs. time image showing the target’s motion trajectory.
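
The delay–Doppler search in steps 4–5 can be sketched in a few lines. The project code is MATLAB; below is an illustrative pure-Python version with synthetic data, a coarser Doppler grid, and names of our own choosing — not the original implementation. It is deliberately the naïve looped form; the MATLAB version vectorized the inner sum for speed.

```python
import cmath
import math
import random

def ambiguity_peak(y_sur, y_ref, fs, delays, dopplers):
    """Brute-force Cor(tau, f_D) over the grid; return the peak (tau, f_D)."""
    Ts = 1.0 / fs
    best = (delays[0], dopplers[0], -1.0)
    for tau in delays:                    # delay in samples
        for fd in dopplers:               # Doppler in Hz
            acc = 0j
            for n in range(tau, len(y_sur)):
                acc += (y_sur[n] * y_ref[n - tau].conjugate()
                        * cmath.exp(-2j * math.pi * fd * n * Ts))
            if abs(acc) > best[2]:
                best = (tau, fd, abs(acc))
    return best[0], best[1]

# Synthetic check: reference delayed by 2 samples plus a 10 Hz Doppler shift.
random.seed(0)
fs, N = 1000.0, 512
x = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
y_ref = x
y_sur = [0j] * N
for n in range(2, N):
    y_sur[n] = x[n - 2] * cmath.exp(2j * math.pi * 10 * n / fs)

tau_hat, fd_hat = ambiguity_peak(y_sur, y_ref, fs, list(range(6)),
                                 list(range(-40, 41, 5)))
# Peak recovered at (tau, f_D) = (2, 10)
```

Because the true (delay, Doppler) cell accumulates |x|² coherently while every other cell sums with rotating phase, the peak stands well clear of the sidelobes even with this short 0.5 s-style segment.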

Challenges

  • Cross-correlation computation: Naïve triple-loop implementation in MATLAB was extremely slow; required vectorization and careful indexing of delayed reference samples.
  • Frequency ambiguity: Multiple signal components in the raw spectrum made DDC center frequency selection critical; verified by inspecting spectrograms before and after processing.
  • Phase coherence: Small timing offsets between reference and surveillance channels had to be absorbed by the ambiguity function grid resolution.

Reflection and Insights

This project provided a concrete application of core signals-and-systems concepts — Fourier analysis, filtering, convolution/correlation, and Doppler physics — in a real-world sensing context. The ambiguity function framework elegantly unifies range and velocity estimation into a single 2D search problem, and the MATLAB implementation made the connection between the mathematical formulation and actual computational steps very direct.

The micro-Doppler extension raised interesting questions about how fine-grained body motion (arm swing, breathing) modulates the main Doppler return — relevant to applications like medical monitoring and security sensing.

Team and Role

  • Team: Two-person team.
  • My Role: Led signal processing pipeline implementation (DDC, filtering, ambiguity function); collaborated on heatmap generation and analysis.

ZeptoWatch — STM32-based Smartwatch with Python Script Runtime

Overview

ZeptoWatch is a from-scratch smartwatch project, developed as the DIY capstone for an analog circuits lab course. The core concept was to build a wearable device powered by an ordinary microcontroller (STM32F4) that could install and run user-written Python apps — bridging the gap between fixed-firmware fitness bands and full smartwatch operating systems. The Python interpreter (PikaScript) was embedded directly into the firmware, allowing users to write scripts, load them via USB, and execute them as apps.

ZeptoWatch hardware prototype

Key Features

  • Embedded Python interpreter: PikaScript runs user .py scripts stored on the device’s FAT file system.
  • USB mass storage: Plug into a computer — the watch appears as a USB drive; drag in Python scripts to install apps. Verified on Windows and Ubuntu.
  • Touch display: CST816 capacitive touch chip (I²C) + GC9A01 round LCD (SPI + DMA), with smooth LVGL animations.
  • Rich peripherals: EEPROM, IMU (MPU6050), microphone (I²S + DMA), vibration motor, Bluetooth module, battery voltage ADC.
  • FreeRTOS multi-tasking: Separate tasks for UI, sensor reading, and script execution with Mutex protection for LVGL thread safety.
  • FAT file system: FatFs on 64 KB EEPROM; supports file read/write from both device firmware and USB host.
  • Custom PCB: Four design iterations using KiCad / LCEDA; 0402 SMD components, two-layer stacked board for screen placement.

Technical Details

Hardware (Ver 3.0 — final):

  • STM32F4 as main controller; CST816 (touch), GC9A01 (display), MPU6050 (IMU), M24512 (EEPROM).
  • Two-board stacked design: top board holds the LCD, bottom board contains all active components connected via magnetic pogo pins.
  • Type-C connector for both charging (lithium battery management IC) and Full-speed USB 2.0 data.

Firmware Architecture:

  • Board-level drivers: Custom I²C bit-bang drivers for touch, EEPROM, IMU; hardware SPI + DMA for display; hardware I²S + DMA for microphone; ADC + DMA for battery voltage.
  • FreeRTOS: Multi-task architecture; LVGL resources protected by Mutex to prevent race conditions between tasks.
  • LVGL: Embedded GUI framework for all system UI (clock face, date/time settings, app launcher, dropdown menus). Extended pika_lvgl bindings to expose LVGL APIs to Python scripts.
  • PikaScript: Lightweight Python 3 subset interpreter. Custom extension packages written to expose hardware APIs (IMU, motor, display, timer) to user scripts.
  • App examples: Calculator, spectrum analyzer (FFT via ARM DSP library), gravity simulation (accelerometer-driven physics), electronic Muyu (wooden fish tapping), and more.
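
As an illustration of the app layer, the spectrum-analyzer app reduces to "fill a sample buffer, take an FFT, draw the magnitudes". The sketch below mirrors that logic in desktop Python — a naive DFT stands in for the ARM DSP FFT, and the buffer and names are illustrative, not the actual PikaScript API:

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive DFT magnitude spectrum (stand-in for the ARM DSP FFT on-device)."""
    N = len(samples)
    return [abs(sum(samples[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N)))
            for k in range(N // 2)]

# 64 audio samples containing a tone at bin 5 (e.g. from the I2S mic buffer)
N = 64
buffer = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]
mags = dft_magnitudes(buffer)
peak_bin = max(range(len(mags)), key=mags.__getitem__)
# peak_bin == 5
```

On the watch, `mags` would then be drawn as bars through the pika_lvgl bindings rather than printed.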

Challenges

  1. USB mass storage not recognized: CubeMX-generated USB code was broken until a library update resolved it — a week-long debugging ordeal.
  2. FatFs on small EEPROM: FatFs refuses to format volumes below a minimum sector count; resolved by patching the source. Windows then failed to recognize the volume for an extended period before the issue resolved itself (suspected cause: FAT16/FAT32 auto-detection logic).
  3. LVGL thread safety: The most persistent crash cause was concurrent LVGL access from multiple FreeRTOS tasks. Adding a Mutex lock resolved all unexplained freezes.
  4. PikaScript instability: As an early-stage open-source project, PikaScript lacked a crash handler; a __platform_panic override was contributed upstream to enable graceful recovery from script crashes without rebooting the whole system.
  5. PCB soldering issues: 0402 components and near-BGA spring pins required careful soldering; cold joints caused intermittent issues throughout development.
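
The LVGL fix in challenge 3 is the classic "one mutex around the shared resource" pattern. A minimal sketch of the same pattern in Python threading — all names illustrative, standing in for a FreeRTOS mutex wrapped around every lv_* call:

```python
import threading

# One shared mutex guards all access to the (non-thread-safe) GUI state,
# mirroring the FreeRTOS Mutex that serializes LVGL calls in the firmware.
gui_lock = threading.Lock()
gui_state = {"label": ""}

def ui_task(text):
    with gui_lock:                    # xSemaphoreTake(...) in the firmware
        gui_state["label"] = text     # e.g. lv_label_set_text(...)

def sensor_task(reading):
    with gui_lock:
        gui_state["label"] = "IMU: {}".format(reading)

ui_task("12:00")
threads = [threading.Thread(target=sensor_task, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Exactly one writer wins each update; no torn state, and the lock is released
```

The key property is that every task, without exception, takes the same lock before touching GUI state — a single unguarded call path is enough to reintroduce the random freezes.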

Reflection and Insights

ZeptoWatch was the most complex embedded project undertaken at the undergraduate level — spanning schematic design, PCB layout, hardware bring-up, driver development, RTOS integration, GUI framework, file system, USB stack, and a scripting-language runtime. The most impactful lesson was that system-level correctness requires holistic thinking: a thread-safety bug in LVGL manifested as random freezes everywhere else, and no amount of isolated debugging found it until the root cause was understood. The project also instilled the habit of reading official documentation and library changelogs carefully — the USB bug had already been silently fixed in a CubeMX update, and keeping the libraries current would have saved a week of debugging.

Team and Role

  • Team: Four-person team; responsibilities split between hardware design, firmware, and testing.
  • My Role: Contributed to firmware architecture design, peripheral driver development (serial communication, GY-25 interfacing), and Python app development; co-led system integration and debugging.

Speech Synthesis and Perception with Envelope Cue

Overview

This project was completed for the Signals and Systems lab course. It implemented a Tone Vocoder — a system that decomposes speech into frequency sub-bands, extracts the amplitude envelope of each band, re-modulates the envelopes onto sinusoidal carriers, and resynthesizes the signal. This mimics the processing strategy used in cochlear implants, which must transmit speech with a very limited number of independent channels.

ToneVocoder console and spectrum output

Results

  • Increasing the number of frequency bands N consistently improved perceptual quality of the resynthesized speech.
  • Increasing the low-pass filter cutoff frequency improved envelope fidelity and naturalness.
  • Bionic cochlear segmentation (logarithmically spaced bands) outperformed equal-interval segmentation at low N (e.g., N=4), because the low-frequency range carries disproportionately more speech energy.
  • At large N, equal-interval segmentation achieved higher upper-bound quality, but cochlear segmentation became unstable at N≈20 (narrow passbands caused filter instability).
  • Added Speech-Shaped Noise (SSN) at varying SNRs and confirmed that envelope-based synthesis degrades gracefully but becomes unintelligible at low SNR.
  • Developed a full MATLAB App Designer GUI for real-time parameter exploration.

Technical Details

Tone Vocoder Pipeline:

  1. Band-pass filtering: Split the 200–7000 Hz speech spectrum into N sub-bands using Butterworth BPFs.
    • Mode 0: Equal-frequency spacing.
    • Mode 1: Cochlear-length mapping f = 165.4 × (10^(0.06d) − 1) (the Greenwood function, with d the position along the basilar membrane), producing logarithmically spaced bands that match the basilar membrane resonance distribution.
  2. Envelope extraction: Full-wave rectification (abs) followed by a low-pass Butterworth filter (cutoff Cf Hz) to extract the amplitude envelope of each sub-band.
  3. Carrier modulation: Each envelope multiplied by a sinusoidal carrier at the sub-band midpoint frequency.
  4. Synthesis & normalization: Sum all modulated sub-bands; normalize energy to match the input signal level.
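
Step 1's cochlear-mode band edges come from inverting the length-to-frequency map: place N+1 points uniformly along the basilar-membrane coordinate d, then map each back to frequency. A pure-Python sketch (the project used MATLAB; the function names here are ours):

```python
import math

def greenwood_edges(n_bands, f_lo=200.0, f_hi=7000.0):
    """Cochlear-mode band edges: uniform steps in basilar-membrane position d,
    mapped to frequency by f = 165.4 * (10 ** (0.06 * d) - 1)."""
    def f_to_d(f):
        return math.log10(f / 165.4 + 1.0) / 0.06

    def d_to_f(d):
        return 165.4 * (10.0 ** (0.06 * d) - 1.0)

    d_lo, d_hi = f_to_d(f_lo), f_to_d(f_hi)
    step = (d_hi - d_lo) / n_bands
    return [d_to_f(d_lo + i * step) for i in range(n_bands + 1)]

edges = greenwood_edges(4)
# Bandwidths grow monotonically with frequency: the low end of the speech
# spectrum, where most energy lives, gets narrow, densely packed bands.
```

This is also why cochlear segmentation wins at small N: its low bands are far narrower than the equal-interval ones, and why at large N those same narrow bands push the Butterworth designs toward instability.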

Advanced Extensions:

  • Carrier frequency variants: Tested geometric mean, harmonic mean, arithmetic mean, and square mean as alternatives to the midpoint frequency, examining effects on reconstruction fidelity.
  • SSN generation: Synthesized speech-shaped noise matching the input’s power spectral density using pwelch + fir2, added at a controlled SNR.
  • MATLAB App Designer console: Interactive GUI with sliders for band count (0–150) and LPF cutoff (0–200 Hz), BPF mode toggle, SSN on/off switch, and real-time waveform + spectrum display.
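
The carrier-frequency variants are simply different means of each band's edge frequencies; for any band they are ordered harmonic ≤ geometric ≤ arithmetic ≤ quadratic, so the choice systematically biases the carrier toward the low or high side of the band. A quick sketch, assuming "midpoint" means the arithmetic mean of the band edges (names are ours):

```python
import math

def carrier_means(f_lo, f_hi):
    """Candidate carrier frequencies for a sub-band [f_lo, f_hi] (Hz)."""
    return {
        "harmonic":   2.0 * f_lo * f_hi / (f_lo + f_hi),
        "geometric":  math.sqrt(f_lo * f_hi),
        "arithmetic": (f_lo + f_hi) / 2.0,   # the band-edge midpoint
        "quadratic":  math.sqrt((f_lo ** 2 + f_hi ** 2) / 2.0),  # "square mean"
    }

means = carrier_means(200.0, 800.0)
# harmonic 320 Hz < geometric 400 Hz < arithmetic 500 Hz < quadratic ~583 Hz
```

For the wide high-frequency bands of the cochlear mapping the four candidates diverge substantially, which is where the choice of mean audibly affects reconstruction fidelity.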

Challenges

  • Filter instability at high N: Narrow passbands caused Butterworth BPF coefficients to become numerically unstable; identified N≈20 as the practical upper bound for cochlear-mode segmentation.
  • Energy normalization: Without explicit normalization, synthesized speech energy varied significantly with N and Cf, making perceptual comparisons across conditions unreliable.
  • Code modularity: Refactored the pipeline into reusable functions (Envelope, getSSN, alter) shared across standalone scripts and the App Designer class, which required careful handling of MATLAB’s function scoping rules.

Reflection and Insights

This project made abstract signal-processing concepts tangible: the effect of filter bank design on speech quality can be heard directly, not just measured. The cochlear-inspired logarithmic spacing illustrates a broader principle — domain-specific knowledge (here, auditory neuroscience) often provides better engineering priors than uniform mathematical choices. The project also demonstrated that building an interactive parameter-exploration tool, even a simple slider-based GUI, dramatically accelerates the insight cycle compared to running scripts with hardcoded values.

Team and Role

  • Team: Two-person team.
  • My Role: Implemented the core Tone Vocoder pipeline; designed and built the MATLAB App Designer console; led the cochlear segmentation analysis and carrier frequency experiments.