Adaptive Model Selection for Real-Time Heart Disease Detection

Overview

As a Research Assistant at North Carolina State University (under Prof. Zhishan Guo), I contributed to building and evaluating an Adaptive Model Selection (AMS) framework for real-time cardiovascular disease detection on wearable embedded hardware — targeting deployment on a Raspberry Pi 4.

The core problem: ECG inference latency is bounded by the patient’s instantaneous heart rate (higher HR = shorter beat deadline), but accuracy increases with a heavier model. A fixed-complexity model either misses deadlines at high heart rate or wastes capacity at low heart rate. Our AMS framework solves this by dynamically selecting from three model tiers at every beat window based on real-time HR.

Publication: “Adaptive Model Selection for Real-Time Heart Disease Detection on Embedded Systems” (2nd author)
IEEE International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA 2025)

Task Definition

The system performs 5-class cardiac severity classification (severity levels 0–4) on single-lead ECG data in real time. Rather than classifying individual disease types (the full dataset covers 72 disease categories), the system assigns a severity score to each heartbeat cycle — enabling timely alerts and continuous risk monitoring on a wearable without requiring a full diagnostic workup.

Dataset: PhysioNet 2021 Challenge — filtered to 22,359 single-label ECG recordings, split 64% training / 16% validation / 20% test.

Model Architecture

Each input segment contains β consecutive R–R cycles, each resampled to 256 samples. The model has two parallel branches fused via global attention:

ECG branch: A stem convolution (8β channels) → three Residual Blocks widening the channels through 16β → 32β → 64β, each containing a Squeeze-and-Excitation (SE) unit for channel recalibration → adaptive average pooling to length α.

Period branch: The inter-beat period vector passes through an FC block (Linear → BatchNorm → ELU), mapping to the same 64β feature space.

Global Attention fusion: The flattened ECG features and period embedding are concatenated, fed to a two-layer attention module that produces a sigmoid mask modulating the ECG features — allowing the network to weight cycle regions by their rhythm context.

Output: The attended features pass through two FC layers to produce 5 logits (severity 0–4).

This architecture couples morphological feature extraction (ResBlocks + SE) with rhythm-aware re-weighting (period branch + global attention), kept compact for embedded deployment.
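
The fusion step can be sketched in a few lines of plain Python: the concatenated features pass through a small two-layer network whose sigmoid output gates the ECG features elementwise. All dimensions, weights, and the use of lists instead of tensors are illustrative; the real model operates on convolutional feature maps.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def attention_fuse(ecg_feats, period_emb, w1, w2):
    """Toy global-attention fusion: concatenate the two embeddings, apply a
    ReLU layer (w1) then a sigmoid layer (w2), and use the result as a mask
    that modulates the ECG features. Sizes and weights are illustrative."""
    z = ecg_feats + period_emb  # list concatenation = feature concat
    h = [max(0.0, sum(wi * zi for wi, zi in zip(row, z))) for row in w1]
    mask = [sigmoid(sum(wi * hi for wi, hi in zip(row, h))) for row in w2]
    return [f * m for f, m in zip(ecg_feats, mask)]  # elementwise gating
```

Because the mask lies in (0, 1), the period branch can only attenuate or pass through ECG features, never amplify them, which keeps the fusion stable.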

AMS Framework and Anytime CNN

The three tiers are early-exit heads on a single, parameter-shared Anytime CNN backbone:

  • High HR (≥ 90 bpm): Lightweight exit — fastest path, 0.57 ms, handles tight deadlines.
  • Moderate HR (70–90 bpm): Moderate exit — adds one ResBlock+SE, 1.79 ms.
  • Low HR (< 70 bpm): Advanced exit — full depth with global attention, 1.94 ms, highest accuracy.

At every shifted window, the AMS controller reads instantaneous HR and routes to the shallowest model that can meet the beat’s timing deadline. All three exits are jointly trained with deep supervision (equal-weight loss summing), so each head remains independently accurate while sharing the backbone weights — keeping the total checkpoint under 5 MB in the two-cycle configuration.
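
The routing policy itself is simple enough to sketch directly. The latencies below are the per-exit timings reported above; the band thresholds mirror the tier list, and the budget helper shows why even the deepest exit fits within one beat period (the real controller also accounts for interference from concurrent tasks).

```python
# Per-exit inference latencies (ms) as reported for the two-cycle models.
EXIT_LATENCY_MS = {"lightweight": 0.57, "moderate": 1.79, "advanced": 1.94}

def beat_budget_ms(hr_bpm):
    """Time until the next expected beat at the instantaneous heart rate."""
    return 60_000.0 / hr_bpm

def select_exit(hr_bpm):
    """Route each beat window to an exit by HR band (sketch of the AMS policy)."""
    if hr_bpm >= 90:
        return "lightweight"   # tight deadlines at high HR
    if hr_bpm >= 70:
        return "moderate"
    return "advanced"          # low HR leaves room for the full network
```

For example, at 60 bpm the beat budget is 1,000 ms, three orders of magnitude above the advanced exit's 1.94 ms.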

Results

Model                  Cycles  Accuracy  F1     Inference (ms)  Deadline Misses
AMS + Anytime          2       91.5%     90.6%  1.33            0
Advanced (standalone)  2       92.6%     91.1%  1.94            431/1000
Moderate               2       87.8%     87.7%  1.79            259/1000
Lightweight            2       86.5%     86.6%  1.05            0
CNN-LSTM (baseline)    2       87.3%     87.6%  3.33            1000/1000

Key finding: Two cardiac cycles is the optimal input length — one extra beat provides enough temporal context to improve accuracy meaningfully, while three or four cycles push latency past the real-time budget. The AMS+Anytime configuration achieves the accuracy sweet spot (91.5%) with zero deadline misses across all heart-rate regimes.

Technical Details

Preprocessing:

  • R-peaks detected using Hamilton’s algorithm (BioSPPy library); heartbeat cycles extracted as R–R intervals and resampled to 256 points.
  • Labels assigned per-cycle based on the recording’s severity score; multi-label recordings excluded to eliminate annotation ambiguity.
  • Fixed preprocessing/label-alignment issues in the PhysioNet dataset that caused unstable cross-fold metrics.
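
The per-cycle length normalization can be sketched with plain linear interpolation (the actual pipeline first detects R-peaks with BioSPPy's Hamilton detector; this helper only shows the resampling to 256 samples):

```python
def resample(cycle, n=256):
    """Linearly interpolate one R-R cycle to a fixed length n."""
    m = len(cycle)
    if m == n:
        return list(cycle)
    out = []
    for i in range(n):
        pos = i * (m - 1) / (n - 1)   # fractional index into the source cycle
        lo = int(pos)
        hi = min(lo + 1, m - 1)
        frac = pos - lo
        out.append(cycle[lo] * (1 - frac) + cycle[hi] * frac)
    return out
```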

Scheduling:

  • EDF (Earliest-Deadline-First) schedulability analysis verified the system can co-exist with other concurrent tasks (UI, Bluetooth, sensor fusion) on a uniprocessor without deadline violations.
  • A microsecond-resolution watchdog can pre-empt inference at a configurable fraction of the beat budget and fall back to a shallower exit if needed.
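
The check behind that analysis is the classic EDF utilization bound for implicit-deadline tasks on a uniprocessor: the set is schedulable iff Σ Cᵢ/Tᵢ ≤ 1. The task parameters in the example below are illustrative stand-ins, not the paper's measured workload.

```python
def edf_schedulable(tasks):
    """EDF utilization test: tasks is a list of (wcet, period) pairs in the
    same time unit; implicit deadlines (deadline == period) assumed."""
    return sum(wcet / period for wcet, period in tasks) <= 1.0

# Hypothetical task set: inference at ~90 bpm (period ~667 ms, WCET 1.94 ms)
# alongside made-up UI and Bluetooth tasks.
workload = [(1.94, 667.0), (5.0, 50.0), (2.0, 100.0)]
```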

Training:

  • Adam optimizer, lr 0.001, batch size 128, early stopping on validation loss.
  • Multi-exit deep supervision: losses from all three exit heads summed with equal weights.
  • Evaluated on Raspberry Pi 4 (quad-core ARM Cortex-A72) as a proxy for commercial wearable SoCs.
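
The multi-exit objective is just an equal-weight sum over the exit heads. A minimal sketch (the weights argument only marks where unequal weighting would go; the project used equal weights):

```python
def deep_supervision_loss(exit_losses, weights=None):
    """Sum the per-exit losses; equal weights by default, per the training setup."""
    weights = weights or [1.0] * len(exit_losses)
    return sum(w * l for w, l in zip(weights, exit_losses))
```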

Challenges

  1. Latency–accuracy trade-off at the per-beat level: No single fixed model can meet deadlines at high HR while maximizing accuracy at low HR. The AMS+Anytime design resolves this by making depth selection a runtime policy rather than a design-time choice.

  2. Label-alignment bugs in PhysioNet preprocessing: Early experiments showed high cross-fold metric variance. Root cause was windowing misalignment causing future-label leakage. Fixing alignment via Hamilton R-peak anchoring eliminated the variance.

  3. Memory budget on embedded SoC: Three independent checkpoints would exceed wearable SRAM. Parameter sharing via early-exit architecture brings the two-cycle AMS model to under 5 MB — feasible for a smartwatch.

Reflection and Insights

The most important insight from this project: adaptive depth selection is not an optimization — it is a prerequisite for correctness in real-time embedded ML. A model that achieves 92.6% accuracy in batch evaluation but misses 431 out of 1000 deadlines on-device is not a working real-time system. Framing the problem through the lens of schedulability analysis (EDF, utilization bounds) made this explicit and led directly to the AMS design. The secondary insight is that multi-exit parameter sharing is the right architectural response to memory-constrained deployment: all complexity levels coexist in one checkpoint, switchable with zero weight reload overhead.

Team and Role

Research at NCSU under Prof. Zhishan Guo. My responsibilities: co-designing the CNN architecture (ResBlocks + SE + Global Attention), debugging the PhysioNet preprocessing pipeline, benchmarking model tiers on Raspberry Pi, contributing to AMS framework design, and co-authoring the RTCSA 2025 paper.

eMeritBox

Overview

The eMeritBox combines traditional Buddhist cultural elements with modern technology in an interactive, gravity-sensing electronic donation box. Built around a Raspberry Pi, the system integrates PWM servo control, motion sensing, and a web-based interface to modernize the concept of a traditional donation box.

Results

  • System Features:
    • Automatic wooden fish strikes with real-time donation ball accumulation.
    • Gravity-sensing motion control for dynamic donation ball movement.
    • Dual operational modes: manual and auto donation switching.
  • Achievements:
    • Successfully implemented a complete hardware-software system using Raspberry Pi and Flask.
    • Developed reusable classes for matrix display and gravity sensing, enabling future adaptations.

GitHub (Chinese README)

eMeritBox system overview

eMeritBox functional demonstration


Technical Details

  • System Architecture:
    • Controller: Raspberry Pi handles signal processing, PWM control, and web server operations.
    • Modules:
      • MG-90 servo for wooden fish strikes.
      • GY-25 gyroscope for motion sensing.
      • MAX7219 matrix display for donation ball visualization.
  • Key Functionalities:
    • Gravity-Sensing Donation: Balls dynamically move based on the box’s tilt angle.
    • Flask Web Server: Supports browser-based remote operation of wooden fish strikes.
    • Matrix Display: Visualizes donation balls in real-time, reflecting their position and state.
  • Software Implementation:
    • Developed Python classes for modular control:
      • GY25Ctrl for gyroscope data processing.
      • MatrixCtrl for donation ball display updates.
      • BGMPlayer for background music playback.
    • Solved hardware conflicts by reconfiguring UART ports and enabling additional I²C channels.
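
The gravity-sensing ball update reduces to moving each ball one matrix column toward the low side of the box per tick. A sketch assuming an 8-column display and an illustrative dead zone that keeps balls still when the box is nearly level:

```python
def step_ball(x, roll_deg, width=8, dead_zone=5.0):
    """One tick of a donation ball's column position on the LED matrix.
    Positive roll tilts the box so the ball rolls right; the result is
    clamped to the display. Width and dead zone are illustrative."""
    if roll_deg > dead_zone:
        x += 1
    elif roll_deg < -dead_zone:
        x -= 1
    return max(0, min(width - 1, x))
```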

Challenges

  • UART hardware resource conflicts: Raspberry Pi’s default UART settings caused resource contention.
    • Solution: Re-mapped hardware and mini UARTs (ttyAMA0 ↔ ttyS0) and configured multiple UART ports (+ttyAMA1, 2, …) for simultaneous operation.
  • I²C channel conflicts: Dual I²C channels on Raspberry Pi conflicted with camera usage.
    • Solution: Disabled the camera function and enabled additional I²C channels with dtparam=i2c_vc=on.
  • SPI and I²C competing with UART ports: Enabling SPI and I²C modules on the Raspberry Pi caused UART port contention.
    • Solution: Adjusted hardware configurations to optimize resource allocation.
  • Synchronization of Multiple Modules: Managing the simultaneous operation of PWM, matrix display, and motion sensing.
    • Solution: Utilized multi-threading to ensure real-time responsiveness and system stability.
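
For reference, the kinds of /boot/config.txt lines involved look like the following. The dtparam=i2c_vc=on line comes from the fix above; the overlay names illustrate the Pi 4's extra-UART mechanism and should be checked against the firmware documentation for the specific board and OS version.

```ini
# Route the full PL011 UART back to GPIO 14/15 (ttyAMA0 vs. the mini UART ttyS0)
dtoverlay=disable-bt
# Expose an additional hardware UART (Pi 4 only; appears as ttyAMA1)
dtoverlay=uart2
# Enable the second I2C bus normally reserved for camera/display use
dtparam=i2c_vc=on
```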

Reflection and Insights

The eMeritBox reimagines a traditional Buddhist donation practice with dynamic visuals and interactive controls, demonstrating how technology can preserve and refresh cultural traditions. The hardware-software integration challenges also underscored the value of modular design and multi-threaded programming in building robust embedded systems.

National Undergraduate Electronic Design Contest

Overview

In the National Undergraduate Electronics Design Contest (NUEDC, Aug 2023), our team designed and built a vision-guided pan–tilt auto-tracking system, earning Guangdong Provincial Second Prize. The system uses OpenMV for real-time target detection and an Arduino Mega as the main controller, with discrete-time PID and Kalman filtering for closed-loop tracking under competitive time constraints.

Results

  • Achievement: Guangdong Provincial Second Prize (National Undergraduate Electronics Design Contest, NUEDC).
  • Performance Metrics:
    • < 1 cm positioning error in static and dynamic target positioning tests.
    • 0.5–0.6 s response time from target detection to stable pan–tilt lock in dynamic tracking tests.
  • Presentation: Delivered a complete, working prototype under the 4-day competition time constraint.
On-site assembly system demonstration.

Technical Details

  • System Architecture:
    • Two independent servo-driven gimbals controlled by an Arduino Mega2560 microcontroller.
    • Image processing conducted via OpenMV H7 to identify and track red and green laser spots.
  • Algorithms:
    • Coordinate Mapping: Calibrated pixel-to-servo angle mapping via regression-based calibration to reduce geometric bias from camera lens distortion and gimbal nonlinearity.
    • Motion Control: Implemented discrete-time PID control for closed-loop pan–tilt positioning with trajectory interpolation for smooth motion.
    • Tracking: Applied Kalman filtering on the Arduino Mega to predict and correct target position under measurement noise, enabling stable tracking at higher target speeds.
  • Challenges:
    • Overcame mechanical inaccuracies by calibrating servo feedback with additional correction factors and replacing faulty servos to improve trajectory precision.
    • Improved image processing robustness under varying light conditions through optimized exposure settings and LAB color space filtering.
    • Enhanced tracking success rates for high-speed laser movements by adjusting control frequencies and refining PID parameters for smoother tracking.
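
The discrete-time PID loop at the heart of the motion control can be sketched as follows; the gains and sample period here are illustrative, not the tuned competition values:

```python
class PID:
    """Discrete-time PID controller with a fixed sample period dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt                 # accumulated error
        deriv = (err - self.prev_err) / self.dt        # backward difference
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

In the real system the output would be converted to a servo angle command each camera frame; the Kalman-predicted target position feeds the setpoint.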

Reflection and Insights

This competition was a test of endurance, adaptability, and teamwork. The intense four-day schedule required rapid problem-solving and collaboration. The experience highlighted the importance of robust system design and precise calibration in achieving high-performance results. It also reinforced the value of integrating advanced algorithms, such as Kalman filtering, to address real-world constraints.

Team and Role

  • Team: A three-member team collaboratively handled hardware design, software development, and optimization.
  • My Role:
    • Led the implementation of image processing algorithms and Kalman filtering.
    • Developed and tested the PID control system for trajectory tracking.
    • Conducted system debugging and parameter tuning under competition constraints.

National Undergraduate Embedded Chip and System Design Competition

Overview

In the National Undergraduate Embedded Chip and System Design Competition (July 2023), I developed a dual-quadcopter control system using STM32F4 and NRFx series controllers. The project demonstrated motion synchronization between a primary and a secondary quadcopter through innovative wireless communication and control algorithms. This work earned us district-level recognition for its technical complexity and application potential.

Results

  • Achievement: District-level recognition for innovative quadcopter motion synchronization.
  • Performance Metrics:
    • Wireless communication demonstrated stability over distances of up to 100 meters in controlled testing conditions.
    • Achieved synchronized motion control with precise PID tuning for smooth coordination.
  • Deliverables:
    • A functional dual-quadcopter prototype capable of real-time motion matching.
    • System robustness tested under various environmental conditions.
Two quadcopters used in the competition.

Technical Details

  • System Architecture:
    • Developed a control system with STM32F411 as the primary controller and NRF51822 for extended communication.
    • Integrated sensors, including MPU9250 for attitude measurement and BMP280 for altitude control.
    • Real-time task management implemented using FreeRTOS for multitasking.
  • Wireless Communication:
    • Leveraged NRF51822 and NRF24L01+ for long-range and low-latency communication.
    • Designed custom communication protocols to support bidirectional data flow and address synchronization issues.
  • Control Algorithms:
    • Implemented a closed-loop PID control system for stable quadcopter motion.
    • Enhanced robustness with signal filtering and fallback mechanisms to prevent flight instability.
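
A bidirectional custom protocol of the kind described typically reduces to a fixed frame with a header, sequence number, payload, and checksum. The layout below is a hypothetical illustration (XOR checksum, int16 attitude/throttle fields), not the actual frame format used in the project:

```python
import struct

HDR = 0xA5  # illustrative frame header byte

def pack_frame(seq, roll, pitch, yaw, thr):
    """Pack one primary->secondary frame: header, seq, four int16 fields,
    then a one-byte XOR checksum over everything before it."""
    body = struct.pack("<BBhhhh", HDR, seq & 0xFF, roll, pitch, yaw, thr)
    chk = 0
    for b in body:
        chk ^= b
    return body + bytes([chk])

def unpack_frame(frame):
    """Return (seq, roll, pitch, yaw, thr), or None if header/checksum is bad."""
    body, chk = frame[:-1], frame[-1]
    x = 0
    for b in body:
        x ^= b
    if x != chk or body[0] != HDR:
        return None
    _, seq, roll, pitch, yaw, thr = struct.unpack("<BBhhhh", body)
    return seq, roll, pitch, yaw, thr
```

Rejecting corrupted frames and falling back to the last good command is one simple form of the fallback mechanism mentioned above.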

Challenges

  • Signal Interference: Addressed challenges with RF signal collisions by implementing Enhanced ShockBurst protocols and custom filtering methods.
  • Synchronization Complexity: Fine-tuned PID parameters to handle discrepancies in quadcopter structure, weight, and motion dynamics.
  • Hardware Limitations: Overcame initial instability in the flight control system by incorporating open-source flight control adaptations.

Reflection and Insights

This project offered valuable insights into embedded system design and wireless communication. Working through challenges such as signal interference and parameter tuning enhanced my problem-solving skills. The experience also underscored the importance of collaboration and iterative debugging in achieving stable and synchronized quadcopter control.

Lever-based Non-strain Mass Measurement Sensing System

Overview

This project was completed for the Sensing and Measurement lab course (SDM273). The goal was to design a mass measurement system for objects in the 75–750 g range with error within ±5 g, without using any strain-gauge sensors. The solution used a lever-balance principle, driven by a stepper motor, with a GY-25 gyroscope detecting the tilt angle of the lever arm.

Results

  • Measurement accuracy: Relative error consistently within ±1% across the 75–750 g test range.
  • Linear calibration: Fitted a first-order relationship between stepper motor position and mass: d = −1.4194m + 1193.7, validated through least-squares regression on calibration data.
  • Notable features:
    • Voice broadcast of measurement results.
    • No additional distance sensor needed — displacement computed from stepper motor step count.
    • Full use of the Arduino UNO's limited pin budget by combining hardware and software UARTs.
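
The calibration itself is ordinary least squares on (mass, slider-position) pairs, inverted at measurement time to recover mass from the motor's step count. A sketch with synthetic calibration points lying exactly on the published line:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def mass_from_position(d, a=-1.4194, b=1193.7):
    """Invert the calibrated line d = a*m + b to get mass from position."""
    return (d - b) / a
```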

System Design

Hardware:

  • Core structure: Roller-screw linear module acting as a lever arm, pivoting on flange-bearing supports to minimize friction.
  • Counterweight: Two 500 g weights on a slider; the stepper motor drives the slider along the lever to balance the unknown mass.
  • Sensing: GY-25 module (MPU6050-based) attached to the lever to read real-time pitch angle and determine which side is heavier.
  • Controller: Arduino UNO; stepper motor driver board (HPD970).

Software:

  • Balance detection: The system drives the slider in the direction that reduces tilt angle; balance is detected when the pitch sign reverses on consecutive readings (zero-crossing logic).
  • Friction compensation: Due to bearing friction, the equilibrium position differs depending on approach direction. The final position estimate averages two convergence positions: d = (d₁ + d₂) / 2.
  • Kalman filter: Applied to MPU6050 angular data to suppress noise and drift; the raw sensor exhibited >2% fluctuation even at rest.
  • Serial communication: Combined hardware UART and software serial (SoftwareSerial) to manage the GY-25 module, voice broadcast module, and PC debug output simultaneously within Arduino UNO’s limited pin set.
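
The Kalman filter on the pitch angle can be sketched as a scalar filter with a constant-angle process model; the process and measurement variances below are illustrative, not the tuned values:

```python
class Kalman1D:
    """Scalar Kalman filter: constant-state process model with process
    variance q, measurement variance r."""
    def __init__(self, q=1e-3, r=0.1, x0=0.0, p0=1.0):
        self.q, self.r = q, r
        self.x, self.p = x0, p0

    def update(self, z):
        self.p += self.q                  # predict: state unchanged, variance grows
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # correct toward measurement z
        self.p *= (1 - k)
        return self.x
```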

Challenges

  1. MPU6050 noise and zero-drift: Raw gyroscope data was highly noisy; addressed with Kalman filtering and GY-25’s onboard attitude fusion.
  2. Friction asymmetry: The approaching-from-above vs. approaching-from-below slider positions differed due to pulley friction, causing measurement bias. Averaging the two crossing positions significantly reduced this error.
  3. Limited Arduino UNO pins: Needed simultaneous serial links to the GY-25 (115200 baud), the voice module, and the PC, but the UNO has a single hardware UART. Solved by splitting traffic between the hardware UART and SoftwareSerial ports.
  4. Small calibration dataset: Only a limited number of reference masses were available, risking underfitting/overfitting of the linear model; addressed by careful selection of calibration points across the full range.

Reflection and Insights

This project demonstrated that non-electrical sensing principles — in this case, mechanical leverage — can achieve high-precision measurements when combined with careful signal processing and systematic calibration. The Kalman filter was essential to making the gyroscope data useful in practice, and the friction-averaging technique was a simple but highly effective engineering heuristic. The resource constraints of the Arduino UNO also provided practical experience with low-level embedded systems optimization.

Team and Role

  • Team: Two-person team sharing hardware construction and software development.
  • My Role: Primarily responsible for the control algorithm, Kalman filter implementation, and serial communication configuration.

SUSTech Electronic Design Competition

Overview

In the SUSTech College Student Electronic Design Competition, our team developed an embedded system for a smart medicine delivery vehicle. The system utilized modular design to complete tasks such as path tracking, room number recognition, and automatic medicine delivery and return. Using Arduino Mega2560 as the main controller and OpenMV Cam H7 Plus for vision-based recognition, we successfully implemented a functional prototype that met the core requirements.

Results

  • Achievements:
    • Designed and built a smart medicine delivery vehicle capable of transporting 200 g of medicine to specific rooms and returning to the starting point autonomously.
    • Achieved stable path tracking and room number recognition using color sensors and OpenMV-based vision modules.
  • Performance Metrics:
    • Accuracy: Room number recognition improved from low precision (~80%) to over 95% with increased training data.
    • Stability: Optimized dual-motor control for smooth motion and reduced trajectory deviation.
Datasets collected for training visual models.

Number recognition test.

Technical Details

  • System Architecture:
    • Controllers: Used Arduino Mega2560 for motion control and OpenMV Cam H7 Plus for visual recognition.
    • Sensors: Incorporated TCS230 color sensors for red-white line tracking and ultrasonic sensors for medicine placement detection.
    • Power Management: Employed LM2596 DC-DC modules for stable power supply to the motors and sensors.
  • Vision-Based Recognition:
    • Trained a MobileNet V2 model on OpenMV for room number detection, improving accuracy with an expanded dataset of over 1,000 images.
    • Integrated servo-driven camera adjustments to widen the field of view for larger room number recognition.
  • Motion Control:
    • Implemented dual-motor PWM control for precise trajectory adjustments.
    • Addressed power distribution imbalances to enhance stability and reduce deviation during turns.
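
The dual-motor control reduces to differential steering from the line-tracking error: offset the two PWM duties in opposite directions and clamp to the valid range. The base speed and gain below are illustrative:

```python
def motor_pwm(offset, base=60, gain=0.8, limit=100):
    """Differential steering: offset > 0 means the line is to the right,
    so the left wheel speeds up and the right slows down. Returns clamped
    (left, right) PWM duty values."""
    left = base + gain * offset
    right = base - gain * offset
    clamp = lambda v: max(0, min(limit, v))
    return clamp(left), clamp(right)
```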

Challenges

  • Path Tracking: Grayscale sensors failed to distinguish red and white lines due to spectral similarity; resolved by switching to TCS230 color sensors.
  • Recognition Precision: Initial low accuracy in room number detection improved through iterative dataset expansion and neural network fine-tuning.
  • Motor Synchronization: Addressed power discrepancies by using independent motor drivers for improved stability.

Reflection and Insights

This competition underscored the importance of modular system design in solving complex real-world problems. From refining vision models to debugging hardware components, the experience deepened my understanding of embedded systems and control algorithms. It also highlighted the critical role of interdisciplinary approaches in achieving reliable system performance.

Team and Role

  • Team: Collaborated with two teammates on design, implementation, and testing.
  • My Role:
    • Designed and implemented the vision system for room number recognition.
    • Led the integration of the motion control and tracking subsystems.
    • Conducted iterative testing and fine-tuning for system optimization.

ZeptoWatch — STM32-based Smartwatch with Python Script Runtime

Overview

ZeptoWatch is a from-scratch smartwatch project, developed as the DIY capstone for an analog circuits lab course. The core concept was to build a wearable device powered by an ordinary microcontroller (STM32F4) that could install and run user-written Python apps — bridging the gap between fixed-firmware fitness bands and full smartwatch operating systems. The Python interpreter (PikaScript) was embedded directly into the firmware, allowing users to write scripts, load them via USB, and execute them as apps.

ZeptoWatch hardware prototype

Key Features

  • Embedded Python interpreter: PikaScript runs user .py scripts stored on the device’s FAT file system.
  • USB mass storage: Plug into a computer — the watch appears as a USB drive; drag in Python scripts to install apps. Verified on Windows and Ubuntu.
  • Touch display: CST816 capacitive touch chip (I²C) + GC9A01 round LCD (SPI + DMA), with smooth LVGL animations.
  • Rich peripherals: EEPROM, IMU (MPU6050), microphone (I²S + DMA), vibration motor, Bluetooth module, battery voltage ADC.
  • FreeRTOS multi-tasking: Separate tasks for UI, sensor reading, and script execution with Mutex protection for LVGL thread safety.
  • FAT file system: FatFs on 64 KB EEPROM; supports file read/write from both device firmware and USB host.
  • Custom PCB: Four design iterations using KiCad / LCEDA; 0402 SMD components, two-layer stacked board for screen placement.

Technical Details

Hardware (Ver 3.0 — final):

  • STM32F4 as main controller; CST816 (touch), GC9A01 (display), MPU6050 (IMU), M24512 (EEPROM).
  • Two-board stacked design: top board holds the LCD, bottom board contains all active components connected via magnetic pogo pins.
  • Type-C connector for both charging (lithium battery management IC) and Full-speed USB 2.0 data.

Firmware Architecture:

  • Board-level drivers: Custom I²C bit-bang drivers for touch, EEPROM, IMU; hardware SPI + DMA for display; hardware I²S + DMA for microphone; ADC + DMA for battery voltage.
  • FreeRTOS: Multi-task architecture; LVGL resources protected by Mutex to prevent race conditions between tasks.
  • LVGL: Embedded GUI framework for all system UI (clock face, date/time settings, app launcher, dropdown menus). Extended pika_lvgl bindings to expose LVGL APIs to Python scripts.
  • PikaScript: Lightweight Python 3 subset interpreter. Custom extension packages written to expose hardware APIs (IMU, motor, display, timer) to user scripts.
  • App examples: Calculator, spectrum analyzer (FFT via ARM DSP library), gravity simulation (accelerometer-driven physics), electronic Muyu (wooden fish tapping), and more.
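
As a flavor of what such an app looks like, here is a pure-Python sketch of the gravity simulation's physics tick. On the device the acceleration would come from the MPU6050 binding and the position would drive an LVGL object; the screen size, ball radius, and bounce damping below are illustrative:

```python
def step(pos, vel, accel, dt=0.033, size=240, radius=8, bounce=0.6):
    """Advance one axis of the ball by one frame: integrate acceleration,
    then bounce off the screen edges with energy loss."""
    vel = vel + accel * dt
    pos = pos + vel * dt
    if pos < radius:
        pos, vel = radius, -vel * bounce
    elif pos > size - radius:
        pos, vel = size - radius, -vel * bounce
    return pos, vel
```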

Challenges

  1. USB mass storage not recognized: CubeMX-generated USB code was broken until a library update resolved it — a week-long debugging ordeal.
  2. FatFs on small EEPROM: FatFs blocked formatting below a minimum sector count; resolved by patching the source. Windows failed to recognize the volume for an extended period (eventually fixed itself — suspected to be FAT16/FAT32 auto-detection logic).
  3. LVGL thread safety: The most persistent crash cause was concurrent LVGL access from multiple FreeRTOS tasks. Adding a Mutex lock resolved all unexplained freezes.
  4. PikaScript instability: As an early-stage open-source project, PikaScript lacked a crash handler; contributed __platform_panic override to enable graceful recovery from script crashes without rebooting the whole system.
  5. PCB soldering issues: 0402 components and near-BGA spring pins required careful soldering; cold joints caused intermittent issues throughout development.

Reflection and Insights

ZeptoWatch was the most complex embedded project I undertook as an undergraduate, spanning schematic design, PCB layout, hardware bring-up, driver development, RTOS integration, a GUI framework, a file system, a USB stack, and a scripting-language runtime. The most impactful lesson was that system-level correctness requires holistic thinking: a thread-safety bug in LVGL manifested as random freezes everywhere else, and no amount of isolated debugging found it until the root cause was understood. The project also instilled the habit of reading official documentation and library changelogs carefully: the USB bug had been silently fixed in a CubeMX update, and keeping dependencies current would have caught it much earlier.

Team and Role

  • Team: Four-person team; responsibilities split between hardware design, firmware, and testing.
  • My Role: Contributed to firmware architecture design, peripheral driver development (serial communication, GY-25 interfacing), and Python app development; co-led system integration and debugging.